Sample records for source release model

  1. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. Identification here refers to estimating the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. It utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieved source locations are relatively large in the two-release trials in comparison with the three- and four-release trials.
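    A minimal sketch of this kind of initialization-free least-squares retrieval (not the authors' exact formulation): because concentration is linear in emission rate, candidate location sets can be enumerated and the strengths solved by linear least squares, keeping the set with the smallest misfit. The plume kernel, its parameters, and all names below are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def gaussian_plume(xs, ys, xr, yr, u=2.0, sigma=0.3):
    # Simplified ground-level Gaussian plume kernel: concentration at
    # receptor (xr, yr) per unit emission from a source at (xs, ys).
    # Wind along +x and power-law sigma growth are illustrative choices.
    dx, dy = xr - xs, yr - ys
    if dx <= 0:                 # receptors upwind of the source see nothing
        return 0.0
    sy = sigma * dx ** 0.9
    return np.exp(-0.5 * (dy / sy) ** 2) / (2 * np.pi * u * sy ** 2)

def retrieve_sources(receptors, measured, candidates, n_src):
    # Enumerate candidate location sets; strengths follow from linear
    # least squares because concentration is linear in emission rate.
    best = (np.inf, None, None)
    for locs in combinations(candidates, n_src):
        A = np.array([[gaussian_plume(sx, sy, rx, ry)
                       for (sx, sy) in locs] for (rx, ry) in receptors])
        q, *_ = np.linalg.lstsq(A, measured, rcond=None)
        if np.all(q >= 0):      # keep only physical (non-negative) strengths
            misfit = float(np.sum((A @ q - measured) ** 2))
            if misfit < best[0]:
                best = (misfit, locs, q)
    return best
```

    With noise-free synthetic data this enumeration recovers the true locations and strengths exactly, mirroring the exact-retrieval property noted in the abstract.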

  2. Bayesian inverse modeling and source location of an unintended 131I release in Europe in the fall of 2011

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas

    2017-10-01

    In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength eventually became known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and HYSPLIT, driven with meteorological analysis data from the Global Forecast System (GFS) and from European Centre for Medium-Range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm successfully locates the actual release site. The estimated release period is also in agreement with the values reported by the IAEA, and the reported total released activity of 342 GBq is within the 99% confidence interval of the posterior distribution of our most likely model.

  3. RLINE: A Line Source Dispersion Model for Near-Surface Releases

    EPA Science Inventory

    This paper describes the formulation and evaluation of RLINE, a Research LINE source model for near surface releases. The model is designed to simulate mobile source pollutant dispersion to support the assessment of human exposures in near-roadway environments where a significant...

  4. INEEL Subregional Conceptual Model Report Volume 3: Summary of Existing Knowledge of Natural and Anthropogenic Influences on the Release of Contaminants to the Subsurface Environment from Waste Source Terms at the INEEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul L. Wichlacz

    2003-09-01

    This source-term summary document describes the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and the Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of source-term release are discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.

  5. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. Gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model in which the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following a Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since inference in the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which three nuclides are released and two nuclide ratios are approximately known. A comparison with a method using unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach.
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
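    The core linear inversion y = Mx with a ratio-informed prior can be sketched as follows. This is a Tikhonov-style simplification of the LS-APC idea (the actual method learns the prior covariance adaptively and uses a truncated Gaussian); here the prior is folded into an augmented least-squares system and positivity is enforced with non-negative least squares. All matrices and values are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_source(M, y, prior_mean, prior_std):
    # Minimize ||M x - y||^2 + ||(x - prior_mean) / prior_std||^2
    # subject to x >= 0, by stacking the prior as extra rows (Tikhonov
    # form) and solving the augmented system with NNLS.
    A = np.vstack([M, np.diag(1.0 / prior_std)])
    b = np.concatenate([y, prior_mean / prior_std])
    x, _ = nnls(A, b)
    return x
```

    A tighter `prior_std` pulls the estimate toward the ratio-derived prior mean; a looser one lets the measurements dominate, which is the trade-off the adaptive covariance in LS-APC automates.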

  6. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. 
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
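    The classical tracer ratio step that this approach generalizes can be written in a few lines (a minimal sketch, assuming perfectly collocated sources and background-subtracted transect concentrations sampled at uniform spacing):

```python
import numpy as np

def tracer_ratio_estimate(q_tracer, c_target, c_tracer):
    # Classical tracer release technique: with well-collocated sources
    # the atmospheric transport factor cancels, so the target emission
    # rate equals the known tracer emission rate scaled by the ratio of
    # the plume-integrated (background-subtracted) concentrations.
    return q_tracer * np.sum(c_target) / np.sum(c_tracer)

# Example transect: the target plume is 2.5 times the tracer plume, so
# a tracer released at 1.0 unit/s implies a 2.5 unit/s target source.
y = np.linspace(-50.0, 50.0, 101)
plume = np.exp(-0.5 * (y / 10.0) ** 2)
q_ch4 = tracer_ratio_estimate(1.0, 2.5 * plume, plume)  # → 2.5
```

    The abstract's point is precisely that this cancellation fails when the tracer and target sources are not collocated, which is what the transport-model-based statistical inversion corrects.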

  7. Source Term Estimation of Radioxenon Released from the Fukushima Dai-ichi Nuclear Reactors Using Measured Air Concentrations and Atmospheric Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24 × 10^19 Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.

  8. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  9. Identify source location and release time for pollutants undergoing super-diffusion and decay: Parameter analysis and model evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Sun, HongGuang; Lu, Bingqing; Garrard, Rhiannon; Neupauer, Roseanna M.

    2017-09-01

    Backward models have been applied for four decades by hydrologists to identify the source of pollutants undergoing Fickian diffusion, while analytical tools are not available for source identification of super-diffusive pollutants undergoing decay. This technical note evaluates analytical solutions for the source location and release time of a decaying contaminant undergoing super-diffusion using backward probability density functions (PDFs), where the forward model is the space fractional advection-dispersion equation with decay. Revisit of the well-known MADE-2 tracer test using parameter analysis shows that the peak backward location PDF can predict the tritium source location, while the peak backward travel time PDF underestimates the tracer release time due to the early arrival of tracer particles at the detection well in the maximally skewed, super-diffusive transport. In addition, the first-order decay adds additional skewness toward earlier arrival times in backward travel time PDFs, resulting in a younger release time, although this impact is minimized at the MADE-2 site due to tritium's half-life being relatively longer than the monitoring period. The main conclusion is that, while non-trivial backward techniques are required to identify pollutant source location, the pollutant release time can and should be directly estimated given the speed of the peak resident concentration for super-diffusive pollutants with or without decay.
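    The backward location PDF idea can be illustrated for the classical (Fickian) ADE, where the backward density is Gaussian; the paper's space-fractional model replaces this with a maximally skewed stable density. Velocity, dispersion, and decay values below are illustrative.

```python
import numpy as np

def backward_location_pdf(x, t_back, v=0.5, D=0.1, lam=0.0):
    # Backward location PDF for a detection at x = 0 and backward travel
    # time t_back under the classical ADE: a Gaussian centered a
    # distance v * t_back upstream, damped by first-order decay. At a
    # fixed t_back the decay factor rescales the PDF uniformly and does
    # not move its spatial peak.
    g = np.exp(-(x - v * t_back) ** 2 / (4 * D * t_back))
    g /= np.sqrt(4 * np.pi * D * t_back)
    return g * np.exp(-lam * t_back)

# Most probable source location for a detection 40 time units after release:
xs = np.linspace(0.0, 50.0, 2001)
pdf = backward_location_pdf(xs, t_back=40.0)
x_peak = xs[np.argmax(pdf)]  # peak at v * t_back = 20 upstream of the well
```

    In the super-diffusive (maximally skewed) case the backward travel time PDF peaks too early, which is why the note recommends estimating release time from the speed of the peak resident concentration instead.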

  10. Refinement of Regional Distance Seismic Moment Tensor and Uncertainty Analysis for Source-Type Identification

    DTIC Science & Technology

    2011-09-01

    a NSS that lies in this negative explosion positive CLVD quadrant due to the large degree of tectonic release in this event that reversed the phase...Mellman (1986) in their analysis of fundamental model Love and Rayleigh wave amplitude and phase for nuclear and tectonic release source terms, and...1986). Estimating explosion and tectonic release source parameters of underground nuclear explosions from Rayleigh and Love wave observations, Air

  11. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. Hence, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event while using a deterministic model for the other variability sources. To capture the stochasticity of calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.

  12. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are online-arriving concentrations of the released substance registered by a distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source: the source's starting position (x, y), its direction of motion (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
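    The ABC machinery reduces, in its simplest form, to rejection sampling against the sensor data. A toy sketch with only two unknown parameters and a stand-in forward model (the paper uses SCIPUFF and seven parameters; all priors, tolerances, and the footprint shape here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(theta, xs):
    # Stand-in forward model: Gaussian concentration footprint along a
    # line of sensors; theta = (x0, q) is source position and release rate.
    x0, q = theta
    return q * np.exp(-0.5 * ((xs - x0) / 2.0) ** 2)

def abc_rejection(observed, xs, n=20000, eps=0.15):
    # Plain ABC rejection: sample from uniform priors, keep parameter
    # draws whose simulated sensor readings are within eps (RMSE) of the
    # observations. The kept draws approximate the posterior.
    kept = []
    for _ in range(n):
        theta = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 5.0))
        sim = forward(theta, xs)
        if np.sqrt(np.mean((sim - observed) ** 2)) < eps:
            kept.append(theta)
    return np.array(kept)

xs = np.linspace(0.0, 10.0, 9)
observed = forward((4.0, 2.0), xs)   # synthetic "truth": x0 = 4, q = 2
posterior = abc_rejection(observed, xs)
```

    The sequential variant in the paper replaces the fixed prior with the previous iteration's accepted population and shrinks eps as data arrive, which is what lets the distributions update dynamically.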

  13. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important for emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and a linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled released-mass parameter does not depend on prior information, which improves identification efficiency. A hypothetical case study with differing numbers of pollution sources was conducted to test the proposed approach; the largest relative errors for identified location, release time, and released mass in all tests were no greater than 10%. Uncertainty in the LR-BPM is mainly due to model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA.
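    The optimization step of such an approach can be sketched with SciPy's differential evolution and the 1-D instantaneous point-source solution as the river transport model. This is a simplification of the LR-BPM (no regression decoupling, no backward PDFs); the reach geometry, transport parameters, and spill values are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

def concentration(x, t, x0, t0, m, u=1.0, D=0.5):
    # 1-D instantaneous point-source solution for a river reach: mass m
    # released at location x0 and time t0, advected at speed u with
    # longitudinal dispersion coefficient D (units illustrative).
    tau = np.clip(t - t0, 1e-9, None)
    c = m / np.sqrt(4 * np.pi * D * tau) * np.exp(
        -(x - x0 - u * tau) ** 2 / (4 * D * tau))
    return np.where(t > t0, c, 0.0)

# Synthetic breakthrough curves at two gauging sections downstream of a
# hypothetical spill at x0 = 2, t0 = 1, with mass m = 5.
ts = np.linspace(2.0, 12.0, 30)
obs = np.concatenate([concentration(8.0, ts, 2.0, 1.0, 5.0),
                      concentration(12.0, ts, 2.0, 1.0, 5.0)])

def misfit(p):
    # Squared error between simulated and observed breakthrough curves;
    # p = (x0, t0, m) is the candidate source description.
    sim = np.concatenate([concentration(8.0, ts, *p),
                          concentration(12.0, ts, *p)])
    return float(np.sum((sim - obs) ** 2))

result = differential_evolution(misfit, bounds=[(0, 6), (0, 2), (1, 10)],
                                seed=1)
```

    Two gauging sections are used because a single breakthrough curve leaves location and release time nearly confounded along the advection ridge, echoing the paper's finding that more gauging sections improve identification.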

  14. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g. winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve source location estimates by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.

  15. Atmospheric Modeling of Mars Methane Plumes

    NASA Astrophysics Data System (ADS)

    Mischna, Michael A.; Allen, M.; Lee, S.

    2010-10-01

    We present two complementary methods for isolating and modeling surface source releases of methane in the martian atmosphere. From recent observations, there is strong evidence that periodic releases of methane occur from discrete surface locations, although the exact location and mechanism of release is still unknown. Numerical model simulations with the Mars Weather Research and Forecasting (MarsWRF) general circulation model (GCM) have been applied to the ground-based observations of atmospheric methane by Mumma et al. (2009). MarsWRF simulations reproduce the natural behavior of trace gas plumes in the martian atmosphere, and reveal the development of the plume over time. These results provide constraints on the timing and location of release of the methane plume. Additional detections of methane have been accumulated by the Planetary Fourier Spectrometer (PFS) on board Mars Express. For orbital observations, which generally have higher frequency and resolution, an alternate approach to source isolation has been developed. Drawing from the concept of natural selection within biology, we apply an evolutionary computational model to this problem of isolating source locations. Using genetic algorithms that 'reward' best-fit matches between observations and GCM plume simulations (also from MarsWRF) over many generations, we find that we can potentially isolate source locations to within tens of km, which is within the roving capabilities of future Mars rovers. Together, these methods present viable numerical approaches to restricting the timing, duration and size of methane release events, and can be used for other trace gas plumes on Mars as well as elsewhere in the solar system.

  16. Transport and Dispersion Model Predictions of Elevated Source Tracer Experiments in the Copenhagen Area: Comparisons of Hazard Prediction and Assessment Capability (HPAC) and National Atmospheric Release Advisory Center (NARAC) Emergency Response Model Predictions

    DTIC Science & Technology

    2006-07-01

    Blue --) and NARAC (Red -) for two elevated releases ( MvM 3 and MvM 15) considered in the model-to-model study [2]. MvM 3 was a gas release (SF6...carried out under stable conditions with a boundary layer height of 100 m and release height of 80 m, while MvM 15 was a particle release carried out...release scenarios: MvM 3 at 30 and 60 Minutes and MvM 15 at 120 and 180 minutes. Each release shows significant NARAC underpredictions with

  17. Releasable Asbestos Field Sampler and a Breathing Zone Model for Risk Assessment

    EPA Science Inventory

    Asbestos aerosolization (or releasability) is the potential for asbestos structures to become airborne when the source is disturbed. The source can be naturally occurring asbestos in soil, mine tailings in the soil at brownfield sites, vermiculite attic insulation in indoor envi...

  18. Source term estimation of radioxenon released from the Fukushima Dai-ichi nuclear reactors using measured air concentrations and atmospheric transport modeling.

    PubMed

    Eslinger, P W; Biegalski, S R; Bowyer, T W; Cooper, M W; Haas, D A; Hayes, J C; Hoffman, I; Korpach, E; Yi, J; Miley, H S; Rishel, J P; Ungar, K; White, B; Woods, V T

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout across the northern hemisphere resulting from the Fukushima Dai-ichi Nuclear Power Plant accident in March 2011. Sampling data from multiple International Monitoring System locations are combined with atmospheric transport modeling to estimate the magnitude and time sequence of releases of 133Xe. Modeled dilution factors at five different detection locations were combined with 57 atmospheric concentration measurements of 133Xe taken from March 18 to March 23 to estimate the source term. This analysis suggests that 92% of the 1.24 × 10^19 Bq of 133Xe present in the three operating reactors at the time of the earthquake was released to the atmosphere over a three-day period. An uncertainty analysis bounds the release estimates to 54-129% of the available 133Xe inventory.

  19. Spent fuel radionuclide source-term model for assessing spent fuel performance in geological disposal. Part I: Assessment of the instant release fraction

    NASA Astrophysics Data System (ADS)

    Johnson, Lawrence; Ferry, Cécile; Poinssot, Christophe; Lovera, Patrick

    2005-11-01

    A source-term model for the short-term release of radionuclides from spent nuclear fuel (SNF) has been developed. It provides quantitative estimates of the fraction of various radionuclides that are expected to be released rapidly (the instant release fraction, or IRF) when water contacts the UO2 or MOX fuel after container breaching in a geological repository. The estimates are based on correlation of leaching data for radionuclides with fuel burnup and fission gas release. Extrapolation of the data to higher fuel burnup values is based on examination of data on fuel restructuring, such as rim development, and on fission gas release data, which permits bounding IRF values to be estimated assuming that radionuclide releases will be less than fission gas release. The consideration of long-term solid-state changes influencing the IRF prior to canister breaching is addressed by evaluating alpha self-irradiation enhanced diffusion, which may gradually increase the accumulation of fission products at grain boundaries.

  20. Estimation of Release History of Pollutant Source and Dispersion Coefficient of Aquifer Using Trained ANN Model

    NASA Astrophysics Data System (ADS)

    Srivastava, R.; Ayaz, M.; Jain, A.

    2013-12-01

    Knowledge of the release history of a groundwater pollutant source is critical in the prediction of the future trend of the pollutant movement and in choosing an effective remediation strategy. Moreover, for source sites which have undergone an ownership change, the estimated release history can be utilized for appropriate allocation of the costs of remediation among different parties who may be responsible for the contamination. Estimation of the release history with the help of concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location, and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous medium properties using the breakthrough curves as a known input. A common problem in the use of the breakthrough curves for this purpose is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve with respect to the time when the pollutant source becomes active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source through the use of breakthrough curves. It is assumed that the source location is known but the time dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model that is trained using the Levenberg-Marquardt algorithm utilizing synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to utilize just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters. 
    An ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a three-dimensional case, first with perfect data and then with erroneous data with error levels of up to 10 percent. Since the solution is highly sensitive to errors in the input data, instead of using the raw data we smooth the upper half of the erroneous breakthrough curve by approximating it with a fourth-order polynomial, which is used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tail ends of the breakthrough curve, is capable of estimating both the release history and the aquifer parameters reasonably well. Results for the case with erroneous data having different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that as the error level increases, the correlation coefficients of the training, testing and validation regressions tend to decrease, although the values stay within acceptable limits even for reasonably large error levels.

  1. The role of a detailed aqueous phase source release model in the LANL area G performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vold, E.L.; Shuman, R.; Hollis, D.K.

    1995-12-31

A preliminary draft of the Performance Assessment for the Los Alamos National Laboratory (LANL) low-level radioactive waste disposal facility at Area G is currently being completed as required by Department of Energy orders. A detailed review of the inventory data base records and the existing models for source release led to the development of a new modeling capability to describe the liquid phase transport from the waste package volumes. Nuclide quantities are sorted into four waste package release categories for modeling: rapid release, soil, concrete/sludge, and corrosion. Geochemistry for the waste packages was evaluated in terms of the equilibrium coefficients, Kds, and elemental solubility limits, Csl, interpolated from the literature. Percolation calculations for the base case closure cover show a highly skewed distribution with an average of 4 mm/yr percolation from the disposal unit bottom. The waste release model is based on a compartment representation of the package efflux, and depends on package size, percolation rate or Darcy flux, retardation coefficient, and moisture content.
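A minimal single-compartment sketch of the kind of release model described above, assuming first-order flushing of the package inventory. Apart from the 4 mm/yr base-case percolation quoted in the abstract, the parameter names and values are illustrative, not the actual LANL model.

```python
import math

def compartment_release(m0, darcy_flux, moisture, retardation, height, years):
    """Single mixing-cell leach model: inventory m0 is flushed at a
    first-order rate set by the Darcy flux through the package, its
    moisture content, the retardation coefficient and package height.
    Returns the remaining inventory at the end of each year."""
    lam = darcy_flux / (moisture * retardation * height)   # flush rate, 1/yr
    return [m0 * math.exp(-lam * t) for t in range(years + 1)]

# 4 mm/yr (0.004 m/yr) base-case percolation through a 1 m package
inv = compartment_release(m0=1.0, darcy_flux=0.004, moisture=0.1,
                          retardation=50.0, height=1.0, years=1000)
```

The compartment efflux rate is simply `lam` times the remaining inventory, which is how the dependence on Darcy flux, retardation and moisture content enters.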

  2. Phase 2, Determining the releasability of the asbestos fiber from soils and solid matrices.

    EPA Science Inventory

    Factors that affect the releasability or aerosolization of asbestos and related mineral fibers from a variety of sources need to be better understood to allow prediction and modeling of the relative behavior of these fibers. Examples of the sources of concern include soils, roa...

  3. The energy release in earthquakes, and subduction zone seismicity and stress in slabs. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Vassiliou, M. S.

    1983-01-01

    Energy release in earthquakes is discussed. Dynamic energy from source time function, a simplified procedure for modeling deep focus events, static energy estimates, near source energy studies, and energy and magnitude are addressed. Subduction zone seismicity and stress in slabs are also discussed.

  4. Distributed sensor network for local-area atmospheric modeling

    NASA Astrophysics Data System (ADS)

    French, Patrick D.; Lovell, John S.; Seaman, Nelson L.

    2003-09-01

In the event of a Weapons of Mass Destruction (WMD) chemical or radiological release, quick identification of the nature and source of the release can support efforts to warn, protect and evacuate threatened populations downwind; mitigate the release; provide more accurate plume forecasting; and collect critical transient evidence to help identify the perpetrator(s). Although systems are available to assist in tracking a WMD release and predicting where a plume may travel, there are no reliable systems available to determine the source location of that release. Determining it would typically require the timely deployment of a remote sensing capability, a grid of expendable air samplers, or a surface sampling plan if the plume has dissipated. Each of these typical solutions has major drawbacks (e.g., excessive cost, questionable technical feasibility, or the time required). This paper presents data supporting the use of existing rapid-response meteorological modeling, coupled with existing transport and diffusion modeling, together with a prototype cost-effective situational awareness monitor that would reduce the sensor network requirements while still achieving the overall mission: a 95% probability of converging on a source location within 100 meters.

  5. THE HYDROCARBON SPILL SCREENING MODEL (HSSM), VOLUME 2: THEORETICAL BACKGROUND AND SOURCE CODES

    EPA Science Inventory

    A screening model for subsurface release of a nonaqueous phase liquid which is less dense than water (LNAPL) is presented. The model conceptualizes the release as consisting of 1) vertical transport from near the surface to the capillary fringe, 2) radial spreading of an LNAPL l...

  6. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

The reconstruction of the Chernobyl accident source term has previously been carried out using core inventories, but also back-and-forth comparisons between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one looks for a source term, available for long-range transport, that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters: a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that were released are in good agreement with the latest reported estimates. Yet a stronger apportionment of the total released activity is ascribed to the first period, and less to the third. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
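The L-curve trade-off the authors use to balance their two tuning parameters can be illustrated on a toy linear inverse problem. This sketch uses plain Tikhonov regularisation rather than the paper's maximum-entropy-on-the-mean cost function, and all dimensions and values are arbitrary.

```python
import numpy as np

def l_curve(G, d, lambdas):
    """For each regularisation weight lam, solve the Tikhonov problem
    min ||G s - d||^2 + lam^2 ||s||^2 and record the (residual norm,
    solution norm) pair; plotted log-log these trace the L-curve,
    whose corner balances data fit against solution size."""
    n = G.shape[1]
    pts = []
    for lam in lambdas:
        s = np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ d)
        pts.append((np.linalg.norm(G @ s - d), np.linalg.norm(s)))
    return pts

rng = np.random.default_rng(1)
G = rng.normal(size=(30, 20))                  # toy source-receptor matrix
s_true = np.zeros(20)
s_true[5:8] = [1.0, 2.0, 1.0]                  # positive release pulse
d = G @ s_true + rng.normal(0.0, 0.05, 30)     # noisy measurements
pts = l_curve(G, d, lambdas=[0.01, 0.1, 1.0, 10.0])
```

Increasing the regularisation weight trades a larger data residual for a smaller (smoother) retrieved source, which is exactly the balance the L-curve corner selects.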

  7. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. It is therefore worth exploring whether the estimation of the source parameters can be improved by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach and Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively.
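A discrete sketch of Bayesian source term estimation under the assumption of a linear source-receptor relationship. The random source-receptor matrix below stands in for the adjoint-LES field of the paper, and the candidate cells, strength grid and noise levels are invented for illustration.

```python
import numpy as np

def posterior_over_sources(c_obs, srm, q_grid, sigma):
    """Grid Bayesian inference: for every candidate (cell, strength q)
    the predicted receptor concentrations are srm[:, cell] * q; a
    Gaussian likelihood of the observations is evaluated and the
    result normalised into a posterior over the grid (flat prior)."""
    n_cells = srm.shape[1]
    logp = np.zeros((n_cells, len(q_grid)))
    for j in range(n_cells):
        for k, q in enumerate(q_grid):
            r = c_obs - srm[:, j] * q
            logp[j, k] = -0.5 * np.sum((r / sigma) ** 2)
    p = np.exp(logp - logp.max())
    return p / p.sum()

rng = np.random.default_rng(2)
srm = rng.uniform(0.1, 1.0, size=(6, 5))   # 6 receptors x 5 candidate cells
true_cell, true_q = 3, 2.0
c_obs = srm[:, true_cell] * true_q + rng.normal(0.0, 0.02, 6)
q_grid = np.linspace(0.5, 3.5, 13)
post = posterior_over_sources(c_obs, srm, q_grid, sigma=0.05)
cell_hat, k_hat = np.unravel_index(post.argmax(), post.shape)
```

The maximum of the posterior gives a point estimate of location and strength; its spread quantifies the uncertainty the Bayesian framework is used for.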

  8. Correlation of recent fission product release data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, T.S.; Lorenz, R.A.; Nakamura, T.

For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used such correlations is CORSOR-M, the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data arise because the more recent data have better time resolution than the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.
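CORSOR-M expresses the fractional release-rate coefficient in Arrhenius form, k = k0·exp(-Q/RT); the sketch below integrates such a correlation over a temperature ramp. The k0 and Q values are placeholders, not the published element-specific CORSOR-M coefficients.

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol K)

def corsor_m_rate(T, k0, Q):
    """CORSOR-M-style Arrhenius fractional release-rate coefficient
    (fraction of remaining inventory per minute): k = k0 * exp(-Q/RT).
    k0 and Q are element-specific; these values are illustrative."""
    return k0 * math.exp(-Q / (R_KCAL * T))

def released_fraction(T_history, dt_min, k0, Q):
    """Integrate dF/dt = k(T) * (1 - F) over a fuel temperature history."""
    F = 0.0
    for T in T_history:
        F += corsor_m_rate(T, k0, Q) * (1.0 - F) * dt_min
    return F

ramp = [1500.0 + 10.0 * i for i in range(100)]   # heat-up to ~2500 K
F = released_fraction(ramp, dt_min=1.0, k0=2.0e5, Q=63.8)
```

The strong temperature dependence of `k` is why the time resolution of the heating history matters so much when fitting such correlations to data.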

  9. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and that the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other situations where hazardous material is released into the atmosphere.
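One stochastic ensemble-Kalman-filter analysis step, the core operation behind the method above, can be sketched with a toy linear "dispersion" operator. The augmentation of release rate and plume rise into the state, the ensemble size and the observation error are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def enkf_update(ensemble, H, y_obs, obs_err, seed=3):
    """One stochastic-EnKF analysis step. Each row of `ensemble` is a
    state sample (e.g. release rate, plume rise, wind parameters
    augmented into the state); H maps a state to predicted
    observations. Observations are perturbed per member."""
    rng = np.random.default_rng(seed)
    n = ensemble.shape[0]
    Y = np.array([H(x) for x in ensemble])        # predicted observations
    Xa = ensemble - ensemble.mean(axis=0)
    Ya = Y - Y.mean(axis=0)
    Pxy = Xa.T @ Ya / (n - 1)                     # state-obs covariance
    Pyy = Ya.T @ Ya / (n - 1) + obs_err**2 * np.eye(Y.shape[1])
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    y_pert = y_obs + rng.normal(0.0, obs_err, size=Y.shape)
    return ensemble + (y_pert - Y) @ K.T

# toy linear "dispersion" operator: 3 receptors observe a 2-vector state
coupling = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
truth = np.array([2.0, 1.0])                      # e.g. release rate, plume rise
rng = np.random.default_rng(4)
prior = rng.normal([0.5, 0.5], 1.0, size=(200, 2))
post = enkf_update(prior, lambda x: coupling @ x, coupling @ truth, obs_err=0.05)
```

The analysis pulls the ensemble mean toward states whose predicted concentrations match the monitoring data, which is how the source parameters are reconstructed alongside the concentration field.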

  10. Comparison of chlorine and ammonia concentration field trial data with calculated results from a Gaussian atmospheric transport and dispersion model.

    PubMed

    Bauer, Timothy J

    2013-06-15

The Jack Rabbit Test Program was sponsored in April and May 2010 by the Department of Homeland Security Transportation Security Administration to generate source data for large releases of chlorine and ammonia from transport tanks. In addition to a variety of data types measured at the release location, concentration versus time data were measured using sensors at distances up to 500 m from the tank. Release data were used to create accurate representations of the vapor flux versus time for the ten releases. This study was conducted to determine the importance of source terms and meteorological conditions in predicting downwind concentrations, and the accuracy that can be obtained in those predictions. Each source representation was entered into an atmospheric transport and dispersion model using simplifying assumptions regarding the source characterization and meteorological conditions, and statistics for cloud duration and concentration at the sensor locations were calculated. A detailed characterization of one of the chlorine releases predicted 37% of concentration values within a factor of two, but cannot be considered representative of all the trials. Predictions of toxic effects at 200 m are relevant to incidents involving 1-ton chlorine tanks commonly used in parts of the United States and internationally.
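The "37% of concentration values within a factor of two" result is a FAC2-type score; a minimal implementation of that statistic, with invented numbers, is:

```python
def fac2(predicted, observed):
    """FAC2: fraction of prediction/observation pairs whose ratio lies
    within a factor of two (0.5 <= p/o <= 2.0), a standard dispersion
    model evaluation statistic; non-positive pairs are excluded."""
    pairs = [(p, o) for p, o in zip(predicted, observed) if p > 0 and o > 0]
    hits = sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)

# four paired values: ratios 0.91, 3.0, 0.8, 2.5 -> two within a factor of two
score = fac2([1.0, 3.0, 0.4, 10.0], [1.1, 1.0, 0.5, 4.0])
```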

  11. Estimating accidental pollutant releases in the built environment from turbulent concentration signals

    NASA Astrophysics Data System (ADS)

    Ben Salem, N.; Salizzoni, P.; Soulhac, L.

    2017-01-01

    We present an inverse atmospheric model to estimate the mass flow rate of an impulsive source of pollutant, whose position is known, from concentration signals registered at receptors placed downwind of the source. The originality of this study is twofold. Firstly, the inversion is performed using high-frequency fluctuating, i.e. turbulent, concentration signals. Secondly, the inverse algorithm is applied to a dispersion process within a dense urban canopy, at the district scale, and a street network model, SIRANERISK, is adopted. The model, which is tested against wind tunnel experiments, simulates the dispersion of short-duration releases of pollutant in different typologies of idealised urban geometries. Results allow us to discuss the reliability of the inverse model as an operational tool for crisis management and the risk assessments related to the accidental release of toxic and flammable substances.
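With the source position known and a linear source-receptor relationship assumed, the mass flow rate can be estimated by least squares from a fluctuating receptor signal. This sketch is not the SIRANERISK inversion itself; the unit-release response and the turbulence model are invented.

```python
import numpy as np

def estimate_mass_flow(c_obs, g):
    """Least-squares estimate of a constant release rate q from a
    fluctuating receptor signal modelled as c(t) ~ q * g(t), where
    g(t) is the predicted concentration per unit release rate."""
    return float(np.dot(c_obs, g) / np.dot(g, g))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 60.0, 601)
g = np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2)                   # unit-release response
c_obs = 3.0 * g * (1.0 + 0.2 * rng.standard_normal(t.size))  # turbulent signal
q_hat = estimate_mass_flow(c_obs, g)
```

Averaging over the whole high-frequency signal, rather than a single mean value, is what makes the estimate robust to the concentration fluctuations discussed above.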

  12. Computational Toxicology: Application in Environmental Chemicals

    EPA Science Inventory

    This chapter provides an overview of computational models that describe various aspects of the source-to-health effect continuum. Fate and transport models describe the release, transportation, and transformation of chemicals from sources of emission throughout the general envir...

  13. Risk Assessment for Toxic Air Pollutants: A Citizen's Guide

    MedlinePlus

    ... from the source(s). Engineers use either monitors or computer models to estimate the amount of pollutant released ... measure how much of the pollutant is present. Computer models use mathematical equations that represent the processes ...

  14. On numerical model of time-dependent processes in three-dimensional porous heat-releasing objects

    NASA Astrophysics Data System (ADS)

    Lutsenko, Nickolay A.

    2016-10-01

The gas flows in the gravity field through porous objects with heat-releasing sources are investigated when self-regulation of the flow rate of the gas passing through the porous object takes place. Such objects can appear after various natural or man-made disasters (like the exploded unit of the Chernobyl NPP). A mathematical model and an original numerical method, based on a combination of explicit and implicit finite difference schemes, are developed for investigating time-dependent processes in 3D porous energy-releasing objects. The advantage of the numerical model is its ability to describe unsteady processes under both natural convection and forced filtration. The gas cooling of 3D porous objects with different distributions of heat sources is studied using computational experiments.
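A much-reduced sketch of explicit time-stepping for a heat-releasing porous zone, here just 1-D heat conduction with a volumetric source. The paper's model additionally couples gas filtration and combines explicit and implicit schemes, none of which is reproduced here; all values are illustrative.

```python
import numpy as np

def step_explicit(T, source, alpha, dx, dt):
    """One explicit finite-difference step of 1-D heat conduction with
    a volumetric heat source and fixed-temperature boundaries."""
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
    T_new = T + dt * (alpha * lap + source)
    T_new[0] = T_new[-1] = 300.0        # cooled boundaries
    return T_new

n, dx, dt, alpha = 51, 0.1, 0.001, 1.0  # dt <= dx^2/(2*alpha) for stability
T = np.full(n, 300.0)
q = np.zeros(n)
q[20:30] = 50.0                          # localised heat-releasing zone
for _ in range(2000):
    T = step_explicit(T, q, alpha, dx, dt)
```

The explicit scheme's stability restriction on `dt` is one reason the paper resorts to a mixed explicit/implicit formulation for the full coupled problem.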

  15. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of a pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data.
Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that the proposed linked ANN-Optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
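The objective-function minimisation over candidate source parameters can be sketched with a 1-D analytical advection-dispersion forward model and a brute-force search. The ANN lag-time estimator is omitted, and every value below is invented for illustration.

```python
import math

def conc_1d(x, t, x0, q, v=1.0, D=0.5):
    """Analytical 1-D advection-dispersion solution for an
    instantaneous point release of mass q at location x0."""
    return (q / math.sqrt(4.0 * math.pi * D * t)
            ) * math.exp(-((x - x0 - v * t) ** 2) / (4.0 * D * t))

def objective(params, obs):
    """Sum of squared differences between observed and simulated
    concentrations, the form of objective described in the abstract."""
    x0, q = params
    return sum((c - conc_1d(x, t, x0, q)) ** 2 for (x, t), c in obs.items())

# synthetic observations generated by a "true" source at x0=2.0, q=5.0
true_params = (2.0, 5.0)
obs = {(x, t): conc_1d(x, t, *true_params)
       for x in (6.0, 8.0, 10.0) for t in (3.0, 5.0, 7.0)}

# brute-force search over candidate (location, strength) pairs
candidates = [(x0, q) for x0 in (1.0, 1.5, 2.0, 2.5, 3.0)
                      for q in (3.0, 4.0, 5.0, 6.0)]
best = min(candidates, key=lambda p: objective(p, obs))
```

In the paper, a proper optimizer replaces the grid search and the ANN supplies the lag time; the structure of the misfit function, however, is the same.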

  16. Toxic industrial chemical (TIC) source emissions modeling for pressurized liquefied gases

    NASA Astrophysics Data System (ADS)

    Britter, Rex; Weil, Jeffrey; Leung, Joseph; Hanna, Steven

    2011-01-01

The objective of this article is to report current toxic industrial chemical (TIC) source emissions formulas appropriate for use in atmospheric comprehensive risk assessment models so as to represent state-of-the-art knowledge. The focus is on high-priority scenarios, including two-phase releases of pressurized liquefied gases such as chlorine from rail cars. The total mass released and the release duration are major parameters, as well as the velocity, thermodynamic state, and amount and droplet sizes of embedded aerosols of the material at the exit of the rupture, which are required as inputs to the subsequent jet and dispersion modeling. Because of the many possible release scenarios that could develop, a suite of model equations has been described. These allow for gas, two-phase or liquid storage and release through ruptures of various types including sharp-edged and "pipe-like" ruptures. Model equations for jet depressurization and phase change due to flashing are available. Consideration of the importance of vessel response to a rupture is introduced. The breakup of the jet into fine droplets and their subsequent suspension and evaporation, or rainout is still a significant uncertainty in the overall modeling process. The recommended models are evaluated with data from various TIC field experiments, in particular recent experiments with pressurized liquefied gases. It is found that there is typically a factor of two error in models compared with research-grade observations of mass flow rates. However, biases are present in models' estimates of the droplet size distributions resulting from flashing releases.
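For a pressurised liquefied gas, the fraction that flashes to vapour on depressurisation is commonly estimated from an isenthalpic energy balance. A sketch with rough textbook property values for chlorine (an assumption for illustration, not the article's model suite):

```python
def flash_fraction(T_storage, T_boil, cp_liq, h_vap):
    """Isenthalpic flash fraction for a superheated pressurised liquid
    suddenly depressurised to atmospheric pressure:
    x = cp * (T_storage - T_boil) / h_vap, clipped to [0, 1]."""
    x = cp_liq * (T_storage - T_boil) / h_vap
    return max(0.0, min(1.0, x))

# rough textbook values for chlorine: boiling point ~239 K,
# liquid heat capacity ~0.95 kJ/(kg K), latent heat ~288 kJ/kg
x = flash_fraction(T_storage=293.0, T_boil=239.0, cp_liq=0.95, h_vap=288.0)
```

The remaining liquid fraction is what ends up as the droplet aerosol whose suspension-versus-rainout behaviour the article identifies as the main residual uncertainty.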

  17. Decision analysis of emergency ventilation and evacuation strategies against suddenly released contaminant indoors by considering the uncertainty of source locations.

    PubMed

    Cai, Hao; Long, Weiding; Li, Xianting; Kong, Lingjuan; Xiong, Shuang

    2010-06-15

When hazardous contaminants are suddenly released indoors, prompt and proper emergency responses are critical to protect occupants. This paper aims to provide a framework for determining the optimal combination of ventilation and evacuation strategies by considering the uncertainty of source locations. The certainty of source locations is classified as complete certainty, incomplete certainty, and complete uncertainty to cover all possible situations. According to this classification, three types of decision analysis models are presented. A new concept, the efficiency factor of contaminant source (EFCS), is incorporated in these models to evaluate the payoffs of the ventilation and evacuation strategies. A procedure of decision-making based on these models is proposed and demonstrated by numerical studies of one hundred scenarios with ten ventilation modes, two evacuation modes, and five source locations. The results show that the models can be useful in directing the decision analysis of both ventilation and evacuation strategies. In addition, the certainty of the source locations has an important effect on the outcomes of the decision-making.
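Under "incomplete certainty" of the source location, strategy selection reduces to maximising an expected payoff over the possible locations. A minimal sketch with invented EFCS-style scores and probabilities:

```python
def best_strategy(payoff, source_probs):
    """Expected-value decision rule: choose the ventilation/evacuation
    strategy whose EFCS-style payoff, averaged over the source-location
    probabilities, is highest."""
    def expected(row):
        return sum(p * v for p, v in zip(source_probs, row))
    return max(payoff, key=lambda s: expected(payoff[s]))

# payoff[strategy][source_location]; the probabilities encode
# incomplete certainty about where the release occurred
payoff = {"mode_A": [0.9, 0.2, 0.4], "mode_B": [0.5, 0.6, 0.5]}
choice = best_strategy(payoff, source_probs=[0.2, 0.5, 0.3])
```

Complete certainty is the special case of a one-hot probability vector, and complete uncertainty can be handled by a uniform vector or a maximin rule instead.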

  18. Reconstruction of Atmospheric Tracer Releases with Optimal Resolution Features: Concentration Data Assimilation

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Turbelin, Gregory; Issartel, Jean-Pierre; Kumar, Pramod; Feiz, Amir Ali

    2015-04-01

Fast-growing urbanization, industrialization and military development increase risks to the human environment and ecology, as realized in several past incidents, for instance the Chernobyl nuclear explosion (Ukraine), the Bhopal gas leak (India) and the Fukushima-Daiichi radionuclide release (Japan). To reduce the threat of, and exposure to, hazardous contaminants, a fast preliminary identification of unknown releases is required by the responsible authorities for emergency preparedness and air quality analysis. Often, an early detection of such contaminants is pursued by a distributed sensor network. However, identifying the origin and strength of unknown releases from the sensor-reported concentrations is a challenging task. It requires an optimal strategy to integrate the measured concentrations with the predictions given by atmospheric dispersion models. This is an inverse problem. The measured concentrations are insufficient, and atmospheric dispersion models suffer from inaccuracy due to the lack of process understanding, turbulence uncertainties, etc. These lead to a loss of information in the reconstruction process and thus affect the resolution, stability and uniqueness of the retrieved source. An additional well-known issue is the numerical artifact arising at the measurement locations due to the strong concentration gradient and the dissipative nature of the concentration field. Thus, assimilation techniques are desired which can lead to an optimal retrieval of the unknown releases. In general, this is facilitated within a Bayesian inference and optimization framework with a suitable choice of a priori information, regularization constraints, and measurement and background error statistics. An inversion technique is introduced here for an optimal reconstruction of unknown releases using limited concentration measurements.
It is based on an adjoint representation of the source-receptor relationship and the use of a weight function which encodes a priori information about the unknown releases apparent to the monitoring network. The properties of the weight function provide optimal data resolution and model resolution for the retrieved source estimates. The retrieved source estimates are proved theoretically to be stable against random measurement errors, and their reliability can be interpreted in terms of the distribution of the weight functions. Further, the same framework can be extended to the identification of point-type releases by utilizing the maximum of the retrieved source estimates. The inversion technique has been evaluated with several diffusion experiments, such as the Idaho low-wind diffusion experiment (1974), the IIT Delhi tracer experiment (1991), the European Tracer Experiment (1994) and the Fusion Field Trials (2007). In the point-release experiments, the source parameters are mostly retrieved close to the true source parameters, with small error. The proposed technique overcomes two major difficulties in source reconstruction: (i) the initialization of the source parameters required by optimization-based techniques, on which the converged solution depends; and (ii) the statistical knowledge of the measurement and background errors required by Bayesian inference techniques, which is hypothetically assumed when no prior knowledge is available.

  19. Dynamical model for the toroidal sporadic meteors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokorný, Petr; Vokrouhlický, David; Nesvorný, David

More than a decade of radar operations by the Canadian Meteor Orbit Radar have allowed both young and moderately old streams to be distinguished from the dispersed sporadic background component. The latter has been categorized according to broad radiant regions visible to Earth-based observers into three broad classes: the helion and anti-helion source, the north and south apex sources, and the north and south toroidal sources (and a related arc structure). The first two are populated mainly by dust released from Jupiter-family comets and new comets. Proper modeling of the toroidal sources has not to date been accomplished. Here, we develop a steady-state model for the toroidal source of the sporadic meteoroid complex, compare our model with the available radar measurements, and investigate a contribution of dust particles from our model to the whole population of sporadic meteoroids. We find that the long-term stable part of the toroidal particles is mainly fed by dust released by Halley type (long period) comets (HTCs). Our synthetic model reproduces most of the observed features of the toroidal particles, including the most troublesome low-eccentricity component, which is due to a combination of two effects: particles' ability to decouple from Jupiter and circularize by the Poynting-Robertson effect, and large collision probability for orbits similar to that of the Earth. Our calibrated model also allows us to estimate the total mass of the HTC-released dust in space and check the flux necessary to maintain the cloud in a steady state.

  20. Quantitative evaluation of an air-monitoring network using atmospheric transport modeling and frequency of detection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rood, Arthur S.; Sondrup, A. Jeffrey; Ritter, Paul D.

A methodology to quantify the performance of an air monitoring network in terms of frequency of detection has been developed. The methodology utilizes an atmospheric transport model to predict air concentrations of radionuclides at the samplers for a given release time and duration. Frequency of detection is defined as the fraction of “events” that result in a detection at either a single sampler or network of samplers. An “event” is defined as a release of finite duration that begins on a given day and hour of the year from a facility with the potential to emit airborne radionuclides. Another metric of interest is the network intensity, which is defined as the fraction of samplers in the network that have a positive detection for a given event. The frequency of detection methodology allows for evaluation of short-term releases that include effects of short-term variability in meteorological conditions. The methodology was tested using the U.S. Department of Energy Idaho National Laboratory (INL) Site ambient air monitoring network consisting of 37 low-volume air samplers in 31 different locations covering a 17,630 km² region. Releases from six major INL facilities distributed over an area of 1,435 km² were modeled and included three stack sources and eight ground-level sources. A Lagrangian puff air dispersion model (CALPUFF) was used to model atmospheric transport. The model was validated using historical 125Sb releases and measurements. Relevant one-week release quantities from each emission source were calculated based on a dose of 1.9 × 10⁻⁴ mSv at a public receptor (0.01 mSv assuming the release persists over a year). Important radionuclides considered include 241Am, 137Cs, 238Pu, 239Pu, 90Sr, and tritium. Results show the detection frequency is over 97.5% for the entire network considering all sources and radionuclides. Network intensities ranged from 3.75% to 62.7%.
Evaluation of individual samplers indicated some samplers were poorly situated and add little to the overall effectiveness of the network. As a result, using the frequency of detection methods, optimum sampler placements were simulated that could substantially improve the performance and efficiency of the network.
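The two network metrics defined above reduce to simple fractions over an event-by-sampler detection table; a minimal sketch with invented detections:

```python
def network_stats(detections):
    """detections[e][s] is True when sampler s detects event e.
    Frequency of detection = fraction of events seen by at least one
    sampler; network intensity (per event) = fraction of samplers
    registering a detection."""
    freq = sum(1 for ev in detections if any(ev)) / len(detections)
    intensity = [sum(ev) / len(ev) for ev in detections]
    return freq, intensity

det = [[True, False, True],     # event detected by 2 of 3 samplers
       [False, False, False],   # undetected event
       [True, True, True]]      # detected by the whole network
freq, inten = network_stats(det)
```

In the study, the detection table itself comes from CALPUFF-predicted concentrations compared against sampler detection limits, one row per candidate release time.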

  1. Quantitative evaluation of an air-monitoring network using atmospheric transport modeling and frequency of detection methods

    DOE PAGES

    Rood, Arthur S.; Sondrup, A. Jeffrey; Ritter, Paul D.

    2016-04-01

    A methodology to quantify the performance of an air monitoring network in terms of frequency of detection has been developed. The methodology utilizes an atmospheric transport model to predict air concentrations of radionuclides at the samplers for a given release time and duration. Frequency of detection is defined as the fraction of "events" that result in a detection at either a single sampler or a network of samplers. An "event" is defined as a release of finite duration that begins on a given day and hour of the year from a facility with the potential to emit airborne radionuclides. Another metric of interest is the network intensity, which is defined as the fraction of samplers in the network that have a positive detection for a given event. The frequency of detection methodology allows for evaluation of short-term releases that include effects of short-term variability in meteorological conditions. The methodology was tested using the U.S. Department of Energy Idaho National Laboratory (INL) Site ambient air monitoring network consisting of 37 low-volume air samplers in 31 different locations covering a 17,630 km2 region. Releases from six major INL facilities distributed over an area of 1,435 km2 were modeled and included three stack sources and eight ground-level sources. A Lagrangian puff air dispersion model (CALPUFF) was used to model atmospheric transport. The model was validated using historical 125Sb releases and measurements. Relevant one-week release quantities from each emission source were calculated based on a dose of 1.9 × 10^-4 mSv at a public receptor (0.01 mSv assuming the release persists over a year). Important radionuclides considered include 241Am, 137Cs, 238Pu, 239Pu, 90Sr, and tritium. Results show the detection frequency is over 97.5% for the entire network considering all sources and radionuclides. Network intensities ranged from 3.75% to 62.7%. Evaluation of individual samplers indicated some samplers were poorly situated and added little to the overall effectiveness of the network. As a result, using the frequency of detection methods, optimum sampler placements were simulated that could substantially improve the performance and efficiency of the network.
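    The two network metrics defined above can be sketched in a few lines. This is an illustrative sketch, not code from the study; the detection matrix below is invented.

```python
# detections[i][j] is True if release event i produces a positive
# detection at sampler j (hypothetical data for illustration).

def detection_frequency(detections):
    """Fraction of events detected by at least one sampler in the network."""
    return sum(any(event) for event in detections) / len(detections)

def network_intensity(event):
    """Fraction of samplers with a positive detection for a single event."""
    return sum(event) / len(event)

# Toy example: 4 events observed by a 3-sampler network.
detections = [
    [True, False, False],
    [True, True, True],
    [False, False, False],
    [False, True, False],
]
print(detection_frequency(detections))            # 0.75
print([network_intensity(e) for e in detections])
```

In the study these metrics are evaluated over releases starting at every day and hour of the year, so the event list would enumerate start times rather than four toy rows.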

  2. Monitoring and Predicting the Long Distance Transport of Fusarium graminearum, Causal Agent of Fusarium Head Blight in Wheat and Barley

    NASA Astrophysics Data System (ADS)

    Prussin, Aaron Justin, II

    Fusarium head blight (FHB), caused by Fusarium graminearum , is a serious disease of wheat and barley that has caused several billion dollars in crop losses over the last decade in the United States. Spores of F. graminearum are released from corn and small grain residues left over from the previous growing season and are transported long distances in the atmosphere before being deposited. Current risk assessment tools consider environmental conditions favorable for disease development, but do not include spore transport. Long distance transport models have been proposed for a number of plant pathogens, but many of these models have not been experimentally validated. In order to predict the atmospheric transport of F. graminearum, the potential source strength (Qpot) of inoculum must be known. We conducted a series of laboratory and field experiments to estimate Qpot from a field-scale source of inoculum of F. graminearum. Perithecia were generated on artificial (carrot agar) and natural (corn stalk) substrates. Artificial substrate (carrot agar) produced 15+/-0.4 perithecia cm-2, and natural substrate (corn stalk) produced 44+/-2 perithecia cm-2. Individual perithecia were excised from both substrate types and allowed to release ascospores every 24 hours. Perithecia generated from artificial (carrot agar) and natural (corn stalk) substrates released a mean of 104+/-5 and 276+/-16 ascospores, respectively. A volumetric spore trap was placed inside a 3,716 m2 clonal source of inoculum in 2011 and 2012. Results indicated that ascospores were released under field conditions predominantly (>90%) during the night (1900 to 0700 hours). Estimates of Qpot for our field-scale sources of inoculum were approximately 4 billion ascospores per 3,716 m2. Release-recapture studies were conducted from a clonal field-scale source of F. graminearum in 2011 and 2012. Microsatellites were used to identify the released clone of F. graminearum at distances up to 1 km from the source. Dispersal kernels for field observations were compared to results predicted by a Gaussian dispersal-based spore transport model. In 2011 and 2012, dispersal kernel shape coefficients were similar for both results observed in the field and predicted by the model, with both being dictated by a power law function, indicating that turbulence was the dominant transport factor on the scale we studied (~1 km). Model predictions had a stronger correlation with the number of spores being released when using a time varying q0 emission rate (r= 0.92 in 2011 and r= 0.84 in 2012) than an identical daily pattern q0 emission rate (r= 0.35 in 2011 and r= 0.32 in 2012). The actual numbers of spores deposited were 3 and 2000 times lower than predicted if Qpot were equal to the actual number of spores released in 2011 and 2012, respectively. Future work should address estimating the actual number of spores released from an inoculated field during any given season, to improve prediction accuracy of the model. This work should assist in improving current risk assessment tools for FHB and contribute to the development of early warning systems for the spread of F. graminearum.
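    A power-law dispersal kernel of the kind described above can be recovered from deposition counts by a log-log least-squares fit. This is a minimal sketch using noise-free synthetic counts, not the field data:

```python
import math

def fit_power_law(distances, counts):
    """Least-squares fit of log(count) = log(a) - b*log(r),
    returning (a, b) for the kernel D(r) = a * r**-b."""
    xs = [math.log(r) for r in distances]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Synthetic counts that follow r^-2 exactly:
r = [10, 50, 100, 500, 1000]            # distance from source, m
counts = [ri ** -2 * 1e6 for ri in r]   # invented deposition counts
a, b = fit_power_law(r, counts)
print(round(b, 3))   # 2.0
```

With real trap counts the fitted shape coefficient b is what gets compared between field observations and the transport model.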

  3. Toward a Mechanistic Source Term in Advanced Reactors: Characterization of Radionuclide Transport and Retention in a Sodium Cooled Fast Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia J.; Bucknor, Matthew; Grabaskas, David

    A vital component of the U.S. reactor licensing process is an integrated safety analysis in which a source term representing the release of radionuclides during normal operation and accident sequences is analyzed. Historically, source term analyses have utilized bounding, deterministic assumptions regarding radionuclide release. However, advancements in technical capabilities and the knowledge state have enabled the development of more realistic and best-estimate retention and release models such that a mechanistic source term assessment can be expected to be a required component of future licensing of advanced reactors. Recently, as part of a Regulatory Technology Development Plan effort for sodium-cooled fast reactors (SFRs), Argonne National Laboratory has investigated the current state of knowledge of potential source terms in an SFR via an extensive review of previous domestic experiments, accidents, and operation. As part of this work, the significant sources and transport processes of radionuclides in an SFR have been identified and characterized. This effort examines all stages of release and source term evolution, beginning with release from the fuel pin and ending with retention in containment. Radionuclide sources considered in this effort include releases originating both in-vessel (e.g. in-core fuel, primary sodium, cover gas cleanup system, etc.) and ex-vessel (e.g. spent fuel storage, handling, and movement). Releases resulting from a primary sodium fire are also considered as a potential source. For each release group, dominant transport phenomena are identified and qualitatively discussed. The key product of this effort was the development of concise, inclusive diagrams that illustrate the release and retention mechanisms at a high level, where unique schematics have been developed for in-vessel, ex-vessel and sodium fire releases. This review effort has also found that despite the substantial range of phenomena affecting radionuclide release, the current state of knowledge is extensive, and in most areas may be sufficient. Several knowledge gaps were identified, such as uncertainty in release from molten fuel and availability of thermodynamic data for lanthanides and actinides in liquid sodium. However, the overall findings suggest that high retention rates can be expected within the fuel and primary sodium for all radionuclides other than noble gases.

  4. NATIONAL AND REGIONAL AIR AND DEPOSITION MODELING OF STATIONARY AND MOBILE SOURCE EMISSIONS OF DIOXINS USING THE RELMAP MODELING SYSTEM

    EPA Science Inventory

    The purpose of this study is to estimate the atmospheric transport, fate and deposition flux of air releases of CDDs and CDFs from known sources within the continental United States using the Regional Lagrangian Model of Air Pollution (RELMAP). RELMAP is a Lagrangian air model th...

  5. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts were made to assess the magnitude of the emissions, based on knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations come from a data rescue effort that started more than 10 years ago, with the final goal of providing the available measurements to anyone interested. With regard to our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50% higher than what was previously published. From the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10% less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration and deposition observations over Europe. The results of the present inversion were confirmed using an independent Eulerian model, for which deposition patterns were also improved when using the estimated posterior releases. Although the independent model tends to underestimate deposition in countries that are not in the main direction of the plume, it reproduces country-level deposition totals well. The results were also tested for robustness against different setups of the inversion through sensitivity runs. The source term data from this study are publicly available.
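    The Bayesian source-term estimation described above can be illustrated with a linear source-receptor model and a Gaussian prior. All numbers below are invented for illustration and are not the Chernobyl values; the two "true" emissions merely reuse the magnitudes quoted in the abstract.

```python
def posterior_mean(M, y, x_prior, sigma_obs, sigma_prior):
    """Posterior mean for y = M x with Gaussian obs error and prior:
    solve (M^T M / so^2 + I / sp^2) x = M^T y / so^2 + x_prior / sp^2
    for two unknown emission rates, by direct 2x2 elimination."""
    so2, sp2 = sigma_obs ** 2, sigma_prior ** 2
    A = [[sum(M[k][i] * M[k][j] for k in range(len(M))) / so2
          + (1 / sp2 if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    b = [sum(M[k][i] * y[k] for k in range(len(M))) / so2 + x_prior[i] / sp2
         for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

M = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]   # source-receptor sensitivities
x_true = [80.0, 86.0]                      # "true" emissions (PBq)
y = [sum(m * x for m, x in zip(row, x_true)) for row in M]
x_hat = posterior_mean(M, y, x_prior=[60.0, 60.0],
                       sigma_obs=0.1, sigma_prior=50.0)
print([round(v, 1) for v in x_hat])        # recovers [80.0, 86.0]
```

With accurate observations the posterior snaps to the true emissions despite the biased prior; real inversions add deposition operators, injection-height unknowns and many more segments, but the structure is the same.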

  6. Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The Accident Model Document (AMD) is the second volume of the three-volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population resulting from postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.

  7. Environmental impact and risk assessments and key factors contributing to the overall uncertainties.

    PubMed

    Salbu, Brit

    2016-01-01

    There is a significant number of nuclear and radiological sources that have contributed, are still contributing, or have the potential to contribute to radioactive contamination of the environment in the future. To protect the environment from radioactive contamination, impact and risk assessments are performed prior to or during a release event, in the short or long term after deposition, or before and after the implementation of countermeasures. When environmental impact and risks are assessed, however, a series of factors contributes to the overall uncertainties. To provide environmental impact and risk assessments, information on processes, kinetics and a series of input variables is needed. Compounded by variability, questionable assumptions, gaps in knowledge, extrapolations and poor conceptual model structures, these factors can produce large and often unacceptable uncertainties in impact and risk assessments. Information on the source term and the release scenario is an essential starting point in impact and risk models; the source determines activity concentrations and atom ratios of the radionuclides released, while the release scenario determines the physico-chemical forms of released radionuclides such as particle size distribution, structure and density. Releases will most often contain other contaminants such as metals, and due to interactions, contaminated sites should be assessed as multiple stressor scenarios. Following deposition, a series of stressors, interactions and processes will influence the ecosystem transfer of radionuclide species and thereby influence biological uptake (toxicokinetics) and responses (toxicodynamics) in exposed organisms. Due to the variety of biological species, extrapolation is frequently needed to fill gaps in knowledge, e.g. from effects to no effects, from effects in one organism to others, or from one stressor to mixtures. Most toxicity tests, however, are performed as short-term exposures of adult organisms, ignoring sensitive life history stages and transgenerational effects. To link sources, ecosystem transfer and biological effects to future impact and risks, a series of models is usually interfaced, while uncertainty estimates are seldom given. The model predictions are, however, only valid within the boundaries of the overall uncertainties. Furthermore, the model predictions are only useful and relevant when uncertainties are estimated, communicated and understood. Among the key factors contributing most to uncertainties, the present paper focuses especially on structural uncertainties (model bias or discrepancies), as aspects such as particle releases, ecosystem dynamics, mixed exposure, sensitive life history stages and transgenerational effects are usually ignored in assessment models. Research focused on these aspects should significantly reduce the overall uncertainties in the impact and risk assessment of radioactively contaminated ecosystems.
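    The point that predictions are only meaningful with an uncertainty estimate is commonly addressed by Monte Carlo propagation of input uncertainty through the assessment chain. A toy sketch, with an invented dose model and invented parameter distributions:

```python
import random

random.seed(42)

def dose(release, transfer_factor, occupancy):
    """Toy dose model: dose = release * ecosystem transfer * occupancy.
    (A stand-in for the interfaced model chain, not a real assessment model.)"""
    return release * transfer_factor * occupancy

samples = []
for _ in range(10_000):
    release = random.lognormvariate(mu=0.0, sigma=0.5)   # relative release
    transfer = random.uniform(0.8, 1.2)                  # ecosystem transfer
    occupancy = random.triangular(0.5, 1.0, 0.9)         # exposure scenario
    samples.append(dose(release, transfer, occupancy))

samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median = {median:.2f}, 95th percentile = {p95:.2f}")
```

Reporting the spread between the median and an upper percentile, rather than a single point value, is the minimal form of the uncertainty communication the abstract calls for; structural (model-form) uncertainty is harder and is the paper's main focus.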

  8. Experimental study of the thermal-acoustic efficiency in a long turbulent diffusion-flame burner

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.

    1983-01-01

    A two-year study of noise production in a long tubular burner is described. The research was motivated by an interest in understanding and eventually reducing core noise in gas turbine engines. The general approach is to employ an acoustic source/propagation model to interpret the sound pressure spectrum in the acoustic far field of the burner in terms of the source spectrum that must have produced it. In the model the sources are assumed to be due uniquely to the unsteady component of combustion heat release; thus only direct combustion-noise is considered. The source spectrum is then the variation with frequency of the thermal-acoustic efficiency, defined as the fraction of combustion heat release which is converted into acoustic energy at a given frequency. The thrust of the research was to study the variation of the source spectrum with the design and operating parameters of the burner.

  9. Aquitard contaminant storage and flux resulting from dense nonaqueous phase liquid source zone dissolution and remediation

    EPA Science Inventory

    A one-dimensional diffusion model was used to investigate the effects of dense non-aqueous phase liquid (DNAPL) source zone dissolution and remediation on the storage and release of contaminants from aquitards. Source zone dissolution was represented by a power-law source depleti...

  10. Sensitivity of a Bayesian atmospheric-transport inversion model to spatio-temporal sensor resolution applied to the 2006 North Korean nuclear test

    NASA Astrophysics Data System (ADS)

    Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.

    2017-12-01

    Atmospheric source reconstruction allows for the probabilistic estimate of source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation, and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally-efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high frequency observations and less expensive, low frequency observations.
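    The ensemble-plus-surrogate workflow described above can be sketched end to end with a stand-in forward model: sample inputs from the prior, fit a cheap surrogate, then Monte Carlo sample the posterior through the surrogate. Everything below is illustrative; the real study uses thousands of dispersion runs and a machine-learned surrogate.

```python
import math, random

random.seed(1)

def forward_model(q):
    """Stand-in for one expensive dispersion run: observation vs release q."""
    return 3.2 * q   # linear response, for illustration only

# (1) Ensemble over a uniform prior on the source quantity q.
qs = [random.uniform(0.0, 10.0) for _ in range(200)]
ys = [forward_model(q) for q in qs]

# (2) Cheap surrogate: least-squares slope of y ~ c * q.
c = sum(q * y for q, y in zip(qs, ys)) / sum(q * q for q in qs)

# (3) Posterior over q given one noisy observation, via importance-weighted
#     Monte Carlo draws from the prior and a Gaussian likelihood.
y_obs, sigma = 16.0, 0.5
draws = [random.uniform(0.0, 10.0) for _ in range(50_000)]
weights = [math.exp(-0.5 * ((c * q - y_obs) / sigma) ** 2) for q in draws]
q_mean = sum(q * w for q, w in zip(draws, weights)) / sum(weights)
print(round(q_mean, 1))   # posterior mean near 16.0 / 3.2 = 5.0
```

The surrogate is what makes the 50,000 posterior evaluations affordable; only the 200 "ensemble" runs touch the expensive model.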

  11. Implementation of a Thermodynamic Solver within a Computer Program for Calculating Fission-Product Release Fractions

    NASA Astrophysics Data System (ADS)

    Barber, Duncan Henry

    During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired, and gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 has employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada. To overcome the limitations of computers of that time, the implementation of the RMC model employed lookup tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria using Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0.
    A benchmark calculation demonstrates the improved agreement of the total inventory of the chemical elements included in the RMC fuel model with an ORIGEN-S calculation. ORIGEN-S is the Oak Ridge isotope generation and depletion computer program. The Gibbs energy minimizer requires a chemical database containing coefficients from which the Gibbs energy of pure compounds, gas and liquid mixtures, and solid solutions can be calculated. The RMC model of irradiated uranium dioxide fuel has been converted into the required format. The Gibbs energy minimizer has been incorporated into a new model of fission-product vaporization from the fuel surface. Calculated release fractions using the new code have been compared to results calculated with SOURCE IST 2.0P11 and to results of tests used in the validation of SOURCE 2.0. The new code shows improvements in agreement with experimental releases for a number of nuclides. Of particular significance is the better agreement between experimental and calculated release fractions for 140La. The improved agreement reflects the inclusion in the RMC model of the solubility of lanthanum (III) oxide (La2O3) in the fuel matrix. Calculated lanthanide release fractions from earlier computer programs were a challenge to environmental qualification analysis of equipment for some accident scenarios. The new prototype computer program would alleviate this concern. Keywords: Nuclear Engineering; Materials Science; Thermodynamics; Radioactive Material; Gibbs Energy Minimization; Actinide Generation and Depletion; Fission-Product Generation and Depletion.
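    The thermodynamic core of the solver described above is Gibbs energy minimization at fixed temperature and pressure subject to element balance. A toy sketch for an ideal-gas isomerization A <-> B with invented standard chemical potentials; a real solver handles many species, mixtures and solid solutions with constrained optimization rather than a grid search.

```python
import math

R, T = 8.314, 1000.0                   # J/(mol K), K
mu0 = {"A": 0.0, "B": -5000.0}         # invented standard chemical potentials, J/mol

def gibbs(nA):
    """Total G for nA mol of A and (1 - nA) mol of B; the single element
    balance nA + nB = 1 is built into the parameterization."""
    nB = 1.0 - nA
    g = 0.0
    for n, sp in ((nA, "A"), (nB, "B")):
        if n > 0:
            g += n * (mu0[sp] + R * T * math.log(n / (nA + nB)))
    return g

# Coarse grid minimization over the interior of the feasible interval.
nA_eq = min((i / 10_000 for i in range(1, 10_000)), key=gibbs)

# Cross-check against the analytic equilibrium constant K = exp(-dG0/RT).
K = math.exp(5000.0 / (R * T))
print(round(nA_eq, 3), round(1 / (1 + K), 3))   # grid result vs analytic
```

The grid minimum lands on the same composition as the closed-form law of mass action, which is the consistency a Gibbs minimizer must reproduce before scaling up to fuel chemistry.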

  12. Carnivore Translocations and Conservation: Insights from Population Models and Field Data for Fishers (Martes pennanti)

    PubMed Central

    Lewis, Jeffrey C.; Powell, Roger A.; Zielinski, William J.

    2012-01-01

    Translocations are frequently used to restore extirpated carnivore populations. Understanding the factors that influence translocation success is important because carnivore translocations can be time consuming, expensive, and controversial. Using population viability software, we modeled reintroductions of the fisher, a candidate for endangered or threatened status in the Pacific states of the US. Our model predicts that the most important factor influencing successful re-establishment of a fisher population is the number of adult females reintroduced (provided some males are also released). Data from 38 translocations of fishers in North America, including 30 reintroductions, 5 augmentations and 3 introductions, show that the number of females released was, indeed, a good predictor of success but that the number of males released, geographic region and proximity of the source population to the release site were also important predictors. The contradiction between model and data regarding males may relate to the assumption in the model that all males are equally good breeders. We hypothesize that many males, probably large ones, may need to be released to ensure that a sufficient number of good breeders are included. Seventy-seven percent of reintroductions with known outcomes (success or failure) succeeded; all 5 augmentations succeeded; but none of the 3 introductions succeeded. Reintroductions were instrumental in reestablishing fisher populations within their historical range and expanding the range from its most-contracted state (43% of the historical range) to its current state (68% of the historical range).
To increase the likelihood of translocation success, we recommend that managers: 1) release as many fishers as possible, 2) release more females than males (55–60% females) when possible, 3) release as many adults as possible, especially large males, 4) release fishers from a nearby source population, 5) conduct a formal feasibility assessment, and 6) develop a comprehensive implementation plan that includes an active monitoring program. PMID:22479336
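    The founder-female effect reported above can be illustrated with a small stochastic branching simulation. The demographic rates below are invented for illustration and are not the fisher parameters used in the study:

```python
import random

random.seed(7)

def persists(n_females, years=25, p_survive=0.75, recruits=0.5, cap=500):
    """True if a founder population of n_females lasts `years` years.
    Each female survives a year with p_survive and, if she survives,
    recruits one daughter with probability `recruits`."""
    f = n_females
    for _ in range(years):
        survivors = sum(random.random() < p_survive for _ in range(f))
        daughters = sum(random.random() < recruits for _ in range(survivors))
        f = min(survivors + daughters, cap)   # crude carrying capacity
        if f == 0:
            return False
    return True

for n in (2, 5, 10, 20):
    rate = sum(persists(n) for _ in range(500)) / 500
    print(f"{n:>2} founder females -> persistence {rate:.0%}")
```

Even though the expected growth rate is the same at every founding size, small releases go extinct far more often, which is the demographic-stochasticity argument behind recommendation 1.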

  13. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants such as the historical accident in Chernobyl 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011 have drastic impacts on the population and environment. The hazardous consequences extend to national and continental scales. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents, both for research purposes and, more importantly, to determine the immediate threat to the population. However, the assessments of the regional radionuclide activity concentrations and the individual exposure to radiation dose are subject to several uncertainties, for example in the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are therefore combined with atmospheric transport models. The release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method and hence of the resulting source term depends, amongst other factors, on the availability, reliability, and spatial and temporal resolution of the observations.
    Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours to a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and with higher temporal resolution. Gamma dose rate measurements contain no explicit information on the observed spectrum of radionuclides and have to be interpreted carefully. Nevertheless, they provide valuable information for the inverse evaluation of the source term due to their availability (Saunier et al., 2013). We present a new inversion approach combining an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The gamma dose rates are calculated from the modelled activity concentrations. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008). The a priori information on the source term is a first guess. The gamma dose rate observations will be used with inverse modelling to improve this first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
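    Because gamma dose rates are linear in the release rate for a fixed dispersion pattern, the first-guess improvement described above reduces, in the simplest scalar case, to an analytic Bayesian update. The dose-rate kernels and observations below are invented, not FLEXPART output:

```python
# k[i]: modelled dose rate at detector i per unit release (nSv/h per g/s),
# i.e. the dispersion-model sensitivity; obs[i]: observed dose rates.
k = [0.8, 1.5, 0.3, 2.0]
obs = [10.0, 18.4, 3.9, 24.5]
sigma_obs = 0.5                   # observation uncertainty, nSv/h

q_prior, sigma_q = 10.0, 5.0      # first-guess release rate and its std dev

# Posterior mean for a scalar source with Gaussian prior and obs errors:
num = (q_prior / sigma_q ** 2
       + sum(ki * yi for ki, yi in zip(k, obs)) / sigma_obs ** 2)
den = 1.0 / sigma_q ** 2 + sum(ki * ki for ki in k) / sigma_obs ** 2
q_post = num / den
print(round(q_post, 2))
```

The data pull the first guess of 10 g/s upward toward the value the dose-rate observations support; a time-resolved source term repeats this update for a vector of release segments with full covariances (as in Eckhardt et al., 2008).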

  14. Short-term emergency response planning and risk assessment via an integrated modeling system for nuclear power plants in complex terrain

    NASA Astrophysics Data System (ADS)

    Chang, Ni-Bin; Weng, Yu-Chi

    2013-03-01

    Short-term predictions of potential impacts from accidental release of various radionuclides at nuclear power plants are acutely needed, especially after the Fukushima accident in Japan. An integrated modeling system that provides expert services to assess the consequences of accidental or intentional releases of radioactive materials to the atmosphere has received wide attention. These scenarios can be initiated either by accident due to human, software, or mechanical failures, or from intentional acts such as sabotage and radiological dispersal devices. Stringent action might be required just minutes after the occurrence of accidental or intentional release. Previous studies of emergency preparedness and response systems have seldom considered the suitability of air pollutant dispersion models or the connectivity between source term, dispersion, and exposure assessment models in a holistic context for decision support. As a result, the Gaussian plume and puff models, which are suitable only for representing neutral air pollutants over flat terrain under a limited range of meteorological conditions, are frequently used to predict the impact of accidental releases from industrial sources. In situations with complex terrain or special meteorological conditions, the proposed emergency response actions might be questionable and even intractable for decision-makers responsible for maintaining public health and environmental quality. This study is a preliminary effort to integrate the source term, dispersion, and exposure assessment models into a Spatial Decision Support System (SDSS) to tackle the complex issues for short-term emergency response planning and risk assessment at nuclear power plants.
    Through a series of model screening procedures, we found that the diagnostic (objective) wind field model, with the aid of sufficient on-site meteorological monitoring data, was the most applicable model for promptly resolving local wind field patterns. However, most of the hazardous materials released into the environment from nuclear power plants are not neutral pollutants, so the particle and multi-segment puff models can be regarded as the most suitable models to couple to the output of the diagnostic wind field model in a modern emergency preparedness and response system. The proposed SDSS illustrates a state-of-the-art system design based on the complex terrain of South Taiwan. This system design, with 3-dimensional animation capability using a tailored source term model in connection with ArcView® Geographical Information System map layers and remote sensing images, is useful for meeting the design goal of nuclear power plants located in complex terrain.
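    The Gaussian plume formula whose limitations the abstract criticizes is easy to state; its flat-terrain restriction comes from treating the dispersion parameters as simple functions of downwind distance. A sketch with illustrative values:

```python
import math

# Ground-reflected Gaussian plume concentration for a continuous point
# source. In practice sigma_y and sigma_z grow with downwind distance via
# a stability class; here they are fixed inputs, which is exactly the
# flat-terrain simplification discussed above.

def plume_conc(q, u, y, z, h, sigma_y, sigma_z):
    """Concentration (g/m^3) for release rate q (g/s), wind speed u (m/s),
    crosswind offset y, height z, effective stack height h (all m)."""
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level value with sigmas typical of roughly 1 km downwind:
c = plume_conc(q=100.0, u=5.0, y=0.0, z=0.0, h=50.0, sigma_y=80.0, sigma_z=40.0)
print(f"{c:.2e} g/m^3")
```

Nothing in this formula knows about terrain or time-varying winds, which is why the study replaces it with a diagnostic wind field driving particle and multi-segment puff models.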

  15. TRANSPORT, FATE AND RISK IMPLICATIONS OF ENVIRONMENTALLY ACCEPTABLE ENDPOINT DECISIONS

    EPA Science Inventory

    The second and third year project goals are the following: Continue to develop and finalize the expected source zone module incorporating slow release and finalize the contaminated soil screening model. Chemical rate of release data will be obtained and used with t...

  16. Contributions of solar wind and micrometeoroids to molecular hydrogen in the lunar exosphere

    NASA Astrophysics Data System (ADS)

    Hurley, Dana M.; Cook, Jason C.; Retherford, Kurt D.; Greathouse, Thomas; Gladstone, G. Randall; Mandt, Kathleen; Grava, Cesare; Kaufmann, David; Hendrix, Amanda; Feldman, Paul D.; Pryor, Wayne; Stickle, Angela; Killen, Rosemary M.; Stern, S. Alan

    2017-02-01

    We investigate the density and spatial distribution of the H2 exosphere of the Moon assuming various source mechanisms. Owing to its low mass, escape is non-negligible for H2. For high-energy source mechanisms, a high percentage of the released molecules escape lunar gravity. Thus, the H2 spatial distribution for high-energy release processes reflects the spatial distribution of the source. For low energy release mechanisms, the escape rate decreases and the H2 redistributes itself predominantly to reflect a thermally accommodated exosphere. However, a small dependence on the spatial distribution of the source is superimposed on the thermally accommodated distribution in model simulations, where density is locally enhanced near regions of higher source rate. For an exosphere accommodated to the local surface temperature, a source rate of 2.2 g s⁻¹ is required to produce a steady state density at high latitude of 1200 cm⁻³. Greater source rates are required to produce the same density for more energetic release mechanisms. Physical sputtering by solar wind and direct delivery of H2 through micrometeoroid bombardment can be ruled out as mechanisms for producing and liberating H2 into the lunar exosphere. Chemical sputtering by the solar wind is the most plausible as a source mechanism and would require 10-50% of the solar wind H+ inventory to be converted to H2 to account for the observations.

  17. Contributions of Solar Wind and Micrometeoroids to Molecular Hydrogen in the Lunar Exosphere

    NASA Technical Reports Server (NTRS)

    Hurley, Dana M.; Cook, Jason C.; Retherford, Kurt D.; Greathouse, Thomas; Gladstone, G. Randall; Mandt, Kathleen; Grava, Cesare; Kaufmann, David; Hendrix, Amanda; Feldman, Paul D.; hide

    2016-01-01

    We investigate the density and spatial distribution of the H2 exosphere of the Moon assuming various source mechanisms. Owing to its low mass, escape is non-negligible for H2. For high-energy source mechanisms, a high percentage of the released molecules escape lunar gravity. Thus, the H2 spatial distribution for high-energy release processes reflects the spatial distribution of the source. For low energy release mechanisms, the escape rate decreases and the H2 redistributes itself predominantly to reflect a thermally accommodated exosphere. However, a small dependence on the spatial distribution of the source is superimposed on the thermally accommodated distribution in model simulations, where density is locally enhanced near regions of higher source rate. For an exosphere accommodated to the local surface temperature, a source rate of 2.2 g s⁻¹ is required to produce a steady state density at high latitude of 1200 cm⁻³. Greater source rates are required to produce the same density for more energetic release mechanisms. Physical sputtering by solar wind and direct delivery of H2 through micrometeoroid bombardment can be ruled out as mechanisms for producing and liberating H2 into the lunar exosphere. Chemical sputtering by the solar wind is the most plausible as a source mechanism and would require 10-50% of the solar wind H+ inventory to be converted to H2 to account for the observations.

  18. Quantification of methane fluxes from industrial sites using a combination of a tracer release method and a Gaussian model

    NASA Astrophysics Data System (ADS)

    Ars, S.; Broquet, G.; Yver-Kwok, C.; Wu, L.; Bousquet, P.; Roustan, Y.

    2015-12-01

    Greenhouse gas (GHG) concentrations have kept increasing in the atmosphere since the industrial revolution. Methane (CH4) is the second most important anthropogenic GHG after carbon dioxide (CO2). Its sources and sinks are nowadays well identified; however, their relative contributions remain uncertain. Industry and waste treatment emit an important part of the anthropogenic methane, which is difficult to quantify because the sources are fugitive and discontinuous. A better estimation of methane emissions could help industries adapt their mitigation policies and encourage them to install methane recovery systems, reducing their emissions while saving money. Different methods exist to quantify methane emissions. Among them is the tracer release method, which consists of releasing a tracer gas near the methane source at a well-known rate and measuring both concentrations in the emission plume. The methane rate is calculated from the ratio of the methane and tracer concentrations and the emission rate of the tracer. A good estimation of the methane emissions requires a good differentiation between the methane actually emitted by the site and the methane background concentration level, but also good knowledge of the source distribution over the site. For this purpose, a Gaussian plume model is used in addition to the tracer release method to assess the calculated emission rates. In a first step, the tracer data obtained during a field campaign are used to tune the model. Different model parameterizations have been tested to find the best representation of the atmospheric dispersion conditions. Once these parameters are set, methane emissions are estimated from the measured methane concentrations via a Bayesian inversion. This makes it possible to adjust the position and emission rate of the different methane sources of the site and to remove the methane background concentration.
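
    The tracer-ratio calculation described here (methane rate from the ratio of plume enhancements, scaled by the known tracer rate) can be illustrated with a synthetic transect; the function name and all numbers below are hypothetical:

```python
import numpy as np

def tracer_ratio_flux(ch4, tracer, ch4_bg, tracer_bg, q_tracer):
    """CH4 emission rate from a plume transect via the tracer ratio method.

    Integrates the CH4 and tracer enhancements above background across the
    transect and scales their ratio by the known tracer release rate
    (concentrations in consistent units; molar conversion handled upstream).
    """
    ch4_excess = np.sum(np.asarray(ch4) - ch4_bg)
    tracer_excess = np.sum(np.asarray(tracer) - tracer_bg)
    return q_tracer * ch4_excess / tracer_excess

# Synthetic transect: the CH4 enhancement is exactly twice the tracer's
x = np.linspace(-1.0, 1.0, 201)
plume = np.exp(-x**2 / 0.05)
ch4 = 1900.0 + 40.0 * plume        # ppb, background 1900
tracer = 10.0 + 20.0 * plume       # ppb, background 10
q_ch4 = tracer_ratio_flux(ch4, tracer, 1900.0, 10.0, q_tracer=1.0)
# enhancement ratio is 2, so the CH4 estimate is twice the tracer rate
```

    The method's sensitivity to background choice is visible here: any bias in `ch4_bg` shifts the integrated excess, which is why the abstract stresses differentiating site emissions from the background level.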

  19. The project MOHAVE tracer study: study design, data quality, and overview of results

    NASA Astrophysics Data System (ADS)

    Green, Mark C.

    In the winter and summer of 1992, atmospheric tracer studies were conducted in support of project MOHAVE, a visibility study in the southwestern United States. The primary goal of project MOHAVE is to determine the effects of the Mohave power plant and other sources upon visibility at Grand Canyon National Park. Perfluorocarbon tracers (PFTs) were released from the Mohave power plant and other locations and monitored at about 30 sites. The tracer data are being used for source attribution analysis and for evaluation of transport and dispersion models and receptor models. Collocated measurements showed the tracer data to be of high quality and suitable for source attribution analysis and model evaluation. The results showed strong influences of channeling by the Colorado River canyon during both winter and summer. Flow from the Mohave power plant was usually to the south, away from the Grand Canyon in winter and to the northeast, toward the Grand Canyon in summer. Tracer released at Lake Powell in winter was found to often travel downstream through the entire length of the Grand Canyon. Data from summer tracer releases in southern California demonstrated the existence of a convergence zone in the western Mohave Desert.

  20. Impact of meteorological inflow uncertainty on tracer transport and source estimation in urban atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip

    2015-08-08

    Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.

  1. MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID

    EPA Science Inventory

    Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradu...

  2. A constrained robust least squares approach for contaminant release history identification

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.

    2006-04-01

    Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast as one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solver. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS is found to give much better performance than its classical counterpart, nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
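
    As a point of reference for the "classical counterpart" mentioned above, a nonnegative least-squares recovery of a toy release history might look like the sketch below. The response matrix, pulse, and noise level are invented for illustration; this is plain NNLS, not the CRLS formulation itself, which additionally accounts for uncertainty in the response matrix:

```python
import numpy as np
from scipy.optimize import nnls

# Toy release-history identification: a lower-triangular response matrix
# maps release rates at each time step to later observations, a stand-in
# for the superposition-built response matrix described in the abstract.
n = 30
t = np.arange(n)
G = np.tril(np.exp(-0.3 * (t[:, None] - t[None, :])))   # unit-response kernel
s_true = np.zeros(n)
s_true[5:10] = 2.0                                      # brief release pulse
rng = np.random.default_rng(0)
d = G @ s_true + 0.01 * rng.standard_normal(n)          # noisy observations

s_hat, rnorm = nnls(G, d)     # classical nonnegative least squares
```

    With a well-conditioned system and small noise, NNLS recovers the pulse; the paper's point is that under model uncertainty and ill-conditioning this classical estimate degrades, motivating the robust variant.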

  3. MULTI-MEDIA MODELING : RESEARCH AND DEVELOPMENT

    EPA Science Inventory

    Developed by ORD in collaboration with OSW, the Multimedia, Multi-pathway, Multi-receptor Risk Assessment (3MRA) national risk assessment methodology is designed to assess risks at sites containing source(s) of contamination that may release contaminants to the environment. Or...

  4. Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.

    PubMed

    Liu, X; Zhai, Z

    2007-12-01

    Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.

  5. Historical and Future Trends in Global Source-receptor Relationships of Mercury

    NASA Astrophysics Data System (ADS)

    Chen, L.; Zhang, W.; Wang, X.

    2017-12-01

    Growing concerns about the risk associated with increasing environmental mercury (Hg) levels have resulted in a focus on the relationships between intercontinental Hg emissions and accumulation. We use a global biogeochemical Hg model with eight continental regions and a global ocean to evaluate the legacy impacts of historical anthropogenic releases (2000 BC to 2008 AD) on global source-receptor relationships of Hg. Our results confirm that the legacy impacts of historical anthropogenic releases on the source-receptor relationships are significant. Historical anthropogenic releases from Asia account for 8% of total soil Hg in North America, which is smaller than the proportion (~17%) from previous studies. The largest contributors to global oceanic Hg are historical anthropogenic releases from North America (26%), Asia (16%), Europe (14%) and South America (14%). Although anthropogenic releases from Asia have exceeded those from North America since the 1970s, source contributions to global Hg receptors from Asia have not yet exceeded those from North America. Future projections indicate that, if Hg emissions are not effectively controlled, Asia will exceed North America as the largest contributor to the global ocean in 2019, with a long-term adverse impact on the future environment. For the Arctic Ocean, historical anthropogenic releases from North America contribute most to the oceanic Hg reservoir, and future projections reveal that the legacy impacts of historical releases from mid-latitudes would lead to rising Hg in the Arctic Ocean in future decades, which calls for more effective Hg controls on mid-latitude releases.

  6. Historical and future trends in global source-receptor relationships of mercury.

    PubMed

    Chen, Long; Zhang, Wei; Zhang, Yanxu; Tong, Yindong; Liu, Maodian; Wang, Huanhuan; Xie, Han; Wang, Xuejun

    2018-01-01

    Growing concern about the risk associated with increasing environmental mercury (Hg) concentrations has resulted in a focus on the relationships between intercontinental Hg emissions and accumulation. We use a global biogeochemical Hg model with 8 continental regions and a global ocean to evaluate the legacy impacts of historical anthropogenic releases (2000 BCE to 2008 AD) on global source-receptor relationships of Hg. Legacy impacts of historical anthropogenic releases are confirmed to be significant for the source-receptor relationships according to our results. Historical anthropogenic releases from Asia account for 8% of total soil Hg in North America, which is smaller than the proportion (~17%) from previous studies. The largest contributors to global oceanic Hg are historical anthropogenic releases from North America (26%), Asia (16%), Europe (14%) and South America (14%). Although anthropogenic releases from Asia have exceeded those from North America since the 1970s, source contributions to global Hg receptors from Asia have not yet exceeded those from North America. Future projections indicate that if Hg emissions are not effectively controlled, Asia will exceed North America as the largest contributor to the global ocean in 2019, with a long-term adverse impact on the future environment. For the Arctic Ocean, historical anthropogenic releases from North America contribute most to the oceanic Hg reservoir, and future projections reveal that the legacy impacts of historical releases from mid-latitudes would lead to rising Hg in the Arctic Ocean in future decades, which calls for more effective Hg controls on mid-latitude releases.

  7. Soundscapes

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Soundscapes. Michael B. Porter and Laurel J. Henderson. ...hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on commercial... modeling of the soundscape due to noise involves running an acoustic model for a grid of source positions over latitude and longitude. Typically

  8. A Methodology for the Integration of a Mechanistic Source Term Analysis in a Probabilistic Framework for Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew

    GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of PRA methodologies to conduct a mechanistic source term (MST) analysis for event sequences that could result in the release of radionuclides. The MST analysis seeks to realistically model and assess the transport, retention, and release of radionuclides from the reactor to the environment. The MST methods developed during this project seek to satisfy the requirements of the Mechanistic Source Term element of the ASME/ANS Non-LWR PRA standard. The MST methodology consists of separate analysis approaches for risk-significant and non-risk significant event sequences that may result in the release of radionuclides from the reactor. For risk-significant event sequences, the methodology focuses on a detailed assessment, using mechanistic models, of radionuclide release from the fuel, transport through and release from the primary system, transport in the containment, and finally release to the environment. The analysis approach for non-risk significant event sequences examines the possibility of large radionuclide releases due to events such as re-criticality or the complete loss of radionuclide barriers. This paper provides details on the MST methodology, including the interface between the MST analysis and other elements of the PRA, and provides a simplified example MST calculation for a sodium fast reactor.

  9. Preparation and Physicochemical Evaluation of Controlled-release Carbon Source Tablet for Groundwater in situ Denitrification

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Kang, J. H.; Yeum, Y.; Han, K. J.; Kim, D. W.; Park, C. W.

    2015-12-01

    Nitric nitrogen, such as NO3-, can be a typical pollution source entering groundwater through domestic sewage, livestock and agricultural wastewater. Resident microflora in the aquifer are known to remove nitric nitrogen spontaneously through denitrification, with a carbon source (CS) as the reactant. However, this reaction can proceed very slowly for lack of CS, and there have been several studies on the controlled addition of CS (Ref #1-3). The aim of this study was to prepare a controlled-release carbon source (CR-CS) tablet and to evaluate its in vitro release profile for groundwater in situ denitrification. The CR-CS tablet was manufactured by the direct compression method using a hydraulic laboratory press (Caver® 3850) with an 8 mm rounded concave punch/die. Seven kinds of CR-CS tablet were prepared to determine the nature of the additives and their ratios, including sodium silicate, dicalcium phosphate, bentonite and sand #8. For each formulation, the LOD% and flowability of the pre-mixed powders and the hardness of the compressed tablets were analyzed. An in vitro release study was performed to obtain dissolution profiles following the USP Apparatus 2 method in 900 mL of distilled water at 20 °C. The lubricated powders were compared in terms of their ability to give an acceptable dry pre-mixed powder for the tableting process. The hardness of the compressed tablets was acceptable for all formulations tested. The in vitro release study confirmed that the different CR-CS tablet formulations have various release-rate patterns, reaching 100% release at 3, 6 and 12 hours. The in vitro dissolution profiles were in good agreement with the Higuchi release kinetic model. In conclusion, this study can serve as a background for the development and evaluation of controlled-release carbon source (CR-CS) tablets for the purification of groundwater by in situ denitrification.
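
    The Higuchi model referenced above relates cumulative release to the square root of time, Q(t) = kH·√t. A sketch of fitting kH to dissolution data (the values below are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical cumulative-release data for one tablet formulation
t_hr = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])         # hours
q_pct = np.array([29.0, 41.0, 57.0, 71.0, 81.0, 99.0])  # % released

# Higuchi kinetics: Q(t) = kH * sqrt(t). Fit kH by least squares through
# the origin and compute the coefficient of determination of the fit.
x = np.sqrt(t_hr)
k_h = float(x @ q_pct / (x @ x))
q_fit = k_h * x
r2 = 1.0 - float(np.sum((q_pct - q_fit) ** 2)
                 / np.sum((q_pct - q_pct.mean()) ** 2))
```

    A high r2 against the square-root-of-time model is the kind of "good correlation" the abstract reports for its formulations.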

  10. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first, the discrepancy is regularized by adding additional terms, which may include Tikhonov regularization, the distance from a priori information, or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, to which a maximal allowed error term may be added. Even though this is a developed field with many possible solution techniques, most of them do not consider even the simplest constraints that are naturally present in atmospheric modelling, such as the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural both for identifying the source location and for identifying the time profile of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release.
In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques used for finding sparse solutions and propose modifications to handle selected constraints such as nonnegativity and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results are surprisingly good. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
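
    One way to sketch a sparse, nonnegative recovery of the kind discussed above is a projected iterative shrinkage-thresholding (ISTA) scheme; the problem sizes and data are synthetic, and this is only one of several possible methods, not necessarily the ones compared in the paper:

```python
import numpy as np

def sparse_nonneg(A, b, lam=0.05, n_iter=20000):
    """Projected ISTA for min 0.5*||Ax - b||^2 + lam*sum(x) with x >= 0.

    For nonnegative x the l1 penalty is linear, so each iteration is a
    gradient step, a shrinkage by lam, and a projection onto x >= 0.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x

# Underdetermined toy problem: 100 candidate release values, 40
# observations, only three true nonzeros, as in sparse source
# identification where few release points or time steps are active.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[7, 42, 91]] = [3.0, 1.5, 2.0]
b = A @ x_true
x_hat = sparse_nonneg(A, b)
```

    The projection step is what makes the nonnegativity constraint, absent from most off-the-shelf compressive sensing solvers, essentially free here.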

  11. Location identification for indoor instantaneous point contaminant source by probability-based inverse Computational Fluid Dynamics modeling.

    PubMed

    Liu, X; Zhai, Z

    2008-02-01

    Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effective control and improvement of indoor environment quality require immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-based inverse modeling method that can identify the source location of an instantaneous point source placed in an enclosed environment with known release time. The study presents mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating a good capability of the method for finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
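
    The probability-based idea can be illustrated with a toy posterior over candidate source locations, assuming a precomputed library of forward-simulated sensor readings; all numbers and the noise model below are hypothetical, not taken from the paper's CFD cases:

```python
import numpy as np

# Hypothetical forward library: predicted readings at 3 sensors for each
# of 4 candidate source locations (e.g. precomputed by forward CFD runs).
predicted = np.array([
    [1.0, 0.2, 0.1],
    [0.3, 1.1, 0.4],
    [0.1, 0.5, 1.2],
    [0.6, 0.6, 0.6],
])
observed = np.array([0.28, 1.05, 0.45])
sigma = 0.1                                  # assumed sensor noise (std dev)

# Gaussian likelihood under each candidate with a uniform prior;
# normalize in log space for numerical stability.
log_lik = -np.sum((predicted - observed) ** 2, axis=1) / (2.0 * sigma ** 2)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()
best = int(np.argmax(post))                  # most probable source location
```

    The adjoint formulation in the paper avoids enumerating forward runs per candidate, but the posterior-over-locations output has the same shape as this sketch.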

  12. Use of MODIS Satellite Images and an Atmospheric Dust Transport Model to Evaluate Juniperus spp. Pollen Phenology and Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Myers, O. B.; Budge, A. M.; hide

    2011-01-01

    Pollen can be transported great distances. Van de Water et al. reported that Juniperus spp. pollen was transported 200-600 km. Hence, local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. DREAM (the Dust REgional Atmospheric Model) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and quantities of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen release will be estimated based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of the timing and quantity of pollen release will be used for verification. This information will support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico's environmental public health decision support for asthma and allergy alerts.

  13. Reactive transport model for bioremediation of nitrate using fumarate in groundwater system: verification and field application

    NASA Astrophysics Data System (ADS)

    Lee, S.; Yeo, I. W.; Yeum, Y.; Kim, Y.

    2016-12-01

    Previous studies showed that groundwater in rural areas of Korea is often contaminated with nitrate, highly exceeding the drinking water standard of 10 mg/L (NO3-N), which poses a major threat to human and livestock health. An in-situ bioremediation method has been developed to reduce high nitrate-nitrogen concentrations in groundwater using a slowly released, encapsulated carbon source. Collaborative research within this study revealed fumarate to be a very effective carbon source in terms of cost and nitrate reduction compared with formate, propionate, and lactate. For reactive transport modeling of the bioremediation of nitrate using fumarate, the BTEX module of RT3D incorporated in GMS, a commercial groundwater modeling software package developed by AQUAVEO, was adopted, with BTEX replaced by fumarate as the carbon source. Column tests were carried out to determine transport and reaction parameters for the numerical modeling, such as the dispersivity and the first-order degradation rate of nitrate by fumarate. The calibration of the numerical model against the column tests strongly indicated that nitrate, known to be non-reactive in groundwater systems, appeared to be retarded due to sorption by fumarate. The calibrated model was tested in a field-scale application at the composting facility in Gimje, Korea. The numerical results showed that the model could simulate the nitrate reduction by fumarate in a field-scale groundwater system. The reactive transport model for nitrate can be used as a tool for the optimum design of in-situ nitrate bioremediation systems, including the release depth and amount of fumarate and the spacing of the wells through which encapsulated fumarate is released.
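
    The first-order degradation rate mentioned above can be estimated from column-test samples by a log-linear fit to C(t) = C0·e^(−kt); the data here are invented for illustration, not the paper's measurements:

```python
import numpy as np

# Hypothetical column-test samples: nitrate concentration over time
t_d = np.array([0.0, 1.0, 2.0, 4.0, 7.0])     # days
c = np.array([20.0, 14.8, 11.0, 6.1, 2.5])    # mg/L NO3-N

# First-order kinetics C(t) = C0 * exp(-k t): a log-linear least-squares
# fit yields the degradation rate k and the corresponding half-life.
slope, intercept = np.polyfit(t_d, np.log(c), 1)
k = -slope                                    # per day
half_life = np.log(2.0) / k                   # days
```

    A rate constant of this kind is one of the reaction parameters that the column-test calibration described above would feed into the RT3D transport model.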

  14. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase in regional radionuclide activity concentrations that results from nuclear accidents is subject to a multiplicity of uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, among other things, on the availability, reliability and spatio-temporal resolution of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of the available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and with higher temporal resolution, and therefore provide a broader basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model with observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides.
The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008, Stohl et al., 2012). The a priori information on the source term is a first guess. The gamma dose rate observations are used to improve the first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
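
    The Bayesian formulation described here, which improves a first-guess source term using observations, can be sketched in its standard Gaussian (maximum a posteriori) form. The matrices and noise levels below are synthetic; the actual inversion uses FLEXPART source-receptor sensitivities and a dose-rate observation operator rather than a random matrix:

```python
import numpy as np

def map_source_term(H, y, x_a, B, R):
    """Gaussian MAP estimate of a release-rate vector.

    H: source-receptor matrix, y: observations, x_a: a priori source term,
    B, R: prior and observation error covariances. Standard analysis form:
    x = x_a + B H^T (H B H^T + R)^{-1} (y - H x_a).
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_a + K @ (y - H @ x_a)

# Toy setup: a flat, uncertain first guess and a two-interval true release
rng = np.random.default_rng(2)
H = rng.uniform(0.0, 1.0, (12, 4))            # stand-in source-receptor matrix
x_true = np.array([0.0, 5.0, 5.0, 0.0])
y = H @ x_true + 0.05 * rng.standard_normal(12)
x_a = np.full(4, 2.0)                         # a priori (first guess)
x_hat = map_source_term(H, y, x_a, B=4.0 * np.eye(4), R=0.01 * np.eye(12))
```

    A plain Gaussian analysis like this does not enforce nonnegativity of the release rates; the cited inversion frameworks address that and the treatment of the prior uncertainties explicitly.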

  15. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
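The maximum-likelihood idea for tuning prior error amplitudes can be sketched in a toy linear-Gaussian setting: scan a candidate observation-error variance r and maximise the marginal likelihood of the observations. This only illustrates the principle, not the paper's semi-Gaussian scheme, which additionally enforces source positivity; every number below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_src = 30, 3
H = rng.random((n_obs, n_src))   # toy source-receptor matrix
x_true = np.array([1.0, 2.0, 0.5])
x_a = np.zeros(n_src)            # prior guess: no release
B = 4.0 * np.eye(n_src)          # assumed prior covariance
r_true = 0.05                    # true observation-error variance
y = H @ x_true + rng.normal(0.0, np.sqrt(r_true), n_obs)

def log_marginal(r):
    """Log marginal likelihood of y when the observation-error variance is r:
    y ~ N(H x_a, H B H^T + r I)."""
    S = H @ B @ H.T + r * np.eye(n_obs)
    d = y - H @ x_a
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + d @ np.linalg.solve(S, d))

candidates = np.logspace(-3, 1, 50)
r_hat = candidates[np.argmax([log_marginal(r) for r in candidates])]
print(r_hat)
```

The log-determinant penalises overly large error variances while the data misfit penalises overly small ones, so the scan recovers the order of magnitude of the true noise level.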

  16. Anthropogenic 129I in the atmosphere: overview over major sources, transport processes and deposition pattern.

    PubMed

    Reithmeier, H; Lazarev, V; Rühm, W; Nolte, E

    2010-10-01

    Wet and, to a lesser extent, dry deposition of atmospheric (129)I are known to be the dominant processes responsible for (129)I in continental environmental samples that are remote from (129)I sources and not directly influenced by any liquid (129)I release of nuclear installations. Up to now, however, little is known about the major emitters and the related global deposition pattern of (129)I. In this work an overview of major sources of (129)I is given, and hitherto unknown time-dependent releases from these were estimated. Total gaseous (129)I releases from the US and former Soviet reprocessing facilities Hanford, Savannah River, Mayak, Seversk and Zheleznogorsk were found to have been 0.53, 0.27, 1.05, 0.23 and 0.14 TBq, respectively. These facilities were thus identified as major airborne (129)I emitters. The global deposition pattern due to the (129)I released, depending on geographic latitude and longitude, and on time, was studied using a box model describing the global atmospheric transport and deposition of (129)I. The model predictions are compared to (129)I concentrations measured by means of Accelerator Mass Spectrometry (AMS) in water samples collected from various lakes in Asia, Africa, America and New Zealand, and to published values. As a result, both the pattern and the temporal evolution of (129)I deposition values measured in and calculated for different types of environmental samples are, in general, in good agreement. This supports our estimates of atmospheric (129)I releases and the transport and deposition mechanisms considered in our model calculations.
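The budget that such a box model encodes can be caricatured with a single well-mixed atmospheric box (the study itself resolves latitude, longitude and time); the residence time and emission rate below are illustrative assumptions, not values from the paper:

```python
# Single-box caricature of a global atmospheric budget:
# d(burden)/dt = E(t) - burden / tau, with deposition flux burden / tau.
tau = 14.0 / 365.0   # assumed atmospheric residence time [yr] (~2 weeks)
dt = 1.0 / 365.0     # daily time step [yr]
steps = 730          # two years of daily steps
emission = 0.5       # assumed constant release rate [TBq/yr]

burden = 0.0
deposited = 0.0
for _ in range(steps):
    dep = burden / tau
    burden += (emission - dep) * dt
    deposited += dep * dt

# After many residence times the airborne burden settles at emission * tau,
# and nearly everything emitted has been deposited.
print(burden, deposited)
```

Because the residence time is short compared with the release history, the deposition flux essentially tracks the emission curve, which is why deposition archives such as lake water can constrain the time-dependent releases.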

  17. Measurements of scalar released from point sources in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.

    2017-04-01

    Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements over a substantially long time so as to obtain converged statistics for the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982, J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
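The reflected Gaussian model referred to above adds an image source below the wall so that no scalar flux crosses the ground. A minimal sketch with illustrative (non-dimensional) parameters:

```python
import numpy as np

def reflected_gaussian(y, z, sz, q=1.0, u=1.0, sig_y=1.0, sig_z=1.0):
    """Mean concentration for an elevated point source at height sz, with the
    ground (z = 0) treated as a perfect reflector via an image source at -sz;
    this is the classical form the measurements are compared against."""
    lateral = np.exp(-y**2 / (2.0 * sig_y**2))
    vertical = (np.exp(-(z - sz)**2 / (2.0 * sig_z**2))
                + np.exp(-(z + sz)**2 / (2.0 * sig_z**2)))
    return q / (2.0 * np.pi * u * sig_y * sig_z) * lateral * vertical

z = np.linspace(0.0, 5.0, 501)
c = reflected_gaussian(0.0, z, sz=2.0)

# The image term makes the vertical gradient vanish at the wall (no flux
# through the ground) and the profile peaks just below the source height.
print(z[np.argmax(c)])
```

For sources well above the wall the image term is negligible near the source, which is why deviations show up mainly for releases in the intermittent outer region.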

  18. Controlled-release Hydrogen Peroxide for On-site Treatment of Organic Pollutants in Urban Storm Runoff

    NASA Astrophysics Data System (ADS)

    Lee, E.; Sun, S.; Kim, Y.

    2011-12-01

    Nonpoint source (NPS) pollutants remain a major cause of environmental problems, significantly impairing the hydrologic and biologic function of urban water systems and human health. Managing NPS loads to urban aquatic systems remains a challenge because of ubiquitous contaminant sources and the large pollutant loads in the first flush. Best management practices (BMPs) exist for reducing NPS pollutants in urban storm waters, but the remedial efficiencies of these passive schemes are unpredictable. This study aims to develop a controlled-release system as part of an in situ chemical oxidation scheme designed for on-site treatment of organic pollutants in urban runoff. Controlled-release hydrogen peroxide (CR-HP) solids were manufactured by dispersing fine sodium percarbonate granules in paraffin wax matrices. Release kinetics and treatment efficiencies of CR-HP for BTEX and MTBE were investigated through a series of column tests. Release data indicated that the CR-HP could continually release hydrogen peroxide (H2O2) into flowing water at controlled rates over 276-1756 days, and that the release rates could be adjusted by changing the mixing ratios of sodium percarbonate and the wax matrices. Additional column tests and model calculations demonstrated that CR-HP/UV systems can provide a low-cost, target-specific, and persistent source of oxidants for efficient treatment of organic compounds in urban storm runoff.
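A first-order release law is a common minimal sketch for such controlled-release solids (the study's actual kinetics may differ); here a higher percarbonate-to-wax mixing ratio is represented simply as a larger rate constant, and both constants are illustrative stand-ins:

```python
import numpy as np

# Hypothetical first-order sketch: the oxidant remaining in the wax matrix
# decays at rate k, so the cumulative released fraction is 1 - exp(-k t).
# The paper reports release durations of roughly 276-1756 days depending on
# the mixing ratio; the rate constants below are not fitted values.
def released_fraction(t_days, k_per_day):
    return 1.0 - np.exp(-k_per_day * t_days)

k_fast = 1.0 / 100.0   # high percarbonate fraction (faster release)
k_slow = 1.0 / 600.0   # low percarbonate fraction (slower release)
print(released_fraction(300.0, k_fast), released_fraction(300.0, k_slow))
```

After 300 days the fast formulation is nearly exhausted while the slow one has released well under half of its oxidant, which is the design lever for matching release duration to treatment needs.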

  19. New Open-Source Version of FLORIS Released | News | NREL

    Science.gov Websites

    New Open-Source Version of FLORIS Released (January 26, 2018). National Renewable Energy Laboratory (NREL) researchers recently released an updated, simplified, and documented open-source version of the living FLORIS utility.

  20. You Can Run, But You Can't Hide Juniper Pollen Phenology and Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey C.

    2013-01-01

    Pollen can be transported great distances. Van de Water et al., 2003 reported Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model, Nickovic et al. 2001) is a verified model for atmospheric dust transport modeling using MODIS data products to identify source regions and quantities of dust. We have modified the DREAM model to incorporate pollen transport. Pollen release is estimated based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities are used as verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergies alerts.

  1. Biomass offsets little or none of permafrost carbon release from soils, streams, and wildfire: an expert assessment

    USGS Publications Warehouse

    Benjamin W. Abbott,; Jeremy B. Jones,; Edward A.G. Schuur,; F.S. Chapin, III; Bowden, William B.; M. Syndonia Bret-Harte,; Howard E. Epstein,; Michael D. Flannigan,; Tamara K. Harms,; Teresa N. Hollingsworth,; Mack, Michelle C.; McGuire, A. David; Susan M. Natali,; Adrian V. Rocha,; Tank, Suzanne E.; Merrit R. Turetsky,; Jorien E. Vonk,; Wickland, Kimberly P.; Aiken, George R.

    2016-01-01

    As the permafrost region warms, its large organic carbon pool will be increasingly vulnerable to decomposition, combustion, and hydrologic export. Models predict that some portion of this release will be offset by increased production of Arctic and boreal biomass; however, the lack of robust estimates of net carbon balance increases the risk of further overshooting international emissions targets. Precise empirical or model-based assessments of the critical factors driving carbon balance are unlikely in the near future, so to address this gap, we present estimates from 98 permafrost-region experts of the response of biomass, wildfire, and hydrologic carbon flux to climate change. Results suggest that contrary to model projections, total permafrost-region biomass could decrease due to water stress and disturbance, factors that are not adequately incorporated in current models. Assessments indicate that end-of-the-century organic carbon release from Arctic rivers and collapsing coastlines could increase by 75% while carbon loss via burning could increase four-fold. Experts identified water balance, shifts in vegetation community, and permafrost degradation as the key sources of uncertainty in predicting future system response. In combination with previous findings, results suggest the permafrost region will become a carbon source to the atmosphere by 2100 regardless of warming scenario but that 65%–85% of permafrost carbon release can still be avoided if human emissions are actively reduced.

  2. Framework for Risk Analysis in Multimedia Environmental Systems: Modeling Individual Steps of a Risk Assessment Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj; Castleton, Karl J.; Hoopes, Bonnie L.

    2004-06-01

    The study of the release and effects of chemicals in the environment and their associated risks to humans is central to public and private decision making. FRAMES 1.X (Framework for Risk Analysis in Multimedia Environmental Systems) is a systems modeling software platform, developed by Pacific Northwest National Laboratory (PNNL), that helps scientists study the release and effects of chemicals on a source-to-outcome basis and create environmental models for similar risk assessment and management problems. The unique aspect of FRAMES is its ability to dynamically introduce software modules representing individual components of a risk assessment (e.g., source release of contaminants, fate and transport in various environmental media, exposure, etc.) within a software framework, manipulate their attributes, and run simulations to obtain results. This paper outlines the fundamental constituents of FRAMES 2.X, an enhanced version of FRAMES 1.X, that greatly improve the ability of module developers to "plug" their self-developed software modules into the system. The basic design, the underlying principles, and a discussion of the guidelines for module developers are presented.

  3. GIS Modeling of Air Toxics Releases from TRI-Reporting and Non-TRI-Reporting Facilities: Impacts for Environmental Justice

    PubMed Central

    Dolinoy, Dana C.; Miranda, Marie Lynn

    2004-01-01

    The Toxics Release Inventory (TRI) requires facilities with 10 or more full-time employees that process > 25,000 pounds in aggregate or use > 10,000 pounds of any one TRI chemical to report releases annually. However, little is known about releases from non-TRI-reporting facilities, nor has attention been given to the very localized equity impacts associated with air toxics releases. Using geographic information systems and industrial source complex dispersion modeling, we developed methods for characterizing air releases from TRI-reporting as well as non-TRI-reporting facilities at four levels of geographic resolution. We characterized the spatial distribution and concentration of air releases from one representative industry in Durham County, North Carolina (USA). Inclusive modeling of all facilities rather than modeling of TRI sites alone significantly alters the magnitude and spatial distribution of modeled air concentrations. Modeling exposure receptors at more refined levels of geographic resolution reveals localized, neighborhood-level exposure hot spots that are not apparent at coarser geographic scales. Multivariate analysis indicates that inclusive facility modeling at fine levels of geographic resolution reveals exposure disparities by income and race. These new methods significantly enhance the ability to model air toxics, perform equity analysis, and clarify conflicts in the literature regarding environmental justice findings. This work has substantial implications for how to structure TRI reporting requirements, as well as methods and types of analysis that will successfully elucidate the spatial distribution of exposure potentials across geographic, income, and racial lines. PMID:15579419

  4. UNCERTAINTY AND SENSITIVITY ANALYSES FOR VERY HIGH ORDER MODELS

    EPA Science Inventory

    While there may in many cases be high potential for exposure of humans and ecosystems to chemicals released from a source, the degree to which this potential is realized is often uncertain. Conceptually, uncertainties are divided among parameters, model, and modeler during simula...

  5. Survival of mountain quail translocated from two distinct source populations

    USGS Publications Warehouse

    Troy, Ronald J.; Coates, Peter S.; Connelly, John W.; Gillette, Gifford; Delehanty, David J.

    2013-01-01

    Translocation of mountain quail (Oreortyx pictus) to restore viable populations to their former range has become a common practice. Because differences in post-release vital rates between animals from multiple source populations have not been well studied, wildlife and land managers may arbitrarily choose the source population or base the source population on immediate availability when planning translocation projects. Similarly, an understanding of the optimal proportion of individuals from different age and sex classes for translocation would benefit translocation planning. During 2006 and 2007, we captured and translocated 125 mountain quail from 2 ecologically distinct areas: 38 from southern California and 87 from southwestern Oregon. We released mountain quail in the Bennett Hills of south-central Idaho. We radio-marked and monitored a subsample of 58 quail and used them for a 2-part survival analysis. Cumulative survival probability was 0.23 ± 0.05 (SE) at 150 days post-release. We first examined an a priori hypothesis (model) that survival varied between the 2 distinct source populations. We found that source population did not explain variation in survival. This result suggests that wildlife managers have flexibility in selecting source populations for mountain quail translocation efforts. In a post hoc examination, we pooled the quail across source populations and evaluated differences in survival probabilities between sex and age classes. The most parsimonious model indicated that adult male survival was substantially less than the survival rates of other mountain quail age and sex classes (i.e., an interaction between sex and age). This result suggests that translocation success could benefit from translocating yearling males rather than adult males, perhaps because adult male breeding behavior results in vulnerability to predators.

  6. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of an atmospheric dispersion model with an improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2015-01-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, midnight of 14 March when the SRV (safety relief valve) was opened three times at Unit 2, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. 
The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rate obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (Modèle Lagrangien de Dispersion de Particules d'ordre zéro: MLDP0, Hybrid Single Particle Lagrangian Integrated Trajectory Model: HYSPLIT, and Met Office's Numerical Atmospheric-dispersion Modelling Environment: NAME) for regional and global calculations, and the calculated results showed good agreement with observed air concentration and surface deposition of 137Cs in eastern Japan.

  7. Using Atmospheric Dispersion Theory to Inform the Design of a Short-lived Radioactive Particle Release Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rishel, Jeremy P.; Keillor, Martin E.; Arrigo, Leah M.

    2016-01-01

    Atmospheric dispersion theory can be used to predict ground deposition of particulates downwind of a radionuclide release. This paper utilizes standard formulations found in Gaussian plume models to inform the design of an experimental release of short-lived radioactive particles into the atmosphere. Specifically, a source depletion algorithm is used to determine the optimum particle size and release height that maximizes the near-field deposition while minimizing both the required source activity and the fraction of activity lost to long-distance transport. The purpose of the release is to provide a realistic deposition pattern that might be observed downwind of a small-scale vent from an underground nuclear explosion. The deposition field will be used, in part, to investigate several techniques of gamma radiation survey and spectrometry that could be utilized by an On-Site Inspection team under the verification regime of the Comprehensive Nuclear-Test-Ban Treaty.
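A source-depletion calculation of the kind mentioned can be sketched as follows; the power-law sigma_z(x) and all parameter values are illustrative assumptions, not the paper's actual design values:

```python
import numpy as np

# Chamberlain-type source-depletion sketch: the airborne source strength Q(x)
# decays downwind as material deposits at velocity v_d from a plume released
# at height h into wind speed u.
def depletion_factor(x_max, h, v_d, u, a=0.08, b=0.9):
    xs = np.linspace(1.0, x_max, 2000)
    sig_z = a * xs**b  # illustrative power-law vertical spread [m]
    integrand = (np.sqrt(2.0 / np.pi) * (v_d / u)
                 * np.exp(-h**2 / (2.0 * sig_z**2)) / sig_z)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs))
    return np.exp(-integral)  # fraction of Q0 still airborne at x_max

# Lower release heights and larger deposition velocities deplete the plume
# faster -- the trade-off the experimental design balances.
f_low = depletion_factor(2000.0, h=2.0, v_d=0.02, u=3.0)
f_high = depletion_factor(2000.0, h=30.0, v_d=0.02, u=3.0)
print(f_low, f_high)
```

Scanning particle size (through v_d) and release height with such a factor is one way to locate the combination that deposits most of the activity in the near field.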

  8. Methane and Environmental Change during the Paleocene-Eocene Thermal Maximum (PETM): Modeling the PETM Onset as a Two-stage Event

    NASA Technical Reports Server (NTRS)

    Carozza, David A.; Mysak, Lawrence A.; Schmidt, Gavin A.

    2011-01-01

    An atmospheric CH4 box model coupled to a global carbon cycle box model is used to constrain the carbon emission associated with the PETM and assess the role of CH4 during this event. A range of atmospheric and oceanic emission scenarios representing different amounts, rates, and isotopic signatures of emitted carbon are used to model the PETM onset. The first 3 kyr of the onset, a pre-isotope-excursion stage, is simulated by the atmospheric release of 900 to 1100 Pg C of CH4 with a delta C-13 of -22 to -30 per mil. For a global average warming of 3 deg C, a release of CO2 to the ocean and CH4 to the atmosphere totalling 900 to 1400 Pg C, with a delta C-13 of -50 to -60 per mil, simulates the subsequent 1-kyr isotope excursion stage. To explain the observations, the carbon must have been released over at most 500 years. The first-stage results cannot be associated with any known PETM hypothesis. However, the second-stage results are consistent with a methane hydrate source. More than a single source of carbon is required to explain the PETM onset.
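The role of the isotopic signature in such scenarios follows from a simple mass balance; a worked sketch with an assumed exogenic reservoir size and illustrative source signatures (not the paper's fitted values):

```python
# Simple isotope mass balance of the kind used to constrain PETM carbon
# sources: mixing m_add Pg C with signature d_add into an exogenic reservoir
# of m0 Pg C with signature d0 shifts the mean delta-13C.
def mixed_delta(m0, d0, m_add, d_add):
    return (m0 * d0 + m_add * d_add) / (m0 + m_add)

m0, d0 = 40000.0, 0.0  # assumed exogenic carbon pool [Pg C] and its delta-13C
excursion_hydrate = mixed_delta(m0, d0, 1200.0, -55.0) - d0  # hydrate-like CH4
excursion_mantle = mixed_delta(m0, d0, 1200.0, -6.0) - d0    # mantle-like CO2

# The same mass of carbon produces a far larger excursion when it is strongly
# 13C-depleted, which is why the second-stage results point toward methane.
print(excursion_hydrate, excursion_mantle)
```

The contrast makes the key inference concrete: matching a given excursion with isotopically heavy carbon requires several times more mass than with hydrate-like carbon.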

  9. Towards the operational estimation of a radiological plume using data assimilation after a radiological accidental atmospheric release

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Vira, Julius; Bocquet, Marc; Sofiev, Mikhail; Saunier, Olivier

    2011-06-01

    In the event of an accidental atmospheric release of radionuclides from a nuclear power plant, accurate real-time forecasting of the activity concentrations of radionuclides is required by decision makers for the preparation of adequate countermeasures. The accuracy of the forecast plume is highly dependent on the source term estimation. In several academic test cases, including ones with real data, inverse modelling and data assimilation techniques have proven to help in the assessment of the source term. In this paper, a semi-automatic method is proposed for the sequential reconstruction of the plume, implementing a sequential data assimilation algorithm based on inverse modelling, with care taken to develop realistic methods for operational risk agencies. The performance of the assimilation scheme has been assessed through an intercomparison between French and Finnish frameworks. Two dispersion models have been used: Polair3D and Silam, developed in two different research centres. Different release locations, as well as different meteorological situations, are tested. The existing and newly planned surveillance networks are used, and realistically large multiplicative observational errors are assumed. The inverse modelling scheme accounts for the strong error bias encountered with such errors. The efficiency of the data assimilation system is tested via statistical indicators. For France and Finland, the average performance of the data assimilation system is strong. However, there are outlying situations where the inversion fails because of too poor observability. In addition, for the case where the power plant responsible for the accidental release is not known, robust statistical tools are developed and tested to discriminate among candidate release sites.

  10. Protein Data Bank depositions from synchrotron sources.

    PubMed

    Jiang, Jiansheng; Sweet, Robert M

    2004-07-01

    A survey and analysis of Protein Data Bank (PDB) depositions from international synchrotron radiation facilities, based on the latest released PDB entries, are reported. The results (http://asdp.bnl.gov/asda/Libraries/) show that worldwide, every year since 1999, more than 50% of the deposited X-ray structures have used synchrotron facilities, reaching 75% by 2003. In this web-based database, all PDB entries among individual synchrotron beamlines are archived, synchronized with the weekly PDB release. Statistics regarding the quality of experimental data and the refined model for all structures are presented, and these are analysed to reflect the impact of synchrotron sources. The results confirm the common impression that synchrotron sources extend the size of structures that can be solved with equivalent or better quality than home sources.

  11. Impacts of blending ground, surface, and saline waters on lead release in drinking water distribution systems.

    PubMed

    Tang, Zhijian; Hong, Seungkwan; Xiao, Weizhong; Taylor, James

    2006-03-01

    The impacts of distribution system water quality changes caused by blending different source waters on lead release from corrosion loops containing small lead coupons were investigated in a pilot distribution study. The 1-year pilot study demonstrated that lead release to drinking water increased as chlorides increased and sulfates decreased. Silica and calcium inhibited lead release to a lesser degree than sulfates. An additional 3-month field study isolated and verified the effects of chlorides and sulfates on lead release. Lead release decreased with increasing pH and increasing alkalinity during the 1-year pilot study; however, the effects of pH and alkalinity on lead release were not clearly elucidated due to confounding effects. A statistical model was developed using nonlinear regression, which showed that lead release increased with increasing chlorides, alkalinity and temperature, and decreased with increasing pH and sulfates. The model indicated that primary treatment processes such as enhanced coagulation and RO (reverse osmosis) membranes were related to lead release through their effects on water quality. Chlorides are high in RO-finished water and increase lead release, while sulfates are high following enhanced coagulation and decrease lead release.
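The direction of the reported water-quality effects can be illustrated with an ordinary regression on synthetic data; the study's actual nonlinear model form and coefficients are not reproduced here, and everything below is invented for illustration:

```python
import numpy as np

# Synthetic sketch: log(lead release) rises with chloride and falls with
# sulfate and pH, mirroring the signs reported in the abstract.
rng = np.random.default_rng(3)
n = 200
chloride = rng.uniform(10.0, 150.0, n)  # mg/L
sulfate = rng.uniform(5.0, 120.0, n)    # mg/L
ph = rng.uniform(6.5, 9.0, n)

true_beta = np.array([2.0, 0.012, -0.008, -0.35])  # intercept, Cl, SO4, pH
X = np.column_stack([np.ones(n), chloride, sulfate, ph])
log_pb = X @ true_beta + rng.normal(0.0, 0.05, n)

# Ordinary least squares recovers the assumed coefficients and their signs.
beta_hat, *_ = np.linalg.lstsq(X, log_pb, rcond=None)
print(np.round(beta_hat, 3))
```

The recovered signs (positive for chloride, negative for sulfate and pH) are the qualitative content of such a water-quality model; the real study fitted a nonlinear form to pilot data rather than a log-linear one.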

  12. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    PubMed

    Pecha, Petr; Šmídl, Václav

    2016-11-01

    A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling.
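One step of such a re-estimation is easy to sketch when only the release rate is unknown: plume concentrations are linear in the rate, so the least-squares update is closed-form. The plume shape, rate and noise level below are invented for illustration, not taken from the SGPM:

```python
import numpy as np

# For a fixed plume shape c_unit (model concentration at unit release rate),
# the least-squares source strength is q_hat = <c_unit, c_obs> / <c_unit, c_unit>.
rng = np.random.default_rng(4)
x = np.linspace(200.0, 5000.0, 40)                 # receptor distances [m]
c_unit = np.exp(-((x - 1500.0) / 800.0) ** 2) / x  # illustrative plume for q = 1

q_true = 3.7e9                                     # "true" release rate [Bq/s]
c_obs = q_true * c_unit * (1.0 + rng.normal(0.0, 0.1, x.size))

q_hat = np.dot(c_unit, c_obs) / np.dot(c_unit, c_unit)
print(q_hat)
```

Repeating this projection each time new observations arrive, while also adjusting shape parameters by nonlinear least squares, is the flavour of the stepwise re-estimation described above.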

  13. Use of MODIS Satellite Images and an Atmospheric Dust Transport Model to Evaluate Juniperus spp. Pollen Phenology and Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey C.

    2011-01-01

    Pollen can be transported great distances. Van de Water et al. reported Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model, Nickovic et al. 2001) is a verified model for atmospheric dust transport modeling using MODIS data products to identify source regions and quantities of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen release will be estimated based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergies alerts.

  14. Use of MODIS Satellite Images and an Atmospheric Dust Transport Model To Evaluate Juniperus spp. Pollen Phenology and Dispersal

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, Estelle; Huete, Alfredo; Nickovic, S.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Myers, O. B.; Budge, A. M.; hide

    2011-01-01

    Pollen can be transported great distances. Van de Water et al., 2003 reported Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model, Nickovic et al. 2001) is a verified model for atmospheric dust transport modeling using MODIS data products to identify source regions and quantities of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen release will be estimated based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergies alerts.

  15. Evaluation of near field atmospheric dispersion around nuclear facilities using a Lorentzian distribution methodology.

    PubMed

    Hawkley, Gavin

    2014-12-01

    Atmospheric dispersion modeling within the near field of a nuclear facility typically applies a building wake correction to the Gaussian plume model, whereby a point source is modeled as a plane source. The plane source results in greater near field dilution and reduces the far field effluent concentration. However, the correction does not account for the concentration profile within the near field. Receptors of interest, such as the maximally exposed individual, may exist within the near field and thus the realm of building wake effects. Furthermore, release parameters and displacement characteristics may be unknown, particularly during upset conditions. Therefore, emphasis is placed upon the need to analyze and estimate an enveloping concentration profile within the near field of a release. This investigation included the analysis of 64 air samples collected over 128 wk. Variables of importance were then derived from the measurement data, and a methodology was developed that allowed for the estimation of Lorentzian-based dispersion coefficients along the lateral axis of the near field recirculation cavity; the development of recirculation cavity boundaries; and conservative evaluation of the associated concentration profile. The results evaluated the effectiveness of the Lorentzian distribution methodology for estimating near field releases and emphasized the need to place air-monitoring stations appropriately for complete concentration characterization. Additionally, the importance of the sampling period and operational conditions were discussed to balance operational feedback and the reporting of public dose.
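A Lorentzian lateral concentration profile of the kind the methodology fits can be sketched as follows, with illustrative width and amplitude (the study's fitted dispersion coefficients are not reproduced here):

```python
import numpy as np

# Lorentzian (Cauchy-shaped) lateral profile: C(y) = C0 * g**2 / (y**2 + g**2),
# where g is a Lorentzian half-width playing the role of a lateral dispersion
# coefficient inside the recirculation cavity.
def lorentzian(y, c0, gamma):
    return c0 * gamma**2 / (y**2 + gamma**2)

y = np.linspace(-50.0, 50.0, 1001)
c = lorentzian(y, c0=1.0, gamma=10.0)

# The full width at half maximum is 2 * gamma, and the tails decay far more
# slowly than a Gaussian's, so off-axis receptors still see concentration.
fwhm = y[c >= 0.5 * c.max()].max() - y[c >= 0.5 * c.max()].min()
print(fwhm)
```

The heavy tails are the practical point: compared with a Gaussian of the same width, a Lorentzian envelope is more conservative about how far from the plume centreline air-monitoring stations still need to be placed.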

  16. Stress release model and proxy measures of earthquake size. Application to Italian seismogenic sources

    NASA Astrophysics Data System (ADS)

    Varini, Elisa; Rotondi, Renata; Basili, Roberto; Barba, Salvatore

    2016-07-01

    This study presents a series of self-correcting models that are obtained by integrating information about seismicity and fault sources in Italy. Four versions of the stress release model are analyzed, in which the evolution of the system over time is represented by the level of strain, moment, seismic energy, or energy scaled by the moment. We carry out the analysis on a regional basis by subdividing the study area into eight tectonically coherent regions. In each region, we reconstruct the seismic history and statistically evaluate the completeness of the resulting seismic catalog. Following the Bayesian paradigm, we apply Markov chain Monte Carlo methods to obtain parameter estimates and a measure of their uncertainty expressed by the simulated posterior distribution. The comparison of the four models through the Bayes factor and an information criterion provides evidence (to different degrees depending on the region) in favor of the stress release model based on the energy and the scaled energy. Therefore, among the quantities considered, this turns out to be the measure of the size of an earthquake to use in stress release models. At any instant, the time to the next event turns out to follow a Gompertz distribution, with a shape parameter that depends on time through the value of the conditional intensity at that instant. In light of this result, the issue of forecasting is tackled through both retrospective and prospective approaches. Retrospectively, the forecasting procedure is carried out on the occurrence times of the events recorded in each region, to determine whether the stress release model reproduces the observations used in the estimation procedure. Prospectively, the estimates of the time to the next event are compared with the dates of the earthquakes that occurred after the end of the learning catalog, in the 2003-2012 decade.
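
    The self-correcting (stress release) model analyzed above can be sketched as follows: stress accumulates linearly at rate rho, each event releases its own size from the accumulated level, and the conditional intensity is exponential in the current level. The simulation below uses Ogata-style thinning; parameter names and the exponential event-size distribution are illustrative assumptions, not the authors' regional implementation.

    ```python
    import math
    import random

    def intensity(t, alpha, beta, rho, events):
        """Conditional intensity of a stress release process:
        lambda(t) = exp(alpha + beta * (rho * t - S(t))),
        where S(t) is the total size released by events before t."""
        released = sum(size for tau, size in events if tau < t)
        return math.exp(alpha + beta * (rho * t - released))

    def simulate(alpha, beta, rho, mean_size, horizon, dt=1.0, seed=1):
        """Simulate event times by thinning: between events the intensity
        only grows, so intensity(t + dt) bounds it on [t, t + dt]."""
        rng = random.Random(seed)
        events, t = [], 0.0
        while t < horizon:
            lam_bar = intensity(t + dt, alpha, beta, rho, events)
            w = rng.expovariate(lam_bar)
            if w > dt:          # no candidate event in this window
                t += dt
                continue
            t += w
            if rng.random() < intensity(t, alpha, beta, rho, events) / lam_bar:
                events.append((t, rng.expovariate(1.0 / mean_size)))
        return events
    ```

    Each accepted event drops the intensity, reproducing the model's self-correcting behaviour: hazard builds between earthquakes and falls after each one.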

  17. The relationship between press release and newspaper coverage of tobacco-related issues in South Korea.

    PubMed

    Cho, Kyung Sook; Yoon, Jangho

    2017-08-01

    This study investigates the association between press releases and news media responses on tobacco-related issues in South Korea. We retrieved 231 tobacco-related newspaper articles from all major dailies throughout the year 2005. In total, 37 press releases on tobacco-related issues and policies published by the Korea Ministry of Health and Welfare were obtained from the Ministry website. Content analysis and appropriate statistical tests were performed. Results from our content analysis suggest that producing more press releases on tobacco-related issues may result in a greater volume of newspaper articles, and that a press release on a new topical issue may bring about more intense media coverage. Findings also show that when Korean newspaper articles overall held less favorable views of tobacco-related policies and programs in 2005, taxation was the most frequent theme with a non-positive opinion. Findings from our multivariate logistic regression models imply that a newspaper article with a source press release, especially one about a new topical issue, is more likely than an article without a source press release to discuss tobacco-related issues positively. Our findings suggest that press releases may serve as an effective media strategy for reaching the public by disseminating tobacco-control efforts and policies.

  18. Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Nathan C.; Gauntt, Randall O.

    Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improved accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. To do this, a forensic approach is used in which available plant data and release timings inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events, and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from the blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases. In particular, using the source terms developed by MELCOR as input to the MACCS software code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.

  19. Non-domestic phosphorus release in rivers during low-flow: Mechanisms and implications for sources identification

    NASA Astrophysics Data System (ADS)

    Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael

    2018-05-01

    A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow-independent point source emissions (mainly of domestic and industrial origin) and flow-dependent diffuse source emissions (mainly of agricultural origin). Hence, rivers dominated by point sources will exhibit their highest P concentration during low-flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit their highest P concentration during high-flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how the point source contribution may have been overestimated in previous studies because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data on SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns, opposite to that of NO3. We hypothesise that reductive dissolution of Fe oxyhydroxides might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
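
    A load apportionment model of the kind discussed above typically writes river concentration as the sum of a point-source term that is diluted as flow rises and a diffuse term that grows with flow. The sketch below assumes one common two-term power-law form in concentration units; the exponents and the attribution function are illustrative, not the study's fitted model.

    ```python
    def lam_concentration(q, a, b, c, d):
        """Load apportionment model in concentration form:
        C(Q) = a * Q**(b - 1) + c * Q**(d - 1).
        First term: point sources (b <= 1, diluted as flow rises).
        Second term: diffuse sources (d > 1, mobilised as flow rises)."""
        return a * q ** (b - 1.0) + c * q ** (d - 1.0)

    def point_share(q, a, b, c, d):
        """Fraction of the concentration attributed to point sources at flow q."""
        return a * q ** (b - 1.0) / lam_concentration(q, a, b, c, d)
    ```

    The study's caveat follows directly from this structure: any biogeochemical process that raises concentration at low flow inflates the fitted point-source term, because the model has no other term that behaves that way.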

  20. Escherichia coli Survival in, and Release from, White-Tailed Deer Feces

    PubMed Central

    Fry, Jessica; Ives, Rebecca L.; Rose, Joan B.

    2014-01-01

    White-tailed deer are an important reservoir for pathogens that can contribute a large portion of microbial pollution in fragmented agricultural and forest landscapes. The scarcity of experimental data on the survival of microorganisms in, and release from, deer feces makes prediction of their fate and transport less reliable and development of efficient strategies for environmental protection more difficult. The goal of this study was to estimate parameters for modeling Escherichia coli survival in and release from deer (Odocoileus virginianus) feces. Our objectives were as follows: (i) to measure survival of E. coli in deer pellets at different temperatures, (ii) to measure kinetics of E. coli release from deer pellets at different rainfall intensities, and (iii) to estimate parameters of models describing survival and release of microorganisms from deer feces. Laboratory experiments were conducted to study E. coli survival in deer pellets at three temperatures and to estimate parameters of Chick's exponential model with temperature correction based on the Arrhenius equation. Kinetics of E. coli release from deer pellets were measured at two rainfall intensities and used to derive the parameters of the Bradford-Schijven model of bacterial release. The results showed that parameters of the survival and release models obtained for E. coli in this study substantially differed from those obtained using other source materials, e.g., feces of domestic animals and manures. This emphasizes the necessity of comprehensive studies of the survival of naturally occurring populations of microorganisms in, and release from, wildlife animal feces in order to achieve better predictions of microbial fate and transport in fragmented agricultural and forest landscapes. PMID:25480751
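
    The survival model named above can be sketched as follows: Chick's law gives first-order exponential die-off, and an Arrhenius factor adjusts the rate constant for temperature. The parameter names and the rate-ratio form of the correction are illustrative assumptions, not the study's fitted values.

    ```python
    import math

    def arrhenius_rate(k_ref, t_kelvin, t_ref_kelvin, ea, r=8.314):
        """First-order die-off rate corrected for temperature via the
        Arrhenius equation, relative to a reference temperature:
        k(T) = k_ref * exp(-(Ea / R) * (1/T - 1/T_ref))."""
        return k_ref * math.exp(-ea / r * (1.0 / t_kelvin - 1.0 / t_ref_kelvin))

    def chick_survival(c0, k, t):
        """Chick's exponential inactivation: C(t) = C0 * exp(-k * t)."""
        return c0 * math.exp(-k * t)
    ```

    With an activation energy Ea > 0, the corrected rate falls below the reference rate at colder temperatures and rises above it at warmer ones, matching the observed faster die-off in warm conditions.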

  1. Observational clues to the energy release process in impulsive solar bursts

    NASA Technical Reports Server (NTRS)

    Batchelor, David

    1990-01-01

    The nature of the energy release process that produces impulsive bursts of hard X-rays and microwaves during solar flares is discussed, based on new evidence obtained using the method of Crannell et al. (1978). It is shown that the hard X-ray spectral index gamma is negatively correlated with the microwave peak frequency, suggesting a common source for the microwaves and X-rays. The thermal and nonthermal models are compared. It is found that the most straightforward explanations for burst time behavior are shock-wave particle acceleration in the nonthermal model and thermal conduction fronts in the thermal model.

  2. Emergency Preparedness technology support to the Health and Safety Executive (HSE), Nuclear Installations Inspectorate (NII) of the United Kingdom. Appendix A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Kula, K.R.

    1994-03-01

    The Nuclear Installations Inspectorate (NII) of the United Kingdom (UK) suggested the use of an accident progression logic model method developed by Westinghouse Savannah River Company (WSRC) and Science Applications International Corporation (SAIC) for K Reactor to predict the magnitude and timing of radioactivity releases (the source term) based on an advanced logic model methodology. Predicted releases are output from the personal computer-based model in a level-of-confidence format. Additional technical discussions eventually led to a request from the NII to develop a proposal for assembling a similar technology to predict source terms for the UK's advanced gas-cooled reactor (AGR) type. To respond to this request, WSRC is submitting a proposal to provide contractual assistance as specified in the Scope of Work. The work will produce, document, and transfer technology associated with a Decision-Oriented Source Term Estimator for Emergency Preparedness (DOSE-EP) for the NII to apply to AGRs in the United Kingdom. This document, Appendix A, is part of that proposal.

  3. A Novel Approach to model EPIC variable background

    NASA Astrophysics Data System (ADS)

    Marelli, M.; De Luca, A.; Salvetti, D.; Belfiore, A.

    2017-10-01

    One of the main aims of the EXTraS (Exploring the X-ray Transient and variable Sky) project is to characterise the variability of serendipitous XMM-Newton sources within each single observation. Unfortunately, 164 Ms of the 774 Ms of cumulative exposure considered (21%) are badly affected by soft proton flares, hampering any classical analysis of field sources. De facto, the latest releases of the 3XMM catalog, as well as most analyses in the literature, simply exclude these 'high background' periods. We implemented a novel, SAS-independent approach to produce background-subtracted light curves, which can treat the case of very faint sources and very bright proton flares. EXTraS light curves of 3XMM-DR5 sources will soon be released to the community, together with new tools we are developing.

  4. Modeling and mitigation of denitrification 'woodchip' bioreactor phosphorus releases during treatment of aquaculture wastewater

    USDA-ARS?s Scientific Manuscript database

    Denitrification 'woodchip' bioreactors designed to remove nitrate from agricultural waters may either be phosphorus sources or sinks. A 24 d batch test showed woodchip leaching is an important source of phosphorus during bioreactor start-up, with a leaching potential of approximately 20-30 mg P per ...

  5. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the precise estimation of the unknown model parameters governing the underlying dynamics of a physical system. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems; sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the online-arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to observable data must be found.
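
    The basic ABC rejection step underlying the sequential scheme described above can be sketched as follows, with a toy forward model standing in for a real dispersion code. All function names, the prior, and the distance threshold are illustrative assumptions.

    ```python
    import math
    import random

    def predict(source_x, source_y, rate, sensors):
        """Toy forward model: concentration decays with squared distance
        from the source (a stand-in for a full dispersion model)."""
        return [rate / (1.0 + (sx - source_x) ** 2 + (sy - source_y) ** 2)
                for sx, sy in sensors]

    def abc_rejection(observed, sensors, prior, eps, n_draws, seed=7):
        """ABC rejection: draw candidate source parameters from the prior
        and keep draws whose simulated sensor readings lie within eps
        (Euclidean distance) of the observations."""
        rng = random.Random(seed)
        accepted = []
        for _ in range(n_draws):
            x, y, q = prior(rng)
            sim = predict(x, y, q, sensors)
            dist = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, observed)))
            if dist < eps:
                accepted.append((x, y, q))
        return accepted
    ```

    The accepted draws approximate the posterior over source location and release rate; the sequential variant reuses them as the starting population for a tighter eps.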

  6. Myocardial Drug Distribution Generated from Local Epicardial Application: Potential Impact of Cardiac Capillary Perfusion in a Swine Model Using Epinephrine

    PubMed Central

    Maslov, Mikhail Y.; Edelman, Elazer R.; Pezone, Matthew J.; Wei, Abraham E.; Wakim, Matthew G.; Murray, Michael R.; Tsukada, Hisashi; Gerogiannis, Iraklis S.; Groothuis, Adam; Lovich, Mark A.

    2014-01-01

    Prior studies in small mammals have shown that local epicardial application of inotropic compounds drives myocardial contractility without systemic side effects. Myocardial capillary blood flow, however, may be more significant in larger species than in small animals. We hypothesized that bulk perfusion in capillary beds of the large mammalian heart enhances drug distribution after local release, but also clears more drug from the tissue target than in small animals. Epicardial (EC) drug releasing systems were used to apply epinephrine to the anterior surface of the left heart of swine in either point-sourced or distributed configurations. Following local application or intravenous (IV) infusion at the same dose rates, hemodynamic responses, epinephrine levels in the coronary sinus and systemic circulation, and drug deposition across the ventricular wall, around the circumference and down the axis, were measured. EC delivery via point-source release generated transmural epinephrine gradients directly beneath the site of application extending into the middle third of the myocardial thickness. Gradients in drug deposition were also observed down the length of the heart and around the circumference toward the lateral wall, but not the interventricular septum. These gradients extended further than might be predicted from simple diffusion. The circumferential distribution following local epinephrine delivery from a distributed source to the entire anterior wall drove drug toward the inferior wall, further than with point-source release, but again, not to the septum. This augmented drug distribution away from the release source, down the axis of the left ventricle, and selectively towards the left heart follows the direction of capillary perfusion away from the anterior descending and circumflex arteries, suggesting a role for the coronary circulation in determining local drug deposition and clearance. 
The dominant role of the coronary vasculature is further suggested by the elevated drug levels in the coronary sinus effluent. Indeed, plasma levels, hemodynamic responses, and myocardial deposition remote from the point of release were similar following local EC or IV delivery. Therefore, the coronary vasculature shapes the pharmacokinetics of local myocardial delivery of small catecholamine drugs in large animal models. Optimal design of epicardial drug delivery systems must consider the underlying bulk capillary perfusion currents within the tissue to deliver drug to tissue targets and may favor therapeutic molecules with better potential retention in myocardial tissue. PMID:25234821

  7. Evaluating vertical concentration profile of carbon source released from slow-releasing carbon source tablets and in situ biological nitrate denitrification activity

    NASA Astrophysics Data System (ADS)

    Yeum, Y.; HAN, K.; Yoon, J.; Lee, J. H.; Song, K.; Kang, J. H.; Park, C. W.; Kwon, S.; Kim, Y.

    2017-12-01

    Slow-releasing carbon source tablets were manufactured during the design of a small-scale in situ biological denitrification system to reduce high-strength nitrate (> 30 mg N/L) from point sources such as livestock complexes. Two types of slow-releasing tablets, a precipitating tablet (PT, apparent density 2.0 g/mL) and a floating tablet (FT), were prepared to achieve a vertically even distribution of carbon source (CS) in a well and an aquifer. Hydroxypropyl methylcellulose (HPMC) was used to control the release rate, and microcrystalline cellulose pH 101 (MCC 101) was added as a binder. The #8 sand was used as a precipitation agent for the PTs, and the floating agents for the FTs were calcium carbonate and citric acid. FTs floated within 30 min and remained in the water because of the buoyancy of carbon dioxide formed during the acid-base reaction between citric acid and calcium carbonate. The longevities of PTs with 300 mg of HPMC and FTs with 400 mg of HPMC were 25.4 days and 37.3 days, respectively. We assessed the vertical CS profile in a continuously flowing physical aquifer model (release test, RT) and its efficiency for biological nitrate denitrification (denitrification test, DT). During the RT, PTs, FTs, and a tracer (1 mg rhodamine B/L) were initially injected into a well of the physical aquifer model (PAM), and concentrations of CS and the tracer were monitored along the streamline in the PAM to evaluate the vertical profile of CS. The DT repeated the RT, except that a solution containing 30 mg N/L was continuously injected into the PAM to evaluate biological denitrification activity. In the RT, temporal profiles of CS were similar at three different depths in the monitoring wells, suggesting that simultaneous addition of PTs and FTs is suitable for achieving a vertically even distribution of the CS in the injection well and the aquifer. In the DT, a similar CS profile was detected in the injection well, and nitrate was biologically denitrified downstream of the injection well. In conclusion, the addition of PTs and FTs to a well under natural-gradient conditions may be an effective means of remediating high-strength nitrate in groundwater.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, Benjamin; Jones, Jeremy B.; Schuur, Edward A.

    As the permafrost region warms, its large organic carbon pool will be increasingly vulnerable to decomposition, combustion, and hydrologic export. Models predict that some portion of this release will be offset by increased production of Arctic and boreal biomass; however, the lack of robust estimates of net carbon balance increases the risk of further overshooting international emissions targets. Precise empirical or model-based assessments of the critical factors driving carbon balance are unlikely in the near future, so to address this gap, we present estimates from 98 permafrost-region experts of the response of biomass, wildfire, and hydrologic carbon flux to climate change. Results suggest that contrary to model projections, total permafrost-region biomass could decrease due to water stress and disturbance, factors that are not adequately incorporated in current models. Assessments indicate that end-of-the-century organic carbon release from Arctic rivers and collapsing coastlines could increase by 75% while carbon loss via burning could increase four-fold. Experts identified water balance, shifts in vegetation community, and permafrost degradation as the key sources of uncertainty in predicting future system response. In combination with previous findings, results suggest the permafrost region will become a carbon source to the atmosphere by 2100 regardless of warming scenario but that 65%-85% of permafrost carbon release can still be avoided if human emissions are actively reduced.

  9. Predicting seed dispersal using a Lagrangian Stochastic Model

    NASA Astrophysics Data System (ADS)

    Hsieh, C. I.; Chen, C. W.; Su, M. D.

    2017-12-01

    Migration and expansion of a plant species are determined by long-distance dispersal (LDD). A more sophisticated mechanistic dispersion model is needed to mimic the LDD of wind-driven seeds. This study simulated seed dispersal trajectories in canopy turbulence using a Lagrangian stochastic dispersion model under varying atmospheric stabilities, in conjunction with the effects of intermittency in the turbulent kinetic energy dissipation rate. The effects of friction velocity, seed release height, and seed terminal velocity were also studied. The results showed that both an unstable atmosphere and the inclusion of dissipation rate intermittency in the model increase seeds' LDD. The number of seeds that escape the canopy volume through dissipation intermittency increases under unstable atmospheric conditions; as a result, more seeds can be transported a greater distance. When dissipation intermittency is included under a strongly unstable atmosphere, the peak location of the dispersal kernel tends to be closer to the source; in contrast, under neutral and stable conditions, where the LDD of both is similar, the peak location lies further from the source. Higher friction velocity, higher seed release height, and lower seed terminal velocity all increase the LDD of seeds regardless of atmospheric conditions. The change in LDD due to a change in friction velocity, seed release height, or seed terminal velocity is heightened under unstable conditions.
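
    A minimal version of the Lagrangian stochastic approach described above models the turbulent vertical velocity as an Ornstein-Uhlenbeck process and tracks a single seed until it grounds. The parameterization below is a crude one-dimensional sketch under assumed parameter names, not the study's canopy-turbulence model.

    ```python
    import random

    def dispersal_distance(u_mean, sigma_w, tau, v_t, release_h,
                           dt=0.05, rng=None):
        """Single-seed trajectory: vertical velocity w follows an
        Ornstein-Uhlenbeck process (timescale tau, std sigma_w); the seed
        settles at terminal velocity v_t while advected downwind at u_mean.
        Returns the downwind distance travelled before grounding."""
        rng = rng or random.Random()
        z, x, w = release_h, 0.0, 0.0
        while z > 0.0:
            # OU update for the turbulent vertical velocity fluctuation
            w += (-w / tau) * dt + sigma_w * (2.0 * dt / tau) ** 0.5 * rng.gauss(0.0, 1.0)
            z += (w - v_t) * dt
            x += u_mean * dt
        return x
    ```

    With sigma_w = 0 the model reduces to ballistic settling (distance ~ u_mean * h / v_t); turbulence spreads the distances and produces the long tail responsible for LDD, which is what the dissipation-intermittency terms further enhance.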

  10. An experimental study of the impact of trees and urban form on the turbulent dispersion of heavy particles from near ground point sources

    NASA Astrophysics Data System (ADS)

    Stoll, R., II; Christen, A.; Mahaffee, W.; Salesky, S.; Therias, A.; Caitlin, S.

    2016-12-01

    Pollution in the form of small particles has a strong impact on a wide variety of urban processes that play an important role in the function of urban ecosystems and, ultimately, human health and well-being. As a result, a substantial body of research exists on the sources, sinks, and transport characteristics of urban particulate matter. Most of the existing experimental work examining point sources employed gases (e.g., SF6) as the working medium, and the focus of most studies has been on the dispersion of pollutants far from the source location. Here, our focus is on the turbulent dispersion of heavy particles in the near-source region of a suburban neighborhood. To this end, we conducted a series of heavy particle releases in the Sunset neighborhood of Vancouver, Canada during June 2017. The particles were dispersed from a near-ground point source at two different locations. The Sunset neighborhood is composed mostly of single-dwelling detached houses and has been used in numerous previous urban studies. One of the release points was just upwind of a four-way intersection and the other in the middle of a contiguous block of houses; each location had a significant density of trees. A minimum of four successful release events were conducted at each site. During each release, fluorescing microparticles (mean diameter approx. 30 microns) were released from ultrasonic atomizer nozzles for approximately 20 minutes. The particles were sampled at 50 locations (1.5 m height) in the area downwind of the release, over distances from 1 to 15 times the mean canopy height (~6 m), using rotating impaction traps. In addition to the 50 sampler locations, instantaneous wind velocities were measured with eight sonic anemometers distributed horizontally and vertically throughout the release area. The resulting particle plume distributions indicate a strong impact of local urban form in the near-source region and a high degree of sensitivity to the local wind direction measured by the sonic anemometers. In addition to presenting the experimental data, initial comparisons to a Lagrangian particle dispersion model driven by a mass-consistent diagnostic wind field will be presented.

  12. Characterizing use-phase chemical releases, fate, and disposal for modeling longitudinal human exposures to consumer products

    EPA Science Inventory

    The US EPA’s Human Exposure Model (HEM) is an integrated modeling system to estimate human exposure to chemicals in household consumer products. HEM consists of multiple modules, which may be run either together, or independently. The Source-to-Dose (S2D) module in HEM use...

  13. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate-to-large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These required basis model parameters have to be given, assumed, or inferred in a previous, separate, and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The implementation allows for fast forward calculation of full seismograms and surface deformation, and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g., the tight constraint on source location from geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited number of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for the geodetic data to account for correlated data noise, and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door to Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic, and joint source models have already been reported, mostly without any model parameter uncertainty estimation. We show that the variability of these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
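
    The covariance-based weighting described above amounts to evaluating the misfit r^T C^-1 r with a full data error variance-covariance matrix C. A minimal two-datum sketch (illustrative only; a real application would use the full geodetic covariance matrix):

    ```python
    def weighted_misfit(residuals, cov):
        """Data misfit r^T C^-1 r for a 2x2 error variance-covariance
        matrix C, as used to weight data with correlated noise."""
        (a, b), (c, d) = cov
        det = a * d - b * c
        inv = [[d / det, -b / det], [-c / det, a / det]]
        r0, r1 = residuals
        return (r0 * (inv[0][0] * r0 + inv[0][1] * r1)
                + r1 * (inv[1][0] * r0 + inv[1][1] * r1))
    ```

    With positively correlated data noise, a pair of equal residuals is down-weighted relative to the uncorrelated case, which is exactly why ignoring noise correlation overstates the information content of dense InSAR data.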

  14. Groundwater drainage from fissures as a source for lahars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, P. J.; Valentine, G. A.; Stauffer, P. H.

    One mechanism for generating lahars at volcanoes experiencing unrest is the disruption of internal aquifers. These disruptions can trigger releases of large quantities of groundwater. An example of such aquifer disruption occurred at Nevado del Huila Volcano, Colombia, during February and April 2007, when large fractures formed across the summit area of the volcano and lahars were emitted from them. Previous work interpreted that lahar volumes could not be accounted for by melted glacial snow or precipitation, and by elimination suggested that the primary water source was groundwater. Conceptual models have been developed for perched, confined aquifers that have been heated and pressurized by magma intrusions, followed by sudden pressure release and water emission during fracture formation. In this paper, we consider an alternative end member wherein water release from large fissures at volcanoes is driven by simple gravity drainage. We apply numerical modeling to quantify water discharge from the porous medium surrounding a fissure with a low-elevation free exit. If a long fracture with high vertical extent (on the order of hundreds of meters) intersects a highly connected saturated porous medium, large volumes (on order 10³ m³/m of crack length) of water may be released within tens of minutes. The drainage rates from the model may be adequate to account for the Nevado del Huila events if the medium surrounding the crack contains a large volume of water and has high horizontal permeability. Finally, this simple but poorly understood mechanism can present a hazard on its own or compound other processes releasing water from volcanoes.

  15. Groundwater drainage from fissures as a source for lahars

    NASA Astrophysics Data System (ADS)

    Johnson, P. J.; Valentine, G. A.; Stauffer, P. H.; Lowry, C. S.; Sonder, I.; Pulgarín, B. A.; Santacoloma, C. C.; Agudelo, A.

    2018-04-01

    One mechanism for generating lahars at volcanoes experiencing unrest is the disruption of internal aquifers. These disruptions can trigger releases of large quantities of groundwater. An example of such aquifer disruption occurred at Nevado del Huila Volcano, Colombia, during February and April 2007, when large fractures formed across the summit area of the volcano and lahars were emitted from them. Previous work interpreted that lahar volumes could not be accounted for by melted glacial snow or precipitation, and by elimination suggested that the primary water source was groundwater. Conceptual models have been developed for perched, confined aquifers that have been heated and pressurized by magma intrusions, followed by sudden pressure release and water emission during fracture formation. We consider an alternative end member wherein water release from large fissures at volcanoes is driven by simple gravity drainage. We apply numerical modeling to quantify water discharge from the porous medium surrounding a fissure with a low-elevation free exit. If a long fracture with high vertical extent (on the order of hundreds of meters) intersects a highly connected saturated porous medium, large volumes (on order 10³ m³/m of crack length) of water may be released within tens of minutes. The drainage rates from the model may be adequate to account for the Nevado del Huila events if the medium surrounding the crack contains a large volume of water and has high horizontal permeability. This simple but poorly understood mechanism can present a hazard on its own or compound other processes releasing water from volcanoes.

  16. Groundwater drainage from fissures as a source for lahars

    DOE PAGES

    Johnson, P. J.; Valentine, G. A.; Stauffer, P. H.; ...

    2018-03-22

    One mechanism for generating lahars at volcanoes experiencing unrest is the disruption of internal aquifers. These disruptions can trigger releases of large quantities of groundwater. An example of such aquifer disruption occurred at Nevado del Huila Volcano, Colombia, during February and April 2007, when large fractures formed across the summit area of the volcano and lahars were emitted from them. Previous work interpreted that lahar volumes could not be accounted for by melted glacial snow or precipitation, and by elimination suggested that the primary water source was groundwater. Conceptual models have been developed for perched, confined aquifers that have been heated and pressurized by magma intrusions, followed by sudden pressure release and water emission during fracture formation. In this paper, we consider an alternative end member wherein water release from large fissures at volcanoes is driven by simple gravity drainage. We apply numerical modeling to quantify water discharge from the porous medium surrounding a fissure with a low-elevation free exit. If a long fracture with high vertical extent (on the order of hundreds of meters) intersects a highly connected saturated porous medium, large volumes (on order 10³ m³/m of crack length) of water may be released within tens of minutes. The drainage rates from the model may be adequate to account for the Nevado del Huila events if the medium surrounding the crack contains a large volume of water and has high horizontal permeability. Finally, this simple but poorly understood mechanism can present a hazard on its own or compound other processes releasing water from volcanoes.

  17. Estimating Biases for Regional Methane Fluxes using Co-emitted Tracers

    NASA Astrophysics Data System (ADS)

    Bambha, R.; Safta, C.; Michelsen, H. A.; Cui, X.; Jeong, S.; Fischer, M. L.

    2017-12-01

    Methane is a powerful greenhouse gas, and the development and improvement of emissions models rely on understanding the flux of methane released from anthropogenic sources relative to releases from other sources. Increasing production of shale oil and gas in the mid-latitudes and the associated fugitive emissions are suspected to be a dominant contributor to the global methane increase. Landfills, sewage treatment, and other sources may be dominant in some parts of the U.S. Large discrepancies between emissions models present a great challenge to reconciling atmospheric measurements with inventory-based estimates for various emissions sectors. Current approaches for measuring regional emissions yield highly uncertain estimates because of the sparsity of measurement sites and the presence of multiple simultaneous sources. Satellites can provide wide spatial coverage at the expense of much lower measurement precision compared to ground-based instruments. Methods for effective assimilation of data from a variety of sources are critically needed to perform regional GHG attribution with existing measurements and to determine how to structure future measurement systems, including satellites. We present a hierarchical Bayesian framework to estimate surface methane fluxes based on atmospheric concentration measurements and a Lagrangian transport model (Weather Research and Forecasting and Stochastic Time-Inverted Lagrangian Transport). Structural errors in the transport model are estimated with the help of co-emitted tracer species with well-defined decay rates. We conduct the analyses at regional scales based on similar geographical and meteorological conditions. For regions where data are informative, we further refine flux estimates by emissions sector and infer spatially and temporally varying biases parameterized as spectral random field representations.

  18. NEW IMPROVEMENTS TO MFIRE TO ENHANCE FIRE MODELING CAPABILITIES.

    PubMed

    Zhou, L; Smith, A C; Yuan, L

    2016-06-01

    NIOSH's mine fire simulation program, MFIRE, is widely accepted as a standard for assessing and predicting the impact of a fire on the mine ventilation system and the spread of fire contaminants in coal and metal/nonmetal mines, and it has been used by U.S. and international companies to simulate fires for planning and response purposes. MFIRE is a dynamic, transient-state, mine ventilation network simulation program that performs normal planning calculations. It can also be used to analyze ventilation networks under thermal and mechanical influences, such as changes in ventilation parameters, external influences such as changes in temperature, and internal influences such as a fire. The program output can be used to analyze the effects of these influences on the ventilation system. Since its original development by Michigan Technological University for the Bureau of Mines in the 1970s, several updates have been released over the years. In 2012, NIOSH completed a major redesign and restructuring of the program with the release of MFIRE 3.0. MFIRE's outdated FORTRAN programming language was replaced with object-oriented C++, packaged into a dynamic link library (DLL). However, the MFIRE 3.0 release made no attempt to change or improve the fire modeling algorithms inherited from its predecessor, MFIRE 2.20. This paper reports on improvements that have been made to the fire modeling capabilities of MFIRE 3.0 since its release. These improvements include the addition of fire source models based on the t-squared fire and heat release rate curve data files, the addition of a moving fire source for conveyor belt fire simulations, improvement of the fire location algorithm, and the identification and prediction of smoke rollback phenomena. All the improvements discussed in this paper will be termed MFIRE 3.1 and released by NIOSH in the near future.
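
    The t-squared fire source model mentioned above follows the standard growth law Q = αt², capped at a peak heat release rate. A minimal sketch, using the conventional NFPA growth-class coefficient for a "fast" fire as an assumed value (MFIRE's own implementation is not reproduced here):

```python
# Hedged sketch of a t-squared fire growth curve, Q(t) = alpha * t^2,
# capped at a peak heat release rate. The growth coefficient below is the
# standard NFPA "fast" class value, an assumption rather than an MFIRE input.
def t_squared_hrr(t_s, alpha_kw_per_s2, q_peak_kw):
    """Heat release rate (kW) at time t_s for a capped t-squared fire."""
    return min(alpha_kw_per_s2 * t_s ** 2, q_peak_kw)

ALPHA_FAST = 0.0469  # kW/s^2, NFPA "fast" growth class
# time for a fast fire to reach the 1055 kW (1000 Btu/s) reference size
t_ref = (1055.0 / ALPHA_FAST) ** 0.5  # roughly 150 s
```

    A heat-release-rate curve data file, as added in MFIRE 3.1, would simply replace this analytic curve with tabulated Q(t) values.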

  19. Dose assessment for various coals in the coal-fired power plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antic, D.; Sokcic-Kostic, M.

    1993-01-01

    The radiation exposure of the public in the vicinity of a coal-fired power plant has been studied. Experimental data on the uranium, thorium, and potassium content of selected coals from Serbia and Bosnia were used to calculate the release rates of natural radionuclides from the power plant. A generalized model for analyzing the radiological impact of an energy source, which includes a two-dimensional cloud model, simulates the transport of radionuclides released to the atmosphere. The inhalation dose rates are assessed for various meteorological conditions.
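
    The inhalation dose-rate step of such an assessment is, in its simplest form, a product of air concentration, breathing rate, and a nuclide-specific dose coefficient. A generic sketch with placeholder values (none of the numbers are taken from the paper):

```python
# Generic inhalation dose-rate estimate: D = C * BR * DCF.
# All numerical values below are illustrative placeholders.
def inhalation_dose_rate(conc_bq_m3, breathing_m3_h, dcf_sv_bq):
    """Dose rate (Sv/h) from an airborne activity concentration."""
    return conc_bq_m3 * breathing_m3_h * dcf_sv_bq

# e.g. 2 Bq/m^3 of a nuclide with a hypothetical DCF of 1e-6 Sv/Bq,
# and an adult breathing rate of about 1 m^3/h
d = inhalation_dose_rate(2.0, 1.0, 1e-6)  # Sv/h
```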

  20. A method for simulating the release of natural gas from the rupture of high-pressure pipelines in any terrain.

    PubMed

    Deng, Yajun; Hu, Hongbing; Yu, Bo; Sun, Dongliang; Hou, Lei; Liang, Yongtu

    2018-01-15

    The rupture of a high-pressure natural gas pipeline can pose a serious threat to human life and the environment. In this research, a method has been proposed to simulate the release of natural gas from the rupture of high-pressure pipelines in any terrain. The process of gas release from the rupture of a high-pressure pipeline is divided into three stages, namely the discharge, jet, and dispersion stages. Firstly, a discharge model is established to calculate the release rate at the orifice. Secondly, an improved jet model is proposed to obtain the parameters of the pseudo source. Thirdly, a fast-modeling method applicable to any terrain is introduced. Finally, building on these three steps, a dispersion model that can take any terrain into account is established. The dispersion scenarios of released gas in four different terrains are then studied. Moreover, the effects of pipeline pressure, pipeline diameter, wind speed and hydrogen sulfide concentration on the dispersion scenario in real terrain are systematically analyzed. The results provide significant guidance for risk assessment and contingency planning for a ruptured natural gas pipeline. Copyright © 2017. Published by Elsevier B.V.
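
    For the discharge stage, the initial release rate from a full-bore rupture is commonly estimated with the choked-flow orifice relation for an ideal gas. A sketch under that assumption; the discharge coefficient and methane gas properties below are assumed values, not those used in the paper:

```python
import math

# Choked (sonic) discharge through an orifice, ideal-gas assumption:
# mdot = Cd * A * P0 * sqrt(gamma / (Rs * T0))
#            * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))
# gamma and Rs are typical methane values; Cd = 0.85 is an assumed coefficient.
def choked_mass_flow(p0_pa, t0_k, diameter_m, gamma=1.31, rs=518.3, cd=0.85):
    """Initial release rate (kg/s) for a methane-like gas."""
    area = math.pi * (diameter_m / 2.0) ** 2
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0_pa * math.sqrt(gamma / (rs * t0_k)) * term

# hypothetical 0.5 m pipeline at 8 MPa and 288 K
mdot = choked_mass_flow(p0_pa=8e6, t0_k=288.0, diameter_m=0.5)
```

    Note the release rate is linear in stagnation pressure under choked conditions, which is why pipeline pressure is one of the sensitivity parameters studied.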

  1. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

    The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
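
    The alternating-minimization idea can be illustrated on a toy problem in which each measurement carries a multiplicative correction coefficient regularized toward 1. This is a deliberately simplified scalar version, not the paper's coefficient-matrix formulation:

```python
import numpy as np

# Toy joint estimation: y_i ≈ c_i * h_i * q, where q is the release rate,
# h_i are source-receptor factors, and c_i are per-measurement correction
# coefficients pulled toward 1 by a regularization weight lam.
# Alternating minimization: solve q with c fixed, then each c_i with q fixed.
def alternating_estimate(y, h, lam=1.0, n_iter=50):
    c = np.ones_like(y)
    q = 0.0
    for _ in range(n_iter):
        g = c * h
        q = float(g @ y) / float(g @ g)                # least-squares q update
        c = (y * h * q + lam) / ((h * q) ** 2 + lam)   # regularized c update
    return q, c

h = np.array([1.0, 2.0, 0.5, 1.5])   # hypothetical source-receptor factors
q_true = 3.0
y = h * q_true                        # noise-free synthetic measurements
q_est, c_est = alternating_estimate(y, h)
```

    The regularization toward 1 resolves the scale ambiguity between the corrections and the release rate; in the noise-free case the iteration recovers the true rate with all corrections equal to 1.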

  2. CrossWater - Modelling micropollutant loads from different sources in the Rhine basin

    NASA Astrophysics Data System (ADS)

    Moser, Andreas; Bader, Hans-Peter; Fenicia, Fabrizio; Scheidegger, Ruth; Stamm, Christian

    2016-04-01

    The pressure on rivers from micropollutants (MPs) originating from various sources is a growing environmental issue that requires political regulation. The challenges for water management are numerous, particularly in international water basins. Spatial knowledge of MP sources and of water quality is a prerequisite for an effective water quality policy. In this study we analyze the sources of MPs in the international Rhine basin in Europe and model their transport to the streams. The spatial patterns of MP loads and concentrations from different use classes are investigated with a mass flow analysis and compared to the territorial jurisdictions that shape the spatial arrangement of water management. The source area of MPs depends on the specific use of a compound. Here, we focus on i) herbicides from agricultural land use, ii) biocides from material protection on buildings and iii) human pharmaceuticals from households. The total mass of MPs available for release to the stream network is estimated from statistical application and consumption data. The available mass of MPs is spatially distributed to the catchment areas based on GIS data of agricultural land use, vector data of buildings and wastewater treatment plant (WWTP) locations, respectively. The actual release of MPs to the stream network is calculated with empirical loss rates related to river discharge for agricultural herbicides and to precipitation for biocides. For the pharmaceuticals, the release is coupled to human metabolism rates and elimination rates in WWTPs. The released loads from the catchments are propagated downstream with hydraulic routing. Water flow, transport and fate of the substances are simulated within linked river reaches. Time series of herbicide concentrations and loads are simulated for the main rivers in the Rhine basin. Accordingly, the loads from the primary catchments are aggregated and constitute lateral or upstream input to the simulated river reaches.
Pronounced differences in the spatial patterns of concentrations in the aquatic system are observed between the different compounds. The comparison with measurements from monitoring stations along the Rhine yields satisfactory results.
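
    For the pharmaceutical pathway, the released load per catchment reduces to consumption times an excretion fraction times the fraction passing the WWTP. A sketch with placeholder fractions (not CrossWater's calibrated values):

```python
# Pharmaceutical load reaching the stream from one catchment:
# load = consumption * excreted_fraction * (1 - wwtp_removal_fraction)
# All fractions below are illustrative placeholders, not CrossWater values.
def stream_load_kg(consumption_kg, excreted_frac, wwtp_removal_frac):
    """Compound mass (kg) released to the stream network per time period."""
    return consumption_kg * excreted_frac * (1.0 - wwtp_removal_frac)

# 100 kg consumed, 60% excreted unchanged, 30% removed in the WWTP
load = stream_load_kg(consumption_kg=100.0, excreted_frac=0.6,
                      wwtp_removal_frac=0.3)
```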

  3. The Future of ECHO: Evaluating Open Source Possibilities

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Gilman, J.; Baynes, K.; Mitchell, A. E.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format agnostic metadata repository supporting over 3000 collections and 100M science granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human facing search and order web application named Reverb. ECHO processes hundreds of orders, tens of thousands of searches, and 1-2M ingest actions each week. As ECHO's holdings, metadata format support, and visibility have increased, the ECHO team has received requests by non-NASA entities for copies of ECHO that can be run locally against their data holdings. ESDIS and the ECHO Team have begun investigations into various deployment and Open Sourcing models that can balance the real constraints faced by the ECHO project with the benefits of providing ECHO capabilities to a broader set of users and providers. This talk will discuss several release and Open Source models being investigated by the ECHO team along with the impacts those models are expected to have on the project. We discuss: - Addressing complex deployment or setup issues for potential users - Models of vetting code contributions - Balancing external (public) user requests versus our primary partners - Preparing project code for public release, including navigating licensing issues related to leveraged libraries - Dealing with non-free project dependencies such as commercial databases - Dealing with sensitive aspects of project code such as database passwords, authentication approaches, security through obscurity, etc. - Ongoing support for the released code including increased testing demands, bug fixes, security fixes, and new features.

  4. The Impact of Pollution Prevention on Toxic Environmental Releases from U.S. Manufacturing Facilities.

    PubMed

    Ranson, Matthew; Cox, Brendan; Keenan, Cheryl; Teitelbaum, Daniel

    2015-11-03

    Between 1991 and 2012, the facilities that reported to the U.S. Environmental Protection Agency's Toxic Release Inventory (TRI) Program conducted 370,000 source reduction projects. We use this data set to conduct the first quasi-experimental retrospective evaluation of how implementing a source reduction (pollution prevention) project affects the quantity of toxic chemicals released to the environment by an average industrial facility. We use a differences-in-differences methodology, which measures how implementing a source reduction project affects a facility's releases of targeted chemicals, relative to releases of (a) other untargeted chemicals from the same facility, or (b) the same chemical from other facilities in the same industry. We find that the average source reduction project causes a 9-16% decrease in releases of targeted chemicals in the year of implementation. Source reduction techniques vary in effectiveness: for example, raw material modification causes a large decrease in releases, while inventory control has no detectable effect. Our analysis suggests that in aggregate, the source reduction projects carried out in the U.S. since 1991 have prevented between 5 and 14 billion pounds of toxic releases.
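
    The differences-in-differences estimator behind these results compares the change in releases of targeted chemicals with the change in a comparison series (untargeted chemicals at the same facility, or the same chemical at other facilities). A minimal numeric sketch with hypothetical release totals:

```python
# Differences-in-differences:
#   effect = (treated_post - treated_pre) - (control_post - control_pre)
# The release quantities below are hypothetical, for illustration only.
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Change in releases attributable to the source reduction project."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Targeted chemical drops 100 -> 85 pounds; the untargeted comparison
# series drifts 100 -> 98 over the same period.
effect = diff_in_diff(100.0, 85.0, 100.0, 98.0)  # -13.0 pounds attributable
```

    The comparison series absorbs facility-wide or industry-wide trends, so only the differential change is attributed to the project.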

  5. ANALYTIC ELEMENT MODELING FOR SOURCE WATER ASSESSMENTS OF PUBLIC WATER SUPPLY WELLS: CASE STUDIES IN GLACIAL OUTWASH AND BASIN-AND-RANGE

    EPA Science Inventory

    Over the last 10 years the EPA has invested in analytic elements as a computational method used in public domain software supporting capture zone delineation for source water assessments and wellhead protection. The current release is called WhAEM2000 (wellhead analytic element ...

  6. Escherichia coli survival in, and release from, white-tailed deer feces.

    PubMed

    Guber, Andrey K; Fry, Jessica; Ives, Rebecca L; Rose, Joan B

    2015-02-01

    White-tailed deer are an important reservoir for pathogens that can contribute a large portion of microbial pollution in fragmented agricultural and forest landscapes. The scarcity of experimental data on the survival of microorganisms in, and release from, deer feces makes prediction of their fate and transport less reliable and the development of efficient strategies for environmental protection more difficult. The goal of this study was to estimate parameters for modeling Escherichia coli survival in and release from deer (Odocoileus virginianus) feces. Our objectives were as follows: (i) to measure survival of E. coli in deer pellets at different temperatures, (ii) to measure kinetics of E. coli release from deer pellets at different rainfall intensities, and (iii) to estimate parameters of models describing survival and release of microorganisms from deer feces. Laboratory experiments were conducted to study E. coli survival in deer pellets at three temperatures and to estimate parameters of Chick's exponential model with a temperature correction based on the Arrhenius equation. Kinetics of E. coli release from deer pellets were measured at two rainfall intensities and used to derive the parameters of the Bradford-Schijven model of bacterial release. The results showed that the parameters of the survival and release models obtained for E. coli in this study differed substantially from those obtained using other source materials, e.g., feces of domestic animals and manures. This emphasizes the necessity of comprehensive studies of survival of naturally occurring populations of microorganisms in, and release from, wildlife animal feces in order to achieve better predictions of microbial fate and transport in fragmented agricultural and forest landscapes. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
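
    Chick's exponential survival model with an Arrhenius-type temperature correction, as used above, can be sketched as follows. The rate constant, reference temperature, and activation energy are illustrative placeholders, not the fitted values from the deer-feces experiments:

```python
import math

# Chick's law: C(t) = C0 * exp(-k(T) * t), with an Arrhenius temperature
# correction k(T) = k_ref * exp(-(Ea / R) * (1/T - 1/T_ref)).
# Parameter values below are illustrative, not the study's fitted values.
R_GAS = 8.314  # J/(mol K)

def decay_rate(temp_k, k_ref=0.05, t_ref_k=293.15, ea=80e3):
    """First-order die-off rate (1/day) at temperature temp_k."""
    return k_ref * math.exp(-(ea / R_GAS) * (1.0 / temp_k - 1.0 / t_ref_k))

def survivors(c0, t_days, temp_k):
    """Surviving E. coli concentration after t_days at temp_k."""
    return c0 * math.exp(-decay_rate(temp_k) * t_days)
```

    With this sign convention, a positive activation energy makes die-off faster at higher temperature, consistent with the usual interpretation of the Arrhenius correction.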

  7. High-frequency source radiation during the 2011 Tohoku-Oki earthquake, Japan, inferred from KiK-net strong-motion seismograms

    NASA Astrophysics Data System (ADS)

    Kumagai, Hiroyuki; Pulido, Nelson; Fukuyama, Eiichi; Aoi, Shin

    2013-01-01

    To investigate source processes of the 2011 Tohoku-Oki earthquake, we utilized a source location method using high-frequency (5-10 Hz) seismic amplitudes. In this method, we assumed far-field isotropic radiation of S waves, and conducted a spatial grid search to find the best-fitting source locations along the subducted slab in each successive time window. Our application of the method to the Tohoku-Oki earthquake resulted in artifact source locations at shallow depths near the trench, caused by limited station coverage and noise effects. We then assumed various source node distributions along the plate, and found that the observed seismograms were most reasonably explained when assuming deep source nodes. This result suggests that the high-frequency seismic waves were radiated at greater depths during the earthquake, a feature that is consistent with results obtained from teleseismic back-projection and strong-motion source model studies. We identified three high-frequency subevents, and compared them with the moment-rate function estimated from low-frequency seismograms. Our comparison indicated that no significant moment release occurred during the first high-frequency subevent and that the largest moment-release pulse occurred almost simultaneously with the second high-frequency subevent. We speculate that the initial slow rupture propagated bilaterally from the hypocenter toward the land and the trench. The landward subshear rupture propagation consisted of three successive high-frequency subevents. The trenchward propagation ruptured the strong asperity and released the largest moment near the trench.

  8. Bioaerosol releases from compost facilities: Evaluating passive and active source terms at a green waste facility for improved risk assessments

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Longhurst, P. J.; Smith, R.; Pollard, S. J. T.

    The passive and active release of bioaerosols during green waste composting, measured at source, is reported for a commercial composting facility in South East (SE) England as part of a research programme focused on improving risk assessments at composting facilities. Aspergillus fumigatus and actinomycetes concentrations of 9.8-36.8×10⁶ and 18.9-36.0×10⁶ cfu m⁻³, respectively, measured during the active turning of green waste compost, were typically 3-log higher than previously reported concentrations from static compost windrows. Source depletion curves constructed for A. fumigatus during compost turning and modelled using SCREEN3 suggest that bioaerosol concentrations could reduce to background concentrations of 10³ cfu m⁻³ within 100 m of this site. Authentic source term data produced from this study will help to refine the risk assessment methodologies that support improved permitting of compost facilities.

  9. Groundwater Source Identification Using Backward Fractional-Derivative Models

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Sun, H.; Zheng, C.

    2017-12-01

    The forward Fractional Advection Dispersion Equation (FADE) provides a useful model for non-Fickian transport in heterogeneous porous media. This presentation introduces the corresponding backward FADE model, to identify groundwater source location and release time. The backward method is developed from the theory of inverse problems, and the resultant backward FADE differs significantly from the traditional backward ADE because the fractional derivative is not self-adjoint and the probability density function for backward locations is highly skewed. Finally, the method is validated using tracer data from well-known field experiments.
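
    For contrast with the skewed backward-FADE densities mentioned above, the classical Fickian backward ADE yields a Gaussian backward location probability centred a distance vτ upgradient of the well. A sketch with hypothetical velocity and dispersion values:

```python
import math

# Backward location PDF for the classical 1D ADE (Fickian case): a Gaussian
# centred v*tau upgradient of the observation well x_well, with variance
# 2*D*tau. Velocity and dispersion values are illustrative assumptions.
def backward_location_pdf(x, x_well, tau, v=1.0, d=0.5):
    """Probability density that the source lies at x, a time tau before
    detection at x_well (units consistent, e.g. m, m/day, m^2/day)."""
    mu = x_well - v * tau            # most likely source location
    var = 2.0 * d * tau
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# density peaks 10 m upgradient of a well at x = 100 m after tau = 10 days
p_peak = backward_location_pdf(90.0, x_well=100.0, tau=10.0)
```

    The backward FADE replaces this symmetric Gaussian with a heavily skewed stable density, which is the key qualitative difference the abstract highlights.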

  10. Rosetta3: An Object-Oriented Software Suite for the Simulation and Design of Macromolecules

    PubMed Central

    Leaver-Fay, Andrew; Tyka, Michael; Lewis, Steven M.; Lange, Oliver F.; Thompson, James; Jacak, Ron; Kaufman, Kristian; Renfrew, P. Douglas; Smith, Colin A.; Sheffler, Will; Davis, Ian W.; Cooper, Seth; Treuille, Adrien; Mandell, Daniel J.; Richter, Florian; Ban, Yih-En Andrew; Fleishman, Sarel J.; Corn, Jacob E.; Kim, David E.; Lyskov, Sergey; Berrondo, Monica; Mentzer, Stuart; Popović, Zoran; Havranek, James J.; Karanicolas, John; Das, Rhiju; Meiler, Jens; Kortemme, Tanja; Gray, Jeffrey J.; Kuhlman, Brian; Baker, David; Bradley, Philip

    2013-01-01

    We have recently completed a full re-architecturing of the Rosetta molecular modeling program, generalizing and expanding its existing functionality. The new architecture enables the rapid prototyping of novel protocols by providing easy-to-use interfaces to powerful tools for molecular modeling. The source code of this re-architecturing has been released as Rosetta3 and is freely available for academic use. At the time of its release, it contained 470,000 lines of code. Counting currently unpublished protocols at the time of this writing, the source includes 1,285,000 lines. Its rapid growth is a testament to its ease of use. This document describes the requirements for our new architecture, justifies the design decisions, sketches out central classes, and highlights a few of the common tasks that the new software can perform. PMID:21187238

  11. Modelling of catastrophic flashing releases.

    PubMed

    Deaves, D M; Gilham, S; Mitchell, B H; Woodburn, P; Shepherd, A M

    2001-11-16

    Several low-boiling-point materials are stored in closed vessels at ambient temperature, using their own vapour pressure to maintain a liquid state. These materials are often toxic, flammable, or both, and thus any uncontrolled release can have potentially disastrous consequences. There are many ways in which an accidental release can occur, the most severe being catastrophic vessel failure. Although not the most common, this mode of failure has the potential to result in an instantaneous loss of the entire vessel inventory in the form of a rapidly expanding, two-phase, vaporising cloud. This paper provides a comprehensive review of the physical processes involved, of existing models, and of available experimental and incident data for modelling such scenarios. This review has enabled the development of an improved methodology for characterising the source conditions following catastrophic vessel failure.

  12. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples covering simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identification results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
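
    The harmony search heuristic that drives the optimization can be sketched generically. The HMCR, PAR, and bandwidth values below are common textbook defaults, not the parameter sets tested in the paper, and the simple objective function stands in for the linked MODFLOW/MT3DMS simulation:

```python
import random

# Minimal harmony search minimizing a test objective. HMCR is the probability
# of drawing a decision-variable value from harmony memory; PAR the probability
# of pitch-adjusting a memorized value; bw the adjustment bandwidth.
def harmony_search(obj, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=1):
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [obj(h) for h in mem]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                x = mem[rng.randrange(hms)][j]
                if rng.random() < par:              # pitch adjustment
                    x = min(max(x + rng.uniform(-bw, bw), lo), hi)
            else:                                   # random selection
                x = rng.uniform(lo, hi)
            new.append(x)
        c = obj(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                        # replace worst harmony
            mem[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return mem[best], costs[best]

sphere = lambda v: sum(x * x for x in v)  # stand-in for the simulation misfit
best, cost = harmony_search(sphere, dim=2, lo=-5.0, hi=5.0)
```

    In the linked model, each candidate harmony would encode source locations and release histories, and the objective would be the misfit between simulated and observed concentrations.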

  13. Enhanced biological phosphorus removal in a sequencing batch reactor using propionate as the sole carbon source.

    PubMed

    Pijuan, M; Saunders, A M; Guisasola, A; Baeza, J A; Casas, C; Blackall, L L

    2004-01-05

    An enhanced biological phosphorus removal (EBPR) system was developed in a sequencing batch reactor (SBR) using propionate as the sole carbon source. The microbial community was followed using fluorescence in situ hybridization (FISH) techniques, and Candidatus 'Accumulibacter phosphatis' was quantified from the start-up of the reactor until steady state. A series of SBR cycle studies was performed when 55% of the SBR biomass was Accumulibacter, a confirmed polyphosphate-accumulating organism (PAO), and when Candidatus 'Competibacter phosphatis', a confirmed glycogen-accumulating organism (GAO), was essentially undetectable. These experiments evaluated two different carbon sources (propionate and acetate), and in every case two different P-release rates were detected. The highest rate took place while there was volatile fatty acid (VFA) in the mixed liquor; after the VFA was depleted, a second P-release rate was observed. This second rate was very similar to the one detected in experiments performed without added VFA. A kinetic and stoichiometric model, developed as a modification of Activated Sludge Model 2 (ASM2) including glycogen economy, was fitted to the experimental profiles. The validation and calibration of this model were carried out with the cycle study experiments performed using both VFAs. The effect of pH from 6.5 to 8.0 on anaerobic P-release and VFA-uptake and aerobic P-uptake was also studied using propionate. The optimal overall working pH was around 7.5. This is the first study of the microbial community involved in EBPR developed with propionate as the sole carbon source, along with detailed process performance investigations of the propionate-utilizing PAOs. Copyright 2003 Wiley Periodicals, Inc.

  14. Antarctic icebergs melt over the Southern Ocean : Climatology and impact on sea ice

    NASA Astrophysics Data System (ADS)

    Merino, Nacho; Le Sommer, Julien; Durand, Gael; Jourdain, Nicolas C.; Madec, Gurvan; Mathiot, Pierre; Tournadre, Jean

    2016-08-01

    Recent increases in Antarctic freshwater release to the Southern Ocean are suggested to contribute to changes in water masses and sea ice. However, climate models differ in their representation of the freshwater sources. Recent improvements in the altimetry-based detection of small icebergs and in estimates of the mass loss of Antarctica may help better constrain the values of Antarctic freshwater releases. We propose a model-based seasonal climatology of iceberg melt over the Southern Ocean using state-of-the-art observed glaciological estimates of the Antarctic mass loss. An improved version of a Lagrangian iceberg model is coupled with a global, eddy-permitting ocean/sea ice model and compared to small-iceberg observations. Iceberg melt increases sea ice cover by about 10% in annual mean sea ice volume and decreases sea surface temperature over most of the Southern Ocean, but with distinctive regional patterns. Our results underline the importance of improving the representation of Antarctic freshwater sources. This can be achieved by forcing ocean/sea ice models with a climatological iceberg freshwater flux.

  15. Clathrate hydrates as possible source of episodic methane releases on Mars

    NASA Astrophysics Data System (ADS)

    Karatekin, Özgür; Gloesener, Elodie; Temel, Orkun

    2017-04-01

    Methane has been shown to vary with location and time in the Martian atmosphere, with abundances of up to tens of parts per billion by volume (ppbv). Since methane is short-lived on geological time scales, its presence implies an active, present-day source of methane that is yet to be understood. In this study we investigate the destabilization of subsurface reservoirs of clathrate hydrates as a possible geological source of methane. Clathrate hydrates are crystalline compounds constituted by cages of hydrogen-bonded water molecules, inside of which guest gas molecules are trapped. We show present-day maps of methane clathrate stability zones, in particular in the vicinity of Gale Crater, where the Sample Analysis at Mars (SAM) suite on the Curiosity rover has made in situ measurements of atmospheric methane for more than 3 years. Curiosity has observed spikes of elevated methane levels of 7 ppbv on four sequential observations over a 2-month period. The possibility of episodic releases from a subsurface clathrate source consistent with the Curiosity observations is investigated using a model of gas transport through porous Martian regolith, considering different reservoir depths. Transport of the released methane spike into the atmosphere is simulated using the PlanetWRF model.
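    The gas-transport step can be illustrated with a minimal 1D diffusion sketch through a porous regolith column; the diffusivity, depth, and boundary conditions below are assumptions for illustration, not values from the study.

```python
import numpy as np

# Minimal 1D diffusion of methane through porous regolith (values assumed):
# a clathrate reservoir at depth holds the concentration at 1 (normalized),
# while the surface/atmosphere acts as a perfect sink at 0.
D = 1e-6                       # effective diffusivity, m^2/s (assumed)
depth, n = 10.0, 101           # reservoir depth (m) and grid points
dz = depth / (n - 1)
dt = 0.4 * dz**2 / D           # explicit FTCS step, stable since r = 0.4 < 0.5
c = np.zeros(n)
c[-1] = 1.0                    # reservoir boundary at the bottom
for _ in range(50000):
    c[1:-1] += D * dt / dz**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[0], c[-1] = 0.0, 1.0     # re-impose the boundary conditions
surface_flux = D * (c[1] - c[0]) / dz   # methane flux into the atmosphere
```

    Varying `depth` in such a sketch shows how deeper reservoirs delay and smear the surface methane spike.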

  16. Validation of Operational Multiscale Environment Model With Grid Adaptivity (OMEGA).

    DTIC Science & Technology

    1995-12-01

    Center for the period of the Chernobyl Nuclear Accident. The physics of the model is tested using National Weather Service Medium Range Forecast data by...Climatology Center for the first three days following the release at the Chernobyl Nuclear Plant. A user-defined source term was developed to simulate

  17. SIMULATIONS OF AEROSOLS AND PHOTOCHEMICAL SPECIES WITH THE CMAQ PLUME-IN-GRID MODELING SYSTEM

    EPA Science Inventory

    A plume-in-grid (PinG) method has been an integral component of the CMAQ modeling system and has been designed in order to realistically simulate the relevant processes impacting pollutant concentrations in plumes released from major point sources. In particular, considerable di...

  18. Observations of disk-shaped bodies in free flight at terminal velocity

    NASA Technical Reports Server (NTRS)

    Vorreiter, J. W.; Tate, D. L.

    1973-01-01

    Ten disk-shaped models of a proposed nuclear heat source module were released from an aircraft and observed by radar. The initial launch attitude, spin rate, and mass of the models were varied. Significant differences were observed in the mode of flight and terminal velocity among models of different mass and launch attitudes. The data were analyzed to yield lift and drag coefficients as a function of Reynolds number. The total sea-level velocity of the models was found to be well correlated as a function of mass per unit frontal area. The demonstrated terminal velocity of the modular disk heat source, about 27 m/sec for this specific design, is only 33% of that of existing heat source designs.
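    The reported correlation with mass per unit frontal area follows from the drag-weight balance at terminal velocity; the sketch below uses an assumed drag coefficient, not one fitted to the flight data.

```python
import math

# At terminal velocity, weight balances drag: m*g = 0.5*rho*Cd*A*v^2,
# so v_t = sqrt(2*m*g / (rho*Cd*A)) scales with sqrt(m/A).
def terminal_velocity(m, A, Cd=1.1, rho=1.225, g=9.81):
    """Sea-level terminal velocity of a disk of mass m (kg), frontal area A (m^2)."""
    return math.sqrt(2.0 * m * g / (rho * Cd * A))

v = terminal_velocity(m=30.0, A=0.5)   # illustrative disk: roughly 30 m/s
```

    Doubling the mass at fixed frontal area raises the terminal velocity by a factor of sqrt(2), which is the scaling behind the reported correlation.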

  19. A Re-Engineered Software Interface and Workflow for the Open-Source SimVascular Cardiovascular Modeling Package.

    PubMed

    Lan, Hongzhi; Updegrove, Adam; Wilson, Nathan M; Maher, Gabriel D; Shadden, Shawn C; Marsden, Alison L

    2018-02-01

    Patient-specific simulation plays an important role in cardiovascular disease research, diagnosis, surgical planning and medical device design, as well as education in cardiovascular biomechanics. SimVascular is an open-source software package encompassing an entire cardiovascular modeling and simulation pipeline from image segmentation, three-dimensional (3D) solid modeling, and mesh generation, to patient-specific simulation and analysis. SimVascular is widely used for cardiovascular basic science and clinical research as well as education, following increased adoption by users and development of a GATEWAY web portal to facilitate educational access. Initial efforts of the project focused on replacing commercial packages with open-source alternatives and adding increased functionality for multiscale modeling, fluid-structure interaction (FSI), and solid modeling operations. In this paper, we introduce a major SimVascular (SV) release that includes a new graphical user interface (GUI) designed to improve user experience. Additional improvements include enhanced data/project management, interactive tools to facilitate user interaction, new boundary condition (BC) functionality, a plug-in mechanism to increase modularity, a new 3D segmentation tool, and new computer-aided design (CAD)-based solid modeling capabilities. Here, we focus on major changes to the software platform and outline features added in this new release. We also briefly describe our recent experiences using SimVascular in the classroom for bioengineering education.

  20. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of atmospheric dispersion model with improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2014-06-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Dai-ichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with atmospheric model simulations from WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and simulations from the oceanic dispersion model SEA-GEARN-FDM, both developed by the authors. A sophisticated deposition scheme, which deals with dry and fogwater depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The fallout to the ocean surface calculated by WSPEEDI-II was used as input data for the SEA-GEARN-FDM calculations. Reverse and inverse source-term estimation methods based on coupling the simulations from both models were adopted, using air dose rates and concentrations as well as sea surface concentrations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March, due to the wet venting and hydrogen explosion at Unit 1; the morning of 13 March, after the venting event at Unit 3; midnight of 14 March, when the SRV (Safety Relief Valve) at Unit 2 was opened three times; the morning and night of 15 March; and the morning of 16 March. 
According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates associated with reactor pressure changes in Units 2 and 3. The modified WSPEEDI-II simulation using the new source term reproduced local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rate obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (MLDP0, HYSPLIT, and NAME) for regional and global calculations and showed good agreement between calculated and observed air concentration and surface deposition of 137Cs in East Japan. Moreover, HYSPLIT model using the new source term also reproduced the plume arrivals at several countries abroad showing a good correlation with measured air concentration data. A large part of deposition pattern of total 131I and 137Cs in East Japan was explained by in-cloud particulate scavenging. However, for the regional scale contaminated areas, there were large uncertainties due to the overestimation of rainfall amounts and the underestimation of fogwater and drizzle depositions. The computations showed that approximately 27% of 137Cs discharged from FNPS1 deposited to the land in East Japan, mostly in forest areas.
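    The "reverse" step of such source-term estimation can be sketched in one line: a unit-release dispersion run gives the dilution factor chi/Q at a monitoring point, and the release rate follows from the observed concentration. The numbers below are purely illustrative.

```python
# Reverse estimation of a release rate from one observation (values assumed).
chi_over_q = 2.5e-6      # s/m^3: simulated concentration per unit release rate
observed = 1.2e3         # Bq/m^3: measured air concentration at the station
q_est = observed / chi_over_q   # Bq/s: inferred release rate for that period
```

    The full method repeats this over many stations and time segments and reconciles the estimates, e.g. by least squares over all observations.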

  1. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident, and more recently the Fukushima accident, highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved efficient at assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012), and some also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. Gamma dose rate measurements, by contrast, were numerous, well distributed within Japan, and offered a high temporal frequency, efficiently documenting the evolution of the contamination. However, dose rate data are not as easy to use as air sampling measurements, and until now they have not been used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions for the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. 
The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the Unit 3 explosion, for which no measurement was available. Comparisons between simulations of atmospheric dispersion and deposition based on the retrieved source term show good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved well suited to crisis management and should help improve our response in the event of a nuclear accident.
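    The inversion itself reduces to a linear least-squares problem with non-negative release rates. Below is a hedged sketch with a synthetic sensitivity matrix H; in the real method, H comes from dispersion-model runs relating each release segment to each dose-rate observation.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic inversion: dose rates y depend linearly on segment release rates q
# through a sensitivity matrix H; non-negativity is enforced on the releases.
rng = np.random.default_rng(1)
H = rng.uniform(0.0, 1.0, (50, 8))    # 50 observations, 8 release segments
q_true = np.array([0.0, 5.0, 20.0, 0.0, 0.0, 12.0, 3.0, 0.0])
y = H @ q_true + rng.normal(0.0, 0.05, 50)   # observations with noise
q_est, residual = nnls(H, y)          # non-negative least squares
```

    The non-negativity constraint is what lets such a scheme run without a priori release information, since physically impossible negative emissions are excluded from the start.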

  2. Rosin-Rammler Distributions in ANSYS Fluent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunham, Ryan Q.

    In Health Physics monitoring, particles need to be collected and tracked. One method is to predict the motion of potential health hazards with computer models. Particles released from various sources within a glove box can become a respirable health hazard if released into the area surrounding the glove box. The goal of modeling the aerosols in a glove box is to reduce the hazards associated with a leak in the glove box system. ANSYS Fluent provides a number of tools for modeling this type of environment. Particles can be released using injections into the flow path with turbulent properties. The models of particle tracks can then be used to predict paths and concentrations of particles within the flow. An attempt to understand and predict the handling of data by Fluent was made, and results were iteratively tracked. Trends in the data were studied to interpret the final results. The purpose of the study was to allow a better understanding of the operation of Fluent for aerosol modeling for future application in many fields.
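    For reference, the Rosin-Rammler form used by Fluent's injections gives the mass fraction of particles with diameter greater than d as Y_d = exp(-(d/d_bar)^n). A minimal sampling sketch, with illustrative parameters rather than values from the study:

```python
import numpy as np

# Rosin-Rammler: Y(d) = exp(-(d/d_bar)**n) is the fraction above diameter d,
# i.e. diameters follow a Weibull distribution with scale d_bar and shape n.
d_bar, n = 50e-6, 3.5                   # mean diameter (m), spread parameter
u = np.random.default_rng(0).uniform(size=100_000)
d = d_bar * (-np.log(u)) ** (1.0 / n)   # inverse-CDF sampling of diameters
frac_above = np.mean(d > d_bar)         # approaches exp(-1), about 0.368
```

    By construction, a fraction exp(-1) of the distribution lies above the mean diameter d_bar, which is a quick sanity check on any Rosin-Rammler fit.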

  3. Metal lost and found: dissipative uses and releases of copper in the United States 1975-2000.

    PubMed

    Lifset, Reid J; Eckelman, Matthew J; Harper, E M; Hausfather, Zeke; Urbina, Gonzalo

    2012-02-15

    Metals are used in a variety of ways, many of which lead to dissipative releases to the environment. Such releases are relevant from both a resource use and an environmental impact perspective. We present a historical analysis of copper dissipative releases in the United States from 1975 to 2000. We situate all dissipative releases in copper's life cycle and introduce a conceptual framework by which copper dissipative releases may be categorized in terms of intentionality of use and release. We interpret our results in the context of larger trends in production and consumption and government policies that have served as drivers of intentional copper releases from the relevant sources. Intentional copper releases are found to be both significant in quantity and highly variable. In 1975, for example, the largest source of intentional releases was the application of copper-based pesticides, which decreased by more than 50% over the next 25 years; all other sources of intentional releases increased during that period. Overall, intentional copper releases decreased by approximately 15% from 1975 to 2000. Intentional uses with unintentional releases, such as copper from roofing, increased by roughly the same percentage. Trace contaminant sources such as fossil fuel combustion, i.e., sources where both the use and the release are unintended, increased by nearly 50%. Intentional dissipative uses are equivalent to 60% of unintentional copper dissipative releases and more than five times those from trace sources. Dissipative copper releases are revealed to be modest when compared to bulk copper flows in the economy, and we introduce a metric, the dissipation index, which may be considered an economy-wide measure of resource efficiency for a particular substance. 
We assess the importance of dissipative releases in the calculation of recycling rates, concluding that the inclusion of dissipation in recycling rate calculations has a small, but discernible, influence, and should be included in such calculations. Copyright © 2011 Elsevier B.V. All rights reserved.
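    The proposed dissipation index is a simple ratio and can be stated in a few lines; the figures below are placeholders, not values from the paper.

```python
# Dissipation index: dissipative releases as a fraction of the bulk metal
# flow through the economy (illustrative numbers, Gg Cu per year).
dissipative_releases = {
    "intentional": 8.0,        # e.g. copper-based pesticides
    "unintentional": 3.0,      # e.g. corrosion of roofing
    "trace": 1.5,              # e.g. fossil fuel combustion
}
bulk_flow = 2500.0             # assumed total copper flow through the economy
dissipation_index = sum(dissipative_releases.values()) / bulk_flow
```

    A lower index indicates that a smaller share of the substance moving through the economy is lost dissipatively, i.e. higher resource efficiency.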

  4. Calculation note for an underground leak which remains underground

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, H.J.

    1997-05-20

    This calculation note supports the subsurface leak accident scenario which remains subsurface. It is assumed that a single-walled pipe carrying waste from tank 106-C ruptures, releasing the liquid waste into the soil. In this scenario, the waste does not form a surface pool, but remains subsurface. However, above the pipe is a berm, 0.762 m (2.5 ft) high and 2.44 m (8 ft) wide, and the liquid released from the leak rises into the berm. The slurry line, which transports a source term of higher activity than the sluice line, leaks into the soil at a rate of 5% of the maximum flow rate of 28.4 L/s (450 gpm) for twelve hours. The dose recipient was placed a perpendicular distance of 100 m from the pipe. Two source terms were considered, mitigated and unmitigated release, as described in section 3.4.1 of UANF-SD-WM-BIO-001, Addendum 1. The unmitigated release consisted of two parts AWF liquid and one part AWF solid. The mitigated release consisted of two parts SST liquid, eighteen parts AWF liquid, nine parts SST solid, and one part AWF solid. The isotopic breakdown of the release in these cases is presented. Two geometries were considered in preliminary investigations: a disk source and a rectangular source. Since the rectangular source results from the assumption that the contamination is wicked up into the berm, only six inches of shielding from uncontaminated earth is present, while the disk source, which remains six inches below the level of the surface of the land, is often shielded by a thick shield due to the slant path to the dose point. For this reason, only the rectangular source was considered in the final analysis. The source model was a rectangle 2.134 m (7 ft) thick, 0.6096 m (2 ft) high, and 130.899 m (131 ft) long. The top and sides of this rectangular source were covered with earth of density 1.6 g/cm^3 to a thickness of 15.24 cm (6 in). This soil is modeled as 40% void space. 
The source consisted of earth of the same density, with the void spaces filled with the liquid waste, which added 0.56 g/cm^3 to the density. The dose point was 100 m (328 ft) away from the berm in a perpendicular direction off the center. The computer code MICROSKYSHINE was used to calculate the skyshine from the source. This code calculates the exposure rate at the receptor point. The photon spectrum from 2 MeV to 0.15 MeV, obtained from ISOSHLD, was used as input, although this did not differ substantially from the results obtained from using Co, 137mBa, and 154Eu. However, this methodology allowed the bremsstrahlung contribution to be included in the skyshine calculation as well as in the direct radiation calculation.

  5. Bromine release from blowing snow and its impact on tropospheric chemistry

    NASA Astrophysics Data System (ADS)

    Griffiths, Paul; Yang, Xin; Abraham, N. Luke; Archibald, Alexander; Pyle, John

    2016-04-01

    In the last two decades, significant depletion of boundary layer ozone (ozone depletion events, ODEs) has been observed in both Arctic and Antarctic spring. ODEs are attributed to catalytic destruction by bromine radicals (Br plus BrO), especially during bromine explosion events (BEs), when high concentrations of BrO periodically occur. The source of bromine and the mechanism that sustains the high BrO levels are still the subject of study. Recent work by Pratt et al. (2013) posits Br2 production within saline snow and sea ice, which leads to sudden ODEs. Previously, Yang et al. (2008) suggested snow could provide a source of (depleted) sea-salt aerosol if wicked from the surface of ice. They suggest that rapid depletion of bromide from the aerosol will constitute a source of photochemical Bry. Given the large sea ice extent in polar regions, this may constitute a significant source of sea salt and bromine in the polar lower atmosphere. While bromine release from blowing snow is perhaps less likely to trigger sudden ODEs, it may contribute to regional-scale processes affecting ozone levels. Currently, the model parameterisation of Yang et al. assumes that rapid release of bromine occurs from fresh snow on sea ice during periods of strong wind. The parameterisation depends on an assumed sea-salt aerosol distribution generated via sublimation of the snow above the boundary layer, as well as taking into account the salinity of the snow. In this work, we draw on recent measurements by scientists from the British Antarctic Survey during a cruise aboard the Polarstern in the southern oceans. This has provided an extensive set of measurements of the chemical and physical characteristics of blowing snow over sea ice, and of the aerosol associated with it. Based on the observations, we have developed an improved parameterisation of the release of bromine from blowing snow. 
The paper presents results from the simulation performed using the United Kingdom Chemistry and Aerosols (UKCA) model, run as a component of the UK Met Office Unified Model, employing the updated parameterisation of Yang et al. We assess the performance of the parameterisation in simulating tropospheric BrO, a review of relevant parameters, as well as a quantitative assessment of the release of sea salt aerosol and its contribution to halogen chemistry in the polar and global atmosphere.

  6. Python-Based Applications for Hydrogeological Modeling

    NASA Astrophysics Data System (ADS)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any releases at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells, using either the Theis solution for a fully-confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. 
The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
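    The convolution-based source-history step described above can be sketched directly with numpy; the unit response function here is a synthetic lagged pulse standing in for ATRANS output, and the release history is hypothetical.

```python
import numpy as np

# Receptor concentration = source release history convolved with the URF.
dt = 1.0                                                   # years per step
urf = np.array([0.0, 0.1, 0.3, 0.25, 0.15, 0.08, 0.04])    # response to a unit release
source = np.array([0.0, 2.0, 2.0, 0.5, 0.0, 0.0, 0.0])     # hypothesized release history
receptor = np.convolve(source, urf)[: source.size] * dt    # superposed responses
```

    Testing a different hypothesis about the site means swapping in another `source` array and re-convolving, which is what makes the approach convenient for screening scenarios.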

  7. Thermogenic methane release as a cause for the long duration of the PETM

    PubMed Central

    Frieling, Joost; Svensen, Henrik H.; Planke, Sverre; Cramwinckel, Margot J.; Selnes, Haavard; Sluijs, Appy

    2016-01-01

    The Paleocene–Eocene Thermal Maximum (PETM) (∼56 Ma) was a ∼170,000-y (∼170-kyr) period of global warming associated with rapid and massive injections of 13C-depleted carbon into the ocean–atmosphere system, reflected in sedimentary components as a negative carbon isotope excursion (CIE). Carbon cycle modeling has indicated that the shape and magnitude of this CIE are generally explained by a large and rapid initial pulse, followed by ∼50 kyr of 13C-depleted carbon injection. Suggested sources include submarine methane hydrates, terrigenous organic matter, and thermogenic methane and CO2 from hydrothermal vent complexes. Here, we test for the contribution of carbon release associated with volcanic intrusions in the North Atlantic Igneous Province. We use dinoflagellate cyst and stable carbon isotope stratigraphy to date the active phase of a hydrothermal vent system and find it to postdate massive carbon release at the onset of the PETM. Crucially, however, it correlates to the period within the PETM of longer-term 13C-depleted carbon release. This finding represents actual proof of PETM carbon release from a particular reservoir. Based on carbon cycle box model [i.e., Long-Term Ocean–Atmosphere–Sediment Carbon Cycle Reservoir (LOSCAR) model] experiments, we show that 4–12 pulses of carbon input from vent systems over 60 kyr with a total mass of 1,500 Pg of C, consistent with the vent literature, match the shape of the CIE and pattern of deep ocean carbonate dissolution as recorded in sediment records. We therefore conclude that CH4 from the Norwegian Sea vent complexes was likely the main source of carbon during the PETM, following its dramatic onset. PMID:27790990

  8. Strategies to Sustain and Enhance Performance in Stressful Environments

    DTIC Science & Technology

    1990-03-14

    Pressure Switch in his left hand which controlled power to the vacuum source which was only active when the subject was pressing on the positive... pressure switch. Internal LBNP chamber vacuum was calibrated with a Wallace & Tierman 1500 Hi-Performance Gauge (Model 61A-1D-0800, Wallace & Tierman...pressure release when the subject released the positive pressure switch without warning. Behavioral testing continued regardless of when LBNP was returned to

  9. A framework for emissions source apportionment in industrial areas: MM5/CALPUFF in a near-field application.

    PubMed

    Ghannam, K; El-Fadel, M

    2013-02-01

    This paper examines the relative source contributions to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 microm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (MM5/CALPUFF) dispersion modeling system to simulate individual source contributions under several spatial and temporal scales. As the contribution of a particular source to ground-level concentrations can be evaluated by simulating that single source's emissions, or alternatively total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries) extending over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially since the incremental improvement in air quality associated with this common abatement strategy may not accomplish the desired benefit in terms of lower exposure, despite costly emissions capping. 
The application of atmospheric dispersion models for source apportionment helps identify major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extending over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process change, may yield minimal return on investment in terms of improved air quality at sensitive receptors.
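    The linearity the sensitivity analysis checks for is exact in a chemistry-free Gaussian plume, where concentration is directly proportional to the emission rate Q. A minimal sketch, with all plume parameters assumed for illustration:

```python
import numpy as np

# Ground-level centerline concentration of an elevated Gaussian plume
# (reflection off the ground included; no chemistry, so C is linear in Q).
def plume_conc(Q, u=3.0, sigma_y=40.0, sigma_z=20.0, H=50.0):
    """Q: emission rate (g/s); u: wind speed (m/s); H: effective stack height (m)."""
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

c1 = plume_conc(Q=1.0)
c5 = plume_conc(Q=5.0)   # five times the emission rate, same meteorology
```

    Any departure from this exact proportionality in a full CALPUFF run would therefore point to chemistry or plume-overlap effects, which is what the paper's sensitivity simulations test.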

  10. Release and Removal of Microorganisms from Land-Deposited Animal Waste and Animal Manures: A Review of Data and Models.

    PubMed

    Blaustein, Ryan A; Pachepsky, Yakov A; Shelton, Daniel R; Hill, Robert L

    2015-09-01

    Microbial pathogens present a leading cause of impairment to rivers, bays, and estuaries in the United States, and agriculture is often viewed as the major contributor to such contamination. Microbial indicators and pathogens are released from land-applied animal manure during precipitation and irrigation events and are carried in overland and subsurface flow that can reach and contaminate surface waters and ground water used for human recreation and food production. Simulating the release and removal of manure-borne pathogens and indicator microorganisms is an essential component of microbial fate and transport modeling regarding food safety and water quality. Although microbial release controls the quantities of available pathogens and indicators that move toward human exposure, a literature review on this topic is lacking. This critical review on microbial release and subsequent removal from manure and animal waste application areas includes sections on microbial release processes and release-affecting factors, such as differences in the release of microbial species or groups; bacterial attachment in turbid suspensions; animal source; animal waste composition; waste aging; manure application method; manure treatment effect; rainfall intensity, duration, and energy; rainfall recurrence; dissolved salts and temperature; vegetation and soil; and spatial and temporal scale. Differences in microbial release from liquid and solid manures are illustrated, and the influential processes are discussed. Models used for simulating release and removal and current knowledge gaps are presented, and avenues for future research are suggested. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
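    Many of the release models the review covers reduce to simple depletion kinetics. As a hedged illustration (not a model endorsed by the review), first-order exponential release of manure-borne organisms during a rainfall event looks like:

```python
import numpy as np

# First-order release: the manure-borne population depletes as
#   M(t) = M0 * exp(-k * t), so cumulative release is M0 * (1 - exp(-k * t)).
M0 = 1e8                        # initial organisms per kg manure (assumed)
k = 0.05                        # release rate constant, 1/min (assumed)
t = np.arange(0.0, 121.0, 1.0)  # 2-hour rainfall event, minutes
remaining = M0 * np.exp(-k * t)
released = M0 - remaining       # cumulative organisms released to runoff
```

    Published variants add, for example, rainfall-intensity dependence of k or separate easily released and slowly released pools, which are among the factors the review discusses.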

  11. Radiation signatures from a locally energized flaring loop

    NASA Technical Reports Server (NTRS)

    Emslie, A. G.; Vlahos, L.

    1980-01-01

    The radiation signatures from a locally energized solar flare loop, computed from the physical properties of the energy release mechanisms, were consistent with hard X-ray, microwave, and EUV observations for plausible source parameters. It was found that a suprathermal tail of high-energy electrons is produced by the primary energy release, and that the number of energetic charged particles ejected into the interplanetary medium in the model is consistent with observations. The model predicts that the intrinsic polarization of the hard X-ray burst should increase over the photon energy range of 20 to 100 keV.

  12. The TENCompetence Infrastructure: A Learning Network Implementation

    NASA Astrophysics Data System (ADS)

    Vogten, Hubert; Martens, Harrie; Lemmers, Ruud

The TENCompetence project developed a first release of a Learning Network infrastructure to support individuals, groups and organisations in professional competence development. This Learning Network infrastructure was released as open source to the community, thereby allowing users and organisations to use and contribute to this development as they see fit. The infrastructure consists of client applications providing the user experience and server components that provide the services to these clients. These services implement the domain model (Koper 2006) by provisioning the entities of the domain model (see also Sect. 18.4) and henceforth will be referenced as domain entity services.

  13. Null-space Monte Carlo particle tracking to assess groundwater PCE (Tetrachloroethene) diffuse pollution in north-eastern Milan functional urban area.

    PubMed

    Alberti, Luca; Colombo, Loris; Formentin, Giovanni

    2018-04-15

The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot-spots), and 2) Multiple-Point Sources (MPS), consisting of a series of unidentifiable small sources clustered within large areas, generating anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant mass into groundwater is unfeasible. A specific legislative action has recently been enacted at the regional level (DGR IX/3510-2012) in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely positioned. This paper describes a methodology devised to identify the areas with the highest likelihood of hosting potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA, and through the PEST code a Null-Space Monte Carlo method was applied to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered representative of the FUA areas with the highest likelihood of hosting MPS responsible for diffuse contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
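The final mapping step described above (counting, for each stochastic calibrated model, the cells crossed by back-traced particles, then summing across realizations) can be sketched as follows. The grid, the realization count, and the random walks standing in for MODPATH flowpaths are all hypothetical placeholders:

```python
import numpy as np

nrow, ncol, n_realizations = 20, 20, 5
rng = np.random.default_rng(1)
likelihood = np.zeros((nrow, ncol), dtype=int)

for _ in range(n_realizations):
    crossed = np.zeros((nrow, ncol), dtype=bool)
    # each random walk stands in for a MODPATH back-traced advective flowpath
    r, c = nrow - 1, ncol // 2   # particles released at the control cell
    for _ in range(100):
        crossed[r, c] = True
        r = int(min(max(r + rng.integers(-1, 2), 0), nrow - 1))
        c = int(min(max(c + rng.integers(-1, 2), 0), ncol - 1))
    likelihood += crossed        # cells crossed in many realizations score high

# cells with the highest counts are the most likely MPS host areas
print(likelihood.max())
```

A cell crossed by flowpaths in every calibrated realization is a robust candidate MPS area; a cell crossed in only a few realizations is sensitive to the conductivity field and therefore less reliable.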

  14. Near-source air quality in rail yard environments – an overview of recent EPA measurement and modeling findings

    EPA Science Inventory

This presentation will provide a summary of field measurements conducted in areas surrounding two major rail yards as well as modeling simulations of rail yard emissions dispersion. The Cicero Rail Yard Study (CIRYS) was recently released to the public and includes mobile and ...

  15. Soundscapes

    DTIC Science & Technology

    2013-09-30

STATEMENT A. Approved for public release; distribution is unlimited. Soundscapes. Michael B...models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on...APPROACH The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that have been collected

  16. Soundscapes

    DTIC Science & Technology

    2012-09-30

STATEMENT A. Approved for public release; distribution is unlimited. Soundscapes. Michael B...models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we will focus initially on...APPROACH The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that have been collected

  17. Source-term development for a contaminant plume for use by multimedia risk assessment models

    NASA Astrophysics Data System (ADS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    2000-02-01

    Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.

  18. The Aliso Canyon Natural Gas Leak : Large Eddy Simulations for Modeling Atmospheric Dynamics and Interpretation of Observations.

    NASA Astrophysics Data System (ADS)

    Prasad, K.; Thorpe, A. K.; Duren, R. M.; Thompson, D. R.; Whetstone, J. R.

    2016-12-01

The National Institute of Standards and Technology (NIST) has supported the development and demonstration of a measurement capability to accurately locate greenhouse gas sources and measure their flux to the atmosphere over urban domains. However, uncertainties in transport models, which form the basis of all top-down approaches, can significantly affect our capability to attribute sources and predict their flux to the atmosphere. Reducing uncertainties between bottom-up and top-down models will require high resolution transport models as well as validation and verification of dispersion models over an urban domain. Tracer experiments involving the release of Perfluorocarbon Tracers (PFTs) at known flow rates offer the best approach for validating dispersion / transport models. However, tracer experiments are limited by cost, the ability to make continuous measurements, and environmental concerns. Natural tracer experiments, such as the leak from the Aliso Canyon underground storage facility, offer a unique opportunity to improve and validate high resolution transport models, test leak hypotheses, and estimate the amount of methane released. High spatial resolution (10 m) Large Eddy Simulations (LES) coupled with WRF atmospheric transport models were performed to simulate the dynamics of the Aliso Canyon methane plume and to quantify the source. High resolution forward simulation results were combined with aircraft and tower based in-situ measurements as well as data from NASA airborne imaging spectrometers. Comparison of simulation results with measurement data demonstrates the capability of the LES models to accurately model transport and dispersion of methane plumes over urban domains.

  19. Utilization of 134Cs/137Cs in the environment to identify the reactor units that caused atmospheric releases during the Fukushima Daiichi accident

    NASA Astrophysics Data System (ADS)

    Chino, Masamichi; Terada, Hiroaki; Nagai, Haruyasu; Katata, Genki; Mikami, Satoshi; Torii, Tatsuo; Saito, Kimiaki; Nishizawa, Yukiyasu

    2016-08-01

The Fukushima Daiichi nuclear power reactor units that generated large amounts of airborne discharges during the period of March 12-21, 2011 were identified individually by analyzing the combination of measured 134Cs/137Cs depositions on ground surfaces and atmospheric transport and deposition simulations. Because the values of 134Cs/137Cs are different in reactor units owing to fuel burnup differences, the 134Cs/137Cs ratio measured in the environment was used to determine which reactor unit ultimately contaminated a specific area. Atmospheric dispersion model simulations were used for predicting specific areas contaminated by each dominant release. Finally, by comparing the results from both sources, the specific reactor units that yielded the most dominant atmospheric release quantities could be determined. The major source reactor units were Unit 1 in the afternoon of March 12, 2011, and Unit 2 during the period from the late night of March 14 to the morning of March 15, 2011. These results corresponded to those assumed in our previous source term estimation studies. Furthermore, new findings suggested that the major source reactors from the evening of March 15, 2011 were Units 2 and 3 and that the dominant source reactor on March 20, 2011 temporally changed from Unit 3 to Unit 2.

  20. Utilization of (134)Cs/(137)Cs in the environment to identify the reactor units that caused atmospheric releases during the Fukushima Daiichi accident.

    PubMed

    Chino, Masamichi; Terada, Hiroaki; Nagai, Haruyasu; Katata, Genki; Mikami, Satoshi; Torii, Tatsuo; Saito, Kimiaki; Nishizawa, Yukiyasu

    2016-08-22

The Fukushima Daiichi nuclear power reactor units that generated large amounts of airborne discharges during the period of March 12-21, 2011 were identified individually by analyzing the combination of measured (134)Cs/(137)Cs depositions on ground surfaces and atmospheric transport and deposition simulations. Because the values of (134)Cs/(137)Cs are different in reactor units owing to fuel burnup differences, the (134)Cs/(137)Cs ratio measured in the environment was used to determine which reactor unit ultimately contaminated a specific area. Atmospheric dispersion model simulations were used for predicting specific areas contaminated by each dominant release. Finally, by comparing the results from both sources, the specific reactor units that yielded the most dominant atmospheric release quantities could be determined. The major source reactor units were Unit 1 in the afternoon of March 12, 2011, and Unit 2 during the period from the late night of March 14 to the morning of March 15, 2011. These results corresponded to those assumed in our previous source term estimation studies. Furthermore, new findings suggested that the major source reactors from the evening of March 15, 2011 were Units 2 and 3 and that the dominant source reactor on March 20, 2011 temporally changed from Unit 3 to Unit 2.

  1. Utilization of 134Cs/137Cs in the environment to identify the reactor units that caused atmospheric releases during the Fukushima Daiichi accident

    PubMed Central

    Chino, Masamichi; Terada, Hiroaki; Nagai, Haruyasu; Katata, Genki; Mikami, Satoshi; Torii, Tatsuo; Saito, Kimiaki; Nishizawa, Yukiyasu

    2016-01-01

The Fukushima Daiichi nuclear power reactor units that generated large amounts of airborne discharges during the period of March 12–21, 2011 were identified individually by analyzing the combination of measured 134Cs/137Cs depositions on ground surfaces and atmospheric transport and deposition simulations. Because the values of 134Cs/137Cs are different in reactor units owing to fuel burnup differences, the 134Cs/137Cs ratio measured in the environment was used to determine which reactor unit ultimately contaminated a specific area. Atmospheric dispersion model simulations were used for predicting specific areas contaminated by each dominant release. Finally, by comparing the results from both sources, the specific reactor units that yielded the most dominant atmospheric release quantities could be determined. The major source reactor units were Unit 1 in the afternoon of March 12, 2011, and Unit 2 during the period from the late night of March 14 to the morning of March 15, 2011. These results corresponded to those assumed in our previous source term estimation studies. Furthermore, new findings suggested that the major source reactors from the evening of March 15, 2011 were Units 2 and 3 and that the dominant source reactor on March 20, 2011 temporally changed from Unit 3 to Unit 2. PMID:27546490
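The attribution logic described in this abstract, matching a measured 134Cs/137Cs ratio to the reactor unit with the closest burnup-dependent ratio, can be sketched as follows. The unit-specific ratios below are hypothetical placeholders, not values derived in the study; real use also requires decay-correcting every ratio to a common reference date.

```python
# Toy attribution of a measured 134Cs/137Cs deposition ratio to a reactor unit.
# The unit-specific ratios are invented for illustration only.
unit_ratios = {"Unit 1": 0.94, "Unit 2": 1.08, "Unit 3": 1.05}

def attribute_unit(measured_ratio, ratios):
    """Return the unit whose 134Cs/137Cs ratio is closest to the measurement."""
    return min(ratios, key=lambda unit: abs(ratios[unit] - measured_ratio))

print(attribute_unit(1.07, unit_ratios))  # -> Unit 2
```

In the study this nearest-ratio idea is combined with dispersion simulations, since several units can have similar ratios and the plume geometry is needed to break ties.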

  2. Near-IR-induced dissociation of thermally-sensitive star polymers.

    PubMed

    Dai, Yuqiong; Sun, Hao; Pal, Sunirmal; Zhang, Yunlu; Park, Sangwoo; Kabb, Christopher P; Wei, Wei David; Sumerlin, Brent S

    2017-03-01

Responsive systems sensitive to near-infrared (NIR) light are promising for triggered release due to efficient deep tissue penetration of NIR irradiation relative to higher energy sources (e.g., UV), allowing for spatiotemporal control over triggering events with minimal potential for tissue damage. Herein, we report star polymers containing thermally-labile azo linkages that dissociate during conventional heating or during localized heating via the photothermal effect upon NIR irradiation. Controlled release during conventional heating was investigated for the star polymers loaded with a model dye, with negligible release being observed at 25 °C and >80% release at 90 °C. Star polymers co-loaded with NIR-responsive indocyanine green showed rapid dye release upon NIR irradiation (λ ≥ 715 nm) due to the photothermally-induced degradation of azo linkages within the cores of the star polymers. This approach provides access to a new class of delivery and release systems that can be triggered by noninvasive external stimulation.

  3. Modelled isotopic fractionation and transient diffusive release of methane from potential subsurface sources on Mars

    NASA Astrophysics Data System (ADS)

    Stevens, Adam H.; Patel, Manish R.; Lewis, Stephen R.

    2017-01-01

We calculate transport timescales of martian methane and investigate the effect of potential release mechanisms into the atmosphere using a numerical model that includes both Fickian and Knudsen diffusion. The incorporation of Knudsen diffusion, which improves on a Fickian description of transport given the low permeability of the martian regolith, means that transport timescales from sources collocated with a putative martian water table are very long, up to several million martian years. These transport timescales also mean that any temporally varying source process, even in the shallow subsurface, would not produce a significant, observable variation in atmospheric methane concentration, since changes resulting from small variations in flux would be rapidly obscured by atmospheric transport. This means that a short-lived 'plume' of methane, as detected by Mumma et al. (2009) and Webster et al. (2014), cannot be reconciled with diffusive transport from any reasonable depth and must instead invoke alternative processes such as fracturing or convective plumes. It is shown that transport through the martian regolith will cause a significant change in the isotopic composition of the gas, meaning that methane release from depth will produce an isotopic signature in the atmosphere that could be significantly different from the source composition. The deeper the source, the greater the change, and the change in methane composition in both δ13C and δD approaches -1000 ‰ for sources at depths greater than around 1 km. This means that signatures of specific sources, in particular the methane produced by biogenesis that is generally depleted in 13CH4 and CH3D, could be obscured. We find that an abiogenic source of methane could therefore display an isotopic fractionation consistent with that expected for biogenic source processes if the source were at sufficient depth. The only unambiguous inference that can be made from measurements of methane isotopes alone is that a measured δ13C or δD close to zero or positive implies a shallow, abiogenic source. The effect of transport processes must therefore be carefully considered when attempting to identify the source of any methane observed by future missions, and the severe depletion in heavier isotopologues will have implications for the sensitivity requirements of future missions that aim to measure the isotopic fractionation of methane in the martian atmosphere.
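The combination of Fickian and Knudsen diffusion used here can be sketched with the Bosanquet interpolation, in which the two diffusivities add harmonically so that the slower mechanism dominates in narrow pores. The parameter values below are illustrative assumptions, not the paper's parameterization (which also accounts for porosity, tortuosity, and the temperature structure of the regolith):

```python
import math

def knudsen_diffusivity(pore_radius_m, temp_K, molar_mass_kg):
    # Knudsen diffusivity: D_K = (2r/3) * sqrt(8RT / (pi * M))
    R = 8.314  # J mol^-1 K^-1
    return (2.0 * pore_radius_m / 3.0) * math.sqrt(
        8.0 * R * temp_K / (math.pi * molar_mass_kg))

def effective_diffusivity(D_fick, D_knudsen):
    # Bosanquet interpolation: harmonic sum, slower mechanism dominates
    return 1.0 / (1.0 / D_fick + 1.0 / D_knudsen)

# Illustrative (assumed) parameters:
D_F = 1e-5        # Fickian diffusivity of CH4 in the pore gas, m^2/s
r_pore = 1e-6     # regolith pore radius, m
T = 210.0         # subsurface temperature, K
M_CH4 = 16.04e-3  # molar mass of CH4, kg/mol

D_K = knudsen_diffusivity(r_pore, T, M_CH4)
D_eff = effective_diffusivity(D_F, D_K)

L = 1000.0                        # source depth, m
t_seconds = L**2 / (2.0 * D_eff)  # characteristic diffusion time ~ L^2 / 2D
print(f"D_K = {D_K:.2e} m^2/s, D_eff = {D_eff:.2e} m^2/s")
print(f"diffusion timescale from {L:.0f} m: {t_seconds / 3.15e7:.1e} yr")
```

Because the harmonic sum is always smaller than either input diffusivity, neglecting the Knudsen term can only underestimate the transport timescale, which is the direction of the correction this paper emphasizes.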

  4. A Community Terrain-Following Ocean Modeling System (ROMS/TOMS)

    DTIC Science & Technology

    2011-09-30

DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. A Community Terrain-Following Ocean Modeling System (ROMS...732) 932-6555 x266 Fax: (732) 932-6520 email: arango@marine.rutgers.edu Award Number: N00014-10-1-0322 http://ocean-modeling.org http...

  5. The energy release and temperature field in the ultracold neutron source of the WWR-M reactor at the Petersburg Nuclear Physics Institute

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serebrov, A. P., E-mail: serebrov@pnpi.spb.ru; Kislitsin, B. V.; Onegin, M. S.

    2016-12-15

Results of calculations of energy releases and temperature fields in the ultracold neutron source under design at the WWR-M reactor are presented. It is shown that, with the reactor power of 18 MW, the power of energy release in the 40-L volume of the source with superfluid helium will amount to 28.5 W, while 356 W will be released in a liquid-deuterium premoderator. The lead shield between the reactor core and the source reduces the radiative heat release by an order of magnitude. A thermal power of 22 kW is released in it, which is removed by a flow of water. The distribution of temperatures in all components of the vacuum structure is presented, and the temperature does not exceed 100°C at full reactor power. The calculations performed make it possible to proceed to the design of the source.

  6. Tracking the release of IPCC AR5 on Twitter: Users, comments, and sources following the release of the Working Group I Summary for Policymakers.

    PubMed

    Newman, Todd P

    2017-10-01

Using the immediate release of the Working Group I Summary for Policymakers of the Intergovernmental Panel on Climate Change Fifth Assessment Report as a case study, this article seeks to describe what types of actors were most active during the summary release, the substance of the most propagated tweets during the summary release, and the media sources that attracted the most attention during the summary release. The results from the study suggest that non-elite actors, such as individual bloggers and concerned citizens, accounted for the majority of the most propagated tweets in the sample. This study also finds that the majority of the most propagated tweets in the sample focused on public understanding of the report. Finally, while mainstream media sources were the most frequently discussed media sources, a number of new media and science news and information sources compete for audience attention.

  7. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge of the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided the related uncertainties (12 PBq with a standard deviation of 15-20% for cesium-137 and 190-380 PBq with a standard deviation of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even though orders of magnitude were consistent, the reconstructed activities depended significantly on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.
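The positivity-constrained inversion at the heart of this class of methods can be illustrated with a toy nonnegative least-squares problem. The source-receptor matrix, the true source, and the noise level below are all synthetic placeholders, and `scipy.optimize.nnls` stands in for the paper's maximum-likelihood machinery, which additionally estimates the prior error amplitudes:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical toy source-receptor problem (not the paper's data):
# column j of H is the modeled concentration at each receptor for a
# unit release in time interval j; y holds the observations.
rng = np.random.default_rng(0)
H = rng.uniform(0.0, 1.0, size=(50, 10))   # 50 receptors x 10 release intervals
x_true = np.array([0, 0, 5, 20, 3, 0, 0, 1, 0, 0], dtype=float)
y = H @ x_true + rng.normal(0.0, 0.01, size=50)  # noisy synthetic observations

# Nonnegative least squares enforces the positivity of the source term
x_hat, residual = nnls(H, y)
print(np.round(x_hat, 2))
```

With dense observations and low noise the true release history is recovered almost exactly; the abstract's point is that with only a few hundred sparse observations the answer instead hinges on how the prior errors are modeled.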

  8. Detailed source term estimation of atmospheric release during the Fukushima Dai-ichi nuclear power plant accident by coupling atmospheric and oceanic dispersion models

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Chino, Masamichi; Terada, Hiroaki; Kobayashi, Takuya; Ota, Masakazu; Nagai, Haruyasu; Kajino, Mizuo

    2014-05-01

Temporal variations in the release amounts of radionuclides during the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident and their dispersion processes are essential to evaluate the environmental impacts and the resultant radiological doses to the public. Here, we estimated a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with coupled atmospheric and oceanic dispersion simulations by WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN, developed by the authors. New schemes for wet, dry, and fog depositions of radioactive iodine gas (I2 and CH3I) and other particles (I-131, Te-132, Cs-137, and Cs-134) were incorporated into WSPEEDI-II. The deposition calculated by WSPEEDI-II was used as input data for the ocean dispersion calculations by SEA-GEARN. A reverse estimation method based on simulations by both models, assuming a unit release rate (1 Bq h-1), was adopted to estimate the source term at FNPP1 using air dose rates and air and sea surface concentrations. The results suggested that the major releases of radionuclides from FNPP1 occurred in the following periods during March 2011: the afternoon of the 12th, when the venting and hydrogen explosion occurred at Unit 1; the morning of the 13th, after the venting event at Unit 3; midnight on the 14th, when several openings of the SRV (steam relief valve) were conducted at Unit 2; the morning and night of the 15th; and the morning of the 16th. The modified WSPEEDI-II using the newly estimated source term reproduced well the local and regional patterns of air dose rate and surface deposition of I-131 and Cs-137 obtained by airborne observations. Our dispersion simulations also revealed that the most highly contaminated areas around FNPP1 were created from 15 to 16 March by complicated interactions among rainfall (wet deposition), plume movements, phase properties (gas or particle) of I-131, and release rates associated with reactor pressure variations in Units 2 and 3.
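The unit-release reverse estimation described above rests on the linearity of the dispersion problem: the model is run once with a 1 Bq h-1 release, and the actual release rate is the ratio of the observation to the unit-release response. A minimal sketch, with purely hypothetical numbers:

```python
def estimate_release_rate(observed, simulated_per_unit):
    """Release rate (Bq/h) = observation / modeled response to a 1 Bq/h release."""
    return observed / simulated_per_unit

# Hypothetical values: an observed quantity at one monitoring point and the
# corresponding model response to a 1 Bq/h unit release (same units).
obs = 2.4e-3
unit_response = 1.2e-14
print(f"estimated release rate: {estimate_release_rate(obs, unit_response):.1e} Bq/h")
```

In practice one such ratio is formed per release segment and monitoring point, and the segments are reconciled across many observations rather than taken from a single measurement.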

  9. Who messed up my lake?

    EPA Science Inventory

Initial results from a lake-wide agent-based simulation releasing virtual drifters from multiple tributaries over time. We examine the use of agent-based modeling to break down the sources contributing to the composition of nearshore waters. Knowing that flow is highly biased in ...

  10. Impact of methane flow through deformable lake sediments on atmospheric release

    NASA Astrophysics Data System (ADS)

    Scandella, B.; Juanes, R.

    2010-12-01

Methane is a potent greenhouse gas that is generated geothermally and biologically in lake and ocean sediments. Free gas bubbles may escape oxidative traps and contribute more to the atmospheric source than dissolved methane, but the details of the methane release depend on the interactions between the multiple fluid phases and the deformable porous medium. We present a model and supporting laboratory experiments of methane release through “breathing” dynamic flow conduits that open in response to drops in the hydrostatic load on lake sediments; the model has been validated against a high-resolution record of free gas flux and hydrostatic pressure in Upper Mystic Lake, MA. In contrast to previous linear elastic fracture mechanics analyses of gassy sediments, the evolution of gas transport in a deformable compliant sediment is presented within the framework of multiphase poroplasticity. Experiments address how strongly the mode and rate of gas flow, captured by our model, impact the size of bubbles released into the water column. A bubble's size in turn determines how efficiently it transports methane to the atmosphere, and integrating this effect will be critical to improving estimates of the atmospheric methane source from lakes.
[Figure: Cross-sectional schematic of lake sediments showing two venting sites, one open (left) and one closed (right). The vertical release of gas bubbles (red) at the open venting site creates a local pressure drop, which drives both bubble formation from the methane-rich pore water (higher concentrations shaded darker red) and lateral advection of dissolved methane (purple arrows). Even as bubbles in the open site escape, those at the closed site remain trapped.]

  11. Implications of nutrient release from iron metal for microbial regrowth in water distribution systems.

    PubMed

    Morton, Siyuan C; Zhang, Yan; Edwards, Marc A

    2005-08-01

    Control of microbial regrowth in iron pipes is a major challenge for water utilities. This work examines the inter-relationship between iron corrosion and bacterial regrowth, with a special focus on the potential of iron pipe to serve as a source of phosphorus. Under some circumstances, corroding iron and steel may serve as a source for all macronutrients necessary for bacterial regrowth including fixed carbon, fixed nitrogen and phosphorus. Conceptual models and experimental data illustrate that levels of phosphorus released from corroding iron are significant relative to that necessary to sustain high levels of biofilm bacteria. Consequently, it may not be possible to control regrowth on iron surfaces by limiting phosphorus in the bulk water.

  12. Demonstration of Technologies for Remote and in Situ Sensing of Atmospheric Methane Abundances - a Controlled Release Experiment

    NASA Astrophysics Data System (ADS)

    Aubrey, A. D.; Thorpe, A. K.; Christensen, L. E.; Dinardo, S.; Frankenberg, C.; Rahn, T. A.; Dubey, M.

    2013-12-01

It is critical to constrain both natural and anthropogenic sources of methane to better predict the impact on global climate change. Critical technologies for this assessment include those that can detect methane point and concentrated diffuse sources over large spatial scales. Airborne spectrometers can potentially fill this gap for large-scale remote sensing of methane, while in situ sensors, both ground-based and mounted on aerial platforms, can monitor and quantify at small to medium spatial scales. The Jet Propulsion Laboratory (JPL) and collaborators recently conducted a field test near Casper, WY, at the Rocky Mountain Oilfield Test Center (RMOTC). These tests were focused on demonstrating the performance of remote and in situ sensors for quantification of point-sourced methane. A series of three controlled release points was set up at RMOTC, and over the course of six experiment days the point-source flux rates were varied from 50 to 2400 liters per minute (LPM). During these releases, in situ sensors measured real-time methane concentrations from field towers (downwind of the release point) and from a small Unmanned Aerial System (sUAS) used to characterize the spatiotemporal variability of the plume structure. Concurrent with these controlled point-source releases, airborne sensor overflights were conducted using three aircraft. The NASA Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) participated with a payload consisting of a Fourier Transform Spectrometer (FTS) and an in situ methane sensor. Two imaging spectrometers provided assessment of optical and thermal infrared detection of methane plumes. The AVIRIS-next generation (AVIRIS-ng) sensor has been demonstrated for detection of atmospheric methane in the shortwave infrared region, specifically using the absorption features at ~2.3 μm. Detection of methane in the thermal infrared region was evaluated by flying the Hyperspectral Thermal Emission Spectrometer (HyTES), whose retrievals interrogate spectral features in the 7.5 to 8.5 μm region. Here we discuss preliminary results from the JPL activities during the RMOTC controlled release experiment, including the capabilities of airborne sensors for total-column atmospheric methane detection and comparison to results from ground measurements and dispersion models. Potential application areas for these remote sensing technologies include assessment of anthropogenic and natural methane sources over wide spatial scales that represent significant unconstrained factors in the global methane budget.

  13. Evaluating the combined effects of source zone mass release rates and aquifer heterogeneity on solute discharge uncertainty

    NASA Astrophysics Data System (ADS)

    de Barros, Felipe P. J.

    2018-07-01

    Quantifying the uncertainty in solute mass discharge at an environmentally sensitive location is key to assess the risks due to groundwater contamination. Solute mass fluxes are strongly affected by the spatial variability of hydrogeological properties as well as release conditions at the source zone. This paper provides a methodological framework to investigate the interaction between the ubiquitous heterogeneity of the hydraulic conductivity and the mass release rate at the source zone on the uncertainty of mass discharge. Through the use of perturbation theory, we derive analytical and semi-analytical expressions for the statistics of the solute mass discharge at a control plane in a three-dimensional aquifer while accounting for the solute mass release rates at the source. The derived solutions are limited to aquifers displaying low-to-mild heterogeneity. Results illustrate the significance of the source zone mass release rate in controlling the mass discharge uncertainty. The relative importance of the mass release rate on the mean solute discharge depends on the distance between the source and the control plane. On the other hand, we find that the solute release rate at the source zone has a strong impact on the variance of the mass discharge. Within a risk context, we also compute the peak mean discharge as a function of the parameters governing the spatial heterogeneity of the hydraulic conductivity field and mass release rates at the source zone. The proposed physically-based framework is application-oriented, computationally efficient and capable of propagating uncertainty from different parameters onto risk metrics. Furthermore, it can be used for preliminary screening purposes to guide site managers to perform system-level sensitivity analysis and better allocate resources.

  14. Earthquake Forecasting in Northeast India using Energy Blocked Model

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, D. K.

    2009-12-01

    In the present study, the cumulative seismic energy released by earthquakes (M ≥ 5) from 1897 to 2007 is analyzed for Northeast (NE) India, one of the most seismically active regions of the world. The occurrence of three great earthquakes, the 1897 Shillong plateau earthquake (Mw = 8.7), the 1934 Bihar-Nepal earthquake (Mw = 8.3) and the 1950 Upper Assam earthquake (Mw = 8.7), signifies the possibility of future great earthquakes in this region. The regional seismicity map for the study region is prepared by plotting earthquake data for the period 1897 to 2007 from sources such as the USGS and ISC catalogs, the GCMT database and the Indian Meteorological Department (IMD). Based on geology, tectonics and seismicity, the study region is classified into three source zones: Zone 1, the Arakan-Yoma zone (AYZ); Zone 2, the Himalayan zone (HZ); and Zone 3, the Shillong Plateau zone (SPZ). The Arakan-Yoma Range is characterized by a subduction zone developed at the junction of the Indian and Eurasian plates; it shows a dense clustering of earthquake events and produced the 1908 eastern boundary earthquake. The Himalayan tectonic zone comprises the subduction zone and the Assam syntaxis, and experienced great earthquakes such as the 1950 Assam, 1934 Bihar and 1951 Upper Himalayan events with Mw > 8. The Shillong Plateau zone is affected by major faults such as the Dauki fault and exhibits its own distinct tectonic features; the seismicity and hazard potential of the Shillong Plateau are distinct from those of the Himalayan thrust. Using the energy blocked model of Tsuboi, major earthquakes are forecast for each source zone. In this model, the supply of energy for potential earthquakes in an area is remarkably uniform with respect to time, and the difference between the supplied energy and the cumulative energy released over a span of time is a good indicator of the energy blocked and can be utilized for forecasting major earthquakes.
    The proposed process provides a consistent model of gradual strain accumulation and non-uniform release through large earthquakes, and can be applied in the evaluation of seismic risk. The cumulative seismic energy released by major earthquakes in each zone over the 110-year period from 1897 to 2007 is calculated and plotted, giving a characteristic curve for each zone. Each curve is irregular, reflecting occasional high activity. The maximum earthquake energy available at a particular time in a given area is given by S; the difference between this theoretical upper limit and the cumulative energy released up to that time yields the maximum magnitude of an earthquake that can occur in the future. The energy blocked in the three source zones, available as a supply for potential earthquakes in due course of time, is 1.35×10^17 J, 4.25×10^17 J and 0.12×10^17 J for zones 1, 2 and 3, respectively. The predicted maximum magnitudes (mmax) obtained for AYZ, HZ and SPZ are 8.2, 8.6 and 8.4, respectively. These results are consistent with previous predictions by other workers.
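
    The forecasting step above amounts to inverting an energy-magnitude relation. A minimal Python sketch, assuming the standard Gutenberg-Richter relation log10(E) = 4.8 + 1.5M with E in joules; the abstract does not state which relation the authors used, so the constants here are illustrative:

```python
import math

def max_magnitude_from_blocked_energy(e_blocked):
    """Invert the Gutenberg-Richter energy-magnitude relation
    log10(E) = 4.8 + 1.5*M (E in joules): the magnitude of a single
    earthquake that would release the entire blocked energy at once."""
    return (math.log10(e_blocked) - 4.8) / 1.5

# blocked energies reported for the three source zones, in joules
for zone, e in [("AYZ", 1.35e17), ("HZ", 4.25e17), ("SPZ", 0.12e17)]:
    print(zone, round(max_magnitude_from_blocked_energy(e), 1))
```

    With these constants the AYZ and HZ values round to the reported 8.2 and 8.6; the SPZ value does not, which suggests the published figure rests on a different energy-magnitude relation or on additional zone-specific terms.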

  15. Subsurface iceberg melt key to Greenland fjord freshwater budget

    NASA Astrophysics Data System (ADS)

    Moon, T.; Sutherland, D. A.; Carroll, D.; Felikson, D.; Kehrl, L.; Straneo, F.

    2018-01-01

    Liquid freshwater fluxes from the Greenland ice sheet affect ocean water properties and circulation on local, regional and basin-wide scales, with associated biosphere effects. The exact impact, however, depends on the volume, timing and location of freshwater releases, which are poorly known. In particular, the transformation into liquid freshwater of icebergs, which make up roughly 30-50% of the ice-sheet mass loss, is not well understood. Here we estimate the spatial and temporal distribution of the freshwater flux for the Helheim-Sermilik glacier-fjord system in southeast Greenland using an iceberg-melt model that resolves the subsurface iceberg melt. By estimating seasonal variations in all the freshwater sources, we confirm quantitatively that iceberg melt is the largest annual freshwater source in this type of system. We also show that 68-78% of the iceberg melt is released below a depth of 20 m and, seasonally, about 40-100% of that melt is likely to remain at depth, in contrast with the usual model assumptions. Iceberg melt also peaks two months after all the other freshwater sources peak. Our methods provide a framework to assess individual freshwater sources in any tidewater system, and our results are particularly applicable to coastal regions of Greenland with high solid-ice discharge.

  16. Acetylcholine Receptors in Model Membranes: Structure/Function Correlates.

    DTIC Science & Technology

    1985-12-01

    Approved for public release; distribution unlimited. Annual report. University of California, San Diego, B-019, La Jolla, California 92093. The findings in this report are not to be... Equipment noted: electrodes E-255 and E-206 (In Vivo Metric Systems, Healdsburg, CA); DC source (Omnical 2001, WPI Instruments, New Haven, CT); RACAL

  17. VizieR Online Data Catalog: The Chandra Source Catalog, Release 1.1 (Evans+ 2012)

    NASA Astrophysics Data System (ADS)

    Evans, I. N.; Primini, F. A.; Glotfelty, C. S.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G.; Grier, J. D.; Hain, R. M.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Kashyap, V. L.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Mossman, A. E.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2014-01-01

    This version of the catalog is release 1.1. It includes the information contained in release 1.0.1, plus point and compact source data extracted from HRC imaging observations, and catch-up ACIS observations released publicly prior to the end of 2009. (1 data file).

  18. Identifying Patterns in the Weather of Europe for Source Term Estimation

    NASA Astrophysics Data System (ADS)

    Klampanos, Iraklis; Pappas, Charalambos; Andronopoulos, Spyros; Davvetas, Athanasios; Ikonomopoulos, Andreas; Karkaletsis, Vangelis

    2017-04-01

    During emergencies that involve the release of hazardous substances into the atmosphere, the potential health effects on the human population and the environment are of primary concern. Such events have occurred in the past, most notably involving radioactive and toxic substances. Examples of radioactive release events include the Chernobyl accident in 1986, as well as the more recent Fukushima Daiichi accident in 2011. Often, the release of dangerous substances into the atmosphere is detected at locations different from the release origin. The objective of this work is the rapid estimation of such unknown sources shortly after the detection of dangerous substances in the atmosphere, with an initial focus on nuclear or radiological releases. Typically, after the detection of a radioactive substance in the atmosphere indicating the occurrence of an unknown release, the source location is estimated via inverse modelling. However, depending on factors such as the desired spatial resolution, traditional inverse modelling can be computationally time-consuming. This is especially true for cases where complex topography and weather conditions are involved, and can therefore be problematic when timing is critical. Making use of machine learning techniques and the Big Data Europe platform, our approach moves the bulk of the computation to before any such event takes place, therefore allowing rapid initial, albeit rougher, estimations of the source location. Our proposed approach is based on the automatic identification of weather patterns within the European continent. Identifying weather patterns has long been an active research field. Our case is differentiated by its focus on plume dispersion patterns and on those meteorological variables that affect dispersion the most.
    For a small set of recurrent weather patterns, we simulate hypothetical radioactive releases from a pre-known set of nuclear reactor locations and for different substance and temporal parameters, using the Java flavour of the Euratom-funded RODOS (Real-time On-line DecisiOn Support) system for off-site emergency management after nuclear accidents. Once dispersions have been pre-computed, and immediately after a detected release, the currently observed weather can be matched to the derived weather classes. Since each weather class corresponds to a different plume dispersion pattern, the classes closest to an unseen weather sample, say the current weather, are the most likely to lead us to the release origin. In addressing the above problem, we make use of multiple years of weather reanalysis data from NCAR's version of ECMWF's ERA-Interim. To derive useful weather classes, we evaluate several algorithms on multiple variables, ranging from straightforward unsupervised clustering to more complex methods, including relevant neural-network algorithms. Variables and feature sets, clustering algorithms and evaluation approaches are all dealt with and presented experimentally. The Big Data Europe platform allows for the implementation and execution of the above tasks in the cloud, in a scalable, robust and efficient way.
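
    The match-to-precomputed-classes step can be sketched with a toy clustering example. The minimal k-means routine and the synthetic two-regime data below are illustrative stand-ins; the paper evaluates several algorithms on real ERA-Interim fields:

```python
import numpy as np

def kmeans(samples, k, iters=20):
    """Minimal k-means: cluster flattened weather fields into k classes.
    Deterministic initialisation from evenly spaced samples."""
    idx = np.linspace(0, len(samples) - 1, k).astype(int)
    centroids = samples[idx].astype(float)
    for _ in range(iters):
        # assign every field to its nearest class centroid
        dists = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # skip emptied clusters
                centroids[j] = samples[labels == j].mean(axis=0)
    return centroids, labels

def match_weather(current_field, centroids):
    """Index of the precomputed weather class closest to the observed field;
    its cached dispersion runs would then be consulted first."""
    return int(np.linalg.norm(centroids - current_field, axis=1).argmin())

# toy stand-in for reanalysis fields: two well-separated synthetic regimes
rng = np.random.default_rng(0)
regime_a = rng.normal(0.0, 0.1, size=(40, 16))
regime_b = rng.normal(5.0, 0.1, size=(40, 16))
fields = np.vstack([regime_a, regime_b])
centroids, labels = kmeans(fields, k=2)
obs = np.full(16, 5.0)                         # "current weather"
print(match_weather(obs, centroids))           # class of regime B
```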

  19. Does Don Fisher's high-pressure manifold model account for phloem transport and resource partitioning?

    PubMed Central

    Patrick, John W.

    2013-01-01

    The pressure flow model of phloem transport envisaged by Münch (1930) has gained wide acceptance. Recently, however, the model has been questioned on structural and physiological grounds. For instance, sub-structures of sieve elements may reduce their hydraulic conductances to levels that impede flow rates of phloem sap and observed magnitudes of pressure gradients to drive flow along sieve tubes could be inadequate in tall trees. A variant of the Münch pressure flow model, the high-pressure manifold model of phloem transport introduced by Donald Fisher may serve to reconcile at least some of these questions. To this end, key predicted features of the high-pressure manifold model of phloem transport are evaluated against current knowledge of the physiology of phloem transport. These features include: (1) An absence of significant gradients in axial hydrostatic pressure in sieve elements from collection to release phloem accompanied by transport properties of sieve elements that underpin this outcome; (2) Symplasmic pathways of phloem unloading into sink organs impose a major constraint over bulk flow rates of resources translocated through the source-path-sink system; (3) Hydraulic conductances of plasmodesmata, linking sieve elements with surrounding phloem parenchyma cells, are sufficient to support and also regulate bulk flow rates exiting from sieve elements of release phloem. The review identifies strong circumstantial evidence that resource transport through the source-path-sink system is consistent with the high-pressure manifold model of phloem transport. The analysis then moves to exploring mechanisms that may link demand for resources, by cells of meristematic and expansion/storage sinks, with plasmodesmal conductances of release phloem. The review concludes with a brief discussion of how these mechanisms may offer novel opportunities to enhance crop biomass yields. PMID:23802003

  20. Mapping Site Remediation with Electrical Resistivity Tomography Explored via Coupled-Model Simulations

    NASA Astrophysics Data System (ADS)

    Power, C.; Gerhard, J. I.; Tsourlos, P.; Giannopoulos, A.

    2011-12-01

    Remediation programs for sites contaminated with dense non-aqueous phase liquids (DNAPLs) would benefit from an ability to non-intrusively map the evolving volume and extent of the DNAPL source zone. Electrical resistivity tomography (ERT) is a well-established geophysical tool, widely used outside the remediation industry, that has significant potential for mapping DNAPL source zones. However, that potential has not been realized due to challenges in data interpretation from contaminated sites - in either a qualitative or quantitative way. The objective of this study is to evaluate the potential of ERT to map realistic, evolving DNAPL source zones within complex subsurface environments during remedial efforts. For this purpose, a novel coupled model was developed that integrates a multiphase flow model (DNAPL3D-MT), which generates realistic DNAPL release scenarios, with 3DINV, an ERT model which calculates the corresponding resistivity response. This presentation will describe the developed model coupling methodology, which integrates published petrophysical relationships to generate an electrical resistivity field that accounts for both the spatial heterogeneity of subsurface soils and the evolving spatial distribution of fluids (including permeability, porosity, clay content and air/water/DNAPL saturation). It will also present an example in which the coupled model was employed to explore the ability of ERT to track the remediation of a DNAPL source zone. A field-scale, three-dimensional release of chlorinated solvent DNAPL into heterogeneous clayey sand was simulated, including the subsurface migration and subsequent removal of the DNAPL source zone via dissolution in groundwater. Periodic surveys of this site via ERT applied at the surface were then simulated and inversion programs were used to calculate the subsurface distribution of electrical properties. 
This presentation will summarize this approach and its potential as a research tool exploring the range of site conditions under which ERT may prove useful in aiding DNAPL site remediation. Moreover, it is expected to provide a cost-effective avenue to test optimum ERT data acquisition, inversion and interpretative tools at contaminated sites.
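
    The "published petrophysical relationships" are not named in the abstract; Archie's law is the standard starting point for relating pore-fluid saturations to bulk resistivity, so a hedged sketch of that coupling step might look like:

```python
def bulk_resistivity(porosity, water_saturation, rho_water=20.0,
                     a=1.0, m=2.0, n=2.0):
    """Archie's law: rho = a * rho_w * phi**(-m) * Sw**(-n).
    DNAPL is an electrical insulator, so displacing pore water
    (lowering Sw) raises the bulk resistivity the ERT survey sees.
    All parameter values here are assumed, illustrative numbers."""
    return a * rho_water * porosity**(-m) * water_saturation**(-n)

# clean sand vs. a DNAPL-invaded zone (40% of pore space filled with DNAPL)
clean = bulk_resistivity(porosity=0.35, water_saturation=1.0)
invaded = bulk_resistivity(porosity=0.35, water_saturation=0.6)
print(clean, invaded)   # the invaded zone is more resistive
```

    As the source zone dissolves and Sw recovers toward 1, the modelled resistivity anomaly fades, which is the signal the periodic ERT surveys are tracking.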

  1. Radiological assessment. A textbook on environmental dose analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Till, J.E.; Meyer, H.R.

    1983-09-01

    Radiological assessment is the quantitative process of estimating the consequences to humans resulting from the release of radionuclides to the biosphere. It is a multidisciplinary subject requiring the expertise of a number of individuals in order to predict source terms, describe environmental transport, calculate internal and external dose, and extrapolate dose to health effects. Up to this time there has been available no comprehensive book describing, on a uniform and comprehensive level, the techniques and models used in radiological assessment. Radiological Assessment is based on material presented at the 1980 Health Physics Society Summer School held in Seattle, Washington. The material has been expanded and edited to make it comprehensive in scope and useful as a text. Topics covered include (1) source terms for nuclear facilities and medical and industrial sites; (2) transport of radionuclides in the atmosphere; (3) transport of radionuclides in surface waters; (4) transport of radionuclides in groundwater; (5) terrestrial and aquatic food chain pathways; (6) reference man: a system for internal dose calculations; (7) internal dosimetry; (8) external dosimetry; (9) models for special-case radionuclides; (10) calculation of health effects in irradiated populations; (11) evaluation of uncertainties in environmental radiological assessment models; (12) regulatory standards for environmental releases of radionuclides; (13) development of computer codes for radiological assessment; and (14) assessment of accidental releases of radionuclides.
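
    For the atmospheric-transport topic, the workhorse screening model is the Gaussian plume equation. A minimal sketch with illustrative parameter values (not taken from the book):

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (e.g. Bq/m^3) at
    crosswind offset y and height z, for release rate q (Bq/s), wind
    speed u (m/s) and effective stack height h (m). sigma_y/sigma_z are
    the dispersion coefficients evaluated at the downwind distance."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# centerline ground-level concentration for a 50 m stack; the sigma values
# stand in for a stability-class lookup at ~1 km downwind
c = plume_concentration(q=1.0e9, u=5.0, y=0.0, z=0.0,
                        h=50.0, sigma_y=80.0, sigma_z=40.0)
print(c)
```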

  2. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.
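
    A sensitivity analysis of this kind can be illustrated with a one-at-a-time (OAT) scheme on a toy surrogate model. The parameter names below are illustrative stand-ins, not actual ADMS inputs:

```python
def oat_sensitivity(model, baseline, delta=0.1):
    """One-at-a-time sensitivity: perturb each input by +/-10% around the
    baseline and report the output change normalised by the baseline output
    and the perturbation size (an elasticity-like index)."""
    base_out = model(**baseline)
    sens = {}
    for name, value in baseline.items():
        hi = dict(baseline, **{name: value * (1 + delta)})
        lo = dict(baseline, **{name: value * (1 - delta)})
        sens[name] = (model(**hi) - model(**lo)) / (2 * delta * base_out)
    return sens

# toy ground-level-concentration surrogate: emission rate Q, wind speed u,
# source height h (hypothetical functional form, for illustration only)
model = lambda Q, u, h: Q / (u * h**1.5)
print(oat_sensitivity(model, {"Q": 1.0e6, "u": 5.0, "h": 2.0}))
```

    Ranking the absolute values of these indices is one simple way to prioritise which parameters deserve careful independent estimation before model calibration.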

  3. The Chandra Source Catalog 2.0

    NASA Astrophysics Data System (ADS)

    Evans, Ian N.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Miller, Joseph; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    The current version of the Chandra Source Catalog (CSC) continues to be well utilized by the astronomical community. Usage over the past year has continued to average more than 15,000 searches per month. Version 1.1 of the CSC, released in 2010, includes properties and data for 158,071 detections, corresponding to 106,586 distinct X-ray sources on the sky. The second major release of the catalog, CSC 2.0, will be made available to the user community in early 2018, and preliminary lists of detections and sources are available now. Release 2.0 will roughly triple the size of the current version of the catalog to an estimated 375,000 detections, corresponding to ~315,000 unique X-ray sources. Compared to release 1.1, the limiting sensitivity for compact sources in CSC 2.0 is significantly enhanced. This improvement is achieved by using a two-stage approach that involves stacking (co-adding) multiple observations of the same field prior to source detection, and then using an improved source detection approach that enables us to detect point sources down to ~5 net counts on-axis for exposures shorter than ~15 ks. In addition to enhanced source detection capabilities, improvements to the Bayesian aperture photometry code included in release 2.0 provide robust photometric probability density functions (PDFs) in crowded fields, even for low-count detections. All post-aperture photometry properties (e.g., hardness ratios, source variability) work directly from the PDFs in release 2.0.
    CSC 2.0 also adds a Bayesian Blocks analysis of the multi-band aperture photometry PDFs to identify multiple observations of the same source that have similar photometric properties, and therefore can be analyzed simultaneously to improve S/N. We briefly describe these and other updates that significantly enhance the scientific utility of CSC 2.0 when compared to the earlier catalog release. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
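
    The aperture-photometry PDFs can be illustrated with the simplest version of the problem: a Poisson likelihood for source-plus-background counts with a flat prior on the nonnegative net source intensity. This is a heavily simplified sketch, not the CSC code, which also handles overlapping apertures and crowded fields:

```python
import numpy as np
from math import lgamma

def source_posterior(n_obs, b_expected, s_grid):
    """Posterior p(s | n) for net source intensity s given n_obs aperture
    counts and expected background b_expected (Poisson likelihood, flat
    prior on s >= 0), normalised on the supplied grid."""
    lam = s_grid + b_expected
    log_post = n_obs * np.log(lam) - lam - lgamma(n_obs + 1)
    post = np.exp(log_post - log_post.max())       # stabilised exponentiation
    return post / (post.sum() * (s_grid[1] - s_grid[0]))

s = np.linspace(0.0, 30.0, 1201)                   # net-count grid, step 0.025
pdf = source_posterior(n_obs=7, b_expected=2.0, s_grid=s)
print(s[pdf.argmax()])                             # peaks near n_obs - b_expected = 5
```

    Downstream quantities such as hardness ratios can then be formed by combining per-band PDFs like this one, rather than by propagating single point estimates.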

  4. Treatment of solid tumors by interstitial release of recoiling short-lived alpha emitters

    NASA Astrophysics Data System (ADS)

    Arazi, L.; Cooks, T.; Schmidt, M.; Keisari, Y.; Kelson, I.

    2007-08-01

    A new method utilizing alpha particles to treat solid tumors is presented. Tumors are treated with interstitial radioactive sources which continually release short-lived alpha emitting atoms from their surface. The atoms disperse inside the tumor, delivering a high dose through their alpha decays. We implement this scheme using thin wire sources impregnated with 224Ra, which release by recoil 220Rn, 216Po and 212Pb atoms. This work aims to demonstrate the feasibility of our method by measuring the activity patterns of the released radionuclides in experimental tumors. Sources carrying 224Ra activities in the range 10-130 kBq were used in experiments on murine squamous cell carcinoma tumors. These included gamma spectroscopy of the dissected tumors and major organs, Fuji-plate autoradiography of histological tumor sections and tissue damage detection by Hematoxylin-Eosin staining. The measurements focused on 212Pb and 212Bi. The 220Rn/216Po distribution was treated theoretically using a simple diffusion model. A simplified scheme was used to convert measured 212Pb activities to absorbed dose estimates. Both physical and histological measurements confirmed the formation of a 5-7 mm diameter necrotic region receiving a therapeutic alpha-particle dose around the source. The shape of the necrotic regions closely corresponded to the measured activity patterns. 212Pb was found to leave the tumor through the blood at a rate which decreased with tumor mass. Our results suggest that the proposed method, termed DART (diffusing alpha-emitters radiation therapy), may potentially be useful for the treatment of human patients.
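
    A "simple diffusion model" of the kind mentioned can be sketched as steady-state diffusion with radioactive decay around a point-like source, C(r) = S exp(-r/L) / (4*pi*D*r) with diffusion length L = sqrt(D/lambda). The 220Rn half-life (~55.6 s) is a physical constant; the diffusion coefficient and source rate below are assumed, illustrative values:

```python
import math

def activity_concentration(r, source_rate, diff_coeff, half_life):
    """Steady-state diffusion with decay around a point-like source:
    C(r) = S * exp(-r/L) / (4*pi*D*r), diffusion length L = sqrt(D/lambda).
    Short-lived atoms decay before diffusing far, so the dose is confined
    to a small region around the wire."""
    lam = math.log(2) / half_life
    diff_len = math.sqrt(diff_coeff / lam)
    return source_rate * math.exp(-r / diff_len) / (4 * math.pi * diff_coeff * r)

# 220Rn half-life ~55.6 s; D and S are assumed values (m^2/s, atoms/s)
for r_mm in (1.0, 3.0, 5.0):
    c = activity_concentration(r_mm * 1e-3, source_rate=1e4,
                               diff_coeff=1e-9, half_life=55.6)
    print(r_mm, c)
```

    The steep fall-off with r is what produces a therapeutic dose within a few millimetres of the wire while sparing more distant tissue.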

  5. Organic aerosol sources and partitioning in CMAQv5.2

    EPA Science Inventory

    We describe a major CMAQ update, available in version 5.2, which explicitly treats the semivolatile mass transfer of primary organic aerosol compounds, in agreement with available field and laboratory observations. Until this model release, CMAQ has considered these compounds to ...

  6. R-LINE: A Line Source Dispersion Model for Near-Surface Releases

    EPA Science Inventory

    Based on Science Advisory Board and National Research Council recommendations, EPA-ORD initiated research on near-road air quality and health effects. Field measurements indicated that exposures to traffic-emitted air pollutants near roads can be influenced by complexities of r...

  7. Update to An Inventory of Sources and Environmental Releases of Dioxin-Like Compounds in the United States for the Years 1987, 1995, and 2000 (2013, External Review Draft)

    EPA Science Inventory

    In 2006, EPA published an inventory of sources and environmental releases of dioxin-like compounds in the United States. This draft report presents an update and revision to that dioxin source inventory. It also presents updated estimates of environmental releases of dioxin-like...

  8. FLORIDA TOWER FOOTPRINT EXPERIMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WATSON,T.B.; DIETZ, R.N.; WILKE, R.

    2007-01-01

    The Florida Footprint experiments were a series of field programs in which perfluorocarbon tracers were released in different configurations centered on a flux tower to generate a data set that can be used to test transport and dispersion models. These models are used to determine the sources of the CO2 that cause the fluxes measured at eddy covariance towers. Experiments were conducted in a managed slash pine forest, 10 km northeast of Gainesville, Florida, in 2002, 2004, and 2006, in atmospheric conditions that ranged from well mixed to very stable, including the transition period between convective conditions at midday and stable conditions after sunset. There were a total of 15 experiments. The characteristics of the PFTs, details of sampling and analysis methods, quality control measures, and analytical statistics including confidence limits are presented. Details of the field programs including tracer release rates, tracer source configurations, and configuration of the samplers are discussed. The result of this experiment is a high quality, well documented tracer and meteorological data set that can be used to improve and validate canopy dispersion models.

  9. Role of perisynaptic parameters in neurotransmitter homeostasis - computational study of a general synapse

    PubMed Central

    Pendyam, Sandeep; Mohan, Ashwin; Kalivas, Peter W.; Nair, Satish S.

    2015-01-01

    Extracellular neurotransmitter concentrations vary over a wide range depending on the type of neurotransmitter and location in the brain. Neurotransmitter homeostasis near a synapse is achieved by a balance of several mechanisms including vesicular release from the presynapse, diffusion, uptake by transporters, non-synaptic production, and regulation of release by autoreceptors. These mechanisms are also affected by the glia surrounding the synapse. However, the role of these mechanisms in achieving neurotransmitter homeostasis is not well understood. A biophysical modeling framework was proposed to reverse engineer glial configurations and parameters related to homeostasis for synapses that support a range of neurotransmitter gradients. Model experiments reveal that synapses with extracellular neurotransmitter concentrations in the micromolar range require non-synaptic neurotransmitter sources and tight synaptic isolation by extracellular glial formations. The model was used to identify the role of perisynaptic parameters on neurotransmitter homeostasis, and to propose glial configurations that could support different levels of extracellular neurotransmitter concentrations. Ranking the parameters based on their effect on neurotransmitter homeostasis, non-synaptic sources were found to be the most important followed by transporter concentration and diffusion coefficient. PMID:22460547
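
    The balance the model probes, non-synaptic supply against saturable transporter uptake, reduces in its simplest form to solving source = Vmax·C/(Km + C) for the steady-state extracellular concentration. A sketch with hypothetical parameter values (not those of the paper):

```python
def steady_state_concentration(source_rate, vmax, km):
    """Solve source = Vmax*C/(Km + C) for the extracellular steady state:
    non-synaptic supply balanced against saturable transporter uptake.
    Requires source_rate < vmax, otherwise uptake cannot keep pace."""
    assert source_rate < vmax, "uptake saturates: no steady state"
    return source_rate * km / (vmax - source_rate)

# hypothetical glutamate-like numbers (uM/s and uM), illustration only
ambient = steady_state_concentration(source_rate=0.5, vmax=5.0, km=20.0)
boosted = steady_state_concentration(source_rate=1.0, vmax=5.0, km=20.0)
print(ambient, boosted)   # doubling the non-synaptic source more than doubles C
```

    The nonlinearity is the point: as the source rate approaches transporter capacity, small changes in supply produce large changes in ambient concentration, consistent with the paper's ranking of non-synaptic sources as the most influential parameter.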

  10. Stress concentration on Intraplate Seismicity: Numerical Modeling of Slab-released Fluids in the New Madrid Seismic Zone

    NASA Astrophysics Data System (ADS)

    Saxena, A.; Choi, E.; Powell, C. A.

    2017-12-01

    The mechanism behind the seismicity of the New Madrid Seismic Zone (NMSZ), the major intraplate earthquake source in the Central and Eastern US (CEUS), is still debated, but new insights are being provided by recent tomographic studies involving USArray. A high-resolution tomography study by Nyamwandha et al. (2016) in the NMSZ indicates the presence of low (3%-5%) upper mantle Vp and Vs anomalies in the depth range 100 to 250 km. The elevated anomaly magnitudes are difficult to explain by temperature alone. As the low-velocity anomalies beneath northeast China are attributed to fluids released from the stagnant Pacific slab, water released from the stagnant Laramide slab, presently located at transition-zone depths beneath the CEUS, might be contributing to the low-velocity features in this region's upper mantle. Here, we investigate the potential impact of the slab-released fluids on the stresses at seismogenic depths using numerical modeling. We convert the tomographic results into a temperature field under various assumed values of spatially uniform water content. In more realistic cases, water content is added only where the converted temperature exceeds the melting temperature of olivine. Viscosities are then computed based on the temperature and water content and supplied to our geodynamic models, built with PyLith, an open-source software for crustal dynamics. The model results show that increasing water content weakens the upper mantle more than temperature alone and thus elevates the differential stress in the upper crust. These results can better explain the tomography results and seismicity without invoking melting. We also invert the tomography results for the volume fraction of orthopyroxene and temperature, and compare the resultant stresses with those for pure olivine.
    To enhance reproducibility, selected models from this study will be made available as sharable, reproducible packages enabled by the EarthCube building-block project GeoTrust.
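
    The viscosity step, computing effective viscosity from temperature and water content, can be sketched with an Arrhenius-type flow law with simple water weakening. All constants below are assumed for illustration and are not taken from the study:

```python
import math

R_GAS = 8.314  # J/(mol K)

def effective_viscosity(temp_k, water_ppm, eta_ref=1.0e21, temp_ref=1600.0,
                        e_act=3.0e5, r_water=1.0):
    """Arrhenius-type viscosity with water weakening,
    eta = eta_ref * (100 / C_w)**r * exp((E/R) * (1/T - 1/T_ref)),
    normalised to eta_ref at T_ref = 1600 K and 100 ppm water.
    All reference values and the activation energy are assumed."""
    water_term = (100.0 / water_ppm) ** r_water
    return eta_ref * water_term * math.exp(e_act / R_GAS * (1 / temp_k - 1 / temp_ref))

dry = effective_viscosity(1500.0, water_ppm=100.0)
wet = effective_viscosity(1500.0, water_ppm=1000.0)
print(dry, wet)   # tenfold water content -> tenfold weaker, at this exponent
```

    A weaker (wetter) upper mantle carries less of the tectonic load, transferring differential stress into the overlying crust, which is the mechanism the models test.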

  11. Thermodynamics of various F420 coenzyme models as sources of electrons, hydride ions, hydrogen atoms and protons in acetonitrile.

    PubMed

    Xia, Ke; Shen, Guang-Bin; Zhu, Xiao-Qing

    2015-06-14

    32 F420 coenzyme models with alkylation of the three different N atoms (N1, N3 and N10) in the core structure (XFH(-)) were designed and synthesized and the thermodynamic driving forces (defined in terms of the molar enthalpy changes or the standard redox potentials in this work) of the 32 XFH(-) releasing hydride ions, hydrogen atoms and electrons, the thermodynamic driving forces of the 32 XFH˙ releasing protons and hydrogen atoms and the thermodynamic driving forces of XF(-)˙ releasing electrons in acetonitrile were determined using titration calorimetry and electrochemical methods. The effects of the methyl group at N1, N3 and N10 and a negative charge on N1 and N10 atoms on the six thermodynamic driving forces of the F420 coenzyme models and their related reaction intermediates were examined; the results show that seating arrangements of the methyl group and the negative charge have remarkably different effects on the thermodynamic properties of the F420 coenzyme models and their related reaction intermediates. The effects of the substituents at C7 and C8 on the six thermodynamic driving forces of the F420 coenzyme models and their related reaction intermediates were also examined; the results show that the substituents at C7 and C8 have good Hammett linear free energy relationships with the six thermodynamic parameters. Meanwhile, a reasonable determination of possible reactions between members of the F420 family and NADH family in vivo was given according to a thermodynamic analysis platform constructed using the elementary step thermodynamic parameter of F420 coenzyme model 2FH(-) and NADH model MNAH releasing hydride ions in acetonitrile. The information disclosed in this work can not only fill a gap in the chemical thermodynamics of F420 coenzyme models as a class of very important organic sources of electrons, hydride ions, hydrogen atoms and protons, but also strongly promote the fast development of the chemistry and applications of F420 coenzyme.

  12. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting: uncertainty quantification, optimal information collection, and data assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for the quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to guarantee accurate source parameter estimation. The performance of source characterization is strongly affected by the placement of the sensors used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that the measurement of better data is guaranteed.
This is achieved by maximizing the mutual information between model predictions and observed data, given a set of kinetic constraints on mobile sensors. Dynamic Programming method has been utilized to solve the resulting optimal control problem. To complete the loop of source characterization process, two different estimation techniques, minimum variance estimation framework and Bayesian Inference method has been developed to fuse model forecast with measurement data. Incomplete information regarding the distribution of associated noise signal in measurement data, is another major challenge in the source characterization of plume dispersion incidents. This frequently happens in data assimilation of atmospheric data by using the satellite imagery. This occurs due to the fact that satellite imagery data can be polluted with noise, depending on weather conditions, clouds, humidity, etc. Unfortunately, there is no accurate procedure to quantify the error in recorded satellite data. Hence, using classical data assimilation methods in this situation is not straight forward. In this dissertation, the basic idea of a novel approach has been proposed to tackle these types of real world problems with more accuracy and robustness. A simple example demonstrating the real-world scenario is presented to validate the developed methodology.
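    For a linear-Gaussian observation model, the mutual-information objective described above has a closed form, which makes the sensor-selection idea easy to sketch. The following is a minimal illustration, not the dissertation's implementation: the prior covariance, the sensitivity rows, and the site names are all hypothetical.

```python
import numpy as np

def mutual_information(H, prior_cov, noise_var):
    # I(theta; y) = 0.5 * (log det(H P H^T + R) - log det(R))
    # for the linear-Gaussian model y = H theta + noise
    R = noise_var * np.eye(H.shape[0])
    S = H @ prior_cov @ H.T + R
    return 0.5 * (np.linalg.slogdet(S)[1] - np.linalg.slogdet(R)[1])

# Hypothetical 2-parameter source and three candidate sensor sites,
# each with a different linearized sensitivity row.
prior_cov = np.diag([1.0, 0.5])
candidates = {
    "site_A": np.array([[0.9, 0.1]]),
    "site_B": np.array([[0.1, 0.9]]),
    "site_C": np.array([[0.5, 0.5]]),
}
scores = {name: mutual_information(H, prior_cov, noise_var=0.01)
          for name, H in candidates.items()}
best_site = max(scores, key=scores.get)
```

A greedy or dynamic-programming search over such scores, subject to the sensors' motion constraints, is the kind of optimal control problem the abstract refers to.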

  13. Monte Carlo Model Insights into the Lunar Sodium Exosphere

    NASA Technical Reports Server (NTRS)

    Hurley, Dana M.; Killen, R. M.; Sarantos, M.

    2012-01-01

    Sodium in the lunar exosphere is released from the lunar regolith by several mechanisms, including photon stimulated desorption (PSD), impact vaporization, electron stimulated desorption, and ion sputtering. Usually, PSD dominates; however, transient events such as meteor showers and coronal mass ejections can temporarily enhance the other release mechanisms so that they become dominant. The interaction between sodium and the regolith is important in determining the density and spatial distribution of sodium in the lunar exosphere. The temperature at which sodium sticks to the surface is one factor. In addition, the amount of thermal accommodation during the encounter between a sodium atom and the surface affects the exospheric distribution. Finally, the fraction of particles that stick while the surface is cold and are re-released when the surface warms also affects the exospheric density. In [1], we showed the "ambient" sodium exosphere from Monte Carlo modeling with a fixed source rate and fixed surface interaction parameters, and compared the enhancement when a CME passes the Moon to the ambient conditions. Here, we compare model results to data in order to determine the source rates and surface interaction parameters that provide the best fit of the model to the data.
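    A minimal Monte Carlo sketch of the surface-interaction bookkeeping described above (not the authors' model): test atoms are launched with Maxwellian speeds, atoms faster than the lunar escape speed leave the exosphere, and each landing sticks with a fixed probability. The temperature, the sticking probability, and the simplification that re-released atoms never escape are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

V_ESC = 2380.0            # lunar escape speed, m/s
M_NA = 23 * 1.66054e-27   # sodium atom mass, kg
KB = 1.380649e-23         # Boltzmann constant, J/K

def launch_speeds(temperature, n):
    # Maxwell-Boltzmann speeds: magnitude of a 3-D Gaussian velocity
    sigma = np.sqrt(KB * temperature / M_NA)
    return np.linalg.norm(rng.normal(0.0, sigma, (n, 3)), axis=1)

def simulate(n=10_000, temperature=400.0, stick_prob=0.5, max_hops=50):
    # Follow n test atoms; each ballistic hop ends in sticking with
    # probability stick_prob, otherwise the atom hops again (simplified:
    # atoms that fail to escape on launch are assumed to stay bound).
    v = launch_speeds(temperature, n)
    escaped = v >= V_ESC
    active = ~escaped
    stuck = np.zeros(n, dtype=bool)
    for _ in range(max_hops):
        landing = active & (rng.random(n) < stick_prob)
        stuck |= landing
        active &= ~landing
        if not active.any():
            break
    return escaped.mean(), stuck.mean()

esc_frac, stuck_frac = simulate()
```

Varying `stick_prob` with local surface temperature is the kind of surface-interaction parameter the abstract fits to data.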

  14. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with estimation of the source term in the conventional linear inverse problem y = Mx, where the vector of observations y is related to the unknown source term x through the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^-1 (y - Mx) + x^T B^-1 x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different choices of the matrices R and B yield different types of regularization; for example, Tikhonov regularization takes B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and an unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices for the structure of R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since exact inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
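    For fixed R and B, the minimization above has the standard closed form x_hat = (M^T R^-1 M + B^-1)^-1 M^T R^-1 y, the posterior mean in the Gaussian model. A minimal numerical sketch with a hypothetical 4-sampler, 3-interval SRS matrix follows; the variational treatment of unknown R and B described in the abstract is not reproduced here.

```python
import numpy as np

def posterior_source_term(M, y, R, B):
    # MAP estimate minimizing (y - Mx)^T R^-1 (y - Mx) + x^T B^-1 x:
    # x_hat = (M^T R^-1 M + B^-1)^-1 M^T R^-1 y
    Ri = np.linalg.inv(R)
    A = M.T @ Ri @ M + np.linalg.inv(B)
    return np.linalg.solve(A, M.T @ Ri @ y)

# Hypothetical SRS matrix: 4 samplers x 3 release intervals (illustrative only)
M = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.4, 1.0],
              [0.5, 0.5, 0.5]])
x_true = np.array([2.0, 0.0, 1.0])
y = M @ x_true                      # noise-free synthetic observations
R = 1e-6 * np.eye(4)                # tight measurement-error covariance
B = 100.0 * np.eye(3)               # weak (broad) prior on the source term
x_hat = posterior_source_term(M, y, R, B)
```

With noise-free observations and a broad prior, the estimate recovers the true source term, illustrating why the interesting difficulties lie in choosing R and B rather than in the minimization itself.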

  15. Atmospheric modeling of Mars CH4 subsurface clathrates releases mimicking SAM and 2003 Earth-based detections

    NASA Astrophysics Data System (ADS)

    Pla-Garcia, Jorge

    2017-10-01

    The aim of this work is to establish the amount of mixing during all martian seasons to test whether CH4 releases inside or outside of Gale crater are consistent with MSL-SAM observations. Several modeling scenarios were configured, including instantaneous and steady releases, both inside and outside the crater. A simulation mimicking the 2003 Earth-based detections (Mumma et al. 2009, or M09) was also performed. In the instantaneous-release-inside-Gale experiments, Ls270 was shown to be the fastest-mixing season, when air within and outside the crater was well mixed: all tracer mass inside the crater is diluted after just 8 hours. The mixing of near-surface crater air with the external environment during the rest of the year is still rapid, but slower than at Ls270. In the instantaneous release outside Gale (NW) experiment, in just 12 hours the CH4 that reaches the MSL landing location is diluted by six orders of magnitude. The timescale of mixing in the model is on the order of 1 sol regardless of season, whereas the duration of the CH4 peak observed by SAM is 100 sols. Therefore, either there is a steady release inside the crater, or there is a large-magnitude steady release outside the crater. In the steady-release Gale experiments, the CH4 flux rate from the ground is 1.8 kg m-2 s-1 (Gloesener et al. 2017) and is not predictive. In these experiments, the modeled CH4 values around the MSL location are ~200 times lower than those detected by SAM. The CH4 concentration varies by orders of magnitude depending on the hour, so the timing of SAM measurements is important. With a larger (but more distant) release area outside the crater, similar CH4 values around MSL are modeled, so distance to the source is important. In the steady experiments mimicking the M09 detection release area, the modeled CH4 values around MSL are only 12 times lower than those detected by SAM. The highest value in the M09 modeled scenario (0.6 ppbv) is reached at Ls270 and is the highest of all modeled experiments.
    With our initial conditions, SAM should not be able to detect CH4, but if we multiply the flux by 12, increase the release area, or move it closer to MSL (or all of the above), it may be possible to obtain CH4 values that SAM could detect regardless of where the CH4 comes from.

  16. CrossWater - Modelling micropollutant loads from different sources in the Rhine basin

    NASA Astrophysics Data System (ADS)

    Moser, Andreas; Bader, Hans-Peter; Scheidegger, Ruth; Honti, Mark; Stamm, Christian

    2017-04-01

    The pressure on rivers from micropollutants (MPs) originating from various sources is a growing environmental issue that requires political regulation. The challenges for water management are numerous, particularly in international water basins. Spatial knowledge of MP sources and of water quality is a prerequisite for an effective water quality policy. In this study within the Rhine basin, the spatial patterns of MP sources and concentrations from different use classes of chemicals are investigated with a mass flow analysis and compared to the territorial jurisdictions that shape the spatial arrangement of water management. The source area of MPs depends on the specific use of a compound. Here, we focus on i) herbicides from agricultural land use, ii) biocides from material protection on buildings and iii) human pharmaceuticals from households. The total mass of MPs available for release to the stream network is estimated from statistics of sales and consumption data. Based on GIS data of agricultural land use, vector data of buildings, and wastewater treatment plant (WWTP) locations, respectively, the available mass of MPs is spatially distributed over the subcatchments of the Rhine basin. The modelling of concentrations in the rivers consists of two principal components. The first component - the substance transfer module - simulates the actual release of MPs to the stream network. This transfer is affected by many factors, rendering spatially distributed modelling a serious challenge. Here we use a parsimonious approach that aims to represent the first-order controls on the transfer processes. We use empirical loss rates relating concentration to river discharge for agricultural herbicides and to precipitation for biocides. For the pharmaceuticals, the release is coupled to human metabolism rates and elimination rates in WWTPs. The prediction uncertainty was quantified by an error model that takes the seasonality of the herbicide input into account.
    The second component - the routing module - links the contributions of the subcatchments and represents the in-stream transport and fate processes of the substances. The substance transfer module was calibrated using field studies that simultaneously provide data on application amounts of substances and on losses to the rivers. However, the predictive uncertainty was often large because of mismatches of high peaks. The model was subsequently validated with independent data from several comprehensive sampling campaigns in Switzerland. Despite generally acceptable performance, some compounds were poorly simulated for some catchments. Data inspection suggests that uncertainty about timing and application amounts is a major limitation. Finally, the calibrated model is used to simulate concentration time series for the Rhine and its main tributaries. The corresponding results will be presented.
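    The two components can be caricatured in a few lines: an empirical loss rate turns applied mass and discharge into a subcatchment load, and a routing step accumulates loads down the network. All numbers, catchment names, and the linear loss-rate form below are illustrative assumptions, not calibrated CrossWater parameters.

```python
def herbicide_load(applied_mass, discharge, loss_rate=1e-3, ref_discharge=1.0):
    # Transfer-module sketch: the loss fraction grows linearly with
    # discharge (a hypothetical first-order control), capped at 1.
    return applied_mass * min(loss_rate * discharge / ref_discharge, 1.0)

def accumulated_load(node, local_loads, upstream):
    # Routing-module sketch: the load at a node is its own release plus
    # all upstream loads (in-stream fate processes ignored here).
    return local_loads[node] + sum(
        accumulated_load(u, local_loads, upstream) for u in upstream.get(node, ()))

# Illustrative subcatchments: (applied mass in kg, relative mean discharge)
inputs = {"aare": (1000.0, 2.0), "neckar": (500.0, 4.0), "rhine": (200.0, 1.0)}
loads = {c: herbicide_load(mass, q) for c, (mass, q) in inputs.items()}
total = accumulated_load("rhine", loads, {"rhine": ["aare", "neckar"]})
```

Replacing the linear loss fraction with compound-specific empirical rates, and adding in-stream decay to the routing step, would move this toward the structure the abstract describes.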

  17. High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters

    DTIC Science & Technology

    2015-10-01

    ...dispersion depends on the Riemann solver • Variables are allowed to be discontinuous at the cell interfaces • Advantages - Method is conservative... release; distribution unlimited • Discontinuous Galerkin (2) • Riemann problems are solved at each interface to compute fluxes • The source of dissipation...

  18. Application of a Visco-Plastic Continuum Model to the Modeling of Near-Source Phenomenology and its Implications on Close-In Seismic Observables

    NASA Astrophysics Data System (ADS)

    Rougier, E.; Knight, E. E.

    2015-12-01

    The Source Physics Experiments (SPE) project is funded by the U.S. Department of Energy at the Nevada National Security Site. The project consists of a series of underground explosive tests designed to gain more insight into the generation and propagation of seismic energy from underground explosions in hard rock media (granite). Until now, four tests (SPE-1, SPE-2, SPE-3 and SPE-4Prime) with yields ranging from 87 kg to 1000 kg have been conducted in the same borehole. The generation and propagation of seismic waves is heavily influenced by the different damage mechanisms occurring at different ranges from the explosive source, including pore crushing, compressive (shear) damage, joint damage, spallation, and fracture and fragmentation. Understanding these mechanisms and how they interact with each other is essential to the interpretation of the characteristics of close-in seismic observables. Recent observations demonstrate that, for relatively small and shallow chemical explosions in granite, such as SPE-1, -2 and -3, the formation of a cavity around the working point is not the main mechanism responsible for the release of seismic moment. Shear dilatancy (bulking occurring as a consequence of compressive damage) of the medium around the source has been proposed as an alternative damage mechanism that explains the seismic moment release observed in the experiments. In this work, the interaction between cavity formation and bulking is investigated via a series of computer simulations of the SPE-2 event. The simulations are conducted using a newly developed material model called AZ_Frac, a continuum-based, visco-plastic, strain-rate-dependent material model. One of its key features is its ability to describe continuum fracture processes while properly handling anisotropic material characteristics.
    The implications of the near-source numerical results for close-in seismic quantities, such as reduced displacement potentials and source spectra, are presented.

  19. Quantitative Assessment of Detection Frequency for the INL Ambient Air Monitoring Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sondrup, A. Jeffrey; Rood, Arthur S.

    A quantitative assessment of the Idaho National Laboratory (INL) air monitoring network was performed using frequency of detection as the performance metric. The INL air monitoring network consists of 37 low-volume air samplers in 31 different locations. Twenty of the samplers are located on INL (onsite) and 17 are located off INL (offsite). Detection frequencies were calculated using both BEA and ESER laboratory minimum detectable activity (MDA) levels. The CALPUFF Lagrangian puff dispersion model, coupled with 1 year of meteorological data, was used to calculate time-integrated concentrations at sampler locations for a 1-hour release of unit activity (1 Ci) for every hour of the year. The unit-activity time-integrated concentration (TICu) values were calculated at all samplers for releases from eight INL facilities. The TICu values were then scaled and integrated for a given release quantity and release duration. For all facilities, a ground-level release was modeled, emanating either from the center of the facility or from a point where significant emissions are possible. In addition to ground-level releases, three existing stacks at the Advanced Test Reactor Complex, Idaho Nuclear Technology and Engineering Center, and Materials and Fuels Complex were also modeled. Meteorological data from the 35 stations comprising the INL Mesonet network, data from the Idaho Falls Regional Airport, upper-air data from the Boise airport, and three-dimensional gridded data from the Weather Research and Forecasting model were used for modeling. Three representative radionuclides identified as key radionuclides in INL's annual National Emission Standards for Hazardous Air Pollutants evaluations were considered for the frequency-of-detection analysis: Cs-137 (beta-gamma emitter), Pu-239 (alpha emitter), and Sr-90 (beta emitter).
    Source-specific release quantities were calculated for each radionuclide such that the maximum inhalation dose at any publicly accessible sampler or at the National Emission Standards for Hazardous Air Pollutants maximum exposed individual location (i.e., Frenchman's Cabin) was no more than 0.1 mrem yr-1 (i.e., 1% of the 10 mrem yr-1 standard). Detection frequencies were calculated separately for the onsite and offsite monitoring networks. As expected, detection frequencies were generally lower for the offsite sampling network than for the onsite network. Overall, the monitoring network is very effective at detecting potential releases of Cs-137 or Sr-90 from all sources/facilities using either the ESER or BEA MDAs. The network was less effective at detecting releases of Pu-239. Maximum detection frequencies for Pu-239 using ESER MDAs ranged from 27.4 to 100% for onsite samplers and 3 to 80% for offsite samplers. Using BEA MDAs, the maximum detection frequencies for Pu-239 ranged from 2.1 to 100% for onsite samplers and 0 to 5.9% for offsite samplers. The only release that was not detected by any of the samplers under any conditions was a release of Pu-239 from the Idaho Nuclear Technology and Engineering Center main stack (CPP-708). The methodology described in this report could be used to improve sampler placement and detection frequency, provided clear performance objectives are defined.
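    The detection-frequency metric itself reduces to simple array bookkeeping once unit-activity TICs are available. The sketch below assumes a crude sampler model (collected activity roughly TIC x release quantity x sampled volume) and toy numbers; the report's actual scaling for release duration and sampler flow is not reproduced.

```python
import numpy as np

def detection_frequency(tic_unit, release_q, sample_volume, mda):
    # tic_unit: (n_hours, n_samplers) time-integrated concentration at each
    # sampler per unit activity released that hour.  A release is "detected"
    # when the activity collected by at least one sampler exceeds the MDA.
    collected = tic_unit * release_q * sample_volume   # crude sampler model
    return float((collected > mda).any(axis=1).mean())

# Toy TICu values for 3 hourly releases seen by 2 samplers (illustrative)
tic = np.array([[1.0e-9, 0.0],
                [0.0,    0.0],
                [2.0e-9, 1.0e-9]])
freq = detection_frequency(tic, release_q=1.0, sample_volume=1.0, mda=5e-10)
```

Comparing such frequencies across candidate sampler layouts is the optimization use case the report's closing sentence points at.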

  20. Development of an on-line source-tagged model for sulfate, nitrate and ammonium: A modeling study for highly polluted periods in Shanghai, China.

    PubMed

    Wu, Jian-Bin; Wang, Zifa; Wang, Qian; Li, Jie; Xu, Jianming; Chen, HuanSheng; Ge, Baozhu; Zhou, Guangqiang; Chang, Luyu

    2017-02-01

    An on-line source-tagged model coupled with an air quality model (Nested Air Quality Prediction Model System, NAQPMS) was applied to estimate source contributions to primary and secondary sulfate, nitrate and ammonium (SNA) during a representative winter period in Shanghai. This source-tagged model system can simultaneously track the spatial and temporal sources of SNA, which are apportioned to their respective primary precursors within a single simulation run. The results indicate that in the study period, local emissions in Shanghai accounted for over 20% of SNA contributions and that Jiangsu and Shandong were the two major non-local sources. In particular, non-local emissions had higher contributions during recorded pollution periods, suggesting that the transport of pollutants plays a key role in air pollution in Shanghai. The temporal contributions show that emissions from the "current day" (the emission contribution from the day the model was simulating) contributed 60%-70% of the sulfate and ammonium concentrations but only 10%-20% of the nitrate concentration, while the contributions of previous days increased during the recorded pollution periods. Emissions released within the preceding three days contributed over 85% of SNA on average in January 2013. To evaluate the source-tagged model system, the results were compared with a sensitivity analysis (emission perturbation of -30%) and a backward trajectory analysis. The consistency of the comparison results indicates that the source-tagged model system can track sources of SNA with reasonable accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
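    The core of a source-tagged scheme is that every physical process except emission acts identically on all tags, so the tagged concentrations always sum to the total and their ratios give source shares. A minimal box-model sketch follows; the loss rate, emission rates, and region names are hypothetical, not NAQPMS values.

```python
import numpy as np

def step_tagged(conc_by_tag, production_by_tag, loss_rate, dt):
    # Every tag suffers the same loss (transport/deposition/chemistry),
    # while production is credited to the emitting source region.
    return conc_by_tag * np.exp(-loss_rate * dt) + production_by_tag * dt

tags = ["shanghai", "jiangsu", "shandong"]
prod = np.array([2.0, 1.0, 1.0])       # hypothetical production rates
c = np.zeros(3)
for _ in range(100):                   # run to (near) steady state
    c = step_tagged(c, prod, loss_rate=0.1, dt=1.0)
share = c / c.sum()                    # fractional source contributions
```

Because the loss operator is identical for all tags, the shares converge to the production ratios; in a full model the same bookkeeping runs inside the transport and chemistry operators.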

  1. Predicting Atmospheric Releases from the September 3, 2017 North Korean Event

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Simpson, M. D.; Glascoe, L. G.

    2017-12-01

    Underground nuclear explosions produce radionuclides that can be vented to the atmosphere and transported to International Monitoring System (IMS) measurement stations. Although no positive atmospheric detection from North Korea's declared test of September 3, 2017 had been reported at any IMS station through early October, atmospheric transport models can predict when and where detections may arise and provide valuable information to optimize air collection strategies. We present predictive atmospheric transport simulations initiated in the early days after the event. Wind fields were simulated with the Weather Research and Forecasting model and used to transport air tracers from an ensemble of releases in the FLEXPART dispersion model. If early venting had occurred, the simulations suggest that detections were possible at the IMS station in Takasaki, Japan. On-going and future research efforts associated with nuclear testing are focused on quantifying meteorological uncertainty, simulating releases in complex terrain, and developing new statistical methods for source attribution. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and is released as LLNL-ABS-740341.

  2. Slab2 - Updated Subduction Zone Geometries and Modeling Tools

    NASA Astrophysics Data System (ADS)

    Moore, G.; Hayes, G. P.; Portner, D. E.; Furtney, M.; Flamme, H. E.; Hearne, M. G.

    2017-12-01

    The U.S. Geological Survey database of global subduction zone geometries (Slab1.0) is a highly utilized dataset that has been applied to a wide range of geophysical problems. In 2017, these models have been improved and expanded upon as part of the Slab2 modeling effort. With a new data-driven approach that can be applied to a broader range of tectonic settings and geophysical data sets, we have generated a model set that will serve as a more comprehensive, reliable, and reproducible resource for three-dimensional slab geometries at all of the world's convergent margins. The newly developed framework of Slab2 is guided by: (1) a large integrated dataset consisting of a variety of geophysical sources (e.g., earthquake hypocenters, moment tensors, active-source seismic survey images of the shallow slab, tomography models, receiver functions, bathymetry, trench ages, and sediment thickness information); (2) a dynamic filtering scheme aimed at constraining the incorporated seismicity to slab-related events only; (3) a 3-D data interpolation approach which captures both high-resolution shallow geometries and instances of slab rollback and overlap at depth; and (4) an algorithm which incorporates the uncertainties of contributing datasets to identify the most probable slab surface depth over the extent of each subduction zone. Further layers will also be added to the base geometry dataset, such as historic moment release, earthquake tectonic province, and interface coupling. Along with access to several queryable data formats, all components have been wrapped into an open-source library in Python, such that suites of updated models can be released as further data become available. This presentation will discuss the extent of Slab2 development, as well as the current availability of the model and modeling tools.

  3. Uncertainty of inhalation dose coefficients for representative physical and chemical forms of iodine-131

    NASA Astrophysics Data System (ADS)

    Harvey, Richard Paul, III

    Releases of radioactive material have occurred at various Department of Energy (DOE) weapons facilities and facilities associated with the nuclear fuel cycle in the generation of electricity. Many different radionuclides have been released to the environment with resulting exposure of the population to these various sources of radioactivity. Radioiodine has been released from a number of these facilities and is a potential public health concern due to its physical and biological characteristics. Iodine exists as various isotopes, but our focus is on 131I due to its relatively long half-life, its prevalence in atmospheric releases and its contribution to offsite dose. The assumption of physical and chemical form is speculated to have a profound impact on the deposition of radioactive material within the respiratory tract. In the case of iodine, it has been shown that more than one type of physical and chemical form may be released to, or exist in, the environment; iodine can exist as a particle or as a gas. The gaseous species can be further segregated based on chemical form: elemental, inorganic, and organic iodides. Chemical compounds in each class are assumed to behave similarly with respect to biochemistry. Studies at Oak Ridge National Laboratories have demonstrated that 131I is released as a particulate, as well as in elemental, inorganic and organic chemical form. The internal dose estimate from 131I may be very different depending on the effect that chemical form has on fractional deposition, gas uptake, and clearance in the respiratory tract. There are many sources of uncertainty in the estimation of environmental dose including source term, airborne transport of radionuclides, and internal dosimetry. Knowledge of uncertainty in internal dosimetry is essential for estimating dose to members of the public and for determining total uncertainty in dose estimation. 
    An important calculational step in any lung model is the regional estimation of deposition fractions and the gas uptake of radionuclides in the various regions of the lung. Variability in regional radionuclide deposition within lung compartments may contribute significantly to the overall uncertainty of the lung model. The uncertainty of lung deposition and biological clearance depends on physiological and anatomical parameters of individuals as well as on characteristic parameters of the particulate material. These parameters introduce uncertainty into internal dose estimates through their inherent variability, and the anatomical and physiological input parameters are age- and gender-dependent. This work has determined the uncertainty in internal dose estimates and the sensitive parameters involved in modeling particulate deposition and gas uptake of different physical and chemical forms of 131I, with age and gender dependencies.
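    Parameter uncertainty of this kind is commonly propagated by Monte Carlo sampling of the model inputs. The sketch below uses a deliberately toy dose model with made-up lognormal uncertainties, just to show the mechanics of deriving a geometric standard deviation for the dose; none of the numbers are from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)

def inhalation_dose(intake, deposition_fraction, dose_per_unit):
    # Toy dose model: dose = intake * deposited fraction * dose coefficient
    return intake * deposition_fraction * dose_per_unit

n = 100_000
# Made-up lognormal uncertainties on the inputs (illustrative only):
dep = rng.lognormal(np.log(0.3), 0.2, n).clip(0.0, 1.0)  # deposition fraction
dpc = rng.lognormal(np.log(1e-8), 0.4, n)                # dose coefficient, Sv/Bq
dose = inhalation_dose(1000.0, dep, dpc)                 # 1000 Bq intake
gsd = float(np.exp(np.std(np.log(dose))))                # geometric std. dev.
```

Rank correlations between sampled inputs and the resulting dose would then identify the sensitive parameters, which is the kind of analysis the abstract describes.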

  4. Source, dispersion and combustion modelling of an accidental release of hydrogen in an urban environment.

    PubMed

    Venetsanos, A G; Huld, T; Adams, P; Bartzis, J G

    2003-12-12

    Hydrogen is likely to be the most important future energy carrier for many stationary and mobile applications, with the potential to make significant reductions in greenhouse gas emissions, especially if renewable primary energy sources are used to produce the hydrogen. A safe transition to the use of hydrogen by members of the general public requires that the safety issues associated with hydrogen applications be investigated and fully understood. In order to assess the risks associated with hydrogen applications, its behaviour in realistic accident scenarios has to be predicted, allowing mitigating measures to be developed where necessary. A key factor in this process is predicting the release, dispersion and combustion of hydrogen in appropriate scenarios. This paper illustrates an application of CFD methods to the simulation of an actual hydrogen explosion. The explosion occurred on 3 March 1983 in a built-up area of central Stockholm, Sweden, after the accidental release of approximately 13.5 kg of hydrogen from a rack of 18 interconnected 50 L industrial pressure vessels (200 bar working pressure) being transported by a delivery truck. Modelling of the source term, dispersion and combustion was undertaken separately using three different numerical tools, owing to the differences in physics and scales between the phenomena. Results from the dispersion calculations, together with the official accident report, were used to identify a possible ignition source and estimate the time at which ignition could have occurred. Ignition was estimated to occur 10 s after the start of the release, coinciding with the time at which the maximum flammable hydrogen mass and cloud volume were found to occur (4.5 kg and 600 m3, respectively). The subsequent simulation of the combustion adopts initial conditions for mean flow and turbulence from the dispersion simulations, and calculates the development of a fireball. This provides physical values, e.g. maximum overpressure and far-field overpressure, that may be compared with the known accident details to give an indication of the validity of the models. The simulation results are consistent with both the reported near-field damage to buildings and persons and the far-field damage to windows. The work was undertaken as part of the European Integrated Hydrogen Project-Phase 2 (EIHP2) with partial funding from the European Commission via the Fifth Framework Programme.

  5. Laser-induced disruption of systemically administered liposomes for targeted drug delivery

    NASA Astrophysics Data System (ADS)

    Mackanos, Mark A.; Larabi, Malika; Shinde, Rajesh; Simanovskii, Dmitrii M.; Guccione, Samira; Contag, Christopher H.

    2009-07-01

    Liposomal formulations of drugs have been shown to enhance drug efficacy by prolonging circulation time, increasing local concentration and reducing off-target effects. Controlled release from these formulations would increase their utility, and hyperthermia has been explored as a stimulus for targeted delivery of encapsulated drugs. Use of lasers as a thermal source could provide improved control over the release of the drug from the liposomes with minimal collateral tissue damage. Appropriate methods for assessing local release after systemic delivery would aid in testing and development of better formulations. We use in vivo bioluminescence imaging to investigate the spatiotemporal distribution of luciferin, used as a model small molecule, and demonstrate laser-induced release from liposomes in animal models after systemic delivery. These liposomes were tested for luciferin release between 37 and 45 °C in PBS and serum using bioluminescence measurements. In vivo studies were performed on transgenic reporter mice that express luciferase constitutively throughout the body, thus providing a noninvasive readout for controlled release following systemic delivery. An Nd:YLF laser was used (527 nm) to heat tissues and induce rupture of the intravenously delivered liposomes in target tissues. These data demonstrate laser-mediated control of small molecule delivery using thermally sensitive liposomal formulations.

  6. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region, including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied both to past earthquake locations and to fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumptions that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and the density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity, based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and of SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP).
    We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate for long-term forecasting on timescales of years to decades for the European region.
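    The activity-rate fit mentioned above is typically based on Aki's maximum-likelihood estimator for the Gutenberg-Richter b-value, b = log10(e) / (mean(M) - Mc). A minimal sketch on a synthetic catalogue follows, without the binning or completeness-history corrections that the authors' approach does include.

```python
import numpy as np

def gutenberg_richter_mle(mags, m_c):
    # Aki's MLE for the b-value above completeness magnitude m_c
    # (continuous magnitudes; no binning or completeness correction)
    m = np.asarray(mags)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - m_c)
    a = np.log10(m.size) + b * m_c     # a-value for this catalogue's duration
    return a, b

# Synthetic catalogue: magnitudes above Mc = 2.0 with a true b-value of 1.0
rng = np.random.default_rng(7)
mags = 2.0 + rng.exponential(np.log10(np.e) / 1.0, 50_000)
a, b = gutenberg_richter_mle(mags, 2.0)
```

Above Mc, Gutenberg-Richter magnitudes are exponentially distributed, which is why the estimator reduces to a transformed sample mean and recovers the true b-value on this synthetic catalogue.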

  7. Ground deposition of liquid droplets released from a point source in the atmospheric surface layer

    NASA Astrophysics Data System (ADS)

    Panneton, Bernard

    1989-01-01

    A series of field experiments is presented in which the ground deposition of liquid droplets, 120 and 150 microns in diameter, released from a point source 7 m above ground level, was measured. A detailed description of the experimental technique is provided, and the results are presented and compared to the predictions of several models. A new rotating droplet generator is described: droplets are produced by the forced breakup of capillary liquid jets, and droplet coalescence is inhibited by the rotational motion of the spray head. The two-dimensional deposition patterns are presented as plots of contours of constant density, normalized arcwise distributions and crosswind-integrated distributions. The arcwise distributions follow a Gaussian distribution whose standard deviation is evaluated using a modified version of Pasquill's technique. Models of the crosswind-integrated deposit from Godson, Csanady, Walker, Bache and Sayer, and Wilson et al. are evaluated. The results indicate that the Wilson et al. random walk model is adequate for predicting the ground deposition of the 150 micron droplets. In one case, where the ratio of the droplet settling velocity to the mean wind speed was largest, Walker's model proved to be adequate. Otherwise, none of the models was acceptable in light of the experimental data.
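    The Gaussian arcwise distribution mentioned above can be written down directly. The sketch below is a generic Gaussian arc profile; the total arc deposit q_arc and arcwise standard deviation sigma_theta (which the paper evaluates with a modified Pasquill technique) are free parameters, and the numbers used are illustrative only.

```python
import numpy as np

def arcwise_deposit(theta, q_arc, sigma_theta, theta0=0.0):
    """Gaussian arcwise deposit density (per radian) at azimuth theta,
    for a total arc deposit q_arc centred on the mean plume bearing theta0."""
    return (q_arc / (np.sqrt(2.0 * np.pi) * sigma_theta)
            * np.exp(-((theta - theta0) ** 2) / (2.0 * sigma_theta ** 2)))

theta = np.linspace(-1.5, 1.5, 20001)   # azimuths (rad) across the sampling arc
profile = arcwise_deposit(theta, q_arc=2.0, sigma_theta=0.2)
```

Integrating the profile over azimuth recovers q_arc, which is why normalized arcwise distributions and crosswind-integrated distributions can be reported separately, as in the paper.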

  8. 40 CFR 61.24 - Annual reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS National Emission Standards for Radon...) Distances from the points of release to the nearest residence, school, business or office and the nearest... parameters for the computer models (e.g., meteorological data) and the source of these data. (8) Each report...

  9. The Chandra Source Catalog: Spectral Properties

    NASA Astrophysics Data System (ADS)

    Doe, Stephen; Siemiginowska, Aneta L.; Refsdal, Brian L.; Evans, Ian N.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Primini, Francis A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The first release of the Chandra Source Catalog (CSC) contains all sources identified from eight years of publicly accessible observations. The vast majority of these sources were observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard) using the Bayesian Estimation of Hardness Ratios algorithm (BEHR; Park et al. 2006). Sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package, developed by the Chandra X-ray Center; see Freeman et al. 2001). Two models were fit to each source: an absorbed power law and a blackbody emission model. The fitted parameter values for the power-law and blackbody models were included in the catalog, along with the calculated flux for each model. The CSC also provides the source energy flux computed from the normalizations of predefined power-law and blackbody models needed to match the observed net X-ray counts. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. This work is supported by NASA contract NAS8-03060 (CXC).
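    For reference, the simplest (non-Bayesian) form of a hardness ratio between two bands is a fractional count difference. The CSC's actual BEHR computation treats the band counts probabilistically and remains well behaved at low counts; the sketch below is the classical version only.

```python
def hardness_ratio(hard_counts, soft_counts):
    """Classical hardness ratio HR = (H - S) / (H + S), bounded in [-1, 1].
    Unlike BEHR, this breaks down for very low or zero total counts."""
    total = hard_counts + soft_counts
    if total == 0:
        raise ValueError("no counts in either band")
    return (hard_counts - soft_counts) / total
```

A source with 30 hard and 10 soft counts gives HR = 0.5, i.e. a spectrally hard source; negative values indicate soft spectra.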

  10. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    NASA Astrophysics Data System (ADS)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents the forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and its parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L indicates a better-fitting model. The time of the future earthquake is forecasted by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), 13 April 2016 Myanmar earthquake (M 6.9) and 24 August 2016 Myanmar earthquake (M 6.8) lie in zones Z.12, Z.16 and Z.15, respectively, all identified seismic source zones in the study area, which indicates that the proposed techniques and models yield good forecasting accuracy.
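    The conditional exceedance probability and the energy-ratio forecast time described above can be sketched for one of the four candidate distributions. The lognormal is convenient because its maximum likelihood fit is closed-form; the energies below are synthetic, not the paper's catalogue data, and the 279-year span is simply 1737-2015.

```python
import numpy as np
from math import erf

def lognormal_fit(x):
    """Closed-form maximum likelihood fit of a lognormal distribution:
    mu and sigma are the mean and std of the log data."""
    lx = np.log(x)
    return lx.mean(), lx.std()

def survival(x, mu, sigma):
    """P(X > x) for the fitted lognormal."""
    z = (np.log(x) - mu) / (sigma * np.sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

def cond_exceed(E, e, mu, sigma):
    """Conditional probability that the released energy exceeds E,
    given that it already exceeds e (with E >= e)."""
    return survival(E, mu, sigma) / survival(e, mu, sigma)

# synthetic per-event seismic energies, in units of 10^20 ergs
rng = np.random.default_rng(1)
energies = rng.lognormal(mean=0.5, sigma=1.0, size=60)
mu, sigma = lognormal_fit(energies)

p = cond_exceed(E=5.0, e=1.0, mu=mu, sigma=sigma)
# forecast time = expected event energy / expected annual energy release
t_next = energies.mean() / (energies.sum() / 279.0)   # 279 yr of catalogue
```

The paper compares such fits across the four distributions via their log-likelihoods; the same survival function evaluated at the data points would supply the ln L values used for that comparison.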

  11. Tests and consequences of disk plus halo models of gamma-ray burst sources

    NASA Technical Reports Server (NTRS)

    Smith, I. A.

    1995-01-01

    The gamma-ray burst observations made by the Burst and Transient Source Experiment (BATSE) and by previous experiments are still consistent with a combined Galactic disk (or Galactic spiral arm) plus extended Galactic halo model. Testable predictions and consequences of the disk plus halo model are discussed here; tests performed on the expanded BATSE database in the future will constrain the allowed model parameters and may eventually rule out the disk plus halo model. Using examples, it is shown that if the halo has an appropriate edge, BATSE will never detect an anisotropic signal from the halo of the Andromeda galaxy. A prediction of the disk plus halo model is that the fraction of the bursts observed to be in the 'disk' population rises as the detector sensitivity improves. A careful reexamination of the numbers of bursts in the two populations for the pre-BATSE databases could rule out this class of models. Similarly, it is predicted that different satellites will observe different relative numbers of bursts in the two classes for any model in which there are two different spatial distributions of the sources, or for models in which there is one spatial distribution of the sources that is sampled to different depths for the two classes. An important consequence of the disk plus halo model is that for the birthrate of the halo sources to be small compared to the birthrate of the disk sources, it is necessary for the halo sources to release many orders of magnitude more energy over their bursting lifetime than the disk sources. The halo bursts must also be much more luminous than the disk bursts; if this disk-halo model is correct, it is necessary to explain why the disk sources do not produce halo-type bursts.

  12. The impact of new Geant4-DNA cross section models on electron track structure simulations in liquid water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyriakou, I., E-mail: ikyriak@cc.uoi.gr; Šefl, M.; Department of Dosimetry and Application of Ionizing Radiation, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, 115 19 Prague

    The most recent release of the open source and general purpose Geant4 Monte Carlo simulation toolkit (Geant4 10.2 release) contains a new set of physics models in the Geant4-DNA extension for improving the modelling of low-energy electron transport in liquid water (<10 keV). This includes updated electron cross sections for excitation, ionization, and elastic scattering. In the present work, the impact of these developments on track-structure calculations is examined, providing the first comprehensive comparison against the default physics models of Geant4-DNA. Significant differences with the default models are found for the average path length and penetration distance, as well as for dose-point-kernels for electron energies below a few hundred eV. On the other hand, self-irradiation absorbed fractions for tissue-like volumes and low-energy electron sources (including some Auger emitters) reveal rather small differences (up to 15%) between these new and default Geant4-DNA models. The above findings indicate that the impact of the new developments will mainly affect those applications where the spatial pattern of interactions and energy deposition of very-low-energy electrons plays an important role, such as the modelling of the chemical and biophysical stages of radiation damage to cells.

  13. Clawpack: Building an open source ecosystem for solving hyperbolic PDEs

    USGS Publications Warehouse

    Iverson, Richard M.; Mandli, K.T.; Ahmadia, Aron J.; Berger, M.J.; Calhoun, Donna; George, David L.; Hadjimichael, Y.; Ketcheson, David I.; Lemoine, Grady L.; LeVeque, Randall J.

    2016-01-01

    Clawpack is a software package designed to solve nonlinear hyperbolic partial differential equations using high-resolution finite volume methods based on Riemann solvers and limiters. The package includes a number of variants aimed at different applications and user communities. Clawpack has been actively developed as an open source project for over 20 years. The latest major release, Clawpack 5, introduces a number of new features and changes to the code base and a new development model based on GitHub and Git submodules. This article provides a summary of the most significant changes, the rationale behind some of these changes, and a description of our current development model.

  14. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and proceeds in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement in the source estimates is observed after minimizing the representativity errors.
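    The second and third steps above amount to fitting an affine map from predicted to measured concentrations and applying the same correction downstream. The sketch below uses made-up numbers, and the paper's actual modification of the adjoint functions may be more involved than this scalar correction.

```python
import numpy as np

# hypothetical measured vs. model-predicted concentrations at the receptors
measured = np.array([1.2, 2.9, 4.1, 5.8, 7.2])
predicted = np.array([1.0, 2.5, 3.8, 5.1, 6.4])

# step 2: linear regression, measured ~ alpha * predicted + beta
alpha, beta = np.polyfit(predicted, measured, deg=1)

# step 3: apply the fitted correction to the adjoint functions, shrinking
# the systematic (representativity) part of the model-data mismatch
adjoint = np.array([0.4, 0.9, 1.3])          # hypothetical adjoint values
adjoint_corrected = alpha * adjoint + beta
```

By construction the regression cannot do worse than the raw model: the sum of squared residuals of the corrected predictions is at most that of the uncorrected ones, which is the sense in which the representativity error is "minimized".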

  15. UNMIX Methods Applied to Characterize Sources of Volatile Organic Compounds in Toronto, Ontario

    PubMed Central

    Porada, Eugeniusz; Szyszkowicz, Mieczysław

    2016-01-01

    UNMIX, a receptor modeling routine from the U.S. Environmental Protection Agency (EPA), was used to model volatile organic compound (VOC) receptors at four urban sites in Toronto, Ontario. VOC ambient concentration data acquired in 2000–2009 for 175 VOC species at four air quality monitoring stations were analyzed. UNMIX, by performing multiple modeling attempts upon varying VOC menus while rejecting results that were not reliable, allowed sources to be discriminated by their most consistent chemical characteristics. The method assessed occurrences of VOCs in sources typical of the urban environment (traffic, evaporative emissions of fuels, banks of fugitive inert gases), in industrial point sources (plastic-, polymer-, and metalworking manufactures), and in secondary sources (releases from water, sediments, and contaminated urban soil). The robust modeling used here produces chemical profiles of putative VOC sources that, if combined with known environmental fates of VOCs, can be used to assign physical sources' shares of VOC emissions into the atmosphere. This in turn provides a means of assessing the impact of environmental policies on one hand, and industrial activities on the other, on VOC air pollution. PMID:29051416

  16. Improving bioaerosol exposure assessments of composting facilities — Comparative modelling of emissions from different compost ages and processing activities

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.

    We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of the compost has little effect on the bioaerosol concentrations emitted from passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10^3-10^4 cfu m^-3, with releases from active sources typically 1 log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions than ADMS 3.3. Both models predict that bioaerosol concentrations decrease to below typical background concentrations before 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.

  17. Modeling tidal exchange and dispersion in Boston Harbor

    USGS Publications Warehouse

    Signell, Richard P.; Butman, Bradford

    1992-01-01

    Tidal dispersion and the horizontal exchange of water between Boston Harbor and the surrounding ocean are examined with a high-resolution (200 m) depth-averaged numerical model. The strongly varying bathymetry and coastline geometry of the harbor generate complex spatial patterns in the modeled tidal currents which are verified by shipboard acoustic Doppler surveys. Lagrangian exchange experiments demonstrate that tidal currents rapidly exchange and mix material near the inlets of the harbor due to asymmetry in the ebb/flood response. This tidal mixing zone extends roughly a tidal excursion from the inlets and plays an important role in the overall flushing of the harbor. Because the tides can only efficiently mix material in this limited region, however, harbor flushing must be considered a two step process: rapid exchange in the tidal mixing zone, followed by flushing of the tidal mixing zone by nontidal residual currents. Estimates of embayment flushing based on tidal calculations alone therefore can significantly overestimate the flushing time that would be expected under typical environmental conditions. Particle-release simulations from point sources also demonstrate that while the tides efficiently exchange material in the vicinity of the inlets, the exact nature of dispersion from point sources is extremely sensitive to the timing and location of the release, and the distribution of particles is streaky and patchlike. This suggests that high-resolution modeling of dispersion from point sources in these regions must be performed explicitly and cannot be parameterized as a plume with Gaussian-spreading in a larger scale flow field.

  18. Regulatory Technology Development Plan - Sodium Fast Reactor: Mechanistic Source Term – Trial Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-10-01

    The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident, and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal-fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal of identifying gaps in the current knowledge base. The second path, performed by an independent contractor, carried out sensitivity analyses to determine the importance of particular radionuclides and transport phenomena with regard to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.

  19. The Mock LISA Data Challenge Round 3: New and Improved Sources

    NASA Technical Reports Server (NTRS)

    Baker, John

    2008-01-01

    The Mock LISA Data Challenges are a program to demonstrate and encourage the development of data-analysis capabilities for LISA. Each round of challenges consists of several data sets containing simulated instrument noise and gravitational waves from sources of undisclosed parameters. Participants are asked to analyze the data sets and report the maximum information they can infer about the source parameters. The challenges are being released in rounds of increasing complexity and realism. Challenge 3, currently in progress, brings new source classes, now including cosmic-string cusps and primordial stochastic backgrounds, and more realistic signal models for supermassive black-hole inspirals and galactic double white dwarf binaries.

  20. Exploring the Differences Between the European (SHARE) and the Reference Italian Seismic Hazard Models

    NASA Astrophysics Data System (ADS)

    Visini, F.; Meletti, C.; D'Amico, V.; Rovida, A.; Stucchi, M.

    2014-12-01

    The recent release of the probabilistic seismic hazard assessment (PSHA) model for Europe by the SHARE project (Giardini et al., 2013, www.share-eu.org) raises questions about the comparison between its results for Italy and the official Italian seismic hazard model (MPS04; Stucchi et al., 2011) adopted by the building code. The goal of such a comparison is to identify the main input elements that produce the differences between the two models. It is worth remarking that each PSHA is realized with the data and knowledge available at the time of its release. Therefore, even if a new model provides estimates significantly different from previous ones, that does not mean that the old models are wrong, but rather that the current knowledge has changed and improved substantially. Looking at the hazard maps with 10% probability of exceedance in 50 years (adopted as the standard input in the Italian building code), the SHARE model shows increased expected values with respect to the MPS04 model, up to 70% for PGA. However, looking in detail at all output parameters of both models, we observe a different behaviour for other spectral accelerations. In fact, for spectral periods greater than 0.3 s, the current reference PSHA for Italy proposes higher values than the SHARE model over many large areas. This observation suggests that this behaviour could not be due to a different definition of seismic sources and the relevant seismicity rates; it appears mainly to be the result of the adoption of recent ground-motion prediction equations (GMPEs) that, with respect to old GMPEs, estimate higher values for PGA and for accelerations with periods below 0.3 s, and lower values for longer periods. Another important set of tests consisted of analysing separately the PSHA results obtained with the three source models adopted in SHARE (i.e., area sources, fault sources with background, and a refined smoothed seismicity model), whereas MPS04 only uses area sources. 
Results seem to confirm the strong impact of the new generation GMPEs on the seismic hazard estimates. Giardini D. et al., 2013. Seismic Hazard Harmonization in Europe (SHARE): Online Data Resource, doi:10.12686/SED-00000001-SHARE. Stucchi M. et al., 2011. Seismic Hazard Assessment (2003-2009) for the Italian Building Code. Bull. Seismol. Soc. Am. 101, 1885-1911.

  1. FACTORS RELATING TO THE RELEASE OF STACHYBOTRYS CHARTARUM SPORES FROM CONTAMINATED SOURCES

    EPA Science Inventory

    The paper describes preliminary results of a research project to determine the factors that control the release of S. chartarum spores from a contaminated source and test ways to reduce spore release and thus exposure. As anticipated, S. chartarum spore emissions from gypsum boar...

  2. BORON RELEASE FROM WEATHERING ILLITES, SERPENTINE, SHALES, AND ILLITIC/PALYGORSKITIC SOILS

    EPA Science Inventory

    Despite extensive research on B adsorption and release from soils, mineral sources of B within natively high B soils remain poorly under- stood. The objectives of this study were to identify source minerals contributing to the continued B release after extraction of soluble B and...

  3. Experimental study of the thermal-acoustic efficiency in a long turbulent diffusion-flame burner

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.

    1983-01-01

    An acoustic source/propagation model is used to interpret measured noise spectra from a long turbulent burner. The acoustic model is based on the perturbation solution of the equations describing the unsteady one-dimensional flow of an inviscid ideal gas with a distributed heat source. The model assumes that the measured noise spectra are due uniquely to the unsteady component of combustion heat release. The model was applied to a long cylindrical hydrogen burner operating over a range of power levels between 4.5 kW and 22.3 kW. Acoustic impedances at the inlet to the burner and at the exit of the tube downstream of the burner were measured and are used as boundary conditions for the model. These measured impedances are also presented.

  4. Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution and Eruption

    NASA Astrophysics Data System (ADS)

    Leake, J. E.; Linton, M.; Schuck, P. W.

    2017-12-01

    Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the recent development of coronal models which are "data-driven" at the photosphere. Using magnetohydrodynamic simulations of active region formation and our recently created validation framework, we investigate the sources of error in data-driven models that use surface measurements of the magnetic field, and derived MHD quantities, to model the coronal magnetic field. The primary sources of error in these studies are the temporal and spatial resolution of the surface measurements. We will discuss the implications of these studies for accurately modeling the build-up and release of coronal magnetic energy based on photospheric magnetic field observations.

  5. Science and Software

    NASA Astrophysics Data System (ADS)

    Zelt, C. A.

    2017-12-01

    Earth science attempts to understand how the earth works. This research often depends on software for modeling, processing, inverting or imaging. Freely sharing open-source software is essential to prevent reinventing the wheel and allows software to be improved and applied in ways the original author may never have envisioned. For young scientists, releasing software can increase their name ID when applying for jobs and funding, and create opportunities for collaborations when scientists who collect data want the software's creator to be involved in their project. However, we frequently hear scientists say software is a tool, it's not science. Creating software that implements a new or better way of earth modeling or geophysical processing, inverting or imaging should be viewed as earth science. Creating software for things like data visualization, format conversion, storage, or transmission, or programming to enhance computational performance, may be viewed as computer science. The former, ideally with an application to real data, can be published in earth science journals, the latter possibly in computer science journals. Citations in either case should accurately reflect the impact of the software on the community. Funding agencies need to support more software development and open-source releasing, and the community should give more high-profile awards for developing impactful open-source software. Funding support and community recognition for software development can have far reaching benefits when the software is used in foreseen and unforeseen ways, potentially for years after the original investment in the software development. For funding, an open-source release that is well documented should be required, with example input and output files. Appropriate funding will provide the incentive and time to release user-friendly software, and minimize the need for others to duplicate the effort. 
All funded software should be available through a single web site, ideally maintained by someone in a funded position. Perhaps the biggest challenge is the reality that researchers who use software, as opposed to developing it, are more attractive university hires because they are more likely to be "big picture" scientists who publish in the highest-profile journals, although sometimes the two go together.

  6. Modeling Streamflow and Water Temperature in the North Santiam and Santiam Rivers, Oregon, 2001-02

    USGS Publications Warehouse

    Sullivan, Annett B.; Rounds, Stewart A.

    2004-01-01

    To support the development of a total maximum daily load (TMDL) for water temperature in the Willamette Basin, the laterally averaged, two-dimensional model CE-QUAL-W2 was used to construct a water temperature and streamflow model of the Santiam and North Santiam Rivers. The rivers were simulated from downstream of Detroit and Big Cliff dams to the confluence with the Willamette River. Inputs to the model included bathymetric data, flow and temperature from dam releases, tributary flow and temperature, and meteorologic data. The model was calibrated for the period July 1 through November 21, 2001, and confirmed with data from April 1 through October 31, 2002. Flow calibration made use of data from two streamflow gages and travel-time and river-width data. Temperature calibration used data from 16 temperature monitoring locations in 2001 and 5 locations in 2002. A sensitivity analysis was completed by independently varying input parameters, including point-source flow, air temperature, flow and water temperature from dam releases, and riparian shading. Scenario analyses considered hypothetical river conditions without anthropogenic heat inputs, with restored riparian vegetation, with minimum streamflow from the dams, and with a more-natural seasonal water temperature regime from dam releases.

  7. The energy source of the most energetic giant outbursts in MS 0735 + 7421

    NASA Astrophysics Data System (ADS)

    Li, Shuang-Liang

    2013-02-01

    In this work, we investigate the power source of the most energetic giant outbursts in MS 0735+7421, which released ~10^62 erg of energy. Motivated by the very high mean jet power implied by the cavities (P_jet/L_Edd ~ 0.02), we construct several jet formation models based on a relativistic thin disk model, i.e., general BP + BZ mechanisms (model A), Livio's model (model B) and Meier's model (model C), to explain the giant outbursts in AGNs. It is found that the energy provided by both model B and model C is inadequate for an initial black hole spin a_0 ~ 0.1; only model A can explain the most violent outbursts in MS 0735+7421. But if the initial black hole spin is a_0 ~ 0.95, model B can also blow up the cavity. The final spin of the black hole is found to be very high regardless of the initial spin.
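    The quoted numbers can be cross-checked with a back-of-the-envelope calculation. The black-hole mass and outburst duration below are assumed round numbers for illustration, not values taken from the paper.

```python
# order-of-magnitude check on the MS 0735+7421 energetics
L_EDD_PER_MSUN = 1.26e38        # Eddington luminosity per solar mass, erg/s
M_BH = 5e9                      # assumed black-hole mass, solar masses
SECONDS_PER_YEAR = 3.15e7

L_edd = L_EDD_PER_MSUN * M_BH             # ~6e47 erg/s
P_jet = 0.02 * L_edd                      # mean jet power at P_jet/L_Edd ~ 0.02
E_total = P_jet * 1e8 * SECONDS_PER_YEAR  # energy over an assumed ~10^8 yr outburst
```

With these inputs E_total comes out at a few times 10^61 erg, the right order of magnitude for the ~10^62 erg inferred from the cavities, which is why such a high P_jet/L_Edd ratio is needed.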

  8. Is chemical heating a major cause of the mesosphere inversion layer?

    NASA Technical Reports Server (NTRS)

    Meriwether, John W.; Mlynczak, Martin G.

    1995-01-01

    A region of thermal enhancement of the mesosphere has been detected on numerous occasions by in situ measurements, remote sensing from space, and lidar techniques. The source of these 'temperature inversion layers' has been attributed in the literature to the dissipation associated with dynamical forcing by gravity wave or tidal activity. However, evidence that gravity wave breaking can produce an inversion layer with an amplitude as large as that observed in lidar measurements has been limited to results of numerical modeling. An alternative source for the production of the thermal inversion layer in the mesosphere is the direct deposition of heat by exothermic chemical reactions. Two-dimensional modeling combining a comprehensive model of mesosphere photochemistry with the dynamical transport of long-lived species shows that the region from 80 to 95 km may be heated by as much as 3 to 10 K/d during the night and half this rate during the day. Given the uncertainties in our understanding of the dynamics and chemistry of the mesopause region, separating the two sources by passive observations of the mesosphere thermal structure appears difficult. Therefore we have considered an active means of producing a mesopause thermal layer, namely the release of ozone into the upper mesosphere from a rocket payload. The induced effects would include artificial enhancements of the OH and Na airglow intensities as well as of the mesopause thermal structure. The advantage of a rocket release of ozone is that detection of these effects by ground-based imaging, radar, and lidar systems, and comparison with model predictions, would help quantify the partition of the artificial inversion layer production into sources of dynamical and chemical forcing.

  9. Fluoride and phosphate release from carbonate-rich fluorapatite during managed aquifer recharge

    NASA Astrophysics Data System (ADS)

    Schafer, David; Donn, Michael; Atteia, Olivier; Sun, Jing; MacRae, Colin; Raven, Mark; Pejcic, Bobby; Prommer, Henning

    2018-07-01

    Managed aquifer recharge (MAR) is increasingly used as a water management tool to enhance water availability and to improve water quality. Until now, however, the risk of fluoride release during MAR with low ionic strength injectate has not been recognised or examined. In this study we analyse and report the mobilisation of fluoride (up to 58 μM) and filterable reactive phosphorus (FRP) (up to 55 μM) during a field groundwater replenishment experiment in which highly treated, deionised wastewater (average TDS 33 mg/L) was injected into a siliciclastic Cretaceous aquifer. In the field experiment, maximum concentrations, which coincided with a rise in pH, exceeded background groundwater concentrations by an average factor of 3.6 for fluoride and 24 for FRP. The combined results from the field experiment, a detailed mineralogical characterisation and geochemical modelling suggested carbonate-rich fluorapatite (CFA: Ca10(PO4)5(CO3,F)F2) to be the most likely source of fluoride and phosphate release. An anoxic batch experiment with powdered CFA-rich nodules sourced from the target aquifer and aqueous solutions of successively decreasing ionic strength closely replicated the field-observed fluoride and phosphate behaviour. Based on the laboratory experiment and geochemical modelling, we hypothesise that the release of fluoride and phosphate results from the incongruent dissolution of CFA and the simultaneous formation of a depleted layer that has hydrated di-basic calcium phosphate (CaHPO4·nH2O) composition at the CFA-water interface. Disequilibrium caused by calcium removal following breakthrough of the deionised injectate triggered the release of fluoride and phosphate. Given the increasing use of highly treated, deionised water for MAR and the ubiquitous presence of CFA and fluorapatite (Ca10(PO4)6F2) in aquifer settings worldwide, the risk of fluoride and phosphate release needs to be considered in the MAR design process.

  10. A large mantle water source for the northern San Andreas Fault System: A ghost of subduction past

    USGS Publications Warehouse

    Kirby, Stephen H.; Wang, Kelin; Brocher, Thomas M.

    2014-01-01

    Recent research indicates that the shallow mantle of the Cascadia subduction margin under the near-coastal Pacific Northwest U.S. is cold and partially serpentinized, storing large quantities of water in this wedge-shaped region. Such a wedge probably formed to the south in California during an earlier period of subduction. We show by numerical modeling that after subduction ceased with the creation of the San Andreas Fault System (SAFS), the mantle wedge warmed, slowly releasing its water over a period of more than 25 Ma by serpentine dehydration into the crust above. This deep, long-term water source could facilitate fault slip in the San Andreas System at low shear stresses by raising pore pressures in a broad region above the wedge. Moreover, the location and breadth of the water release from this model give insights into the position and breadth of the SAFS. Such a mantle source of water also likely plays a role in the occurrence of Non-Volcanic Tremor (NVT) that has been reported along the SAFS in central California. This process of water release from mantle depths could also mobilize mantle serpentinite from the wedge above the dehydration front, permitting upward emplacement of serpentinite bodies by faulting or by diapiric ascent. Specimens of serpentinite collected from tectonically emplaced serpentinite blocks along the SAFS show mineralogical and structural evidence of high fluid pressures during ascent from depth. Serpentinite dehydration may also lead to tectonic mobility along other plate boundaries that succeed subduction, such as other continental transforms, collision zones, or along present-day subduction zones where spreading centers are subducting.

  11. In Vitro Enzymatic Depolymerization of Lignin with Release of Syringyl, Guaiacyl, and Tricin Units

    PubMed Central

    Gall, Daniel L.; Kontur, Wayne S.; Lan, Wu; Kim, Hoon; Li, Yanding; Ralph, John

    2017-01-01

    ABSTRACT New environmentally sound technologies are needed to derive valuable compounds from renewable resources. Lignin, an abundant polymer in terrestrial plants comprised predominantly of guaiacyl and syringyl monoaromatic phenylpropanoid units, is a potential natural source of aromatic compounds. In addition, the plant secondary metabolite tricin is a recently discovered and moderately abundant flavonoid in grasses. The most prevalent interunit linkage between guaiacyl, syringyl, and tricin units is the β-ether linkage. Previous studies have shown that bacterial β-etherase pathway enzymes catalyze glutathione-dependent cleavage of β-ether bonds in dimeric β-ether lignin model compounds. To date, however, it remains unclear whether the known β-etherase enzymes are active on lignin polymers. Here we report on enzymes that catalyze β-ether cleavage from bona fide lignin, under conditions that recycle the cosubstrates NAD+ and glutathione. Guaiacyl, syringyl, and tricin derivatives were identified as reaction products when different model compounds or lignin fractions were used as substrates. These results demonstrate an in vitro enzymatic system that can recycle cosubstrates while releasing aromatic monomers from model compounds as well as natural and engineered lignin oligomers. These findings can improve the ability to produce valuable aromatic compounds from a renewable resource like lignin. IMPORTANCE Many bacteria are predicted to contain enzymes that could convert renewable carbon sources into substitutes for compounds that are derived from petroleum. The β-etherase pathway present in sphingomonad bacteria could cleave the abundant β–O–4-aryl ether bonds in plant lignin, releasing a biobased source of aromatic compounds for the chemical industry. However, the activity of these enzymes on the complex aromatic oligomers found in plant lignin is unknown. 
Here we demonstrate biodegradation of lignin polymers using a minimal set of β-etherase pathway enzymes, the ability to recycle needed cofactors (glutathione and NAD+) in vitro, and the release of guaiacyl, syringyl, and tricin as depolymerized products from lignin. These observations provide critical evidence for the use and future optimization of these bacterial β-etherase pathway enzymes for industrial-level biotechnological applications designed to derive high-value monomeric aromatic compounds from lignin. PMID:29180366

  12. LIGHT NONAQUEOUS-PHASE LIQUID HYDROCARBON WEATHERING AT SOME JP-4 FUEL RELEASE SITES

    EPA Science Inventory

    A fuel weathering study was conducted for database entries to estimate natural light nonaqueous-phase
    liquid weathering and source-term reduction rates for use in natural attenuation models. A range of BTEX
    weathering rates from mobile LNAPL plumes at eight field sites with...

  13. How Soft Gamma Repeaters Might Make Fast Radio Bursts

    NASA Astrophysics Data System (ADS)

    Katz, J. I.

    2016-08-01

    There are several phenomenological similarities between soft gamma repeaters (SGRs) and fast radio bursts (FRBs), including duty factors, timescales, and repetition. The sudden release of magnetic energy in a neutron star magnetosphere, as in popular models of SGRs, can meet the energy requirements of FRBs, but requires both the presence of magnetospheric plasma, in order for dissipation to occur in a transparent region, and a mechanism for releasing much of that energy quickly. FRB sources and SGRs are distinguished by long-lived (up to thousands of years) current-carrying coronal arches remaining from the formation of the young neutron star, and their decay ends the phase of SGR/AXP/FRB activity even though “magnetar” fields may persist. A runaway increase in resistance when the current density exceeds a threshold releases magnetostatic energy in a sudden burst and produces high-brightness GHz FRB emission by a coherent process. SGRs are produced when the released energy thermalizes as an equilibrium pair plasma. The failures of some alternative FRB models and the non-detection of SGR 1806-20 at radio frequencies are discussed in the appendices.

  14. HOW SOFT GAMMA REPEATERS MIGHT MAKE FAST RADIO BURSTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, J. I., E-mail: katz@wuphys.wustl.edu

    2016-08-01

    There are several phenomenological similarities between soft gamma repeaters (SGRs) and fast radio bursts (FRBs), including duty factors, timescales, and repetition. The sudden release of magnetic energy in a neutron star magnetosphere, as in popular models of SGRs, can meet the energy requirements of FRBs, but requires both the presence of magnetospheric plasma, in order for dissipation to occur in a transparent region, and a mechanism for releasing much of that energy quickly. FRB sources and SGRs are distinguished by long-lived (up to thousands of years) current-carrying coronal arches remaining from the formation of the young neutron star, and their decay ends the phase of SGR/AXP/FRB activity even though “magnetar” fields may persist. A runaway increase in resistance when the current density exceeds a threshold releases magnetostatic energy in a sudden burst and produces high-brightness GHz FRB emission by a coherent process. SGRs are produced when the released energy thermalizes as an equilibrium pair plasma. The failures of some alternative FRB models and the non-detection of SGR 1806-20 at radio frequencies are discussed in the appendices.

  15. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation it has not been possible to directly determine the emissions of radioactive material. However, during the following days and weeks radionuclides of 137-Caesium and 131-Iodine (amongst others) were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to the location, time and meteorological conditions of the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of University of Roma Tre (Rome, Italy) and subsequently to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, the time-dependent source-term for the fourteen days following the accident has been estimated using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
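The inversion idea this record describes, concentrations linearly related to unknown release rates through a transport model, can be sketched with a hypothetical least-squares recovery. The matrix, time intervals and release values below are illustrative placeholders, not Flexpart output or study data:

```python
# Hypothetical sketch: recovering a time-dependent source term q from observed
# concentrations c, assuming the linear source-receptor relation c = M @ q,
# where M[i, j] is the model-computed sensitivity of measurement i to a unit
# release in time interval j. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_intervals = 40, 14            # e.g. 14 daily release intervals
M = rng.random((n_obs, n_intervals))   # sensitivity matrix (transport model)
q_true = np.zeros(n_intervals)
q_true[2:6] = [5.0, 8.0, 3.0, 1.0]     # assumed release pulse (arbitrary units)
c_obs = M @ q_true                     # noise-free synthetic measurements

# Ordinary least squares recovers the source term exactly for noise-free,
# model-consistent data; real retrievals add noise models and regularization.
q_est, *_ = np.linalg.lstsq(M, c_obs, rcond=None)
print(np.allclose(q_est, q_true, atol=1e-6))
```

With noisy measurements the same linear system is typically solved with non-negativity constraints or Tikhonov regularization rather than a bare least-squares fit.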

  16. Testing high resolution numerical models for analysis of contaminant storage and release from low permeability zones

    NASA Astrophysics Data System (ADS)

    Chapman, Steven W.; Parker, Beth L.; Sale, Tom C.; Doner, Lee Ann

    2012-08-01

    It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents, where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients, and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores the validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high-resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers, which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean-water flushing, and 2) the two-layer analytical solution of Sale et al. (2008), a relatively simple scenario with an aquifer and an underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization is applied to represent the problem geometry, resolve flow fields, capture advective transport in the sands and diffusive transfer with low permeability layers, and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models: capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making.
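The small-scale diffusion gradients the paper emphasizes can be illustrated with the textbook semi-infinite-slab solution C(z,t) = C0 · erfc(z / (2√(Dt))) for loading of a low-permeability layer from a constant-concentration aquifer boundary. The diffusion coefficient and depths below are illustrative, not values from the tank experiment:

```python
# Minimal sketch of diffusive "loading" of a clay layer from an overlying
# aquifer held at constant concentration C0 (semi-infinite slab solution).
import math

def profile(z_m, t_s, c0=1.0, d_eff=5e-10):
    """Relative concentration at depth z (m) into a low-permeability layer
    after time t (s), for a constant-concentration boundary at z = 0."""
    if t_s <= 0:
        return 0.0
    return c0 * math.erfc(z_m / (2.0 * math.sqrt(d_eff * t_s)))

year = 365.25 * 24 * 3600.0  # seconds per year
for z in (0.05, 0.2, 0.5):   # depths into the clay (m), illustrative
    print(f"z = {z} m after 10 years: C/C0 = {profile(z, 10 * year):.3f}")
```

The steep decay of C/C0 over a few decimetres is why the paper argues that grid resolution much finer than common practice is needed to capture mass stored in such zones.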

  17. Kinetic Modeling of Slow Energy Release in Non-Ideal Carbon Rich Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P; Fried, L; Glaesemann, K

    2006-06-20

    We present here the first self-consistent kinetic-based model for long time-scale energy release in detonation waves in the non-ideal explosive LX-17. Non-ideal, insensitive carbon-rich explosives, such as those based on TATB, are believed to have a significant late-time slow release of energy. One proposed source of this energy is diffusion-limited growth of carbon clusters. In this paper we consider the late-time energy release problem in detonation waves using the thermochemical code CHEETAH linked to a multidimensional ALE hydrodynamics model. The linked CHEETAH-ALE model treats slowly reacting chemical species using kinetic rate laws, with chemical equilibrium assumed for species coupled via fast time-scale reactions. In the model presented here we include separate rate equations for the transformation of the unreacted explosive to product gases and for the growth of a small particulate form of condensed graphite to a large particulate form. The small particulate graphite is assumed to be in chemical equilibrium with the gaseous species, allowing for coupling between the instantaneous thermodynamic state and the production of graphite clusters. For the explosive burn rate a pressure-dependent rate law was used. Low-pressure freezing of the gas species mass fractions was also included to account for regions where the kinetic coupling rates become longer than the hydrodynamic time-scales. The model rate parameters were calibrated using cylinder and rate-stick experimental data. Excellent long-time agreement and size-effect results were achieved.

  18. Modeling Human Exposure to Indoor Contaminants: External Source to Body Tissues.

    PubMed

    Webster, Eva M; Qian, Hua; Mackay, Donald; Christensen, Rebecca D; Tietjen, Britta; Zaleski, Rosemary

    2016-08-16

    Information on human indoor exposure is necessary to assess the potential risk to individuals from many chemicals of interest. Dynamic indoor and human physiologically based pharmacokinetic (PBPK) models of the distribution of nonionizing organic chemical concentrations in indoor environments, resulting in delivered tissue doses, are developed, described and tested. The Indoor model successfully reproduced independently measured, reported time-dependent air concentrations of chloroform released during showering and of 2-butoxyethanol following use of a volatile surface cleaner. The Indoor model predictions were also comparable to those from a higher-tier consumer model (ConsExpo 4.1) for the surface cleaner scenario. The PBPK model successfully reproduced observed chloroform exhaled-air concentrations resulting from an inhalation exposure. Fugacity-based modeling provided a seamless description of the partitioning, fluxes, accumulation and release of the chemical in indoor media and tissues of the exposed subject. This has the potential to assist in health risk assessments, provided that appropriate physical/chemical property, usage characteristics, and toxicological information are available.
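As a much simpler stand-in for the fugacity-based Indoor model (this is not the authors' formulation), a one-box indoor air balance dC/dt = E/V − kC shows how a constant emission approaches steady state; the emission rate, room volume and air-exchange rate are invented:

```python
# Hedged sketch: well-mixed single-room air model with constant emission E,
# room volume V and air-exchange rate k; analytical solution of dC/dt = E/V - k*C.
# All parameter values are illustrative, not from the paper.
import math

def box_concentration(t_h, emission_mg_h=10.0, volume_m3=30.0, ach_per_h=0.5, c0=0.0):
    """Indoor air concentration (mg/m3) after t hours of constant emission."""
    c_ss = emission_mg_h / (volume_m3 * ach_per_h)   # steady state E/(V*k)
    return c_ss + (c0 - c_ss) * math.exp(-ach_per_h * t_h)

print(round(box_concentration(24.0), 3))  # essentially at the E/(V*k) steady state
```

The fugacity formulation in the paper generalizes this single balance to coupled indoor media and body tissues, with partitioning between compartments.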

  19. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John

    Aeolus is an efficient three-dimensional computational fluid dynamics code based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method, and solves a scalar transport equation for temperature using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds Average Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities, including a decay chain model and an explosive Radiological Dispersal Device (RDD) source term, are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  20. Theory for Deducing Volcanic Activity From Size Distributions in Plinian Pyroclastic Fall Deposits

    NASA Astrophysics Data System (ADS)

    Iriyama, Yu; Toramaru, Atsushi; Yamamoto, Tetsuo

    2018-03-01

    Stratigraphic variation in the grain size distribution (GSD) of plinian pyroclastic fall deposits reflects volcanic activity. To extract information on volcanic activity from the analyses of deposits, we propose a one-dimensional theory that provides a formula connecting the sediment GSD to the source GSD. As the simplest case, we develop a constant-source model (CS model), in which the source GSD and the source height are constant during the duration of release of particles. We assume power laws of particle radii for the terminal fall velocity and the source GSD. The CS model can describe the overall (i.e., entire vertically variable) feature of the GSD structure of the sediment. It is shown that the GSD structure is characterized by three parameters: the duration of supply of particles to the source scaled by the fall time of the largest particle, ts/tM, and the power indices p of the terminal fall velocity and q of the source GSD. We apply the CS model to samples of the Worzel D ash layer and compare the sediment GSD structure calculated by using the CS model to the observed structure. The results show that the CS model reproduces the overall structure of the observed GSD. We estimate the duration of the eruption and the q value of the source GSD. Furthermore, a careful comparison of the observed and calculated GSDs reveals a new interpretation of the original sediment GSD structure of the Worzel D ash layer.
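The scaling behind the CS-model parameters can be sketched under the stated power-law assumption: with terminal velocity v(r) = v_M (r/r_M)^p and a constant source height H, a particle's fall time is t(r) = H/v(r) = t_M (r/r_M)^(−p), so finer grains arrive progressively later and the scaled supply duration ts/tM controls how the sediment GSD varies with depth. The radii and exponent below are hypothetical:

```python
# Illustrative scaling only: fall time of a particle of radius r under a
# power-law terminal velocity v(r) ~ r**p, normalized so that the largest
# particle (r = r_max) takes t_max to fall. Values are hypothetical.
def fall_time(r_m, r_max=1e-2, t_max=1.0, p=0.5):
    """Scaled fall time t(r) = t_max * (r / r_max)**(-p)."""
    return t_max * (r_m / r_max) ** (-p)

for r in (1e-2, 1e-3, 1e-4):
    print(f"r = {r:.0e} m: t/t_M = {fall_time(r):.2f}")
```

A grain ten times smaller thus takes (with p = 0.5) about 3.2 times longer to fall, which is the sorting mechanism the sediment GSD records.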

  1. Reproducibility of Interferon Gamma (IFN-γ) Release Assays. A Systematic Review

    PubMed Central

    Tagmouti, Saloua; Slater, Madeline; Benedetti, Andrea; Kik, Sandra V.; Banaei, Niaz; Cattamanchi, Adithya; Metcalfe, John; Dowdy, David; van Zyl Smit, Richard; Dendukuri, Nandini

    2014-01-01

    Rationale: Interferon gamma (IFN-γ) release assays for latent tuberculosis infection result in a larger-than-expected number of conversions and reversions in occupational screening programs, and reproducibility of test results is a concern. Objectives: Knowledge of the relative contribution and extent of the individual sources of variability (immunological, preanalytical, or analytical) could help optimize testing protocols. Methods: We performed a systematic review of studies published by October 2013 on all potential sources of variability of commercial IFN-γ release assays (QuantiFERON-TB Gold In-Tube and T-SPOT.TB). The included studies assessed test variability under identical conditions and under different conditions (the latter both overall and stratified by individual sources of variability). Linear mixed effects models were used to estimate within-subject SD. Measurements and Main Results: We identified a total of 26 articles, including 7 studies analyzing variability under the same conditions, 10 studies analyzing variability with repeat testing over time under different conditions, and 19 studies reporting individual sources of variability. Most data were on QuantiFERON (only three studies on T-SPOT.TB). A considerable number of conversions and reversions were seen around the manufacturer-recommended cut-point. The estimated range of variability of IFN-γ response in QuantiFERON under identical conditions was ±0.47 IU/ml (coefficient of variation, 13%) and ±0.26 IU/ml (30%) for individuals with an initial IFN-γ response in the borderline range (0.25–0.80 IU/ml). The estimated range of variability in noncontrolled settings was substantially larger (±1.4 IU/ml; 60%). Blood volume inoculated into QuantiFERON tubes and preanalytic delay were identified as key sources of variability. 
Conclusions: This systematic review shows substantial variability with repeat IFN-γ release assay testing, even under identical conditions, suggesting that reversions and conversions around the existing cut-point should be interpreted with caution. PMID:25188809
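The practical consequence, results near the cut-point flipping on retest, can be sketched with a simple check against the QuantiFERON cut-point (commonly 0.35 IU/ml); the example values are invented, and the ±0.47 IU/ml figure is the review's estimated variability under identical conditions:

```python
# Hypothetical illustration of conversion/reversion around a fixed cut-point.
def crosses_cutpoint(baseline_iu_ml, repeat_iu_ml, cut=0.35):
    """True if a repeat test flips the qualitative result (conversion/reversion)."""
    return (baseline_iu_ml >= cut) != (repeat_iu_ml >= cut)

# A borderline 0.40 IU/ml result re-measured at 0.30 IU/ml - a shift well
# within the reported +/-0.47 IU/ml variability - reads as a reversion.
print(crosses_cutpoint(0.40, 0.30))  # → True
```

This is why the review recommends caution when interpreting conversions and reversions whose baseline responses fall in the borderline range.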

  2. CrossWater - Modelling micropollutant loads from different sources in the Rhine basin

    NASA Astrophysics Data System (ADS)

    Moser, Andreas; Bader, Hans-Peter; Fenicia, Fabrizio; Scheidegger, Ruth; Stamm, Christian

    2015-04-01

    The contamination of fresh surface waters with micropollutants originating from various sources is a growing environmental issue. The challenges for effective political regulation are numerous, particularly for international water basins. One prerequisite for effective management is knowledge of water quality across different parts of a basin. In this study within the Rhine basin, the spatial patterns of micropollutant loads and concentrations from different use classes are investigated with a mass flow analysis and compared to the established territorial jurisdictions on micropollutants and water quality. The source area of micropollutants depends on the specific use of a compound. The focus of this study is on i) herbicides from agricultural land use, ii) biocides from material protection on buildings and iii) human pharmaceuticals from households. The total mass of micropollutants available for release to the stream network is estimated based on statistical application and consumption data. Based on GIS data of agricultural land use, vector data of buildings, and wastewater treatment plant (WWTP) locations, respectively, the available mass of micropollutants is spatially distributed over the catchment areas. The actual release of micropollutants to the stream network is calculated with empirical loss rates related to river discharge for agricultural herbicides and to precipitation for biocides; for the pharmaceuticals the release is coupled to metabolism rates and elimination rates in WWTPs. For a first approximation, national sales are downscaled to the catchment level to specify the available mass for selected model compounds (agricultural herbicide: Isoproturon; biocide: Carbendazim; human pharmaceuticals: Carbamazepine and Diclofenac). The available mass of herbicides and biocides is multiplied with empirical loss rates independent of discharge or precipitation to calculate the loads.
The release of the pharmaceuticals was calculated by multiplying average consumption numbers with the person equivalents of the WWTPs and the elimination rates. The comparison of pollutant loads to 7-day composite samples of all compounds at 15 locations along the Rhine yields plausible results.
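The load bookkeeping described above can be sketched as two hypothetical helpers; the function names, rates and masses are placeholders, not values from the study:

```python
# Illustrative load accounting for the two pathways described in the record.
def diffuse_load_kg(available_mass_kg, loss_rate):
    """Herbicide/biocide load: available mass times an empirical loss rate."""
    return available_mass_kg * loss_rate

def wwtp_load_kg(consumption_g_cap_yr, person_equivalents, excreted_fraction, wwtp_elimination):
    """Pharmaceutical load (kg/yr): per-capita consumption scaled by the
    WWTP's person equivalents, the fraction excreted unchanged, and the
    fraction surviving WWTP elimination."""
    mass_kg = consumption_g_cap_yr * person_equivalents / 1000.0
    return mass_kg * excreted_fraction * (1.0 - wwtp_elimination)

print(diffuse_load_kg(500.0, 0.002))          # 1.0 kg reaches the stream network
print(wwtp_load_kg(0.5, 100_000, 0.3, 0.1))   # 13.5 kg/yr from one WWTP catchment
```

Summing such per-catchment loads along the river network is what allows comparison against the composite samples taken along the Rhine.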

  3. Atmospheric modeling of Mars CH4 subsurface clathrates releases mimicking SAM and 2003 Earth-based detections

    NASA Astrophysics Data System (ADS)

    Pla-García, J.; Rafkin, S. C.

    2017-12-01

    The aim of this work is to establish the amount of mixing during all martian seasons to test whether CH4 releases inside or outside of Gale crater are consistent with MSL-SAM observations. Several modeling scenarios were configured, including instantaneous and steady releases, both inside and outside the crater. A simulation to mimic the 2003 Earth-based detections (Mumma et al. 2009, or M09) was also performed. In the instantaneous-release-inside-Gale experiments, Ls270 was shown to be the fastest-mixing season, when air within and outside the crater was well mixed: all tracer mass inside the crater is diluted after just 8 hours. The mixing of near-surface crater air with the external environment during the rest of the year is potentially rapid but slower than at Ls270. In the instantaneous release outside Gale (NW) experiment, in just 12 hours the CH4 that makes it to the MSL landing location is diluted by six orders of magnitude. The timescale of mixing in the MRAMS experiments is on the order of 1 sol regardless of season, whereas the duration of the CH4 peak observed by SAM is 100 sols. Therefore, either there is a steady release inside the crater, or there is a very large steady release outside the crater. In the steady-release Gale experiments, the CH4 flux rate from the ground is 1.8 kg m-2 s-1 (derived from the clathrate fluxes of Gloesener et al. 2017) and it is not predictive. In these experiments, CH4 values 200 times lower than those detected by SAM are modeled around the MSL location. There are CH4 concentration variations of orders of magnitude depending on the hour, so the timing of SAM measurements is important. With a larger (but more distant) outside-crater release area compared to the inside case, similar CH4 values around MSL are modeled, so distance to the source is important. In the steady experiments mimicking the M09 detection release area, CH4 values only 12 times lower than those detected by SAM are modeled around MSL. The highest value in the M09 modeled scenario (0.6 ppbv) is reached at Ls270.
This value is the highest of all modeled experiments. With our initial conditions (flux rates, release area size and distance to MSL), SAM should not be able (or would find it very difficult) to detect CH4, but if we multiply the flux by 12, increase the release area or move it closer to MSL (or all of the above), it may be possible to get CH4 values that SAM could detect regardless of where the release comes from: inside, close outside, or far away from Gale.

  4. National Atmospheric Release Advisory Center dispersion modeling of the Full-scale Radiological Dispersal device (FSRDD) field trials

    DOE PAGES

    Neuscamman, Stephanie J.; Yu, Kristen L.

    2016-05-01

    The results of the National Atmospheric Release Advisory Center (NARAC) model simulations are compared to measured data from the Full-Scale Radiological Dispersal Device (FSRDD) field trials. The series of explosive radiological dispersal device (RDD) experiments was conducted in 2012 by Defence Research and Development Canada (DRDC) and collaborating organizations. During the trials, a wealth of data was collected, including a variety of deposition and air concentration measurements. The experiments were conducted with one of the stated goals being to provide measurements to atmospheric dispersion modelers; these measurements can be used to facilitate important model validation studies. For this study, meteorological observations recorded during the tests are input to the diagnostic meteorological model, ADAPT, which provides 3-D, time-varying mean wind and turbulence fields to the LODI dispersion model. LODI concentration and deposition results are compared to the measured data, and the sensitivity of the model results to changes in input conditions (such as the particle activity size distribution of the source) and model physics (such as the rise of the buoyant cloud of explosive products) is explored. The NARAC simulations predicted the experimentally measured deposition results reasonably well considering the complexity of the release. Lastly, changes to the activity size distribution of the modeled particles can improve the agreement of the model results with measurement.

  5. Chemical transport model simulations of organic aerosol in southern California: model evaluation and gasoline and diesel source contributions

    NASA Astrophysics Data System (ADS)

    Jathar, Shantanu H.; Woody, Matthew; Pye, Havala O. T.; Baker, Kirk R.; Robinson, Allen L.

    2017-03-01

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA-SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data. Mobile sources were predicted to contribute 30-40 % of the OA in southern California (half of which was SOA), making mobile sources the single largest source contributor to OA in southern California. The remainder of the OA was attributed to non-mobile anthropogenic sources (e.g., cooking, biomass burning), with biogenic sources contributing less than 5 % of the total OA.
Gasoline sources were predicted to contribute about 13 times more OA than diesel sources; this difference was driven by differences in SOA production. Model predictions highlighted the need to better constrain multi-generational oxidation reactions in chemical transport models.

  6. The Chandra Source Catalog 2.0: Spectral Properties

    NASA Astrophysics Data System (ADS)

    McCollough, Michael L.; Siemiginowska, Aneta; Burke, Douglas; Nowak, Michael A.; Primini, Francis Anthony; Laurino, Omar; Nguyen, Dan T.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula; Chandra Source Catalog Team

    2018-01-01

    The second release of the Chandra Source Catalog (CSC) contains all sources identified from sixteen years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. The sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package) using wstat as the fit statistic and the Bayesian draws method to determine errors. Three models were fit to each source: an absorbed power-law, a blackbody, and Bremsstrahlung emission. The fitted parameter values for the power-law, blackbody, and Bremsstrahlung models were included in the catalog, along with the calculated flux for each model. The CSC also provides the source energy fluxes computed from the normalizations of predefined absorbed power-law, black-body, Bremsstrahlung, and APEC models needed to match the observed net X-ray counts. For sources that have been observed multiple times, a Bayesian Blocks analysis will have been performed (see the Primini et al. poster) and a joint fit of the mentioned spectral models will have been performed for the most significant block. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard). This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
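
    The hardness ratios mentioned above are conventionally computed from the net counts in two energy bands; a minimal sketch assuming the common (H - S)/(H + S) convention (the band edges and exact convention of the CSC pipeline may differ):

    ```python
    # Hardness ratio between two energy bands, HR = (H - S) / (H + S),
    # computed from net counts in each band. Band definitions are illustrative,
    # not the exact CSC bands.
    def hardness_ratio(soft_counts, hard_counts):
        total = soft_counts + hard_counts
        if total == 0:
            raise ValueError("no counts in either band")
        return (hard_counts - soft_counts) / total

    # Example: a source with 80 net counts in the soft band and 120 in the hard band
    hr = hardness_ratio(80.0, 120.0)  # -> 0.2, i.e. a slightly hard source
    ```

    A ratio of +1 means all counts are in the hard band, -1 all in the soft band, and 0 an even split.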

  7. An Extensible NetLogo Model for Visualizing Message Routing Protocols

    DTIC Science & Technology

    2017-08-01

    the hard sciences to the social sciences to computer-generated art. NetLogo represents the world as a set of...describe the model is shown here; for the supporting methods, refer to the source code. Approved for public release; distribution is unlimited. ...if ticks - last-inject > time-to-inject [inject] if run# > #runs [stop] end Next, we present some basic statistics collected for the

  8. Modeling and identifying the sources of radiocesium contamination in separate sewerage systems.

    PubMed

    Pratama, Mochamad Adhiraga; Yoneda, Minoru; Yamashiki, Yosuke; Shimada, Yoko; Matsui, Yasuto

    2018-05-01

    The Fukushima Dai-ichi nuclear power plant accident released radiocesium in large amounts. The released radionuclides contaminated much of the surrounding environment, including sewers in urban areas of Fukushima prefecture. In this study, we attempted to identify and quantify the sources of radiocesium contamination in separate sewerage systems and developed a compartment model based on the Radionuclide Migration in Urban Environments and Drainage Systems (MUD) model. Measurements of the time-dependent radiocesium concentration in sewer sludge combined with meteorological, demographic, and radiocesium dietary intake data indicated that rainfall-derived inflow and infiltration (RDII) and human excretion were the chief contributors of radiocesium contamination in a separate sewerage system. The quantities of contamination derived from RDII and human excretion were calculated and used in the modified MUD model to simulate radiocesium contamination in sewers in three urban areas in Fukushima prefecture: Fukushima, Koriyama, and Nihonmatsu Cities. The Nash efficiency coefficient (0.88-0.92) and determination coefficient (0.89-0.93) calculated in an evaluation of our compartment model indicated that the model produced satisfactory results. We also used the model to estimate the total volume of sludge with radiocesium concentrations in excess of the clearance level, based on the number of months elapsed after the accident. Estimations by our model suggested that wastewater treatment plants (WWTPs) in Fukushima, Koriyama, and Nihonmatsu generated about 1,750,000 m³ of radioactive sludge in total, a level in good agreement with the real data. Copyright © 2017 Elsevier B.V. All rights reserved.
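
    The Nash efficiency coefficient quoted above (0.88-0.92) compares a simulated series against observations; a minimal sketch with invented data:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe model efficiency: 1 is a perfect fit, 0 means the
        model is no better than predicting the observed mean."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)

    # Invented radiocesium concentrations (observed vs. simulated)
    obs = [10.0, 12.0, 15.0, 11.0, 9.0]
    sim = [10.5, 11.5, 14.0, 11.2, 9.3]
    nse = nash_sutcliffe(obs, sim)  # close to 1 for a good fit
    ```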

  9. Assessment of Heterotrophic Growth Supported by Soluble Microbial Products in Anammox Biofilm using Multidimensional Modeling

    PubMed Central

    Liu, Yiwen; Sun, Jing; Peng, Lai; Wang, Dongbo; Dai, Xiaohu; Ni, Bing-Jie

    2016-01-01

    Anaerobic ammonium oxidation (anammox) is known to autotrophically convert ammonium to dinitrogen gas with nitrite as the electron acceptor, but little is known about the microbial products released and how they relate to heterotrophic growth in anammox systems. In this work, we applied a mathematical model to assess the heterotrophic growth supported by three key microbial products produced by bacteria in anammox biofilm (utilization associated products (UAP), biomass associated products (BAP), and decay released substrate). Both one-dimensional and two-dimensional numerical biofilm models were developed to describe the development of anammox biofilm as a function of the multiple bacteria–substrate interactions. Model simulations show that UAP of anammox is the main organic carbon source for heterotrophs. Heterotrophs are mainly dominant at the surface of the anammox biofilm, with a small fraction inside the biofilm. The 1-D model is sufficient to describe the main substrate concentrations/fluxes within the anammox biofilm, while the 2-D model can give a more detailed biomass distribution. Heterotrophic growth on UAP is mainly present at the outside of the anammox biofilm, growth on BAP (HetB) is present throughout the biofilm, while growth on decay released substrate (HetD) is mainly located in the inner layers of the biofilm. PMID:27273460

  10. Hydrodynamic modelling of the microbial water quality in a drinking water source as input for risk reduction management

    NASA Astrophysics Data System (ADS)

    Sokolova, Ekaterina; Pettersson, Thomas J. R.; Bergstedt, Olof; Hermansson, Malte

    2013-08-01

    To mitigate the faecal contamination of drinking water sources and, consequently, to prevent waterborne disease outbreaks, an estimation of the contribution from different sources to the total faecal contamination at the raw water intake of a drinking water treatment plant is needed. The aim of this article was to estimate how much different sources contributed to the faecal contamination at the water intake in a drinking water source, Lake Rådasjön in Sweden. For this purpose, the fate and transport of the faecal indicator Escherichia coli within Lake Rådasjön were simulated by a three-dimensional hydrodynamic model. The calibrated hydrodynamic model described the measured vertical temperature distribution in the lake well (Pearson correlation coefficient 0.99). The data on the E. coli load from the identified contamination sources were gathered, and the fate and transport of E. coli released from these sources within the lake were simulated using the developed hydrodynamic model, taking the decay of the E. coli into account. The obtained modelling results were compared to the observed E. coli concentrations at the water intake. The results illustrated that the sources that contributed the most to the faecal contamination at the water intake in Lake Rådasjön were the discharges from the on-site sewers and the main inflow to the lake - the river Mölndalsån. Based on the modelling results, recommendations for water producers were formulated. The study demonstrated that this modelling approach is a useful tool for estimating the contribution from different sources to the faecal contamination at the water intake of a drinking water treatment plant and provided decision-support information for the reduction of risks posed to the drinking water source.
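
    The decay of E. coli during transport is commonly parameterized as first-order die-off, often expressed through a T90 time (time for a 90% reduction); the formulation and values below are illustrative conventions, not the calibrated decay model of the study:

    ```python
    import math

    def decayed_concentration(c0, t90_hours, hours):
        """First-order die-off: C(t) = C0 * exp(-k t) with k = ln(10)/T90,
        where T90 is the time for a 90% reduction. The T90 used here is an
        assumed illustrative value, not a calibrated one."""
        k = math.log(10.0) / t90_hours
        return c0 * math.exp(-k * hours)

    # E. coli released at 1e4 CFU/100 mL, assumed T90 of 48 h:
    c = decayed_concentration(1.0e4, 48.0, 48.0)  # one T90 later -> 1e3
    ```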

  11. A study of the sources and sinks of methane and methyl chloroform using a global three-dimensional Lagrangian tropospheric tracer transport model

    NASA Technical Reports Server (NTRS)

    Taylor, John A.; Brasseur, G. P.; Zimmerman, P. R.; Cicerone, R. J.

    1991-01-01

    Sources and sinks of methane and methyl chloroform are investigated using a global three-dimensional Lagrangian tropospheric tracer transport model with parameterized hydroxyl and temperature fields. Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. The most significant difference between the two models was in the predictions of methane fluxes over China and South East Asia, the location of most of the world's rice paddies, indicating that either the assumption that a uniform fraction of NPP is converted to methane is not valid for rice paddies, or that NPP is underestimated for rice paddies, or that present methane emission estimates from rice paddies are too high.
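
    The first source function model, NPP-proportional emission, amounts to scaling gridded NPP so that the fluxes sum to a prescribed global release; a minimal sketch with invented numbers:

    ```python
    import numpy as np

    def methane_flux_from_npp(npp, global_ch4_total):
        """First source model of the paper: methane emitted in proportion to
        net primary productivity, scaled so the fluxes sum to the global total."""
        npp = np.asarray(npp, dtype=float)
        return global_ch4_total * npp / npp.sum()

    npp = np.array([10.0, 30.0, 60.0])        # arbitrary grid-cell NPP values
    flux = methane_flux_from_npp(npp, 500.0)  # split as 50, 150, 300
    ```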

  12. Inhalation exposure to cleaning products: application of a two-zone model.

    PubMed

    Earnest, C Matt; Corsi, Richard L

    2013-01-01

    In this study, modifications were made to previously applied two-zone models to address important factors that can affect exposures during cleaning tasks. Specifically, we expand on previous applications of the two-zone model by (1) introducing the source in discrete elements (source-cells) as opposed to a complete instantaneous release, (2) placing source cells in both the inner (near person) and outer zones concurrently, (3) treating each source cell as an independent mixture of multiple constituents, and (4) tracking the time-varying liquid concentration and emission rate of each constituent in each source cell. Three experiments were performed in an environmentally controlled chamber with a thermal mannequin and a simplified pure chemical source to simulate emissions from a cleaning product. Gas phase concentration measurements were taken in the bulk air and in the breathing zone of the mannequin to evaluate the model. The mean ratio of the integrated concentration in the mannequin's breathing zone to the concentration in the outer zone was 4.3 (standard deviation, σ = 1.6). The mean ratio of measured concentration in the breathing zone to predicted concentrations in the inner zone was 0.81 (σ = 0.16). Intake fractions ranged from 1.9 × 10⁻³ to 2.7 × 10⁻³. Model results reasonably predict those of previous exposure monitoring studies and indicate the inadequacy of well-mixed single-zone model applications for some but not all cleaning events.
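
    The underlying two-zone (near-field/far-field) mass balance can be sketched for a single constant source as below; the discrete source-cell treatment and time-varying emission model of the paper are not reproduced, and all parameter values are illustrative:

    ```python
    # Minimal near-field/far-field (two-zone) mass balance with a constant
    # emission rate G in the inner zone and clean supply air.
    def two_zone(G, beta, Q, V_n, V_f, t_end, dt=0.1):
        """Forward-Euler integration of:
           V_n dC_n/dt = G + beta*(C_f - C_n)
           V_f dC_f/dt = beta*(C_n - C_f) - Q*C_f
        where beta is the inter-zone airflow and Q the room ventilation rate.
        Returns (C_near, C_far)."""
        c_n = c_f = 0.0
        t = 0.0
        while t < t_end:
            dc_n = (G + beta * (c_f - c_n)) / V_n
            dc_f = (beta * (c_n - c_f) - Q * c_f) / V_f
            c_n += dc_n * dt
            c_f += dc_f * dt
            t += dt
        return c_n, c_f

    # Illustrative values: G = 10 mg/min, beta = 5 m^3/min, Q = 20 m^3/min,
    # V_n = 2 m^3 (near field), V_f = 50 m^3 (room)
    c_n, c_f = two_zone(10.0, 5.0, 20.0, 2.0, 50.0, t_end=120.0)
    # At steady state: C_far -> G/Q = 0.5, C_near -> G/Q + G/beta = 2.5 mg/m^3
    ```

    The extra G/beta term in the near-field concentration is why the breathing zone exceeds the bulk air, consistent with the breathing-zone/outer-zone ratios reported above.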

  13. Decay of Bacteroidales genetic markers in relation to traditional fecal indicators for water quality modeling of drinking water sources.

    PubMed

    Sokolova, Ekaterina; Aström, Johan; Pettersson, Thomas J R; Bergstedt, Olof; Hermansson, Malte

    2012-01-17

    The implementation of microbial fecal source tracking (MST) methods in drinking water management is limited by the lack of knowledge on the transport and decay of host-specific genetic markers in water sources. To address these limitations, the decay and transport of human (BacH) and ruminant (BacR) fecal Bacteroidales 16S rRNA genetic markers in a drinking water source (Lake Rådasjön in Sweden) were simulated using a microbiological model coupled to a three-dimensional hydrodynamic model. The microbiological model was calibrated using data from outdoor microcosm trials performed in March, August, and November 2010 to determine the decay of BacH and BacR markers in relation to traditional fecal indicators. The microcosm trials indicated that the persistence of BacH and BacR in the microcosms was not significantly different from the persistence of traditional fecal indicators. The modeling of BacH and BacR transport within the lake illustrated that the highest levels of genetic markers at the raw water intakes were associated with human fecal sources (on-site sewers and emergency sewer overflow). This novel modeling approach improves the interpretation of MST data, especially when fecal pollution from the same host group is released into the water source from different sites in the catchment.

  14. Shoot litter breakdown and zinc dynamics of an aquatic plant, Schoenoplectus californicus.

    PubMed

    Arreghini, Silvana; de Cabo, Laura; Serafini, Roberto José María; Fabrizio de Iorio, Alicia

    2018-07-03

    Decomposition of plant debris is an important process in determining the structure and function of aquatic ecosystems. The aims were to find a mathematical model fitting the decomposition process of Schoenoplectus californicus shoots containing different Zn concentrations, to compare the decomposition rates, and to assess metal accumulation/mobilization during decomposition. A litterbag technique was applied with shoots containing three levels of Zn: collected from an unpolluted river (RIV) and from experimental populations at low (LoZn) and high (HiZn) Zn supply. The double exponential model explained S. californicus shoot decomposition. At first, a higher initial proportion of the refractory fraction in RIV detritus determined a lower decay rate, and until 68 days RIV and LoZn detritus behaved like a source of metal, releasing soluble/weakly bound zinc into the water; after 68 days, they became like a sink. However, HiZn detritus showed rapid release into the water during the first 8 days, changing to the sink condition up to 68 days, and then returning to the source condition up to 369 days. Knowledge of the role of detritus (sink/source) will allow defining a correct management of the vegetation used for zinc removal and will provide a valuable tool for environmental remediation and rehabilitation planning.
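
    The double exponential model partitions the litter into a labile and a refractory pool decaying at different rates; a minimal fitting sketch with synthetic, noise-free litterbag data (the pool fraction and rates are invented, not the paper's estimates):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, a, k1, k2):
        """Fraction of initial litter mass remaining: a labile pool (fraction a)
        decaying at rate k1 and a refractory pool (1 - a) decaying at rate k2."""
        return a * np.exp(-k1 * t) + (1.0 - a) * np.exp(-k2 * t)

    # Synthetic litterbag data (days, fraction of mass remaining), no noise
    t = np.array([0, 8, 30, 68, 150, 369], dtype=float)
    frac = double_exp(t, 0.4, 0.05, 0.002)  # invented "true" parameters

    popt, _ = curve_fit(double_exp, t, frac, p0=[0.5, 0.1, 0.01],
                        bounds=([0, 0, 0], [1, 1, 1]))
    a_hat, k1_hat, k2_hat = popt  # should recover ~0.4, ~0.05, ~0.002
    ```

    With real litterbag data, noise and the small number of sampling dates make the parameter estimates far less certain than in this noise-free illustration.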

  15. Bayesian inference of stress release models applied to some Italian seismogenic zones

    NASA Astrophysics Data System (ADS)

    Rotondi, R.; Varini, E.

    2007-04-01

    In this paper, we evaluate the seismic hazard of a region in southern Italy by analysing stress release models from the Bayesian viewpoint; the data are drawn from the most recent version of the parametric catalogue of Italian earthquakes. For estimation we use only the events up to 1992; we then forecast the date of the next event through a stochastic simulation method and compare the result with the shocks that actually occurred in the span 1993-2002. The original version of the stress release model, proposed by Vere-Jones in 1978, transposes Reid's elastic rebound theory into the framework of stochastic point processes. Since the 1990s, enriched versions of this model have appeared in the literature, applied to historical catalogues from China, Iran, and Japan; they envisage the identification of independent or interacting tectonic subunits constituting the region under examination. It follows that stress release models, designed for regional analyses, are evolving towards studies on fault segments, realizing some degree of convergence with those models that start from an individual fault and, considering the interactions with nearby segments, are driven to studies on a regional scale. The optimal performance of the models we consider depends on a set of choices, among which: the seismogenic region and possible subzones, the threshold magnitude, and the length of the time period. In this paper, we focus our attention on the influence of the subdivision of the region under examination into tectonic units; in the light of recent studies on the fault segmentation model of Italy, we propose a partition of Sannio-Matese-Ofanto-Irpinia, one of the most seismically active regions in southern Italy. The results show that the performance of the stress release models improves in terms of both fitting and forecasting when the region is split up into parts including new information about potential seismogenic sources.
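
    In the Vere-Jones formulation, the conditional intensity (hazard) of the point process is an exponential function of the accumulated stress, which builds linearly with time and drops at each event; a minimal sketch with illustrative parameter values:

    ```python
    import math

    def conditional_intensity(t, event_times, event_stress_drops,
                              mu=-2.0, nu=1.0, rho=0.1):
        """Stress release model hazard: lambda(t) = exp(mu + nu * X(t)), with
        X(t) = rho*t - S(t), where S(t) is the stress released by all events
        before t. The parameter values (mu, nu, rho) are illustrative only."""
        released = sum(s for te, s in zip(event_times, event_stress_drops)
                       if te < t)
        x = rho * t - released
        return math.exp(mu + nu * x)

    # Hazard grows between events and drops after each stress release:
    before = conditional_intensity(9.9, [10.0], [2.0])
    after = conditional_intensity(10.1, [10.0], [2.0])
    # after < before, because the event at t = 10 released stress
    ```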

  16. High-order scheme for the source-sink term in a one-dimensional water temperature model

    PubMed Central

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
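
    One time step of the two-step splitting can be sketched as below; the paper's high-order source-sink scheme is replaced here by a plain explicit update, so only the overall structure (source step, then Crank-Nicolson diffusion step) is illustrated, with zero-flux boundaries as an additional assumption:

    ```python
    import numpy as np

    def step_operator_splitting(T, dz, dt, kappa, source):
        """One step of a two-step splitting for dT/dt = kappa*d2T/dz2 + S(z).
        Step 1 applies the source-sink term (plain explicit update, standing in
        for the paper's high-order scheme); step 2 treats diffusion with
        Crank-Nicolson and insulated (zero-flux) boundaries."""
        n = T.size
        # Step 1: source-sink term (source in deg C per second)
        T = T + dt * source

        # Step 2: Crank-Nicolson, (I - r/2 L) T_new = (I + r/2 L) T_old
        r = kappa * dt / dz ** 2
        L = np.zeros((n, n))          # discrete Laplacian with zero-flux ends
        for i in range(n):
            if i > 0:
                L[i, i - 1] = 1.0
                L[i, i] -= 1.0
            if i < n - 1:
                L[i, i + 1] = 1.0
                L[i, i] -= 1.0
        A = np.eye(n) - 0.5 * r * L
        B = np.eye(n) + 0.5 * r * L
        return np.linalg.solve(A, B @ T)

    # Uniform 10 m column (dz = 0.5 m), uniform heating of 0.001 deg C/s,
    # one 60 s step: with zero-flux boundaries the profile warms uniformly.
    T = np.full(21, 15.0)
    T1 = step_operator_splitting(T, 0.5, 60.0, 1e-4, np.full(21, 0.001))
    # T1 is approximately 15.06 everywhere
    ```

    A dense solve is used for brevity; a production code would exploit the tridiagonal structure of A.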

  17. High-order scheme for the source-sink term in a one-dimensional water temperature model.

    PubMed

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.

  18. Nitric oxide bioavailability in the microcirculation: insights from mathematical models.

    PubMed

    Tsoukias, Nikolaos M

    2008-11-01

    Over the last 30 years nitric oxide (NO) has emerged as a key signaling molecule involved in a number of physiological functions, including in the regulation of microcirculatory tone. Despite significant scientific contributions, fundamental questions about NO's role in the microcirculation remain unanswered. Mathematical modeling can assist in investigations of microcirculatory NO physiology and address experimental limitations in quantifying vascular NO concentrations. The number of mathematical models investigating the fate of NO in the vasculature has increased over the last few years, and new models are continuously emerging, incorporating an increasing level of complexity and detail. Models investigate mechanisms that affect NO availability in health and disease. They examine the significance of NO release from nonendothelial sources, the effect of transient release, and the complex interaction of NO with other substances, such as heme-containing proteins and reactive oxygen species. Models are utilized to test and generate hypotheses for the mechanisms that regulate NO-dependent signaling in the microcirculation.

  19. Bayesian probabilistic approach for inverse source determination from limited and noisy chemical or biological sensor concentration measurements

    NASA Astrophysics Data System (ADS)

    Yee, Eugene

    2007-04-01

    Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. 
The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
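
    A random-walk Metropolis sampler over source location and emission rate can be sketched as below; the Gaussian kernel here stands in for the real source-receptor (adjoint) relationship, the priors are flat, and all data are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x_sensors, x0, q, sigma=1.0):
        """Toy source-receptor model: a Gaussian kernel stands in for the
        dispersion model; q is the emission rate, x0 the source location."""
        return q * np.exp(-0.5 * ((x_sensors - x0) / sigma) ** 2)

    # Synthetic data: true source at x0 = 2.0 with q = 5.0, noisy sensors
    xs = np.linspace(-5.0, 5.0, 11)
    data = forward(xs, 2.0, 5.0) + rng.normal(0.0, 0.1, xs.size)

    def log_posterior(theta, noise_sd=0.1):
        x0, q = theta
        if not (-5.0 < x0 < 5.0 and 0.0 < q < 20.0):   # flat priors
            return -np.inf
        resid = data - forward(xs, x0, q)
        return -0.5 * np.sum((resid / noise_sd) ** 2)  # Gaussian likelihood

    # Random-walk Metropolis over the source parameters (x0, q)
    theta = np.array([0.0, 1.0])
    logp = log_posterior(theta)
    samples = []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.1, 2)
        logp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject
            theta, logp = prop, logp_prop
        samples.append(theta.copy())
    post = np.array(samples[2000:])          # discard burn-in
    x0_est, q_est = post.mean(axis=0)        # should be near 2.0 and 5.0
    ```

    The spread of `post` around its mean is what delivers the uncertainty quantification emphasized in the abstract; in practice the likelihood would be evaluated through the adjoint source-receptor model rather than a closed-form kernel.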

  20. Predicting thermally stressful events in rivers with a strategy to evaluate management alternatives

    USGS Publications Warehouse

    Maloney, K.O.; Cole, J.C.; Schmid, M.

    2016-01-01

    Water temperature is an important factor in river ecology. Numerous models have been developed to predict river temperature. However, many were not designed to predict thermally stressful periods. Because such events are rare, traditionally applied analyses are inappropriate. Here, we developed two logistic regression models to predict thermally stressful events in the Delaware River at the US Geological Survey gage near Lordville, New York. One model predicted the probability of an event >20.0 °C, and a second predicted an event >22.2 °C. Both models were strong (independent test data sensitivity 0.94 and 1.00, specificity 0.96 and 0.96), predicting 63 of 67 events in the >20.0 °C model and all 15 events in the >22.2 °C model. Both showed negative relationships with released volume from the upstream Cannonsville Reservoir and positive relationships with the difference between air temperature and the previous day's water temperature at Lordville. We further predicted how increasing release volumes from Cannonsville Reservoir affected the probabilities of correctly predicted events. For the >20.0 °C model, an increase of 0.5 to a proportionally adjusted release (that accounts for other sources) resulted in 35.9% of events in the training data falling below cutoffs; increasing this adjustment by 1.0 resulted in 81.7% falling below cutoffs. For the >22.2 °C model, these adjustments resulted in 71.1% and 100.0% of events falling below cutoffs. Results from these analyses can help managers make informed decisions on alternative release scenarios.
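
    A logistic regression of event occurrence on the two predictors named above can be sketched as follows; the data are synthetic, the coefficients invented, and the fit uses plain gradient descent rather than the authors' procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic training data mimicking the two predictors in the abstract:
    # x1 = reservoir release volume (negative effect on an exceedance event),
    # x2 = air-water temperature difference (positive effect).
    n = 500
    x1 = rng.uniform(0.0, 2.0, n)
    x2 = rng.uniform(-5.0, 10.0, n)
    logit = -1.0 - 2.0 * x1 + 0.6 * x2           # invented "true" model
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

    # Logistic regression fit by plain gradient descent on the log-loss
    X = np.column_stack([np.ones(n), x1, x2])
    w = np.zeros(3)
    for _ in range(20000):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted event probability
        w -= 0.01 * (X.T @ (p - y)) / n          # gradient step

    # w[1] should be negative (more release -> fewer stressful events),
    # w[2] positive (larger temperature difference -> more events)
    ```

    With rare events, as the abstract notes, the intercept and decision cutoff matter as much as the coefficients; class imbalance handling is omitted from this sketch.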

  1. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves.

    PubMed

    Ripepe, M; Barfucci, G; De Angelis, S; Delle Donne, D; Lacanna, G; Marchetti, E

    2016-11-10

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere to form plumes rising several kilometers above eruptive vents, which can pose serious risks to human health and aviation even at several thousand kilometers from the volcanic source. However, even the most sophisticated atmospheric and eruptive plume dynamics models require input parameters such as the duration of the ejection phase and the total mass erupted to constrain the quantity of ash dispersed in the atmosphere and to efficiently evaluate the related hazard. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models.

  2. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves

    PubMed Central

    Ripepe, M.; Barfucci, G.; De Angelis, S.; Delle Donne, D.; Lacanna, G.; Marchetti, E.

    2016-01-01

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere to form plumes rising several kilometers above eruptive vents, which can pose serious risks to human health and aviation even at several thousand kilometers from the volcanic source. However, even the most sophisticated atmospheric and eruptive plume dynamics models require input parameters such as the duration of the ejection phase and the total mass erupted to constrain the quantity of ash dispersed in the atmosphere and to efficiently evaluate the related hazard. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models. PMID:27830768

  3. 26 CFR 514.8 - Release of excess tax withheld at source.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) REGULATIONS UNDER TAX CONVENTIONS FRANCE Withholding of Tax § 514.8 Release of excess tax withheld at source... in France, the withholding agent shall release and pay over to the person from whom the tax was... resident of France, or, in the case of a corporation, the owner was a French corporation; and (d) A...

  4. Potential release of fibers from burning carbon composites. [aircraft fires

    NASA Technical Reports Server (NTRS)

    Bell, V. L.

    1980-01-01

    A comprehensive experimental carbon fiber source program was conducted to determine the potential for the release of conductive carbon fibers from burning composites. Laboratory testing determined the relative importance of several parameters influencing the amounts of single fibers released, while large-scale aviation jet fuel pool fires provided realistic confirmation of the laboratory data. The dimensions and size distributions of fire-released carbon fibers were determined, not only for those of concern in an electrical sense, but also for those of potential interest from a health and environmental standpoint. Fire plume and chemistry studies were performed with large pool fires to provide an experimental input into an analytical modelling of simulated aircraft crash fires. A study of a high voltage spark system resulted in a promising device for the detection, counting, and sizing of electrically conductive fibers, for both active and passive modes of operation.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ringbom, Anders; Axelssson, A.; Aldener, M.

    Abstract: Observations of the radioxenon isotopes 133Xe and 131mXe collected at the IMS stations RN38 and RN58 on April 7-8 and April 12-13, 2013, respectively, are unique with respect to the measurement history of these stations. Comparison of measured data with calculated isotopic ratios as well as analysis using atmospheric transport modeling indicate that it is likely that the xenon measured was created in the underground nuclear test conducted by North Korea on February 12, 2013, and released 7 weeks later. More than one release is required to explain all observations. The 131mXe source terms for each release were calculated to 7×10¹¹ Bq, corresponding to about 1-10% of the total xenon inventory for a 10-kt explosion, depending on fractionation and release scenario. The observed ratios could not be used to obtain any information regarding the fissile material that was used in the test.
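
    The dating of the release rests on the fact that the 133Xe/131mXe activity ratio decays at the difference of the two decay constants, since 133Xe is the shorter-lived isotope; a sketch using approximate half-lives (treat the numbers as nominal values, not evaluated nuclear data):

    ```python
    import math

    # Approximate half-lives in days; nominal values for illustration.
    T_HALF = {"Xe-133": 5.24, "Xe-131m": 11.84}

    def activity_ratio(r0, days):
        """133Xe/131mXe activity ratio after `days` of decay from an initial
        ratio r0. Because 133Xe decays faster, the ratio falls with time,
        which is what allows a measured ratio to constrain the release date."""
        lam_133 = math.log(2.0) / T_HALF["Xe-133"]
        lam_131m = math.log(2.0) / T_HALF["Xe-131m"]
        return r0 * math.exp(-(lam_133 - lam_131m) * days)

    # Seven weeks (49 days) after fission, an initial ratio of e.g. 100
    # has dropped by well over an order of magnitude:
    r = activity_ratio(100.0, 49.0)
    ```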

  6. pyNS: an open-source framework for 0D haemodynamic modelling.

    PubMed

    Manini, Simone; Antiga, Luca; Botti, Lorenzo; Remuzzi, Andrea

    2015-06-01

    A number of computational approaches have been proposed for the simulation of haemodynamics and vascular wall dynamics in complex vascular networks. Among them, 0D pulse wave propagation methods make it possible to efficiently model flow and pressure distributions and wall displacements throughout vascular networks at low computational cost. Although several techniques are documented in the literature, the availability of open-source computational tools is still limited. Here we present python Network Solver, a modular solver framework for 0D problems released under a BSD license as part of the archToolkit ( http://archtk.github.com ). As an application, we describe patient-specific models of the systemic circulation and of the detailed upper extremity for use in the prediction of maturation after surgical creation of vascular access for haemodialysis.
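
    The simplest 0D building block of such frameworks is the two-element Windkessel compartment, a resistance-compliance analogue of a vessel segment; the sketch below is a generic illustration with invented parameter values, not pyNS code:

    ```python
    def windkessel(R, C, q_in, t_end, dt=1e-3):
        """Two-element Windkessel, the simplest 0D vessel model:
        C dP/dt = Q_in(t) - P/R, integrated by forward Euler. A 0D network
        solver couples many such compartments; this is a single one."""
        p = 0.0          # pressure, mmHg
        t = 0.0
        history = []
        while t < t_end:
            dp = (q_in(t) - p / R) / C
            p += dp * dt
            t += dt
            history.append(p)
        return history

    # Constant inflow of 90 mL/s into R = 1.0 mmHg*s/mL, C = 1.5 mL/mmHg:
    # pressure relaxes toward the steady value Q*R = 90 mmHg with time
    # constant R*C = 1.5 s.
    ps = windkessel(1.0, 1.5, lambda t: 90.0, t_end=15.0)
    ```

    Replacing the constant inflow with a pulsatile cardiac waveform yields the pressure pulse shapes that 0D pulse wave methods are used to study.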

  7. Methodologies for evaluating performance and assessing uncertainty of atmospheric dispersion models

    NASA Astrophysics Data System (ADS)

    Chang, Joseph C.

This thesis describes methodologies to evaluate the performance and to assess the uncertainty of atmospheric dispersion models, tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic and public-health impacts often associated with the use of dispersion model results, these models should be properly evaluated, and their uncertainty should be properly accounted for and understood. The CALPUFF, HPAC, and VLSTRACK dispersion modeling systems were applied to the Dipole Pride (DP26) field data (˜20 km in scale) in order to demonstrate the evaluation and uncertainty assessment methodologies. Dispersion model performance was found to be strongly dependent on the wind models used to generate gridded wind fields from observed station data. This is because, despite the fact that the test site was a flat area, the observed surface wind fields still showed considerable spatial variability, partly because of the surrounding mountains. The two error components, input-data uncertainty and stochastic (turbulence-induced) variability, were found to be comparable for the DP26 field data, with variability more important than uncertainty closer to the source and less important farther away from the source. Therefore, reducing errors in the input meteorology may not necessarily increase model accuracy, because of random turbulence. DP26 was a research-grade field experiment, in which the source, meteorological, and concentration data were all well measured. Another typical application of dispersion modeling is a forensic study, where the data are usually quite scarce. An example would be the modeling of the alleged releases of chemical warfare agents during the 1991 Persian Gulf War, where the source data had to rely on intelligence reports, and where Iraq had stopped reporting weather data to the World Meteorological Organization after the start of the Iran-Iraq war in 1981. 
Therefore, the meteorological fields inside Iraq had to be estimated by models such as prognostic mesoscale meteorological models, based on observational data from areas outside of Iraq and using the global fields simulated by global meteorological models as the initial and boundary conditions for the mesoscale models. When comparing model predictions with observations in areas outside of Iraq, the predicted surface wind directions had errors of between 30 and 90 deg, but the inter-model differences (or uncertainties) in the predicted surface wind directions inside Iraq, where there were no onsite data, were fairly constant at about 70 deg. (Abstract shortened by UMI.)

  8. Particulate-phase mercury emissions from biomass burning and impact on resulting deposition: a modelling assessment

    EPA Science Inventory

    Mercury (Hg) emissions from biomass burning (BB) are an important source of atmospheric Hg and a major factor driving the interannual variation of Hg concentrations in the troposphere. The greatest fraction of Hg from BB is released in the form of elemental Hg (Hg0(g)). However, ...

  9. Implications of matrix diffusion on 1,4-dioxane persistence at contaminated groundwater sites.

    PubMed

    Adamson, David T; de Blanc, Phillip C; Farhat, Shahla K; Newell, Charles J

    2016-08-15

Management of groundwater sites impacted by 1,4-dioxane can be challenging due to its migration potential and perceived recalcitrance. This study examined the extent to which 1,4-dioxane's persistence was subject to diffusion of mass into and out of lower-permeability zones relative to co-released chlorinated solvents. Two different release scenarios were evaluated within a two-layer aquifer system using an analytical modeling approach. The first scenario simulated a 1,4-dioxane and 1,1,1-TCA source zone where spent solvent was released. The period when 1,4-dioxane was actively loading the low-permeability layer within the source zone was estimated to be <3 years due to its high effective solubility. While this was approximately an order of magnitude shorter than the loading period for 1,1,1-TCA, the mass of 1,4-dioxane stored within the low-permeability zone at the end of the simulation period (26 kg) was larger than that predicted for 1,1,1-TCA (17 kg). Even 80 years after release, the aqueous 1,4-dioxane concentration was still several orders of magnitude higher than potentially applicable criteria. Within the downgradient plume, diffusion contributed to higher concentrations and enhanced penetration of 1,4-dioxane into the low-permeability zones relative to 1,1,1-TCA. In the second scenario, elevated 1,4-dioxane concentrations were predicted at a site impacted by migration of a weak source from an upgradient site. Plume cutoff was beneficial because it could be implemented in time to prevent further loading of the low-permeability zone at the downgradient site. Overall, this study documented that 1,4-dioxane within transmissive portions of the source zone is quickly depleted due to characteristics that favor both diffusion-based storage and groundwater transport, leaving little mass to treat using conventional means. 
Furthermore, the results highlight the differences between 1,4-dioxane and chlorinated solvent source zones, suggesting that back diffusion of 1,4-dioxane mass may be serving as the dominant long-term "secondary source" at many contaminated sites that must be managed using alternative approaches. Copyright © 2016 Elsevier B.V. All rights reserved.
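The storage and back-diffusion behaviour described above can be illustrated with the classic one-dimensional solution for diffusion into a semi-infinite low-permeability layer held at a constant interface concentration. This is a generic sketch, not the two-layer model used in the study, and the parameter values are hypothetical.

```python
# Semi-infinite layer with constant source concentration C0 at the interface:
#   C(z, t) = C0 * erfc(z / (2*sqrt(D*t)))
# Cumulative uptake per unit interface area grows as sqrt(t):
#   M(t) = 2*C0*sqrt(D*t/pi)
# Illustrative only; D and C0 are hypothetical.
import math

D = 1e-10   # effective diffusion coefficient (m^2/s), hypothetical
C0 = 100.0  # interface concentration (mg/L), hypothetical

def conc(z: float, t: float) -> float:
    """Concentration at depth z (m) into the low-permeability layer at time t (s)."""
    return C0 * math.erfc(z / (2.0 * math.sqrt(D * t)))

def stored_mass(t: float) -> float:
    """Cumulative mass taken up per unit interface area (concentration basis)."""
    return 2.0 * C0 * math.sqrt(D * t / math.pi)

year = 365.25 * 86400.0
# Quadrupling the loading time doubles both the penetration depth of the
# diffusion front and the stored mass -- the sqrt(t) behaviour that makes
# long-loaded layers slow to deplete by back diffusion.
```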

  10. Characterization of elemental release during microbe granite interactions at T = 28 °C

    NASA Astrophysics Data System (ADS)

    Wu, Lingling; Jacobson, Andrew D.; Hausner, Martina

    2008-02-01

This study used batch reactors to characterize the mechanisms and rates of elemental release (Al, Ca, K, Mg, Na, F, Fe, P, Sr, and Si) during interaction of a single bacterial species (Burkholderia fungorum) with granite at T = 28 °C for 35 days. The objective was to evaluate how actively metabolizing heterotrophic bacteria might influence granite weathering on the continents. We supplied glucose as a C source, either NH4 or NO3 as N sources, and either dissolved PO4 or trace apatite in granite as P sources. Cell growth occurred under all experimental conditions. However, solution pH decreased from ~7 to 4 in NH4-bearing reactors, whereas pH remained near-neutral in NO3-bearing reactors. Measurements of dissolved CO2 and gluconate together with mass balances for cell growth suggest that pH lowering in NH4-bearing reactors resulted from gluconic acid release and H+ extrusion during NH4 uptake. In NO3-bearing reactors, B. fungorum likely produced gluconic acid and consumed H+ simultaneously during NO3 utilization. Over the entire 35-day period, NH4-bearing biotic reactors yielded the highest release rates for all elements considered. However, chemical analyses of biomass show that bacteria scavenged Na, P, and Sr during growth. Abiotic control reactors followed different reaction paths and experienced much lower elemental release rates compared to biotic reactors. Because release rates inversely correlate with pH, we conclude that proton-promoted dissolution was the dominant reaction mechanism. Solute speciation modeling indicates that formation of Al-F and Fe-F complexes in biotic reactors may have enhanced mineral solubilities and release rates by lowering Al and Fe activities. Mass balances further reveal that Ca-bearing trace phases (calcite, fluorite, and fluorapatite) provided most of the dissolved Ca, whereas more abundant phases (plagioclase) contributed negligible amounts. 
Our findings imply that during the incipient stages of granite weathering, heterotrophic bacteria utilizing glucose and NH4 only moderately elevate silicate weathering reactions that consume atmospheric CO2. However, by enhancing the dissolution of non-silicate, Ca-bearing trace minerals, they could contribute to the high Ca/Na ratios commonly observed in granitic watersheds.

  11. Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ecale Zhou, Carol

    2016-01-03

This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. The code is specific to a set of simulations that were run for the purpose of preparing data for a publication. It is necessary to make this code open source in order to publish the model code (Qspp), which has already been released. There is a necessity of assuring that results from using Qspp for a publication

  12. HerMES: point source catalogues from Herschel-SPIRE observations II

    NASA Astrophysics Data System (ADS)

    Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.

    2014-11-01

The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding-cake survey strategy, it consists of nested fields of varying depth and area totalling ˜380 deg2. In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg2 HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2), made in 2013 October. A sub-set of these catalogues, consisting of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ˜74 deg2), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues: the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2), and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as completeness, reliability, and photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).

  13. Linear Free Energy Correlations for Fission Product Release from the Fukushima-Daiichi Nuclear Accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrecht, David G.; Schwantes, Jon M.

This paper extends the preliminary linear free energy correlations for radionuclide release performed by Schwantes et al. following the Fukushima-Daiichi Nuclear Power Plant accident. Through evaluations of the molar fractionations of radionuclides deposited in the soil relative to modeled radionuclide inventories, we confirm the source of the radionuclides to be the active reactors rather than the spent fuel pool. Linear correlations of the form ln χ = −α·ΔG°rxn(T_C)/(R·T_C) + β were obtained between the deposited concentration and the reduction potential of the fission product oxide species, using multiple reduction schemes to calculate ΔG°rxn(T_C). These models allowed an estimate of the upper bound for the reactor temperature T_C of between 2130 K and 2220 K, providing insight into the limiting factors to vaporization and release of fission products during the reactor accident. Estimates of the release of the medium-lived fission products 90Sr, 121mSn, 147Pm, 144Ce, 152Eu, 154Eu, 155Eu, and 151Sm through atmospheric venting and releases during the first month following the accident were performed, and indicate that large quantities of 90Sr and radioactive lanthanides were likely to remain in the damaged reactor cores.
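As a hedged sketch of the correlation form ln χ = −α·ΔG°rxn(T_C)/(R·T_C) + β used in this record, the snippet below evaluates it for illustrative coefficients; α and β here are hypothetical placeholders, not the paper's fitted values.

```python
# Evaluate the linear free energy relationship
#   ln(chi) = -alpha * dG_rxn(Tc) / (R * Tc) + beta
# alpha and beta are hypothetical fit coefficients (NOT the paper's values);
# R is the molar gas constant.
R = 8.314  # J/(mol*K)

def ln_chi(dg_rxn_j_per_mol: float, t_core_k: float,
           alpha: float, beta: float) -> float:
    """Predicted log molar fractionation for a fission-product oxide species."""
    return -alpha * dg_rxn_j_per_mol / (R * t_core_k) + beta

# More negative (more thermodynamically favourable) reduction free energy
# gives a larger predicted release fraction, which is the trend the
# correlation exploits to bound the core temperature.
low = ln_chi(-1.0e5, 2200.0, alpha=1.0, beta=0.0)
high = ln_chi(-3.0e5, 2200.0, alpha=1.0, beta=0.0)
```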

  14. CIRMIS Data system. Volume 2. Program listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for utilization by the hydraulic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. The CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the second of four volumes of the description of the CIRMIS Data System.

  15. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W.; Levetin, E.; Huete, A.; Nickovic, S.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P.; Budge, A.; Hudspeth, W.; et al.

    2012-01-01

Juniperus spp. pollen is a significant aeroallergen that can be transported 200-600 km from the source. Local observations of Juniperus spp. phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. Methods: The Dust REgional Atmospheric Model (DREAM) is a verified model for atmospheric dust transport that uses MODIS data products to identify source regions and quantities of dust. We successfully modified the DREAM model to incorporate pollen transport (PREAM) and used MODIS satellite images to develop Juniperus ashei pollen input source masks. The Pollen Release Potential Source Map, also referred to as a source mask in model applications, may use different satellite platforms and sensors and a variety of data sets other than the USGS GAP data we used to map the J. ashei cover type. MODIS-derived percent tree cover is obtained from the MODIS Vegetation Continuous Fields (VCF) product (collections 3 and 4, MOD44B, 500 and 250 m grid resolution). We use updated 2010 values to calculate pollen concentration at the source (J. ashei). The original MODIS-derived values are converted from the native approx. 250 m to 990 m (approx. 1 km) for the calculation of a mask to fit the model (PREAM) resolution. Results: The simulation period was chosen to cover the last 2 weeks of December 2010. The PREAM-modeled near-surface concentrations (N m^−3) show the transport patterns of J. ashei pollen over a 5-day period (Fig. 2). Typical scales of the simulated transport process are regional.

  16. Focusing fluids towards the arc: the role of rheology and reactions

    NASA Astrophysics Data System (ADS)

    Wilson, C. R.; Spiegelman, M. W.

    2014-12-01

    Aqueous fluids released from the down-going slab in subduction zones are generally thought to be the cause of arc volcanism. However there is a significant discrepancy between the consistent location of the volcanic front with respect to intermediate depth earthquakes (e.g. 100+/-40 km; England et al., GJI, 2004, Syracuse & Abers, G-cubed, 2006) and the large depth range over which dehydration reactions are predicted to occur in the slab (e.g. 80-250 km; van Keken et al., JGR, 2011). By coupling the fluid flow to the solid rheology through compaction pressure, recent numerical models (Wilson et al., EPSL, 2014) demonstrated a number of focusing mechanisms that can be invoked to explain this apparent discrepancy. Most notable among these were permeability channels within the slab. These were shown to be highly effective in transporting fluid from deeper fluid sources along the slab towards the shallowest source. In the presence of these channels the majority of the fluid is released into the mantle wedge far shallower and closer to the arc than it was originally generated. While observations consistent with free fluids in the slab have been reported (e.g. Shiina et al., GRL, 2013), it is possible that changing the rheology and reactivity of the slab can change the efficiency of in-slab transport. We present a series of simplified model problems of fluid flow within the slab and mantle wedge demonstrating the potential effect of these processes on fluid flux. In particular, pseudo-1D models show that if fluids can efficiently rehydrate slab minerals, then these reactions can shut down fluid pathways within the slab, resulting in deeper release of fluid into the mantle wedge. However, the behavior in full subduction zone models remains to be determined.

  17. OrganoRelease - A framework for modeling the release of organic chemicals from the use and post-use of consumer products.

    PubMed

    Tao, Mengya; Li, Dingsheng; Song, Runsheng; Suh, Sangwon; Keller, Arturo A

    2018-03-01

Chemicals in consumer products have become the focus of recent regulatory developments, including California's Safer Consumer Products Act. However, quantifying the amount of chemicals released during the use and post-use phases of consumer products is challenging, limiting the ability to understand their impacts. Here we present a comprehensive framework, OrganoRelease, for estimating the release of organic chemicals from the use and post-use of consumer products given limited information. First, a novel Chemical Functional Use Classifier estimates functional uses based on chemical structure. Second, the quantity of chemicals entering different product streams is estimated based on market share data for the chemical functional uses. Third, chemical releases are estimated based on either chemical product categories or functional uses, using the Specific Environmental Release Categories and the EU Technical Guidance Documents. OrganoRelease connects 19 unique functional uses and 14 product categories across 4 data sources and provides multiple pathways for chemical release estimation. Available user information can be incorporated into the framework at various stages. The Chemical Functional Use Classifier achieved an average accuracy above 84% for nine functional uses, which enables OrganoRelease to provide release estimates for a chemical mostly using only the molecular structure. The results can be used as input for methods estimating environmental fate and exposure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Valley formation by groundwater seepage, pressurized groundwater outbursts and crater-lake overflow in flume experiments with implications for Mars

    NASA Astrophysics Data System (ADS)

    Marra, Wouter A.; Braat, Lisanne; Baar, Anne W.; Kleinhans, Maarten G.

    2014-04-01

Remains of fluvial valleys on Mars reveal the former presence of water on the surface. However, the source of water and the hydrological setting are not always clear, especially for types of valleys that are rare on Earth and for which we have limited knowledge of the processes involved. We investigated three hydrological scenarios for valley formation on Mars: hydrostatic groundwater seepage, release of pressurized groundwater and crater-lake overflow. Using physical modeling in laboratory experiments and numerical hydrological modeling, we quantitatively studied the morphological development and processes involved in channel formation that result from these different sources of water in unconsolidated sediment. Our results show that valleys emerging from seeping groundwater by headward erosion form relatively slowly, as fluvial transport takes place in a channel much smaller than the valley. Pressurized groundwater release forms a characteristic source area at the channel head by fluidization processes. This head consists of a pit in the case of superlithostatic pressure and may feature small radial channels and collapse features. Valleys emerging from a crater-lake overflow event develop quickly in a run-away process of rim erosion and discharge increase. The valley head at the crater outflow point has a converging fan shape, and the rapid incision of the rim leaves terraces and collapse features. Morphological elements observed in the experiments can help in identifying the formative processes on Mars, when considerations of experimental scaling and the lithological characteristics of the martian surface are taken into account. These morphological features might reveal the associated hydrological settings and formative timescales of a valley. An estimate of formative timescale from sediment transport is best based on the final channel dimensions for groundwater seepage valleys and on the valley dimensions for pressurized groundwater release and crater-lake overflow valleys. 
Our experiments show that different sources of water form valleys of similar size in quite different timescales.

  19. Measuring and mitigating agricultural greenhouse gas production in the US Great Plains, 1870-2000.

    PubMed

    Parton, William J; Gutmann, Myron P; Merchant, Emily R; Hartman, Melannie D; Adler, Paul R; McNeal, Frederick M; Lutz, Susan M

    2015-08-25

    The Great Plains region of the United States is an agricultural production center for the global market and, as such, an important source of greenhouse gas (GHG) emissions. This article uses historical agricultural census data and ecosystem models to estimate the magnitude of annual GHG fluxes from all agricultural sources (e.g., cropping, livestock raising, irrigation, fertilizer production, tractor use) in the Great Plains from 1870 to 2000. Here, we show that carbon (C) released during the plow-out of native grasslands was the largest source of GHG emissions before 1930, whereas livestock production, direct energy use, and soil nitrous oxide emissions are currently the largest sources. Climatic factors mediate these emissions, with cool and wet weather promoting C sequestration and hot and dry weather increasing GHG release. This analysis demonstrates the long-term ecosystem consequences of both historical and current agricultural activities, but also indicates that adoption of available alternative management practices could substantially mitigate agricultural GHG fluxes, ranging from a 34% reduction with a 25% adoption rate to as much as complete elimination with possible net sequestration of C when a greater proportion of farmers adopt new agricultural practices.

  20. Measuring and mitigating agricultural greenhouse gas production in the US Great Plains, 1870–2000

    PubMed Central

    Parton, William J.; Gutmann, Myron P.; Merchant, Emily R.; Hartman, Melannie D.; Adler, Paul R.; McNeal, Frederick M.; Lutz, Susan M.

    2015-01-01

    The Great Plains region of the United States is an agricultural production center for the global market and, as such, an important source of greenhouse gas (GHG) emissions. This article uses historical agricultural census data and ecosystem models to estimate the magnitude of annual GHG fluxes from all agricultural sources (e.g., cropping, livestock raising, irrigation, fertilizer production, tractor use) in the Great Plains from 1870 to 2000. Here, we show that carbon (C) released during the plow-out of native grasslands was the largest source of GHG emissions before 1930, whereas livestock production, direct energy use, and soil nitrous oxide emissions are currently the largest sources. Climatic factors mediate these emissions, with cool and wet weather promoting C sequestration and hot and dry weather increasing GHG release. This analysis demonstrates the long-term ecosystem consequences of both historical and current agricultural activities, but also indicates that adoption of available alternative management practices could substantially mitigate agricultural GHG fluxes, ranging from a 34% reduction with a 25% adoption rate to as much as complete elimination with possible net sequestration of C when a greater proportion of farmers adopt new agricultural practices. PMID:26240366

  1. Atmospheric plume progression as a function of time and distance from the release point for radioactive isotopes.

    PubMed

    Eslinger, Paul W; Bowyer, Ted W; Cameron, Ian M; Hayes, James C; Miley, Harry S

    2015-10-01

The radionuclide network of the International Monitoring System comprises up to 80 stations around the world that have aerosol and xenon monitoring systems designed to detect releases of radioactive materials to the atmosphere from nuclear explosions. A rule-of-thumb description of plume concentration and duration versus time and distance from the release point is useful when designing and deploying new sample collection systems. This paper uses plume development from atmospheric transport modeling to provide a power-law rule describing atmospheric dilution factors as a function of distance from the release point. Consider the plume center-line concentration seen by a ground-level sampler as a function of time, based on a short-duration ground-level release of a non-depositing radioactive tracer. The concentration C (Bq m^−3) near the ground varies with distance from the source according to C = R × A(D,C) × e^(−λ(−1.552 + 0.0405×D)) × 5.37×10^−8 × D^−2.35, where R is the release magnitude (Bq), D is the separation distance (km) from the ground-level release to the measurement location, λ is the decay constant (h^−1) for the radionuclide of interest, and A(D,C) is an attenuation factor that depends on the length of the sample collection period. This relationship is based on the median concentration for 10 release locations with different geographic characteristics and 365 days of releases at each location, and it has an R² of 0.99 for 32 distances from 100 to 3000 km. In addition, 90 percent of the modeled plumes fall within approximately one order of magnitude of this curve for all distances. Copyright © 2015 Elsevier Ltd. All rights reserved.
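The fitted power law above translates directly into code. The sketch below implements the relationship as stated in the abstract, with the attenuation factor A(D,C) set to 1, which is an assumption appropriate only for a short collection period.

```python
# Rule-of-thumb plume dilution from the abstract:
#   C = R * A(D,C) * exp(-lam * (-1.552 + 0.0405*D)) * 5.37e-8 * D**-2.35
# A(D,C) is taken as 1 here (short-collection assumption).
import math

def plume_concentration(release_bq: float, distance_km: float,
                        decay_const_per_h: float = 0.0,
                        attenuation: float = 1.0) -> float:
    """Median plume center-line concentration (Bq/m^3) at a ground-level
    sampler, per the fitted power law (stated range ~100-3000 km)."""
    travel_term = -1.552 + 0.0405 * distance_km  # effective travel time (h)
    return (release_bq * attenuation
            * math.exp(-decay_const_per_h * travel_term)
            * 5.37e-8 * distance_km ** -2.35)

# For a stable (non-decaying) tracer, going from 100 km to 1000 km dilutes
# the plume by a factor of 10**2.35, roughly 220x.
c100 = plume_concentration(1e12, 100.0)
c1000 = plume_concentration(1e12, 1000.0)
```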

  2. An object-oriented software for fate and exposure assessments.

    PubMed

    Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M

    1995-07-01

The model system CemoS (Chemical Exposure Model System) was developed for the exposure prediction of hazardous chemicals released to the environment. Eight different models were implemented, covering chemical fate simulation in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance and an environmental database. All input data are checked for plausibility. Substance and environmental process estimation functions facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.

  3. Tracer dispersion experiments carried out in London during 2003 and 2004 as part of the DAPPLE project

    NASA Astrophysics Data System (ADS)

    Martin, D.; Shallcross, D.; Nickless, G.; White, I.

    2005-12-01

Transport, dispersion and the ultimate fate of pollutants have very important implications for the environment at the urban, regional and global scales. Localised emissions of both man-made and naturally produced pollutants can both directly and indirectly impact the health of inhabitants. The DAPPLE (Dispersion of Air Pollutants and their Penetration into the Local Environment) consortium consists of six universities and takes a multidisciplinary approach to studying relatively small-scale urban atmospheric dispersion. Wind tunnel modelling studies, computational fluid dynamics simulations, fieldwork studies using tracers and dispersion modelling were all carried out in an attempt to achieve this. In this paper we report on tracer dispersion experiments carried out in May 2003 and June 2004. These involve the release of various perfluorocarbon (PFC) tracers centred on Marylebone Road in London. These compounds are inert, non-reactive and have a very low atmospheric background concentration with little variability. These properties make them ideal atmospheric tracers, and this, combined with an ultra-sensitive analytical technique (sample pre-concentration on carbon-based adsorbents followed by detection by Negative Ion Chemical Ionization Mass Spectrometry), makes very small release amounts feasible. The source-receptor relationship is studied for various source and receptor positions and distances. Source-receptor relationships for both rooftop and indoor positions were evaluated as part of the project. Results of concurrent meteorological measurements are also presented, as well as comparison with a number of simple dispersion models.

  4. Translocation and early post-release demography of endangered Laysan teal

    USGS Publications Warehouse

    Reynolds, M.H.; Seavy, N.E.; Vekasy, M.S.; Klavitter, J.L.; Laniawe, L.P.

    2008-01-01

In an attempt to reduce the high extinction risk inherent to small island populations, we translocated wild Laysan teal Anas laysanensis to a portion of its presumed prehistoric range. Most avian translocations lack the strategic post-release monitoring needed to assess early population establishment or failure. Therefore, we monitored the survival and reproduction of all founders, and their first-generation offspring, using radio telemetry for 2 years after the first release. Forty-two Laysan teal were sourced directly from the only extant population on Laysan Island and transported 2 days by ship to Midway Atoll. All birds survived the translocation with nutritional and veterinary support, and spent between 4 and 14 days in captivity. Post-release survival of the 42 founders was 0.857 (95% CI 0.86-0.99) during 2004-2006, or an annualized 0.92 (95% CI 0.83-0.98). Seventeen of 18 founding hens attempted nesting in the first two breeding seasons. Fledgling success was 0.57 (95% CI 0.55-0.60) in 2005 and 0.63 (95% CI 0.62-0.64) in 2006. The effective founding female population (Ne) was 13. We applied these initial demographic rates to model population growth. The nascent population size increased to >100 after only 2 years post-release (λ = 1.73). If this growth rate continues, the size of the Midway population could surpass the source population before 2010. © 2008 The Authors. Journal compilation © 2008 The Zoological Society of London.
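The ">100 birds after only 2 years" statement follows from simple geometric growth with the reported founder count and growth rate. A sketch of that arithmetic (not the authors' demographic model, which used the measured survival and fecundity rates):

```python
# Discrete geometric population growth: N_t = N_0 * lambda**t,
# using the record's reported values (42 founders, lambda = 1.73 per year).
def project(n0: float, lam: float, years: int) -> float:
    """Projected population size after `years` of geometric growth."""
    return n0 * lam ** years

n_2yr = project(42, 1.73, 2)  # ~126 birds, consistent with the >100 reported
```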

  5. Testing high resolution numerical models for analysis of contaminant storage and release from low permeability zones.

    PubMed

    Chapman, Steven W; Parker, Beth L; Sale, Tom C; Doner, Lee Ann

    2012-08-01

    It is now widely recognized that contaminant release from low permeability zones can sustain plumes long after primary sources are depleted, particularly for chlorinated solvents where regulatory limits are orders of magnitude below source concentrations. This has led to efforts to appropriately characterize sites and apply models for prediction incorporating these effects. A primary challenge is that diffusion processes are controlled by small-scale concentration gradients and capturing mass distribution in low permeability zones requires much higher resolution than commonly practiced. This paper explores validity of using numerical models (HydroGeoSphere, FEFLOW, MODFLOW/MT3DMS) in high resolution mode to simulate scenarios involving diffusion into and out of low permeability zones: 1) a laboratory tank study involving a continuous sand body with suspended clay layers which was 'loaded' with bromide and fluorescein (for visualization) tracers followed by clean water flushing, and 2) the two-layer analytical solution of Sale et al. (2008) involving a relatively simple scenario with an aquifer and underlying low permeability layer. All three models are shown to provide close agreement when adequate spatial and temporal discretization are applied to represent problem geometry, resolve flow fields and capture advective transport in the sands and diffusive transfer with low permeability layers and minimize numerical dispersion. The challenge for application at field sites then becomes appropriate site characterization to inform the models, capturing the style of the low permeability zone geometry and incorporating reasonable hydrogeologic parameters and estimates of source history, for scenario testing and more accurate prediction of plume response, leading to better site decision making. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. The release of dissolved nutrients and metals from coastal sediments due to resuspension

    USGS Publications Warehouse

    Kalnejais, Linda H.; Martin, William R.; Bothner, Michael H.

    2010-01-01

    Coastal sediments in many regions are impacted by high levels of contaminants. Due to a combination of shallow water depths, waves, and currents, these sediments are subject to regular episodes of sediment resuspension. However, the influence of such disturbances on sediment chemistry and the release of solutes is poorly understood. The aim of this study is to quantify the release of dissolved metals (iron, manganese, silver, copper, and lead) and nutrients due to resuspension in Boston Harbor, Massachusetts, USA. Using a laboratory-based erosion chamber, a range of typical shear stresses was applied to fine-grained Harbor sediments and the solute concentration at each shear stress was measured. At low shear stress, below the erosion threshold, limited solutes were released. Beyond the erosion threshold, a release of all solutes, except lead, was observed and the concentrations increased with shear stress. The release was greater than could be accounted for by conservative mixing of porewaters into the overlying water, suggesting that sediment resuspension enhances the release of nutrients and metals to the dissolved phase. To address the long-term fate of resuspended particles, samples from the erosion chamber were maintained in suspension for 90 h. Over this time, 5-7% of the particulate copper and silver was released to the dissolved phase, while manganese was removed from solution. Thus resuspension releases solutes both during erosion events and over a longer timescale due to reactions of suspended particles in the water column. The magnitude of the annual solute release during erosion events was estimated by coupling the erosion chamber results with a record of bottom shear stresses simulated by a hydrodynamic model. The release of dissolved copper, lead, and phosphate due to resuspension is between 2% and 10% of the total (dissolved plus particulate phase) known inputs to Boston Harbor.
Sediment resuspension is responsible for transferring a significant quantity of solid phase metals to the more bioavailable and mobile dissolved phase. The relative importance of sediment resuspension as a source of dissolved metals to Boston Harbor is expected to increase as continuing pollutant control decreases the inputs from other sources. © 2010 Elsevier B.V.

  7. [Case study of red water phenomenon in drinking water distribution systems caused by water source switch].

    PubMed

    Wang, Yang; Zhang, Xiao-jian; Chen, Chao; Pan, An-jun; Xu, Yang; Liao, Ping-an; Zhang, Su-xia; Gu, Jun-nong

    2009-12-01

    A red water phenomenon occurred in some communities of a city in China in the days following a water source switch. The origin of this red water problem and the mechanism of iron release were investigated in this study. The water quality of the local and new water sources was tested, and tap water quality in the affected area was monitored for 3 months after the red water occurred. Interior corrosion scales on pipes obtained from the affected area were analyzed by XRD, SEM, and EDS. Corrosion rates of cast iron under the conditions of the two source waters were obtained with an annular reactor. The influence of the different source waters on iron release was studied in a pipe-section reactor simulating the distribution system. The results indicated that the large increase in sulfate concentration caused by the water source shift was the cause of the red water problem. The Larson ratio increased from about 0.4 to 1.7-1.9, and the red water problem appeared in the taps of some urban communities just several days after the new water source was introduced. The mechanism of iron release was that the stable shell of the corrosion scales in the pipes was broken down by this high-sulfate source water and was slow to recover spontaneously. The effect of sulfate on iron release from the old cast iron was more significant than its effect on enhancing iron corrosion. The rate of iron release increased with increasing Larson ratio, and the correlation between them was nonlinear for the old cast iron. The problem persisted for quite a long time even after the water source was shifted back to a blend containing only a small proportion of the new source and the Larson ratio was reduced to about 0.6.
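    The Larson ratio referred to in this abstract is conventionally computed from chloride, sulfate and bicarbonate concentrations expressed in milliequivalents per litre. A minimal sketch of that calculation, with invented water compositions chosen only to bracket the values (about 0.4 and 1.7-1.9) reported above:

    ```python
    # Illustrative only: formula from the standard Larson(-Skold) index
    # definition, not from the paper; the ion concentrations are hypothetical.

    def larson_ratio(cl_mg_l, so4_mg_l, hco3_mg_l):
        """Larson ratio = (meq Cl- + meq SO4^2-) / meq HCO3-."""
        cl_meq = cl_mg_l / 35.45           # Cl-: 35.45 g/mol, charge 1
        so4_meq = so4_mg_l / (96.06 / 2)   # SO4^2-: 96.06 g/mol, charge 2
        hco3_meq = hco3_mg_l / 61.02       # HCO3-: 61.02 g/mol, charge 1
        return (cl_meq + so4_meq) / hco3_meq

    # Hypothetical low-sulfate and high-sulfate source waters:
    print(round(larson_ratio(25, 40, 230), 2))   # roughly 0.4
    print(round(larson_ratio(40, 240, 200), 2))  # roughly 1.9
    ```

    A ratio well above 1 indicates that aggressive anions (chloride, sulfate) dominate over bicarbonate, which is consistent with the destabilization of pipe scales described in the abstract.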

  8. Update to An Inventory of Sources and Environmental ...

    EPA Pesticide Factsheets

    In 2006, EPA published an inventory of sources and environmental releases of dioxin-like compounds in the United States. This draft report presents an update and revision to that dioxin source inventory. It also presents updated estimates of environmental releases of dioxin-like compounds to the air, water, land and products. The sources are grouped into five broad categories: combustion sources, metals smelting/refining, chemical manufacturing, natural sources, and environmental reservoirs. Estimates of annual releases to land, air, and water are presented for reference years 1987, 1995, and 2000. While the overall decreasing trend in emissions seen in the original report continues, the individual dioxin releases in this draft updated report are generally higher than the values reported in 2006. This is largely due to the inclusion (in all three years) of additional sources in the quantitative inventory that were not included in the 2006 report. The largest new source included in this draft updated inventory was forest fires. In the 2006 report, this was classified as preliminary and not included in the quantitative inventory. The top three air sources of dioxin emissions in 2000 were forest fires, backyard burning of trash, and medical waste incinerators.

  9. International challenge to predict the impact of radioxenon releases from medical isotope production on a comprehensive nuclear test ban treaty sampling station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Bowyer, Ted W.; Achim, Pascal

    The International Monitoring System (IMS) is part of the verification regime for the Comprehensive Nuclear-Test-Ban-Treaty Organization (CTBTO). At entry-into-force, half of the 80 radionuclide stations will be able to measure concentrations of several radioactive xenon isotopes produced in nuclear explosions, and then the full network may be populated with xenon monitoring afterward (Bowyer et al., 2013). Fission-based production of 99Mo for medical purposes also releases radioxenon isotopes to the atmosphere (Saey, 2009). One of the ways to mitigate the effect of emissions from medical isotope production is the use of stack monitoring data, if it were available, so that the effect of radioactive xenon emissions could be subtracted from the effect from a presumed nuclear explosion, when detected at an IMS station location. To date, no studies have addressed the impacts the time resolution or data accuracy of stack monitoring data have on predicted concentrations at an IMS station location. Recently, participants from seven nations used atmospheric transport modeling to predict the time-history of 133Xe concentration measurements at an IMS station in Germany using stack monitoring data from a medical isotope production facility in Belgium. Participants received only stack monitoring data and used the atmospheric transport model and meteorological data of their choice. Some of the models predicted the highest measured concentrations quite well (a high composite statistical model comparison rank or a small mean square error with the measured values). The results suggest release data on a 15 min time spacing is best. The model comparison rank and ensemble analysis suggests that combining multiple models may provide more accurate predicted concentrations than any single model. Further research is needed to identify optimal methods for selecting ensemble members and those methods may depend on the specific transport problem.
None of the submissions based only on the stack monitoring data predicted the small measured concentrations very well. The one submission that best predicted small concentrations also included releases from nuclear power plants. Modeling of sources by other nuclear facilities with smaller releases than medical isotope production facilities may be important in discriminating those releases from releases from a nuclear explosion.

  10. A Temperate Alpine Glacier as a Reservoir of Polychlorinated Biphenyls: Model Results of Incorporation, Transport, and Release.

    PubMed

    Steinlin, Christine; Bogdal, Christian; Lüthi, Martin P; Pavlova, Pavlina A; Schwikowski, Margit; Zennegg, Markus; Schmid, Peter; Scheringer, Martin; Hungerbühler, Konrad

    2016-06-07

    In previous studies, the incorporation of polychlorinated biphenyls (PCBs) has been quantified in the accumulation areas of Alpine glaciers. Here, we introduce a model framework that quantifies mass fluxes of PCBs in glaciers and apply it to the Silvretta glacier (Switzerland). The models include PCB incorporation into the entire surface of the glacier, downhill transport with the flow of the glacier ice, and chemical fate in the glacial lake. The models are run for the years 1900-2100 and validated by comparing modeled and measured PCB concentrations in an ice core, a lake sediment core, and the glacial streamwater. The incorporation and release fluxes, as well as the storage of PCBs in the glacier increase until the 1980s and decrease thereafter. After a temporary increase in the 2000s, the future PCB release and the PCB concentrations in the glacial stream are estimated to be small but persistent throughout the 21st century. This study quantifies all relevant PCB fluxes in and from a temperate Alpine glacier over two centuries, and concludes that Alpine glaciers are a small secondary source of PCBs, but that the aftermath of environmental pollution by persistent and toxic chemicals can endure for decades.

  11. Particles from wood smoke and traffic induce differential pro-inflammatory response patterns in co-cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocbach, Anette; Herseth, Jan Inge; Lag, Marit

    2008-10-15

    The inflammatory potential of particles from wood smoke and traffic has not been well elucidated. In this study, a contact co-culture of monocytes and pneumocytes was exposed to 10-40 μg/cm² of particles from wood smoke and traffic for 12, 40 and 64 h to determine their influence on pro-inflammatory cytokine release (TNF-α, IL-1, IL-6, IL-8) and viability. To investigate the role of organic constituents in cytokine release the response to particles, their organic extracts and the washed particles were compared. Antagonists were used to investigate source-dependent differences in intercellular signalling (TNF-α, IL-1). The cytotoxicity was low after exposure to particles from both sources. However, wood smoke, and to a lesser degree traffic-derived particles, induced a reduction in cell number, which was associated with the organic fraction. The release of pro-inflammatory cytokines was similar for both sources after 12 h, but traffic induced a greater release than wood smoke particles with increasing exposure time. The organic fraction accounted for the majority of the cytokine release induced by wood smoke, whereas the washed traffic particles induced a stronger response than the corresponding organic extract. TNF-α and IL-1 antagonists reduced the release of IL-8 induced by particles from both sources. In contrast, the IL-6 release was only reduced by the IL-1 antagonist during exposure to traffic-derived particles. In summary, particles from wood smoke and traffic induced differential pro-inflammatory response patterns with respect to cytokine release and cell number. Moreover, the influence of the organic particle fraction and intercellular signalling on the pro-inflammatory response seemed to be source-dependent.

  12. Detection of Evolved Carbon Dioxide in the Rocknest Eolian Bedform by the Sample Analysis at Mars(SAM) Instrument at the Mars Curiosity Landing Site

    NASA Technical Reports Server (NTRS)

    Sutter, B.; Archer, D.; McAdam, A.; Franz, H.; Ming, D. W.; Eigenbrode, J. L.; Glavin, D. P.; Mahaffy, P.; Stern, J.; Navarro-Gonzalez, R.

    2013-01-01

    The Sample Analysis at Mars (SAM) instrument detected four releases of carbon dioxide (CO2) ranging from 100 to 700 C from the Rocknest eolian bedform material (Fig. 1). Candidate sources of CO2 include adsorbed CO2, carbonate(s), combusted organics that are either derived from terrestrial contamination and/or of martian origin, occluded or trapped CO2, and other sources that have yet to be determined. The Phoenix Lander's Thermal Evolved Gas Analyzer (TEGA) detected two CO2 releases (400-600, 700-840 C) [1,2]. The low temperature release was attributed to Fe- and/or Mg-carbonates [1,2], perchlorate interactions with carbonates [3], nanophase carbonates [4] and/or combusted organics [1]. The high temperature CO2 release was attributed to a calcium-bearing carbonate [1,2]. No evidence of a high temperature CO2 release similar to the Phoenix material was detected in the Rocknest materials by SAM. The objectives of this work are to evaluate the temperature and total contribution of each Rocknest CO2 release and their possible sources. Four CO2 releases from the Rocknest material were detected by SAM. Potential sources of CO2 are adsorbed CO2 (peak 1) and Fe/Mg carbonates (peak 4). Only a fraction of peaks 2 and 3 (0.01 C wt.%) may be partially attributed to combustion of organic contamination. Meteoritic organics mixed in the Rocknest bedform could be present, but the peak 2 and 3 C concentration (approx. 0.21 C wt.%) is likely too high to be attributed solely to meteoritic organic C. Other inorganic sources of C, such as interactions of perchlorates and carbonates, and sources yet to be identified will be evaluated to account for CO2 released from the thermal decomposition of Rocknest material.

  13. Soil HONO Emissions and Its Potential Impact on the Atmospheric Chemistry and Nitrogen Cycle

    NASA Astrophysics Data System (ADS)

    Su, H.; Chen, C.; Zhang, Q.; Poeschl, U.; Cheng, Y.

    2014-12-01

    Hydroxyl radicals (OH) are a key species in atmospheric photochemistry. In the lower atmosphere, up to ~30% of the primary OH radical production is attributed to the photolysis of nitrous acid (HONO), and field observations suggest a large missing source of HONO. Soil nitrite, a form of N(III), is a potential HONO precursor; the dominant sources of N(III) in soil are biological nitrification and denitrification processes, which produce nitrite ions from ammonium (by nitrifying microbes) as well as from nitrate (by denitrifying microbes). We show that soil nitrite can release HONO and explain the reported strength and diurnal variation of the missing source. The HONO emission rates are estimated to be comparable to those of nitric oxide (NO) and could be an important source of atmospheric reactive nitrogen. Fertilized soils appear to be particularly strong sources of HONO. Thus, agricultural activities and land-use changes may strongly influence the oxidizing capacity of the atmosphere. A new HONO-DNDC model was developed to simulate the evolution of HONO emissions in agricultural ecosystems. Because of the widespread occurrence of nitrite-producing microbes and increasing N and acid deposition, the release of HONO from soil may also be important in natural environments, including forests and boreal regions. Reference: Su, H. et al., Soil Nitrite as a Source of Atmospheric HONO and OH Radicals, Science, 333, 1616-1618, 10.1126/science.1207687, 2011.

  14. Diurnal and seasonal variation of various carbon fluxes from an urban tower platform in Houston, TX

    NASA Astrophysics Data System (ADS)

    Schade, G. W.; Werner, N.; Hale, M. C.

    2013-12-01

    We measured carbon fluxes (CO2, CO, VOCs) from a tall lattice tower in Houston between 2007 and 2009, and 2011-2013. We present results from various analyses of (i) anthropogenic and biogenic CO2 fluxes using a quadrant segregation technique, (ii) seasonal and multi-year changes of CO fluxes as related to car traffic and industrial sources, and (iii) the accuracy and usefulness of a bulk flux footprint model to quantify pentane emissions from a distant source in comparison to permitted emission levels. Segregated and net anthropogenic CO2 fluxes were dominated by car traffic, but industrial sources were identified as well. Emissions sank to minimal levels after Hurricane Ike had passed over Houston, causing a traffic shutdown and lower population density. Segregated biogenic fluxes showed a clear seasonal variation with photosynthetic activity between April and November, and large effects of the 2011 Texas drought due to negligible irrigation in the study area. Carbon monoxide fluxes, measured via a flux gradient technique, are even more strongly dominated by car traffic than the CO2 fluxes and serve as a traffic tracer. Our data show a continued drop in emissions over time, seasonal changes with higher emissions during winter, and local influences due to industrial emissions. Lastly, we present the results of a tracer release study and a single point source quantification to test a bulk footprint model in this complex urban area. Known releases of volatile acetone and MEK were compared to measured fluxes using a REA-GC-FID system, and permitted emissions of pentane from a foam plastics manufacturing facility were compared to measured pentane fluxes. Both comparisons reveal surprisingly accurate performance of the footprint model, within a factor of 2.

  15. Chemical transport model simulations of organic aerosol in ...

    EPA Pesticide Factsheets

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data

  16. The Visible Human Data Sets (VHD) and Insight Toolkit (ITk): Experiments in Open Source Software

    PubMed Central

    Ackerman, Michael J.; Yoo, Terry S.

    2003-01-01

    From its inception in 1989, the Visible Human Project was designed as an experiment in open source software. In 1994 and 1995 the male and female Visible Human data sets were released by the National Library of Medicine (NLM) as open source data sets. In 2002 the NLM released the first version of the Insight Toolkit (ITk) as open source software. PMID:14728278

  17. The Central Role of Tether-Cutting Reconnection in the Production of CMEs

    NASA Technical Reports Server (NTRS)

    Moore, Ron; Sterling, Alphonse; Suess, Steve

    2007-01-01

    This viewgraph presentation describes tether-cutting reconnection in the production of Coronal Mass Ejections (CMEs). The topics include: 1) Birth and Release of the CME Plasmoid; 2) Resulting CME in Outer Corona; 3) Governing Role of Surrounding Field; 4) Testable Prediction of the Standard Scenario Magnetic Bubble CME Model; 5) Lateral Pressure in Outer Corona; 6) Measured Angular Widths of 3 CMEs; 7) LASCO Image of each CME at Final Width; 8) Source of the CME of 2002 May 20; 9) Source of the CME of 1999 Feb 9; 10) Source of the CME of 2003 Nov 4; and 11) Test Results.

  18. Linear free energy correlations for fission product release from the Fukushima-Daiichi nuclear accident.

    PubMed

    Abrecht, David G; Schwantes, Jon M

    2015-03-03

    This paper extends the preliminary linear free energy correlations for radionuclide release performed by Schwantes et al. following the Fukushima-Daiichi Nuclear Power Plant accident. Through evaluations of the molar fractionations of radionuclides deposited in the soil relative to modeled radionuclide inventories, we confirm the initial source of the radionuclides to the environment to be the active reactors rather than the spent fuel pool. Linear correlations of the form ln χ = −α(ΔG°rxn(TC)/(R TC)) + β were obtained between the deposited concentrations and the reduction potentials of the fission product oxide species, using multiple reduction schemes to calculate ΔG°rxn(TC). These models allowed an estimate of the upper bound for the reactor temperature TC of between 2015 and 2060 K, providing insight into the limiting factors to vaporization and release of fission products during the reactor accident. Estimates of the release of medium-lived fission products 90Sr, 121mSn, 147Pm, 144Ce, 152Eu, 154Eu, 155Eu, and 151Sm through atmospheric venting during the first month following the accident were obtained, indicating that large quantities of 90Sr and radioactive lanthanides were likely to remain in the damaged reactor cores.
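    A correlation of the form ln χ = −α(ΔG°rxn(TC)/(R TC)) + β can be fitted by ordinary least squares. A minimal sketch of that fit, where the (ΔG, χ) pairs are invented for illustration (only the functional form comes from the abstract):

    ```python
    # Hedged sketch: generic least-squares fit of ln(chi) = -alpha*x + beta
    # with x = dG/(R*T). All numerical data below are hypothetical.

    R = 8.314  # gas constant, J/(mol K)

    def fit_lfe(dG_kJ, ln_chi, T):
        """Least-squares fit of ln(chi) against dG/(R*T); returns (alpha, beta)."""
        x = [g * 1000.0 / (R * T) for g in dG_kJ]  # kJ/mol -> dimensionless
        n = len(x)
        xbar = sum(x) / n
        ybar = sum(ln_chi) / n
        slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, ln_chi))
                 / sum((xi - xbar) ** 2 for xi in x))
        return -slope, ybar - slope * xbar  # alpha = -slope, beta = intercept

    # Invented data obeying ln(chi) = -0.5*x + 2 exactly, at T = 2000 K:
    T = 2000.0
    dG = [-400.0, -300.0, -200.0, -100.0]  # kJ/mol
    x_true = [g * 1000.0 / (R * T) for g in dG]
    y = [-0.5 * xi + 2.0 for xi in x_true]
    alpha, beta = fit_lfe(dG, y, T)
    print(round(alpha, 3), round(beta, 3))  # recovers 0.5 and 2.0
    ```

    In the paper's approach, repeating such fits over a range of trial temperatures and selecting those with the best correlation is what yields the bounds on the reactor temperature.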

  19. A simple analytical method for determining the atmospheric dispersion of upward-directed high velocity releases

    NASA Astrophysics Data System (ADS)

    Palazzi, E.

    The evaluation of the atmospheric dispersion of a cloud arising from a sudden release of flammable or toxic materials is an essential tool for properly designing flares, vents and other safety devices, and for quantifying the potential risk related to existing ones or arising from the various kinds of accidents which can occur in chemical plants. Among the methods developed to treat the important case of upward-directed jets, Hoehne's procedure for determining the behaviour and extent of the flammability zone is extensively utilized, particularly in petrochemical plants. In a previous study, a substantial simplification of that procedure was achieved by correlating the experimental data with an empirical formula, allowing a mathematical description of the boundaries of the flammable cloud to be obtained. Following a theoretical approach, a more general model is developed in the present work, applicable to the various kinds of design problems and/or risk evaluations regarding upward-directed releases from high-velocity sources. It is also demonstrated that the model gives conservative results if applied outside the range of Hoehne's experimental conditions. Moreover, with simple modifications, the same approach could easily be applied to the atmospheric dispersion of releases directed in any direction.

  20. Competition of Invertebrates Mixed Culture in the Closed Aquatic System

    NASA Astrophysics Data System (ADS)

    Pisman, Tamara

    The study considers an experimental model of interactions between invertebrates (the ciliates Paramecium caudatum and Paramecium bursaria and the rotifer Brachionus plicatilis) in a closed aquatic system. The ciliate P. caudatum can feed on yeast, bacteria and chlorella; in this experiment growth and reproduction were maintained by bacteria only. The P. bursaria - zoochlorella endosymbiosis is a natural model of a simple biotic cycle: P. bursaria consumes glucose and oxygen released by zoochlorella in the process of biosynthesis and releases nitrogenous compounds and carbon dioxide necessary for algal photosynthesis. The rotifer Br. plicatilis can consume algae, bacteria and detritus. Thus, in the experiment with the mixed culture of invertebrates, they can use different food sources. However, with any initial percentage of the invertebrates the end portion of P. bursaria reaches 90-99

  1. A mathematical model of diffusion from a steady source of short duration in a finite mixing layer

    NASA Astrophysics Data System (ADS)

    Bianconi, Roberto; Tamponi, Matteo

    This paper presents an analytical unsteady-state solution to the atmospheric dispersion equation for substances subject to chemical-physical decay in a finite mixing layer, for releases of short duration. The solution is suitable for describing critical events related to the accidental release of toxic, flammable or explosive substances. To implement the solution, the Modello per Rilasci a Breve Termine (MRBT) code has been developed; the results of a sensitivity analysis on some of its characteristic parameters are presented. Some examples of application to the calculation of exposure to toxic substances and to the determination of the ignition field of flammable substances are also described. Finally, the mathematical model described can be used to interpret the phenomenon of pollutant accumulation.
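    The kind of solution described above is usually built from a Gaussian kernel with image sources reflecting the plume at the ground and at the mixing-layer lid, multiplied by a first-order decay term. A minimal sketch of that structure (not the MRBT code itself; the dispersion parameterisation sigma_z = a·x and all parameter values are assumptions for illustration):

    ```python
    import math

    # Crosswind-integrated concentration from a continuous point source of
    # strength Q at height H, in a mixing layer of depth L, wind speed u,
    # with first-order decay rate lam. Reflections at the ground (z = 0)
    # and at the lid (z = L) are represented by image sources.

    def c_crosswind(x, z, Q=1.0, H=50.0, L=500.0, u=5.0, lam=1e-4,
                    a=0.1, n_images=5):
        """Concentration (arbitrary units) at downwind distance x, height z."""
        sigma_z = a * x  # simplified, assumed vertical dispersion growth
        s = 0.0
        for n in range(-n_images, n_images + 1):
            for h in (H, -H):  # real source and its ground image
                s += math.exp(-(z - h + 2.0 * n * L) ** 2
                              / (2.0 * sigma_z ** 2))
        decay = math.exp(-lam * x / u)  # decay over travel time x/u
        return Q / (math.sqrt(2.0 * math.pi) * sigma_z * u) * s * decay

    # Dilution plus decay reduce ground-level concentration downwind:
    near = c_crosswind(500.0, 0.0)
    far = c_crosswind(5000.0, 0.0)
    print(near > far)  # True
    ```

    The decay factor is what distinguishes this family of solutions from the plain conservative Gaussian plume, and it is the term that matters for the toxic-exposure and ignition-field applications the abstract mentions.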

  2. Building CHAOS: An Operating System for Livermore Linux Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garlick, J E; Dunlap, C M

    2003-02-21

    The Livermore Computing (LC) Linux Integration and Development Project (the Linux Project) produces and supports the Clustered High Availability Operating System (CHAOS), a cluster operating environment based on Red Hat Linux. Each CHAOS release begins with a set of requirements and ends with a formally tested, packaged, and documented release suitable for use on LC's production Linux clusters. One characteristic of CHAOS is that component software packages come from different sources under varying degrees of project control. Some are developed by the Linux Project, some are developed by other LC projects, some are external open source projects, and some are commercial software packages. A challenge to the Linux Project is to adhere to release schedules and testing disciplines in a diverse, highly decentralized development environment. Communication channels are maintained for externally developed packages in order to obtain support, influence development decisions, and coordinate/understand release schedules. The Linux Project embraces open source by releasing locally developed packages under open source license, by collaborating with open source projects where mutually beneficial, and by preferring open source over proprietary software. Project members generally use open source development tools. The Linux Project requires system administrators and developers to work together to resolve problems that arise in production. This tight coupling of production and development is a key strategy for making a product that directly addresses LC's production requirements. It is another challenge to balance support and development activities in such a way that one does not overwhelm the other.

  3. Magmatic vapor source for sulfur dioxide released during volcanic eruptions: Evidence from Mount Pinatubo

    USGS Publications Warehouse

    Wallace, P.J.; Gerlach, T.M.

    1994-01-01

    Sulfur dioxide (SO2) released by the explosive eruption of Mount Pinatubo on 15 June 1991 had an impact on climate and stratospheric ozone. The total mass of SO2 released was much greater than the amount dissolved in the magma before the eruption, and thus an additional source for the excess SO2 is required. Infrared spectroscopic analyses of dissolved water and carbon dioxide in glass inclusions from quartz phenocrysts demonstrate that before eruption the magma contained a separate, SO2-bearing vapor phase. Data for gas emissions from other volcanoes in subduction-related arcs suggest that preeruptive magmatic vapor is a major source of the SO2 that is released during many volcanic eruptions.

  4. Mineral stimulation of subsurface microorganisms: release of limiting nutrients from silicates

    USGS Publications Warehouse

    Roger, Jennifer Roberts; Bennett, Philip C.

    2004-01-01

    Microorganisms play an important role in the weathering of silicate minerals in many subsurface environments, but an unanswered question is whether the mineral plays an important role in the microbial ecology. Silicate minerals often contain nutrients necessary for microbial growth, but whether the microbial community benefits from their release during weathering is unclear. In this study, we used field and laboratory approaches to investigate microbial interactions with minerals and glasses containing beneficial nutrients and metals. Field experiments from a petroleum-contaminated aquifer, where silicate weathering is substantially accelerated in the contaminated zone, revealed that phosphorus (P) and iron (Fe)-bearing silicate glasses were preferentially colonized and weathered, while glasses without these elements were typically barren of colonizing microorganisms, corroborating previous studies using feldspars. In laboratory studies, we investigated microbial weathering of silicates and the release of nutrients using a model ligand-promoted pathway. A metal-chelating organic ligand, 3,4 dihydroxybenzoic acid (3,4 DHBA), was used as a source of chelated ferric iron, and a carbon source, to investigate mineral weathering rate and microbial metabolism. In the investigated aquifer, we hypothesize that microbes produce organic ligands to chelate metals, particularly Fe, for metabolic processes and also form stable complexes with Al and occasionally with Si. Further, the concentration of these ligands is apparently sufficient near an attached microorganism to destroy the silicate framework while releasing the nutrient of interest. In microcosms containing silicates and glasses with trace phosphate mineral inclusions, microbial biomass increased, indicating that the microbial community can use silicate-bound phosphate inclusions. 
The addition of a native microbial consortium to microcosms containing silicates or glasses with iron oxide inclusions correlated to accelerated weathering and release of Si into solution as well as the accelerated degradation of the model substrate 3,4 DHBA. We propose that silicate-bound P and Fe inclusions are bioavailable, and microorganisms may use organic ligands to dissolve the silicate matrix and access these otherwise limiting nutrients.

  5. Total mercury released to the environment by human activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streets, David G.; Horowitz, Hannah M.; Jacob, Daniel J.

    Here, we estimate that a cumulative total of 1.5 (1.0–2.8) Tg (teragrams, or million tonnes) of mercury (Hg) has been released by human activities up to 2010, 73% of which was released after 1850. Of this liberated Hg, 470 Gg (gigagrams, or thousand tonnes) was emitted directly into the air, and 74% of the air emissions were elemental Hg. Cumulatively, about 1.1 Tg was released to land and water bodies. Though annual releases of Hg have been relatively stable since 1880 at 8 ± 2 Gg, except for wartime, the distributions of those releases among source types, world regions, and environmental media have changed dramatically. Production of Hg accounts for 27% of cumulative Hg releases to the environment, followed by silver mining (24%) and chemicals manufacturing (12%). North America (30%), Europe (27%), and Asia (16%) have experienced the largest releases. Biogeochemical modeling shows a 3.2-fold increase in the atmospheric burden relative to 1850 and a contemporary atmospheric reservoir of 4570 Mg, both of which agree well with observational constraints. We find that approximately 40% (390 Gg) of the Hg discarded to land and water must be sequestered at contaminated sites to maintain consistency with recent declines in atmospheric Hg concentrations.
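
    The budget arithmetic quoted above can be cross-checked directly. A minimal sketch, using only the figures given in the abstract (1 Tg = 1000 Gg):

```python
# Cross-check of the mercury budget figures quoted in the abstract.
# Units: 1 Tg = 1000 Gg = 1 million tonnes.
TG_TO_GG = 1000.0

total_release_gg = 1.5 * TG_TO_GG  # cumulative anthropogenic Hg through 2010
air_gg = 470.0                     # emitted directly to the air
land_water_gg = 1.1 * TG_TO_GG     # released to land and water bodies
sequestered_gg = 390.0             # must remain sequestered at contaminated sites

# The air and land/water pathways should recover the total within the
# one-significant-figure rounding of the quoted numbers.
residual_gg = total_release_gg - (air_gg + land_water_gg)

# Fraction of land/water releases that stays sequestered -- the abstract's
# "approximately 40%" once the rounded inputs are taken into account.
seq_fraction = sequestered_gg / land_water_gg

print(f"budget residual: {residual_gg:+.0f} Gg")
print(f"sequestered fraction of land/water releases: {seq_fraction:.0%}")
```

    The small residual reflects rounding in the published totals, not an inconsistency.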

  6. Release of plutonium isotopes into the environment from the Fukushima Daiichi Nuclear Power Plant accident: what is known and what needs to be known.

    PubMed

    Zheng, Jian; Tagami, Keiko; Uchida, Shigeo

    2013-09-03

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident caused serious contamination of the environment. The release of Pu isotopes renewed considerable public concern because they present a large risk for internal radiation exposure. In this Critical Review, we summarize and analyze published studies related to the release of Pu from the FDNPP accident, based on environmental sample analyses and ORIGEN model simulations. Our analysis emphasizes the environmental distribution of released Pu isotopes, information on Pu isotopic composition for identifying the source of the Pu releases among the FDNPP-damaged reactors and spent fuel pools, and estimation of the amounts of Pu isotopes released from the FDNPP accident. Our analysis indicates that a trace amount of Pu isotopes (∼2 × 10⁻⁵% of the core inventory) was released into the environment from the damaged reactors, but not from the spent fuel pools located in the reactor buildings. Regarding possible Pu contamination of the marine environment, limited studies suggest that no extra Pu input from the FDNPP accident could be detected in the western North Pacific 30 km off the Fukushima coast. Finally, we identify the knowledge gaps that remain on the release of Pu into the environment and recommend issues for future studies.

  7. Total mercury released to the environment by human activities

    DOE PAGES

    Streets, David G.; Horowitz, Hannah M.; Jacob, Daniel J.; ...

    2017-04-27

    Here, we estimate that a cumulative total of 1.5 (1.0–2.8) Tg (teragrams, or million tonnes) of mercury (Hg) has been released by human activities up to 2010, 73% of which was released after 1850. Of this liberated Hg, 470 Gg (gigagrams, or thousand tonnes) was emitted directly into the air, and 74% of the air emissions were elemental Hg. Cumulatively, about 1.1 Tg was released to land and water bodies. Though annual releases of Hg have been relatively stable since 1880 at 8 ± 2 Gg, except for wartime, the distributions of those releases among source types, world regions, and environmental media have changed dramatically. Production of Hg accounts for 27% of cumulative Hg releases to the environment, followed by silver mining (24%) and chemicals manufacturing (12%). North America (30%), Europe (27%), and Asia (16%) have experienced the largest releases. Biogeochemical modeling shows a 3.2-fold increase in the atmospheric burden relative to 1850 and a contemporary atmospheric reservoir of 4570 Mg, both of which agree well with observational constraints. We find that approximately 40% (390 Gg) of the Hg discarded to land and water must be sequestered at contaminated sites to maintain consistency with recent declines in atmospheric Hg concentrations.

  8. Evaluation of Unbound Engineered Nanoparticles from a Worker Exposure and Environmental Release Perspective

    NASA Astrophysics Data System (ADS)

    Bunker, K.; Casuccio, G.; Lersch, T.; Ogle, R.; Wahl, L.

    2009-12-01

    Nanotechnology and the use of unbound engineered nanoparticles (UNP) is a rapidly developing area of materials science. UNP are defined as engineered nanoparticles that are not contained within a matrix that would prevent the nanoparticles from being mobile and a potential source of exposure. At this time there are no regulatory environmental release limits or worker exposure limits for UNP. The Lawrence Berkeley National Laboratory (LBNL) has initiated a study to evaluate worker exposure and potential environmental release of UNP related to various research activities at LBNL. The study is being performed to help identify and manage potential health and safety hazards as well as environmental impacts related to UNP. A key component of the study is the characterization of starting (source) UNP materials to assist in the determination of worker exposure and environmental release. Analysis of the starting materials is being used to establish source signatures. The source signatures will then be used in the evaluation of worker exposure and environmental release. This presentation will provide an overview of the LBNL study with a focus on the methodologies being used to analyze the samples.

  9. Development of Accommodation Models for Soldiers in Vehicles: Squad

    DTIC Science & Technology

    2014-09-01

    Distribution Statement A: Approved for public release; distribution is unlimited. Abstract (fragment): Data from a previous study...body armor and body borne gear. Subject terms: Anthropometry, Posture, Vehicle Occupants, Accommodation.

  10. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. 
In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner, consisting of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using the open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact on image quality associated with the use of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open-source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.
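
    The iterative coordinate descent (ICD) optimization at the heart of FreeCT_ICD can be illustrated on a toy least-squares problem. This is a sketch of the general ICD idea only, not the FreeCT_ICD code: one unknown ("voxel") is updated at a time by its exact one-dimensional minimizer while a running residual is kept consistent, mirroring the column-wise access pattern that motivates the column-stored system matrix.

```python
import numpy as np

# Toy iterative coordinate descent (ICD) for min_x ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))   # stand-in for a (column-stored) system matrix
x_true = rng.normal(size=5)
b = A @ x_true                 # noise-free "projection data"

x = np.zeros(5)
r = b - A @ x                  # running residual, updated incrementally
for sweep in range(50):
    for j in range(5):         # visit one coordinate ("voxel") at a time
        a_j = A[:, j]          # column access: why column-wise storage pays off
        step = (a_j @ r) / (a_j @ a_j)  # exact minimizer along coordinate j
        x[j] += step
        r -= step * a_j        # keep r = b - A x without a full recompute

print("recovered x:", np.round(x, 6))
```

    With a noise-free, well-conditioned toy system, a few dozen sweeps recover x to high precision; real CT reconstructions add regularization and vastly larger, sparse systems.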

  11. Integrated modeling approach using SELECT and SWAT models to simulate source loading and in-stream conditions of fecal indicator bacteria.

    NASA Astrophysics Data System (ADS)

    Ranatunga, T.

    2016-12-01

    Modeling of the fate and transport of fecal bacteria in a watershed is generally a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered as the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria (E. coli) source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. The major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads were input to the SWAT model in order to simulate transport through the land and in-stream conditions. The calibrated SWAT model was then used to estimate in-stream indicator bacteria concentrations for future years based on H-GAC's regional land use, population, and household projections (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.
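
    The SELECT-style potential load calculation described above amounts to multiplying the source count in each land category by a per-unit bacteria production rate and summing. A minimal sketch with entirely hypothetical counts and rates (the calibrated SELECT coefficients and watershed inventories are not given in the abstract):

```python
# SELECT-style potential daily E. coli load per source category:
# load = (number of sources) x (production rate per source per day).
# All counts and rates below are hypothetical placeholders.
sources = {
    # category: (count in watershed, cfu per unit per day)
    "cattle":          (5000, 1.0e10),
    "feral_hogs":      (1200, 4.0e9),
    "failing_septics": (300,  1.0e9),
    "dogs":            (8000, 5.0e9),
}

loads = {name: count * rate for name, (count, rate) in sources.items()}
total = sum(loads.values())

# Rank categories by contribution -- the SELECT output that feeds SWAT.
for name, load in sorted(loads.items(), key=lambda kv: -kv[1]):
    print(f"{name:>15}: {load:.2e} cfu/day ({load / total:.0%})")
```

    In practice each category's load is also distributed spatially over its land-use class before being handed to the watershed model.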

  12. Application of SELECT and SWAT models to simulate source load, fate, and transport of fecal bacteria in watersheds.

    NASA Astrophysics Data System (ADS)

    Ranatunga, T.

    2017-12-01

    Modeling of the fate and transport of fecal bacteria in a watershed is a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered as the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. The major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep, and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads from the SELECT model were input to the SWAT model to simulate bacteria transport through the land and in-stream. The calibrated SWAT model was then used to estimate in-stream indicator bacteria concentrations for future years based on regional land use, population, and household forecasts (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.

  13. Quantifying methane emission from fugitive sources by combining tracer release and downwind measurements - a sensitivity analysis based on multiple field surveys.

    PubMed

    Mønster, Jacob G; Samuelsson, Jerker; Kjeldsen, Peter; Rella, Chris W; Scheutz, Charlotte

    2014-08-01

    Using a dual species methane/acetylene instrument based on cavity ring-down spectroscopy (CRDS), the dynamic plume tracer dispersion method for quantifying the emission rate of methane was successfully tested in four measurement campaigns: (1) controlled methane and trace gas release with different trace gas configurations, (2) a landfill with unknown emission source locations, (3) a landfill with closely located emission sources, and (4) comparison with a Fourier transform infrared spectroscopy (FTIR) instrument using multiple trace gases for source separation. The new real-time, high-precision instrument can measure methane plumes more than 1.2 km away from small sources (about 5 kg h⁻¹) in urban areas, with a measurement frequency allowing plume crossing at normal driving speed. The method can be used for quantification of total methane emissions from diffuse area sources down to 1 kg per hour and can be used to quantify individual sources with the right choice of wind direction and road distance. The placement of the trace gas is important for obtaining correct quantification, and an uncertainty of up to 36% can be incurred when the trace gas is not co-located with the methane source. Measurements made at greater distances are less sensitive to errors in trace gas placement, and model calculations showed an uncertainty of less than 5% in both urban and open-country settings for placing the trace gas 100 m from the source, when measurements were done more than 3 km away. Using the ratio of the integrated plume concentrations of tracer gas and methane gives the most reliable results for measurements at various distances to the source, compared to the ratio of the highest concentrations in the plume, the direct concentration ratio, or using a Gaussian plume model.
Under suitable weather and road conditions, the CRDS system can quantify the emission from different sources located close to each other using only one kind of trace gas due to its high time resolution, while the FTIR system can measure multiple trace gases but with a lower time resolution. Copyright © 2014 Elsevier Ltd. All rights reserved.
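
    The integrated-plume ratio method that the authors find most reliable can be sketched as follows. The transect profiles and release rate here are synthetic, not field data; with a uniform sampling interval the ratio of the integrated concentrations reduces to a ratio of sums:

```python
import numpy as np

M_CH4, M_C2H2 = 16.04, 26.04  # g/mol: methane and the acetylene tracer
q_tracer = 2.0                # kg/h: known, controlled acetylene release rate

# Synthetic above-background mixing ratios along a downwind road transect (ppb).
x = np.linspace(0.0, 500.0, 200)                    # metres along the road
ch4 = 120.0 * np.exp(-(((x - 250.0) / 60.0) ** 2))  # methane plume
c2h2 = 40.0 * np.exp(-(((x - 250.0) / 60.0) ** 2))  # co-located tracer plume

# Ratio of cross-plume integrated concentrations (uniform spacing cancels).
ratio = ch4.sum() / c2h2.sum()

# Convert the mole ratio to a mass emission rate via the molar masses.
q_ch4 = q_tracer * ratio * (M_CH4 / M_C2H2)
print(f"estimated CH4 emission rate: {q_ch4:.2f} kg/h")
```

    Using integrals over the whole plume, rather than peak heights, is what makes the estimate robust to the partial plume distortion seen at different downwind distances.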

  14. A model for the release, dispersion and environmental impact of a postulated reactor accident from a submerged commercial nuclear power plant

    NASA Astrophysics Data System (ADS)

    Bertch, Timothy Creston

    1998-12-01

    Nuclear power plants are inherently suitable for submerged applications and could provide power to the shore power grid or support future underwater applications. The technology exists today, and the construction of a submerged commercial nuclear power plant may become desirable. A submerged reactor is safer for humans because the effectively unlimited supply of water for heat removal, particulate retention in the water column, sedimentation to the ocean floor, and inherent shielding by the aquatic environment would significantly mitigate the effects of a reactor accident. A better understanding of reactor operation in this new environment is required to quantify the radioecological impact and to determine the suitability of this concept. The impact of a release to the environment from a severe reactor accident is a new aspect of the field of marine radioecology. Current efforts have centered on the radioecological impacts of nuclear waste disposal, fallout from nuclear weapons testing, and discharges from shore-based nuclear plants. This dissertation examines the environmental impact of a severe reactor accident in a submerged commercial nuclear power plant, modeling a postulated site on the Atlantic continental shelf adjacent to the United States. The effort models the effects of geography, decay, particle transport/dispersion, and bioaccumulation and elimination with associated dose commitment. The use of a source term equivalent to the release from Chernobyl allows comparison between the impacts of that accident and the postulated submerged commercial reactor plant accident. All input parameters are evaluated using sensitivity analysis. The effect of the release on marine biota is determined. Pathways to humans are studied, including gaseous radionuclides, consumption of contaminated marine biota, and direct exposure as contaminated water reaches the shoreline.
The model developed by this effort predicts a significant mitigation of the radioecological impact of a reactor accident release from a submerged commercial nuclear power plant. The two-box model predicts that most of the radioecological impact occurs during the first eight days after release. The most significant risk to humans is from consumption of biota. The reduction in impact to humans from a large radioactive release makes the concept worthy of further study.
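
    The two-box structure referred to above can be sketched as a pair of coupled first-order equations: a near-field water box exchanges with a far-field box while radioactive decay and sedimentation remove activity from both. All rate constants below are illustrative placeholders, not the dissertation's calibrated values.

```python
import numpy as np

decay = np.log(2) / 8.0   # 1/day: e.g. an 8-day half-life such as I-131
exchange = 0.5            # 1/day: near-field -> far-field water exchange
settle = 0.05             # 1/day: loss to sediment from both boxes

dt, days = 0.01, 30.0
near, far = 1.0, 0.0      # normalized release starts in the near-field box

history = []
for i in range(int(days / dt)):
    flux = exchange * near                    # water carried between boxes
    near += (-(decay + settle) * near - flux) * dt
    far += (flux - (decay + settle) * far) * dt
    history.append((i * dt, near + far))      # total activity still in water

t8_total = history[int(8.0 / dt)][1]
print(f"activity remaining in the water column after 8 days: {t8_total:.1%}")
```

    With a short-lived source term and a modest settling rate, most of the activity has decayed or settled within the first week or so, which is the qualitative behavior the abstract reports.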

  15. Identification of unique release kinetics of serotonin from guinea-pig and human enterochromaffin cells

    PubMed Central

    Raghupathi, Ravinarayan; Duffield, Michael D; Zelkas, Leah; Meedeniya, Adrian; Brookes, Simon J H; Sia, Tiong Cheng; Wattchow, David A; Spencer, Nick J; Keating, Damien J

    2013-01-01

    The major source of serotonin (5-HT) in the body is the enterochromaffin (EC) cells lining the intestinal mucosa of the gastrointestinal tract. Despite the fact that EC cells synthesise ∼95% of total body 5-HT, and that this 5-HT has important paracrine and endocrine roles, no studies have investigated the mechanisms of 5-HT release from single primary EC cells. We have developed a rapid primary culture of guinea-pig and human EC cells, allowing analysis of single EC cell function using electrophysiology, electrochemistry, Ca2+ imaging, immunocytochemistry and 3D modelling. Ca2+ enters EC cells upon stimulation and triggers quantal 5-HT release via L-type Ca2+ channels. Real time amperometric techniques reveal that EC cells release 5-HT at rest and this release increases upon stimulation. Surprisingly for an endocrine cell storing 5-HT in large dense core vesicles (LDCVs), EC cells release 70 times less 5-HT per fusion event than catecholamine released from similarly sized LDCVs in endocrine chromaffin cells, and the vesicle release kinetics instead resembles that observed in mammalian synapses. Furthermore, we measured EC cell density along the gastrointestinal tract to create three-dimensional (3D) simulations of 5-HT diffusion using the minimal number of variables required to understand the physiological relevance of single cell 5-HT release in the whole-tissue milieu. These models indicate that local 5-HT levels are likely to be maintained around the activation threshold for mucosal 5-HT receptors and that this is dependent upon stimulation and location within the gastrointestinal tract. This is the first study demonstrating single cell 5-HT release in primary EC cells. The mode of 5-HT release may represent a unique mode of exocytosis amongst endocrine cells and is functionally relevant to gastrointestinal sensory and motor function. PMID:24099799
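
    The simplest building block of the 3D diffusion simulations described above is the instantaneous point-source solution of the free-space diffusion equation, C(r, t) = N (4πDt)^(-3/2) exp(-r²/4Dt). The sketch below evaluates it with illustrative values for the diffusion coefficient and molecules per release event; neither number is quoted in the abstract.

```python
import math

def point_source_conc(n_molecules, D, r, t):
    """C(r, t) for an instantaneous point source in free 3D space:
    C = N / (4*pi*D*t)**1.5 * exp(-r**2 / (4*D*t))."""
    return n_molecules / (4.0 * math.pi * D * t) ** 1.5 * math.exp(-r * r / (4.0 * D * t))

# Illustrative values (assumptions, not fitted parameters from the paper).
D = 500.0   # um^2/s: free diffusion coefficient for a small molecule
N = 1.0e5   # molecules per quantal release event
MOLEC_PER_UM3_TO_NM = 1e9 / (6.022e23 * 1e-15)  # molecules/um^3 -> nanomolar

for r_um in (1.0, 5.0, 10.0):
    c_nm = point_source_conc(N, D, r_um, t=0.01) * MOLEC_PER_UM3_TO_NM
    print(f"r = {r_um:4.1f} um, t = 10 ms: {c_nm:8.1f} nM")
```

    Even with these rough numbers, the predicted concentrations fall in the nanomolar range a few micrometres from the release site, which is the regime in which mucosal 5-HT receptor activation thresholds are usually discussed.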

  16. A Dusty Coma Model of Comet Hyakutake

    NASA Astrophysics Data System (ADS)

    Boice, D. C.; Benkhoff, J.

    1996-09-01

    We present a multifluid, hydrodynamic model for the gas, dust, and plasma flow in a cometary coma appropriate for Comet Hyakutake. The model accounts for three sources of gas release: sublimation from surface ices, transport of gas from subsurface regions through the surface, and release of gas from dust in the coma. The simulations are based on a spherically symmetric neutral coma model with detailed photo and gas-phase chemistry and dust entrainment by the gas. The model includes a separate energy balance for the electrons, separate flow of the neutral gas, fast neutral atomic and molecular hydrogen, and dust entrainment with fragmentation. The simulations allow a study of how certain features of a cometary coma, e.g., spatial distributions of gas-phase species and dust of various sizes, change with heliocentric distance. Special attention is given to observations of hydrocarbon and sulphur species. In comparison with observations, the model can be used to characterize the environment surrounding Hyakutake and aid in assimilating a variety of diverse observations of this bright comet. A complete description of the model and more extensive results with comparisons to observations where possible will be presented.

  17. Aerosol-halogen interaction: Change of physico-chemical properties of SOA by naturally released halogen species

    NASA Astrophysics Data System (ADS)

    Ofner, J.; Balzer, N.; Buxmann, J.; Grothe, H.; Krüger, H.; Platt, U.; Schmitt-Kopplin, P.; Zetzsch, C.

    2011-12-01

    Reactive halogen species are released by various sources such as photo-activated sea-salt aerosol or salt pans and salt lakes. These heterogeneous release mechanisms have been overlooked so far, although their potential for interaction with organic aerosols like Secondary Organic Aerosol (SOA), Biomass Burning Organic Aerosol (BBOA), or atmospheric HUmic-LIke Substances (HULIS) is completely unknown. Such reactions can constitute sources of gaseous organo-halogen compounds or halogenated organic particles in the atmospheric boundary layer. To study the interaction of organic aerosols with reactive halogen species (RHS), SOA was produced from α-pinene, catechol, and guaiacol using an aerosol smog chamber. The model SOAs were characterized in detail using a variety of physico-chemical methods (Ofner et al., 2011). The aerosols were exposed to molecular halogens in the presence of UV/VIS irradiation and to halogens released from simulated natural halogen sources like salt pans, in order to study the complex aerosol-halogen interaction. The heterogeneous reaction of RHS with these model aerosols leads to different gaseous species like CO2 and CO and to small reactive/toxic molecules like phosgene (COCl2). Hydrogen-containing groups on the aerosol particles are destroyed to form HCl or HBr, and a significant formation of C-Br bonds could be verified in the particle phase. Carbonyl-containing functional groups of the aerosol are strongly affected by the halogenation process. While changes of functional groups and gaseous species were visible using FTIR spectroscopy, optical properties were studied using diffuse reflectance UV/VIS spectroscopy. Overall, the optical properties of the processed organic aerosols are significantly changed: while chlorine causes a "bleaching" of the aerosol particles, bromine shifts the maximum of UV/VIS absorption to the red end of the UV/VIS spectrum.
Further physico-chemical changes were evident in the aerosol size distributions and the average carbon oxidation state (OSc). The heterogeneous reaction of SOA with molecular halogens released from the simulated salt pan under different simulated environmental conditions leads to changes in several physico-chemical features of the aerosol. However, the halogen release mechanisms are also affected by the presence of organic aerosols: one order of magnitude less BrO was detected by an active Differential Optical Absorption Spectroscopy (DOAS) instrument in the presence of SOA compared to experiments without SOA. This work was supported by the German Research Foundation within the HALOPROC project. Ofner, J., Krüger, H.-U., Grothe, H., Schmitt-Kopplin, P., Whitmore, K., and Zetzsch, C. (2011), Atmos. Chem. Phys., 11, 1-15.

  18. Triton: A hot potato

    NASA Technical Reports Server (NTRS)

    Kirk, R. L.; Brown, R. H.

    1991-01-01

    The effect of sunlight on the surface of Triton was studied. Widely disparate models of the active geysers observed during the Voyager 2 flyby have been proposed, with a solar energy source almost their only common feature. Yet Triton derives more of its heat from internal sources (energy released by radioactive decay) than any other icy satellite. The effect of this relatively large internal heat on the observable behavior of volatiles on Triton's surface is investigated. The following subject areas are covered: the global energy budget; insulating polar caps; effects on frost stability; mantle convection; and cryovolcanism.

  19. Treating convection in sequential solvers

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Thakur, Siddharth

    1992-01-01

    The treatment of the convection terms in a sequential solver, a standard procedure found in virtually all pressure-based algorithms, is investigated for computing flow problems with sharp gradients and source terms. Both scalar model problems and the one-dimensional gas dynamics equations have been used to study the various issues involved. Different approaches, including the use of nonlinear filtering techniques and the adoption of TVD-type schemes, have been investigated. Special treatments of the source terms, such as pressure gradients and heat release, have also been devised, yielding insight and improved accuracy for the numerical procedure adopted.
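
    One representative member of the TVD-type family mentioned above is a minmod-limited second-order upwind scheme for linear advection. The sketch below illustrates that general class, not the authors' code: it advects a step profile without generating new over- or undershoots while conserving mass.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: smaller-magnitude argument if signs agree, else zero."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, a, dt, dx):
    """One step of minmod-limited second-order upwind advection (a > 0, periodic)."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited cell slopes
    nu = a * dt / dx                                   # Courant number (<= 1)
    flux = a * (u + 0.5 * (1.0 - nu) * s)              # flux at right cell faces
    return u - (dt / dx) * (flux - np.roll(flux, 1))   # conservative update

dx = 0.01
x = dx * np.arange(100)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # sharp step profile
mass0 = u.sum()
for _ in range(50):
    u = advect_step(u, a=1.0, dt=0.005, dx=dx)

# After 50 steps the step has moved 0.25 units; the solution stays within
# [0, 1] (no spurious oscillations) and total mass is conserved.
print(float(u.min()), float(u.max()), float(u.sum() - mass0))
```

    The limiter reverts to first-order upwinding at the discontinuity, which is exactly the compromise that keeps sharp gradients monotone in a sequential, pressure-based solver.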

  20. Active chlorine and nitric oxide formation from chemical rocket plume afterburning

    NASA Astrophysics Data System (ADS)

    Leone, D. M.; Turns, S. R.

    Chlorine and oxides of nitrogen (NO(x)) released into the atmosphere contribute to acid rain (ground level or low-altitude sources) and ozone depletion from the stratosphere (high-altitude sources). Rocket engines have the potential for forming or activating these pollutants in the rocket plume. For instance, H2/O2 rockets can produce thermal NO(x) in their plumes. Emphasis, in the past, has been placed on determining the impact of chlorine release on the stratosphere. To date, very little, if any, information is available to understand what contribution NO(x) emissions from ground-based engine testing and actual rocket launches have on the atmosphere. The goal of this work is to estimate the afterburning emissions from chemical rocket plumes and determine their local stratospheric impact. Our study focuses on the space shuttle rocket motors, which include both the solid rocket boosters (SRB's) and the liquid propellant main engines (SSME's). Rocket plume afterburning is modeled employing a one-dimensional model incorporating two chemical kinetic systems: chemical and thermal equilibria with overlayed nitric oxide chemical kinetics (semi equilibrium) and full finite-rate chemical kinetics. Additionally, the local atmospheric impact immediately following a launch is modeled as the emissions diffuse and chemically react in the stratosphere.

  1. Active chlorine and nitric oxide formation from chemical rocket plume afterburning

    NASA Technical Reports Server (NTRS)

    Leone, D. M.; Turns, S. R.

    1994-01-01

    Chlorine and oxides of nitrogen (NO(x)) released into the atmosphere contribute to acid rain (ground level or low-altitude sources) and ozone depletion from the stratosphere (high-altitude sources). Rocket engines have the potential for forming or activating these pollutants in the rocket plume. For instance, H2/O2 rockets can produce thermal NO(x) in their plumes. Emphasis, in the past, has been placed on determining the impact of chlorine release on the stratosphere. To date, very little, if any, information is available to understand what contribution NO(x) emissions from ground-based engine testing and actual rocket launches have on the atmosphere. The goal of this work is to estimate the afterburning emissions from chemical rocket plumes and determine their local stratospheric impact. Our study focuses on the space shuttle rocket motors, which include both the solid rocket boosters (SRB's) and the liquid propellant main engines (SSME's). Rocket plume afterburning is modeled employing a one-dimensional model incorporating two chemical kinetic systems: chemical and thermal equilibria with overlayed nitric oxide chemical kinetics (semi equilibrium) and full finite-rate chemical kinetics. Additionally, the local atmospheric impact immediately following a launch is modeled as the emissions diffuse and chemically react in the stratosphere.

  2. EDMS - Microcomputer Pollution Model for civilian Airports and Air Force Bases: (User’s Guide),

    DTIC Science & Technology

    1991-06-01

    exchange. The United States Government assumes no liability for content or use thereof. The United States Government does not endorse products or...overwritten. As new issues of MOBILE4 are released, they will be incorporated into EDMS. The user should check with the model issuer to determine what...Triangle Park, N.C.; June 1982 - May 1983 EPA 1985; Compilation of Air Pollutant Emission Factors - Volume II: Mobile Sources; Environmental Protection

  3. A Computational Approach for Automated Posturing of a Human Finite Element Model

    DTIC Science & Technology

    2016-07-01

    July 2016 Memorandum Report, by Justin McKee and Adam Sokolow. Abstract (fragment): ...protection by influencing the path that loading will be transferred into the body and is a major source of variability. The development of a finite element ... Subject terms: posture, human body, finite element, leg, spine. Approved for public release.

  4. Rfam: Wikipedia, clans and the “decimal” release

    PubMed Central

    Gardner, Paul P.; Daub, Jennifer; Tate, John; Moore, Benjamin L.; Osuch, Isabelle H.; Griffiths-Jones, Sam; Finn, Robert D.; Nawrocki, Eric P.; Kolbe, Diana L.; Eddy, Sean R.; Bateman, Alex

    2011-01-01

    The Rfam database aims to catalogue non-coding RNAs through the use of sequence alignments and statistical profile models known as covariance models. In this contribution, we discuss the pros and cons of using the online encyclopedia, Wikipedia, as a source of community-derived annotation. We discuss the addition of groupings of related RNA families into clans and new developments to the website. Rfam is available on the Web at http://rfam.sanger.ac.uk. PMID:21062808

  5. Chemistry in the Dusty Coma of Comet Hale-Bopp

    NASA Astrophysics Data System (ADS)

    Boice, D. C.; Cochran, A. L.; Disanti, M. A.; Huebner, W. F.

    1998-09-01

    Recent progress on a multifluid, hydrodynamic model is presented for the dusty gas flow in the inner coma of comet Hale-Bopp at several heliocentric distances. The simulations are based on a 1-D neutral coma model with detailed photo and gas-phase chemistry and dust entrainment by the gas, a separate energy balance for the electrons, separate flow of the neutral gas, fast neutral atomic and molecular hydrogen, and dust entrainment with fragmentation. The model accounts for three sources of gas release: sublimation from surface ices, transport of gas from subsurface regions through the surface, and release of gas from dust in the coma. This permits a consistent study of the importance and strength of each possible source for a variety of gas-phase species. The simulations allow a study of the changes with heliocentric distance of features within a cometary coma, e.g., spatial distributions of gas-phase species and dust of various sizes and the velocity and temperature profiles. In particular, the model is used to probe spatial distributions of gas-phase species (e.g., CN, CH, C_3, C_2, HCN, HNC, CO) and dust, and the velocity and temperature structure to understand the complex gas-phase chemistry that occurs in the inner coma. Comparisons with observations are made where available to characterize the environment surrounding comet Hale-Bopp and to aid in assimilating a variety of diverse observations of this unique comet.

  6. MESOI Version 2. 0: an interactive mesoscale Lagrangian puff dispersion model with deposition and decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsdell, J.V.; Athey, G.F.; Glantz, C.S.

    1983-11-01

    MESOI Version 2.0 is an interactive Lagrangian puff model for estimating the transport, diffusion, deposition and decay of effluents released to the atmosphere. The model is capable of treating simultaneous releases from as many as four release points, which may be elevated or at ground level. The puffs are advected by a horizontal wind field that is defined in three dimensions. The wind field may be adjusted for expected topographic effects. The concentration distribution within the puffs is initially assumed to be Gaussian in the horizontal and vertical. However, the vertical concentration distribution is modified by assuming reflection at the ground and at the top of the atmospheric mixing layer. Material is deposited on the surface using a source-depletion dry deposition model and a washout-coefficient model. The model also treats the decay of a primary effluent species and the ingrowth and decay of a single daughter species using a first-order decay process. This report is divided into two parts. The first part discusses the theoretical and mathematical bases of MESOI Version 2.0. The second part contains the MESOI computer code. The programs were written in ANSI standard FORTRAN 77 and were developed on a VAX 11/780 computer. 43 references, 14 figures, 13 tables.
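The modified-Gaussian vertical treatment described above (reflection at the ground and at the mixing-layer top) is conventionally implemented with image sources. The following is an illustrative reconstruction of that idea, not the MESOI FORTRAN code; the function name and arguments are invented for the example:

```python
import math

def puff_concentration(Q, x, y, z, xc, sy, sz, H, zi, n_refl=3):
    """Concentration from a single Gaussian puff centred at (xc, 0, H).

    Q: puff mass; sy, sz: horizontal/vertical dispersion parameters (m);
    H: effective release height (m); zi: mixing-layer depth (m).
    The vertical term sums image sources reflecting at the ground (z=0)
    and at the mixing lid (z=zi) -- a sketch of the modified-Gaussian
    treatment, truncated after n_refl reflections.
    """
    horiz = math.exp(-((x - xc) ** 2 + y ** 2) / (2.0 * sy ** 2)) / (
        2.0 * math.pi * sy ** 2
    )
    vert = 0.0
    for m in range(-n_refl, n_refl + 1):
        for h in (H, -H):  # real source and its ground image, shifted by 2*m*zi
            vert += math.exp(-((z - h - 2.0 * m * zi) ** 2) / (2.0 * sz ** 2))
    vert /= math.sqrt(2.0 * math.pi) * sz
    return Q * horiz * vert

# Ground-level concentration 10 m either side of the puff axis is symmetric.
c_left = puff_concentration(1.0, 0.0, 10.0, 0.0, 0.0, 50.0, 20.0, 30.0, 500.0)
c_right = puff_concentration(1.0, 0.0, -10.0, 0.0, 0.0, 50.0, 20.0, 30.0, 500.0)
```

A few reflection terms usually suffice because the exponentials decay rapidly once 2·m·zi exceeds a few σz.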

  7. Assessing the joint impact of DNAPL source-zone behavior and degradation products on the probabilistic characterization of human health risk

    NASA Astrophysics Data System (ADS)

    Henri, Christopher V.; Fernàndez-Garcia, Daniel; de Barros, Felipe P. J.

    2016-02-01

    The release of industrial contaminants into the subsurface has led to a rapid degradation of groundwater resources. Contamination caused by Dense Non-Aqueous Phase Liquids (DNAPLs) is particularly severe owing to their limited solubility, slow dissolution and, in many cases, high toxicity. Greater insight into how the DNAPL source-zone behavior and the contaminant release towards the aquifer impact human health risk is crucial for appropriate risk management. Risk analysis is further complicated by uncertainty in aquifer properties and contaminant conditions. This study focuses on the impact of the DNAPL release mode on human health risk propagation along the aquifer under uncertain conditions. Contaminant concentrations released from the source zone are described using a screening approach with a set of parameters representing several scenarios of DNAPL architecture. The uncertainty in the hydraulic properties is systematically accounted for by high-resolution Monte Carlo simulations. We simulate the release and transport of the chlorinated solvent perchloroethylene and its carcinogenic degradation products in randomly heterogeneous porous media. The human health risk posed by the chemical mixture of these contaminants is characterized by low-order statistics and the probability density function of common risk metrics. We show that the zone of high risk (hot spot) is independent of the DNAPL mass release mode, and that the risk amplitude is mostly controlled by heterogeneities and by the source-zone architecture. The risk is lower and less uncertain when the source zone is formed mostly by ganglia rather than by pools. We also illustrate how the source-zone efficiency (the intensity of the water flux crossing the source zone) affects the risk posed by exposure to the chemical mixture. The results show that high source-zone efficiencies are, counter-intuitively, beneficial: they decrease the risk by reducing the time available for the production of the highly toxic subspecies.

  8. Accident Source Terms for Pressurized Water Reactors with High-Burnup Cores Calculated using MELCOR 1.8.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.

    2016-12-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and of severe accident progression are used to render best-estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup, in contrast with low burnup, on fission product releases to the containment. Supporting this emphasis, recent data on fission product release from high-burnup (HBU) fuel from the French VERCORS project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ an approximate order-statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between the HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium; as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both the HBU and LBU predictions than exist between the HBU and LBU analyses. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2, and that the release durations for the in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. It is currently planned that a subsequent report will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels; the most important finding is that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses. ACKNOWLEDGEMENTS This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The authors would like to thank Dr. Ian Gauld and Dr. Germina Ilas, of Oak Ridge National Laboratory, for their contributions to this work. In addition to development of core fission product inventory and decay heat information for use in MELCOR models, their insights related to fuel management practices and the resulting effects on the spatial distribution of fission products in the core were instrumental in the completion of this work.

  9. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
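The hybrid L1/L2 regularization described above (promoting sparse, well-localised volume changes) can be illustrated with a simple proximal-gradient (ISTA) solver. This is a generic elastic-net-style sketch on a toy linear problem, not the authors' inversion code; the Green's-function matrix G, data d, and penalty weights are hypothetical:

```python
import numpy as np

def sparse_volume_change(G, d, l1=0.1, l2=0.01, n_iter=500):
    """Hybrid L1/L2 inversion for sparse distributed volume change.

    Solves min_m 0.5*||G m - d||^2 + l1*||m||_1 + 0.5*l2*||m||^2 by
    proximal gradient descent (ISTA): a gradient step on the smooth part
    followed by soft-thresholding for the L1 term.
    """
    m = np.zeros(G.shape[1])
    L = np.linalg.norm(G, 2) ** 2 + l2  # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d) + l2 * m
        z = m - grad / L
        m = np.sign(z) * np.maximum(np.abs(z) - l1 / L, 0.0)  # soft threshold
    return m

# Toy problem: recover a 2-sparse "volume change" vector from noisy data.
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 20))
m_true = np.zeros(20)
m_true[3], m_true[11] = 1.5, -2.0
d = G @ m_true + 0.01 * rng.standard_normal(40)
m_est = sparse_volume_change(G, d)
```

Unlike smoothness-regularized inversions, the L1 term drives most parameters exactly to zero, which is what suppresses artificial volume change outside the source region.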

  10. Flare Energy Release: Internal Conflict, Contradiction with High Resolution Observations, Possible Solutions

    NASA Astrophysics Data System (ADS)

    Pustilnik, L.

    2017-06-01

    The accepted paradigm of solar and stellar flare energy release rests on two pillars: (1) the energy source is the free energy of the non-potential, force-free magnetic field in the atmosphere above an active region; (2) ultrafast dissipation of the magnetic field proceeds by reconnection in a thin turbulent current sheet (RTTCS). Advances in observational techniques, which in recent years have provided ultra-high spatial resolution, and in the physics of turbulent plasma show that the real situation is much more complicated and that the standard approach contradicts both the observations and the stability problem of the RTTCS. We present a critical analysis of the classic models of pre-flare energy accumulation and its dissipation during flare energy release, from the pioneering works of Giovanelli (1939, 1947) up to topological reconnection. We show that the accepted description of global force-free fields as the source of a future flare cannot be reconciled with the fine and ultra-fine current-magnetic structure discovered in recent years, which includes numerous arc threads with diameters up to 100 km persisting from the photosphere to the corona. This magnetic skeleton of thin current-magnetic threads, with strong interactions between them, is the main reservoir of magnetic energy in the solar atmosphere. Its dynamics are controlled by percolation of magnetic stresses through the network of current-magnetic threads, with transition to the flare state triggered when the global current reaches a critical value. We show that a thin turbulent current sheet is an absolutely unstable configuration, both because dissipative modes similar to tearing split it into numerous linear currents, and as a consequence of the suppression of plasma turbulence by anomalous heating of the turbulent plasma. As a result of these factors, the primary RTTCS is disrupted into numerous turbulent and normal plasma domains resembling a resistor network. Current propagation through this network has a percolation character, with all the accompanying properties of percolating systems: self-organization producing a power-law spectrum of flare and micro-flare distributions, and the possibility of a phase transition to flare energy release with a huge increase in released energy.

  11. Molecular dynamics simulations of intergranular fracture in UO2 with nine empirical interatomic potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yongfeng Zhang; Paul C Millett; Michael R Tonks

    The intergranular fracture behavior of UO2 was studied using molecular dynamics simulations with a bicrystal model. The anisotropic fracture behavior due to different grain boundary characters was investigated with the symmetrical tilt Σ5 and the symmetrical tilt Σ3 ({1 1 1} twin) grain boundaries. Nine interatomic potentials, seven rigid-ion and two core-shell, were utilized to elucidate possible potential dependence. Initiating from a notch, crack propagation along the grain boundaries was observed for most potentials. The Σ3 boundary was found to be more prone to fracture than the Σ5 one, as indicated by the lower energy release rate associated with the former. However, some potential dependence was identified in the existence of transient plastic deformation at crack tips, and the results are discussed in terms of the relevant material properties, including the excess energies of metastable phases and the critical energy release rate for intergranular fracture. In general, local plasticity at crack tips was observed in fracture simulations with potentials that predict low excess energies for metastable phases and high critical energy release rates for intergranular fracture.

  12. Improving volcanic ash predictions with the HYSPLIT dispersion model by assimilating MODIS satellite retrievals

    NASA Astrophysics Data System (ADS)

    Chai, Tianfeng; Crawford, Alice; Stunder, Barbara; Pavolonis, Michael J.; Draxler, Roland; Stein, Ariel

    2017-02-01

    Currently, the National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) runs the HYSPLIT dispersion model with a unit mass release rate to predict the transport and dispersion of volcanic ash. The model predictions provide information for the Volcanic Ash Advisory Centers (VAAC) to issue advisories to meteorological watch offices, area control centers, flight information centers, and others. This research aims to provide quantitative forecasts of ash distributions generated by objectively and optimally estimating the volcanic ash source strengths, vertical distribution, and temporal variations using an observation-modeling inversion technique. In this top-down approach, a cost functional is defined to quantify the differences between the model predictions and the satellite measurements of column-integrated ash concentrations, weighted by the model and observation uncertainties. Minimizing this cost functional by adjusting the sources provides the volcanic ash emission estimates. As an example, MODIS (Moderate Resolution Imaging Spectroradiometer) satellite retrievals of the 2008 Kasatochi volcanic ash clouds are used to test the HYSPLIT volcanic ash inverse system. Because the satellite retrievals include the ash cloud top height but not the bottom height, there are different model diagnostic choices for comparing the model results with the observed mass loadings. Three options are presented and tested. Although the emission estimates vary significantly with the different options, the subsequent model predictions with the different release estimates all show decent skill when evaluated against the unassimilated satellite observations at later times. Among the three options, integrating over three model layers yields slightly better results than integrating from the surface up to the observed volcanic ash cloud top or using a single model layer. Inverse tests also show that including the ash-free region to constrain the model is not beneficial for the current case. In addition, extra constraints on the source terms can be given by explicitly enforcing no ash for the atmosphere columns above or below the observed ash cloud top height. However, in this case such extra constraints are not helpful for the inverse modeling. It is also found that simultaneously assimilating observations at different times produces better hindcasts than only assimilating the most recent observations.
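The cost functional described above, model-observation misfit weighted by uncertainties, can be minimised with any constrained least-squares method once a source-receptor matrix is available. A minimal sketch assuming a known (hypothetical) source-receptor matrix H and diagonal error weights; the operational HYSPLIT inversion is far more elaborate:

```python
import numpy as np

def estimate_emissions(H, y, sigma_o, sigma_b, n_iter=2000):
    """Estimate nonnegative release rates x by minimising the cost functional

        J(x) = ||(H x - y) / sigma_o||^2 + ||x / sigma_b||^2,

    i.e. observation-weighted misfit plus a background (regularization) term,
    in the spirit of a top-down source inversion. Solved here by projected
    gradient descent to enforce x >= 0 (a sketch, not the operational code).
    """
    A = H / sigma_o  # weight rows by observation uncertainty
    b = y / sigma_o
    x = np.zeros(H.shape[1])
    L = np.linalg.norm(A, 2) ** 2 + 1.0 / sigma_b ** 2  # gradient Lipschitz bound
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + x / sigma_b ** 2
        x = np.maximum(x - grad / L, 0.0)  # project onto x >= 0
    return x

# Toy example: two release periods observed through three column loadings.
H = np.array([[1.0, 0.2], [0.5, 1.0], [0.1, 0.8]])
y = H @ np.array([3.0, 1.0])  # synthetic noise-free observations
x_est = estimate_emissions(H, y, sigma_o=1.0, sigma_b=100.0)
```

With a weak background term and noise-free data, the estimate reproduces the true release rates; tightening sigma_b pulls the solution toward the prior.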

  13. Effect of different polyphenol sources on the efficiency of ellagic acid release by Aspergillus niger.

    PubMed

    Sepúlveda, Leonardo; de la Cruz, Reynaldo; Buenrostro, José Juan; Ascacio-Valdés, Juan Alberto; Aguilera-Carbó, Antonio Francisco; Prado, Arely; Rodríguez-Herrera, Raúl; Aguilar, Cristóbal Noé

    2016-01-01

    Fungal hydrolysis of ellagitannins produces hexahydroxydiphenic acid, which is considered an intermediate molecule in ellagic acid release. Ellagic acid has important and desirable beneficial health properties. The aim of this work was to identify the effect of different sources of ellagitannins on the efficiency of ellagic acid release by Aspergillus niger. Three strains of A. niger (GH1, PSH and HT4) were assessed for ellagic acid release from different polyphenol sources used as substrate: cranberry, creosote bush, and pomegranate. Polyurethane foam was used as the support for solid-state culture in column reactors. Ellagitannase activity was measured for each treatment, and ellagic acid was quantified by high-performance liquid chromatography. When pomegranate polyphenols were used, a maximum ellagic acid value (350.21 mg/g) was reached with A. niger HT4 in solid-state culture. The highest ellagitannase activity (5176.81 U/l) was obtained at 8 h of culture when cranberry polyphenols and the A. niger PSH strain were used. The results demonstrated the effect of different polyphenol sources and A. niger strains on ellagic acid release. The best combination for releasing ellagic acid was pomegranate polyphenols with the A. niger HT4 strain, which has the ability to degrade these compounds to yield a potent bioactive molecule such as ellagic acid. Copyright © 2015 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.

  14. Queries over Unstructured Data: Probabilistic Methods to the Rescue

    NASA Astrophysics Data System (ADS)

    Sarawagi, Sunita

    Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution with query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various top-k count queries on imprecise duplicates.

  15. Case study of dust event sources from the Gobi and Taklamakan deserts: An investigation of the horizontal evolution and topographical effect using numerical modeling and remote sensing.

    PubMed

    Fan, Jin; Yue, Xiaoying; Sun, Qinghua; Wang, Shigong

    2017-06-01

    A severe dust event occurred from April 23 to April 27, 2014, in East Asia. A state-of-the-art online atmospheric chemistry model, WRF/Chem, was combined with a dust model, GOCART, to better understand the entire process of this event. Natural color images and aerosol optical depth (AOD) over the dust source region were derived from datasets of the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Aqua satellite to trace the dust variation and to verify the model results. Several meteorological fields, such as pressure, temperature, wind vectors and relative humidity, were used to analyze the meteorological dynamics. The results suggest that dust emission occurred only on April 23 and 24, although the event lasted for 5 days. The Gobi Desert was the main source for this event, and the Taklamakan Desert played no important role. This study also suggests that the landform of the source region can remarkably affect a dust event. The Tarim Basin has a topographical effect as a "dust reservoir": it can store unsettled dust, which can be released again as a second source, making a dust event longer and heavier. Copyright © 2016. Published by Elsevier B.V.

  16. Marine structure derived calcium phosphate-polymer biocomposites for local antibiotic delivery.

    PubMed

    Macha, Innocent J; Cazalbou, Sophie; Ben-Nissan, Besim; Harvey, Kate L; Milthorpe, Bruce

    2015-01-20

    Hydrothermally converted coralline hydroxyapatite (HAp) particles loaded with medically active substances were used to develop polylactic acid (PLA) thin-film composites for slow drug delivery systems. The effects of HAp particles within the PLA matrix on gentamicin (GM) release and release kinetics were studied. The gentamicin release kinetics appeared to follow the power-law Korsmeyer-Peppas model, with a mainly diffusional process involving a number of different drug transport mechanisms. Statistical analysis showed a very significant difference in gentamicin release between GM-containing PLA (PLAGM) devices and GM-containing HAp microspheres within a PLA matrix (PLAHApGM), with PLAHApGM displaying lower release rates. The use of HAp particles improved drug stabilization and gave higher drug encapsulation efficiency of the carrier. HAp is also a source of Ca2+ for the regeneration and repair of diseased bone tissue. The release profiles exhibited a steady-state release rate with significant antimicrobial activity against Staphylococcus aureus (S. aureus) (SH1000), even at high bacterial concentrations. The devices also showed a significant ability to control bacterial growth even after four weeks of drug release. Clinical release profiles can be easily tuned through drug-HAp physicochemical interactions and the degradation kinetics of the polymer matrix. The developed systems could be applied to prevent microbial adhesion to medical implant surfaces and to treat infections mainly caused by S. aureus in surgery.
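The power-law Korsmeyer-Peppas model mentioned above, Mt/M∞ = k·tⁿ, is commonly fitted by linear regression in log-log space over the early portion of the release curve. A minimal sketch with synthetic data (not the study's measurements):

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """Fit the power-law (Korsmeyer-Peppas) model M_t/M_inf = k * t^n by
    ordinary least squares on log(fraction) vs log(time). Returns (k, n).

    Conventionally applied only to the early release curve (fraction <~ 0.6);
    for a thin film, n ~ 0.5 indicates Fickian diffusion.
    """
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    k = math.exp(my - n * mx)
    return k, n

# Synthetic Fickian-like data: fraction released = 0.1 * t^0.5
t = [1.0, 2.0, 4.0, 8.0, 16.0]
frac = [0.1 * ti ** 0.5 for ti in t]
k, n = fit_korsmeyer_peppas(t, frac)
```

Here the fit recovers k ≈ 0.1 and n ≈ 0.5, consistent with the diffusional mechanism the exponent is meant to diagnose.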

  17. Marine Structure Derived Calcium Phosphate–Polymer Biocomposites for Local Antibiotic Delivery

    PubMed Central

    Macha, Innocent J.; Cazalbou, Sophie; Ben-Nissan, Besim; Harvey, Kate L.; Milthorpe, Bruce

    2015-01-01

    Hydrothermally converted coralline hydroxyapatite (HAp) particles loaded with medically active substances were used to develop polylactic acid (PLA) thin-film composites for slow drug delivery systems. The effects of HAp particles within the PLA matrix on gentamicin (GM) release and release kinetics were studied. The gentamicin release kinetics appeared to follow the power-law Korsmeyer-Peppas model, with a mainly diffusional process involving a number of different drug transport mechanisms. Statistical analysis showed a very significant difference in gentamicin release between GM-containing PLA (PLAGM) devices and GM-containing HAp microspheres within a PLA matrix (PLAHApGM), with PLAHApGM displaying lower release rates. The use of HAp particles improved drug stabilization and gave higher drug encapsulation efficiency of the carrier. HAp is also a source of Ca2+ for the regeneration and repair of diseased bone tissue. The release profiles exhibited a steady-state release rate with significant antimicrobial activity against Staphylococcus aureus (S. aureus) (SH1000), even at high bacterial concentrations. The devices also showed a significant ability to control bacterial growth even after four weeks of drug release. Clinical release profiles can be easily tuned through drug-HAp physicochemical interactions and the degradation kinetics of the polymer matrix. The developed systems could be applied to prevent microbial adhesion to medical implant surfaces and to treat infections mainly caused by S. aureus in surgery. PMID:25608725

  18. Masking release by combined spatial and masker-fluctuation effects in the open sound field.

    PubMed

    Middlebrooks, John C

    2017-12-01

    In a complex auditory scene, signals of interest can be distinguished from masking sounds by differences in source location [spatial release from masking (SRM)] and by differences between masker-alone and masker-plus-signal envelopes. This study investigated interactions between those factors in the release from masking of 700-Hz tones in an open sound field. Signal and masker sources were colocated in front of the listener, or the signal source was shifted 90° to the side. In Experiment 1, the masker contained a 25-Hz-wide on-signal band plus flanking bands having envelopes that were either mutually uncorrelated or comodulated. Comodulation masking release (CMR) was largely independent of signal location at a higher masker sound level, but at a lower level CMR was reduced for the lateral signal location. In Experiment 2, a brief signal was positioned at the envelope maximum (peak) or minimum (dip) of a 50-Hz-wide on-signal masker. Greater masking release in dip than in peak conditions was observed only for the 90° signal. Overall, open-field SRM was greater in magnitude than the binaural masking release reported in comparable closed-field studies, and envelope-related release was somewhat weaker. Mutual enhancement of masking release by spatial and envelope-related effects tended to increase with increasing masker level.

  19. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

    r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) provides built-in functionality for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir, with entrainment of the dam and impact downstream; and (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  20. Quantitative risk assessment of CO2 transport by pipelines--a review of uncertainties and their impacts.

    PubMed

    Koornneef, Joris; Spruijt, Mark; Molag, Menso; Ramírez, Andrea; Turkenburg, Wim; Faaij, André

    2010-05-15

    A systematic assessment, based on an extensive literature review, of the impact of gaps and uncertainties on the results of quantitative risk assessments (QRAs) for CO2 pipelines is presented. Sources of uncertainty that have been assessed are: failure rates, pipeline pressure, temperature, section length, diameter, orifice size, type and direction of release, meteorological conditions, jet diameter, vapour mass fraction in the release and the dose-effect relationship for CO2. A sensitivity analysis with these parameters is performed using release, dispersion and impact models. The results show that the knowledge gaps and uncertainties have a large effect on the accuracy of the assessed risks of CO2 pipelines. In this study it is found that the individual risk contour can vary between 0 and 204 m from the pipeline depending on the assumptions made. In existing studies this range is found to be between <1 m and 7.2 km. Mitigating the relevant risks is part of current practice, making them controllable. It is concluded that QRA for CO2 pipelines can be improved by validation of release and dispersion models for high-pressure CO2 releases, definition and adoption of a universal dose-effect relationship and development of a good practice guide for QRAs for CO2 pipelines. Copyright (c) 2009 Elsevier B.V. All rights reserved.
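One reason orifice size dominates the uncertainty ranges above follows directly from the standard orifice (Bernoulli) equation, under which mass flow scales with orifice area, i.e. with diameter squared. A hedged sketch with hypothetical dense-phase CO2 conditions and an assumed sharp-edged discharge coefficient:

```python
import math

def orifice_mass_flow(d_orifice_m, rho_kg_m3, dp_pa, cd=0.62):
    """Initial dense-phase mass flow (kg/s) through a pipeline breach, from
    the standard orifice equation:

        m_dot = Cd * A * sqrt(2 * rho * dP)

    Cd = 0.62 is a typical sharp-edged discharge coefficient (an assumption
    here); real CO2 releases involve flashing/two-phase effects this simple
    incompressible form ignores.
    """
    area = math.pi * (d_orifice_m / 2.0) ** 2
    return cd * area * math.sqrt(2.0 * rho_kg_m3 * dp_pa)

# Hypothetical conditions: dense-phase CO2 at ~80 bar overpressure.
small = orifice_mass_flow(0.02, 800.0, 8.0e6)  # 20 mm puncture
full = orifice_mass_flow(0.40, 800.0, 8.0e6)   # 400 mm full-bore rupture
```

The full-bore release rate exceeds the puncture rate by the area ratio, a factor of (0.40/0.02)² = 400, which is why assumed orifice size drives QRA risk contours so strongly.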

  1. Secondary effects of anion exchange on chloride, sulfate, and lead release: systems approach to corrosion control.

    PubMed

    Willison, Hillary; Boyer, Treavor H

    2012-05-01

    Water treatment processes can cause secondary changes in water chemistry that alter finished water quality including chloride, sulfate, natural organic matter (NOM), and metal release. Hence, the goal of this research was to provide an improved understanding of the chloride-to-sulfate mass ratio (CSMR) with regards to chloride and sulfate variations at full-scale water treatment plants and corrosion potential under simulated premise plumbing conditions. Laboratory corrosion studies were conducted using Pb-Sn solder/Cu tubing galvanic cells exposed to model waters with low (approx. 5 mg/L Cl(-) and 10 mg/L SO(4)(2-)) and high (approx. 50 mg/L Cl(-) and 100 mg/L SO(4)(2-)) concentrations of chloride and sulfate at a constant CSMR of ≈ 0.5. The role of NOM during corrosion was also evaluated by changing the type of organic material. In addition, full-scale sampling was conducted to quantify the raw water variability of chloride, sulfate, and NOM concentrations and the changes to these parameters from magnetic ion exchange treatment. Test conditions with higher concentrations of chloride and sulfate released significantly more lead than the lower chloride and sulfate test waters. In addition, the source of NOM was a key factor in the amount of lead released with the model organic compounds yielding significantly less lead release than aquatic NOM. Copyright © 2012 Elsevier Ltd. All rights reserved.
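The chloride-to-sulfate mass ratio (CSMR) at the centre of the study above is a simple quotient of the two anion concentrations; corrosion-control literature commonly treats CSMR values above about 0.5 as indicating elevated galvanic lead-release risk. A trivial sketch reproducing the study's two test-water conditions:

```python
def csmr(chloride_mg_l, sulfate_mg_l):
    """Chloride-to-sulfate mass ratio; both concentrations in mg/L."""
    return chloride_mg_l / sulfate_mg_l

# Both test waters in the study hold CSMR ~ 0.5 while varying absolute levels,
# isolating the effect of concentration from the effect of the ratio itself.
low = csmr(5.0, 10.0)     # approx. 5 mg/L Cl-, 10 mg/L SO4^2-
high = csmr(50.0, 100.0)  # approx. 50 mg/L Cl-, 100 mg/L SO4^2-
```

Holding the ratio constant while raising absolute chloride and sulfate is what allowed the study to show that concentration, not just CSMR, drives lead release.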

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luna, R. E.

    This paper provides a simple model for estimating the release of respirable aerosols resulting from an attack on a spent fuel cask using a high energy density device (HEDD). Two primary experiments have provided data on potential releases from spent fuel casks under HEDD attack. Sandia National Laboratories (SNL) conducted the first in the early 1980s, and the second was sponsored by Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) in Germany and conducted in France in 1994. Both used surrogate spent fuel assemblies in real casks. The SNL experiments used un-pressurized fuel pin assemblies in a single-element cask while the GRS tests used pressurized fuel pin assemblies in a 9-element cask. Data from the two test programs are reasonably consistent, given the differences in the experiments, but the use of the test data for prediction of releases resulting from HEDD attack requires a method for accounting for the effects of pin pressurization release and the ratio of pin plenum gas release to cask free volume (VR). To account for the effects of VR and to link the two data sources, a simple model has been developed that uses both the SNL data and the GRS data as well as recent test data on aerosols produced in experiments with single pellets subjected to HEDD effects conducted under the aegis of the International Consortium's Working Group on Sabotage of Transport and Storage Casks (WGSTSC). (authors)

  3. Impact of increasing Antarctic glacial freshwater release on regional sea-ice cover in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Merino, Nacho; Jourdain, Nicolas C.; Le Sommer, Julien; Goosse, Hugues; Mathiot, Pierre; Durand, Gael

    2018-01-01

    The sensitivity of Antarctic sea-ice to increasing glacial freshwater release into the Southern Ocean is studied in a series of 31-year ocean/sea-ice/iceberg model simulations. Glaciological estimates of ice-shelf melting and iceberg calving are used to better constrain the spatial distribution and magnitude of freshwater forcing around Antarctica. Two scenarios of glacial freshwater forcing have been designed to account for a decadal perturbation in glacial freshwater release to the Southern Ocean. For the first time, this perturbation explicitly takes into consideration the spatial distribution of changes in the volume of Antarctic ice shelves, which is found to be a key component of changes in freshwater release. In addition, glacial freshwater-induced changes in sea ice are compared to typical changes induced by the decadal evolution of atmospheric states. Our results show that, in general, the increase in glacial freshwater release increases Antarctic sea ice extent. But the response is opposite in some regions like the coastal Amundsen Sea, implying that distinct physical mechanisms are involved in the response. We also show that changes in freshwater forcing may induce large changes in sea-ice thickness, explaining about one half of the total change due to the combination of atmospheric and freshwater changes. The regional contrasts in our results suggest a need for improving the representation of freshwater sources and their evolution in climate models.

  4. Integration of Dust Prediction Systems and Vegetation Phenology to Track Pollen for Asthma Alerts in Public Health

    NASA Technical Reports Server (NTRS)

    Luvall, Jeffrey C.; Sprigg, W. A.; Huete, A.; Nickovic, S.; Pejanovic, G.; Levetin, E.; Van de water, P.; Myers, O.; Budge, A. M.; Krapfl, H.; hide

    2011-01-01

    Pollen can be transported great distances. Van de Water et al. (2003) reported that Juniperus pollen, a significant aeroallergen, was transported 200-600 km. Hence, local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. DREAM (Dust REgional Atmospheric Model) is a verified model for atmospheric dust transport modeling that uses MODIS data products to identify source regions and quantities of dust (Yin 2007). The use of satellite data products for studying phenology is well documented (White and Nemani 2006). We are modifying the DREAM model to incorporate pollen transport. The linkages already exist with DREAM through PHAiRS (Public Health Applications in Remote Sensing) to the public health community. This linkage has the potential to fill this data gap so that health effects of pollen can be better tracked for linkage with health outcome data including asthma, respiratory effects, myocardial infarction, and lost work days. DREAM is based on the SKIRON/Eta modeling system and the Eta/NCEP regional atmospheric model. The dust modules of the entire system incorporate state-of-the-art parameterizations of all the major phases of the atmospheric dust life cycle, such as production, diffusion, advection, and removal. These modules also include effects of the particle size distribution on aerosol dispersion. The dust production mechanism is based on viscous/turbulent mixing, shear-free convection diffusion, and soil moisture. In addition to these sophisticated mechanisms, very high resolution databases, including elevation, soil properties, and vegetation cover, are utilized. The DREAM model was modified to use pollen sources instead of dust (PREAM). Pollen release will be estimated based on satellite-derived phenology of Juniperus spp. communities.
The MODIS surface reflectance product (MOD09) will provide information on the start of the plant growing season, growth stage, peak greenness, dry-down and pollen release. Ground-based observational records of pollen release timing and quantities will be used as verification. Techniques developed using MOD09 surface reflectance products will be directly applicable to next-generation sensors such as VIIRS. The resulting deterministic model for predicting and simulating pollen emission and downwind concentrations will be used to study details of phenology and meteorology and their dependencies. This information will be used to support the Centers for Disease Control and Prevention (CDC)'s National Environmental Public Health Tracking Program (EPHT) and the State of New Mexico environmental public health decision support for asthma and allergy alerts.

  5. TYBO/BENHAM: Model Analysis of Groundwater Flow and Radionuclide Migration from Underground Nuclear Tests in Southwestern Pahute Mesa, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrew Wolfsberg; Lee Glascoe; Guoping Lu

    Recent field studies have led to the discovery of trace quantities of plutonium originating from the BENHAM underground nuclear test in two groundwater observation wells on Pahute Mesa at the Nevada Test Site. These observation wells are located 1.3 km from the BENHAM underground nuclear test and approximately 300 m from the TYBO underground nuclear test. In addition to plutonium, several other conservative (e.g. tritium) and reactive (e.g. cesium) radionuclides were found in both observation wells. The highest radionuclide concentrations were found in a well sampling a welded tuff aquifer more than 500 m above the BENHAM emplacement depth. These measurements have prompted additional investigations to ascertain the mechanisms, processes, and conditions affecting subsurface radionuclide transport in Pahute Mesa groundwater. This report describes an integrated modeling approach used to simulate groundwater flow, radionuclide source release, and radionuclide transport near the BENHAM and TYBO underground nuclear tests on Pahute Mesa. The components of the model include a flow model at a scale large enough to encompass many wells for calibration, a source-term model capable of predicting radionuclide releases to aquifers following complex processes associated with nonisothermal flow and glass dissolution, and site-scale transport models that consider migration of solutes and colloids in fractured volcanic rock. Although multiple modeling components contribute to the methodology presented in this report, they are coupled and yield results consistent with laboratory and field observations. Additionally, sensitivity analyses are conducted to provide insight into the relative importance of uncertainty ranges in the transport parameters.

  6. Source-term development for a contaminant plume for use by multimedia risk assessment models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  7. Thermally coupled moving boundary model for charge-discharge of LiFePO4/C cells

    NASA Astrophysics Data System (ADS)

    Khandelwal, Ashish; Hariharan, Krishnan S.; Gambhire, Priya; Kolake, Subramanya Mayya; Yeo, Taejung; Doo, Seokgwang

    2015-04-01

    Optimal thermal management is a key requirement in the commercial utilization of lithium-ion batteries comprising phase-change electrodes. In order to facilitate the design of battery packs, thermal management systems and fast charging profiles, a thermally coupled electrochemical model that takes into account the phase change phenomenon is required. In the present work, an electrochemical thermal model is proposed which includes the biphasic nature of phase-change electrodes, such as lithium iron phosphate (LFP), via a generalized moving boundary model. The contribution of phase change to the heat released during cell operation is modeled using an equivalent enthalpy approach. The heat released due to phase transformation is analyzed in comparison with other sources of heat such as reversible, irreversible and ohmic heat. A detailed study of the thermal behavior of the individual cell components with changing ambient temperature, rate of operation and heat transfer coefficient is carried out. Analysis of heat generation in the various regimes is used to develop cell design and operating guidelines. Further, different charging protocols are analyzed and a model-based methodology is suggested to design an efficient quick-charging protocol.
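
    The per-mechanism heat bookkeeping this record describes can be sketched with the standard electrochemical heat terms plus an equivalent-enthalpy phase-change term. This is an illustrative reading, not the paper's implementation; all names, sign conventions, and numbers below are ours, and ohmic heat is folded into the polarization term here:

    ```python
    def heat_sources(I, V, U_ocv, T, dUdT, n_dot, dH):
        """Per-mechanism heat generation rates in W (illustrative notation).
        I: applied current (A, positive on discharge), V: cell voltage (V),
        U_ocv: open-circuit potential (V), T: temperature (K),
        dUdT: entropic coefficient (V/K),
        n_dot: rate of phase transformation (mol/s),
        dH: equivalent transformation enthalpy (J/mol)."""
        q_irrev = I * (U_ocv - V)  # polarization (irreversible + ohmic) heat
        q_rev = I * T * dUdT       # entropic (reversible) heat
        q_phase = n_dot * dH       # phase-change heat via equivalent enthalpy
        return q_irrev, q_rev, q_phase

    # Example discharge operating point (all values hypothetical):
    q_irrev, q_rev, q_phase = heat_sources(
        I=2.0, V=3.3, U_ocv=3.4, T=300.0, dUdT=-1e-4, n_dot=1e-5, dH=1000.0)
    ```

    Comparing the magnitudes of these terms across operating conditions is the kind of analysis the abstract uses to derive design and charging guidelines.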

  8. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
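
    The Sobol' analysis described above can be sketched with a pick-freeze estimator run against an emulator. Everything here is illustrative: the toy `emulator` (a cheap analytic stand-in for the Gaussian-process surrogate of Polyphemus/Polair3D, with the first input dominating to mimic "emitted amount") and the choice of the Jansen estimator are ours, not the authors' setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def emulator(x):
        # Stand-in for the GP emulator: output dominated by x[:, 0].
        return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

    def first_order_sobol(f, dim, n=100_000):
        """First-order Sobol' indices via the Jansen pick-freeze estimator."""
        A = rng.random((n, dim))
        B = rng.random((n, dim))
        fA, fB = f(A), f(B)
        var = np.concatenate([fA, fB]).var()
        S = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]  # freeze input i at B's values
            S[i] = 1.0 - np.mean((fB - f(ABi)) ** 2) / (2.0 * var)
        return S

    S = first_order_sobol(emulator, dim=3)  # S[0] dominates by construction
    ```

    The point of emulation is visible in the cost structure: the estimator needs `n * (dim + 2)` model evaluations, affordable on a surrogate but not on the full Eulerian model.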

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as ~623 Tg CH4, giving an atmospheric lifetime for methane of ~8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of ~611 Tg CH4, giving an atmospheric lifetime for methane of ~8.5 years. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source for SF1 of ~524 Tg CH4, giving an atmospheric lifetime of ~10.0 years, and for SF2 of ~514 Tg CH4, yielding a lifetime of ~10.2 years.
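
    A quick consistency check on the quoted numbers: at steady state, source strength and lifetime together imply an atmospheric burden (burden = source x lifetime). The function name is ours:

    ```python
    def steady_state_burden(source_tg_yr, lifetime_yr):
        """Atmospheric burden (Tg) implied at steady state:
        burden = source strength x lifetime."""
        return source_tg_yr * lifetime_yr

    b1 = steady_state_burden(623.0, 8.3)  # first source-function model
    b2 = steady_state_burden(611.0, 8.5)  # second source-function model
    ```

    Both pairs imply nearly the same burden (~5.2 x 10^3 Tg CH4), as expected when the hydroxyl radical sink is held fixed and each source model is tuned against it; the lifetime shifts (8.3 vs. 10.0 years) then follow from the revised OH + CH4 reaction rate rather than from the source apportionment.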

  10. The JCMT Plane Survey: first complete data release - emission maps and compact source catalogue

    NASA Astrophysics Data System (ADS)

    Eden, D. J.; Moore, T. J. T.; Plume, R.; Urquhart, J. S.; Thompson, M. A.; Parsons, H.; Dempsey, J. T.; Rigby, A. J.; Morgan, L. K.; Thomas, H. S.; Berry, D.; Buckle, J.; Brunt, C. M.; Butner, H. M.; Carretero, D.; Chrysostomou, A.; Currie, M. J.; deVilliers, H. M.; Fich, M.; Gibb, A. G.; Hoare, M. G.; Jenness, T.; Manser, G.; Mottram, J. C.; Natario, C.; Olguin, F.; Peretto, N.; Pestalozzi, M.; Polychroni, D.; Redman, R. O.; Salji, C.; Summers, L. J.; Tahani, K.; Traficante, A.; diFrancesco, J.; Evans, A.; Fuller, G. A.; Johnstone, D.; Joncas, G.; Longmore, S. N.; Martin, P. G.; Richer, J. S.; Weferling, B.; White, G. J.; Zhu, M.

    2017-08-01

    We present the first data release of the James Clerk Maxwell Telescope Plane Survey (JPS), the JPS Public Release 1. JPS is an 850-μm continuum survey of six fields in the northern inner Galactic plane in a longitude range of ℓ = 7°-63°, made with the Submillimetre Common-User Bolometer Array 2. This first data release consists of emission maps of the six JPS regions with an average pixel-to-pixel noise of 7.19 mJy beam⁻¹, when smoothed over the beam, and a compact source catalogue containing 7813 sources. The 95 per cent completeness limits of the catalogue are estimated at 0.04 Jy beam⁻¹ and 0.3 Jy for the peak and integrated flux densities, respectively. The emission contained in the compact source catalogue is 42 ± 5 per cent of the total and, apart from the large-scale (greater than 8 arcmin) emission, there is excellent correspondence with features in the 500-μm Herschel maps. We find that, with two-dimensional matching, 98 ± 2 per cent of sources within the fields centred at ℓ = 20°, 30°, 40° and 50° are associated with molecular clouds, with 91 ± 3 per cent of the ℓ = 30° and 40° sources associated with dense molecular clumps. Matching the JPS catalogue to Herschel 70-μm sources, we find that 38 ± 1 per cent of sources show evidence of ongoing star formation. The JPS Public Release 1 images and catalogue will be a valuable resource for studies of star formation in the Galaxy and the role of environment and spiral arms in the star formation process.

  11. Nitrifying aerobic granular sludge fermentation for releases of carbon source and phosphorus: The role of fermentation pH.

    PubMed

    Zou, Jinte; Pan, Jiyang; He, Hangtian; Wu, Shuyun; Xiao, Naidong; Ni, Yongjiong; Li, Jun

    2018-07-01

    The effect of fermentation pH (uncontrolled, 4 and 10) on the releases of carbon source and phosphorus from nitrifying aerobic granular sludge (N-AGS) was investigated. Meanwhile, metal ion concentrations and the microbial community were characterized during N-AGS fermentation. The results indicated that N-AGS fermentation at pH 10 significantly promoted the releases of soluble chemical oxygen demand (SCOD) and total volatile fatty acids (TVFAs). However, the releases of SCOD and TVFAs from N-AGS were inhibited at pH 4. Moreover, acidic conditions promoted phosphorus release (mainly apatite) from N-AGS during anaerobic fermentation. Nevertheless, alkaline conditions failed to increase the phosphorus concentration due to the formation of chemical-phosphate precipitates. Compared with the previously reported flocculent sludge fermentation, N-AGS fermentation released more SCOD and TVFAs, possibly due to the greater extracellular polymeric substances content and some hydrolytic-acidogenic bacteria in N-AGS. Therefore, N-AGS alkaline fermentation facilitated carbon source recovery, while N-AGS acidic fermentation benefited phosphorus recovery. Copyright © 2018. Published by Elsevier Ltd.

  12. Atmospheric chemistry of hydrogen fluoride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Meng -Dawn

    In this study, the atmospheric chemistry, emissions, and surface boundary layer transport of hydrogen fluoride (HF) are summarized. HF is known to be chemically reactive and highly soluble, and both factors affect its transport and removal in the atmosphere; nevertheless, we suggest that the chemistry can be ignored when the HF concentration is at a sufficiently low level (e.g., 10 ppmv). At a low concentration, the capability for HF to react in the atmosphere is diminished, and the species can therefore be mathematically treated as inert during transport. At a sufficiently high concentration of HF (e.g., a kg/s release rate and thousands of ppm), however, HF can go through a series of rigorous chemical reactions including polymerization, depolymerization, and reaction with water to form molecular complexes. As such, the HF species cannot be considered inert, because the reactions could intimately influence the plume's thermodynamic properties, affecting the changes in plume temperature and density. The atmospheric residence time of HF was found to be less than four (4) days, and deposition (i.e., atmosphere-to-surface transport) is the dominant mechanism that controls the removal of HF and its oligomers from the atmosphere. The literature data on HF dry deposition velocity are relatively high compared to many commonly found atmospheric species such as ozone, sulfur dioxide, nitrogen oxides, etc. The global average wet deposition velocity of HF was found to be zero based on one literature source. Uptake of HF by rain drops is limited by the acidity of the rain drops, and atmospheric particulate matter contributes negligibly to HF uptake. 
Finally, given that the reactivity of HF at a high release rate and elevated mole concentration cannot be ignored, it is important to incorporate the reaction chemistry in the near-field dispersion close to the release source, and to incorporate the deposition mechanism in the far-field dispersion away from the release source. In other words, a hybrid computational scheme may be needed to address the transport and atmospheric chemistry of HF in a range of applications. The model uncertainty will be limited by the precision of the boundary layer parameterization and the ability to accurately model atmospheric turbulence.

  13. Atmospheric chemistry of hydrogen fluoride

    DOE PAGES

    Cheng, Meng -Dawn

    2017-04-11

    In this study, the atmospheric chemistry, emissions, and surface boundary layer transport of hydrogen fluoride (HF) are summarized. HF is known to be chemically reactive and highly soluble, and both factors affect its transport and removal in the atmosphere; nevertheless, we suggest that the chemistry can be ignored when the HF concentration is at a sufficiently low level (e.g., 10 ppmv). At a low concentration, the capability for HF to react in the atmosphere is diminished, and the species can therefore be mathematically treated as inert during transport. At a sufficiently high concentration of HF (e.g., a kg/s release rate and thousands of ppm), however, HF can go through a series of rigorous chemical reactions including polymerization, depolymerization, and reaction with water to form molecular complexes. As such, the HF species cannot be considered inert, because the reactions could intimately influence the plume's thermodynamic properties, affecting the changes in plume temperature and density. The atmospheric residence time of HF was found to be less than four (4) days, and deposition (i.e., atmosphere-to-surface transport) is the dominant mechanism that controls the removal of HF and its oligomers from the atmosphere. The literature data on HF dry deposition velocity are relatively high compared to many commonly found atmospheric species such as ozone, sulfur dioxide, nitrogen oxides, etc. The global average wet deposition velocity of HF was found to be zero based on one literature source. Uptake of HF by rain drops is limited by the acidity of the rain drops, and atmospheric particulate matter contributes negligibly to HF uptake. 
Finally, given that the reactivity of HF at a high release rate and elevated mole concentration cannot be ignored, it is important to incorporate the reaction chemistry in the near-field dispersion close to the release source, and to incorporate the deposition mechanism in the far-field dispersion away from the release source. In other words, a hybrid computational scheme may be needed to address the transport and atmospheric chemistry of HF in a range of applications. The model uncertainty will be limited by the precision of the boundary layer parameterization and the ability to accurately model atmospheric turbulence.

  14. BLT-EC (Breach, Leach and Transport-Equilibrium Chemistry) data input guide. A computer model for simulating release and coupled geochemical transport of contaminants from a subsurface disposal facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacKinnon, R.J.; Sullivan, T.M.; Kinsey, R.R.

    1997-05-01

    The BLT-EC computer code has been developed, implemented, and tested. BLT-EC is a two-dimensional finite element computer code capable of simulating the time-dependent release and reactive transport of aqueous phase species in a subsurface soil system. BLT-EC contains models to simulate the processes (container degradation, waste-form performance, transport, chemical reactions, and radioactive production and decay) most relevant to estimating the release and transport of contaminants from a subsurface disposal system. Water flow is provided through tabular input or auxiliary files. Container degradation considers localized failure due to pitting corrosion and general failure due to uniform surface degradation processes. Waste-form performance considers release to be limited by one of four mechanisms: rinse with partitioning, diffusion, uniform surface degradation, and solubility. Transport considers the processes of advection, dispersion, diffusion, chemical reaction, radioactive production and decay, and sources (waste form releases). Chemical reactions accounted for include complexation, sorption, dissolution-precipitation, oxidation-reduction, and ion exchange. Radioactive production and decay in the waste form is simulated. To improve the usefulness of BLT-EC, a pre-processor, ECIN, which assists in the creation of chemistry input files, and a post-processor, BLTPLOT, which provides a visual display of the data, have been developed. BLT-EC also includes an extensive database of thermodynamic data that is also accessible to ECIN. This document reviews the models implemented in BLT-EC and serves as a guide to creating input files and applying BLT-EC.

  15. Resistance to genetic insect control: Modelling the effects of space.

    PubMed

    Watkinson-Powell, Benjamin; Alphey, Nina

    2017-01-21

    Genetic insect control, such as self-limiting RIDL (Release of Insects Carrying a Dominant Lethal) technology, is a development of the sterile insect technique which is proposed to suppress wild populations of a number of major agricultural and public health insect pests. This is achieved by mass rearing and releasing male insects that are homozygous for a repressible dominant lethal genetic construct, which causes death in progeny when inherited. The released genetically engineered ('GE') insects compete for mates with wild individuals, resulting in population suppression. A previous study modelled the evolution of a hypothetical resistance to the lethal construct using a frequency-dependent population genetic and population dynamic approach. This found that proliferation of resistance is possible but can be diluted by the introgression of susceptible alleles from the released homozygous-susceptible GE males. We develop this approach within a spatial context by modelling the spread of a lethal construct and resistance trait, and the effect on population control, in a two-deme metapopulation, with GE release in one deme. Results show that spatial effects can drive an increased or decreased evolution of resistance in both the target and non-target demes, depending on the effectiveness and associated costs of the resistant trait, and on the rate of dispersal. A recurrent theme is the potential for the non-target deme to act as a source of resistant or susceptible alleles for the target deme through dispersal. This can in turn have a major impact on the effectiveness of insect population control. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
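
    The dilution-versus-dispersal interplay described above can be illustrated with a deliberately minimal two-deme recursion. This toy is entirely ours (parameters, update rules, and the genic-selection step): it tracks only dilution of the paternal allele pool by released susceptible males, symmetric dispersal, and a fitness cost, and it omits the selection pressure the lethal construct itself exerts in favour of resistance, which the paper's full model includes:

    ```python
    def generation(p_t, p_n, m, r, cost):
        """One generation of resistance-allele frequency in a two-deme toy.
          p_t, p_n : allele frequency in the target / non-target deme
          m        : fraction of each deme exchanged by dispersal
          r        : released:wild male ratio in the target deme; GE males
                     are homozygous susceptible, so release dilutes the
                     resistance allele among fathers there
          cost     : fitness cost of carrying the resistance allele
        """
        # dispersal mixes the demes
        pt = (1 - m) * p_t + m * p_n
        pn = (1 - m) * p_n + m * p_t
        # paternal allele pool in the target deme diluted by released males
        pt = 0.5 * (pt + pt / (1 + r))
        # genic selection against the costly allele in both demes
        pt = pt * (1 - cost) / (1 - cost * pt)
        pn = pn * (1 - cost) / (1 - cost * pn)
        return pt, pn

    pt, pn = 0.10, 0.10
    for _ in range(10):
        pt, pn = generation(pt, pn, m=0.05, r=9.0, cost=0.05)
    # release plus dispersal leave the target deme with less resistance,
    # while the non-target deme acts as a reservoir of alleles
    ```

    Even in this stripped-down form, the non-target deme's role as an allele source is visible: migration keeps feeding resistance (and susceptibility) back into the target deme faster than release alone can purge it.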

  16. The 2016 Kaikōura Earthquake Revealed by Kinematic Source Inversion and Seismic Wavefield Simulations: Slow Rupture Propagation on a Geometrically Complex Crustal Fault Network

    NASA Astrophysics Data System (ADS)

    Holden, C.; Kaneko, Y.; D'Anastasio, E.; Benites, R.; Fry, B.; Hamling, I. J.

    2017-11-01

    The 2016 Kaikōura (New Zealand) earthquake generated large ground motions and resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution using two kinematic modeling techniques based on analysis of local strong-motion and high-rate GPS data. Our kinematic models capture a complex pattern of slowly (Vr < 2 km/s) propagating rupture from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 s after the origin time. Both models indicate rupture reactivation on the Kekerengu fault with the time separation of 11 s between the start of the original failure and start of the subsequent one. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.

  17. Action Recommendation for Cyber Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhury, Sutanay; Rodriguez, Luke R.; Curtis, Darren S.

    2015-09-01

    This paper presents a unifying graph-based model for representing the infrastructure, behavior and missions of an enterprise. We describe how the model can be used to achieve resiliency against a wide class of failures and attacks. We introduce an algorithm for recommending resilience-establishing actions based on dynamic updates to the models. Without loss of generality, we show the effectiveness of the algorithm for preserving latency-based quality of service (QoS). Our models and recommendation algorithms are implemented in a software framework that we seek to release as an open-source framework for simulating resilient cyber systems.

  18. Simulation Technology Laboratory Building 970 hazards assessment document

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, C.L.; Starr, M.D.

    1994-11-01

    The Department of Energy Order 5500.3A requires that facility-specific hazards assessments be prepared, maintained, and used for emergency planning purposes. This hazards assessment document describes the chemical and radiological hazards associated with the Simulation Technology Laboratory, Building 970. The entire inventory was screened according to the potential airborne impact to onsite and offsite individuals. The air dispersion model ALOHA estimated pollutant concentrations downwind from the source of a release, taking into consideration the toxicological and physical characteristics of the release site, the atmospheric conditions, and the circumstances of the release. The greatest distances at which a postulated facility event will produce consequences exceeding the ERPG-2 and Early Severe Health Effects thresholds are 78 and 46 meters, respectively. The highest emergency classification is a Site Area Emergency. The Emergency Planning Zone is 100 meters.

  19. Application of a source apportionment model in consideration of volatile organic compounds in an urban stream

    USGS Publications Warehouse

    Asher, W.E.; Luo, W.; Campo, K.W.; Bender, D.A.; Robinson, K.W.; Zogorski, J.S.; Pankow, J.F.

    2007-01-01

    Position-dependent concentrations of trichloroethylene and methyl-tert-butyl ether are considered for a 2.81-km section of the Aberjona River in Massachusetts, USA. This river flows through Woburn and Winchester (Massachusetts, USA), an area that is highly urbanized, has a long history of industrial activities dating to the early 1800s, and has gained national attention because of contamination from chlorinated solvent compounds in Woburn wells G and H. The river study section is in Winchester and begins approximately five stream kilometers downstream from the Woburn wells Superfund site. Approximately 300 toxic release sites are documented in the watershed upstream from the terminus of the study section. The inflow to the river study section is considered one source of contamination. Other sources are the atmosphere, a tributary flow, and groundwater flows entering the river; the latter are categorized according to stream zone (1, 2, 3, etc.). Loss processes considered include outflows to groundwater and water-to-atmosphere transfer of volatile compounds. For both trichloroethylene and methyl-tert-butyl ether, degradation is neglected over the timescale of interest. Source apportionment fractions with assigned values α_inflow, α_1, α_2, α_3, etc., are tracked by a source apportionment model. The strengths of the groundwater and tributary sources serve as fitting parameters when minimizing a reduced least squares statistic between water concentrations measured during a synoptic study in July 2001 versus predictions from the model. The model fits provide strong evidence of substantial unknown groundwater sources of trichloroethylene and methyl-tert-butyl ether amounting to tens of grams per day of trichloroethylene and methyl-tert-butyl ether in the river along the study section. Modeling in a source apportionment manner can be useful to water quality managers allocating limited resources for remediation and source control. © 2007 SETAC.
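
    The fitting step this record describes can be sketched as an ordinary least-squares problem: given each source's modelled contribution per unit strength at each sampling station, solve for the strengths that best reproduce the synoptic concentrations, then report per-station apportionment fractions. All numbers below are invented for illustration, and the paper's actual fit uses a reduced least squares statistic rather than this plain formulation:

    ```python
    import numpy as np

    # R[i, j]: modelled concentration at station i per unit strength of
    # source j (columns: inflow, tributary, groundwater zones 1-2).
    R = np.array([
        [1.0, 0.0, 0.2, 0.0],
        [0.9, 0.5, 0.4, 0.1],
        [0.8, 0.6, 0.5, 0.4],
        [0.7, 0.6, 0.5, 0.8],
    ])
    true_strengths = np.array([2.0, 1.0, 3.0, 1.5])
    c_obs = R @ true_strengths  # noise-free synthetic observations

    # Strengths minimizing ||R s - c_obs||^2; with noise-free data and a
    # well-conditioned R, least squares recovers them exactly.
    s_fit, *_ = np.linalg.lstsq(R, c_obs, rcond=None)

    # Apportionment fraction of each source at each station.
    frac = (R * s_fit) / c_obs[:, None]
    ```

    With real, noisy measurements the fitted strengths absorb model error as well, which is how the study's unexplained residuals point to substantial unknown groundwater sources.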

  20. Nickel release from surgical instruments and operating room equipment.

    PubMed

    Boyd, Anne H; Hylwa, Sara A

    2018-04-15

    Background: There has been no systematic study assessing nickel release from surgical instruments and equipment used within the operating suite. This equipment represents important potential sources of exposure for nickel-sensitive patients and hospital staff. Objective: To investigate nickel release from commonly used surgical instruments and operating room equipment. Methods: Using the dimethylglyoxime nickel spot test, a variety of surgical instruments and operating room equipment were tested for nickel release at our institution. Results: Of the 128 surgical instruments tested, only 1 was positive for nickel release. Of the 43 operating room items tested, 19 were positive for nickel release, 7 of which have the potential for direct contact with patients and/or hospital staff. Conclusion: Hospital systems should be aware of surgical instruments and operating room equipment as potential sources of nickel exposure.

  1. AN INVENTORY OF SOURCES AND ENVIRONMENTAL RELEASES OF DIOXIN-LIKE COMPOUNDS IN THE U.S. FOR THE YEARS 1987, 1995, AND 2000 (Final, Nov 2006)

    EPA Science Inventory

    An Inventory of Sources and Environmental Releases of Dioxin-Like Compounds in the United States for the Years 1987, 1995, and 2000 (EPA/600/P-03/002F, November 2006), is a peer reviewed report representing EPA’s assessment of dioxin sources and their emissions to the environment...

  2. Modeling Ionization Events Induced by Protein-Protein Binding

    NASA Astrophysics Data System (ADS)

    Mitra, Rooplekha; Shyam, Radhey; Alexov, Emil

    2009-11-01

    The association of two or more biological macromolecules dramatically changes the environment of the amino acids situated at the binding interface and can change the ionization states of titratable groups. The change of ionization due to binding results in proton uptake/release and causes pH-dependence of the binding free energy. We apply a computational method, as implemented in the Multi Conformation Continuum Electrostatics (MCCE) algorithm, to study protonation events in a large set of protein-protein complexes. Our results indicate that proton uptake/release is a common phenomenon in protein binding, since in the vast majority of the cases (70%) binding caused a proton change of at least 0.5 units. The proton uptake/release was further investigated with respect to the interfacial area and charges of the monomers, and it was found that these macroscopic characteristics are not important determinants. Instead, charge complementarity across the interface and the number of unpaired ionizable groups at the interface are the primary sources of proton uptake/release.
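
    The proton uptake/release bookkeeping can be illustrated with a simple Henderson-Hasselbalch estimate: binding shifts a group's pKa, and the net proton change is the sum of the per-group changes in protonated fraction. Note that MCCE itself samples conformers and microstate energies; this is only a minimal sketch with invented pKa shifts.

```python
def protonated_fraction(pH, pKa):
    """Henderson-Hasselbalch fraction of a group in its protonated form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def proton_uptake(pH, pKa_free, pKa_bound):
    """Net protons taken up on binding: sum over titratable groups of the
    change in protonated fraction (positive = uptake, negative = release)."""
    return sum(protonated_fraction(pH, b) - protonated_fraction(pH, f)
               for f, b in zip(pKa_free, pKa_bound))

# Invented pKa shifts for three interface groups at pH 7.0; the first
# group's pKa is raised on binding, so it picks up a proton.
pKa_free  = [6.5, 4.0, 10.4]
pKa_bound = [7.4, 3.6, 10.4]
print(round(proton_uptake(7.0, pKa_free, pKa_bound), 2))
```

    A net change of 0.5 units or more in this sum corresponds to the threshold the abstract uses to call a complex pH-sensitive.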

  3. Cooling of the Earth in the Archaean: Consequences of pressure-release melting in a hotter mantle

    NASA Astrophysics Data System (ADS)

    Vlaar, N. J.; van Keken, P. E.; van den Berg, A. P.

    1994-01-01

    A model is presented to describe the cooling of the Earth in the Archaean. At the higher Archaean mantle temperatures, pressure-release melting starts deeper and generates a thicker basaltic or komatiitic crust and depleted harzburgite layer compared with the present-day situation. Intrinsic compositional stability and lack of mechanical coherency render the mechanism of plate tectonics ineffective. It is proposed that the Archaean continents stabilised early on top of a compositionally stratified root. In the Archaean oceanic lithosphere, hydrated upper crust can founder and recycle through its high-pressure phase eclogite. Eclogite remelting and new pressure-release melting generate new crustal material. Migration of magma and latent heat release by solidification at the surface provide an efficient mechanism to cool the mantle by several hundreds of degrees during the Archaean. This can satisfactorily explain the occurrence of high extrusion temperature komatiites and lower extrusion temperature basalts in greenstone belts as being derived from the same source by different mechanisms.

  4. How old is streamwater? Open questions in catchment transit time conceptualization, modeling and analysis

    Treesearch

    J.J. McDonnell; K. McGuire; P. Aggarwal; K.J. Beven; D. Biondi; G. Destouni; S. Dunn; A. James; J. Kirchner; P. Kraft; S. Lyon; P. Maloszewski; B. Newman; L. Pfister; A. Rinaldo; A. Rodhe; T. Sayama; J. Seibert; K. Solomon; C. Soulsby; M. Stewart; D. Tetzlaff; C. Tobin; P. Troch; M. Weiler; A. Western; A. Wörman; S. Wrede

    2010-01-01

    The time water spends travelling subsurface through a catchment to the stream network (i.e. the catchment water transit time) fundamentally describes the storage, flow pathway heterogeneity and sources of water in a catchment. The distribution of transit times reflects how catchments retain and release water and solutes that in turn set biogeochemical conditions and...

  5. VizieR Online Data Catalog: Parameters and IR excesses of Gaia DR1 stars (McDonald+, 2017)

    NASA Astrophysics Data System (ADS)

    McDonald, I.; Zijlstra, A. A.; Watson, R. A.

    2017-08-01

    Spectral energy distribution fits are presented for stars from the Tycho-Gaia Astrometric Solution (TGAS) from Gaia Data Release 1. Hipparcos-Gaia stars are presented in a separate table. Effective temperatures, bolometric luminosities, and infrared excesses are presented (alongside other parameters pertinent to the model fits), plus the source photometry used. (3 data files).

  6. Forecasting Future Sea Ice Conditions: A Lagrangian Approach

    DTIC Science & Technology

    2015-09-30

    1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Forecasting Future Sea Ice Conditions: A Lagrangian ...GCMs participating in IPCC AR5 agree with observed source region patterns from the satellite-derived dataset. 4- Compare Lagrangian ice... Lagrangian sea-ice back trajectories to estimate thermodynamic and dynamic (advection) ice loss. APPROACH We use a Lagrangian trajectory model to

  7. A preliminary investigation of the environmental impact of a thermal power plant in relation to PCB contamination.

    PubMed

    Gedik, Kadir; Imamoglu, Ipek

    2011-07-01

    The most significant application of polychlorinated biphenyls (PCBs) is in transformers and capacitors. Therefore, power plants are important suspected sources for entry of PCBs into the environment. In this context, the levels and distribution of PCBs in sediment, soil, ash, and sludge samples were investigated around Seyitömer thermal power plant, Kütahya, Turkey. Moreover, identity and contribution of PCB mixtures were predicted using the chemical mass balance (CMB) receptor model. United States Environmental Protection Agency methods were applied during sample preparation, extraction (3540C), cleanup (3660B, 3665A, 3630C), and analysis (8082A). ΣPCB concentrations in the region ranged from not detected to 385 ng/g dry weight, with relatively higher contamination in sediments in comparison to soil, sludge, and ash samples collected from around the power plant. Congener profiles of the sediment and soil samples show penta-, hexa-, and hepta-chlorobiphenyls as the major homolog groups. The results from the CMB model indicate that PCB contamination is largely due to Clophen A60/A40 and Aroclor 1254/1254(late)/1260 release into the sediment and sludge samples around the thermal power plant. Since there are no other sources of PCBs in the region and the identity of PCB sources estimated by the CMB model mirrors PCB mixtures contained in transformers formerly used in the plant, the environmental contamination observed especially in sediments is attributed to the power plant. Release of PCBs over time, as indicated by the significant concentrations observed even in surface samples, emphasizes the importance of the need for better environmental management.
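
    The chemical mass balance idea used above can be sketched in a few lines: the measured homolog profile is expressed as a linear combination of candidate source "fingerprints", and the contribution of each mixture is obtained by least squares. The profiles and sample below are invented, not real Aroclor/Clophen data, and production CMB codes additionally apply effective-variance weighting to account for measurement uncertainty.

```python
import numpy as np

# Invented source fingerprints: columns are candidate PCB mixtures,
# rows are homolog groups (mass fractions).
profiles = np.array([
    [0.10, 0.02],   # tetra-CB
    [0.45, 0.15],   # penta-CB
    [0.30, 0.40],   # hexa-CB
    [0.15, 0.43],   # hepta-CB
])
ambient = profiles @ np.array([3.0, 1.0])   # synthetic receptor sample

# Least-squares estimate of each mixture's contribution to the sample.
contributions, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
print(np.round(contributions, 3))
```

    When the fitted contributions mirror the mixtures known to have been used on site, as in the abstract, the receptor modeling supports the source attribution.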

  8. Planck intermediate results. VII. Statistical properties of infrared and radio extragalactic sources from the Planck Early Release Compact Source Catalogue at frequencies between 100 and 857 GHz

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. 
U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2013-02-01

    We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg2 (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz, respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at frequencies of 353 GHz and higher (217 GHz and lower), the counts are dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra are crossing. The observed counts are in the Euclidean regime.
The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and to models taking into account both radio and infrared galaxies, covering a large range of flux densities. We derive the multi-frequency Euclidean level - the plateau in the normalised differential counts at high flux density - and compare it to WMAP, Spitzer and IRAS results. The submillimetre number counts are not well reproduced by current evolution models of dusty galaxies, whereas the millimetre part appears reasonably well fitted by the most recent model for synchrotron-dominated sources. Finally we provide estimates of the local luminosity density of dusty galaxies, providing the first such measurements at 545 and 857 GHz. Appendices are available in electronic form at http://www.aanda.org. Corresponding author: herve.dole@ias.u-psud.fr

  9. 32 CFR 286.25 - Judicial actions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... INFORMATION ACT PROGRAM DOD FREEDOM OF INFORMATION ACT PROGRAM REGULATION Release and Processing Procedures...-United States government source information. A requester may bring suit in a U.S. District Court to compel the release of records obtained from a non-government source or records based on information...

  10. [Effects of carbon sources, temperature and electron acceptors on biological phosphorus removal].

    PubMed

    Han, Yun; Xu, Song; Dong, Tao; Wang, Bin-Fan; Wang, Xian-Yao; Peng, Dang-Cong

    2015-02-01

    Effects of carbon sources, temperature and electron acceptors on phosphorus uptake and release were investigated in a pilot-scale oxidation ditch. Phosphorus uptake and release rates were measured with different carbon sources (domestic sewage, sodium acetate, glucose) at 25 degrees C. The results showed that glucose gave the minimum phosphorus uptake and release rates, 5.12 mg x (g x h)(-1) and 6.43 mg x (g x h)(-1), respectively, and that the rates for domestic sewage were similar to those for sodium acetate. Phosphorus uptake and release rates increased with the increase of temperature (12, 16, 20 and 25 degrees C) using sodium acetate as the carbon source. The anoxic phosphorus uptake rate decreased with added COD. Electron acceptors (oxygen, nitrate, nitrite) had significant effects on the phosphorus uptake rate, in the order oxygen > nitrate > nitrite. The mass ratios of anoxic P uptake to N consumption (P(uptake)/N(consumption)) for nitrate and nitrite were 0.96 and 0.65, respectively.

  11. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.

  12. Near-field transport of {sup 129}I from a point source in an in-room disposal vault

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolar, M.; Leneveu, D.M.; Johnson, L.H.

    1995-12-31

    A very small number of disposal containers of heat generating nuclear waste may have initial manufacturing defects that would lead to pin-hole type failures at the time of or shortly after emplacement. For sufficiently long-lived containers, only the initial defects need to be considered in modeling of release rates from the disposal vault. Two approaches to modeling of near-field mass transport from a single point source within a disposal room have been compared: the finite-element code MOTIF (A Model Of Transport In Fractured/porous media) and a boundary integral method (BIM). These two approaches were found to give identical results for a simplified model of the disposal room without groundwater flow. MOTIF has then been used to study the effects of groundwater flow on the mass transport out of the emplacement room.

  13. Dynamic SPARROW Modeling of Nitrogen Flux with Climate and MODIS Vegetation Indices as Drivers

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Brakebill, J.; Schwarz, G.; Alexander, R. B.; Hirsch, R. M.; Nolin, A. W.; Macauley, M.; Zhang, Q.; Shih, J.; Wang, W.; Sproles, E.

    2011-12-01

    SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models are statistically calibrated and describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially-referenced explanatory information. But many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. In this study, we describe a dynamically calibrated SPARROW model of total nitrogen flux in the Potomac River Basin based on seasonal water quality and watershed input data for 80 monitoring stations over the period 2000 to 2008. One challenge in dynamic modeling of reactive nitrogen is obtaining frequently-reported, spatially-detailed input data on the phenology of agricultural production and terrestrial vegetation. In this NASA-funded research, we use the Enhanced Vegetation Index (EVI) and gross primary productivity data from the Terra Satellite-borne MODIS sensor to parameterize seasonal uptake and release of nitrogen. The spatial reference frame of the model is a 16,000-reach, 1:100,000-scale stream network, and the computational time step is seasonal. Precipitation and temperature data are from PRISM. The model formulation allows for separate storage compartments for nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition. Removal of nitrogen from watershed storage to stream channels and to "permanent" sinks (deep groundwater and the atmosphere) occur as parallel first-order processes. 
We use the model to explore an important issue in nutrient management in the Potomac and other basins: the long-term response of total nitrogen flux to changing climate. We model the nitrogen flux response to projected seasonal and inter-annual changes in temperature and precipitation, but under current seasonal nitrogen inputs, as indicated by MODIS measures of productivity. Under these constant inter-annual inputs, changing temperature and precipitation is predicted to lead to flux changes as temporary basin stores of nitrogen either grow or shrink due to changing relative rates of nitrogen removal to the atmosphere and release to streams.
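
    The storage-and-release structure described above, with parallel first-order removal of stored nitrogen to streams and to "permanent" sinks driven by seasonal inputs, can be sketched in a few lines. The rate constants, initial storage and inputs are purely illustrative, not the calibrated SPARROW parameters.

```python
# Seasonal nitrogen storage with two parallel first-order removals:
# release to streams (k_stream) and loss to permanent sinks (k_sink).
k_stream, k_sink = 0.30, 0.10            # per-season rate constants
storage = 100.0                          # kg N in watershed storage
seasonal_inputs = [20.0, 5.0, 5.0, 20.0] # kg N delivered each season

stream_flux = []
for n_in in seasonal_inputs:
    to_stream = k_stream * storage       # first-order release to streams
    to_sink = k_sink * storage           # first-order loss to deep GW/atmosphere
    stream_flux.append(to_stream)
    storage += n_in - to_stream - to_sink
print([round(f, 1) for f in stream_flux], round(storage, 1))
```

    Because the outflux is proportional to storage, a change in inputs or rates shifts the stream flux only gradually as the store grows or shrinks, which is the dynamic imbalance the abstract emphasizes.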

  14. Prediction and characterization of heat-affected zone formation due to neighboring nickel-aluminum multilayer foil reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, David P.; Hirschfeld, Deidre A.; Hooper, Ryan J.

    2015-09-01

    Reactive multilayer foils have the potential to be used as local high intensity heat sources for a variety of applications. Much of the past research effort concerning these materials has focused on understanding the structure-property relationships of the foils that govern the energy released during a reaction. To enhance the ability of researchers to more rapidly develop technologies based on reactive multilayer foils, a deeper and more predictive understanding of the relationship between the heat released from the foil and microstructural evolution in the neighboring materials is needed. This work describes the development of a numerical model for the purpose of evaluating new foil-substrate combinations for screening and optimization. The model is experimentally validated using commercially available Ni-Al multilayer foils and different alloys.

  15. UTM: Universal Transit Modeller

    NASA Astrophysics Data System (ADS)

    Deeg, Hans J.

    2014-12-01

    The Universal Transit Modeller (UTM) is a light-curve simulator for all kinds of transiting or eclipsing configurations between arbitrary numbers of several types of objects, which may be stars, planets, planetary moons, and planetary rings. A separate fitting program, UFIT (Universal Fitter) is part of the UTM distribution and may be used to derive best fits to light-curves for any set of continuously variable parameters. UTM/UFIT is written in IDL code and its source is released in the public domain under the GNU General Public License.

  16. SCIPUFF Tangent-Linear/Adjoint Model for Release Source Location from Observational Data

    DTIC Science & Technology

    2011-01-18

    magnitudes, times? Inverse model based on SCIPUFF (AIMS) 1/17/2011 Aerodyne Research, Inc. W911NF-06-C-0161 5 (1981), Daescu and Carmichael (2003...Defined Choice-Situations" The Journal of the Operational Research Society, Vol. 32, No. 2 (1981) Daescu, D. N., and Carmichael, G. R., "An Adjoint...Intelligence Applications to Environmental Sciences at AMS Annual Meeting, Atlanta, GA, Jan 17-21 (2010) N. Platt, S. Warner and S. M. Nunes, 'Plan for

  17. Multimedia Environmental Distribution of Nanomaterials

    NASA Astrophysics Data System (ADS)

    Liu, Haoyang Haven

    Engineered nanomaterials (ENMs), which may be released to the environment due to human-related activities, can move across environmental phase boundaries and be found in most media. Given the rapid development and growing applications of nanotechnology, there is concern and thus the need to assess the potential environmental impact associated with ENMs. Accordingly, a modeling platform was developed to enable evaluation of the dynamic multimedia environmental distribution of ENMs (MendNano) and the range of potential exposure concentrations of ENMs. MendNano was based on a dynamic multimedia compartmental modeling approach guided by detailed analysis of the agglomeration of ENMs, life-cycle-analysis-based estimates of their potential release to the environment, and incorporation of mechanistic sub-models of various intermedia transport processes. Model simulations for various environmental scenarios indicated that ENM accumulation in the sediment increased significantly with increased ENM attachment to suspended solids in water. Atmospheric dry and wet deposition can be important pathways for ENM input to the terrestrial environment in the absence of direct and distributed ENM release to soil. Increased ENM concentration in water due to atmospheric deposition (wet and dry) is expected as direct ENM release to water diminishes. However, for soluble ENMs, dissolution can be the dominant pathway for suspended ENM removal from water, even compared to advective transport. For example, simulations for Los Angeles showed that dry deposition, rain scavenging, and wind dilution can remove 90% of ENMs from the atmospheric airshed in ~100-230 days, ~2-6 hrs, and ~0.5-2 days, respectively. For the evaluated ENMs (metals, metal oxides, carbon nanotubes (CNT), nanoclays), mass accumulation in the multimedia environment was mostly in the soil and sediment.
    Additionally, simulation results for TiO2 in Los Angeles demonstrate that the ENM concentrations in air and water increase rapidly to reach steady state, in 72 hrs and 8 days after the start of source release, respectively. After termination of source release, ENM concentrations would decrease by 90% in ~1 and ~4 days. In contrast, steady state for ENM concentrations in soil would not be expected to occur until after about 10 years. MendNano was further integrated with a sub-model of lifecycle environmental assessment for the release of ENMs (LearNano). Estimation of the releases of various ENMs and their environmental distributions in various regions in the U.S. and countries throughout the world revealed that the exposure concentrations for most ENMs (e.g., metals, metal oxides and carbon nanotubes) are expected to be in the range of 0.0003-30 ng m-3 (air), 0.006-150 ng L-1 (water), 0.01-40 μg kg-1 (soil), and 0.005-100 mg kg-1 (sediment). It is important to note that the environmental transport of ENMs is governed by particulate transport processes, and thus the transport rates of ENMs are dependent on their particle size distribution. Accordingly, a computational constant-number Direct Simulation Monte Carlo (DSMC) model was also developed to assess ENM agglomeration in aqueous systems, by solving the Smoluchowski coagulation equation coupled with particle-particle interaction energies provided by the classical Derjaguin-Landau-Verwey-Overbeek (DLVO) theory and non-DLVO hydration repulsion interaction energy. Prediction of ENM agglomerate PSDs demonstrated excellent agreement with experimental measurements for TiO2, CeO2, alpha-Fe2O3, SiO2, and C60 ENMs over a wide range of suspension conditions. Simulations also demonstrated, in quantitative agreement with DLS measurements, that nanoparticle agglomerate size increased both with ionic strength (IS) and as the solution pH approached the isoelectric point (IEP). 
    Additionally, evaluation of experimental DLS measurements for TiO2, CeO2, SiO2, and alpha-Fe2O3 (hematite) at high IS (up to 900 mM) or low |zeta-potential| (>=1.35 mV) revealed that non-DLVO hydration repulsion energy can exceed the electrostatic repulsion energy, such that the increased overall repulsion energy (contributed by hydration repulsion energy) can significantly lower the agglomerate diameter relative to the classical DLVO prediction. The classical DLVO theory remains reasonably applicable for agglomeration of NPs of high |zeta-potential| (~>35 mV) in suspensions of low IS (~1 mM) or low |zeta-potential| (~<40 mV) conditions. In summary, the MendNano-LearNano integrated modeling platform was implemented as a web-based software application that enables rapid "what-if?" scenario analysis, in order to assess the response of the environmental system to various scenarios of ENM releases, investigate the impact of geographical and meteorological parameters on ENM distribution in the environment, compare the impact of ENM production and potential releases on different regions, as well as estimate source release rates based on monitored ENM concentrations. It is envisioned that the present integrated modeling platform can serve as a decision support tool to rapidly and critically assess the potential environmental implications of ENMs and thus ensure that nanotechnology is developed in a productive and environmentally responsible manner.
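
    The constant-number Monte Carlo idea can be illustrated with a toy coagulation model: collisions succeed with probability 1/W, where W stands in for a DLVO-style stability ratio, and the population size is held fixed by replacing one colliding particle with a copy of a randomly chosen one. This is a didactic sketch, not the paper's DSMC implementation, which uses size- and interaction-energy-dependent kernels.

```python
import random

def mean_agglomerate_size(n=1000, collisions=3000, W=1.0, seed=1):
    """Toy constant-number coagulation: a size-independent kernel whose
    success probability is damped by a stability ratio W (W = 1: fully
    destabilized; large W: a repulsive barrier slows growth)."""
    random.seed(seed)
    sizes = [1.0] * n                 # monomer units per agglomerate
    for _ in range(collisions):
        if random.random() >= 1.0 / W:
            continue                  # collision rejected by the barrier
        i, j = random.sample(range(n), 2)
        sizes[i] += sizes[j]          # particles i and j merge
        sizes[j] = sizes[random.randrange(n)]  # constant-N replacement
    return sum(sizes) / n

# A higher stability ratio (stronger repulsion) slows agglomerate growth.
print(round(mean_agglomerate_size(W=1.0), 1),
      round(mean_agglomerate_size(W=10.0), 1))
```

    The replacement step is what keeps the sample size constant while the simulated volume implicitly shrinks, the standard trick in constant-number coagulation schemes.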

  18. Severe Nuclear Accident Program (SNAP) - a real time model for accidental releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saltbones, J.; Foss, A.; Bartnicki, J.

    1996-12-31

    The model: the Severe Nuclear Accident Program (SNAP) has been developed at the Norwegian Meteorological Institute (DNMI) in Oslo to provide decision makers and Government officials with a real-time tool for simulating large accidental releases of radioactivity from nuclear power plants or other sources. SNAP is developed in the Lagrangian framework, in which atmospheric transport of radioactive pollutants is simulated by emitting a large number of particles from the source. The main advantage of the Lagrangian approach is the possibility of precise parameterization of advection processes, especially close to the source. SNAP can be used to predict the transport and deposition of a radioactive cloud in the future (up to 48 hours, in the present version) or to analyze the behavior of the cloud in the past. It is also possible to run the model in a mixed mode (partly analysis and partly forecast). In the routine run we assume unit (1 g s{sup -1}) emission in each of three classes. This assumption is very convenient for the main user of the model output in case of emergency, the Norwegian Radiation Protection Agency. Due to the linearity of the model equations, the user can test different emission scenarios as a post-processing task by assigning different weights to the concentration and deposition fields corresponding to each of the three emission classes. SNAP is fully operational and can be run by the meteorologist on duty at any time. The output from SNAP has two forms. First, on maps of Europe, or selected parts of Europe, individual particles are shown during the simulation period. Second, immediately after the simulation, concentration/deposition fields can be shown for every three hours of the simulation period as isoline maps for each emission class. In addition, concentration and deposition maps, as well as some meteorological data, are stored on a publicly accessible disk for further processing by the model users.
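
    The linearity argument in the abstract (unit emissions per class, rescaled in post-processing) amounts to a weighted superposition of precomputed fields. A minimal sketch with invented 2x2 fields standing in for the per-class unit runs:

```python
import numpy as np

# Invented deposition fields per unit (1 g/s) emission in each of three
# emission classes, standing in for SNAP's routine unit-emission runs.
unit_fields = {
    "class_1": np.array([[1.0, 0.5], [0.2, 0.1]]),
    "class_2": np.array([[0.8, 0.6], [0.4, 0.2]]),
    "class_3": np.array([[2.0, 0.3], [0.1, 0.0]]),
}

def scenario_field(weights):
    """Weighted sum of unit-emission fields; valid because the model
    equations are linear in the emissions."""
    return sum(w * unit_fields[k] for k, w in weights.items())

# Scenario: 10 g/s of class 1, 5 g/s of class 2, none of class 3.
deposition = scenario_field({"class_1": 10.0, "class_2": 5.0, "class_3": 0.0})
print(deposition)
```

    Any emission scenario can thus be evaluated instantly after the unit runs, without rerunning the dispersion model.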

  19. Vesicular and non-vesicular glutamate release in the nucleus accumbens in conditions of a forced change of behavioral strategy.

    PubMed

    Saul'skaya, N B; Mikhailova, M O

    2005-09-01

    Studies on Sprague-Dawley rats used intracerebral dialysis and high-performance liquid chromatography to identify sources of glutamate release into the intercellular space of the nucleus accumbens during forced correction of food-related behavior, i.e., on presentation to the feeding rat of a conditioned signal previously combined with a pain stimulus or on replacement of a food reinforcement with an inedible food substitute. The results showed that glutamate release observed in the nucleus accumbens during these tests can be prevented by tetrodotoxin (1 microM), which blocks exocytosis, but not by (S)-4-carboxyphenylglycine (5 microM), which blocks non-vesicular glutamate release. Conversely, administration of (S)-4-carboxyphenylglycine halved baseline glutamate release, while administration of tetrodotoxin had no effect on this process. These data provide evidence that different mechanisms control glutamate release into the intercellular space of this nucleus in baseline conditions and in conditions of evoked correction of feeding behavior: the source of baseline glutamate release is non-vesicular glutamate release, while glutamate release seen during forced correction of feeding behavior results from increases in synaptic release.

  20. Noradrenaline increases the expression and release of Hsp72 by human neutrophils.

    PubMed

    Giraldo, E; Multhoff, G; Ortega, E

    2010-05-01

    The blood concentration of extracellular 72 kDa heat shock protein (eHsp72) increases under conditions of stress, including intense exercise. However, the signal(s), source(s), and secretory pathways in its release into the bloodstream have yet to be clarified. The aim of the present study was to evaluate the role of noradrenaline (NA) as a stress signal on the expression and release of Hsp72 by circulating neutrophils (as a source), all within a context of the immunophysiological regulation during exercise-induced stress in sedentary and healthy young (21-26 years) women. The expression of Hsp72 on the surface of isolated neutrophils was determined by flow cytometry, and its release by cultured isolated neutrophils was determined by ELISA. Incubation with cmHsp70-FITC showed that neutrophils express Hsp72 on their surface under basal conditions. In addition, cultured isolated neutrophils (37 degrees C and 5% CO2) also released Hsp72 under basal conditions, with this release increasing from 10 min to 24 h in the absence of cell damage. NA at 10^(-9)-10^(-5) M doubled the percentage of neutrophils expressing Hsp72 after 60 min and 24 h of incubation. NA also stimulated (by about 20%) the release of Hsp72 after 10 min of incubation. (1) Hsp72 is expressed on the surface of isolated neutrophils under basal conditions, and this expression is augmented by NA. (2) Isolated neutrophils can also release Hsp72 under cultured basal conditions in the absence of cell death, and NA can increase this release. These results may contribute to confirming the hypothesis that NA can act as a "stress signal" for the increased eHsp72 in the context of exercise stress, with a role for neutrophils as a source for the expression and, to a lesser degree, the release of Hsp72 after activation by NA. Copyright 2010 Elsevier Inc. All rights reserved.

  1. 15 CFR 740.7 - Computers (APP).

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... License Exception. (2) Access and release restrictions. (i)[Reserved] (ii) Technology and source code. Technology and source code eligible for License Exception APP may not be released to nationals of Cuba, Iran...

  2. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2004-01-01

The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated with numerous tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas heavily contaminated by the Chernobyl disaster is included.
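
    The linearity at the heart of this forward/backward equivalence can be sketched with a toy source-receptor matrix (all numbers below are hypothetical): a forward LPDM fills the matrix one column (source) per run, a backward LPDM one row (receptor) per run, and the receptor concentrations are identical either way.

    ```python
    import numpy as np

    # Hypothetical sensitivity matrix M (s/m^3): M[i, j] is the contribution of a
    # unit-rate release from source j to the concentration at receptor i. A forward
    # LPDM builds M column by column (one run per source); a backward run builds it
    # row by row (one run per receptor). With 2 receptors and 4 sources, backward
    # mode needs 2 model runs instead of 4.
    M = np.array([[1.2e-6, 0.4e-6, 0.0,    2.1e-6],
                  [0.3e-6, 0.9e-6, 1.5e-6, 0.2e-6]])
    q = np.array([5.0, 10.0, 2.0, 1.0])  # source strengths (kg/s), hypothetical

    # Receptor concentrations are the same linear combination either way.
    c_forward = M @ q                              # all columns (sources) at once
    c_backward = np.array([row @ q for row in M])  # one row (receptor) at a time

    print(np.allclose(c_forward, c_backward))  # True
    ```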

  3. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Prasad, A.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Budge, A. M.; hide

    2013-01-01

Pollen can be transported great distances. Van de Water et al. (2003) reported Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model) is a verified model for atmospheric dust transport modeling using MODIS data products to identify source regions and concentrations of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen emission is based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as model verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergies alerts.

  4. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A.; Nickovic, S.; Prasad, A.; Pejanovic, G. A.; Vukovic, A.; VandeWater, P. K.; Budge, A. M.; hide

    2012-01-01

Pollen can be transported great distances. Van de Water et al. (2003) reported Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model, Nickovic et al. 2001) is a verified model for atmospheric dust transport modeling using MODIS data products to identify source regions and concentrations of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen emission is based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as model verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergies alerts.

  5. Use of MODIS Satellite Data to Evaluate Juniperus spp. Pollen Phenology to Support a Pollen Dispersal Model, PREAM, to Support Public Health Allergy Alerts

    NASA Astrophysics Data System (ADS)

    Luvall, J. C.; Sprigg, W. A.; Levetin, E.; Huete, A. R.; Nickovic, S.; Prasad, A. K.; Pejanovic, G.; Vukovic, A.; Van De Water, P. K.; Budge, A.; Hudspeth, W. B.; Krapfl, H.; Toth, B.; Zelicoff, A.; Myers, O.; Bunderson, L.; Ponce-Campos, G.; Menache, M.; Crimmins, T. M.; Vujadinovic, M.

    2012-12-01

Pollen can be transported great distances. Van de Water et al. (2003) reported Juniperus spp. pollen was transported 200-600 km. Hence local observations of plant phenology may not be consistent with the timing and source of pollen collected by pollen sampling instruments. The DREAM (Dust REgional Atmospheric Model, Nickovic et al. 2001) is a verified model for atmospheric dust transport modeling using MODIS data products to identify source regions and concentrations of dust. We are modifying the DREAM model to incorporate pollen transport. Pollen emission is based on MODIS-derived phenology of Juniperus spp. communities. Ground-based observational records of pollen release timing and quantities will be used as model verification. This information will be used to support the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program and the State of New Mexico environmental public health decision support for asthma and allergies alerts.

  6. Influence of lag effect, soil release, and climate change on watershed anthropogenic nitrogen inputs and riverine export dynamics.

    PubMed

    Chen, Dingjiang; Huang, Hong; Hu, Minpeng; Dahlgren, Randy A

    2014-05-20

    This study demonstrates the importance of the nitrogen-leaching lag effect, soil nitrogen release, and climate change on anthropogenic N inputs (NANI) and riverine total nitrogen (TN) export dynamics using a 30-yr record for the Yongan River watershed in eastern China. Cross-correlation analysis indicated a 7-yr, 5-yr, and 4-yr lag time in riverine TN export in response to changes in NANI, temperature, and drained agricultural land area, respectively. Enhanced by warmer temperature and improved agricultural drainage, the upper 20 cm of agricultural soils released 270 kg N ha(-1) between 1980 and 2009. Climate change also increased the fractional export of NANI to river. An empirical model (R(2) = 0.96) for annual riverine TN flux incorporating these influencing factors estimated 35%, 41%, and 24% of riverine TN flux originated from the soil N pool, NANI, and background N sources, respectively. The model forecasted an increase of 45%, 25%, and 6% and a decrease of 13% in riverine TN flux from 2010 to 2030 under continued development, climate change, status-quo, and tackling scenarios, respectively. The lag effect, soil N release, and climate change delay riverine TN export reductions with respect to decreases in NANI and should be considered in developing and evaluating N management measures.
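
    The cross-correlation lag analysis reported above can be sketched on synthetic data (all series and parameter values below are invented; only the 7-yr lag is taken from the abstract): riverine export is modeled as a delayed, scaled copy of the N-input series plus noise, and the lag is recovered from the cross-correlation peak.

    ```python
    import numpy as np

    # Synthetic 30-yr records: an oscillating NANI series and a riverine TN
    # series built as a lagged, scaled copy of it (lag of 7 yr, as in the
    # abstract). Amplitudes, noise levels, and the 0.3 export fraction are
    # arbitrary illustration values.
    rng = np.random.default_rng(0)
    years = 30
    lag_true = 7
    t = np.arange(years)
    nani = 100 + 20 * np.sin(2 * np.pi * t / 11) + rng.normal(0, 0.5, years)
    tn = np.empty(years)
    tn[lag_true:] = 0.3 * nani[:-lag_true]     # export follows inputs 7 yr later
    tn[:lag_true] = 0.3 * nani[0]              # pad the start of the record
    tn += rng.normal(0, 0.1, years)

    def best_lag(x, y, max_lag=12):
        """Lag (in steps) at which corr(x[:-k], y[k:]) peaks."""
        corrs = [np.corrcoef(x[:-k], y[k:])[0, 1] for k in range(1, max_lag + 1)]
        return int(np.argmax(corrs)) + 1

    print(best_lag(nani, tn))  # recovers the built-in 7-yr lag
    ```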

  7. Regulatory Technology Development Plan - Sodium Fast Reactor. Mechanistic Source Term - Metal Fuel Radionuclide Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-02-01

The development of an accurate and defensible mechanistic source term will be vital for the future licensing efforts of metal fuel, pool-type sodium fast reactors. To assist in the creation of a comprehensive mechanistic source term, the current effort sought to estimate the release fraction of radionuclides from metal fuel pins to the primary sodium coolant during fuel pin failures at a variety of temperature conditions. These release estimates were based on the findings of an extensive literature search, which reviewed past experimentation and reactor fuel damage accidents. Data sources for each radionuclide of interest were reviewed to establish release fractions, along with possible release dependencies, and the corresponding uncertainty levels. Although the current knowledge base is substantial, and radionuclide release fractions were established for the elements deemed important for the determination of offsite consequences following a reactor accident, gaps were found pertaining to several radionuclides. First, there is uncertainty regarding the transport behavior of several radionuclides (iodine, barium, strontium, tellurium, and europium) during metal fuel irradiation to high burnup levels. The migration of these radionuclides within the fuel matrix and bond sodium region can greatly affect their release during pin failure incidents. Post-irradiation examination of existing high burnup metal fuel can likely resolve this knowledge gap. Second, data regarding the radionuclide release from molten high burnup metal fuel in sodium is sparse, which makes the assessment of radionuclide release from fuel melting accidents at high fuel burnup levels difficult. This gap could be addressed through fuel melting experimentation with samples from the existing high burnup metal fuel inventory.

  8. Methane emissions from oceans, coasts, and freshwater habitats: New perspectives and feedbacks on climate

    USGS Publications Warehouse

    Hamdan, Leila J.; Wickland, Kimberly P.

    2016-01-01

Methane is a powerful greenhouse gas, and atmospheric concentrations have risen 2.5 times since the beginning of the Industrial age. While much of this increase is attributed to anthropogenic sources, natural sources, which contribute between 35% and 50% of global methane emissions, are thought to have a role in the atmospheric methane increase, in part due to human influences. Methane emissions from many natural sources are sensitive to climate, and positive feedbacks from climate change and cultural eutrophication may promote increased emissions to the atmosphere. These natural sources include aquatic environments such as wetlands, freshwater lakes, streams and rivers, and estuarine, coastal, and marine systems. Furthermore, there are significant marine sediment stores of methane in the form of clathrates that are vulnerable to mobilization and release to the atmosphere from climate feedbacks, and subsurface thermogenic gas which in exceptional cases may be released following accidents and disasters (the North Sea blowout and the Deepwater Horizon spill, respectively). Understanding of natural sources, key processes, and controls on emission is continually evolving as new measurement and modeling capabilities develop, and different sources and processes are revealed. This special issue of Limnology and Oceanography gathers together diverse studies on methane production, consumption, and emissions from freshwater, estuarine, and marine systems, and provides a broad view of the current science on methane dynamics of aquatic ecosystems. Here, we provide a general overview of aquatic methane sources, their contribution to the global methane budget, and key uncertainties. We then briefly summarize the contributions to and highlights of this special issue.

  9. (135)Cs/(137)Cs isotopic ratio as a new tracer of radiocesium released from the Fukushima nuclear accident.

    PubMed

    Zheng, Jian; Tagami, Keiko; Bu, Wenting; Uchida, Shigeo; Watanabe, Yoshito; Kubota, Yoshihisa; Fuma, Shoichi; Ihara, Sadao

    2014-05-20

    Since the Fukushima Daiichi nuclear power plant (FDNPP) accident in 2011, intensive studies of the distribution of released fission products, in particular (134)Cs and (137)Cs, in the environment have been conducted. However, the release sources, that is, the damaged reactors or the spent fuel pools, have not been identified, which resulted in great variation in the estimated amounts of (137)Cs released. Here, we investigated heavily contaminated environmental samples (litter, lichen, and soil) collected from Fukushima forests for the long-lived (135)Cs (half-life of 2 × 10(6) years), which is usually difficult to measure using decay-counting techniques. Using a newly developed triple-quadrupole inductively coupled plasma tandem mass spectrometry method, we analyzed the (135)Cs/(137)Cs isotopic ratio of the FDNPP-released radiocesium in environmental samples. We demonstrated that radiocesium was mainly released from the Unit 2 reactor. Considering the fact that the widely used tracer for the released Fukushima accident-sourced radiocesium in the environment, the (134)Cs/(137)Cs activity ratio, will become unavailable in the near future because of the short half-life of (134)Cs (2.06 years), the (135)Cs/(137)Cs isotopic ratio can be considered as a new tracer for source identification and long-term estimation of the mobility of released radiocesium in the environment.

  10. Sources of dioxins in the United Kingdom: the steel industry and other sources.

    PubMed

    Anderson, David R; Fisher, Raymond

    2002-01-01

Several countries have compiled national inventories of dioxin (polychlorinated dibenzo-p-dioxin [PCDD] and polychlorinated dibenzofuran [PCDF]) releases that detail annual mass emission estimates for regulated sources. High temperature processes, such as commercial waste incineration and iron ore sintering used in the production of iron and steel, have been identified as point sources of dioxins. Other important releases of dioxins are from various diffuse sources such as bonfire burning and domestic heating. The PCDD/F inventory for emissions to air in the UK decreased significantly from 1995 to 1998 because of reduced emissions from waste incinerators, which now generally operate at waste gas stack emissions of 1 ng I-TEQ/Nm3 or below. The iron ore sintering process is the only noteworthy source of PCDD/Fs at integrated iron and steelworks operated by Corus (formerly British Steel plc) in the UK. The mean waste gas stack PCDD/F concentration for this process is 1.2 ng I-TEQ/Nm3 based on 94 measurements, and it has been estimated that this results in a mass release of approximately 38 g I-TEQ per annum. Diffuse sources now form a major contribution to the UK inventory as PCDD/Fs from regulated sources have decreased; for example, the annual celebration of Bonfire Night on 5th November in the UK causes an estimated release of 30 g I-TEQ, similar to that emitted by five sinter plants in the UK.
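
    The step from a measured stack concentration to an annual mass release is simple bookkeeping. The sketch below uses the 1.2 ng I-TEQ/Nm3 mean from the abstract together with hypothetical round numbers for waste-gas flow, operating hours, and strand count, chosen only to show that the arithmetic lands near the reported ~38 g per annum.

    ```python
    # Back-of-envelope check of the sinter-plant release figure. Only the stack
    # concentration comes from the abstract; flow, hours, and strand count are
    # assumed round numbers for illustration.
    CONC_NG_PER_NM3 = 1.2    # mean stack concentration, ng I-TEQ/Nm3 (from abstract)
    FLOW_NM3_PER_H = 1.0e6   # assumed waste-gas flow per sinter strand, Nm3/h
    HOURS_PER_YEAR = 8400    # assumed operating hours per year
    N_STRANDS = 4            # assumed number of sinter strands

    annual_g = CONC_NG_PER_NM3 * FLOW_NM3_PER_H * HOURS_PER_YEAR * N_STRANDS * 1e-9
    print(f"{annual_g:.1f} g I-TEQ per annum")  # ~40 g, same order as the ~38 g reported
    ```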

  11. PULSED ION SOURCE

    DOEpatents

    Martina, E.F.

    1958-10-14

An improved pulsed ion source of the type where the gas to be ionized is released within the source by momentary heating of an electrode occluded with the gas is presented. Other details of the ion source construction include an electron-emitting filament and a positive reference grid, between which an electron discharge is set up, and electrode means for withdrawing the ions from the source. Due to the location of the gas source behind the electron discharge region, and the positioning of the vacuum exhaust system on the opposite side of the discharge, the released gas is drawn into the electron discharge and ionized in accurately controlled amounts. Consequently, the output pulses of the ion source may be accurately controlled.

  12. The impact of short term synaptic depression and stochastic vesicle dynamics on neuronal variability

    PubMed Central

    Reich, Steven

    2014-01-01

Neuronal variability plays a central role in neural coding and impacts the dynamics of neuronal networks. Unreliability of synaptic transmission is a major source of neural variability: synaptic neurotransmitter vesicles are released probabilistically in response to presynaptic action potentials and are recovered stochastically in time. The dynamics of this process of vesicle release and recovery interacts with variability in the arrival times of presynaptic spikes to shape the variability of the postsynaptic response. We use continuous time Markov chain methods to analyze a model of short term synaptic depression with stochastic vesicle dynamics coupled with three different models of presynaptic spiking: one model in which the timing of presynaptic action potentials is modeled as a Poisson process, one in which action potentials occur more regularly than a Poisson process (sub-Poisson) and one in which action potentials occur more irregularly (super-Poisson). We use this analysis to investigate how variability in a presynaptic spike train is transformed by short term depression and stochastic vesicle dynamics to determine the variability of the postsynaptic response. We find that sub-Poisson presynaptic spiking increases the average rate at which vesicles are released, that the number of vesicles released over a time window is more variable for smaller time windows than larger time windows and that fast presynaptic spiking gives rise to Poisson-like variability of the postsynaptic response even when presynaptic spike times are non-Poisson. Our results complement and extend previously reported theoretical results and provide possible explanations for some trends observed in recorded data. PMID:23354693
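
    A minimal Monte Carlo sketch of this kind of depression model (docking sites that release probabilistically per spike and recover exponentially in time) is shown below; the site count, release probability, recovery time constant, and spike rate are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_release(spike_times, n_sites=5, p_r=0.5, tau_rec=0.2):
        """Return the number of vesicles released at each presynaptic spike."""
        occupied = np.ones(n_sites, dtype=bool)  # all sites start with a vesicle
        last_t = 0.0
        released = []
        for t in spike_times:
            # empty sites recover independently with prob 1 - exp(-dt/tau_rec)
            dt = t - last_t
            occupied |= rng.random(n_sites) < 1.0 - np.exp(-dt / tau_rec)
            # each occupied site releases independently with prob p_r
            release = occupied & (rng.random(n_sites) < p_r)
            released.append(int(release.sum()))
            occupied &= ~release
            last_t = t
        return np.array(released)

    # Poisson presynaptic train at 20 Hz (hypothetical rate), 1000 spikes
    isis = rng.exponential(1.0 / 20.0, size=1000)
    counts = simulate_release(np.cumsum(isis))
    print(counts.mean())  # mean vesicles per spike, depressed below n_sites * p_r
    ```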

  13. Dysregulation of Corticostriatal Ascorbate Release and Glutamate Uptake in Transgenic Models of Huntington's Disease

    PubMed Central

    2013-01-01

    Abstract Significance: Dysregulation of cortical and striatal neuronal processing plays a critical role in Huntington's disease (HD), a dominantly inherited condition that includes a progressive deterioration of cognitive and motor control. Growing evidence indicates that ascorbate (AA), an antioxidant vitamin, is released into striatal extracellular fluid when glutamate is cleared after its release from cortical afferents. Both AA release and glutamate uptake are impaired in the striatum of transgenic mouse models of HD owing to a downregulation of glutamate transporter 1 (GLT1), the protein primarily found on astrocytes and responsible for removing most extracellular glutamate. Improved understanding of an AA–glutamate interaction could lead to new therapeutic strategies for HD. Recent Advances: Increased expression of GLT1 following treatment with ceftriaxone, a beta-lactam antibiotic, increases striatal glutamate uptake and AA release and also improves the HD behavioral phenotype. In fact, treatment with AA alone restores striatal extracellular AA to wild-type levels in HD mice and not only improves behavior but also improves the firing pattern of neurons in HD striatum. Critical Issues: Although evidence is growing for an AA-glutamate interaction, several key issues require clarification: the site of action of AA on striatal neurons; the precise role of GLT1 in striatal AA release; and the mechanism by which HD interferes with this role. Future Directions: Further assessment of how the HD mutation alters corticostriatal signaling is an important next step. A critical focus is the role of astrocytes, which express GLT1 and may be the primary source of extracellular AA. Antioxid. Redox Signal. 19, 2115–2128. PMID:23642110

  14. Magma chamber cooling by episodic volatile expulsion as constrained by mineral vein distributions in the Butte, Montana Cu-Mo porphyry deposit

    NASA Astrophysics Data System (ADS)

    Daly, K.; Karlstrom, L.; Reed, M. H.

    2016-12-01

The role of hydrothermal systems in the thermal evolution of magma chambers is poorly constrained yet likely significant. We analyze trends in mineral composition, vein thickness and overall volumetric fluid flux of the Butte, Montana porphyry Cu-Mo deposit to constrain the role of episodic volatile discharge in the crystallization of the source magma chamber (~300 km3 of silicic magma). An aqueous fluid sourced from injection of porphyritic dikes formed the Butte porphyry Cu network of veins. At least three separate pulses of fluid through the system are defined by alteration envelopes of [1] gray sericite (GS); [2] early-dark micaceous (EDM), pale-green sericite (PGS), and dark-green sericite (DGS); and [3] quartz-molybdenite (Qmb) and barren-quartz. Previous research using geothermometers and geobarometers has found that vein mineral composition, inferred temperatures and inferred pressures vary systematically with depth. Later fluid pulses are characterized by lower temperatures, consistent with progressive cooling of the source. We have digitized previously unused structural data from Butte area drill cores, and applied thermomechanical modeling of fluid release from the source magma chamber through time. Vein number density and vein thickness increase with depth as a clear function of mineralogy and thus primary temperature and pressure. We identify structural trends in the three fluid pulses which seem to imply time evolution of average vein characteristics. Pulses of Qmb-barren quartz and EDM-PGS-DGS (1st and 2nd in time) exhibit increasing vein number density (157 & 95 veins/50m, respectively) and thickness (300mm & 120mm, respectively) as a function of depth. EDM-PGS-DGS has a shallower peak in vein density (800m) than Qmb-barren quartz (>1600m).
These data provide the basis for idealized mechanical models of hydrofractures, to predict driving pressures and to compare with existing source temperatures and total fluid volumes in order to estimate the total enthalpy of each fluid pulse. We then compare with models for conductive cooling and crystallization of the source magma chamber to estimate the importance of hydrothermal fluid expulsion in the total heat budget. Such models should also provide constraints on the timing and ultimately the origin of pulsed volatile release at Butte.

  15. Overview of major hazards. Part 2: Source term; dispersion; combustion; blast, missiles, venting; fire; radiation; runaway reactions; toxic substances; dust explosions

    NASA Astrophysics Data System (ADS)

    Vilain, J.

Approaches to major hazard assessment and prediction are reviewed. The topics covered are: source term (phenomenology/modeling of release, influence on early stages of dispersion); dispersion (atmospheric advection, diffusion and deposition, with emphasis on dense/cold gases); combustion (flammable clouds and mists, covering flash fires, deflagration, and transition to detonation, mostly in unconfined/partly confined situations); blast formation, propagation, and interaction with structures; catastrophic fires (pool fires, torches and fireballs; highly reactive substances); runaway reactions; features of more general interest; toxic substances, excluding toxicology; and dust explosions (phenomenology and protective measures).

  16. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturgeon, Richard W.

This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information.
Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey. Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. 
They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.

  17. 2017 Updates: Earth Gravitational Model 2020

    NASA Astrophysics Data System (ADS)

    Barnes, D. E.; Holmes, S. A.; Ingalls, S.; Beale, J.; Presicci, M. R.; Minter, C.

    2017-12-01

The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new 'Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics, globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas (Antarctica, Greenland …) will provide improved global coverage and resolution over the land, as well as for coastal and some ocean areas. Ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of this data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data. These include the latest methods employing a full covariance adjustment. NGA is also working to assess systematically the quality of its entire gravimetry database, towards correcting biases and other egregious errors. Public release number 15-564

  18. Risk of hydrocyanic acid release in the electroplating industry.

    PubMed

    Piccinini, N; Ruggiero, G N; Baldi, G; Robotto, A

    2000-01-07

This paper assesses the consequences of hydrocyanic acid (HCN) release into the air from aqueous cyanide solutions in abnormal situations, such as the accidental introduction of an acid or the insertion of a cyanide into a pickling bath. It provides a well-defined source model and its solution by methods peculiar to mass-transport phenomena. The procedure consists of four stages: calculation of the liquid-phase concentration, estimation of the HCN liquid-vapour equilibrium, determination of the mass-transfer coefficient at the liquid-vapour interface, and evaluation of the air concentration of HCN and of the damage distances. The results show that small baths operating at high temperatures are the major sources of risk. The build-up of lethal air concentrations, on the other hand, is governed by the value of the mass-transfer coefficient, which is itself determined by the flow dynamics and bath geometry. Concerning the magnitude of the risk, the fallout for external emergency planning is slight in all the cases investigated.
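
    The four-stage procedure can be sketched numerically as follows; every constant below (Henry ratio, transfer coefficient, bath area, ventilation rate) is an assumed placeholder rather than a value from the paper.

    ```python
    # 1) liquid-phase HCN concentration in the bath after the upset (assumed)
    c_liq = 2.0                    # mol/m^3

    # 2) liquid-vapour equilibrium at the interface via a dimensionless
    #    Henry ratio c_gas/c_liq (assumed value)
    H_cc = 0.01
    c_gas_interface = H_cc * c_liq  # mol/m^3 in the gas just above the bath

    # 3) gas-side mass-transfer coefficient and bath surface (assumed; in the
    #    paper these follow from the flow dynamics and bath geometry)
    k_g = 5.0e-3                   # m/s
    area = 2.0                     # m^2, a small bath
    emission = k_g * area * c_gas_interface  # mol/s leaving the bath

    # 4) steady-state air concentration for a ventilated workroom (assumed airflow)
    Q_vent = 0.5                   # m^3/s
    M_HCN = 27.03                  # g/mol
    c_air_mg_m3 = emission / Q_vent * M_HCN * 1e3
    print(f"{c_air_mg_m3:.2f} mg/m^3")  # with these placeholder numbers, ~10.8 mg/m^3
    ```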

  19. The 2016 Al-Mishraq sulphur plant fire: Source and health risk area estimation

    NASA Astrophysics Data System (ADS)

    Björnham, Oscar; Grahn, Håkan; von Schoenberg, Pontus; Liljedahl, Birgitta; Waleij, Annica; Brännström, Niklas

    2017-11-01

On October 20, 2016, Daesh (Islamic State) set fire to the Al-Mishraq sulphur production site as the battle of Mosul in northern Iraq intensified. An extensive plume of toxic sulphur dioxide and hydrogen sulphide caused widespread casualties. The intensity of the SO2 release reached levels comparable to minor volcanic eruptions, and the plume was observed by several satellites. By analyzing measurement data from instruments on the MetOp-A, MetOp-B, Aura and Suomi satellites, we estimated the time-dependent source term at 161 kilotonnes of sulphur dioxide released into the atmosphere over seven days. A long-range dispersion model was used to simulate the atmospheric transport over the Middle East. The ground-level concentrations predicted by the simulation were compared with observations from the Turkey National Air Quality Monitoring Network. Finally, a probit analysis of the simulated data provided an estimate of the health risk area, which was compared with reported urgent medical treatments.

  20. Nickel and cobalt release from children's toys purchased in Denmark and the United States.

    PubMed

    Jensen, Peter; Hamann, Dathan; Hamann, Carsten R; Jellesen, Morten S; Jacob, Sharon E; Thyssen, Jacob P

    2014-01-01

    Nickel is the most common allergen detected by patch testing in children. There is an increasing number of cases in children who have not had exposure to piercing. Although the clinical relevance of nickel patch test reactions in children is sometimes uncertain, continued vigilance to identify new sources of nickel exposure in this age group is important. Recent case reports have described allergic nickel contact dermatitis in children following exposure to toys, but the magnitude of this problem is unknown. The aim of this study was to evaluate nickel and cobalt release from children's toys. We purchased 212 toys in 18 different retail and online stores in the United States and Denmark. Nickel and cobalt release was tested using the dimethylglyoxime and cobalt screening spot tests. A total of 73 toys (34.4%) released nickel, and none released cobalt. Toys are a commonly overlooked source of nickel exposure and sensitization. Therefore, dermatologists, allergists, and pediatricians should consider the role of toys in their evaluation of children with dermatitis, and the parents of children with positive nickel patch test reactions should be told that toys may release nickel and be a potential chemical source in the manifestation of allergic contact dermatitis.

  1. Generation and Evolution of Internal Waves in Luzon Strait

    DTIC Science & Technology

    2015-09-30

    1 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Generation and Evolution of Internal Waves in Luzon...inertial waves, nonlinear internal waves (NLIWs), and turbulence mixing in the ocean and thereby help develop improved parameterizations of mixing for...ocean models. Mixing within the stratified ocean is a particular focus as the complex interplay of internal waves from a variety of sources and

  2. Biochemical transport modeling, estimation, and detection in realistic environments

    NASA Astrophysics Data System (ADS)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and for estimating the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity, and location). We compute a bound on the expected delay before false detection in order to set the test threshold. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
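
    The sequential detection step, monitoring readings on-line and declaring a release once a test statistic crosses a threshold chosen to control the expected delay before false detection, can be illustrated with a standard one-sided CUSUM detector. This is a generic sketch under assumed Gaussian background statistics, not the detector developed in the paper; all parameter values are illustrative.

```python
import numpy as np

# One-sided CUSUM detector for a mean shift in a sensor reading.
# mu0/sigma describe the pre-release background, delta is the
# smallest shift (in sigma units) worth detecting, and threshold h
# trades the false-alarm rate against detection delay.
def cusum_detect(x, mu0=0.0, sigma=1.0, delta=2.0, h=8.0):
    s, k = 0.0, delta / 2.0
    for t, xi in enumerate(x):
        z = (xi - mu0) / sigma
        s = max(0.0, s + z - k)   # reset at zero, accumulate evidence
        if s > h:
            return t              # index of the first alarm
    return None                   # no alarm raised

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 200)   # clean-air readings
release = rng.normal(2.0, 1.0, 100)      # mean shift begins at t = 200
alarm = cusum_detect(np.concatenate([background, release]))
```

Raising h lengthens the expected time to a false alarm at the cost of a longer detection delay, which is the trade-off the abstract's bound formalizes.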

  3. Helium abundance and speed difference between helium ions and protons in the solar wind from coronal holes, active regions, and quiet Sun

    NASA Astrophysics Data System (ADS)

    Fu, Hui; Madjarska, M. S.; Li, Bo; Xia, LiDong; Huang, ZhengHua

    2018-05-01

    Two main models have been developed to explain the mechanisms of release, heating and acceleration of the nascent solar wind, the wave-turbulence-driven (WTD) models and reconnection-loop-opening (RLO) models, in which the plasma release processes are fundamentally different. Given that the statistical observational properties of helium ions produced in magnetically diverse solar regions could provide valuable information for the solar wind modelling, we examine the statistical properties of the helium abundance (AHe) and the speed difference between helium ions and protons (vαp) for coronal holes (CHs), active regions (ARs) and the quiet Sun (QS). We find bimodal distributions in the space of AHe and vαp/vA (where vA is the local Alfvén speed) for the solar wind as a whole. The CH wind measurements are concentrated at higher AHe and vαp/vA values with a smaller AHe distribution range, while the AR and QS wind is associated with lower AHe and vαp/vA, and a larger AHe distribution range. The magnetic diversity of the source regions and the physical processes related to it are possibly responsible for the different properties of AHe and vαp/vA. The statistical results suggest that the two solar wind generation mechanisms, WTD and RLO, work in parallel in all solar wind source regions. In CH regions WTD plays a major role, whereas the RLO mechanism is more important in AR and QS.

  4. Source process of the 2016 Kumamoto earthquake (Mj7.3) inferred from kinematic inversion of strong-motion records

    NASA Astrophysics Data System (ADS)

    Yoshida, Kunikazu; Miyakoshi, Ken; Somei, Kazuhiro; Irikura, Kojiro

    2017-05-01

    In this study, we estimated the source process of the 2016 Kumamoto earthquake from strong-motion data by using the multiple-time-window linear kinematic waveform inversion method, in order to discuss the generation of strong motions and to explain the crustal deformation pattern with a seismic source inversion model. A four-segment fault model was assumed based on the aftershock distribution, active fault traces, and interferometric synthetic aperture radar data. The three western segments were set to be northwest-dipping planes, and the easternmost segment, under the Aso caldera, was examined as a southeast-dipping plane. The velocity structure models used in this study were estimated by waveform modeling of moderate earthquakes that occurred in the source region. We applied a two-step inversion to 20 strong-motion datasets observed by K-NET and KiK-net, using band-pass-filtered strong-motion data at 0.05-0.5 Hz and then at 0.05-1.0 Hz. The rupture area of the fault plane was determined by applying the criterion of Somerville et al. (Seismol Res Lett 70:59-80, 1999) to the inverted slip distribution. From the first-step inversion, the fault length was trimmed from 52 to 44 km, whereas the fault width was kept at 18 km. The trimmed rupture area was not changed in the second-step inversion. The source model obtained from the two-step approach indicated a total moment release of 4.7 × 10^19 Nm and an average slip of 1.8 m over the entire fault, with a rupture area of 792 km^2. Large slip areas were estimated in the seismogenic zone and in the shallow part corresponding to the surface rupture that occurred during the Mj7.3 mainshock. The areas of high peak moment rate correlated roughly with those of large slip; however, the moment rate functions near the Earth's surface are low-peaked, bell-shaped, and long in duration. These subfaults with long-duration moment release are expected to cause weak short-period ground motions. We confirmed that a southeast dip of the easternmost segment is more plausible than a northwest dip, based on the observed subsidence around the central cones of the Aso volcano.

  5. Episodic Tremor and Slip (ETS) as a chaotic multiphysics spring

    NASA Astrophysics Data System (ADS)

    Veveakis, E.; Alevizos, S.; Poulet, T.

    2017-03-01

    Episodic Tremor and Slip (ETS) events display a rich behaviour of slow and accelerated slip, with time series ranging from simple oscillatory to complicated and chaotic. It is commonly believed that the fast events appearing as nonvolcanic tremor are signatures of deep fluid injection. The fluid source is suggested to be related to the breakdown of hydrous phyllosilicates, mainly the serpentinite-group minerals such as antigorite or lizardite that are widespread at the top of the slab in subduction environments. Similar ETS sequences are recorded in different lithologies in exhumed crustal carbonate-rich thrusts, where the fluid source is suggested to be the more vigorous carbonate decomposition reaction. If indeed both types of events can be understood and modelled by the same generic fluid-release reaction AB(solid) ⇌ A(solid) + B(fluid), the data from ETS sequences in subduction zones reveal a geophysically tractable temporal evolution without requiring access to the fault zone. This work reviews recent advances in modelling ETS events by considering the multiphysics instabilities triggered by the fluid-release reaction, and develops a thermal-hydraulic-mechanical-chemical oscillator (THMC spring) model for such mineral reactions (like dehydration and decomposition) in megathrusts. We describe advanced computational methods for THMC instabilities and discuss spectral element and finite element solutions. We apply the presented numerical methods to field examples of this important mechanism and reproduce the temporal signatures of the Cascadia and Hikurangi trenches with a serpentinite oscillator.

  6. Glutamate-mediated excitotoxicity in neonatal hippocampal neurons is mediated by mGluR-induced release of Ca++ from intracellular stores and is prevented by estradiol

    PubMed Central

    Hilton, Genell D.; Nunez, Joseph L.; Bambrick, Linda; Thompson, Scott M.; McCarthy, Margaret M.

    2008-01-01

    Hypoxic/ischemic (HI) brain injury in newborn full-term and premature infants is a common and pervasive source of lifetime disabilities in cognitive and locomotor function. In the adult, HI induces glutamate release and excitotoxic cell death dependent on NMDA receptor activation. In animal models of the premature human infant, glutamate is also released following HI, but neurons are largely insensitive to NMDA or AMPA/kainic acid (KA) receptor-mediated damage. Using primary cultured hippocampal neurons, we have determined that glutamate increases intracellular calcium much more than kainic acid does. Moreover, glutamate induces cell death by activating Type I metabotropic glutamate receptors (mGluRs). Pretreatment of neurons with the gonadal steroid estradiol reduces the level of Type I metabotropic glutamate receptors and completely prevents cell death, suggesting a novel therapeutic approach to excitotoxic brain damage in the neonate. PMID:17156362

  7. Axial deformed solution of the Skyrme-Hartree-Fock-Bogolyubov equations using the transformed harmonic oscillator Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, R. Navarro; Schunck, N.; Lasseri, R.

    2017-03-09

    HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of nuclear energy Density Functional Theory (DFT), where the energy of the nucleus is obtained by integration over space of a phenomenological energy density, which is itself a functional of the neutron and proton densities. In HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated in the Hartree-Fock-Bogoliubov (HFB) approximation, and axial symmetry of the nuclear shape is assumed. This version is the third release of the program; the two previous versions were published in Computer Physics Communications [1,2]. The previous version was released at LLNL under the GPL 3 Open Source License and was given release code LLNL-CODE-573953.

  8. International challenge to predict the impact of radioxenon releases from medical isotope production on a comprehensive nuclear test ban treaty sampling station.

    PubMed

    Eslinger, Paul W; Bowyer, Ted W; Achim, Pascal; Chai, Tianfeng; Deconninck, Benoit; Freeman, Katie; Generoso, Sylvia; Hayes, Philip; Heidmann, Verena; Hoffman, Ian; Kijima, Yuichi; Krysta, Monika; Malo, Alain; Maurer, Christian; Ngan, Fantine; Robins, Peter; Ross, J Ole; Saunier, Olivier; Schlosser, Clemens; Schöppner, Michael; Schrom, Brian T; Seibert, Petra; Stein, Ariel F; Ungar, Kurt; Yi, Jing

    2016-06-01

    The International Monitoring System (IMS) is part of the verification regime for the Comprehensive Nuclear-Test-Ban-Treaty Organization (CTBTO). At entry-into-force, half of the 80 radionuclide stations will be able to measure concentrations of several radioactive xenon isotopes produced in nuclear explosions, and the full network may be equipped for xenon monitoring afterward. An understanding of natural and man-made radionuclide backgrounds can be used in accordance with the provisions of the treaty (such as the event screening criteria in Annex 2 to the Protocol of the Treaty) for the effective implementation of the verification regime. Fission-based production of (99)Mo for medical purposes also generates nuisance radioxenon isotopes that are usually vented to the atmosphere. One way to account for the effect that emissions from medical isotope production have on radionuclide samples from the IMS is to use stack monitoring data, when available, together with atmospheric transport modeling. Recently, individuals from seven nations participated in a challenge exercise that used atmospheric transport modeling to predict the time history of (133)Xe concentration measurements at the IMS radionuclide station in Germany using stack monitoring data from a medical isotope production facility in Belgium. Participants received only stack monitoring data and used the atmospheric transport model and meteorological data of their choice. Some of the models predicted the highest measured concentrations quite well. A model comparison rank and ensemble analysis suggests that combining multiple models may provide more accurate predicted concentrations than any single model. None of the submissions based only on the stack monitoring data predicted the small measured concentrations very well. Modeling of sources at other nuclear facilities with smaller releases than medical isotope production facilities may be important in understanding how to discriminate those releases from releases from a nuclear explosion. Published by Elsevier Ltd.
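
    The ensemble finding, that combining multiple models can be more accurate than any single one, is easy to illustrate on synthetic data: averaging several unbiased but noisy predictions reduces the error roughly as one over the square root of the number of models. The numbers below are synthetic stand-ins, not challenge data.

```python
import numpy as np

# Toy illustration of multi-model ensemble averaging: several
# imperfect "transport model" predictions of the same concentration
# time series are averaged, and the ensemble mean is compared with
# the individual models by RMSE. Synthetic data only.
rng = np.random.default_rng(1)
truth = np.exp(-0.5 * ((np.arange(48) - 24) / 6.0) ** 2)  # stand-in measured series

models = [truth + rng.normal(0.0, 0.2, truth.size) for _ in range(7)]
ensemble = np.mean(models, axis=0)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

individual = [rmse(m) for m in models]
combined = rmse(ensemble)
```

With independent, unbiased errors the ensemble RMSE falls below every individual model's; in practice model errors are correlated, which is why the abstract only suggests, rather than guarantees, an improvement.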

  9. VizieR Online Data Catalog: KiDS-ESO-DR2 multi-band source catalog (de Jong+, 2015)

    NASA Astrophysics Data System (ADS)

    de Jong, J. T. A.; Verdoes Kleijn, G. A.; Boxhoorn, D. R.; Buddelmeijer, H.; Capaccioli, M.; Getman, F.; Grado, A.; Helmich, E.; Huang, Z.; Irisarri, N.; Kuijken, K.; La Barbera, F.; McFarland, J. P.; Napolitano, N. R.; Radovich, M.; Sikkema, G.; Valentijn, E. A.; Begeman, K. G.; Brescia, M.; Cavuoti, S.; Choi, A.; Cordes, O.-M.; Covone, G.; Dall'Ora, M.; Hildebrandt, H.; Longo, G.; Nakajima, R.; Paolillo, M.; Puddu, E.; Rifatto, A.; Tortora, C.; van Uitert, E.; Buddendiek, A.; Harnois-Deraps, J.; Erben, T.; Eriksen, M. B.; Heymans, C.; Hoekstra, H.; Joachimi, B.; Kitching, T. D.; Klaes, D.; Koopmans, L. V. E.; Koehlinger, F.; Roy, N.; Sifon, C.; Schneider, P.; Sutherland, W. J.; Viola, M.; Vriend, W.-J.

    2016-10-01

    KiDS data releases consist of ~1 square degree tiles that have been successfully observed in all four survey filters (u,g,r,i). The second data release (KiDS-ESO-DR2) was made available in February 2015 and contains imaging data, masks and single-band source lists for all tiles observed in all four filters for which observations were completed during the second year of regular operations (1 October 2012 to 30 September 2013), a total of 98 tiles. Apart from the data products mentioned above, KiDS-ESO-DR2 also provides a multi-band source catalogue based on the combined set of 148 tiles released in the first two data releases. A complete list of all tiles with data quality parameters can be found on the KiDS website: http://kids.strw.leidenuniv.nl/DR2/ (1 data file).

  10. CosmoQuest Transient Tracker: Opensource Photometry & Astrometry software

    NASA Astrophysics Data System (ADS)

    Myers, Joseph L.; Lehan, Cory; Gay, Pamela; Richardson, Matthew; CosmoQuest Team

    2018-01-01

    CosmoQuest is moving from online citizen science to observational astronomy with the creation of Transient Tracker. This open-source software is designed to identify asteroids and other transient/variable objects in image sets. Transient Tracker's features in final form will include astrometric and photometric solutions, identification of moving/transient objects, identification of variable objects, and lightcurve analysis. In this poster we present our initial v0.1 release and seek community input. This software builds on the existing NIH-funded ImageJ libraries. Creation of this suite of open-source image manipulation routines is led by Wayne Rasband, and it is released primarily under the MIT license. In this release, we build on these libraries to add source identification for point / point-like sources and to do astrometry. Our materials are released under the Apache 2.0 license on GitHub (http://github.com/CosmoQuestTeam) and documentation can be found at http://cosmoquest.org/TransientTracker.

  11. Fluctuations of global energy release and crackling in nominally brittle heterogeneous fracture.

    PubMed

    Barés, J; Hattali, M L; Dalmas, D; Bonamy, D

    2014-12-31

    The temporal evolution of mechanical energy and spatially averaged crack speed are both monitored in slowly fracturing artificial rocks. Both signals display an irregular burst-like dynamics, with power-law distributed fluctuations spanning a broad range of scales. Yet the elastic power released at each time step is proportional to the global velocity throughout the process, which enables the definition of a material-constant fracture energy. We characterize the intermittent dynamics by computing the burst statistics. The latter displays the scale-free features characteristic of crackling dynamics, in qualitative but not quantitative agreement with the depinning interface models derived for fracture problems. The possible sources of the discrepancies are pointed out and discussed.
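
    The burst-statistics computation can be sketched on synthetic data: threshold an intermittent signal, treat each contiguous supra-threshold excursion as one burst, take its integrated size, and fit a power-law exponent by maximum likelihood. The heavy-tailed signal below is synthetic; it stands in for the measured released-power series and is not the experimental data.

```python
import numpy as np

# Burst statistics of an intermittent signal: a burst is a
# contiguous excursion above threshold, its size the integrated
# excess. A Hill-type MLE then estimates the power-law exponent
# of the burst-size distribution. Illustrative only.
rng = np.random.default_rng(2)
signal = rng.pareto(1.5, 50_000)  # heavy-tailed activity proxy

def burst_sizes(x, threshold):
    sizes, current = [], 0.0
    for v in x:
        if v > threshold:
            current += v - threshold
        elif current > 0.0:
            sizes.append(current)
            current = 0.0
    if current > 0.0:
        sizes.append(current)
    return np.array(sizes)

sizes = burst_sizes(signal, threshold=2.0)

def powerlaw_exponent(s, s_min):
    # MLE for p(s) ~ s^(-alpha) above a lower cutoff s_min
    s = s[s >= s_min]
    return 1.0 + s.size / np.sum(np.log(s / s_min))

alpha = powerlaw_exponent(sizes, s_min=5.0)
```

Comparing the fitted exponent (and its dependence on the cutoff) against depinning predictions is the kind of quantitative test the abstract refers to.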

  12. Materials for storage and release of hydrogen and methods for preparing and using same

    DOEpatents

    Autrey, Thomas S [West Richland, WA; Gutowska, Anna [Richland, WA; Shin, Yongsoon [Richland, WA; Li, Liyu [Richland, WA

    2008-01-08

    The invention relates to materials for storing and releasing hydrogen and methods for preparing and using same. The materials exhibit fast release rates at low release temperatures and are suitable as fuel and/or hydrogen sources for a variety of applications such as automobile engines.

  13. Expanding the capability of reaction-diffusion codes using pseudo traps and temperature partitioning: Applied to hydrogen uptake and release from tungsten

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmonds, M. J.; Yu, J. H.; Wang, Y. Q.

    Simulating the implantation and thermal desorption evolution in a reaction-diffusion model requires solving a set of coupled differential equations that describe the trapping and release of atomic species in Plasma Facing Materials (PFMs). These fundamental equations are well outlined by the Tritium Migration Analysis Program (TMAP), which can model systems with no more than three active traps per atomic species. To overcome this limitation, we have developed a Pseudo Trap and Temperature Partition (PTTP) scheme that allows us to lump multiple inactive traps into one pseudo trap, simplifying the system of equations to be solved. For all temperatures, we show the trapping of atoms from solute is exactly accounted for when using a pseudo trap. However, a single effective pseudo-trap energy cannot accurately replicate the release from multiple traps, each with its own detrapping energy. Atoms held in a high-energy trap will remain trapped at relatively low temperatures, and thus there is a temperature range in which release from high-energy traps is effectively inactive. By partitioning the temperature range into segments, a pseudo trap can be defined for each segment to account for multiple high-energy traps that are actively trapping but are effectively not releasing atoms. With increasing temperature, as in controlled thermal desorption, the lowest-energy trap is nearly emptied and can be removed from the set of coupled equations, while the next-higher-energy trap becomes an actively releasing trap. Each segment is thus calculated sequentially, with the last time step of a given segment's solution used as the initial input for the next segment, as only the pseudo and actively releasing traps are modeled. This PTTP scheme is then applied to experimental thermal desorption data for tungsten (W) samples damaged with heavy ions, which display six distinct release peaks during thermal desorption. Without modifying the TMAP7 source code, the PTTP scheme is shown to successfully model the D retention in all six traps. In conclusion, we demonstrate the full reconstruction from the plasma implantation phase through the controlled thermal desorption phase with detrapping energies near 0.9, 1.1, 1.4, 1.7, 1.9 and 2.1 eV for a W sample damaged at room temperature.
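
    The kind of equation being solved can be illustrated with a 0-D, single-trap analogue of the release step: a trapped inventory empties during a linear temperature ramp through first-order detrapping. This is a sketch of the generic physics only, not the TMAP/PTTP implementation; the detrapping energy, attempt frequency and ramp rate are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-trap thermal desorption sketch: dn/dt = -nu*exp(-E/kT)*n
# during a linear temperature ramp T(t) = T0 + beta*t.
K_B = 8.617e-5   # Boltzmann constant, eV/K
NU = 1e13        # attempt frequency, 1/s (illustrative)
E_T = 1.1        # detrapping energy, eV (one of the six peaks)
BETA = 0.5       # heating rate, K/s (illustrative)
T0 = 300.0       # starting temperature, K

def dndt(t, n):
    T = T0 + BETA * t
    return -NU * np.exp(-E_T / (K_B * T)) * n

# Normalized trapped inventory n(0) = 1, ramp up to 1500 K
sol = solve_ivp(dndt, (0.0, 2400.0), [1.0], max_step=1.0)
release_rate = -dndt(sol.t, sol.y[0])
T_peak = T0 + BETA * sol.t[np.argmax(release_rate)]
```

Each detrapping energy produces a release peak at a characteristic temperature, which is why six energies reproduce six desorption peaks; the full model additionally couples such traps to the diffusing solute population.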

  14. Expanding the capability of reaction-diffusion codes using pseudo traps and temperature partitioning: Applied to hydrogen uptake and release from tungsten

    DOE PAGES

    Simmonds, M. J.; Yu, J. H.; Wang, Y. Q.; ...

    2018-06-04

    Simulating the implantation and thermal desorption evolution in a reaction-diffusion model requires solving a set of coupled differential equations that describe the trapping and release of atomic species in Plasma Facing Materials (PFMs). These fundamental equations are well outlined by the Tritium Migration Analysis Program (TMAP), which can model systems with no more than three active traps per atomic species. To overcome this limitation, we have developed a Pseudo Trap and Temperature Partition (PTTP) scheme that allows us to lump multiple inactive traps into one pseudo trap, simplifying the system of equations to be solved. For all temperatures, we show the trapping of atoms from solute is exactly accounted for when using a pseudo trap. However, a single effective pseudo-trap energy cannot accurately replicate the release from multiple traps, each with its own detrapping energy. Atoms held in a high-energy trap will remain trapped at relatively low temperatures, and thus there is a temperature range in which release from high-energy traps is effectively inactive. By partitioning the temperature range into segments, a pseudo trap can be defined for each segment to account for multiple high-energy traps that are actively trapping but are effectively not releasing atoms. With increasing temperature, as in controlled thermal desorption, the lowest-energy trap is nearly emptied and can be removed from the set of coupled equations, while the next-higher-energy trap becomes an actively releasing trap. Each segment is thus calculated sequentially, with the last time step of a given segment's solution used as the initial input for the next segment, as only the pseudo and actively releasing traps are modeled. This PTTP scheme is then applied to experimental thermal desorption data for tungsten (W) samples damaged with heavy ions, which display six distinct release peaks during thermal desorption. Without modifying the TMAP7 source code, the PTTP scheme is shown to successfully model the D retention in all six traps. In conclusion, we demonstrate the full reconstruction from the plasma implantation phase through the controlled thermal desorption phase with detrapping energies near 0.9, 1.1, 1.4, 1.7, 1.9 and 2.1 eV for a W sample damaged at room temperature.

  15. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    NASA Astrophysics Data System (ADS)

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels such as petroleum, coal, and natural gas, along with other non-renewable energy sources, have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts; these are hazardous, tend to deplete the protective atmospheric layers, and affect the overall environmental balance. Fossil fuels are also finite resources, and the rapid depletion of these sources of energy has prompted the need to investigate alternate sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter and evaluating the performance of the architecture in terms of efficiency, comparing it with the traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
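
    The parallel architecture's defining property, that paralleled cells share a single voltage while their currents add, can be sketched with a textbook single-diode cell model in which partial shading simply scales the photocurrent of the affected cell. The parameter values are generic illustrations, not those of the thesis model.

```python
import numpy as np

# Single-diode I-V model: i = i_ph - I0*(exp(q*v/(n*k*T)) - 1).
# Cells in parallel share the voltage v; total current is the sum.
Q_KT = 1.0 / 0.02585     # q/kT at ~300 K, 1/V
I0 = 1e-9                # diode saturation current, A (illustrative)
N_IDEAL = 1.0            # diode ideality factor

def cell_current(v, i_ph):
    return i_ph - I0 * (np.exp(Q_KT * v / N_IDEAL) - 1.0)

v = np.linspace(0.0, 0.6, 601)
irradiance = [1.0, 1.0, 0.3, 1.0]                    # third cell 70% shaded
i_total = sum(cell_current(v, 5.0 * g) for g in irradiance)
p = v * i_total
v_mpp = v[np.argmax(p)]                              # maximum power point voltage
```

Because shading one paralleled cell mainly reduces the summed current rather than forcing the string into reverse bias, the maximum power point voltage stays near a single cell's, which is the low-voltage behavior the boost converter in the thesis is meant to address.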

  16. International challenge to model the long-range transport of radioxenon released from medical isotope production to six Comprehensive Nuclear-Test-Ban Treaty monitoring stations

    DOE PAGES

    Maurer, Christian; Baré, Jonathan; Kusmierczyk-Michulec, Jolanta; ...

    2018-03-08

    After performing a first multi-model exercise in 2015, a comprehensive and technically more demanding atmospheric transport modelling challenge was organized in 2016. Release data were provided by the Australian Nuclear Science and Technology Organisation radiopharmaceutical facility in Sydney (Australia) for a one-month period. Measured samples for the same time frame were gathered from six International Monitoring System stations in the Southern Hemisphere, with distances to the source ranging between 680 km (Melbourne) and about 17,000 km (Tristan da Cunha). Participants were prompted to work with unit emissions in pre-defined emission intervals (daily, half-daily, 3-hourly and hourly emission segment lengths), and, to make the test blind, actual emission values were not provided to them. Despite the quite different settings of the two atmospheric transport modelling challenges, there is common evidence that for long-range atmospheric transport, using temporally highly resolved emissions and highly space-resolved meteorological input fields has no significant advantage over using lower-resolved ones. Likewise, an uncertainty of up to 20% in the daily stack emission data turns out to be acceptable for the purpose of a study like this. Model performance at individual stations is quite diverse, depending largely on successfully capturing boundary layer processes. No single model-meteorology combination performs best for all stations. Moreover, the station statistics do not depend on the distance between the source and the individual stations. Finally, it became more evident how future exercises need to be designed. Set-up parameters like the meteorological driver or the output grid resolution should be prescribed in order to enhance diversity as well as comparability among model runs.

  17. International challenge to model the long-range transport of radioxenon released from medical isotope production to six Comprehensive Nuclear-Test-Ban Treaty monitoring stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maurer, Christian; Baré, Jonathan; Kusmierczyk-Michulec, Jolanta

    After performing a first multi-model exercise in 2015, a comprehensive and technically more demanding atmospheric transport modelling challenge was organized in 2016. Release data were provided by the Australian Nuclear Science and Technology Organisation radiopharmaceutical facility in Sydney (Australia) for a one-month period. Measured samples for the same time frame were gathered from six International Monitoring System stations in the Southern Hemisphere, with distances to the source ranging between 680 km (Melbourne) and about 17,000 km (Tristan da Cunha). Participants were prompted to work with unit emissions in pre-defined emission intervals (daily, half-daily, 3-hourly and hourly emission segment lengths), and, to make the test blind, actual emission values were not provided to them. Despite the quite different settings of the two atmospheric transport modelling challenges, there is common evidence that for long-range atmospheric transport, using temporally highly resolved emissions and highly space-resolved meteorological input fields has no significant advantage over using lower-resolved ones. Likewise, an uncertainty of up to 20% in the daily stack emission data turns out to be acceptable for the purpose of a study like this. Model performance at individual stations is quite diverse, depending largely on successfully capturing boundary layer processes. No single model-meteorology combination performs best for all stations. Moreover, the station statistics do not depend on the distance between the source and the individual stations. Finally, it became more evident how future exercises need to be designed. Set-up parameters like the meteorological driver or the output grid resolution should be prescribed in order to enhance diversity as well as comparability among model runs.

  18. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that the flow direction in the aquifer is known exactly and that the velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpreting well breakthrough data via the advection-dispersion equation (ADE). We employ high-performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates over the multiple realizations, we relate the size of the 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantity can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
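
    The space-time localization step with multiple initial guesses can be sketched on a toy 1-D problem: fit the source position and release time of an advection-dispersion plume to noisy breakthrough curves at a few wells, restarting the optimizer from several starting points and keeping the lowest-cost fit. Everything below is synthetic and much simpler than the paper's heterogeneous-aquifer realizations.

```python
import numpy as np
from scipy.optimize import least_squares

# 1-D instantaneous-point-source solution of the ADE; velocity V,
# dispersion D and mass M are treated as known for the sketch.
V, D, M = 1.0, 0.5, 10.0

def conc(x, t, x0, t0):
    tau = np.maximum(t - t0, 1e-9)   # time since release
    return M / np.sqrt(4 * np.pi * D * tau) * np.exp(-(x - x0 - V * tau) ** 2 / (4 * D * tau))

wells = np.array([20.0, 30.0, 40.0])
times = np.linspace(5.0, 60.0, 40)
X, T = np.meshgrid(wells, times)
rng = np.random.default_rng(3)
obs = conc(X, T, x0=2.0, t0=1.0) + rng.normal(0.0, 0.01, X.shape)  # synthetic breakthrough data

def residuals(p):
    return (conc(X, T, *p) - obs).ravel()

# Restart from several initial guesses, keep the lowest-cost fit
best = min(
    (least_squares(residuals, g, bounds=([-50.0, 0.0], [50.0, 20.0]))
     for g in [(0.0, 0.5), (-20.0, 5.0), (20.0, 10.0)]),
    key=lambda r: r.cost,
)
x0_hat, t0_hat = best.x
```

Repeating this fit over many random aquifer realizations and tabulating the errors in (x0_hat, t0_hat) is how the empirical confidence envelopes in the abstract are built.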

  19. EVALUATING TERRESTRIAL FOOD CHAIN IMPACTS NEAR SOURCES OF DIOXIN RELEASE IN U.S. EPA RISK ASSESSMENTS

    EPA Science Inventory

    Prior to the mid 1980s, assessments of health impacts from dioxin-like compounds released into the air only evaluated the inhalation exposure pathway. In the latter 1980s it was demonstrated that consumption of animal food products is the principal source of exposure to dioxin...

  20. Database of Sources of Environmental Releases of Dioxin-Like Compounds in the United States

    EPA Science Inventory

    The Database of Sources of Environmental Releases of Dioxin-like Compounds in the United States (US)

  1. Evaluation of the source area of rooftop scalar measurements in London, UK using wind tunnel and modelling approaches.

    NASA Astrophysics Data System (ADS)

    Brocklehurst, Aidan; Boon, Alex; Barlow, Janet; Hayden, Paul; Robins, Alan

    2014-05-01

    The source area of an instrument is an estimate of the area of ground over which the measurement is generated. Quantification of the source area of a measurement site provides crucial context for analysis and interpretation of the data. A range of computational models exists to calculate the source area of an instrument, but these are usually based on assumptions which do not hold for instruments positioned very close to the surface, particularly those surrounded by heterogeneous terrain, i.e., urban areas. Although positioning instrumentation at higher elevation (i.e., on masts) is ideal in urban areas, this can incur high installation and maintenance costs, and it can be logistically difficult to position instruments in the ideal geographical location. Therefore, in many studies, experimentalists turn to rooftops to position instrumentation. Experimental validations of source area models for these situations are very limited. In this study, a controlled tracer gas experiment was conducted in a wind tunnel using a 1:200 scale model of a measurement site used in previous experimental work in central London. The detector was set at the location of the rooftop site while the tracer was released at a range of locations within the surrounding streets and rooftops. Concentration measurements are presented for a range of wind angles, with the spread of concentration measurements indicative of the source area distribution. Clear evidence of wind channeling by streets is seen, with the shape of the source area strongly influenced by buildings upwind of the measurement point. The results of the wind tunnel study are compared to scalar concentration source areas generated by modelling approaches based on meteorological data from the central London experimental site and used in the interpretation of continuous carbon dioxide (CO2) concentration data. Initial conclusions are drawn as to how to apply scalar concentration source area models to rooftop measurement sites, with suggestions for improving them to incorporate effects such as channeling.

  2. The 1991 October 24 flare: A challenge for standard models

    NASA Technical Reports Server (NTRS)

    Beaujardiere, J.-F. De LA; Canfield, R. C.; Hudson, H. S.; Wulser, J.-P.; Acton, L.; Kosugi, T.; Masuda, S.

    1995-01-01

    The M9.8 solar flare of 1991 October 24 22:30 UT presents several interesting characteristics: (1) energy release starts high in the corona; (2) the primary chromospheric ribbons are initially well separated and do not move apart at an observable rate; (3) no evidence is found for an erupting filament or other driver. To explain this flare, we consider several canonical flare models, including a filament eruption, a confined filament eruption, current interruption, and interacting loops. We conclude that none of these scenarios unequivocally explains this flare. Two possibilities which cannot be ruled out are (1) the eruption of a filament unobservable in H-alpha which starts high in the corona and produces ribbon motions below our detection threshold and no perceptible expansion of the coronal X-ray source, and (2) energy release due to spontaneous, propagating reconnection which allows the system to essentially brighten in place.

  3. Using Remote Sensing Data to Update a Dynamic Regional-Scale Water Quality Model

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Nolin, A.; Brakebill, J.; Sproles, E.; Macauley, M.

    2012-04-01

    Regional-scale SPARROW models, used by the US Geological Survey, relate watershed characteristics to in-stream water quality. SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models are steady-state models and describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially referenced explanatory information. However, many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions, which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. Here, we describe a dynamically calibrated SPARROW model of total nitrogen flux in the Potomac River Basin based on seasonal water quality and watershed input data for 80 monitoring stations over the period 2000 to 2008. One challenge in dynamic modeling of reactive nitrogen is obtaining spatially detailed and sufficiently frequent input data on the phenology of agricultural production and terrestrial vegetation. We use the Enhanced Vegetation Index (EVI) and gross primary productivity data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite to parameterize seasonal uptake and release of nitrogen. The spatial reference frame of the model is a 16,000-reach, 1:100,000-scale stream network, and the computational time step is seasonal. Precipitation and temperature data are from the PRISM gridded data set, augmented with snow frequency derived from MODIS. The model formulation allows for separate storage compartments for nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition.
Removal of nitrogen from watershed storage to stream channels and to "permanent" sinks (deep groundwater and the atmosphere) occur as parallel first-order processes. We use the model to explore an important issue in nutrient management in the Potomac and other basins: the long-term response of total nitrogen flux to changing climate. We model the nitrogen flux response to projected seasonal and inter-annual changes in temperature and precipitation, but under current seasonal nitrogen inputs, as indicated by MODIS measures of productivity. Under these constant inter-annual inputs, changing temperature and precipitation are predicted to lead to flux changes as temporary basin stores of nitrogen either grow or shrink due to changing relative rates of nitrogen removal to the atmosphere and release to streams.
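The parallel first-order storage formulation described above can be sketched in a few lines; the rate constants here are illustrative placeholders, not calibrated SPARROW parameters:

```python
import math

def storage_and_stream_flux(s0, k_stream, k_sink, t):
    """Nitrogen storage drained by two parallel first-order processes:
    release to stream channels (k_stream) and loss to permanent sinks
    (k_sink). Returns storage at time t and cumulative flux to streams."""
    k = k_stream + k_sink
    s_t = s0 * math.exp(-k * t)
    # The fraction of total removal routed to streams is k_stream / k.
    to_stream = (k_stream / k) * (s0 - s_t)
    return s_t, to_stream

# 100 kg N in storage, 0.3/yr released to streams, 0.1/yr to permanent sinks
storage, flux = storage_and_stream_flux(100.0, 0.3, 0.1, 1.0)
```

Under changing climate forcing, shifting k_stream relative to k_sink is what lets the basin store grow or shrink, as the abstract describes.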

  4. NESHAP Dose-Release Factor Isopleths for Five Source-to-Receptor Distances from the Center of Site and H-Area for all Compass Sectors at SRS using CAP88-PC Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trimor, P.

    The Environmental Protection Agency (EPA) requires the use of the computer model CAP88-PC to estimate the total effective doses (TED) for demonstrating compliance with 40 CFR 61, Subpart H (EPA 2006), the National Emission Standards for Hazardous Air Pollutants (NESHAP) regulations. As such, CAP88 Version 4.0 was used to calculate the receptor dose due to routine atmospheric releases at the Savannah River Site (SRS). For estimation, NESHAP dose-release factors (DRFs) have been supplied to Environmental Compliance and Area Closure Projects (EC&ACP) for many years. DRFs represent the dose to a maximum receptor exposed to 1 Ci of a specified radionuclide being released into the atmosphere. They are periodically updated to include changes in the CAP88 version, input parameter values, site meteorology, and location of the maximally exposed individual (MEI). This report presents the DRFs of tritium oxide released at two onsite locations, center-of-site (COS) and H-Area, at 0 ft. elevation to maximally exposed individuals (MEIs) located 1000, 3000, 6000, 9000, and 12000 meters from the release areas for 16 compass sectors. The analysis makes use of area-specific meteorological data (Viner 2014).
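A DRF reduces receptor dose estimation to simple bookkeeping: dose (mrem) = release (Ci) x DRF (mrem/Ci), summed over nuclides. The factor value below is a made-up illustration, not one of the report's DRFs:

```python
def receptor_dose(releases_ci, drf_mrem_per_ci):
    """Total effective dose as the sum over nuclides of
    release (Ci) times the dose-release factor (mrem/Ci)."""
    return sum(q * drf_mrem_per_ci[nuc] for nuc, q in releases_ci.items())

# Hypothetical DRF for tritium oxide at one sector/distance combination
drf = {"H-3": 2.0e-3}                       # mrem per Ci (illustrative only)
dose = receptor_dose({"H-3": 500.0}, drf)   # 500 Ci routine release
```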

  5. Calculation of Tectonic Strain Release from an Explosion in a Three-Dimensional Stress Field

    NASA Astrophysics Data System (ADS)

    Stevens, J. L.; O'Brien, M. S.

    2012-12-01

    We have developed a 3D nonlinear finite element code designed for calculation of explosions in 3D heterogeneous media and have incorporated the capability to perform explosion calculations in a prestressed medium. The effect of tectonic prestress on explosion-generated surface waves has been discussed since the 1960s. In most of these studies tectonic release was described as a superposition of a tectonic source modeled as a double couple, multipole or moment tensor, plus a point explosion source. The size of the tectonic source was determined by comparison with the observed Love waves and the Rayleigh wave radiation pattern. Day et al. (1987) first attempted to perform numerical modeling of tectonic release through an axisymmetric calculation of the explosion Piledriver. To the best of our knowledge no one has previously performed numerical calculations for an explosion in a three-dimensional stress field. Calculation of tectonic release depends on a realistic representation of the stress state in the earth. In general the vertical stress is equal to the overburden weight of the material above at any given point. The horizontal stresses may be larger or smaller than this value up to the point where failure due to frictional sliding relieves the stress. In our calculations, we use the normal overburden calculation to determine the vertical stress, and then modify the horizontal stresses to some fraction of the frictional limit. This is the initial stable state of the calculation prior to introduction of the explosion. Note that although the vertical stress is still equivalent to the overburden weight, the pressure is not, and it may be either increased or reduced by the tectonic stresses. Since material strength increases with pressure, this also can substantially affect the seismic source. In general, normal faulting regimes will amplify seismic signals, while reverse faulting regimes will decrease seismic signals; strike-slip regimes may do either.
We performed a 3D calculation of the Shoal underground nuclear explosion including tectonic prestress. Shoal was a 12.5 kiloton nuclear explosion detonated near Fallon, Nevada. This event had strong heterogeneity in near field waveforms and is in a region under primarily extensional tectonic stress. There were three near-field shot level recording stations located in three directions each at about 590 meters from the shot. Including prestress consistent with the regional stress field causes variations in the calculated near-field waveforms similar to those observed in the Shoal data.
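A minimal sketch of initializing the prestressed state described above, assuming a cohesionless Mohr-Coulomb frictional limit and illustrative material values (the abstract does not give its parameters):

```python
import math

def initial_stresses(rho, g, depth, mu_f, frac):
    """Vertical stress equals the overburden weight; the horizontal stress
    is moved toward the frictional (cohesionless Mohr-Coulomb) limit by the
    fraction `frac`. Compression positive; extensional regime assumed."""
    sv = rho * g * depth                                  # overburden, Pa
    # Limiting principal-stress ratio for cohesionless frictional sliding
    k_lim = (math.sqrt(mu_f ** 2 + 1.0) + mu_f) ** 2
    sh_limit = sv / k_lim                                 # lowest stable sigma_h
    sh = sv - frac * (sv - sh_limit)   # frac = 0: lithostatic, 1: at failure
    return sv, sh

# Rock density 2700 kg/m^3, 367 m shot depth (Shoal), friction 0.6,
# horizontal stress halfway to the frictional limit -- all illustrative
sv, sh = initial_stresses(2700.0, 9.81, 367.0, 0.6, 0.5)
```

With sh below sv, the mean pressure (sv + 2*sh)/3 differs from the overburden weight, which is the point made in the abstract.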

  6. 2015 TRI National Analysis: Toxics Release Inventory Releases at Various Summary Levels

    EPA Pesticide Factsheets

    The TRI National Analysis is EPA's annual interpretation of TRI data at various summary levels. It highlights how toxic chemical wastes were managed, where toxic chemicals were released and how the 2015 TRI data compare to data from previous years. This dataset reports US state, county, large aquatic ecosystem, metro/micropolitan statistical area, and facility level statistics from 2015 TRI releases, including information on: number of 2015 TRI facilities in the geographic area and their releases (total, water, air, land); population information, including populations living within 1 mile of TRI facilities (total, minority, in poverty); and Risk Screening Environmental Indicators (RSEI) model related pounds, toxicity-weighted pounds, and RSEI score. The source of administrative boundary data is the 2013 cartographic boundary shapefiles. Location of facilities is provided by EPA's Facility Registry Service (FRS). Large Aquatic Ecosystems boundaries were dissolved from the hydrologic unit boundaries and codes for the United States, Puerto Rico, and the U.S. Virgin Islands. This layer was revised for inclusion in the National Atlas of the United States of America (November 2002), and updated to match the streams file created by the USGS National Mapping Division (NMD) for the National Atlas of the United States of America.

  7. Deformation of Copahue volcano: Inversion of InSAR data using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Velez, Maria Laura; Euillades, Pablo; Caselli, Alberto; Blanco, Mauro; Díaz, Jose Martínez

    2011-04-01

    The Copahue volcano is one of the most active volcanoes in Argentina, with eruptions having been reported as recently as 1992, 1995 and 2000. A deformation analysis using the Differential Synthetic Aperture Radar technique (DInSAR) was performed on the Copahue-Caviahue Volcanic Complex (CCVC) from Envisat radar images between 2002 and 2007. A deformation rate of approximately 2 cm/yr was calculated, located mostly on the north-eastern flank of Copahue volcano, and assumed to be constant during the period of the interferograms. The geometry of the source responsible for the deformation was evaluated from an inversion of the mean velocity deformation measurements using two different models based on pressure sources embedded in an elastic homogeneous half-space. A genetic algorithm was applied as an optimization tool to find the best-fit source. Results from inverse modelling indicate that a source located beneath the volcano edifice at a mean depth of 4 km is producing a volume change of approximately 0.0015 km³/yr. This source was analysed considering the available studies of the area, and a conceptual model of the volcanic-hydrothermal system was designed. The source of deformation is related to a depressurisation of the system that results from the release of magmatic fluids across the boundary between the brittle and plastic domains. These leakages are considered to be responsible for the weak phreatic eruptions recently registered at the Copahue volcano.
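The inversion's forward model can be illustrated with the classic Mogi point-source formulas in volume-change form, using the abstract's depth (4 km) and volume-change rate (0.0015 km³/yr) and an assumed Poisson ratio of 0.25 (the study's actual source models may differ):

```python
import math

def mogi_surface_displacement(r, depth, dV, nu=0.25):
    """Surface displacement of an elastic half-space due to a Mogi point
    pressure source, parameterized by the source volume change dV (m^3).
    r: horizontal distance from the source axis (m); depth: source depth (m).
    Returns (radial, vertical) displacement in metres."""
    c = (1.0 - nu) / math.pi
    R3 = (r ** 2 + depth ** 2) ** 1.5
    return c * dV * r / R3, c * dV * depth / R3

# 4 km deep source, 0.0015 km^3/yr volume change, displacement at the axis
ur, uz = mogi_surface_displacement(0.0, 4000.0, 0.0015e9)
```

At the axis this gives uz on the order of 2 cm/yr, consistent with the deformation rate reported in the abstract.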

  8. VizieR Online Data Catalog: KiDS-ESO-DR3 multi-band source catalog (de Jong+, 2017)

    NASA Astrophysics Data System (ADS)

    de Jong, J. T. A.; Verdoes Kleijn, G. A.; Erben, T.; Hildebrandt, H.; Kuijken, K.; Sikkema, G.; Brescia, M.; Bilicki, M.; Napolitano, N. R.; Amaro, V.; Begeman, K. G.; Boxhoorn, D. R.; Buddelmeijer, H.; Cavuoti, S.; Getman, F.; Grado, A.; Helmich, E.; Huang, Z.; Irisarri, N.; La Barbera, F.; Longo, G.; McFarland, J. P.; Nakajima, R.; Paolillo, M.; Puddu, E.; Radovich, M.; Rifatto, A.; Tortora, C.; Valentijn, E. A.; Vellucci, C.; Vriend, W.-J.; Amon, A.; Blake, C.; Choi, A.; Fenech Conti, I.; Herbonnet, R.; Heymans, C.; Hoekstra, H.; Klaes, D.; Merten, J.; Miller, L.; Schneider, P.; Viola, M.

    2017-04-01

    KiDS-ESO-DR3 contains a multi-band source catalogue encompassing all publicly released tiles, a total of 440 survey tiles. It includes the coadded images, weight maps, masks and source lists of the 292 survey tiles new to KiDS-ESO-DR3, adding to the 148 tiles released previously (50 in KiDS-ESO-DR1 and 98 in KiDS-ESO-DR2). (1 data file).

  9. Forward Field Computation with OpenMEEG

    PubMed Central

    Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen

    2011-01-01

    To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime, for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip has been operational since release 2.0. PMID:21437231

  10. The use of open data from social media for the creation of 3D georeferenced modeling

    NASA Astrophysics Data System (ADS)

    Themistocleous, Kyriacos

    2016-08-01

    There is a great deal of open source video on the internet that is posted by users on social media sites. With the release of low-cost unmanned aerial vehicles, many hobbyists are uploading videos from different locations, especially in remote areas. Using open source data that is available on the internet, this study utilized structure from motion (SfM) as a range imaging technique to estimate three-dimensional landscape features from two-dimensional image sequences extracted from video, with image distortion correction and geo-referencing applied. This type of documentation may be necessary for cultural heritage sites that are inaccessible or difficult to document, where video from Unmanned Aerial Vehicles (UAVs) can be accessed instead. These 3D models can be viewed in Google Earth and used to create orthoimages, drawings and digital terrain models for cultural heritage and archaeological purposes in remote or inaccessible areas.

  11. RADSOURCE. Volume 1, Part 1, A scaling factor prediction computer program technical manual and code validation: Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vance, J.N.; Holderness, J.H.; James, D.W.

    1992-12-01

    Waste stream scaling factors based on sampling programs are vulnerable to one or more of the following factors: sample representativeness, analytic accuracy, and measurement sensitivity. As an alternative to sample analyses or as a verification of the sampling results, this project proposes the use of the RADSOURCE code, which accounts for the release of fuel-source radionuclides. Once the release rates of these nuclides from fuel are known, the code develops scaling factors for waste streams based on easily measured Cobalt-60 (Co-60) and Cesium-137 (Cs-137). The project team developed mathematical models to account for the appearance rate of 10CFR61 radionuclides in reactor coolant. They based these models on the chemistry and nuclear physics of the radionuclides involved. Next, they incorporated the models into a computer code that calculates plant waste stream scaling factors based on reactor coolant gamma-isotopic data. Finally, the team performed special sampling at 17 reactors to validate the models in the RADSOURCE code.
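The scaling-factor bookkeeping itself is straightforward; the coolant activities below are illustrative placeholders, not RADSOURCE output:

```python
def scaling_factors(coolant_activity, key="Co-60"):
    """Scaling factor of each hard-to-measure nuclide relative to an easily
    measured key nuclide (Co-60 or Cs-137): SF = A_nuclide / A_key, so a
    waste stream's A_nuclide can be estimated as SF times its measured A_key."""
    a_key = coolant_activity[key]
    return {nuc: a / a_key for nuc, a in coolant_activity.items() if nuc != key}

# Illustrative reactor coolant activities (uCi/g)
sf = scaling_factors({"Co-60": 2.0e-3, "Ni-63": 4.0e-4, "Fe-55": 1.0e-3})
```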

  12. Nitrates in drinking water: relation with intensive livestock production.

    PubMed

    Giammarino, M; Quatto, P

    2015-01-01

    An excess of nitrates causes environmental pollution in receiving water bodies and a health risk for humans if contaminated water is a source of drinking water. Directive 91/676/CEE [1] aims to reduce the nitrogen pressure in Europe from agricultural sources and identifies the livestock population as one of the predominant sources of the nutrient surplus that can be released into water and air. The Directive covers cattle, sheep, pigs and poultry and their territorial loads, but it does not deal with fish farms, whose effluents may contain pollutants affecting ecosystem water quality. On the basis of multivariate statistical analysis, this paper aims to establish which types of farming affect the presence of nitrates in drinking water in the province of Cuneo, Piedmont, Italy. To this end, we used data from official sources on nitrates in drinking water and data from the Arvet database on the presence of intensive farming in the area considered. For model selection we employed an automatic variable selection algorithm. We identified fish farms as a major source of nitrogen released into the environment, while pollution from sheep and poultry appeared negligible. We emphasize the need to include in the "Nitrate Vulnerable Zones" (as defined in Directive 91/676/CEE [1]) all areas with intensive open-system fish farming. In addition, open-system aquaculture facilities should be equipped with adequate downstream filtering to remove nitrates from the wastewater.

  14. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
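The discrete minimum weighted-norm estimate s = W^-1 H^T (H W^-1 H^T)^-1 mu can be sketched on a toy two-measurement problem. Uniform weights are used here for simplicity; the renormalization method's contribution is precisely the choice of W satisfying the renormalization condition:

```python
def min_weighted_norm_estimate(H, mu, w):
    """Minimum weighted-norm source estimate for a 2xN sensitivity matrix H,
    measurement pair mu and positive diagonal weights w:
    s = W^-1 H^T (H W^-1 H^T)^-1 mu (2x2 Gram system solved explicitly)."""
    n = len(w)
    hw = [[H[i][j] / w[j] for j in range(n)] for i in range(2)]    # H W^-1
    g = [[sum(hw[i][j] * H[k][j] for j in range(n)) for k in range(2)]
         for i in range(2)]                                        # H W^-1 H^T
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    a = [(g[1][1] * mu[0] - g[0][1] * mu[1]) / det,                # solve G a = mu
         (g[0][0] * mu[1] - g[1][0] * mu[0]) / det]
    return [sum(hw[i][j] * a[i] for i in range(2)) for j in range(n)]

# Toy problem: 2 measurements of a 4-cell source field
H = [[1.0, 0.5, 0.2, 0.1],
     [0.1, 0.2, 0.5, 1.0]]
mu = [1.0, 0.4]            # what a true source of 2.0 in cell 2 would produce
s_hat = min_weighted_norm_estimate(H, mu, [1.0, 1.0, 1.0, 1.0])
```

The estimate reproduces the measurements exactly but spreads mass across cells; a renormalizing choice of w sharpens the localization of point sources, which is the optimality property discussed in the abstract.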

  15. Press releases by academic medical centers: not so academic?

    PubMed

    Woloshin, Steven; Schwartz, Lisa M; Casella, Samuel L; Kennedy, Abigail T; Larson, Robin J

    2009-05-05

    Background: The news media are often criticized for exaggerated coverage of weak science. Press releases, a source of information for many journalists, might be a source of those exaggerations. Objective: To characterize research press releases from academic medical centers. Design: Content analysis. Setting: Press releases from 10 medical centers at each extreme of U.S. News & World Report's rankings for medical research. Measurements: Press release quality. Results: Academic medical centers issued a mean of 49 press releases annually. Among 200 randomly selected releases analyzed in detail, 87 (44%) promoted animal or laboratory research, of which 64 (74%) explicitly claimed relevance to human health. Among 95 releases about primary human research, 22 (23%) omitted study size and 32 (34%) failed to quantify results. Among all 113 releases about human research, few (17%) promoted studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on the most limited human studies (those with uncontrolled interventions, small samples (<30 participants), surrogate primary outcomes, or unpublished data), yet 58% lacked the relevant cautions. Limitation: The effects of press release quality on media coverage were not directly assessed. Conclusion: Press releases from academic medical centers often promote research that has uncertain relevance to human health and do not provide key facts or acknowledge important limitations. Funding: National Cancer Institute.

  16. Environmentally induced (co)variance in sperm and offspring phenotypes as a source of epigenetic effects.

    PubMed

    Marshall, Dustin J

    2015-01-01

    Traditionally, it has been assumed that sperm are a vehicle for genes and nothing more. As such, the only source of variance in offspring phenotype via the paternal line has been genetic effects. More recently, however, it has been shown that the phenotype or environment of fathers can affect the phenotype of offspring, challenging traditional theory with implications for evolution, ecology and human in vitro fertilisation. Here, I review sources of non-genetic variation in the sperm phenotype and evidence for co-variation between sperm and offspring phenotypes. I distinguish between two environmental sources of variation in sperm phenotype: the pre-release environment and the post-release environment. Pre-release, sperm phenotypes can vary within species according to male phenotype (e.g. body size) and according to local conditions such as the threat of sperm competition. Post-release, the physicochemical conditions that sperm experience, either when freely spawned or when released into the female reproductive tract, can further filter or modify sperm phenotypes. I find evidence that both pre- and post-release sperm environments can affect offspring phenotype; fertilisation is not a new beginning – rather, the experiences of sperm with the father and upon release can drive variation in the phenotype of the offspring. Interestingly, there was some evidence for co-variation between the stress resistance of sperm and the stress resistance of offspring, though more studies are needed to determine whether such effects are widespread. Overall, it appears that environmentally induced covariation between sperm and offspring phenotypes is non-negligible and further work is needed to determine their prevalence and strength. © 2015. Published by The Company of Biologists Ltd.

  17. Elucidating the Role of Carbon Sources on Abiotic and Biotic Release of Arsenic into Cambodian Aquifers

    NASA Astrophysics Data System (ADS)

    Koeneke, M.

    2017-12-01

    Arsenic (As) is a naturally occurring contaminant in Cambodia that has been polluting the well-water sources of millions of people. Studies commonly examine the biotic factors that cause arsenic to be released from aquifer sediments to groundwater. However, abiotic release of As from sediments, though little studied, may also play a key role in As contamination of well water. The goal of this research is to quantitatively compare organic-carbon-mediated abiotic and biotic release of arsenic from sediments to groundwater. Batch anaerobic incubation experiments under abiotic (sodium azide used to immobilize microbes) and biotic conditions were conducted using Cambodian aquifer sediments, four different organic carbon sources (sodium lactate, sodium citrate, sodium oxalate, and humic acid), and six different carbon concentrations (0, 1, 2.5, 5, 10, 25 mg C/L). Dissolved arsenic, iron (Fe), and manganese (Mn) concentrations in the treatments were measured over 112 days. In addition, carbon concentrations in the sediment and solution were measured. Collectively, these measurements show how different carbon sources, different carbon concentrations, and abiotic versus biotic conditions impact the release of arsenic from Cambodian sediments into aquifers. Overall, the introduction of organic carbon to the soil increases the amount of As released from the sediment. The presence or absence of biotic activity appeared to play a minimal role in the amount of As released. Dissolved species analysis showed that 100% of the As was As(V). Our ICP-MS results vary due to the heterogeneity of the samples, but when high levels of Fe are seen in solution, we also see high levels of As. We also see higher As concentrations when there is less Mn in solution.

  18. SU-E-T-333: Towards Customizable Radiotherapy Enhancement (CuRE) for Prostate Cancer Using Cisplatin Nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinha, N; Cifter, G; Sajo, E

    2014-06-01

    Purpose: Replacing routinely used brachytherapy spacers with multifunctional ones loaded with cisplatin nanoparticles (CNP), which can be released into the tumor after implantation, could enable customizable radiation boosting of the prostate tumor in addition to a chemotherapy effect. This study investigates the feasibility of customizing the intra-tumor biodistribution and corresponding dose enhancement factor (DEF) over time for the released CNP as a function of nanoparticle size. Methods: Dose enhancement factors (DEF) due to photon-induced emission of photo-/Auger electrons from CNPs were calculated as a function of concentration using a previously published analytical calculation method. An experimentally determined diffusion coefficient (D) for 10 nm nanoparticles in a mouse tumor model was employed to estimate D for other sizes using the Stokes-Einstein equation. The error-function diffusion model from the experimental study was applied to generate the intra-tumor concentration profile for a burst release of CNPs from the spacer over time. The corresponding DEF profiles were then determined for brachytherapy using Pd-103 and I-125 sources. Results: As expected, the generated profiles showed greater DEF over time for smaller CNP sizes at sample distances from the spacer. For example, for a centrally located spacer, clinically significant DEF (> 20%) could be achieved near the tumor periphery (ca. 0.85 cm from the spacer for an average prostate cancer tumor) after 20 and 100 days, respectively, for CNP sizes of 2 nm and 10 nm, using I-125. For Pd-103, clinically significant DEF could be achieved at the same position after 22 and 108 days, respectively, for the same particle sizes. Conclusion: Our preliminary results demonstrate the feasibility of customizing dose enhancement to prostate tumors as a function of spacer location, brachytherapy source type, or size of CNPs released from multifunctional spacers. Such an approach could enable customizable radiation boosting of tumor sub-volumes while minimizing dose to healthy tissues.
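The two modelling steps named in the abstract, Stokes-Einstein size scaling of the diffusion coefficient and the error-function profile for a burst release, can be sketched as follows; the reference diffusion coefficient is a hypothetical placeholder, not the experimentally determined value:

```python
import math

def scaled_diffusion_coefficient(d_ref, size_ref_nm, size_nm):
    """Stokes-Einstein: D is inversely proportional to particle diameter at
    fixed temperature and viscosity, so scale a measured D to another size."""
    return d_ref * size_ref_nm / size_nm

def concentration_fraction(x_cm, t_days, d_cm2_per_s):
    """Burst-release diffusion profile: C(x, t) / C0 = erfc(x / (2 sqrt(D t)))."""
    t_s = t_days * 86400.0
    return math.erfc(x_cm / (2.0 * math.sqrt(d_cm2_per_s * t_s)))

# Hypothetical D for 10 nm CNPs, scaled to 2 nm (5x faster diffusion)
d10 = 1.0e-8                                         # cm^2/s, illustrative
d2 = scaled_diffusion_coefficient(d10, 10.0, 2.0)
frac = concentration_fraction(0.85, 20.0, d2)        # 0.85 cm after 20 days
```

Smaller particles reach the tumor periphery sooner, which is why the 2 nm CNPs achieve clinically significant DEF decades of days earlier in the abstract's examples.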

  19. The Distribution of Solar Wind Speeds During Solar Minimum: Calibration for Numerical Solar Wind Modeling Constraints on the Source of the Slow Solar Wind (Postprint)

    DTIC Science & Technology

    2012-03-05

    subsonic corona below the critical point, resulting in an increased scale height and mass flux, while keeping the kinetic energy of the flow fairly ... tubes with small expansion factors the heating occurs in the supersonic corona, where the energy ... goes into the kinetic energy of the solar wind, increasing the flow speed [Leer and Holzer, 1980; Pneuman, 1980]. Using this model and a simplified

  20. Sensitivity of WRF-chem predictions to dust source function specification in West Asia

    NASA Astrophysics Data System (ADS)

    Nabavi, Seyed Omid; Haimberger, Leopold; Samimi, Cyrus

    2017-02-01

    Dust storms tend to form in sparsely populated areas covered by only a few observations. Dust source maps, known as source functions, are used in dust models to assign a dust-release potential to each location. Recent research showed that the well-known Ginoux source function (GSF), currently used in the Weather Research and Forecasting model coupled with Chemistry (WRF-chem), exhibits large errors over some regions in West Asia, particularly near the Iraq/Syria border. This study aims to improve the specification of this critical part of dust forecasts. A new source function based on multi-year analysis of satellite observations, called the West Asia source function (WASF), is therefore proposed to raise the quality of WRF-chem predictions in the region. WASF has been implemented in three dust schemes of WRF-chem. Remotely sensed and ground-based observations have been used to verify the horizontal and vertical extent and location of the simulated dust clouds. Results indicate that WRF-chem performance is significantly improved in many areas after the implementation of WASF. The modified runs (long-term simulations over the summers of 2008-2012, using nudging) yielded an average increase in the Spearman correlation between observed and forecast aerosol optical thickness of 12-16 percentage points compared to control runs with standard source functions. They even outperform MACC and DREAM dust simulations over many dust source regions. However, the quality of the forecasts decreased with distance from sources, probably due to deficiencies in the transport and deposition characteristics of the forecast model in these areas.

  1. Biodiesel presence in the source zone hinders aromatic hydrocarbons attenuation in a B20-contaminated groundwater

    NASA Astrophysics Data System (ADS)

    Ramos, Débora Toledo; Lazzarin, Helen Simone Chiaranda; Alvarez, Pedro J. J.; Vogel, Timothy M.; Fernandes, Marilda; do Rosário, Mário; Corseuil, Henry Xavier

    2016-10-01

    The behavior of biodiesel blend spills has received limited attention in spite of the increasing and widespread introduction of biodiesel to the transportation fuel matrix. In this work, a controlled field release of biodiesel B20 (100 L of 20:80 v/v soybean biodiesel and diesel) was monitored over 6.2 years to assess the behavior and natural attenuation of constituents of major concern (e.g., BTEX (benzene, toluene, ethyl-benzene and xylenes) and PAHs (polycyclic aromatic hydrocarbons)) in a sandy aquifer material. Biodiesel was preferentially biodegraded compared to diesel aromatic compounds, with a concomitant increase in acetate, methane (near the saturation limit (≈22 mg L-1)) and dissolved BTEX and PAH concentrations in the source zone during the first 1.5 to 2.0 years after the release. Benzene and benzo(a)pyrene concentrations remained above regulatory limits in the source zone until the end of the experiment (6.2 years after the release). Compared to a previous adjacent 100-L release of ethanol-amended gasoline, the biodiesel/diesel blend release resulted in a shorter BTEX plume, but with higher residual dissolved hydrocarbon concentrations near the source zone. This was attributed to the greater persistence of viscous (and less mobile) biodiesel than the highly-soluble and mobile ethanol in the source zone. This persistence of biodiesel/diesel NAPL at the source zone slowed BTEX and PAH biodegradation (by the establishment of an anaerobic zone) but reduced the plume length by reducing mobility. This is the first field study to assess biodiesel/diesel blend (B20) behavior in groundwater and its effects on the biodegradation and plume length of priority groundwater pollutants.

  2. Biodiesel presence in the source zone hinders aromatic hydrocarbons attenuation in a B20-contaminated groundwater.

    PubMed

    Ramos, Débora Toledo; Lazzarin, Helen Simone Chiaranda; Alvarez, Pedro J J; Vogel, Timothy M; Fernandes, Marilda; do Rosário, Mário; Corseuil, Henry Xavier

    2016-10-01

    The behavior of biodiesel blend spills has received limited attention in spite of the increasing and widespread introduction of biodiesel to the transportation fuel matrix. In this work, a controlled field release of biodiesel B20 (100 L of 20:80 v/v soybean biodiesel and diesel) was monitored over 6.2 years to assess the behavior and natural attenuation of constituents of major concern (e.g., BTEX (benzene, toluene, ethyl-benzene and xylenes) and PAHs (polycyclic aromatic hydrocarbons)) in a sandy aquifer material. Biodiesel was preferentially biodegraded compared to diesel aromatic compounds, with a concomitant increase in acetate, methane (near the saturation limit (≈22 mg L-1)) and dissolved BTEX and PAH concentrations in the source zone during the first 1.5 to 2.0 years after the release. Benzene and benzo(a)pyrene concentrations remained above regulatory limits in the source zone until the end of the experiment (6.2 years after the release). Compared to a previous adjacent 100-L release of ethanol-amended gasoline, the biodiesel/diesel blend release resulted in a shorter BTEX plume, but with higher residual dissolved hydrocarbon concentrations near the source zone. This was attributed to the greater persistence of viscous (and less mobile) biodiesel than the highly-soluble and mobile ethanol in the source zone. This persistence of biodiesel/diesel NAPL at the source zone slowed BTEX and PAH biodegradation (by the establishment of an anaerobic zone) but reduced the plume length by reducing mobility. This is the first field study to assess biodiesel/diesel blend (B20) behavior in groundwater and its effects on the biodegradation and plume length of priority groundwater pollutants. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Evaluation of a Eulerian and Lagrangian air quality model using perfluorocarbon tracers released in Texas for the BRAVO haze study

    NASA Astrophysics Data System (ADS)

    Schichtel, Bret A.; Barna, Michael G.; Gebhart, Kristi A.; Malm, William C.

    The Big Bend Regional Aerosol and Visibility Observational (BRAVO) study was designed to determine the sources of haze at Big Bend National Park, Texas, using a combination of source and receptor models. BRAVO included an intensive monitoring campaign from July to October 1999 that included the release of perfluorocarbon tracers from four locations at distances of 230-750 km from Big Bend, measured at 24 sites. The tracer measurements near Big Bend were used to evaluate the dispersion mechanisms in the REMSAD Eulerian model and the CAPITA Monte Carlo (CMC) Lagrangian model used in BRAVO. Both models used 36 km MM5 wind fields as input. The CMC model also used a combination of routinely available 80 and 190 km wind fields from the National Weather Service's National Centers for Environmental Prediction (NCEP) as input. A model's performance is limited by inherent uncertainties due to errors in the tracer concentrations and by a model's inability to simulate sub-resolution variability. A range for this inherent uncertainty was estimated by comparing tracer data at nearby monitoring sites. It was found that the REMSAD and CMC models, using the MM5 wind field, produced performance statistics generally within this inherent uncertainty. The CMC simulation using the NCEP wind fields could reproduce the timing of tracer impacts at Big Bend, but not the concentration values, due to a systematic underestimation. It appears that the underestimation was partly due to excessive vertical dilution from high mixing depths. The model simulations were more sensitive to the input wind fields than to the models' different dispersion mechanisms. Comparisons of REMSAD to CMC tracer simulations using the MM5 wind fields had correlations between 0.75 and 0.82, depending on the tracer, but the tracer simulations using the two wind fields in the CMC model had correlations between 0.37 and 0.5.

  4. Defining Munition Constituent (MC) Source Terms in Aquatic Environments on DoD Ranges

    DTIC Science & Technology

    2013-01-01

    SSC Pacific, San Diego, CA 92152-5001, with the University of Wisconsin-Milwaukee (Civil Engineering and Mechanics). Technical Report 1999, January 2013. Defining Munition Constituent (MC) Source Terms. Approved for public release.

  5. Mercury source sector assessment for the Greater Milwaukee Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obenauf, P.; Skavroneck, S.

    1997-09-01

    The Mercury Reduction Project for the Greater Milwaukee Area is a joint effort of the Pollution Prevention Partnership, the Milwaukee Metropolitan Sewerage District (MMSD) and the Wisconsin Department of Natural Resources. Estimates of the amounts of mercury present, used and/or annually released to air, land and water within the MMSD service area are provided for 25 source sectors. This 420 square mile area (including Milwaukee County and parts of Waukesha, Racine, Ozaukee and Washington Counties) is home to just over 1 million people. The tables and figures summarize the relative amounts of mercury: annually released from purposeful uses; annually released due to trace impurities; and present or in use in the various source sectors in the Greater Milwaukee Area.

  6. Simulated Carbon Cycling in a Model Microbial Mat.

    NASA Astrophysics Data System (ADS)

    Decker, K. L.; Potter, C. S.

    2006-12-01

    We present here the novel addition of detailed organic carbon cycling to our model of a hypersaline microbial mat ecosystem. This ecosystem model, MBGC (Microbial BioGeoChemistry), simulates carbon fixation through oxygenic and anoxygenic photosynthesis, and the release of C and electrons for microbial heterotrophs via cyanobacterial exudates and also via a pool of dead cells. Previously in MBGC, the organic portion of the carbon cycle was simplified into a black-box rate of accumulation of simple and complex organic compounds based on photosynthesis and mortality rates. We will discuss the novel inclusion of fermentation as a source of carbon and electrons for use in methanogenesis and sulfate reduction, and the influence of photorespiration on labile carbon exudation rates in cyanobacteria. We will also discuss the modeling of decomposition of dead cells and the ultimate release of inorganic carbon. The detailed modeling of organic carbon cycling is important for an accurate representation of inorganic carbon flux through the mat, as well as of growth models of the heterotrophs under different environmental conditions. Because the model ecosystem is an analog of ancient microbial mats that had huge impacts on the atmosphere of the early Earth, MBGC can be useful as a biological component of either early-Earth models or models of other planets that potentially harbor life.

  7. An Active Englacial Hydrological System in a Cold Glacier: Blood Falls, Taylor Glacier, Antarctica

    NASA Astrophysics Data System (ADS)

    Carr, C. G.; Pettit, E. C.; Carmichael, J.; Badgeley, J.; Tulaczyk, S. M.; Lyons, W. B.; Mikucki, J.

    2016-12-01

    Blood Falls is a supraglacial hydrological feature formed by episodic release of iron-rich subglacial brine derived from an extensive aquifer beneath the cold, polar Taylor Glacier. While fluid transport in non-temperate ice typically occurs through meltwater delivery from the glacier surface to the bed (hydrofracturing, supraglacial lake drainage), Blood Falls represents the opposite situation: brine moves from a subglacial source to the glacier surface. Here, we present the first complete conceptual model for brine transport and release, as well as the first direct evidence of a wintertime brine release at Blood Falls, obtained through year-round time-lapse photography. Related analyses show that brine pools subglacially underneath the northern terminus of Taylor Glacier, rather than flowing directly into proglacial Lake Bonney, because ice-cored moraines and channelized surface topography provide hydraulic barriers. This pooled brine is pressurized by hydraulic head from the upglacier brine source region. Based on seismic data, we propose that episodic supraglacial release is initiated by high strain rates coupled with pressurized subglacial brine that drive intermittent subglacial and englacial fracturing. Ultimately, brine-filled basal crevasses propagate upward to link with surface crevasses, allowing brine to flow from the bed to the surface. The observation of wintertime brine release indicates that surface-generated meltwater is not necessary to trigger crack propagation or to maintain the conduit, as previously suggested. The liquid brine persists beneath and within the cold ice (-17°C), despite ambient ice/brine temperature differences as high as 10°C, due both to locally depressed brine freezing temperatures from cryoconcentration of salts and to increased ice temperatures from latent heat released during partial freezing of the brine.
The existence of an englacial hydrological system initiated by basal crevassing extends to polar glaciers a process thought limited to temperate glaciers and confirms that supraglacial, englacial, and subglacial hydrological systems act in concert to provide critical forcing on glacier dynamics, even in cold polar ice.

  8. The 2016 M7.8 Kaikōura earthquake revealed by multiple seismic wavefield simulations: slow rupture propagation on a geometrically complex fault network

    NASA Astrophysics Data System (ADS)

    Kaneko, Y.; Francois-Holden, C.; Hamling, I. J.; D'Anastasio, E.; Fry, B.

    2017-12-01

    The 2016 M7.8 Kaikōura (New Zealand) earthquake generated ground motions over 1 g across a 200-km long region, resulted in multiple onshore and offshore fault ruptures, a profusion of triggered landslides, and a regional tsunami. Here we examine the rupture evolution during the Kaikōura earthquake using multiple kinematic modelling methods based on local strong-motion and high-rate GPS data. Our kinematic models constrained by near-source data capture, in detail, a complex pattern of slowly (Vr < 2 km/s) propagating rupture from south to north, with over half of the moment release occurring in the northern source region, mostly on the Kekerengu fault, 60 seconds after the origin time. Interestingly, both models indicate rupture re-activation on the Kekerengu fault with a time separation of 11 seconds. We further conclude that most near-source waveforms can be explained by slip on the crustal faults, with little (<8%) or no contribution from the subduction interface.

  9. Reaction Wheel Disturbance Modeling, Jitter Analysis, and Validation Tests for Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Liu, Kuo-Chia; Maghami, Peiman; Blaurock, Carl

    2008-01-01

    The Solar Dynamics Observatory (SDO) aims to study the Sun's influence on the Earth by understanding the source, storage, and release of solar energy, and the interior structure of the Sun. During science observations, the jitter stability at the instrument focal plane must be maintained to less than a fraction of an arcsecond for two of the SDO instruments. To meet these stringent requirements, a significant amount of analysis and test effort has been devoted to predicting the jitter induced by various disturbance sources. One of the largest disturbance sources onboard is the reaction wheel. This paper presents the SDO approach to reaction wheel disturbance modeling and jitter analysis. It describes the verification and calibration of the disturbance model, and the ground tests performed to validate the reaction wheel jitter analysis. To mitigate the reaction wheel disturbance effects, the wheels will be limited to operate at low wheel speeds based on the current analysis. An on-orbit jitter test algorithm is also presented which will identify the true wheel speed limits in order to ensure that the wheel jitter requirements are met.
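
    Empirical reaction wheel disturbance models of the kind used in such analyses are commonly written as a sum of harmonics whose force amplitude scales with the square of wheel speed. The sketch below illustrates that generic form only; it is not SDO's actual model, and the harmonic numbers and coefficients are hypothetical placeholders:

    ```python
    import math

    def wheel_disturbance(t, omega_hz, harmonics):
        """Disturbance force at time t (s) for wheel speed omega_hz (rev/s).

        harmonics: list of (h, C, phi) tuples -- harmonic number, amplitude
        coefficient, and phase. Force amplitude scales as C * omega**2."""
        return sum(C * omega_hz ** 2 * math.sin(2 * math.pi * h * omega_hz * t + phi)
                   for h, C, phi in harmonics)
    ```

    The omega-squared amplitude scaling in this form is what makes restricting operations to low wheel speeds an effective jitter mitigation, as described above.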

  10. Fluoride release and recharge behavior of a nano-filled resin-modified glass ionomer compared with that of other fluoride releasing materials.

    PubMed

    Mitra, Sumita B; Oxman, Joe D; Falsafi, Afshin; Ton, Tiffany T

    2011-12-01

    To compare the long-term fluoride release kinetics of a novel nano-filled two-paste resin-modified glass-ionomer (RMGI), Ketac Nano (KN), with that of two powder-liquid resin-modified glass-ionomers, Fuji II LC (FLC) and Vitremer (VT), and one conventional glass-ionomer, Fuji IX (FIX). Fluoride release was measured in vitro using ion-selective electrodes. Kinetic analysis was done using regression analysis and compared with existing models for GIs and compomers. In a separate experiment, samples of KN and two conventional glass-ionomers, FIX and Ketac Molar (KM), were subjected to treatment with an external fluoride source (Oral-B Neutra-Foam) after 3 months of fluoride release, and the recharge behavior was studied for an additional 7-day period. The cumulative amount of fluoride released from KN, VT and FLC and the release profiles were statistically similar but greater than that for FIX at P < 0.05. All four materials, including KN, showed a burst of fluoride ions at shorter times (t) and an overall rate dependence on t^(1/2) typical for glass-ionomers. The coating of KN with its primer and of the compomer DY with its adhesive did not significantly alter the fluoride release behavior of the respective materials. The overall rate for KN was significantly higher than for the compomer DY. DY showed a linear rate of release vs. t and no burst effect, as expected for compomers. The nanoionomer KN showed fluoride recharge behavior similar to the conventional glass-ionomers FIX and KM. Thus, it was concluded that the new RMGI KN exhibits fluoride ion release behavior similar to typical conventional and resin-modified glass-ionomers and that the primer does not impede the release of fluoride.
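
    The reported kinetics (an initial burst followed by an overall rate dependence on t^(1/2)) are often summarized by fitting cumulative release to F(t) = burst + k*sqrt(t). A minimal least-squares fit of that two-parameter form (an illustrative helper of ours, not the authors' regression code):

    ```python
    def fit_burst_diffusion(t, y):
        """Fit cumulative release y(t) = burst + k*sqrt(t) by linear least
        squares in x = sqrt(t). Returns (burst, k)."""
        x = [ti ** 0.5 for ti in t]
        n = len(t)
        mx, my = sum(x) / n, sum(y) / n
        k = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
        return my - k * mx, k
    ```

    The intercept captures the early burst and the slope the diffusion-controlled phase; a compomer-like material with no burst would fit with an intercept near zero.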

  11. Integration for Airborne Dust Prediction Systems and Vegetation Phenology to Track Pollen for Asthma Alerts in Public Health Decision Support Systems

    NASA Technical Reports Server (NTRS)

    Luvall, J. C.; Sprigg, W. A.; Nickovic, S.; Huete, A.; Budge, A.; Flowers, L.

    2008-01-01

    The objective of the program is to assess the feasibility of combining a dust transport model with MODIS-derived phenology to study pollen transport for integration with a public health decision support system. The use of pollen information has specifically been identified as a critical need by the New Mexico State Health Department for inclusion in the Environmental Public Health Tracking (EPHT) program. Material and methods: Pollen can be transported great distances. Local observations of plant phenology may be consistent with the timing and source of pollen collected by pollen sampling instruments. The Dust REgional Atmospheric Model (DREAM) is an integrated modeling system designed to accurately describe the dust cycle in the atmosphere. The dust modules of the system incorporate state-of-the-art parameterizations of all the major phases of the atmospheric dust life cycle, such as production, diffusion, advection, and removal. These modules also include effects of the particle size distribution on aerosol dispersion. The model was modified to use pollen sources instead of dust. Pollen release was estimated based on satellite-derived phenology of key plant species and vegetation communities. The MODIS surface reflectance product (MOD09) provided information on the start of the plant growing season, growth stage, and pollen release. The resulting deterministic model is useful for predicting and simulating pollen emission and downwind concentration, to study details of phenology and meteorology and their dependencies. The proposed linkage in this project provided critical information on the location, timing, and modeled transport of pollen directly to the EPHT. This information is useful to support the Centers for Disease Control and Prevention (CDC)'s National EPHT and the State of New Mexico environmental public health decision support for asthma and allergy alerts.

  12. Numerical Experiments Investigating the Source of Explosion S-Waves

    DTIC Science & Technology

    2007-09-01

    simulations in this study are based on the well-recorded 1993 Nonproliferation Experiment (NPE) (chemical kiloton). A regional 3-dimensional model...1-kiloton chemical explosion at the NTS. NPE details and research reports can be found in Denny and Stull (1994). Figure 3 shows the extensive...T., D. Helmberger, and G. Engen (1985). Evidence for tectonic release from underground nuclear explosions in long period S waves, Bull. Seismol. Soc

  13. Model-Free Stochastic Localization of CBRN Releases

    DTIC Science & Technology

    2013-01-01

    Ioannis Ch. Paschalidis, Senior Member, IEEE. Abstract: We present a novel two-stage methodology for locating a Chemical, Biological, Radiological, or...Nuclear (CBRN) source in an urban area using a network of sensors. In contrast to earlier work, our approach does not solve an inverse dispersion problem...but relies on data obtained from a simulation of the CBRN dispersion to obtain probabilistic descriptors of sensor measurements under a variety of CBRN

  14. How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach

    DTIC Science & Technology

    2009-12-01

    Inverse Problems, Design and Optimization Symposium 2004, Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability...sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi...coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of

  15. Source apportionment of ambient non-methane hydrocarbons in Hong Kong: application of a principal component analysis/absolute principal component scores (PCA/APCS) receptor model.

    PubMed

    Guo, H; Wang, T; Louie, P K K

    2004-06-01

    Receptor-oriented source apportionment models are often used to identify sources of ambient air pollutants and to estimate source contributions to air pollutant concentrations. In this study, a PCA/APCS model was applied to data on non-methane hydrocarbons (NMHCs) measured from January to December 2001 at two sampling sites: the Tsuen Wan (TW) and Central & Western (CW) Toxic Air Pollutants Monitoring Stations in Hong Kong. This multivariate method enables the identification of major air pollution sources along with the quantitative apportionment of each source to pollutant species. The PCA analysis identified four major pollution sources at the TW site and five at the CW site. The extracted pollution sources included vehicular internal engine combustion with unburned fuel emissions, use of solvents (particularly paints), liquefied petroleum gas (LPG) or natural gas leakage, and industrial, commercial and domestic sources such as solvents, decoration, fuel combustion, chemical factories and power plants. The results of the APCS receptor model indicated that 39% and 48% of the total NMHC mass concentrations measured at CW and TW, respectively, originated from vehicle emissions; 32% and 36.4% of the total NMHCs were emitted from solvent use, and 11% and 19.4% were apportioned to LPG or natural gas leakage, respectively. 5.2% and 9% of the total NMHC mass concentrations were attributed to other industrial, commercial and domestic sources, respectively. It was also found that vehicle emissions and LPG or natural gas leakage were the main sources of C(3)-C(5) alkanes and C(3)-C(5) alkenes, while aromatics were predominantly released from paints. Comparison of source contributions to ambient NMHCs at the two sites indicated that the contribution of LPG or natural gas at the CW site was almost twice that at the TW site.
High correlation coefficients (R(2) > 0.8) between the measured and predicted values suggested that the PCA/APCS model was applicable for estimation of sources of NMHCs in ambient air.
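
    The PCA/APCS procedure itself is mechanical: standardize the species matrix, extract k principal components, rescore an artificial zero-concentration sample to convert factor scores into absolute scores, then regress each species on the APCS to obtain per-source contributions. A compact NumPy sketch under those standard assumptions (variable names are ours, not the authors'):

    ```python
    import numpy as np

    def pca_apcs(X, k):
        """PCA/APCS source apportionment sketch.

        X: (n_samples, n_species) concentration matrix; k: assumed number of
        sources. Returns the absolute principal component scores and the
        regression coefficients (intercept plus one row per source)."""
        mean, std = X.mean(axis=0), X.std(axis=0, ddof=1)
        Z = (X - mean) / std                          # standardize each species
        corr = Z.T @ Z / (len(X) - 1)                 # correlation matrix
        w, V = np.linalg.eigh(corr)                   # eigenvalues ascending
        V = V[:, np.argsort(w)[::-1][:k]]             # keep top-k components
        scores = Z @ V                                # factor scores per sample
        z0 = (0.0 - mean) / std                       # artificial zero-concentration sample
        apcs = scores - z0 @ V                        # absolute principal component scores
        # regress each species on the APCS (with intercept) -> source contributions
        D = np.column_stack([np.ones(len(X)), apcs])
        coef, _, _, _ = np.linalg.lstsq(D, X, rcond=None)
        return apcs, coef
    ```

    Multiplying each regression coefficient by the mean APCS of its component gives the average contribution of that source to each species, which is how percentage apportionments like those above are derived.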

  16. MELCOR computer code manuals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  17. The Effect of Neural Noise on Spike Time Precision in a Detailed CA3 Neuron Model

    PubMed Central

    Kuriscak, Eduard; Marsalek, Petr; Stroffek, Julius; Wünsch, Zdenek

    2012-01-01

    Experimental and computational studies emphasize the role of the millisecond precision of neuronal spike times as an important coding mechanism for transmitting and representing information in the central nervous system. We investigate the spike time precision of a multicompartmental pyramidal neuron model of the CA3 region of the hippocampus under the influence of various sources of neuronal noise. We describe differences in the contribution to noise originating from voltage-gated ion channels, synaptic vesicle release, and vesicle quantal size. We analyze the effect of interspike intervals and the voltage course preceding the firing of spikes on the spike-timing jitter. The main finding of this study is the ranking of different noise sources according to their contribution to spike time precision. The most influential is synaptic vesicle release noise, causing the spike jitter to vary from 1 ms to 7 ms around a mean value of 2.5 ms. Of second importance was the noise incurred by vesicle quantal size variation, causing the spike time jitter to vary from 0.03 ms to 0.6 ms. Least influential was the voltage-gated channel noise, generating spike jitter from 0.02 ms to 0.15 ms. PMID:22778784
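
    The quantity being compared across noise sources, spike-time jitter, is simply the standard deviation of spike times across repeated trials. A toy leaky integrate-and-fire simulation (not the paper's multicompartmental CA3 model; all constants are arbitrary illustrative values) shows how the amplitude of a release-like noise term maps to jitter:

    ```python
    import random

    def spike_jitter(noise_sd, trials=200, dt=0.1, seed=0):
        """Spike-time jitter (standard deviation, in ms) of the first spike of
        a leaky integrate-and-fire unit driven by a constant input corrupted
        by Gaussian 'release' noise. All constants are arbitrary toy values."""
        rng = random.Random(seed)
        times = []
        for _ in range(trials):
            v, t = 0.0, 0.0
            while v < 1.0:                       # fire at threshold 1.0
                drive = 0.15 + rng.gauss(0.0, noise_sd)
                v += dt * (-0.1 * v + drive)     # leaky integration step
                t += dt
            times.append(t)
        m = sum(times) / trials
        return (sum((x - m) ** 2 for x in times) / trials) ** 0.5
    ```

    With zero noise every trial fires at the same time (zero jitter); increasing the noise standard deviation spreads the threshold-crossing times, which is the effect the study quantifies per noise source.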

  18. Radioxenon detections in the CTBT international monitoring system likely related to the announced nuclear test in North Korea on February 12, 2013.

    PubMed

    Ringbom, A; Axelsson, A; Aldener, M; Auer, M; Bowyer, T W; Fritioff, T; Hoffman, I; Khrustalev, K; Nikkinen, M; Popov, V; Popov, Y; Ungar, K; Wotawa, G

    2014-02-01

    Observations made in April 2013 of the radioxenon isotopes (133)Xe and (131m)Xe at measurement stations in Japan and Russia, belonging to the International Monitoring System for verification of the Comprehensive Nuclear-Test-Ban Treaty, are unique with respect to the measurement history of these stations. Comparison of measured data with calculated isotopic ratios, as well as analysis using atmospheric transport modeling, indicates that the xenon measured was likely created in the underground nuclear test conducted by North Korea on February 12, 2013, and released 7-8 weeks later. More than one release is required to explain all observations. The (131m)Xe source terms for each release were calculated to be 0.7 TBq, corresponding to about 1-10% of the total xenon inventory for a 10 kt explosion, depending on fractionation and release scenario. The observed ratios could not be used to obtain any information regarding the fissile material that was used in the test. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
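
    The timing argument rests on the independent decay of the two radioisotopes: an observed (133)Xe/(131m)Xe activity ratio, combined with an assumed ratio at release time, fixes the elapsed time. A sketch using approximate published half-lives (the helper and the example ratio are illustrative, not the authors' computation):

    ```python
    import math

    T_HALF_DAYS = {"Xe-133": 5.243, "Xe-131m": 11.84}  # approximate half-lives

    def ratio_age_days(r_obs, r0):
        """Days since release, inferred from the observed Xe-133/Xe-131m
        activity ratio r_obs and an assumed ratio r0 at release time,
        assuming the two isotopes decay independently."""
        lam133 = math.log(2) / T_HALF_DAYS["Xe-133"]
        lam131m = math.log(2) / T_HALF_DAYS["Xe-131m"]
        return math.log(r0 / r_obs) / (lam133 - lam131m)
    ```

    Because Xe-133 decays roughly twice as fast as Xe-131m, the ratio falls steadily with time, which is what makes it usable as a clock for a 7-8 week delay.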

  19. Releasable Asbestos Field Sampler (RAFS) Operation Manual

    EPA Science Inventory

    The Releasable Asbestos Field Sampler (RAFS) is a field instrument that provides an in-situ measurement of asbestos releasability from consistent and reproducible mechanical agitation of the source material such as soil. The RAFS was designed to measure concentration (asbestos st...

  20. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    NASA Astrophysics Data System (ADS)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger-scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
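
    Two of the metrics named above, volumetric completeness and correctness, reduce to voxel-overlap counts once the model and the ground truth are rasterized onto a common occupancy grid; a minimal sketch (our own helper, assuming boolean occupancy grids, not the pipeline's actual code):

    ```python
    import numpy as np

    def volumetric_metrics(model, truth):
        """Volumetric completeness and correctness from boolean occupancy grids."""
        model = np.asarray(model, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        tp = np.logical_and(model, truth).sum()   # voxels correctly modeled
        completeness = tp / truth.sum()           # fraction of true volume recovered
        correctness = tp / model.sum()            # fraction of modeled volume that is real
        return completeness, correctness
    ```

    Completeness penalizes missing structure and correctness penalizes hallucinated structure, so the pair behaves like recall and precision over voxels.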

  1. Low-frequency source parameters of twelve large earthquakes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Harabaglia, Paolo

    1993-01-01

    We present a global survey of the low-frequency (1-21 mHz) source characteristics of large events. We are particularly interested in events unusually enriched in low frequencies and in events with a short-term precursor. We model the source time function of 12 large earthquakes using teleseismic data at low frequency. For each event we retrieve the source amplitude spectrum in the frequency range between 1 and 21 mHz with the Silver and Jordan method and the phase-shift spectrum in the frequency range between 1 and 11 mHz with the Riedesel and Jordan method. We then model the source time function by fitting the two spectra. Two of these events, the 1980 Irpinia, Italy, and the 1983 Akita-Oki, Japan, earthquakes are shallow-depth complex events that took place on multiple faults. In both cases the source time function has a length of about 100 seconds. By comparison, Westaway and Jackson find 45 seconds for the Irpinia event and Houston and Kanamori about 50 seconds for the Akita-Oki earthquake. The three deep events and four of the seven intermediate-depth events are fast-rupturing earthquakes. A single pulse is sufficient to model the source spectra in the frequency range of our interest. Two other intermediate-depth events have slower rupturing processes, characterized by a continuous energy release lasting for about 40 seconds. The last event is the intermediate-depth 1983 Peru-Ecuador earthquake. It was first recognized as a precursive event by Jordan. We model it with a smooth rupturing process starting about 2 minutes before the high-frequency origin time, superimposed on an impulsive source.

  2. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10¹⁰ individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, managing approximately 50 million focal plane transits daily and providing transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.

  3. Tracking the MSL-SAM methane detection source location Through Mars Regional Atmospheric Modeling System (MRAMS)

    NASA Astrophysics Data System (ADS)

    Pla-García, Jorge

    2016-04-01

    1. Introduction: The putative in situ detection of methane by the Sample Analysis at Mars (SAM) instrument suite on Curiosity at Gale crater has garnered significant attention because of the potential implications for the presence of geological methane sources or indigenous Martian organisms [1, 2]. SAM reported detection of background levels of atmospheric methane with a mean value of 0.69±0.25 parts per billion by volume (ppbv) at the 95% confidence interval (CI). Additionally, in four sequential measurements spanning a 60-sol period, SAM observed elevated levels of methane of 7.2±2.1 ppbv (95% CI), implying that Mars is episodically producing methane from an additional unknown source. There are several major unresolved questions regarding this detection: 1) What are the potential sources of the methane release? 2) What causes the rapid decrease in concentration? 3) Where is the release location? 4) How spatially extensive is the release? 5) For how long is CH4 released? Regarding the first question, the source of the methane is so far not identified. It could be related to geological processes like methane release from clathrates [3], serpentinisation [4] and volcanism [5], or to biological activity from methanogenesis [6]. To answer the second question, on the rapid decrease in concentration, it is important to note that the photochemical lifetime of methane is of order 100 years, much longer than the atmospheric mixing time scale, and thus the gas should tend to be well mixed except near a source or shortly after an episodic release. The observed spike of 7 ppbv from the background of <1 ppbv, and then the rapid return to the background level, could be due to a sink (destruction) or to atmospheric mixing. A wind-mediated erosion process of ordinary quartz crystals was proposed to produce activated quartz grains, which sequester methane by forming covalent Si-C bonds. 
    If this process is operational on Mars today, which some recent preliminary studies on olivine indicate could be the case, then it might explain the observed fast destruction of methane [7]. In an effort to better address the potential mixing and the remaining questions, atmospheric circulation studies of Gale Crater were performed with the Mars Regional Atmospheric Modeling System (MRAMS). The model was focused on the rover locations using nested grids with a spacing of 330 meters on the innermost grid, which is centered over the landing site [8, 9]. MRAMS is ideally suited for this investigation; the model is explicitly designed to simulate Mars' atmospheric circulations at the mesoscale and smaller with realistic, high-resolution surface properties [10, 11]. In order to characterize seasonal mixing changes throughout the Martian year, simulations were conducted at Ls 0, 90, 180 and 270. Two additional simulations at Ls 225 and 315 were run to better understand the unique meteorological setting centered around Ls 270. Ls 270 was shown to be an anomalous season when air within and outside the crater was well mixed by strong, flushing, northerly flow and large-amplitude breaking mountain waves: air flowing downslope at night is cold enough to penetrate all the way to the surface. At other seasons, the air in the crater is more isolated, though not completely, from the surrounding environment: mesoscale simulations indicate that the air flowing down the crater rims does not easily make it to the crater floor. Instead, the air encounters very cold and stable air pooled in the bottom of the crater, which forces the air to glide right over the colder, more dense air below. Thus, the mixing of near-surface crater air with the external environment is potentially more limited than around Ls 270. 
    2. Tracking the methane source location: The rise in concentration was reported to start around sol 300 (˜Ls 336), to peak shortly after sol 520 (˜Ls 82), and then to drop to background values prior to sol 575 (˜Ls 103). Two scenarios are considered in the context of the circulations predicted by MRAMS. The first scenario is the release of methane from somewhere outside the crater. The second is a release of methane within the crater. In both cases, the release is assumed to take place near the season when the rise of concentration was first noted (˜Ls 336). This is a transitional time at Gale Crater, when the flushing winds are giving way to the more isolated crater scenario. In the situation where the release was outside the crater, the experiment assumes a uniform, elevated abundance of CH4 outside the crater, and mixing should be sufficient to bring the crater methane abundance to something close to the larger-scale environmental value. As the crater becomes more isolated with time, the methane abundance in the crater will begin to lag whatever the value is outside the crater. If the release was far from the crater, the external ˜7 ppbv value might be expected to slowly decrease as the methane becomes increasingly well mixed on a global scale, and as some of that air mixes slowly into the crater. For the elevated methane levels in the crater to drop rapidly back to background levels, at least two things would need to happen. First, the external crater environment would have to drop at least as rapidly to the background levels. This seems possible only if there is very deep mixing that spreads the release through a very large volume of atmosphere, or if a rapid destruction mechanism is invoked. The second thing that would have to happen is that the crater air would have to mix nearly completely with the external crater air. 
    The model results at Ls 90, which bounds the period between the observed peak and the return to background levels, may be supportive of this idea. However, while mixing seems limited, it may still be possible that the degree and time scale of mixing are sufficient to effect the necessary change. In the second scenario, the release is assumed to be within the crater. In this case, some mixing of this air with external crater air at background values can be assumed. Depending on the rate of mixing, it is possible that the value could decay to background levels in the given time. Thus, from a mixing standpoint, the second scenario seems at least plausible. Some preliminary work, including tracer gases in the model, is being performed to establish the amount of mixing during the limited-mixing epochs. Preliminary results may support the idea that during periods of limited mixing, there could be enough time for methane to bind to mineral surfaces activated through wind erosion. More work is needed to establish the amount of mixing and to determine which scenario is more likely. References: [1] Webster et al. (2013), LPI Contributions, 1719: 1366; [2] Webster et al. (2015), Science, vol. 347, no. 6220, 415-417; [3] Chastain and Chevrier (2007), Planet. Space Science, 55, 1246-1256; [4] Oze and Sharma (2005), Geophys. Res. Lett., 32, L10203; [5] Etiope et al. (2007), J. Volcanol. Geotherm. Res., 165, 76-86; [6] Reid et al. (2006), Int. J. Astrobiol., 5, 89-97; [7] Jensen et al. (2014), Icarus, 236, 24-27; [8] Rafkin, S. C. R. et al. (2001), Icarus, 151, 228-256; [9] Rafkin, S. C. R. et al. (2002), Nature, 419, 697-699; [10] Pla-Garcia et al. (2016), Icarus, accepted; [11] Rafkin, S. C. R. et al. (2016), Icarus, accepted.
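The mixing argument in this record can be made concrete with a simple box-model sketch: crater air relaxing exponentially toward an external background abundance with a mixing timescale τ. The timescale below is an arbitrary illustration, not a MRAMS result:

```python
import math

def crater_methane(t_sols, c0=7.2, c_bg=0.69, tau_sols=20.0):
    """Exponential relaxation of crater methane abundance (ppbv)
    from an initial spike c0 toward the external background c_bg,
    with an assumed mixing timescale tau_sols (sols)."""
    return c_bg + (c0 - c_bg) * math.exp(-t_sols / tau_sols)

# Spike of ~7.2 ppbv decaying toward the ~0.69 ppbv background
for t in (0, 20, 55):
    print(t, round(crater_methane(t), 2))
```

With a ~20-sol mixing time the spike falls close to background within ~55 sols, roughly the observed sol 520 to sol 575 window; whether real crater mixing is that fast is exactly the open question the abstract raises.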

  4. Impact of RO-desalted water on distribution water qualities.

    PubMed

    Taylor, J; Dietz, J; Randall, A; Hong, S

    2005-01-01

    A large-scale pilot distribution study was conducted to investigate the impacts of blending different source waters on distribution water quality, with an emphasis on metal release (i.e. corrosion). The principal source waters investigated were conventionally treated ground water (G1), surface water processed by enhanced treatment (S1), and seawater desalted by reverse osmosis membranes (RO). Due to the nature of the raw water quality and the associated treatment processes, G1 water had high alkalinity, while S1 and RO sources were characterized as high-sulfate and high-chloride waters, respectively. The blending ratio of the different treated waters determined the quality of the finished waters. Iron release from aged cast iron pipes increased significantly when exposed to RO and S1 waters: that is, greater iron release was experienced as alkalinity was reduced below the background level of G1 water. Copper release to drinking water, however, increased with increasing alkalinity and decreasing pH. Lead release, on the other hand, increased with increasing chloride and decreasing sulfate. The effect of pH and alkalinity on lead release was not clearly observed in the pilot blending study. The flat and compact corrosion scales observed on lead surfaces exposed to S1 water may be attributable to a lead concentration less than that of the RO water blends.

  5. Source Term Estimates of Radioxenon Released from the BaTek Medical Isotope Production Facility Using External Measured Air Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Cameron, Ian M.; Dumais, Johannes R.

    2015-10-01

    Batan Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia that supplies 99mTc for use in medical procedures. Atmospheric releases of Xe-133 in the production process at BaTek are known to influence the measurements taken at the closest stations of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The xenon isotopes released from BaTek are the same as those produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide whether a specific measurement result came from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84E13 Bq of Xe-133. Concentrations of Xe-133 in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88E13 Bq. The same optimization process yielded a release estimate of 1.70E13 Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10 percent of each other. Weekly release estimates of 1.8E13 Bq and a 40 percent facility operation rate yield a rough annual release estimate of 3.7E13 Bq of Xe-133. This value is consistent with previously published estimates of annual releases for this facility, which are based on measurements at three IMS stations. These multiple lines of evidence cross-validate the stack release estimates and the release estimates from atmospheric samplers.
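Because atmospheric transport is linear in the source term, an optimization of the kind described above reduces, in the single-source case, to a least-squares scale factor between measured concentrations and concentrations modelled for a unit release. A hedged sketch with made-up numbers (not the BaTek data or the paper's actual transport model):

```python
# Least-squares source-term scaling: if c_model_unit are concentrations
# predicted for a unit release (1 Bq), the release Q minimizing
# sum((c_obs - Q*c_model)^2) is dot(c_obs, c_model)/dot(c_model, c_model).

def estimate_release(c_obs, c_model_unit):
    num = sum(o * m for o, m in zip(c_obs, c_model_unit))
    den = sum(m * m for m in c_model_unit)
    return num / den

# Synthetic example: true release 1.88e13 Bq with multiplicative noise
c_unit = [2.0e-13, 5.0e-13, 1.0e-13, 8.0e-13]   # Bq/m^3 per Bq released
true_q = 1.88e13
c_obs = [true_q * c * f for c, f in zip(c_unit, [1.05, 0.95, 1.1, 1.0])]

q_hat = estimate_release(c_obs, c_unit)
print(f"{q_hat:.3e}")
```

The recovered value lands within a few percent of the synthetic truth, mirroring the ~10 percent agreement between the stack data and the optimized estimates in the abstract.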

  6. The influence of cage conditioning on the performance and behavior of Japanese flounder reared for stock enhancement: Burying, feeding, and threat response

    NASA Astrophysics Data System (ADS)

    Walsh, Michelle L.; Masuda, Reiji; Yamashita, Yoh

    2014-01-01

    Flatfish reared for stock enhancement often exhibit irregular behavioral patterns compared with wild conspecifics. These “deficits”, mostly attributed to the unnatural characteristics of the hatchery environment, are assumed to translate to increased predation risk. Initially releasing fish in predator-free conditioning cages may help flatfish adjust to the wild environment, establish burial skills, begin pigment change, recover from transport stress, and experience natural (live) food sources before full release into the wild. However, the impact of cage conditioning on the performance and behavior of flatfish has yet to be fully assessed. We conducted video trials with 10-cm, hatchery-reared Japanese flounder, Paralichthys olivaceus, in sand-bottomed aquaria to assess four treatments of flounder: (1) reared fish cage conditioned for 7 d in the shallow coast, (2) reared fish directly from hatchery tanks, (3) wild fish, and (4) reared fish released directly from hatchery tanks into the wild and then recaptured after 6 d at large. Burying ability, predation, and threat response to a model predator were examined. Wild fish buried most, followed by cage conditioned, and released-then-recaptured and non-conditioned (directly from tank) fish. Wild and conditioned fish revealed much lower variation in total movement duration, which corresponded with lower levels and variation in prey vertical movement. Fish of all condition types exhibited a lower number of attacks and off-bottom swimming events, and a lower movement duration when the model predator was in motion versus when it was still. This study is the first to evaluate the behavioral mechanisms of hatchery-reared flatfish that have been cage-conditioned or released-then-recaptured. In addition, we provide evidence that cage conditioning can enhance the performance of released flatfish.

  7. Empirical and mechanistic evaluation of NH4(+) release kinetic in calcareous soils.

    PubMed

    Ranjbar, F; Jalali, M

    2014-05-01

    Release, fixation, and distribution of ammonium (NH4(+)) as a source of nitrogen can play an important role in soil fertility and plant nutrition. In this study, ten surface soils, after addition of 1,000 mg NH4(+) kg(-1), were incubated for 1 week at field capacity moisture and 25 ± 2 °C, and then the NH4(+) release kinetics were investigated by sequential extractions with 10 mM CaCl2. Furthermore, the NH4(+) distribution among three fractions, including water-soluble, exchangeable, and non-exchangeable, was determined in all soil samples. NH4(+) release was initially rapid followed by a slower reaction, and this was described well by the Elovich equation as an empirical model. The cumulative NH4(+) concentration released in spiked soil samples had a significant positive correlation with sand content and negative ones with pH, exchangeable Ca(2+) and K(+), cation exchange capacity (CEC), equivalent calcium carbonate (ECC), and clay content. The cation exchange model in the PHREEQC program was successful in mechanistically simulating the release trend of native and added NH4(+) in all control and spiked soil samples. The results of the fractionation experiments showed that the non-exchangeable fraction in control and spiked soil samples was greater than the water-soluble and exchangeable fractions. Soil properties such as pH, exchangeable Ca(2+) and K(+), CEC, ECC, and the contents of sand and clay had significant influences on the distribution of NH4(+) among the three measured fractions. This study indicated that both native and recently fixed NH4(+), added to soil through the application of fertilizers, were readily available for plant roots during 1 week after exposure.
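The Elovich equation mentioned above is commonly used in its integrated form q(t) = (1/β)·ln(1 + αβt), where q is the cumulative NH4(+) released, α the initial release rate, and β a desorption constant. A small sketch with illustrative parameters (not values fitted in the paper):

```python
import math

def elovich_q(t, alpha, beta):
    """Integrated Elovich equation: cumulative release q(t) (mg/kg)
    for initial rate alpha (mg/kg/h) and desorption constant beta (kg/mg)."""
    return (1.0 / beta) * math.log(1.0 + alpha * beta * t)

# Illustrative parameters: release is fast early on, then slows
alpha, beta = 120.0, 0.05
q1, q10 = elovich_q(1, alpha, beta), elovich_q(10, alpha, beta)
print(round(q1, 1), round(q10, 1))
```

The logarithmic form reproduces the "initially rapid, then slower" pattern reported in the abstract: most of the 10-hour cumulative release already occurs in the first hour.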

  8. A refined method for calculating equivalent effective stratospheric chlorine

    NASA Astrophysics Data System (ADS)

    Engel, Andreas; Bönisch, Harald; Ostermöller, Jennifer; Chipperfield, Martyn P.; Dhomse, Sandip; Jöckel, Patrick

    2018-01-01

    Chlorine and bromine atoms lead to catalytic depletion of ozone in the stratosphere. Therefore the use and production of ozone-depleting substances (ODSs) containing chlorine and bromine is regulated by the Montreal Protocol to protect the ozone layer. Equivalent effective stratospheric chlorine (EESC) has been adopted as an appropriate metric to describe the combined effects of chlorine and bromine released from halocarbons on stratospheric ozone. Here we revisit the concept of calculating EESC. We derive a refined formulation of EESC based on an advanced concept of ODS propagation into the stratosphere and reactive halogen release. A new transit time distribution is introduced in which the age spectrum for an inert tracer is weighted with the release function for inorganic halogen from the source gases. This distribution is termed the release time distribution. We show that a much better agreement with the inorganic halogen loading from the chemistry transport model TOMCAT is achieved compared with the current formulation. The refined formulation shows EESC levels in the year 1980 for the mid-latitude lower stratosphere that are significantly lower than previously calculated. The year 1980 is commonly used as a benchmark to which EESC must return in order to mark significant progress towards halogen and ozone recovery. Assuming that, under otherwise unchanged conditions, the EESC value must return to the same level in order for ozone to fully recover, we show that recovery in this region of the stratosphere will take more than 10 years longer than estimated with the current method for calculating EESC. We also present a range of sensitivity studies to investigate the effect of changes and uncertainties in the fractional release factors and in the assumptions on the shape of the release time distributions. 
We further discuss the value of EESC as a proxy for future evolution of inorganic halogen loading under changing atmospheric dynamics using simulations from the EMAC model. We show that while the expected changes in stratospheric transport lead to significant differences between EESC and modelled inorganic halogen loading at constant mean age, EESC is a reasonable proxy for modelled inorganic halogen on a constant pressure level.
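Schematically, the EESC contribution of one source gas is its chlorine atom count times a fractional release factor, applied to past tropospheric mixing ratios weighted by a transit-time distribution (bromine terms get an extra efficiency factor α). A toy discrete version of this convolution, with an invented mixing-ratio history and an invented bell-shaped stand-in for the release time distribution:

```python
import math

def age_spectrum(ages, mean_age=3.0, width=1.5):
    """Normalized bell-shaped weights over discrete ages (years); a
    simplistic stand-in for the transit-time / release time distribution."""
    w = [math.exp(-((a - mean_age) ** 2) / (2 * width ** 2)) for a in ages]
    s = sum(w)
    return [x / s for x in w]

def eesc(history, year, n_cl, frf, ages, alpha=1.0):
    """EESC contribution of one gas: halogen atoms times fractional
    release (frf), convolved with the age spectrum over past
    tropospheric mixing ratios; alpha scales bromine efficiency."""
    weights = age_spectrum(ages)
    return alpha * n_cl * frf * sum(
        w * history[year - a] for w, a in zip(weights, ages))

# Invented CFC-like mixing-ratio history (ppt): rising, then flat at 500
history = {y: min(500.0, 10.0 * (y - 1945)) for y in range(1950, 2001)}
ages = [1, 2, 3, 4, 5, 6]

# 3 Cl atoms, fractional release 0.47 (illustrative numbers only)
print(round(eesc(history, 2000, 3, 0.47, ages), 1))
```

Because the convolution samples earlier years, the stratospheric loading lags the tropospheric value; the refined formulation in the paper changes exactly the shape of these weights.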

  9. Combined statistical analysis of landslide release and propagation

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. 
    We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by the typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. Therefore we conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
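The zonal release probability of step (5) follows the standard independence argument: if each of the n pixels in a release zone releases independently with probability p_i, the chance that at least one releases is 1 − Π(1 − p_i), which grows with zone size. A sketch with invented pixel probabilities:

```python
def zonal_release_probability(pixel_probs):
    """Probability that at least one pixel in the zone releases,
    assuming independent pixels: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in pixel_probs:
        q *= (1.0 - p)
    return 1.0 - q

small_zone = [0.02] * 10    # 10 pixels at 2% each
large_zone = [0.02] * 100   # same per-pixel probability, larger zone

p_small = zonal_release_probability(small_zone)
p_large = zonal_release_probability(large_zone)
print(round(p_small, 3), round(p_large, 3))
```

This is why the abstract finds the size of the release zone dominating the composite impact probability: at equal per-pixel probability, the zonal probability rises steeply with zone area before the final multiplication with the impact probability.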

  10. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied for estimating the source parameters, which can effectively accelerate the process of convergence. The method is verified by the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
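Training scenarios for such an ANN are typically generated with a fast forward model; a common choice is the ground-reflected Gaussian plume, C = Q/(2π u σy σz)·exp(−y²/2σy²)·[exp(−(z−H)²/2σz²) + exp(−(z+H)²/2σz²)]. A hedged sketch of generating one (inputs → concentration) training pair; the linear dispersion coefficients below are a simplistic assumption, not the paper's parameterization:

```python
import math

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration (g/m^3) for source
    rate q (g/s), wind speed u (m/s), receptor (x, y, z) and stack
    height h (m). sigma_y = a*x and sigma_z = b*x are simplistic
    linear growth laws standing in for stability-class curves."""
    sy, sz = a * x, b * x
    lateral = math.exp(-y * y / (2 * sy * sy))
    vertical = (math.exp(-((z - h) ** 2) / (2 * sz * sz))
                + math.exp(-((z + h) ** 2) / (2 * sz * sz)))
    return q / (2 * math.pi * u * sy * sz) * lateral * vertical

# One training sample for the surrogate: scenario inputs -> concentration
c = gaussian_plume(q=50.0, u=3.0, x=500.0, y=20.0, z=1.5, h=10.0)
print(f"{c:.3e}")
```

Sweeping q, u and the receptor coordinates over many such scenarios yields the pre-determined training set; the trained ANN then replaces the forward model inside the PSO/EM source-estimation loop.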

  11. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.

  12. Precipitation Processes Derived from TRMM Satellite Data, Cloud Resolving Model and Field Campaigns

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Lang, S.; Simpson, J.; Meneghini, R.; Halverson, J.; Johnson, R.; Adler, R.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Rainfall is a key link in the hydrologic cycle and is a primary heat source for the atmosphere. The vertical distribution of latent-heat release, which accompanies rainfall, modulates the large-scale circulations of the tropics and in turn can impact midlatitude weather. This latent heat release is a consequence of phase changes between vapor, liquid, and solid water. Present large-scale weather and climate models can simulate cloud latent heat release only crudely, reducing confidence in their predictions on both global and regional scales. In this paper, NASA Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) derived rainfall information and the Goddard Convective and Stratiform Heating (CSH) algorithm are used to estimate the four-dimensional structure of global monthly latent heating and rainfall profiles over the global tropics from December 1997 to October 2000. The rainfall, latent heating and radar reflectivity structure between ENSO (1997-1998 winter) and non-ENSO (1998-1999 winter) periods are examined and compared. The seasonal variation of heating over various geographic locations (i.e., Indian Ocean vs. west Pacific; Africa vs. S. America) is also analyzed. In addition, the relationship between rainfall latent heating (maximum heating level), radar reflectivity and SST is examined.

  13. Derivation of risk indices and analysis of variability for the management of incidents involving the transport of nuclear materials in the Northern Seas.

    PubMed

    Brown, J; Hosseini, A; Karcher, M; Kauker, F; Dowdall, M; Schnur, R; Strand, P

    2016-04-15

    The transport of nuclear or radioactive materials and the presence of nuclear powered vessels pose risks to the Northern Seas in terms of potential impacts to man and the environment as well as socio-economic impacts. Management of incidents involving actual or potential releases to the marine environment is potentially difficult due to the complexity of the environment into which the release may occur and difficulties in quantifying risk to both man and the environment. In order to address this, a state-of-the-art oceanographic model was used to characterize the underlying variability for a specific radionuclide release scenario. The resultant probabilistic data were used as inputs to transfer and dose models, providing an indication of potential impacts for man and the environment. This characterization was then employed to facilitate a rapid means of quantifying risk to man and the environment that included and addressed this variability. The radionuclide-specific risk indices derived can be applied by simply multiplying the reported values by the magnitude of the source term and thereafter summing over all radionuclides to provide an indication of total risk. Copyright © 2016. Published by Elsevier Ltd.
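Applying radionuclide-specific risk indices as described above is a simple weighted sum: multiply each index by the released activity of that radionuclide and sum. A tiny sketch; the index values and source term below are invented, not values from the paper:

```python
# Hypothetical per-radionuclide risk indices (risk per Bq released)
# and a hypothetical source term (Bq released); illustrative only.
risk_index = {"Cs-137": 2.0e-16, "Sr-90": 5.0e-17, "Pu-239": 8.0e-16}
source_term = {"Cs-137": 1.0e15, "Sr-90": 4.0e14, "Pu-239": 2.0e12}

# Total risk = sum over radionuclides of (index * released activity)
total_risk = sum(risk_index[nuc] * source_term[nuc] for nuc in source_term)
print(f"{total_risk:.4f}")
```

The tabulated indices absorb all of the transport, transfer and dose modelling, which is what makes this multiply-and-sum usable in rapid incident management.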

  14. Chlorine truck attack consequences and mitigation.

    PubMed

    Barrett, Anthony Michael; Adams, Peter J

    2011-08-01

    We develop and apply an integrated modeling system to estimate fatalities from intentional release of 17 tons of chlorine from a tank truck in a generic urban area. A public response model specifies locations and actions of the populace. A chemical source term model predicts initial characteristics of the chlorine vapor and aerosol cloud. An atmospheric dispersion model predicts cloud spreading and movement. A building air exchange model simulates movement of chlorine from outdoors into buildings at each location. A dose-response model translates chlorine exposures into predicted fatalities. Important parameters outside defender control include wind speed, atmospheric stability class, amount of chlorine released, and dose-response model parameters. Without fast and effective defense response, with 2.5 m/sec wind and stability class F, we estimate approximately 4,000 (half within ∼10 minutes) to 30,000 fatalities (half within ∼20 minutes), depending on dose-response model. Although we assume 7% of the population was outdoors, they represent 60-90% of fatalities. Changing weather conditions result in approximately 50-90% lower total fatalities. Measures such as sheltering in place, evacuation, and use of security barriers and cryogenic storage can reduce fatalities, sometimes by 50% or more, depending on response speed and other factors. © 2011 Society for Risk Analysis.
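Dose-response models for acutely toxic gases of the kind used in this record are often probit relations, Pr = a + b·ln(Cⁿ·t), with the fatality fraction given by the standard normal CDF of (Pr − 5). A sketch with illustrative chlorine-like coefficients; the constants vary widely between published models (which is exactly the dose-response sensitivity the abstract reports), so these values are assumptions, not the paper's:

```python
import math

def fatality_fraction(c_ppm, t_min, a=-8.29, b=0.92, n=2.0):
    """Probit dose-response: Pr = a + b*ln(C^n * t); the fatality
    fraction is the standard normal CDF evaluated at (Pr - 5).
    The coefficients are illustrative chlorine-like values only."""
    pr = a + b * math.log((c_ppm ** n) * t_min)
    return 0.5 * (1.0 + math.erf((pr - 5.0) / math.sqrt(2.0)))

# Fatality fraction rises steeply with concentration at fixed exposure time
for c in (100.0, 400.0, 1000.0):
    print(c, round(fatality_fraction(c, 10.0), 3))
```

The steepness of this curve is why indoor exposure (lower C for the same cloud) and sheltering in place can cut fatalities so sharply in the study's scenarios.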

  15. Isotopic composition of Murchison organic compounds: Intramolecular carbon isotope fractionation of acetic acid. Simulation studies of cosmochemical organic syntheses

    NASA Technical Reports Server (NTRS)

    Yuen, G. U.; Cronin, J. R.; Blair, N. E.; Desmarais, D. J.; Chang, S.

    1991-01-01

    Recently, in our laboratories, samples of Murchison acetic acid were decarboxylated successfully and the carbon isotopic composition was measured for the methane released by this procedure. These analyses showed significant differences in C-13/C-12 ratios for the methyl and carboxyl carbons of the acetic acid molecule, strongly suggesting that more than one carbon source may be involved in the synthesis of the Murchison organic compounds. On the basis of this finding, laboratory model systems simulating cosmochemical synthesis are being studied, especially those processes capable of involving two or more starting carbon sources.
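Carbon isotope compositions like those measured here are conventionally reported in delta notation, δ13C = (R_sample/R_standard − 1) × 1000 ‰, relative to the PDB standard. A small sketch; the position-specific ratios below are invented for illustration, not the Murchison measurements:

```python
R_PDB = 0.0112372  # 13C/12C ratio of the PDB reference standard

def delta13c(r_sample, r_std=R_PDB):
    """Delta notation in permil: (R_sample/R_std - 1) * 1000."""
    return (r_sample / r_std - 1.0) * 1000.0

# Invented 13C/12C ratios for the two carbon positions of acetic acid;
# a nonzero intramolecular difference suggests distinct carbon sources.
r_methyl, r_carboxyl = 0.011150, 0.011350
d_methyl, d_carboxyl = delta13c(r_methyl), delta13c(r_carboxyl)
print(round(d_methyl, 1), round(d_carboxyl, 1), round(d_carboxyl - d_methyl, 1))
```

An intramolecular spread of tens of permil between the methyl and carboxyl positions is the kind of signal that points to more than one carbon reservoir feeding the synthesis.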

  16. Apparent Explosion Moments from Rg Waves Recorded on SPE: Implications for the Late-Time Damage Source Model

    NASA Astrophysics Data System (ADS)

    Patton, H. J.; Larmat, C. S.; Rougier, E.

    2016-12-01

    Seismic moments for chemical shots making up Phase I of the Source Physics Experiments (SPE) are estimated from 6 Hz Rg waves under the assumption that the shots are pure explosions. These apparent explosion moments are compared to moments determined using the Reduced Displacement Potential (RDP) method applied to free field data. LIDAR/photogrammetry observations, strong ground motions on the free surface near ground zero, and moment tensor inversion results are evidence in support of the fourth shot SPE-4P being essentially a pure explosion. The apparent moment for SPE-4P is 9 × 1010 Nm in good agreement with the RDP moment 8 × 1010 Nm. In stark contrast, apparent moments for the first three shots are three to four times smaller than RDP moments. Data show that spallation occurred on these shots, as well as permanent deformations detected with ground-based LIDAR. As such, the source medium suffered late-time damage. The late-time damage source model predicts destructive interference between Rg waves radiated by explosion and damage sources, which reduces amplitudes and explains why apparent moments are smaller than RDP moments based on compressional energy emitted directly from the source. SPE-5 was conducted at roughly the same yield-scaled burial depth as SPE-2 and -3, but with five times the yield. As such, the damage source model predicts less reduction of apparent moment. At this writing, preliminary results from Rg interferometry and RDP moments confirm this prediction. SPE-6 is scheduled for the fall of 2016, and it should have the strongest damage source of all SPE shots. The damage model predicts that the polarity of Rg waves could be reversed. Realization of this prediction will be strong confirmation of the late-time damage source model. This abstract has a Los Alamos National Laboratory Unlimited Release Number LA-UR-16-25709.

  17. Open source integrated modeling environment Delta Shell

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies this task. The software components of Delta Shell are easy to reuse, both separately from each other and as part of an integrated environment that can run in a command-line or a graphical user interface mode. Most components of Delta Shell are developed in the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models from the end-user and developer perspectives. The first example shows coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model (XBeach) and a custom nourishment designer. Delta Shell is available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.
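The first coupling example can be illustrated with a minimal time-stepping loop. Delta Shell itself is a C#/.NET environment; the Python sketch below only mirrors the coupling pattern, and the component names, coefficients and control rule are invented for illustration:

```python
class RainfallRunoff:
    """Toy rainfall-runoff component: a fixed fraction of rain becomes runoff."""
    def step(self, rain_mm):
        return 0.6 * rain_mm            # hypothetical runoff coefficient

class RiverFlow:
    """Toy river component: linear-reservoir routing of lateral inflow."""
    def __init__(self):
        self.discharge = 0.0
    def step(self, inflow):
        self.discharge = 0.8 * self.discharge + inflow   # hypothetical recession
        return self.discharge

class RunTimeControl:
    """Toy run-time control component: a gate caps released discharge."""
    def step(self, discharge):
        return min(discharge, 30.0)     # hypothetical gate capacity

# One coupled time loop: each component consumes the previous one's output.
runoff, river, control = RainfallRunoff(), RiverFlow(), RunTimeControl()
released = [control.step(river.step(runoff.step(rain)))
            for rain in [0.0, 10.0, 40.0, 80.0, 20.0, 0.0, 0.0]]
```

The value of an integrated environment is that such components expose a common stepping interface, so the coupling loop does not depend on what each model computes internally.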

  18. Impacts of water quality on the corrosion of cast iron pipes for water distribution and proposed source water switch strategy.

    PubMed

    Hu, Jun; Dong, Huiyu; Xu, Qiang; Ling, Wencui; Qu, Jiuhui; Qiang, Zhimin

    2018-02-01

Switching source water may induce "red water" episodes. This study investigated the impacts of water quality on iron release, dissolved oxygen consumption (ΔDO), corrosion scale evolution and bacterial community succession in cast iron pipes used for drinking water distribution at pilot scale, and proposed a source water switch strategy accordingly. Three sets of old cast iron pipe sections (named BP, SP and GP), which had historically transported blended water, surface water and groundwater, respectively, were excavated on site and assembled in a test base. Results indicate that an increasing Cl⁻ or SO₄²⁻ concentration accelerated iron release, whereas alkalinity and calcium hardness exhibited an opposite tendency. A disinfectant shift from free chlorine to monochloramine slightly inhibited iron release, while the impact of peroxymonosulfate depended on the source water historically transported in the test pipes. The ΔDO was highly consistent with iron release in all three pipe systems. The mass ratio of magnetite to goethite in the corrosion scales of SP was higher than those of BP and GP and remained almost unchanged over the whole operation period. Siderite and calcite formation confirmed that increasing alkalinity and hardness inhibited iron release. Iron-reducing bacteria decreased in the BP but increased in the SP and GP; meanwhile, sulfur-oxidizing, sulfate-reducing and iron-oxidizing bacteria increased in all three pipe systems. To avoid the occurrence of "red water", a source water switch strategy was proposed based on the difference between local and foreign water qualities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Arctic Sea Salt Aerosol from Blowing Snow and Sea Ice Surfaces - a Missing Natural Source in Winter

    NASA Astrophysics Data System (ADS)

    Frey, M. M.; Norris, S. J.; Brooks, I. M.; Nishimura, K.; Jones, A. E.

    2015-12-01

Atmospheric particles in the polar regions consist mostly of sea salt aerosol (SSA). SSA plays an important role in regional climate change by influencing the surface energy balance either directly or indirectly via cloud formation. SSA irradiated by sunlight also releases very reactive halogen radicals, which control concentrations of ozone, a pollutant and greenhouse gas. However, models under-predict SSA concentrations in the Arctic during winter, pointing to a missing source. It has recently been suggested that evaporating salty blowing snow above sea ice is that source, as it may produce more SSA than equivalent areas of open ocean. Participation in the Norwegian Young Sea Ice Cruise (N-ICE 2015) on board the research vessel Lance allowed us to test this hypothesis in the Arctic sea ice zone during winter. Measurements were carried out from the ship, frozen into the pack ice north of 80° N, during February and March 2015. Observations at ground level (0.1-2 m) and from the ship's crow's nest (30 m) included number concentrations and size spectra of SSA (diameter range 0.3-10 μm) as well as snow particles (diameter range 50-500 μm). During and after blowing snow events, significant SSA production was observed. In the aerosol and snow phase, sulfate is fractionated with respect to sea water, which confirms that sea ice surfaces and salty snow, and not the open ocean, are the dominant source of airborne SSA. The aerosol shows depletion in bromide with respect to sea water, especially after sunrise, indicating photochemically driven release of bromine. We discuss the SSA source strength from blowing snow in light of environmental conditions (wind speed, atmospheric turbulence, temperature and snow salinity) and recommend improved model parameterisations to estimate regional aerosol production. N-ICE 2015 results are then compared to a similar study carried out previously in the Weddell Sea during the Antarctic winter.

  20. Characteristics of broadband slow earthquakes explained by a Brownian model

    NASA Astrophysics Data System (ADS)

    Ide, S.; Takeo, A.

    2017-12-01

The Brownian slow earthquake (BSE) model (Ide, 2008; 2010) is a stochastic model for the temporal change of seismic moment release by slow earthquakes, which can be considered a broadband phenomenon including tectonic tremors, low frequency earthquakes, and very low frequency (VLF) earthquakes in the seismological frequency range, and slow slip events in the geodetic range. Although the concept of a broadband slow earthquake may not have been widely accepted, most recent observations are consistent with it. Here we review the characteristics of slow earthquakes and how they are explained by the BSE model. In the BSE model, the characteristic size of the slow earthquake source is represented by a random variable, changed by a Gaussian fluctuation added at every time step. The model also includes a time constant, which divides the model behavior into short- and long-time regimes. In nature, the time constant corresponds to the spatial limit of the tremor/SSE zone. In the long-time regime, the seismic moment rate is constant, which explains the moment-duration scaling law (Ide et al., 2007). For shorter durations, the moment rate increases with size, as often observed for VLF earthquakes (Ide et al., 2008). The ratio between seismic energy and seismic moment is constant, as shown in Japan, Cascadia, and Mexico (Maury et al., 2017). The moment rate spectrum has a section of -1 slope, limited by two frequencies corresponding to the above time constant and the time increment of the stochastic process. Such broadband spectra have been observed for slow earthquakes near the trench axis (Kaneko et al., 2017). This spectrum also explains why VLF signals can be obtained by stacking broadband seismograms relative to tremor occurrence (e.g., Takeo et al., 2010; Ide and Yabe, 2014). The fluctuation in the BSE model can be non-Gaussian, as long as the variance is finite, as supported by the central limit theorem.
Recent observations suggest that tremors and LFEs are spatially characteristic, rather than random (Rubin and Armbruster, 2013; Bostock et al., 2015). Since even a spatially characteristic source must be activated randomly in time, moment release from these sources is compatible with the fluctuation in the BSE model. Therefore, the BSE model contains, as a special case, the model of Gomberg et al. (2016), which suggests that clusters of LFEs make VLF signals.
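The core ingredient of the BSE model, a source size driven by a Gaussian fluctuation at every time step but bounded on long time scales, can be sketched as a simple discretized random walk. The relaxation term below stands in for the finite tremor/SSE zone that sets the time constant; the parameter values are illustrative, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(42)

dt, tau, n = 1.0, 100.0, 200_000   # time step, time constant, number of steps

# Source size s: a Gaussian increment at every step, relaxed toward zero
# over the time constant tau so the walk stays bounded (long-time regime).
noise = rng.normal(0.0, np.sqrt(dt), size=n)
s = np.empty(n)
s[0] = 0.0
for i in range(1, n):
    s[i] = s[i - 1] * (1.0 - dt / tau) + noise[i]

# In the long-time regime the process is stationary: for unit noise its
# variance settles near tau/2, so moment release averaged over windows much
# longer than tau grows linearly with duration (constant moment rate).
sample_variance = s.var()
```

For time scales between dt and tau the walk behaves diffusively, which is the regime where the moment rate depends on source size; the two limits correspond to the two corner frequencies bounding the -1-slope section of the spectrum.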

  1. Analysis of reactive bromine production and ozone depletion in the Arctic boundary layer using 3-D simulations with GEM-AQ: inference from synoptic-scale patterns

    NASA Astrophysics Data System (ADS)

    Toyota, K.; McConnell, J. C.; Lupu, A.; Neary, L.; McLinden, C. A.; Richter, A.; Kwok, R.; Semeniuk, K.; Kaminski, J. W.; Gong, S.-L.; Jarosz, J.; Chipperfield, M. P.; Sioris, C. E.

    2011-04-01

Episodes of high bromine levels and surface ozone depletion in the springtime Arctic are simulated by an online air-quality model, GEM-AQ, with gas-phase and heterogeneous reactions of inorganic bromine species and a simple scheme of air-snowpack chemical interactions implemented for this study. Snowpack on sea ice is assumed to be the only source of bromine to the atmosphere and to be capable of converting relatively stable bromine species to photolabile Br2 via air-snowpack interactions. A set of sensitivity model runs is performed for April 2001 at a horizontal resolution of approximately 100 km×100 km in the Arctic, to provide insights into the effects of temperature and the age (first-year, FY, versus multi-year, MY) of sea ice on the release of reactive bromine to the atmosphere. The model simulations capture much of the temporal variations in surface ozone mixing ratios as observed at stations in the high Arctic and the synoptic-scale evolution of areas with enhanced BrO column amount ("BrO clouds") as estimated from satellite observations. The simulated "BrO clouds" are in modestly better agreement with the satellite measurements when the FY sea ice is assumed to be more efficient at releasing reactive bromine to the atmosphere than the MY sea ice. Surface ozone data from the coastal stations used in this study are not sufficient to unambiguously evaluate the difference between FY and MY sea ice as bromine sources. The results strongly suggest that reactive bromine is released ubiquitously from the snow on the sea ice during the Arctic spring, while the timing and location of the bromine release are largely controlled by meteorological factors. It appears that rapid advection and enhanced turbulent diffusion associated with strong boundary-layer winds drive transport and dispersion of ozone to the near-surface air over the sea ice, increasing the oxidation rate of bromide (Br-) in the surface snow.
Also, if the surface snowpack does indeed supply most of the reactive bromine in the Arctic boundary layer, it appears to be capable of releasing reactive bromine at temperatures as high as -10 °C, particularly on the sea ice in the central and eastern Arctic Ocean. Dynamically-induced BrO column variability in the lowermost stratosphere appears to interfere with the use of satellite BrO column measurements for interpreting BrO variability in the lower troposphere, but probably not to the extent of totally obscuring "BrO clouds" that originate from the surface snow/ice source of bromine in the high Arctic. A budget analysis of the simulated air-surface exchange of bromine compounds suggests that a "bromine explosion" occurs in the interstitial air of the snowpack and/or is accelerated by heterogeneous reactions on the surface of wind-blown snow in ambient air; neither process is represented explicitly in our simple model, but both could have been approximated by a parameter adjustment for the yield of Br2 from the trigger.

  2. Point source sulphur dioxide peaks and hospital presentations for asthma.

    PubMed

    Donoghue, A M; Thomas, M

    1999-04-01

    To examine the effect on hospital presentations for asthma of brief exposures to sulphur dioxide (SO2) (within the range 0-8700 micrograms/m3) emanating from two point sources in a remote rural city of 25,000 people. A time series analysis of SO2 concentrations and hospital presentations for asthma was undertaken at Mount Isa where SO2 is released into the atmosphere by a copper smelter and a lead smelter. The study examined 5 minute block mean SO2 concentrations and daily hospital presentations for asthma, wheeze, or shortness of breath. Generalised linear models and generalised additive models based on a Poisson distribution were applied. There was no evidence of any positive relation between peak SO2 concentrations and hospital presentations or admissions for asthma, wheeze, or shortness of breath. Brief exposures to high concentrations of SO2 emanating from point sources at Mount Isa do not cause sufficiently serious symptoms in asthmatic people to require presentation to hospital.

  3. The Detection of Evolved Oxygen from the Rocknest Eolian Bedform Material by the Sample Analysis at Mars (SAM) Instrument at the Mars Curiosity Landing Site

    NASA Technical Reports Server (NTRS)

    Sutter, B.; Archer, D.; Ming, D.; Eigenbrode, J. L.; Franz, H.; Glavin, D. P.; McAdam, A.; Mahaffy, P.; Stern, J.; Navarro-Gonzalex, R.

    2013-01-01

The Sample Analysis at Mars (SAM) instrument onboard the Curiosity rover detected an O2 gas release from the Rocknest eolian bedform (Fig. 1). The detection of perchlorate (ClO4-) by the Mars Phoenix Lander's Wet Chemistry Laboratory (WCL) [1] suggests that perchlorate is a possible candidate for the evolved O2 release detected by SAM. The perchlorate would also serve as a source of chlorine in the chlorinated hydrocarbons detected by the SAM quadrupole mass spectrometer (QMS) and gas chromatograph/mass spectrometer (GCMS) [2,3]. Chlorates (ClO3-) [4,5] and/or superoxides [6] may also be sources of evolved O2 from the Rocknest materials. The objectives of this work are to 1) evaluate the O2 release temperatures from Rocknest materials, 2) compare these O2 release temperatures with those of a series of perchlorates and chlorates, and 3) evaluate superoxide (O2-) sources and possible perchlorate interactions with other Rocknest phases during QMS analysis.

  4. Markers of inflammation in alveolar cells exposed to fine particulate matter from prescribed fires and urban air.

    PubMed

    Myatt, Theodore A; Vincent, Michael S; Kobzik, Lester; Naeher, Luke P; MacIntosh, David L; Suh, Helen

    2011-10-01

To assess the effect of fine particulate matter (PM2.5) from different particle sources on tumor necrosis factor-α (TNF-α), we measured TNF production from rat alveolar macrophages (AM) and human dendritic cells (DC) exposed to PM2.5 from different sources. Fire-related PM2.5 samples, as well as rural ambient and urban indoor and outdoor samples, were collected in the southeastern United States. TNF release was measured from rat AM and human DC following incubation with PM2.5. TNF release in AMs was greatest for fire-related PM2.5 compared with the other samples (TNF: P = 0.005; mortality: P = 0.005). TNF releases from DCs and AMs exposed to fire-associated PM2.5 were strongly correlated (r = 0.87, P < 0.0001). Particulate matter exposure produces TNF release consistent with pulmonary inflammation in rat AMs and human DCs, with the response in rat AMs differing by particle source.

  5. Identification of dust storm source areas in West Asia using multiple environmental datasets.

    PubMed

    Cao, Hui; Amiraslani, Farshad; Liu, Jian; Zhou, Na

    2015-01-01

Sand and dust storms are common phenomena in arid and semi-arid areas. The West Asia region, especially the Tigris-Euphrates alluvial plain, has been recognized as one of the most important dust source areas in the world. In this paper, a method is applied to extract SDS (Sand and Dust Storm) sources in the West Asia region using thematic maps, climate and geography, the HYSPLIT model and satellite images. Of the 50 dust storms that occurred during 2000-2013 and were collected in the form of MODIS images, 27 events were used to demonstrate the trajectories simulated by the HYSPLIT model. In addition, a dataset of newly released Landsat images was used as the base map for the interpretation of SDS source regions. As a result, six main clusters were recognized as dust source areas, of which three clusters situated in the Tigris-Euphrates plain were identified as severe SDS sources (accounting for 70% of the dust storms in this research). Another cluster, in the Sistan plain, is also a potential source area. The approach also confirmed six main paths causing dust storms. These paths are driven by the climate system, including the Siberian and Polar anticyclones, the monsoon from the Indian Subcontinent and depressions from the north of Africa. The identification of SDS source areas and paths will improve our understanding of the mechanisms and impacts of dust storms on the socio-economy and environment of the region. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Assessment of effectiveness of geologic isolation systems. CIRMIS data system. Volume 4. Driller's logs, stratigraphic cross section and utility routines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for use by the hydrologic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. The CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the fourth of four volumes of the description of the CIRMIS Data System.

  7. Earth Gravitational Model 2020

    NASA Astrophysics Data System (ADS)

    Barnes, Daniel; Holmes, Simon; Factor, John; Ingalls, Sarah; Presicci, Manny; Beale, James

    2017-04-01

The National Geospatial-Intelligence Agency [NGA], in conjunction with its U.S. and international partners, has begun preliminary work on its next Earth Gravitational Model, to replace EGM2008. The new 'Earth Gravitational Model 2020' [EGM2020] has an expected public release date of 2020, and will likely retain the same harmonic basis and resolution as EGM2008. As such, EGM2020 will be essentially an ellipsoidal harmonic model up to degree (n) and order (m) 2159, but will be released as a spherical harmonic model to degree 2190 and order 2159. EGM2020 will benefit from new data sources and procedures. Updated satellite gravity information from the GOCE and GRACE missions will better support the lower harmonics globally. Multiple new acquisitions (terrestrial, airborne and shipborne) of gravimetric data over specific geographical areas will provide improved global coverage and resolution over the land, as well as for coastal and some ocean areas. The ongoing accumulation of satellite altimetry data, as well as improvements in the treatment of these data, will better define the marine gravity field, most notably in polar and near-coastal regions. NGA and partners are evaluating different approaches for optimally combining the new GOCE/GRACE satellite gravity models with the terrestrial data. These include the latest methods employing a full covariance adjustment. NGA is also working to systematically assess the quality of its entire gravimetry database, towards correcting biases and other egregious errors where possible, and generating improved error models that will inform the final combination with the latest satellite gravity models. Outdated data gridding procedures have been replaced with improved approaches. For EGM2020, NGA intends to extract maximum value from the proprietary data that overlap geographically with unrestricted data, whilst making sure to respect and honor its proprietary agreements with its data-sharing partners.
Approved for Public Release, 15-564

  8. The Core Flight System (cFS) Community: Providing Low Cost Solutions for Small Spacecraft

    NASA Technical Reports Server (NTRS)

    McComas, David; Wilmot, Jonathan; Cudmore, Alan

    2016-01-01

    In February 2015 the NASA Goddard Space Flight Center (GSFC) completed the open source release of the entire Core Flight Software (cFS) suite. After the open source release a multi-NASA center Configuration Control Board (CCB) was established that has managed multiple cFS product releases. The cFS was developed and is being maintained in compliance with the NASA Class B software development process requirements and the open source release includes all Class B artifacts. The cFS is currently running on three operational science spacecraft and is being used on multiple spacecraft and instrument development efforts. While the cFS itself is a viable flight software (FSW) solution, we have discovered that the cFS community is a continuous source of innovation and growth that provides products and tools that serve the entire FSW lifecycle and future mission needs. This paper summarizes the current state of the cFS community, the key FSW technologies being pursued, the development/verification tools and opportunities for the small satellite community to become engaged. The cFS is a proven high quality and cost-effective solution for small satellites with constrained budgets.

  9. Seasonally-Dynamic SPARROW Modeling of Nitrogen Flux Using Earth Observation Data

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A. B.; Moore, R. B.; Shih, J.; Nolin, A. W.; Macauley, M.; Alexander, R. B.

    2013-12-01

SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially-referenced explanatory information. But many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. In this study, we describe dynamically calibrated SPARROW models of total nitrogen flux in three sub-regional watersheds: the Potomac River Basin, Long Island Sound drainage, and coastal South Carolina drainage. The models are based on seasonal water quality and watershed input data for a total of 170 monitoring stations for the period 2001 to 2008. Frequently-reported, spatially-detailed input data on the phenology of agricultural production, terrestrial vegetation growth, and snow melt are often challenging requirements of seasonal modeling of reactive nitrogen. In this NASA-funded research, we use Enhanced Vegetation Index (EVI), gross primary production and snow/ice cover data from MODIS to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The spatial reference frames of the models are 1:100,000-scale stream networks, and the computational time steps are 0.25-year seasons. Precipitation and temperature data are from PRISM. The model formulation accounts for storage of nitrogen from nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression.
Once calibrated, model source terms based on previous-season export allow recursive dynamic simulation of stream flux: gradual increases or decreases in export occur as source supply rates and hydrologic forcing change. Based on the assumption that removal of nitrogen from watershed storage to stream channels and to 'permanent' sinks (e.g. the atmosphere and deep groundwater) occurs as parallel first-order processes, the models can be used to estimate the approximate residence times of nonpoint-source nitrogen in the watersheds.
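Under the stated assumption of parallel first-order removal to streams and to permanent sinks, the recursive seasonal simulation can be sketched as follows. The rate constants are hypothetical, and the actual calibrated SPARROW formulation is considerably richer; this only illustrates the recursion:

```python
def simulate_stream_flux(seasonal_inputs, k_stream=0.15, k_sink=0.05):
    """Recurse watershed nitrogen storage season by season.

    Each season, the stored mass plus new input is partitioned: the
    fraction k_stream is exported to the stream channel, k_sink is lost
    to permanent sinks, and the remainder carries over (both removals
    acting as parallel first-order processes).
    """
    storage, fluxes = 0.0, []
    for q in seasonal_inputs:
        available = storage + q
        stream = k_stream * available
        sink = k_sink * available
        storage = available - stream - sink
        fluxes.append(stream)
    return fluxes

# With constant input, the stream flux approaches the steady-state share
# k_stream / (k_stream + k_sink) of the input, and the mean residence
# time of stored nitrogen is 1 / (k_stream + k_sink) seasons.
fluxes = simulate_stream_flux([1.0] * 100)
residence_seasons = 1.0 / (0.15 + 0.05)
```

With these illustrative constants the flux converges to 0.75 of the input and the residence time is 5 seasons; when inputs or rates change, the flux adjusts gradually, which is the dynamic behavior described above.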

  10. Source term estimates of radioxenon released from the BaTek medical isotope production facility using external measured air concentrations.

    PubMed

    Eslinger, Paul W; Cameron, Ian M; Dumais, Johannes Robert; Imardjoko, Yudi; Marsoem, Pujadi; McIntyre, Justin I; Miley, Harry S; Stoehlker, Ulrich; Widodo, Susilo; Woods, Vincent T

    2015-10-01

BATAN Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia that supplies 99mTc for use in medical procedures. Atmospheric releases of 133Xe in the production process at BaTek are known to influence the measurements taken at the closest stations of the radionuclide network of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The major xenon isotopes released from BaTek are also produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide if a specific measurement result could have originated from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84 × 10^13 Bq of 133Xe. Concentrations of 133Xe in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88 × 10^13 Bq. The same optimization process yielded a release estimate of 1.70 × 10^13 Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10% of each other. Unpublished production data and the release estimate from June 2013 yield a rough annual release estimate of 8 × 10^14 Bq of 133Xe in 2014. These multiple lines of evidence cross-validate the stack release estimates and the release estimates based on atmospheric samplers. Copyright © 2015 Elsevier Ltd. All rights reserved.
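Because atmospheric transport is linear in the source term, one common way to carry out such an optimization is to run the transport model for a unit release and scale it to the observed sampler concentrations by least squares. A minimal sketch, in which the dilution factors are invented numbers, not BaTek data, and the paper's actual optimization may differ:

```python
import numpy as np

def estimate_release(unit_concentrations, observed_concentrations):
    """Least-squares release estimate (Bq) that scales a 1-Bq
    transport-model run to match the observed air concentrations."""
    p = np.asarray(unit_concentrations, dtype=float)
    c = np.asarray(observed_concentrations, dtype=float)
    return float(p @ c) / float(p @ p)

# Hypothetical modeled concentrations at the sampler for a 1-Bq release,
# and synthetic observations consistent with a 1.88e13 Bq source.
unit_run = np.array([2.1e-13, 8.4e-13, 1.3e-13, 5.0e-13])
observed = 1.88e13 * unit_run
release_bq = estimate_release(unit_run, observed)
```

For noise-free synthetic data the scaling recovers the source exactly; with real measurements the same formula gives the best-fit release in the least-squares sense.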

  11. Modeling Atmospheric CO2 Processes to Constrain the Missing Sink

    NASA Technical Reports Server (NTRS)

    Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.

    2005-01-01

We report on a NASA-supported modeling effort to reduce uncertainty in the carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations, including those from satellite remote sensing. The major components of this effort are: 1) continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real-time data in both forward and inverse modes; 2) an advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability; and 5) testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and using the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon/climate interactions.

  12. Long boundary drainage as a source of lahars: Can big cracks make big floods?

    NASA Astrophysics Data System (ADS)

    Johnson, P. J.; Valentine, G.; Lowry, C.; Sonder, I.; Stauffer, P. H.; Santacoloma, C.; Pulgarín, B.; Adriana, A.

    2016-12-01

Two phreatic eruptions in 2007 at Nevado del Huila Volcano, Colombia, were associated with the formation of very large (estimated 2,000 m long by 50 m wide) fracture systems at the summit of the volcano. Lahars with volumes up to 75 million m3 followed the formation of these fissures, damaging villages downstream. Previous work suggested that water for these lahars was sourced at least in part by groundwater within the edifice that was rapidly expelled during the eruptions. The mechanisms for the rapid release of large volumes of water are unclear and cannot be uniquely constrained based on available data, leading to multiple conceptual models. We examine one conceptual model in which water discharge results primarily from gravity-driven drainage through the walls of the newly opened fracture. The presence of a topographically low pour point on the side of the edifice allows this leaking water to escape, forming a lahar. Steady-state calculations and numerical modeling of transient-state fluid flow using the Finite Element Heat and Mass model (FEHM, https://fehm.lanl.gov) are used to estimate the resulting discharges. Our results show that a pulse of water is rapidly released through a new fracture, with discharge decreasing as the medium drains. Emitted water volumes depend on the dimensions of the crack and are particularly sensitive to the height of the draining fracture wall and the magnitude of the permeability increase created during the fracturing event. Hazardous lahar events could be generated by this mechanism if large cracks form in water-bearing edifices elsewhere.
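The drainage behavior described, a rapid pulse that decays as the medium drains, can be caricatured as a linear reservoir leaking through the new fracture walls. This is an idealization of, not a substitute for, the FEHM simulations, and all parameter values below are hypothetical:

```python
import math

def drainage_discharge(t, q0, t_c):
    """Discharge (m^3/s) from a draining fracture idealized as a linear
    reservoir: an initial pulse q0 decays over a characteristic time t_c
    set by the fracture-wall permeability and the drainable storage."""
    return q0 * math.exp(-t / t_c)

q0, t_c = 500.0, 3600.0       # initial discharge (m^3/s), e-folding time (s)
total_volume = q0 * t_c       # integral of the exponential pulse (m^3)
```

In this caricature the emitted volume scales with both the initial discharge (controlled by fracture-wall height and the permeability jump) and the drainage time, mirroring the sensitivities reported above.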

  13. Source of released carbon fibers

    NASA Technical Reports Server (NTRS)

    Bell, V. L.

    1979-01-01

    The potential for the release of carbon fibers from aircraft crashes/fires is addressed. Simulation of the conditions of aircraft crash fires in order to predict the quantities and forms of fibrous materials which might be released from civilian aircraft crashes/fires is considered. Figures are presented which describe some typical fiber release test activities together with some very preliminary results of those activities. The state of the art of carbon fiber release is summarized as well as some of the uncertainties concerning accidental fiber release.

  14. Assessment of well vulnerability for groundwater source protection based on a solute transport model: a case study from Jilin City, northeast China

    NASA Astrophysics Data System (ADS)

    Huan, Huan; Wang, Jinsheng; Lai, Desheng; Teng, Yanguo; Zhai, Yuanzheng

    2015-05-01

    Well vulnerability assessment is essential for groundwater source protection. A quantitative approach to assess well vulnerability in a well capture zone is presented, based on forward solute transport modeling. This method was applied to three groundwater source areas (Jiuzhan, Hadawan and Songyuanhada) in Jilin City, northeast China. The ratio of the maximum contaminant concentration at the well to the released concentration at the contamination source (c_max/c_0) was determined as the well vulnerability indicator. The results indicated that well vulnerability was higher close to the pumping well. The well vulnerability in each groundwater source area was low. Compared with the other two source areas, the cone of depression at Jiuzhan resulted in higher spatial variability of c_max/c_0 and lower minimum c_max/c_0 by three orders of magnitude. Furthermore, a sensitivity analysis indicated that the denitrification rate in the aquifer was the most sensitive with respect to well vulnerability. A process to derive a NO3-N concentration at the pumping well is presented, based on determining the maximum nitrate loading limit to satisfy China's drinking-water quality standards. Finally, the advantages, disadvantages and prospects for improving the precision of this well vulnerability assessment approach are discussed.
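
    The c_max/c_0 indicator can be illustrated with a toy one-dimensional transport calculation. The pulse solution, distances, velocities, and decay rate below are illustrative assumptions standing in for the paper's calibrated transport model:

```python
import math

def pulse_concentration(t, x, v, D, lam, m0):
    """1-D instantaneous-pulse solution of the advection-dispersion equation
    with first-order decay (a stand-in for the paper's transport model).
    t: time (d), x: distance to well (m), v: velocity (m/d),
    D: dispersion (m^2/d), lam: decay rate (1/d), m0: source mass term."""
    if t <= 0.0:
        return 0.0
    return (m0 / math.sqrt(4.0 * math.pi * D * t)
            * math.exp(-(x - v * t) ** 2 / (4.0 * D * t))
            * math.exp(-lam * t))

def vulnerability(x, v, D, lam, m0, c0, t_max=3650.0, n=20000):
    """Well-vulnerability indicator c_max/c_0: peak concentration at the
    well, scanned over time, divided by the released concentration c0."""
    c_max = max(pulse_concentration(i * t_max / n, x, v, D, lam, m0)
                for i in range(1, n + 1))
    return c_max / c0

# Illustrative (assumed) values: 500 m to the well, 1 m/d velocity,
# 10 m^2/d dispersion, denitrification rate 0.005 1/d.
ratio = vulnerability(x=500.0, v=1.0, D=10.0, lam=0.005, m0=100.0, c0=50.0)
```

    Even this toy version shows why the decay (denitrification) rate is so sensitive: it attenuates the plume exponentially over the entire travel time to the well, so small changes in the rate move c_max/c_0 by orders of magnitude.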

  15. Empirical calibration of uranium releases in the terrestrial environment of nuclear fuel cycle facilities.

    PubMed

    Pourcelot, Laurent; Masson, Olivier; Saey, Lionel; Conil, Sébastien; Boulet, Béatrice; Cariou, Nicolas

    2017-05-01

    In the present paper the activity of uranium isotopes measured in plants and aerosols taken downwind of the releases of three nuclear fuel settlements was compared among the sites and with the activity measured at remote sites. An enhancement of 238U activity as well as 235U/238U anomalies and 236U are noticeable in wheat, grass, tree leaves and aerosols taken at the edge of nuclear fuel settlements, which show the influence of uranium chronic releases. Furthermore, plants taken at the edge of the studied sites and a few published data acquired in the same experimental conditions show that the 238U activity in plants is influenced by the intensity of the U atmospheric releases. Assuming that the 238U activity in plants is proportional to the intensity of the releases, we propose empirical relationships that allow the chronic releases to be characterized on the ground. Other sources of U contamination in plants such as accidental releases and "delayed source" of uranium in soil are also discussed in the light of uranium isotope signatures. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
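
    The tabulation step, sizing confidence envelopes from per-realization localization errors, can be sketched as follows. The synthetic error sample is a random stand-in for the errors actually produced by the paper's Monte Carlo inversions:

```python
import random

# Sketch: given localization errors from many synthetic realizations,
# size the 90% and 95% confidence envelopes as empirical error quantiles.
# The error sample below is invented; in the study it would come from
# comparing estimated to true source locations across realizations.
random.seed(1)
errors = [abs(random.gauss(0.0, 25.0)) for _ in range(5000)]  # metres (assumed)

def envelope(errors, level):
    """Radius containing the given fraction of localization errors."""
    s = sorted(errors)
    idx = min(len(s) - 1, int(level * len(s)))
    return s[idx]

r90 = envelope(errors, 0.90)  # 90% confidence envelope radius
r95 = envelope(errors, 0.95)  # 95% confidence envelope radius
```

    Repeating this tabulation for each (number of wells, interpretation-model fidelity) combination yields the envelope-size curves the abstract describes.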

  17. The Best Estimated Trajectory Analysis for Pad Abort One

    NASA Technical Reports Server (NTRS)

    Kutty, Prasad; Noonan, Meghan; Karlgaard, Christopher; Beck, Roger

    2011-01-01

    I. Best Estimated Trajectory (BET) objective: a) Produce a reconstructed trajectory of the PA-1 flight to understand vehicle dynamics and aid other post-flight analyses. b) Leverage all measurement sources taken of the vehicle during flight to produce the most accurate estimate of the vehicle trajectory. c) Generate trajectory reconstructions of the Crew Module (CM), Launch Abort System (LAS), and Forward Bay Cover (FBC). II. BET analysis was started immediately following the PA-1 mission and was completed in September 2010. a) Quick-look version of the BET released 5/25/2010: initial repackaging of SIGI data. b) Preliminary version of the BET released 7/6/2010: first blended solution using available sources of external measurements. c) Final version of the BET released 9/1/2010: final blended solution using all available sources of data.

  18. Rapidly locating and characterizing pollutant releases in buildings.

    PubMed

    Sohn, Michael D; Reynolds, Pamela; Singh, Navtej; Gadgil, Ashok J

    2002-12-01

    Releases of airborne contaminants in or near a building can lead to significant human exposures unless prompt response measures are taken. However, possible responses can include conflicting strategies, such as shutting the ventilation system off versus running it in a purge mode or having occupants evacuate versus sheltering in place. The proper choice depends in part on knowing the source locations, the amounts released, and the likely future dispersion routes of the pollutants. We present an approach that estimates this information in real time. It applies Bayesian statistics to interpret measurements of airborne pollutant concentrations from multiple sensors placed in the building and computes best estimates and uncertainties of the release conditions. The algorithm is fast, capable of continuously updating the estimates as measurements stream in from sensors. We demonstrate the approach using a hypothetical pollutant release in a five-room building. Unknowns to the interpretation algorithm include location, duration, and strength of the source, and some building and weather conditions. Two sensor sampling plans and three levels of data quality are examined. Data interpretation in all examples is rapid; however, locating and characterizing the source with high probability depends on the amount and quality of data and the sampling plan.
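
    The Bayesian interpretation loop can be sketched for a toy three-room case. The candidate sources, predicted concentrations, and noise level below are invented for illustration; a real application would get predictions from a multizone airflow model rather than a lookup table:

```python
import math

# Minimal sketch of the Bayesian data-interpretation step: a discrete prior
# over candidate release locations is updated as each sensor reading arrives.
# All numbers here are assumptions for illustration only.
candidates = ["room1", "room2", "room3"]
predicted = {  # predicted concentration at (sensor A, sensor B) per source
    "room1": (5.0, 1.0),
    "room2": (1.0, 5.0),
    "room3": (2.0, 2.0),
}
sigma = 1.0  # sensor noise standard deviation (assumed)
posterior = {c: 1.0 / len(candidates) for c in candidates}  # uniform prior

def update(posterior, sensor_idx, observed):
    """One Bayes update: weight by a Gaussian likelihood, then renormalize."""
    w = {c: p * math.exp(-(observed - predicted[c][sensor_idx]) ** 2
                         / (2.0 * sigma ** 2))
         for c, p in posterior.items()}
    z = sum(w.values())
    return {c: v / z for c, v in w.items()}

posterior = update(posterior, 0, 4.6)  # sensor A reports 4.6
posterior = update(posterior, 1, 1.3)  # sensor B reports 1.3
best = max(posterior, key=posterior.get)  # most probable release location
```

    Because each update is a cheap multiply-and-renormalize, the estimate can be refreshed continuously as measurements stream in, which is the property the abstract emphasizes for real-time response.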

  19. Assessment of commercially available pheromone lures for monitoring diamondback moth (Lepidoptera: Plutellidae) in canola.

    PubMed

    Evenden, M L; Gries, R

    2010-06-01

    Sex pheromone monitoring lures from five different commercial sources were compared for their attractiveness to male diamondback moth, Plutella xylostella L. (Lepidoptera: Plutellidae), in canola, Brassica napus L., fields in western Canada. Lures that had the highest pheromone release rate, as determined by aeration analyses in the laboratory, were the least attractive in field tests. Lures from all the commercial sources tested released more (Z)-11-hexadecenal than (Z)-11-hexadecenyl acetate, and the most attractive lures released a significantly higher aldehyde to acetate ratio than less attractive lures. Traps baited with sex pheromone lures from APTIV Inc. (Portland, OR) and ConTech Enterprises Inc. (Delta, BC, Canada) consistently captured more male diamondback moths than traps baited with lures from the other sources tested. In two different lure longevity field trapping experiments, older lures were more attractive to male diamondback moths than fresh lures. Pheromone release from aged lures was constant at very low release rates. The most attractive commercially available sex pheromone lures tested attracted fewer diamondback moth males than calling virgin female moths, suggesting that research on the development of a more attractive synthetic sex pheromone lure is warranted.

  20. Release and microbial degradation of dissolved organic matter (DOM) from the macroalgae Ulva prolifera.

    PubMed

    Zhang, Tao; Wang, Xuchen

    2017-12-15

    Release and microbial degradation of dissolved organic matter (DOM) and chromophoric dissolved organic matter (CDOM) from the macroalgae Ulva prolifera were studied in laboratory incubation experiments. The release of DOM and CDOM from Ulva prolifera was a rapid process, and hydrolysis played an important role in the initial leaching of the organic compounds from the algae. Bacterial activity enhanced the release of DOM and CDOM during degradation of the algae and utilization of the released organic compounds. It is calculated that 43±2% of the C and 63±3% of the N from Ulva prolifera's biomass were released during the 20-day incubation, and 65±3% of the released C and 87±4% of the released N were utilized by bacteria. In comparison, only 18±1% of the algae's C and 17±1% of its N were released when bacterial activities were inhibited. The fluorescence characteristics of the CDOM indicate that protein-like DOM was the major organic component released from Ulva prolifera that was highly labile and biodegradable. Bacteria played an important role in regulating the chemical composition and fluorescence characteristics of the DOM. Our study suggests that the release of DOM from Ulva prolifera provides not only major sources of organic C and N, but also important food sources to microbial communities in coastal waters. Copyright © 2017 Elsevier Ltd. All rights reserved.
