Science.gov

Sample records for reaction model code

  1. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Palumbo, A.; Herman, M.; Capote, R.

    2014-06-01

The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron-, charged-particle- and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present an application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  2. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (∼ keV) up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation-dependent multi-step direct (ORION + TRISTAN) model, by an NVWY multi-step compound model, by a pre-equilibrium exciton model with cluster emission (PCROSS), or by one with full angular momentum coupling (DEGAS). Finally, compound nucleus decay is described by the full-featured Hauser-Feshbach model with γ-cascade and width fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach, and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the

  3. MOMDIS: a Glauber model computer code for knockout reactions

    NASA Astrophysics Data System (ADS)

    Bertulani, C. A.; Gade, A.

    2006-09-01

A computer program is described to calculate momentum distributions in stripping and diffraction dissociation reactions. A Glauber model is used with the scattering wavefunctions calculated in the eikonal approximation. The program is appropriate for knockout reactions in intermediate-energy collisions (30 MeV ⩽ E/nucleon ⩽ 2000 MeV). It is particularly useful for reactions involving unstable nuclear beams, or exotic nuclei (e.g., neutron-rich nuclei), and for studies of single-particle occupancy probabilities (spectroscopic factors) and other related physical observables. Such studies are an essential part of the scientific program of radioactive beam facilities, such as the proposed RIA (Rare Isotope Accelerator) facility in the US. Program summary Title of program: MOMDIS (MOMentum DIStributions) Catalogue identifier: ADXZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXZ_v1_0 Computers: The code has been created on an IBM-PC, but also runs on UNIX or LINUX machines Operating systems: WINDOWS or UNIX Program language used: Fortran-77 Memory required to execute with typical data: 16 Mbytes of RAM memory and 2 MB of hard disk space No. of lines in distributed program, including test data, etc.: 6255 No. of bytes in distributed program, including test data, etc.: 63 568 Distribution format: tar.gz Nature of physical problem: The program calculates bound wavefunctions, eikonal S-matrices, total cross-sections and momentum distributions of interest in nuclear knockout reactions at intermediate energies. Method of solution: Solves the radial Schrödinger equation for bound states. A Numerov integration is used outwardly and inwardly, and a matching at the nuclear surface is done to obtain the energy and the bound-state wavefunction with good accuracy. The S-matrices are obtained using eikonal wavefunctions and the "t-ρρ" method for the eikonal phase-shifts. The momentum distributions are obtained by means of a Gaussian expansion of
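The Numerov scheme named in the solution method can be illustrated in a few lines. The sketch below (Python rather than MOMDIS's Fortran-77, and with a hypothetical free-particle check in place of a nuclear potential) shows the outward recurrence for equations of the form u'' = f(x) u:

```python
import math

def numerov(f, u0, u1, x0, h, n):
    """Integrate u'' = f(x) * u outward with the Numerov recurrence.

    f(x) is the coefficient function in u'' = f(x) u; the local error
    of each step is O(h^6), so the method is well suited to radial
    Schroedinger equations.
    """
    xs = [x0 + i * h for i in range(n + 1)]
    us = [0.0] * (n + 1)
    us[0], us[1] = u0, u1
    c = h * h / 12.0
    for i in range(1, n):
        f_m, f_0, f_p = f(xs[i - 1]), f(xs[i]), f(xs[i + 1])
        us[i + 1] = (2.0 * us[i] * (1.0 + 5.0 * c * f_0)
                     - us[i - 1] * (1.0 - c * f_m)) / (1.0 - c * f_p)
    return xs, us

# Free-particle check: u'' = -k^2 u with u(0) = 0 reproduces sin(kx).
k = 1.0
h = 0.001
xs, us = numerov(lambda x: -k * k, 0.0, math.sin(k * h), 0.0, h, 1000)
```

With a nuclear potential in f, integrating outward from the origin and inward from large r and matching logarithmic derivatives at the surface yields the bound-state energy, as the program summary describes.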

  4. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    SciTech Connect

    Rakhno, I. L.; Mokhov, N. V.; Gudima, K. K.

    2015-04-25

An implementation of both the ALICE code and the TENDL evaluated nuclear data library, used to describe nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15, is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary particle distributions are shown.

  5. SurfKin: an ab initio kinetic code for modeling surface reactions.

    PubMed

    Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K

    2014-10-05

    In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts.
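Rate constants for the elementary surface steps described above are computed from collision theory and transition state theory; the TST piece has the Eyring form k = (k_B T / h) exp(−ΔG‡/RT). A minimal sketch (Python rather than SurfKin's C/C++; the 100 kJ/mol barrier at 500 K is a hypothetical input, not a SurfKin result):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.314462618       # gas constant, J/(mol*K)

def eyring_rate(delta_g_act, temperature):
    """Transition-state-theory rate constant (1/s) for an elementary step.

    delta_g_act: activation free energy in J/mol (here a made-up number,
    not a SurfKin/DFT value); temperature in K.
    """
    return (K_B * temperature / H) * math.exp(-delta_g_act / (R * temperature))

k500 = eyring_rate(100e3, 500.0)  # hypothetical 100 kJ/mol barrier at 500 K
```

In a microkinetic mechanism, one such constant per elementary step feeds a coupled set of coverage rate equations, which is the system SurfKin assembles and solves.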

  6. PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations

    NASA Astrophysics Data System (ADS)

    Elmaghraby, Elsayed K.

    2009-09-01

The present work focuses on a pre-equilibrium nuclear reaction code (based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions). In the PHASE-OTI code, pre-equilibrium decays are assumed to be single-nucleon emissions, and the statistical probabilities come from the independence of nuclei decay. The code has proved to be a good tool for predicting energy-differential cross sections. The probability of emission was calculated statistically using the bases of the hybrid model and the exciton model; however, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one-nucleon emission. Program summary Program title: PHASE-OTI Catalogue identifier: AEDN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5858 No. of bytes in distributed program, including test data, etc.: 149 405 Distribution format: tar.gz Programming language: Fortran 77 Computer: Pentium 4 and Centrino Duo Operating system: MS Windows RAM: 128 MB Classification: 17.12 Nature of problem: Calculation of the differential cross section for nucleon-induced nuclear reactions in the framework of the pre-equilibrium emission model. Solution method: Single-neutron emission was treated by assuming occurrence of the reaction in successive steps. Each step is called a phase because of the phase-transition nature of the theory. The probability of emission was calculated statistically using the bases of the hybrid model [1] and the exciton model [2]; however, a more precise depletion factor was used in the calculations. The exciton configuration used in the code is that described in earlier work [3].
Restrictions: The program is restricted to single nucleon emission and nucleon
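Emission probabilities in exciton-type models are built from particle-hole state densities. A sketch of the standard Williams formula, with the Pauli correction omitted for brevity (the g and E values are illustrative inputs, not the code's):

```python
import math

def exciton_state_density(p, h, g, energy):
    """Williams state density for p particles and h holes (Pauli
    correction omitted): omega(p, h, E) = g (gE)^(n-1) / (p! h! (n-1)!),
    with n = p + h. Units 1/MeV if g is in 1/MeV and energy in MeV.
    Illustrative only; PHASE-OTI uses refined depletion factors on top.
    """
    n = p + h
    return g * (g * energy) ** (n - 1) / (
        math.factorial(p) * math.factorial(h) * math.factorial(n - 1))

# State densities grow rapidly with exciton number at fixed energy.
omega_1p0h = exciton_state_density(1, 0, 1.0, 10.0)
omega_2p1h = exciton_state_density(2, 1, 1.0, 10.0)
```

Ratios of such densities between successive phases give the statistical weights from which emission probabilities per step are assembled.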

  7. Applications of Transport/Reaction Codes to Problems in Cell Modeling

    SciTech Connect

    MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.

    2001-11-01

We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
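In their simplest 1-D form, transport/reaction systems of this kind reduce to u_t = D u_xx + R(u). A toy explicit finite-difference step (not the Sandia solver; the grid, coefficients and boundary treatment are illustrative):

```python
def step_reaction_diffusion(u, dt, dx, diffusion, reaction):
    """One explicit Euler step of u_t = D u_xx + R(u) on a 1-D grid.

    Zero-flux (reflecting) boundaries; stable only for dt <= dx^2 / (2 D).
    A minimal analogue of the transport/reaction systems described above.
    """
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i + 1]      # reflecting ghost cell
        right = u[i + 1] if i < n - 1 else u[i - 1]  # reflecting ghost cell
        lap = (left - 2.0 * u[i] + right) / (dx * dx)
        new[i] = u[i] + dt * (diffusion * lap + reaction(u[i]))
    return new

# Pure diffusion of a point release: total mass should be conserved.
u = [0.0, 0.0, 1.0, 0.0, 0.0]
u = step_reaction_diffusion(u, dt=0.1, dx=1.0, diffusion=0.1,
                            reaction=lambda c: 0.0)
```

A calcium-wave model adds a nonlinear release term for R(u) and extends the same stencil to three dimensions on a parallel decomposition of the grid.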

  8. THERM: a computer code for estimating thermodynamic properties for species important to combustion and reaction modeling.

    PubMed

    Ritter, E R

    1991-08-01

    A computer package has been developed called THERM, an acronym for THermodynamic property Estimation for Radicals and Molecules. THERM is a versatile computer code designed to automate the estimation of ideal gas phase thermodynamic properties for radicals and molecules important to combustion and reaction-modeling studies. Thermodynamic properties calculated include heat of formation and entropies at 298 K and heat capacities from 300 to 1500 K. Heat capacity estimates are then extrapolated to above 5000 K, and NASA format polynomial thermodynamic property representations valid from 298 to 5000 K are generated. This code is written in Microsoft Fortran version 5.0 for use on machines running under MSDOS. THERM uses group additivity principles of Benson and current best values for bond strengths, changes in entropy, and loss of vibrational degrees of freedom to estimate properties for radical species from parent molecules. This ensemble of computer programs can be used to input literature data, estimate data when not available, and review, update, and revise entries to reflect improvements and modifications to the group contribution and bond dissociation databases. All input and output files are ASCII so that they can be easily edited, updated, or expanded. In addition, heats of reaction, entropy changes, Gibbs free-energy changes, and equilibrium constants can be calculated as functions of temperature from a NASA format polynomial database.
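The group-additivity principle THERM automates can be sketched in a few lines: a molecule's property is estimated as the sum of contributions from its constituent groups. The two group values below are Benson-style numbers quoted from memory for illustration and should be checked against a current database before use:

```python
# Illustrative Benson-type group contributions to Hf(298 K), kcal/mol.
# Real THERM calculations draw on curated, regularly revised databases.
GROUPS = {
    "C-(C)(H)3": -10.20,   # methyl carbon bound to one carbon
    "C-(C)2(H)2": -4.93,   # methylene carbon bound to two carbons
}

def heat_of_formation(group_counts):
    """Estimate Hf(298 K) as a sum of group contributions (kcal/mol)."""
    return sum(GROUPS[group] * count for group, count in group_counts.items())

# n-butane = 2 terminal CH3 groups + 2 internal CH2 groups
hf_butane = heat_of_formation({"C-(C)(H)3": 2, "C-(C)2(H)2": 2})
```

Entropies and heat capacities follow the same additive pattern, with radical species handled by correcting the parent molecule for the broken bond and lost vibrational modes, as the abstract describes.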

  9. Uncertainty evaluation of nuclear reaction model parameters using integral and microscopic measurements. Covariances evaluation with CONRAD code

    NASA Astrophysics Data System (ADS)

    de Saint Jean, C.; Habert, B.; Archier, P.; Noguere, G.; Bernard, D.; Tommasi, J.; Blaise, P.

    2010-10-01

In the [eV; MeV] energy range, modelling of neutron-induced reactions is based on nuclear reaction models with adjustable parameters. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Nuclear reactor physicists have asked for major breakthroughs in assessing the proper uncertainties to be used in applications. In this paper, mathematical methods developed in the CONRAD code [2] are presented to explain the treatment of all types of uncertainty, including experimental ones (statistical and systematic), and their propagation to nuclear reaction model parameters or cross sections. The marginalization procedure is then presented, using analytical or Monte Carlo solutions. Furthermore, one major drawback identified by reactor physicists is that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account sufficiently early in the evaluation process to remove discrepancies. In this paper, we describe a mathematical framework to take this kind of information into account properly.
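The propagation of measurement uncertainty to model parameters rests on Bayesian generalized least squares. A scalar sketch of one update step (everything here is illustrative; CONRAD works with full covariance matrices and marginalizes systematic errors rather than updating a single parameter):

```python
def gls_update(theta_prior, var_prior, y_obs, var_obs, sensitivity):
    """One generalized-least-squares update of a model parameter.

    The measurement is linearized as y = S * theta, with S the
    sensitivity of the observable to the parameter. Returns the
    posterior mean and variance of theta.
    """
    gain = var_prior * sensitivity / (sensitivity**2 * var_prior + var_obs)
    theta_post = theta_prior + gain * (y_obs - sensitivity * theta_prior)
    var_post = (1.0 - gain * sensitivity) * var_prior
    return theta_post, var_post

# Prior theta ~ N(1, 1); one measurement y = 2 with variance 0.25, S = 1.
theta, var = gls_update(1.0, 1.0, y_obs=2.0, var_obs=0.25, sensitivity=1.0)
```

Integral experiments enter the same machinery as additional observables with their own sensitivities, which is why folding them in early shrinks the posterior covariance rather than leaving discrepancies to be patched afterwards.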

  10. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  11. An interactive code (NETPATH) for modeling NET geochemical reactions along a flow PATH, version 2.0

    USGS Publications Warehouse

    Plummer, L. Niel; Prestemon, Eric C.; Parkhurst, David L.

    1994-01-01

NETPATH is an interactive Fortran 77 computer program used to interpret net geochemical mass-balance reactions between an initial and final water along a hydrologic flow path. Alternatively, NETPATH computes the mixing proportions of two to five initial waters and net geochemical reactions that can account for the observed composition of a final water. The program utilizes previously defined chemical and isotopic data for waters from a hydrochemical system. For a set of mineral and (or) gas phases hypothesized to be the reactive phases in the system, NETPATH calculates the mass transfers in every possible combination of the selected phases that accounts for the observed changes in the selected chemical and (or) isotopic compositions observed along the flow path. The calculations are of use in interpreting geochemical reactions, mixing proportions, evaporation and (or) dilution of waters, and mineral mass transfer in the chemical and isotopic evolution of natural and environmental waters. Rayleigh distillation calculations are applied to each mass-balance model that satisfies the constraints to predict carbon, sulfur, nitrogen, and strontium isotopic compositions at the end point, including radiocarbon dating. DB is an interactive Fortran 77 computer program used to enter analytical data into NETPATH, and calculate the distribution of species in aqueous solution. This report describes the types of problems that can be solved, the methods used to solve problems, and the features available in the program to facilitate these solutions. Examples are presented to demonstrate most of the applications and features of NETPATH. The codes DB and NETPATH can be executed in the UNIX or DOS environment. This report replaces U.S. Geological Survey Water-Resources Investigations Report 91-4078, by Plummer and others, which described the original release of NETPATH, version 1.0 (dated December 1991), and documents revisions and enhancements that are included in version 2.0.
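The core mole-balance step can be pictured as a small linear solve: each constraint (element or isotope) contributes one equation, and each candidate phase one unknown mass transfer. A toy two-phase example (calcite and CO₂ gas; the water-composition changes are invented, and NETPATH additionally enumerates all phase combinations and layers Rayleigh calculations on top):

```python
def solve_mass_balance(phase_matrix, deltas):
    """Solve A x = b for phase mass transfers by Gauss-Jordan elimination.

    phase_matrix[i][j]: stoichiometric coefficient of constraint i in
    phase j; deltas[i]: change in constraint i between initial and final
    water (mmol/kg assumed). Toy analogue of NETPATH's mole-balance step.
    """
    n = len(deltas)
    a = [row[:] + [deltas[i]] for i, row in enumerate(phase_matrix)]
    for col in range(n):
        # Partial pivoting for numerical robustness.
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col and a[col][col] != 0.0:
                factor = a[r][col] / a[col][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

# Constraints: Ca and total C. Phases: calcite (Ca 1, C 1), CO2(g) (Ca 0, C 1).
# Invented changes along the flow path: delta Ca = 0.5, delta C = 2.0 mmol/kg.
transfers = solve_mass_balance([[1.0, 0.0], [1.0, 1.0]], [0.5, 2.0])
```

Positive transfers correspond to dissolution or ingassing and negative ones to precipitation or outgassing, which is how the resulting models are read geochemically.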

  12. The nuclear reaction code McGNASH.

    SciTech Connect

Talou, P.; Chadwick, M. B.; Young, P. G.; Kawano, T.

    2004-01-01

McGNASH is a modern statistical/pre-equilibrium nuclear reaction code, being developed at Los Alamos, which can simulate neutron-, proton- and photon-induced reactions in the energy range from a few keV to about 150 MeV. It is written in the modern Fortran 95 scientific language, offering new capabilities both for the developer and the user. McGNASH is still in a development stage, and a first public release is planned for later in 2005. The statistical/pre-equilibrium nuclear reaction code GNASH has been used successfully over the years to compute neutron-, proton- and photon-induced reaction cross sections on a variety of target nuclei, and for incident particle energies from tens of keV up to 150-200 MeV. This code has been instrumental in producing numerous nuclear data evaluation files for various ENDF libraries around the world, in particular the ENDF/B-VI and pre-ENDF/B-VII libraries in the US. More recently, GNASH was used extensively for the creation of the LA150 library, including data on neutron- and proton-induced reactions up to 150 MeV incident energy. We are now developing a modern version of the code, called McGNASH.

  13. Transfer reaction code with nonlocal interactions

    NASA Astrophysics Data System (ADS)

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-10-01

We present a suite of codes (NLAT, for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon-target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable for deuteron-induced reactions in the range of Ed = 10-70 MeV, and provides cross sections with 4% accuracy.
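The iterative treatment of nonlocal terms can be illustrated on a simpler cousin of NLAT's equation: a Fredholm integral equation of the second kind solved by successive substitution (the constant kernel and grid here are toy choices, not the nonlocal optical potentials NLAT uses):

```python
def solve_fredholm(kernel, source, xs, lam=1.0, tol=1e-10, max_iter=500):
    """Fixed-point iteration for u(x) = f(x) + lam * Int K(x, x') u(x') dx'.

    Trapezoidal quadrature on the uniform grid xs; the iteration
    converges when the integral operator is a contraction. Illustrates
    iterating on a nonlocal term, not NLAT's actual scheme.
    """
    n = len(xs)
    h = xs[1] - xs[0]
    w = [h] * n
    w[0] = w[-1] = h / 2.0  # trapezoid weights
    u = source[:]
    for _ in range(max_iter):
        new = [source[i] + lam * sum(w[j] * kernel(xs[i], xs[j]) * u[j]
                                     for j in range(n))
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, u)) < tol:
            return new
        u = new
    return u

# Constant kernel K = 0.5 and f = 1 on [0, 1]: exact solution u = 2.
xs = [i / 100.0 for i in range(101)]
u = solve_fredholm(lambda x, y: 0.5, [1.0] * 101, xs)
```

In NLAT the same iterate-until-converged idea is applied to the second order differential equation with a nonlocal potential term, for scattering and bound states alike.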

  14. Transfer reaction code with nonlocal interactions

    SciTech Connect

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

Here, we present a suite of codes (NLAT, for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon-target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable for deuteron-induced reactions in the range of Ed = 10–70 MeV, and provides cross sections with 4% accuracy.

  15. Transfer reaction code with nonlocal interactions

    SciTech Connect

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

We present a suite of codes (NLAT for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon–target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable to be applied for deuteron induced reactions in the range of Ed = 10–70 MeV, and provides cross sections with 4% accuracy.

  16. Transfer reaction code with nonlocal interactions

    DOE PAGES

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

We present a suite of codes (NLAT for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon–target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable to be applied for deuteron induced reactions in the range of Ed = 10–70 MeV, and provides cross sections with 4% accuracy.

  17. Nuclear reactions in Monte Carlo codes.

    PubMed

    Ferrari, A; Sala, P R

    2002-01-01

    The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically split into the major steps in order to stress the different approaches required for the full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will be necessarily schematic and somewhat incomplete, but hopefully it will be useful for a first introduction into this topic. Examples are shown mostly for the high energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most of the readers are less accustomed. Examples for lower energies can be found in the references.

  18. Molecular codes in biological and chemical reaction networks.

    PubMed

    Görlich, Dennis; Dittrich, Peter

    2013-01-01

Shannon's theory of communication has been very successfully applied to the analysis of biological information. However, the theory neglects semantic and pragmatic aspects and thus cannot directly be applied to distinguish (bio-)chemical systems able to process "meaningful" information from those that do not. Here, we present a formal method to assess a system's semantic capacity by analyzing a reaction network's capability to implement molecular codes. We analyzed models of chemical systems (Martian atmosphere chemistry and various combustion chemistries), biochemical systems (gene expression, gene translation, and phosphorylation signaling cascades), an artificial chemistry, and random reaction networks. Our study suggests that different chemical systems possess different semantic capacities. No semantic capacity was found in the model of the Martian atmosphere chemistry, the studied combustion chemistries, or highly connected random networks, i.e. with these chemistries molecular codes cannot be implemented. High semantic capacity was found in the studied biochemical systems and in random reaction networks where the number of second-order reactions is twice the number of species. We conclude that our approach can be applied to evaluate the information-processing capabilities of a chemical system and may thus be a useful tool for understanding the origin and evolution of meaningful information, e.g. in the context of the origin of life.

  19. A chemical reaction network solver for the astrophysics code NIRVANA

    NASA Astrophysics Data System (ADS)

    Ziegler, U.

    2016-02-01

Context. Chemistry often plays an important role in astrophysical gases. It regulates thermal properties by changing species abundances and via ionization processes. In this way, time-dependent cooling mechanisms and other chemistry-related energy sources can have a profound influence on the dynamical evolution of an astrophysical system. Modeling those effects with the underlying chemical kinetics in realistic magneto-gasdynamical simulations provides the basis for a better link to observations. Aims: The present work describes the implementation of a chemical reaction network solver into the magneto-gasdynamical code NIRVANA. For this purpose a multispecies structure is installed, and a new module for evolving the rate equations of chemical kinetics is developed and coupled to the dynamical part of the code. A small chemical network for a hydrogen-helium plasma was constructed, including associated thermal processes, and is used in test problems. Methods: Evolving a chemical network within time-dependent simulations requires the additional solution of a set of coupled advection-reaction equations for species and gas temperature. Second-order Strang splitting is used to separate the advection part from the reaction part. The ordinary differential equation (ODE) system representing the reaction part is solved with a fourth-order generalized Runge-Kutta method applicable to the stiff systems inherent to astrochemistry. Results: A series of tests was performed in order to check the correctness of the numerical and technical implementation. Tests include well-known stiff ODE problems from the mathematical literature, confirming the accuracy properties of the solver, as well as problems combining gasdynamics and chemistry. Overall, very satisfactory results are achieved. Conclusions: The NIRVANA code is now ready to handle astrochemical processes in time-dependent simulations. An easy-to-use interface allows implementation of complex networks including thermal processes.
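The Strang splitting described under Methods advances the advection operator a half step, the reaction operator a full step, then the advection operator another half step. A scalar sketch (both sub-operators here are exact exponential decays, chosen so the composed answer is known; NIRVANA's actual sub-solvers are a magneto-gasdynamics update and a stiff generalized Runge-Kutta integrator):

```python
import math

def strang_step(u, dt, advect, react):
    """One Strang-split step: half advection, full reaction, half advection.

    advect(u, dt) and react(u, dt) each advance their own sub-problem;
    the composition is second-order accurate in dt even when the two
    operators do not commute.
    """
    u = advect(u, 0.5 * dt)
    u = react(u, dt)
    return advect(u, 0.5 * dt)

# Toy sub-problems: u' = -u (transport loss) and u' = -2u (chemistry),
# each solved exactly by an exponential.
advect = lambda u, dt: u * math.exp(-dt)
react = lambda u, dt: u * math.exp(-2.0 * dt)

u = 1.0
for _ in range(10):          # integrate to t = 1 in steps of 0.1
    u = strang_step(u, 0.1, advect, react)
```

Because these two toy operators commute, the split answer equals the exact exp(-3t); for a real advection-chemistry pair the splitting error is O(dt²) per unit time.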

  20. PLATYPUS: A code for reaction dynamics of weakly-bound nuclei at near-barrier energies within a classical dynamical model

    NASA Astrophysics Data System (ADS)

    Diaz-Torres, Alexis

    2011-04-01

A self-contained Fortran-90 program based on a three-dimensional classical dynamical reaction model with stochastic breakup is presented, which is a useful tool for quantifying complete and incomplete fusion, and breakup in reactions induced by weakly-bound two-body projectiles near the Coulomb barrier. The code calculates (i) integrated complete and incomplete fusion cross sections and their angular momentum distribution, (ii) the excitation energy distribution of the primary incomplete-fusion products, (iii) the asymptotic angular distribution of the incomplete-fusion products and the surviving breakup fragments, and (iv) breakup observables, such as angle, kinetic energy and relative energy distributions. Program summary Program title: PLATYPUS Catalogue identifier: AEIG_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 342 No. of bytes in distributed program, including test data, etc.: 344 124 Distribution format: tar.gz Programming language: Fortran-90 Computer: Any Unix/Linux workstation or PC with a Fortran-90 compiler Operating system: Linux or Unix RAM: 10 MB Classification: 16.9, 17.7, 17.8, 17.11 Nature of problem: The program calculates a wide range of observables in reactions induced by weakly-bound two-body nuclei near the Coulomb barrier. These include integrated complete and incomplete fusion cross sections and their spin distribution, as well as breakup observables (e.g. the angle, kinetic energy, and relative energy distributions of the fragments). Solution method: All the observables are calculated using a three-dimensional classical dynamical model combined with the Monte Carlo sampling of probability-density distributions. See Refs. [1,2] for further details. Restrictions: The

  1. Modeling Mechanochemical Reaction Mechanisms.

    PubMed

    Adams, Heather; Miller, Brendan P; Furlong, Octavio J; Fantauzzi, Marzia; Navarra, Gabriele; Rossi, Antonella; Xu, Yufu; Kotvis, Peter V; Tysoe, Wilfred T

    2017-08-09

    The mechanochemical reaction between copper and dimethyl disulfide is studied under well-controlled conditions in ultrahigh vacuum (UHV). Reaction is initiated by fast S-S bond scission to form adsorbed methyl thiolate species, and the reaction kinetics are reproduced by two subsequent elementary mechanochemical reaction steps, namely a mechanochemical decomposition of methyl thiolate to deposit sulfur on the surface and evolve small, gas-phase hydrocarbons, and sliding-induced oxidation of the copper by sulfur that regenerates vacant reaction sites. The steady-state reaction kinetics are monitored in situ from the variation in the friction force as the reaction proceeds and modeled using the elementary-step reaction rate constants found for monolayer adsorbates. The analysis yields excellent agreement between the experiment and the kinetic model, as well as correctly predicting the total amount of subsurface sulfur in the film measured using Auger spectroscopy and the sulfur depth distribution measured by angle-resolved X-ray photoelectron spectroscopy.
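The two elementary steps (thiolate decomposition depositing sulfur, plus sliding-induced regeneration of vacant sites) can be caricatured by a single coverage equation dθ/dt = k_a(1−θ) − k_d θ, whose steady state θ* = k_a/(k_a+k_d) is the quantity a friction-derived steady-state rate constrains. The rate constants below are hypothetical, not the fitted values from the paper:

```python
def coverage_trajectory(k_ads, k_dec, dt, steps, theta0=0.0):
    """Euler integration of d(theta)/dt = k_ads*(1 - theta) - k_dec*theta.

    theta is the fractional coverage of the adsorbed intermediate;
    k_ads repopulates vacant sites, k_dec removes the intermediate.
    Rate constants here are made-up illustrative values.
    """
    theta = theta0
    out = [theta]
    for _ in range(steps):
        theta += dt * (k_ads * (1.0 - theta) - k_dec * theta)
        out.append(theta)
    return out

# Hypothetical rates: k_ads = 1.0, k_dec = 0.5 (steady state 2/3).
traj = coverage_trajectory(1.0, 0.5, 0.01, 2000)
```

Fitting such elementary-step constants to monolayer data and then predicting the steady-state sliding kinetics is the logic of the analysis described above.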

  2. Development of a code system DEURACS for theoretical analysis and prediction of deuteron-induced reactions

    NASA Astrophysics Data System (ADS)

    Nakayama, Shinsuke; Kouno, Hiroshi; Watanabe, Yukinobu; Iwamoto, Osamu; Ye, Tao; Ogata, Kazuyuki

    2017-09-01

    We have developed an integrated code system dedicated for theoretical analysis and prediction of deuteron-induced reactions, which is called DEUteron-induced Reaction Analysis Code System (DEURACS). DEURACS consists of several calculation codes based on theoretical models to describe respective reaction mechanisms and it was successfully applied to (d,xp) and (d,xn) reactions. In the present work, the analysis of (d,xn) reactions is extended to higher incident energy up to nearly 100 MeV and also DEURACS is applied to (d,xd) reactions at 80 and 100 MeV. The DEURACS calculations reproduce the experimental double-differential cross sections for the (d,xn) and (d,xd) reactions well.

  3. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    SciTech Connect

Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-15

A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates the various nuclear reaction models needed to describe nucleon-, light-charged-nuclei- (up to alpha-particle) and photon-induced reactions. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has been extensively used in various nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions. It was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, indicating the concept and design of the coding and inputs. Details of the formulation for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations, such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission, are mentioned.

  4. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    NASA Astrophysics Data System (ADS)

    Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-01

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates the various nuclear reaction models needed to describe reactions induced by nucleons, light charged particles up to alpha particles, and photons. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has since been used extensively in nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions, and it has also been used to estimate β-delayed neutron emission. This paper describes the concept and design of the CCONE code system and its inputs. Details of the formulation for modeling direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations, including neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission, are described.

  5. Modeling of surface reactions

    SciTech Connect

    Ray, T.R.

    1993-01-01

    Mathematical models are used to elucidate properties of monomer-monomer and monomer-dimer type chemical reactions on a two-dimensional surface. The authors use mean-field and lattice gas models, detailing similarities and differences due to correlations in the lattice gas model. The monomer-monomer, or AB, surface reaction model with no diffusion is investigated for various reaction rates k. Study of the exact rate equations reveals that poisoning always occurs if the adsorption rates of the reactants are unequal. If the adsorption rates of the reactants are equal, simulations show slow poisoning associated with clustering of reactants. This behavior is also shown for the two-dimensional voter model. The authors analyze the slow poisoning kinetics precisely, by an analytic treatment of the AB reaction with infinitesimal reaction rate and by direct comparison with the voter model. They extend the results to incorporate the effects of place-exchange diffusion, and they compare the AB reaction with infinitesimal reaction rate and no diffusion to the voter model with diffusion at rate 1/2. They also consider the relationship of the voter model to the monomer-dimer model, and investigate the latter model for small reaction rates. The monomer-dimer, or AB₂, surface reaction model is also investigated. Specifically, they consider the ZGB model for CO oxidation and generalizations of this model that include adspecies diffusion. A theory of nucleation is derived to describe properties of non-equilibrium first-order transitions, specifically the evolution between "reactive" steady states and trivial adsorbing states. The behavior of the "epidemic" survival probability, Pₛ, for a non-poisoned patch surrounded by a poisoned background is determined below the poisoning transition.
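    The ZGB monomer-dimer rules are simple enough to sketch directly. The following Monte Carlo sketch (illustrative code, not from the thesis) implements the basic ZGB adsorption rules on a periodic square lattice: CO adsorbs on a single empty site, O2 dissociates onto two adjacent empty sites, and adjacent CO-O pairs react instantly and leave the surface as CO2.

```python
import random

EMPTY, CO, O = 0, 1, 2

def neighbors(i, j, L):
    """Four nearest neighbors on an L x L periodic lattice."""
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def react(lattice, i, j, L):
    """Remove site (i, j) and one adjacent partner (CO with O, or O with CO) as CO2."""
    partner = O if lattice[i][j] == CO else CO
    for a, b in neighbors(i, j, L):
        if lattice[a][b] == partner:
            lattice[i][j] = lattice[a][b] = EMPTY
            return True
    return False

def zgb_step(lattice, L, y_co, rng=random):
    """One adsorption attempt: a CO molecule arrives with probability y_co, else O2."""
    i, j = rng.randrange(L), rng.randrange(L)
    if rng.random() < y_co:
        if lattice[i][j] == EMPTY:           # CO adsorbs on a single empty site
            lattice[i][j] = CO
            react(lattice, i, j, L)
    else:                                    # O2 needs two adjacent empty sites
        empties = [(a, b) for a, b in neighbors(i, j, L) if lattice[a][b] == EMPTY]
        if lattice[i][j] == EMPTY and empties:
            a, b = empties[rng.randrange(len(empties))]
            lattice[i][j] = lattice[a][b] = O
            react(lattice, i, j, L)
            if lattice[a][b] == O:           # the second O may also find a CO partner
                react(lattice, a, b, L)
```

Sweeping y_co on a large lattice and tracking coverages reproduces the model's two poisoning transitions (an O-poisoned surface at low y_co, a CO-poisoned one at high y_co).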

  6. Cheetah: Starspot modeling code

    NASA Astrophysics Data System (ADS)

    Walkowicz, Lucianne; Thomas, Michael; Finkelstein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (light curves) by calculating the modulation of the light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, the stellar inclination, the spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. For the sake of simplicity, Cheetah uses a uniform spot contrast and the minimum number of spots needed to produce a good fit, and it ignores bright regions.
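    As a toy illustration of the kind of forward model such codes evaluate (this sketch is not Cheetah's actual algorithm; function and parameter names are hypothetical), one can compute the flux deficit of a single small dark spot as the star rotates, ignoring limb darkening and spot foreshortening:

```python
import math

def spot_lightcurve_point(phase, lon, lat, r_spot, contrast, incl):
    """Fractional stellar flux with one small circular dark spot.
    phase, lon, lat, incl in radians; r_spot is the spot radius in stellar radii;
    contrast is the spot-to-photosphere intensity ratio (< 1 for a dark spot).
    mu is the cosine of the angle between the spot normal and the line of sight."""
    mu = (math.sin(incl) * math.cos(lat) * math.cos(lon + phase)
          + math.cos(incl) * math.sin(lat))
    if mu <= 0.0:
        return 1.0                       # spot is on the hidden hemisphere
    return 1.0 - (1.0 - contrast) * r_spot ** 2 * mu
```

Evaluating this over a grid of phases gives the characteristic rotational modulation; a fitting code adjusts lon, lat, r_spot and contrast to match observed photometry.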

  7. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include a static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reactions, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
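    LSENS itself is a Fortran package, but the role of an implicit integrator on a stiff problem can be sketched in a few lines (illustrative code, not part of LSENS). Backward Euler stays stable at step sizes far beyond the explicit stability limit; here it is applied to a stiff Prothero-Robinson test equation whose exact solution is y = cos(t):

```python
import math

def backward_euler(h, t_end):
    """Integrate the stiff ODE y' = -1000*(y - cos(t)) - sin(t), y(0) = 1,
    with backward (implicit) Euler. The exact solution is y(t) = cos(t).
    Explicit Euler would need h < 2/1000 for stability; this does not."""
    y, t = 1.0, 0.0
    while t < t_end - 1e-12:
        t_next = t + h
        # Solve y_next = y + h * (-1000*(y_next - cos(t_next)) - sin(t_next))
        # for y_next; the equation is linear, so the solve is one division.
        y = (y + h * (1000.0 * math.cos(t_next) - math.sin(t_next))) / (1.0 + 1000.0 * h)
        t = t_next
    return y

print(backward_euler(0.05, 1.0))  # close to cos(1) ≈ 0.5403 despite h >> 2/1000
```

For nonlinear kinetics the per-step solve becomes a Newton iteration on the full species vector, which is exactly the expensive machinery packages like LSENS implement efficiently.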

  8. Interfacing the JQMD and JAM Nuclear Reaction Codes to Geant4

    SciTech Connect

    Koi, Tatsumi

    2003-06-17

    Geant4 is a toolkit for the simulation of the passage of particles through matter. It provides a comprehensive set of tools for geometry, tracking, detector response, run, event and track management, visualization and user interfaces. Geant4 also has an abundant set of physics models that handle the diverse interactions of particles with matter across a wide energy range. However, there are also many well-established reaction codes currently used in the same fields where Geant4 is applied. In order to take advantage of these codes, we began to investigate their use from within the framework of Geant4. The first codes chosen for this investigation were the JAERI Quantum Molecular Dynamics (JQMD) and Jet AA Microscopic Transport Model (JAM) codes. JQMD is a QMD model code which is widely used to analyze various aspects of heavy ion reactions. JAM is a hadronic cascade model code which explicitly treats all established hadronic states, including resonances with explicit spin and isospin, as well as their anti-particles. We successfully developed interfaces between these codes and Geant4. These allow a user to construct a detector using the powerful material and geometrical capabilities of Geant4 while implementing nuclear reactions handled by the JQMD and JAM models. In this work the hadronic framework of Geant4 proved its flexibility and expandability.

  9. Code System to Calculate Integral Parameters with Reaction Rates from WIMS Output.

    SciTech Connect

    LESZCZYNSKI, FRANCISCO

    1994-10-25

    Version 00 of REACTION calculates different integral parameters related to neutron reactions on reactor lattices from reaction rates calculated with the WIMSD4 code, and compares them with experimental values.

  10. Visualized kinematics code for two-body nuclear reactions

    NASA Astrophysics Data System (ADS)

    Lee, E. J.; Chae, K. Y.

    2016-05-01

    The one or few nucleon transfer reaction has been a great tool for investigating the single-particle properties of a nucleus. Both stable and exotic beams are utilized to study transfer reactions in normal and inverse kinematics, respectively. Because many energy levels of the heavy recoil from the two-body nuclear reaction can be populated by using a single beam energy, identifying each populated state, which is not often trivial owing to high level-density of the nucleus, is essential. For identification of the energy levels, a visualized kinematics code called VISKIN has been developed by utilizing the Java programming language. The development procedure, usage, and application of the VISKIN is reported.
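    The core of any two-body kinematics code is the standard relation between ejectile angle and energy. A minimal nonrelativistic sketch (illustrative, not VISKIN itself; valid well below relativistic beam energies), using the textbook quadratic for the reaction A(a,b)B:

```python
import math

def ejectile_energy(T_a, m_a, m_b, m_B, Q, theta_deg):
    """Lab kinetic energy of ejectile b in A(a,b)B at lab angle theta (degrees),
    nonrelativistic two-body kinematics. Masses and energies in consistent units.
    Returns the (+) root of the standard kinematics quadratic:
    sqrt(T_b) = [r + sqrt(r^2 + s)] / (m_B + m_b), with
    r = sqrt(m_a*m_b*T_a)*cos(theta), s = (m_B + m_b)*(m_B*Q + (m_B - m_a)*T_a)."""
    th = math.radians(theta_deg)
    r = math.sqrt(m_a * m_b * T_a) * math.cos(th)
    s = (m_B + m_b) * (m_B * Q + (m_B - m_a) * T_a)
    root = (r + math.sqrt(r * r + s)) / (m_B + m_b)
    return root * root
```

Evaluating this for each excited state of the recoil (Q shifted down by the excitation energy) gives the family of energy-vs-angle curves that such codes plot for state identification.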

  11. Serpentinization reaction pathways: implications for modeling approach

    SciTech Connect

    Janecky, D.R.

    1986-01-01

    Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C and 500 bars can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.

  12. Biogeochemical Transport and Reaction Model (BeTR) v1

    SciTech Connect

    TANG, JINYUN

    2016-04-18

    The Biogeochemical Transport and Reaction Model (BeTR) is a Fortran 90 code that enables reactive transport modeling in the land modules of earth system models (e.g. CESM, ACME). The code adopts an object-oriented design and allows users to plug in their own biogeochemical (BGC) formulations/codes and compare them with other BGC codes existing in those ESMs. The code takes information on soil physics variables, such as temperature, moisture, soil density profile, and water flow, from a land model to track the movement of different chemicals in the presence of biogeochemical reactions.
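    The core numerical task described, moving a chemical through a soil column while it reacts, can be illustrated with a one-dimensional advection-diffusion-decay update (a toy sketch, not BeTR's actual discretization; all names are hypothetical):

```python
def step_tracer(c, dt, dx, v, D, k):
    """One explicit finite-difference step of 1D advection-diffusion-decay,
    dc/dt = -v dc/dx + D d2c/dx2 - k c, with zero-gradient boundaries.
    c is the list of cell concentrations; v > 0 flows toward increasing index."""
    n = len(c)
    new = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[0]
        right = c[i + 1] if i < n - 1 else c[-1]
        adv = -v * (c[i] - left) / dx               # first-order upwind advection
        dif = D * (right - 2.0 * c[i] + left) / dx ** 2
        new[i] = c[i] + dt * (adv + dif - k * c[i])
    return new
```

A real land-model tracer solver adds variable porosity and moisture, multiple interacting species, and implicit time stepping, but the structure (transport operator plus reaction source/sink per cell) is the same.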

  13. Coding the Assembly of Polyoxotungstates with a Programmable Reaction System.

    PubMed

    Ruiz de la Oliva, Andreu; Sans, Victor; Miras, Haralampos N; Long, De-Liang; Cronin, Leroy

    2017-05-01

    Chemical transformations are normally conducted in batch or flow mode, thereby allowing the chemistry to be temporally or spatially controlled, but these approaches are not normally combined dynamically. However, the investigation of the underlying chemistry masked by the self-assembly processes that often occur in one-pot reactions, and exploitation of the potential of complex chemical systems, requires control in both time and space. Additionally, maintaining the intermediate constituents of a self-assembled system "off equilibrium" and utilizing them dynamically at specific time intervals provide access to building blocks that cannot coexist under one-pot conditions and ultimately to the formation of new clusters. Herein, we implement the concept of a programmable networked reaction system, allowing us to connect discrete "one-pot" reactions that produce the building block {W11O38} ≡ {W11} under different conditions and control, in real time, the assembly of a series of polyoxometalate clusters {W12O42} ≡ {W12}, {W22O74} ≡ {W22} 1a, {W34O116} ≡ {W34} 2a, and {W36O120} ≡ {W36} 3a, using pH and ultraviolet-visible monitoring. The programmable networked reaction system reveals that it is possible to assemble a range of different clusters using {W11}-based building blocks, demonstrating the relationship between the clusters within the family of iso-polyoxotungstates, with the final structural motif being entirely dependent on the building block libraries generated in each separate reaction space within the network. In total, this approach led to the isolation of five distinct inorganic clusters using a "fixed" set of reagents and a fully automated sequence code, rather than five entirely different reaction protocols. As such, this approach allows us to discover, record, and implement complex one-pot reaction syntheses in a more general way, increasing the yield and reproducibility and potentially giving access to nonspecialists.

  14. Coding the Assembly of Polyoxotungstates with a Programmable Reaction System

    PubMed Central

    2017-01-01

    Chemical transformations are normally conducted in batch or flow mode, thereby allowing the chemistry to be temporally or spatially controlled, but these approaches are not normally combined dynamically. However, the investigation of the underlying chemistry masked by the self-assembly processes that often occur in one-pot reactions, and exploitation of the potential of complex chemical systems, requires control in both time and space. Additionally, maintaining the intermediate constituents of a self-assembled system “off equilibrium” and utilizing them dynamically at specific time intervals provide access to building blocks that cannot coexist under one-pot conditions and ultimately to the formation of new clusters. Herein, we implement the concept of a programmable networked reaction system, allowing us to connect discrete “one-pot” reactions that produce the building block {W11O38} ≡ {W11} under different conditions and control, in real time, the assembly of a series of polyoxometalate clusters {W12O42} ≡ {W12}, {W22O74} ≡ {W22} 1a, {W34O116} ≡ {W34} 2a, and {W36O120} ≡ {W36} 3a, using pH and ultraviolet–visible monitoring. The programmable networked reaction system reveals that it is possible to assemble a range of different clusters using {W11}-based building blocks, demonstrating the relationship between the clusters within the family of iso-polyoxotungstates, with the final structural motif being entirely dependent on the building block libraries generated in each separate reaction space within the network. In total, this approach led to the isolation of five distinct inorganic clusters using a “fixed” set of reagents and a fully automated sequence code, rather than five entirely different reaction protocols. As such, this approach allows us to discover, record, and implement complex one-pot reaction syntheses in a more general way, increasing the yield and reproducibility and potentially giving access to nonspecialists. PMID:28414229

  15. Impacts of Model Building Energy Codes

    SciTech Connect

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.; Liu, Bing; Bartlett, Rosemarie

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which lack statewide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  16. The SEL macroscopic modeling code

    NASA Astrophysics Data System (ADS)

    Glasser, A. H.; Tang, X. Z.

    2004-12-01

    The SEL (Spectral ELement) macroscopic modeling code for magnetically confined plasma combines adaptive spectral element spatial discretization and nonlinearly implicit time stepping via Newton's method on massively parallel computers. Static condensation is implemented to construct the Schur complement of the Jacobian matrix, which greatly accelerates the linear system solution and distinguishes the method from conventional Newton-Krylov schemes. Grid alignment with the evolving magnetic field, implemented with a variational principle, is a key component of grid adaptation in SEL, and is critical to toroidal plasma applications. Results of 2D magnetic reconnection are shown to illustrate the accuracy and efficiency of the parallel algorithms built on the Portable, Extensible Toolkit for Scientific Computation (PETSc) framework.
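    Static condensation eliminates element-interior unknowns before the global solve; with scalar blocks the algebra reduces to a few lines (an illustrative sketch, not SEL's implementation). For the block system [[A, B], [C, D]] [u, v] = [f, g], the interior unknown v is eliminated through the Schur complement S = A - B*D^-1*C:

```python
def condensed_solve(A, B, C, D, f, g):
    """Solve the 2x2 block system [[A, B], [C, D]] @ [u, v] = [f, g] with
    scalar blocks by static condensation: eliminate v via the Schur
    complement S = A - B*C/D, then back-substitute for v."""
    S = A - B * C / D            # Schur complement of the D block
    u = (f - B * g / D) / S      # condensed (boundary) solve
    v = (g - C * u) / D          # local back-substitution for the interior unknown
    return u, v
```

In a real FE code A, B, C, D are matrices, D is block-diagonal over elements (so D^-1 is cheap and local), and only the much smaller condensed system in u goes to the global Krylov solver.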

  17. Dual-code quantum computation model

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.

  18. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure-cap and into the waste containment zone at the Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which result in two recommended codes for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and the field data. From the results of this work, we conclude that the new codes perform nearly the same, although moving forward, we recommend HYDRUS-2D3D.

  19. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  20. Efficiency of a model human image code

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1987-01-01

    Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection of artifacts, the image-code size is about 1 bit/pixel.
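    The entropy lower bound mentioned here is straightforward to compute for any quantized layer (an illustrative sketch, not the Cortex transform code): the empirical first-order entropy bounds the average length achievable by any symbol-by-symbol code.

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Empirical first-order entropy in bits/sample: a lower bound on the
    average code length of any symbol-by-symbol code for this distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A heavily quantized layer whose samples cluster in the zero bin codes far
# more cheaply than a uniformly distributed one:
flat = entropy_bits([0, 1, 2, 3] * 8)            # uniform over 4 levels -> 2 bits
sparse = entropy_bits([0] * 28 + [1, 2, 3, 1])   # mostly zeros -> well under 1 bit
```

This is why coarse, masking-matched quantization drives the overall code size down: it concentrates the sample histogram and therefore the entropy.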

  1. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  2. Genetic coding and gene expression - new Quadruplet genetic coding model

    NASA Astrophysics Data System (ADS)

    Shankar Singh, Rama

    2012-07-01

    The successful demonstration of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but may also answer the complex and difficult question of the origin of life. It may make the 21st century a century of the biological sciences as well. Based on the central dogma of biology, genetic codons in conjunction with tRNA play a key role in translating the RNA bases into a sequence of amino acids, leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including relevance to the origin of life, will be presented.

  3. QGSM development for spallation reactions modeling

    NASA Astrophysics Data System (ADS)

    Baznat, M. I.; Chigrinov, S. E.; Gudima, K. K.

    2012-12-01

    The growing interest in spallation neutron sources, accelerator-driven systems, R&D of rare isotope beams, and development of external beam radiation therapy has necessitated the improvement of nuclear reaction models, both for stand-alone codes for the analysis of nuclear reactions and for event generators within Monte Carlo transport systems for calculations of interactions of high-energy particles with matter over a wide range of energies and in arbitrary 3D geometries of multicomponent targets. The exclusive approach to the description of nuclear reactions is the most effective for detailed calculation of inelastic interactions with atomic nuclei. It provides the correct description of particle production, single- and double-differential spectra, recoil, and fission product yields. This approach has been realized in the Quark Gluon String Model (QGSM) for nuclear reactions induced by photons, hadrons, and high energy heavy ions. In this article, improved versions of the QGSM model and a corresponding code have been developed, tested, and benchmarked against experimental data for neutron production in spallation reactions on thin and thick targets in the energy range from a few MeV to several GeV/nucleon.

  4. Model Policy on Student Publications Code.

    ERIC Educational Resources Information Center

    Iowa State Dept. of Education, Des Moines.

    In 1989, the Iowa Legislature created a new code section that defines and regulates student exercise of free expression in "official school publications." Also, the Iowa State Department of Education was directed to develop a model publication code that includes reasonable provisions for regulating the time, place, and manner of student…

  5. Transmutation Fuel Performance Code Thermal Model Verification

    SciTech Connect

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
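    The style of verification described, comparing a code's temperature solution against an independent solution, can be illustrated with the textbook case of a uniformly heated cylindrical pellet, whose centerline temperature rise is q R²/(4k). (Illustrative sketch only; the parameter values below are hypothetical, not FRAPCON's.)

```python
def centerline_temp(q_vol, radius, k, t_surface):
    """Analytic steady centerline temperature of a uniformly heated cylinder
    with constant conductivity k: T(0) = T_s + q_vol * R^2 / (4 k)."""
    return t_surface + q_vol * radius ** 2 / (4.0 * k)

def centerline_temp_numeric(q_vol, radius, k, t_surface, n=1000):
    """Same quantity via midpoint-rule integration of dT/dr = -q_vol * r / (2 k)
    from the surface inward, mimicking an independent numerical cross-check."""
    dr = radius / n
    t = t_surface
    for m in range(n):
        r_mid = radius - (m + 0.5) * dr
        t += q_vol * r_mid / (2.0 * k) * dr
    return t
```

Agreement between the analytic and numerical values is the simplest form of the code-to-code comparison the report performs between FRAPCON and ABAQUS.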

  6. Nuclear reaction modeling for energy applications

    NASA Astrophysics Data System (ADS)

    Kawano, Toshihiko; Talou, Patrick

    2008-10-01

    We discuss how nuclear reaction theories are utilized in nuclear energy applications. Neutron-induced compound nuclear reactions, which take place from the sub-eV energy range up to tens of MeV, are the most important mechanism to analyze experimental data, to predict unknown reaction cross sections, to evaluate nuclear data for databases such as ENDF (Evaluated Nuclear Data File), and to reduce uncertainties. To improve the predictive power of nuclear reaction theories in the future, further development of compound nuclear reaction theories for fission and radiative capture processes is crucial, since these reaction cross sections are especially important for nuclear technology. An acceptable accuracy for these cross sections has been achieved only when they were experimentally confirmed. However, compound reaction theory is becoming more important nowadays as many rare nuclides, such as americium, are involved in applications. We outline future challenges of nuclear reaction modeling in the GNASH/McGNASH code, which may yield great improvements in the prediction of nuclear reaction cross sections.
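    At the heart of compound-nucleus modeling is a branching-ratio formula: the compound nucleus forgets how it was formed and decays in proportion to channel transmission coefficients. A minimal Hauser-Feshbach-style sketch (illustrative only; it omits width-fluctuation corrections, spin-parity coupling, and level-density integration):

```python
def hf_cross_section(sigma_formation, transmissions, exit_channel):
    """Schematic Hauser-Feshbach channel cross section:
    sigma(a -> b) = sigma_CN * T_b / sum_c T_c,
    where sigma_CN is the compound-nucleus formation cross section and
    transmissions maps channel names to transmission coefficients T_c."""
    total = sum(transmissions.values())
    return sigma_formation * transmissions[exit_channel] / total
```

By construction the channel cross sections sum back to the formation cross section, which is the unitarity property evaluators rely on when partitioning a total cross section among fission, capture and scattering.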

  7. Diagnosis code assignment: models and evaluation metrics.

    PubMed

    Perotte, Adler; Pivovarov, Rimma; Natarajan, Karthik; Weiskopf, Nicole; Wood, Frank; Elhadad, Noémie

    2014-01-01

    The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments. We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of the others (flat classifier), and one that leverages the hierarchical nature of ICD9 codes in its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances among gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community. The hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5% and 27.6%, respectively, when trained on 20,533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art.
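    A simplified stand-in for the hierarchy-aware evaluation idea (not the authors' exact metric; the function is hypothetical): score a predicted ICD9 code by its tree distance to the gold-standard code, so that a miss inside the correct sub-tree costs less than a miss in a different chapter.

```python
def icd9_tree_distance(gold, predicted):
    """Edges from each code up to their deepest common ancestor in a dotted,
    ICD9-like hierarchy, comparing segment by segment.
    E.g. '428.0' vs '428.1' share the parent '428', so the distance is 2."""
    a, b = gold.split("."), predicted.split(".")
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)
```

Averaging such distances over a test set rewards a classifier that lands in the right sub-tree even when the leaf code is wrong, which exact-match precision and recall cannot distinguish.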

  8. Diagnosis code assignment: models and evaluation metrics

    PubMed Central

    Perotte, Adler; Pivovarov, Rimma; Natarajan, Karthik; Weiskopf, Nicole; Wood, Frank; Elhadad, Noémie

    2014-01-01

    Background and objective The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments. Methods We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of the others (flat classifier), and one that leverages the hierarchical nature of ICD9 codes in its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances among gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community. Results The hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5% and 27.6%, respectively, when trained on 20 533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Conclusions Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art. PMID:24296907

  9. Generation of Java code from Alvis model

    NASA Astrophysics Data System (ADS)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał

    2015-12-01

    Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) with a high-level programming language used to describe the behaviour of each individual agent. An Alvis model can be verified formally with model checking techniques applied to the model's LTS graph, which represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. Thus, the approach provides a method for automatic generation of a Java application from a formally verified Alvis model.

  10. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam; Sundararaghavan, Veera

    2015-06-01

    In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composites will also be discussed.

  11. Stochastic Modeling Of Biochemical Reactions

    DTIC Science & Technology

    2006-11-01

    chemical reactions. Often for these reactions, the dynamics of the first M-order statistical moments of the species populations do not form a closed...results a stochastic model for gene expression is investigated. We show that in gene expression mechanisms , in which a protein inhibits its own...chemical reactions [7, 8, 4, 9, 10]. Since one is often interested in only the first and second order statistical moments for the number of molecules of

  12. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.

  13. Turbulent group reaction model of spray dryer

    SciTech Connect

    Ma, H.K.; Huang, H.S.; Chiu, H.H.

    1987-01-01

    A turbulent group reaction model consisting of several sub-models was developed for the prediction of SO₂ removal efficiency in spray dryers. Mathematical models are developed on the basis of Eulerian-type turbulent Navier-Stokes equations for both gas and condensed phases with interphase transport considerations. The group reaction number, G, is defined as the ratio of the SO₂ absorption rate to a reference convective mass flux. This number represents the fraction of SO₂ absorbed into the lime slurry. The model is incorporated into a computer code which permits the investigation of spray dryer design concepts and operating conditions. Hence, it provides a theoretical basis for spray dryer performance optimization and scale-up. This investigation can be a practical guide to achieve high SO₂ removal efficiency in a spray dryer.
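
    The group reaction number defined in the abstract is a dimensionless ratio; a trivial sketch (argument names assumed):

```python
def group_reaction_number(so2_absorption_rate, reference_convective_flux):
    """Group reaction number G: ratio of the SO2 absorption rate to a
    reference convective mass flux; per the abstract it represents the
    fraction of SO2 absorbed into the lime slurry (names are assumptions)."""
    return so2_absorption_rate / reference_convective_flux
```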

  14. Fluid-Rock Interaction Models: Code Release and Results

    NASA Astrophysics Data System (ADS)

    Bolton, E. W.

    2006-12-01

    Numerical models our group has developed for understanding the role of kinetic processes during fluid-rock interaction will be released free to the public. We will also present results that highlight the importance of kinetic processes. The author is preparing manuals describing the numerical methods used, as well as "how-to" guides for using the models. The release will include input files, full in-line code documentation of the FORTRAN source code, and instructions for use of model output for visualization and analysis. The aqueous phase (weathering) and supercritical (mixed-volatile metamorphic) fluid flow and reaction models for porous media will be released separately. These codes will be useful as teaching and research tools. The codes may be run on current generation personal computers. Although other codes are available for attacking some of the problems we address, unique aspects of our codes include sub-grid-scale grain models to track grain size changes, as well as dynamic porosity and permeability. Also, as the flow field can change significantly over the course of the simulation, efficient solution methods have been developed for the repeated solution of Poisson-type equations that arise from Darcy's law. These include sparse-matrix methods as well as the even more efficient spectral-transform technique. Results will be presented for kinetic control of reaction pathways and for heterogeneous media. Codes and documentation for modeling intra-grain diffusion of trace elements and isotopes, and exchange of these between grains and moving fluids will also be released. The unique aspect of this model is that it includes concurrent diffusion and grain growth or dissolution for multiple mineral types (low-diffusion regridding has been developed to deal with the moving-boundary problem at the fluid/mineral interface). Results for finite diffusion rates will be compared to batch and fractional melting models. Additional code and documentation will be released.

  15. Modelling binary rotating stars by new population synthesis code bonnfires

    NASA Astrophysics Data System (ADS)

    Lau, H. H. B.; Izzard, R. G.; Schneider, F. R. N.

    2013-02-01

    bonnfires, a new generation of population synthesis code, can calculate nuclear reactions, various mixing processes and binary interactions in a timely fashion. We use this new population synthesis code to study the interplay between binary mass transfer and rotation. We aim to compare theoretical models with observations, in particular the surface nitrogen abundance and rotational velocity. Preliminary results show binary interactions may explain the formation of nitrogen-rich slow rotators and nitrogen-poor fast rotators, but more work needs to be done to estimate whether the observed frequencies of those stars can be matched.

  16. Reduction of chemical reaction models

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael

    1991-01-01

    An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.

  17. Propulsive Reaction Control System Model

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Phan, Linh H.; Serricchio, Frederick; San Martin, Alejandro M.

    2011-01-01

    This software models a propulsive reaction control system (RCS) for guidance, navigation, and control simulation purposes. The model includes the drive electronics, the electromechanical valve dynamics, the combustion dynamics, and thrust. This innovation follows the Mars Science Laboratory entry reaction control system design, and has been created to meet the Mars Science Laboratory (MSL) entry, descent, and landing simulation needs. It has been built to be plug-and-play on multiple MSL testbeds [analysis, Monte Carlo, flight software development, hardware-in-the-loop, and ATLO (assembly, test and launch operations) testbeds]. This RCS model is a C language program. It contains two main functions: the RCS electronics model function that models the RCS FPGA (field-programmable-gate-array) processing and commanding of the RCS valve, and the RCS dynamic model function that models the valve and combustion dynamics. In addition, this software provides support functions to initialize the model states, set parameters, access model telemetry, and access calculated thruster forces.

  18. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code...

  19. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code...

  20. NUCLEAR REACTION MODELING FOR RIA ISOL TARGET DESIGN

    SciTech Connect

    S. MASHNIK; ET AL

    2001-03-01

    Los Alamos scientists are collaborating with researchers at Argonne and Oak Ridge on the development of improved nuclear reaction physics for modeling radionuclide production in ISOL targets. This is being done in the context of the MCNPX simulation code, which is a merger of MCNP and the LAHET intranuclear cascade code, and simulates both nuclear reaction cross sections and radiation transport in the target. The CINDER code is also used to calculate the time-dependent nuclear decays for estimating induced radioactivities. They give an overview of the reaction physics improvements they are addressing, including intranuclear cascade (INC) physics, where recent high-quality inverse-kinematics residue data from GSI have led to INC spallation and fission model improvements; and preequilibrium reactions important in modeling (p,xn) and (p,xnyp) cross sections for the production of nuclides far from stability.

  1. Dynamical model of surrogate reactions

    SciTech Connect

    Aritomo, Y.; Chiba, S.; Nishio, K.

    2011-08-15

    A new dynamical model is developed to describe the whole process of surrogate reactions: Transfer of several nucleons at an initial stage, thermal equilibration of residues leading to washing out of shell effects, and decay of populated compound nuclei are treated in a unified framework. Multidimensional Langevin equations are employed to describe time evolution of collective coordinates with a time-dependent potential energy surface corresponding to different stages of surrogate reactions. The new model is capable of calculating spin distributions of the compound nuclei, one of the most important quantities in the surrogate technique. Furthermore, various observables of surrogate reactions can be calculated, for example, energy and angular distribution of ejectile and mass distributions of fission fragments. These features are important to assess validity of the proposed model itself, to understand mechanisms of the surrogate reactions, and to determine unknown parameters of the model. It is found that spin distributions of compound nuclei produced in the ¹⁸O + ²³⁸U → ¹⁶O + ²⁴⁰U* and ¹⁸O + ²³⁶U → ¹⁶O + ²³⁸U* reactions are equivalent and much less than 10ℏ and therefore satisfy conditions proposed by Chiba and Iwamoto [Phys. Rev. C 81, 044604 (2010)] if they are used as a pair in the surrogate ratio method.
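
    The Langevin description of collective coordinates can be sketched in one dimension with an Euler-Maruyama step (a generic toy, not the paper's multidimensional model; names and parameters are illustrative):

```python
import math
import random

def langevin_trajectory(potential_grad, q0, p0, gamma, temperature,
                        mass=1.0, dt=1e-3, steps=10000, seed=0):
    """Euler-Maruyama integration of a 1D Langevin equation:
        dq = (p/m) dt
        dp = (-dV/dq - gamma*p) dt + sqrt(2*gamma*m*T) dW
    A toy stand-in for the multidimensional collective-coordinate dynamics
    described in the abstract; units and parameter values are illustrative."""
    rng = random.Random(seed)
    q, p = q0, p0
    sigma = math.sqrt(2.0 * gamma * mass * temperature * dt)
    for _ in range(steps):
        p += (-potential_grad(q) - gamma * p) * dt + sigma * rng.gauss(0.0, 1.0)
        q += (p / mass) * dt
    return q, p
```

    With a harmonic potential (gradient q) the trajectory fluctuates around the well minimum, illustrating the thermal-equilibration stage.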

  2. PP: A graphics post-processor for the EQ6 reaction path code

    SciTech Connect

    Stockman, H.W.

    1994-09-01

    The PP code is a graphics post-processor and plotting program for EQ6, a popular reaction-path code. PP runs on personal computers, allocates memory dynamically, and can handle very large reaction path runs. Plots of simple variable groups, such as fluid and solid phase composition, can be obtained with as few as two keystrokes. Navigation through the list of reaction path variables is simple and efficient. Graphics files can be exported for inclusion in word processing documents and spreadsheets, and experimental data may be imported and superposed on the reaction path runs. The EQ6 thermodynamic database can be searched from within PP, to simplify interpretation of complex plots.

  3. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  4. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  5. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  6. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  7. 28 CFR 36.608 - Guidance concerning model codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidance concerning model codes. 36.608... Codes § 36.608 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  8. Rapid installation of numerical models in multiple parent codes

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.

  9. Dynamic Alignment Models for Neural Coding

    PubMed Central

    Kollmorgen, Sepp; Hahnloser, Richard H. R.

    2014-01-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
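
    The inference machinery underlying an MPH is that of a hidden Markov model. The standard forward algorithm, shown here for a generic HMM rather than the paired stimulus-response construction of the paper, computes the likelihood of an observation sequence:

```python
def hmm_forward(obs, init, trans, emit):
    """Standard hidden-Markov-model forward algorithm: returns the
    likelihood of an observation sequence. Generic machinery that
    MPH-style models build on, not the mixed-pair construction itself.

    init[s]: initial state probabilities; trans[t][s]: transition
    probability t -> s; emit[s][o]: probability of symbol o in state s."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[t] * trans[t][s] for t in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)
```

    Summing the likelihood over all possible observation sequences of a fixed length recovers 1, a quick sanity check on the recursion.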

  10. Clinical coding of prospectively identified paediatric adverse drug reactions--a retrospective review of patient records.

    PubMed

    Bellis, Jennifer R; Kirkham, Jamie J; Nunn, Anthony J; Pirmohamed, Munir

    2014-12-17

    National Health Service (NHS) hospitals in the UK use a system of coding for patient episodes. The coding system used is the International Classification of Disease (ICD-10). There are ICD-10 codes which may be associated with adverse drug reactions (ADRs) and there is a possibility of using these codes for ADR surveillance. This study aimed to determine whether ADRs prospectively identified in children admitted to a paediatric hospital were coded appropriately using ICD-10. The electronic admission abstract for each patient with at least one ADR was reviewed. A record was made of whether the ADR(s) had been coded using ICD-10. Of 241 ADRs, 76 (31.5%) were coded using at least one ICD-10 ADR code. Of the oncology ADRs, 70/115 (61%) were coded using an ICD-10 ADR code compared with 6/126 (4.8%) non-oncology ADRs (difference in proportions 56%, 95% CI 46.2% to 65.8%; p < 0.001). The majority of ADRs detected in a prospective study at a paediatric centre would not have been identified if the study had relied on ICD-10 codes as a single means of detection. Data derived from administrative healthcare databases are not reliable for identifying ADRs by themselves, but may complement other methods of detection.
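
    The reported difference in proportions can be checked with a standard two-proportion Wald interval (a sketch; the paper's exact CI method is not stated, so the lower bound differs slightly from the quoted 46.2%):

```python
import math

def two_proportion_diff(x1, n1, x2, n2, z=1.96):
    """Difference p1 - p2 between two proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Oncology ADRs coded: 70/115; non-oncology ADRs coded: 6/126
diff, lo, hi = two_proportion_diff(70, 115, 6, 126)  # diff ≈ 0.561
```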

  11. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
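
    The idea of generating data-access code from an abstract model description can be illustrated with a toy Python generator. The shape below is invented for illustration; the real framework works from a UML model and targets Python, C and Java:

```python
def generate_accessors(class_name, fields):
    """Toy metadata-driven code generation in the spirit of Memops: emit a
    Python class with type-checked accessors from an abstract field list.
    `fields` is a list of (name, type) pairs; this miniature API shape is
    an assumption, not the real Memops data model."""
    lines = [f"class {class_name}:", "    def __init__(self):"]
    for name, _ in fields:
        lines.append(f"        self._{name} = None")
    for name, typ in fields:
        lines.append(f"    def set_{name}(self, value):")
        lines.append(f"        assert isinstance(value, {typ.__name__}), 'metadata type check'")
        lines.append(f"        self._{name} = value")
        lines.append(f"    def get_{name}(self):")
        lines.append(f"        return self._{name}")
    return "\n".join(lines)

# Generate and load a class from a two-field "model"
namespace = {}
exec(generate_accessors("Spectrum", [("dimension", int), ("name", str)]), namespace)
spectrum = namespace["Spectrum"]()
spectrum.set_dimension(2)
```

    The generated accessors carry the validity checking that would otherwise be hand-written for every class, which is the maintenance saving the abstract describes.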

  12. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... Chapter 3. (e) Materials standards Chapter 26. (f) Construction components Part III. (g) Glass Chapter 2... dwellings (NFPA 70A-1990)....

  13. Modelling Students' Visualisation of Chemical Reaction

    ERIC Educational Resources Information Center

    Cheng, Maurice M. W.; Gilbert, John K.

    2017-01-01

    This paper proposes a model-based notion of "submicro representations of chemical reactions". Based on three structural models of matter (the simple particle model, the atomic model and the free electron model of metals), we suggest there are two major models of reaction in school chemistry curricula: (a) reactions that are simple…

  14. Tokamak Simulation Code modeling of NSTX

    SciTech Connect

    S.C. Jardin; S. Kaye; J. Menard; C. Kessel; A.H. Glasser

    2000-07-20

    The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC with initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments where a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon that is similar to the plasma halo current that sometimes develops during a plasma disruption.

  15. Code Differentiation for Hydrodynamic Model Optimization

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques applied in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
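
    Forward-mode AD propagates derivatives alongside values. A minimal dual-number sketch of the idea (a generic illustration, not the source-transformation AD tools applied to the hydrocode):

```python
class Dual:
    """Minimal forward-mode automatic differentiation with dual numbers.
    Each value carries its derivative (dot) with respect to the input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx by seeding the derivative component with 1."""
    return f(Dual(x, 1.0)).dot
```

    Forward mode costs one pass per input parameter, which is why the abstract's adjoint (reverse) mode wins once the parameter count grows.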

  16. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
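
    The "three rate coefficient parameters" per reaction are those of the modified Arrhenius form; a sketch (values below are illustrative, not taken from LSENS):

```python
import math

def rate_coefficient(T, A, b, Ea, R=8.314):
    """Modified Arrhenius rate coefficient k = A * T**b * exp(-Ea/(R*T)).
    A, b and Ea are the three per-reaction rate-coefficient parameters;
    units follow whatever convention the mechanism uses."""
    return A * T**b * math.exp(-Ea / (R * T))
```

    Sensitivity analysis of the kind LSENS performs asks how the solution shifts when A, b or Ea is perturbed.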

  17. Modeling the complex bromate-iodine reaction.

    PubMed

    Machado, Priscilla B; Faria, Roberto B

    2009-05-07

    In this article, it is shown that the FLEK model (ref 5) is able to model the experimental results of the bromate-iodine clock reaction. Five different complex chemical systems (the bromate-iodide clock and oscillating reactions, the bromite-iodide clock and oscillating reactions, and now the bromate-iodine clock reaction) are adequately accounted for by the FLEK model.

  18. Population Coding of Visual Space: Modeling

    PubMed Central

    Lehky, Sidney R.; Sereno, Anne B.

    2011-01-01

    We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
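
    The encoding scheme can be sketched in one spatial dimension: overlapping Gaussian RFs produce an unlabeled response vector, and only relative distances between such vectors feed the multidimensional-scaling step (a toy with an assumed 1D geometry, not the paper's 2D/3D setup):

```python
import math

def population_response(stimulus, centers, sigma):
    """Unlabeled firing-rate vector of a population of overlapping
    Gaussian receptive fields (1D toy version of the model's space)."""
    return [math.exp(-(stimulus - c) ** 2 / (2.0 * sigma ** 2)) for c in centers]

def response_distance(r1, r2):
    """Euclidean distance between population response vectors; relative
    distances like this are what intrinsic coding extracts, with no RF
    labels attached to individual rates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
```

    With sufficiently large sigma, nearby stimuli yield nearby response vectors, so the recovered representation preserves the metric of physical space.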

  19. Creating Models for the ORIGEN Codes

    NASA Astrophysics Data System (ADS)

    Louden, G. D.; Mathews, K. A.

    1997-10-01

    Our research focused on the development of a methodology for creating reactor-specific cross-section libraries for nuclear reactor and nuclear fuel cycle analysis codes available from the Radiation Safety Information Computational Center. The creation of problem-specific models allows more detailed analysis than is possible using the generic models provided with ORIGEN2 and ORIGEN-S. A model of the Ohio State University Research Reactor was created using the Coupled 1-D Shielding Analysis (SAS2H) module of the Modular Code System for Performing Standardized Computer Analyses for Licensing Evaluation (SCALE4.3). Six different reactor core models were compared to identify the effect of changing the SAS2H Larger Unit Cell on the predicted isotopic composition of spent fuel. Seven different power histories were then applied to a Core-Average model to determine the ability of ORIGEN-S to distinguish spent fuel produced under varying operating conditions. Several actinide and fission product concentrations were identified which were sensitive to the power history; however, the majority of the isotope concentrations were not dependent on operating history.

  20. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

    densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
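
    The harmonic part of such a disturbance model is typically a sum of sinusoids at multiples of the wheel speed whose amplitudes scale as speed squared, matching the "specified functions of frequency, typically speed-squared" above. A sketch (coefficients invented for illustration):

```python
import math

def harmonic_disturbance(t, wheel_speed, harmonics):
    """Reaction wheel disturbance force as a sum of harmonics of the wheel
    speed with speed-squared amplitudes. `harmonics` is a list of
    (harmonic_number, coefficient, phase) tuples; the structure and any
    coefficient values are illustrative, not RWDMES outputs."""
    return sum(c * wheel_speed ** 2 * math.sin(h * wheel_speed * t + ph)
               for h, c, ph in harmonics)
```

    Fitting the coefficients and harmonic numbers to measured PSDs is the nonlinear least-squares step the abstract describes.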

  1. Numerical modelling of hydration reactions

    NASA Astrophysics Data System (ADS)

    Vrijmoed, Johannes C.; John, Timm

    2017-04-01

    Mineral reactions are generally accompanied by volume changes. Observations in rocks and thin sections indicate that these often occurred by replacement reactions involving a fluid phase. Frequently, the volume of the original rock or mineral seems to be conserved. If the density of the solid reaction products is higher than that of the reactants, the associated solid volume decrease generates space for a fluid phase. In other words, porosity is created. The opposite is true for an increase in solid volume during reaction, which leads to a porosity reduction. This slows down and may even stop the reaction if it needs fluid as a reactant. Understanding the progress of reactions and their rates is important because reactions generally change geophysical and rock-mechanical properties, which in turn affect geodynamical processes and seismic properties. We studied the case of hydration of eclogite to blueschist in a subduction zone setting. Eclogitized pillow basalt structures from the Tian-Shan orogeny are transformed to blueschist on the rims of the pillow (van der Straaten et al., 2008). Fluid pathways existed between the pillow structures. The preferred hypothesis of blueschist formation is to supply the fluid for hydration from the pillow margins progressing inward. Using numerical modelling we simulate this coupled reaction-diffusion process. Porosity and fluid pressure evolution are coupled to local thermodynamic equilibrium and density changes. The first rim of blueschist that forms around the eclogite pillow increases volume to such a degree that the system is clogged and the reaction stops. Nevertheless, the field evidence suggests the blueschist formation continued. To prevent the system from clogging, a high incoming pore fluid pressure on the pillow boundaries is needed along with removal of mass from the system to accommodate the volume changes. The only other possibility is to form blueschist from any remaining fluid stored in the core of the pillow.
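
    The clogging feedback described above can be caricatured in a few lines: a porosity-dependent diffusivity plus a porosity-destroying reaction stalls hydration away from the fluid source. All names and parameter values below are invented for illustration; this is not the paper's coupled thermodynamic model:

```python
def hydrate_rim(n=50, steps=2000, d0=1.0, dt=1e-4, dx=0.1,
                phi0=0.1, k_react=5.0, clog=50.0):
    """Toy 1D reaction-diffusion sketch of rim hydration: fluid diffuses in
    from the left boundary, the hydration reaction consumes fluid and
    destroys porosity, and diffusivity scales with porosity, so the rim
    can clog and stall the reaction front. Parameters are made up."""
    c = [0.0] * n          # pore-fluid concentration
    phi = [phi0] * n       # porosity
    progress = [0.0] * n   # integrated reaction progress
    for _ in range(steps):
        c[0] = 1.0         # fixed fluid supply at the pillow margin
        new = c[:]
        for i in range(1, n - 1):
            d = d0 * phi[i] / phi0                # porosity-dependent diffusivity
            reacted = dt * k_react * c[i] * phi[i]
            new[i] += dt * d * (c[i-1] - 2.0 * c[i] + c[i+1]) / dx**2 - reacted
            progress[i] += reacted
            phi[i] = max(phi[i] - clog * reacted, 1e-4)  # solid volume gain clogs pores
        c = new
    return progress, phi
```

    Running the sketch concentrates reaction progress and porosity loss at the rim, the qualitative behaviour the abstract attributes to the first blueschist rim.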

  2. The Spatial Coding Model of Visual Word Identification

    ERIC Educational Resources Information Center

    Davis, Colin J.

    2010-01-01

    Visual word identification requires readers to code the identity and order of the letters in a word and match this code against previously learned codes. Current models of this lexical matching process posit context-specific letter codes in which letter representations are tied to either specific serial positions or specific local contexts (e.g.,…

  3. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  4. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National...

  5. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  6. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National...

  7. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  8. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  9. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    NASA Astrophysics Data System (ADS)

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-01

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for simulation of the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; parameters and different models of nuclear level density, one of the most important components of statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.
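
    The abstract's central adjustable ingredient, the nuclear level density, can be illustrated with the back-shifted Fermi-gas form, one of the level-density options exposed by codes such as TALYS and EMPIRE. The parameter values below are illustrative, not fitted:

```python
import math

def fermi_gas_level_density(E, a=8.0, delta=1.0, sigma=3.0):
    """Back-shifted Fermi-gas level density rho(E) in 1/MeV.

    rho(U) = exp(2*sqrt(a*U)) / (12*sqrt(2)*sigma*a**0.25*U**1.25),
    with U = E - delta the back-shifted excitation energy.
    a: level density parameter (1/MeV); delta: pairing back-shift (MeV);
    sigma: spin cutoff parameter.  All values here are illustrative.
    """
    U = E - delta
    if U <= 0.0:
        return 0.0
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a**0.25 * U**1.25)
```

Tuning a, delta, and sigma against measured excitation functions is the kind of adjustment the authors describe when optimizing the predicted yields.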

  10. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of {sup 64}Cu and {sup 67}Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    SciTech Connect

    Nasrabadi, M. N. Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for simulation of the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; parameters and different models of nuclear level density, one of the most important components of statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  11. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
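
    The sensitivity coefficients described above can be illustrated on the smallest possible kinetics problem: a single first-order reaction y' = -k y, whose sensitivity s = dy/dk obeys the augmented equation s' = -y - k s. A self-contained RK4 sketch (not LSENS itself; all values illustrative):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for a vector ODE."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

k_rate, y0, T, n = 2.0, 1.0, 1.0, 1000

def rhs(t, state):
    y, s = state
    # augmented system: y' = -k y;  s' = dF/dk + (dF/dy) s = -y - k s
    return [-k_rate * y, -y - k_rate * s]

state, h, t = [y0, 0.0], T / n, 0.0
for _ in range(n):
    state = rk4_step(rhs, t, state, h)
    t += h
```

The analytic solution y = y0 exp(-kT), s = -T y0 exp(-kT) provides a direct check on the integration.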

  12. Direct containment heating models in the CONTAIN code

    SciTech Connect

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  13. Computerized reduction of elementary reaction sets for combustion modeling

    NASA Technical Reports Server (NTRS)

    Wikstrom, Carl V.

    1991-01-01

    If the entire set of elementary reactions is to be solved in the modeling of chemistry in computational fluid dynamics, a set of stiff ordinary differential equations must be integrated. Some of the reactions take place at very high rates, requiring short time steps, while others take place more slowly and make little progress in the short time step integration. The goal is to develop a procedure to automatically obtain sets of finite rate equations, consistent with partial equilibrium assumptions, from an elementary set appropriate to local conditions. The possibility of computerized reaction reduction was demonstrated. However, the ability to use the reduced reaction set depends on the ability of the CFD approach to incorporate partial equilibrium calculations into the computer code. Therefore, the results should be tested on a code with partial equilibrium capability.
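
    The reduction idea can be demonstrated on a hypothetical three-species system: a fast reversible pair A <=> B and a slow drain B -> C. Integrating the full stiff set requires tiny time steps, while the partial-equilibrium reduction leaves a single slow equation (all rate constants illustrative):

```python
import math

# Full stiff system: fast A <=> B (kf, kr), slow B -> C (ks).
kf, kr, ks = 1000.0, 1000.0, 1.0
A, B, C = 1.0, 0.0, 0.0
dt = 2e-5                      # explicit step limited by the FAST rates
for _ in range(50000):         # integrate to t = 1
    rAB = kf * A - kr * B
    rBC = ks * B
    A += dt * (-rAB)
    B += dt * (rAB - rBC)
    C += dt * rBC

# Partial-equilibrium reduction: A and B equilibrate instantly (B = pool/2
# here, since Keq = kf/kr = 1), leaving d(pool)/dt = -ks*pool/2, solvable
# analytically with no stiffness at all.
pool0 = 1.0
C_reduced = pool0 * (1.0 - math.exp(-ks * 1.0 / 2.0))
```

The reduced model reproduces the slow product evolution while eliminating the fast time scale, which is exactly the payoff the abstract describes.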

  14. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable-such as legacy and COTS software-and programs that use features-such as pointers, structures, and object-orientation-that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  15. ER@CEBAF: Modeling code developments

    SciTech Connect

    Meot, F.; Roblin, Y.

    2016-04-13

    A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant, a version of CEBAF has been developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line, where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  16. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  17. Code C# for chaos analysis of relativistic many-body systems with reactions

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.

    2012-04-01

    In this work we present a reaction module for “Chaos Many-Body Engine” (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object-oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), can be supplied as a parameter, using a specific XML input file. Inspired by the Poincaré section, we also propose the “Clusterization Map” as a new, intuitive analysis method for many-body systems. As an example, we implemented a numerical toy model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions. Catalogue identifier: AEGH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 184 628 No. of bytes in distributed program, including test data, etc.: 7 905 425 Distribution format: tar.gz Programming language: Visual C#.NET 2005 Computer: PC Operating system: Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread. One processor used for each many-body system. RAM: 128 Megabytes Classification: 6.2, 6.5 Catalogue identifier of previous version: AEGH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464 External routines: Net Framework 2.0 Library Does the new version supersede the previous version?: Yes Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions. Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions
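
    The XML-driven reaction list can be sketched as follows. The element and attribute names below are invented for illustration, since the actual input schema of Chaos Many-Body Engine is not shown in the summary:

```python
import xml.etree.ElementTree as ET

# Hypothetical layout of a reaction-list input file (illustrative only).
XML = """
<reactions>
  <reaction probability="0.3" cross_section="1.2">
    <in>p p</in>
    <out>p n pi+</out>
  </reaction>
  <reaction probability="0.1" cross_section="0.4">
    <in>p n</in>
    <out>d gamma</out>
  </reaction>
</reactions>
"""

def load_reactions(text):
    """Parse each <reaction> into a dict of particles and properties."""
    reactions = []
    for node in ET.fromstring(text).findall("reaction"):
        reactions.append({
            "in": node.findtext("in").split(),
            "out": node.findtext("out").split(),
            "probability": float(node.get("probability")),
            "cross_section": float(node.get("cross_section")),
        })
    return reactions

rxns = load_reactions(XML)
```

Supplying reactions as data rather than code is what makes the library customizable: new channels need only a new XML entry, not a rebuild.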

  18. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  19. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  20. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  1. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  2. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  3. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    DOE PAGES

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.; ...

    2013-11-08

    Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the EMPIRE reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Furthermore, excitation energy dependencies were found to be inconsistent with the Fermi-gas model.

  4. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    NASA Astrophysics Data System (ADS)

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Bürger, A.; Görgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.

    2013-11-01

    Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the EMPIRE reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Excitation energy dependencies were found to be inconsistent with the Fermi-gas model.

  5. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR...

  6. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    SciTech Connect

    Plante, Ianik; Devroye, Luc

    2015-09-15

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort to develop models to understand the role of chemical reactions in the radiation effects on cells and tissues, and may eventually be included in event-based models of space radiation risks. As many reactions in biological systems are of this type, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.

  7. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2015-09-01

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort to develop models to understand the role of chemical reactions in the radiation effects on cells and tissues, and may eventually be included in event-based models of space radiation risks. As many reactions in biological systems are of this type, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.
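
    For intuition, the IRT method mentioned above can be sketched for a simpler case than the reversible ABCD reaction treated in the paper: an irreversible, fully diffusion-controlled pair, where the Smoluchowski result gives the reaction probability W(t) = (R/r0) erfc((r0 - R)/sqrt(4 D t)). Inverting W by bisection yields a sampled reaction time:

```python
import math, random

def reaction_time(r0, R, D, rng):
    """IRT sample for an irreversible, fully diffusion-controlled pair.

    The ultimate reaction probability is R/r0 (Smoluchowski); with
    probability 1 - R/r0 the pair escapes (return None).  Otherwise invert
    W(t) = (R/r0)*erfc((r0-R)/sqrt(4*D*t)) for t by geometric bisection,
    which works because W(t) increases monotonically with t.
    """
    u = rng.random()
    if u >= R / r0:
        return None                       # pair diffuses apart, never reacts
    target = u * r0 / R                   # value of erfc(...) to hit
    lo, hi = 1e-12, 1e12
    for _ in range(120):
        mid = math.sqrt(lo * hi)
        w = math.erfc((r0 - R) / math.sqrt(4.0 * D * mid))
        if w < target:
            lo = mid                      # reaction probability too low: later t
        else:
            hi = mid
    return math.sqrt(lo * hi)

rng = random.Random(1)
times = [reaction_time(2.0, 1.0, 1.0, rng) for _ in range(5000)]
frac = sum(t is not None for t in times) / len(times)  # should approach R/r0 = 0.5
```

The paper's contribution is the analogous sampling for the much harder partially diffusion-controlled, reversible case, done without look-up tables.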

  8. A secure arithmetic coding based on Markov model

    NASA Astrophysics Data System (ADS)

    Duan, Lili; Liao, Xiaofeng; Xiang, Tao

    2011-06-01

    We propose a modification of the standard arithmetic coding that can be applied to multimedia coding standards at the entropy coding stage. In particular, we introduce a randomized arithmetic coding scheme based on an order-1 Markov model that achieves encryption by scrambling the symbols' order in the model and choosing the relevant order's probability randomly, with high compression efficiency and good security. Experimental results and security analyses indicate that the algorithm can not only resist existing attacks based on arithmetic coding, but is also immune to other kinds of cryptanalysis.
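
    For reference, the baseline that such a scheme randomizes — arithmetic coding driven by an order-1 Markov model — can be sketched with floating-point intervals. This is fine for short messages only (real coders use integer renormalization), and the conditional probabilities below are illustrative:

```python
ALPHA = "abc"
# Order-1 model: P(next symbol | previous symbol); "^" is the start context.
COND = {
    "^": {"a": 0.5, "b": 0.3, "c": 0.2},
    "a": {"a": 0.2, "b": 0.5, "c": 0.3},
    "b": {"a": 0.4, "b": 0.2, "c": 0.4},
    "c": {"a": 0.3, "b": 0.3, "c": 0.4},
}

def cum(prev, sym):
    """Cumulative interval [lo, hi) of `sym` in context `prev`."""
    lo = 0.0
    for s in ALPHA:
        p = COND[prev][s]
        if s == sym:
            return lo, lo + p
        lo += p
    raise ValueError(sym)

def encode(msg):
    low, width, prev = 0.0, 1.0, "^"
    for sym in msg:                       # narrow the interval per symbol
        a, b = cum(prev, sym)
        low, width = low + width * a, width * (b - a)
        prev = sym
    return low + width / 2                # any number inside the final interval

def decode(x, n):
    out, low, width, prev = [], 0.0, 1.0, "^"
    for _ in range(n):
        target = (x - low) / width        # position of x in current interval
        for s in ALPHA:
            a, b = cum(prev, s)
            if a <= target < b:
                out.append(s)
                low, width, prev = low + width * a, width * (b - a), s
                break
    return "".join(out)
```

The scheme in the paper keeps this machinery but randomizes the symbol ordering and probability selection inside `cum`, so that decoding requires the key.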

  9. Evaluation of reaction rates in streambed sediments with seepage flow: a novel code

    NASA Astrophysics Data System (ADS)

    Boano, Fulvio; De Falco, Natalie; Arnon, Shai

    2015-04-01

    Streambed interfaces represent hotspots for nutrient transformations because they host different microbial species which perform many heterotrophic and autotrophic reactions. The evaluation of these reaction rates is crucial to assess the fate of nutrients in riverine environments, and it is often performed through the analysis of concentrations from water samples collected along vertical profiles. The most commonly employed evaluation tool is the Profile code developed by Berg et al. (1998), which determines reaction rates by fitting observed concentrations to a diffusion-reaction equation that neglects the presence of water flow within the sediments. However, hyporheic flow is extremely common in streambeds, where solute transport is often controlled by advection rather than diffusion. There is hence a pressing need to develop new methods that can be applied even to advection-dominated sediments. This contribution fills this gap by presenting a novel approach that extends the method proposed by Berg et al. (1998). This new approach includes the influence of vertical solute transport by upwelling or downwelling water, and is thus suited to the typical flow conditions of stream sediments. The code is applied to vertical profiles of dissolved oxygen from a laboratory flume designed to mimic the complex flow conditions of real streams. The results show that it is fundamental to consider water flow to obtain reliable estimates of reaction rates in streambeds. Berg, P., N. Risgaard-Petersen, and S. Rysgaard, 1998, Interpretation of measured concentration profiles in the sediment porewater, Limnology and Oceanography, 43:1500-1510.
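
    The extension described above — fitting a reaction rate to a profile while accounting for vertical seepage — can be sketched for the steady 1D balance D C'' - v C' + R = 0 with a constant (zero-order) rate R. Because the model is linear in R, the rate can be recovered from an observed profile by projection onto the sensitivity solution. All parameter values are illustrative, and this is not the authors' code:

```python
def solve_profile(R, D=1.0, v=0.5, n=50, L=1.0, c_top=1.0, c_bot=0.0):
    """Steady advection-diffusion-reaction: D*C'' - v*C' + R = 0, fixed ends."""
    dx = L / n
    # Tridiagonal coefficients for interior nodes 1..n-1 (central differences)
    a = [D/dx**2 + v/(2*dx)] * (n - 1)      # sub-diagonal   (C[i-1])
    b = [-2.0 * D/dx**2] * (n - 1)          # main diagonal  (C[i])
    cd = [D/dx**2 - v/(2*dx)] * (n - 1)     # super-diagonal (C[i+1])
    d = [-R] * (n - 1)
    d[0] -= a[0] * c_top                    # fold boundary values into RHS
    d[-1] -= cd[-1] * c_bot
    for i in range(1, n - 1):               # Thomas algorithm: forward sweep
        m = a[i] / b[i - 1]
        b[i] -= m * cd[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * (n - 1)                     # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        x[i] = (d[i] - cd[i] * x[i + 1]) / b[i]
    return [c_top] + x + [c_bot]

# Recover R from a (synthetic) observed profile by linearity in R.
R_true = 2.0
c_obs = solve_profile(R_true)
c0 = solve_profile(0.0)
g = [x - y for x, y in zip(solve_profile(1.0), c0)]   # sensitivity dC/dR
R_est = sum((o - z) * gi for o, z, gi in zip(c_obs, c0, g)) / sum(gi * gi for gi in g)
```

Dropping the v term recovers the diffusion-only fit of the original Profile code; keeping it is what makes the estimate valid under upwelling or downwelling flow.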

  10. Minimum cost model energy code envelope requirements

    SciTech Connect

    Connor, C.C.; Lucas, R.G.; Turchen, S.J.

    1994-08-01

    This paper describes the analysis underlying development of the U.S. Department of Energy's proposed revisions of the Council of American Building Officials (CABO) 1993 Model Energy Code (MEC) building thermal envelope requirements for single-family and low-rise multifamily residences. This analysis resulted in revised MEC envelope conservation levels based on an objective methodology that determined the minimum-cost combination of energy efficiency measures (EEMs) for residences in different locations around the United States. The proposed MEC revision resulted from a cost-benefit analysis from the consumer's perspective. In this analysis, the costs of the EEMs were balanced against the benefit of energy savings. Detailed construction, financial, economic, and fuel cost data were compiled, described in a technical support document, and incorporated in the analysis. A cost minimization analysis was used to compare the present value of the total long-run costs for several alternative EEMs and to select the EEMs that achieved the lowest cost for each location studied. This cost minimization was performed for 881 cities in the United States, and the results were put into the format used by the MEC. This paper describes the methodology for determining minimum-cost energy efficiency measures for ceilings, walls, windows, and floors and presents the results in the form of proposed revisions to the MEC. The proposed MEC revisions would, on average, increase the stringency of the MEC by about 10%.
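
    The cost-minimization step can be sketched as a small search over combinations of energy efficiency measures. Every number below is hypothetical, standing in for the construction and fuel cost data compiled in the study:

```python
from itertools import product

# Hypothetical EEM options per component: (label, first cost $, annual
# heating energy saved in MBtu).  All figures are illustrative only.
CEILING = [("R-19", 0, 0.0), ("R-30", 300, 3.0), ("R-38", 520, 3.5)]
WALL    = [("R-11", 0, 0.0), ("R-19", 700, 6.0)]
WINDOW  = [("double", 0, 0.0), ("low-e", 900, 3.5)]

FUEL_PRICE = 9.0    # $/MBtu (illustrative)
PW_FACTOR = 14.9    # present-worth factor, e.g. ~30 years at a real discount rate
BASE_USE = 60.0     # MBtu/yr with the cheapest option everywhere

def life_cycle_cost(combo):
    """First cost plus present value of remaining energy cost."""
    first = sum(cost for _, cost, _ in combo)
    saved = sum(s for _, _, s in combo)
    return first + (BASE_USE - saved) * FUEL_PRICE * PW_FACTOR

best = min(product(CEILING, WALL, WINDOW), key=life_cycle_cost)
```

With these numbers the ceiling and wall upgrades pay for themselves in present-value terms while the window upgrade does not, illustrating why the minimum-cost combination varies with local fuel prices and construction costs.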

  11. Genetic code: an alternative model of translation.

    PubMed

    Damjanović, Zvonimir M; Rakocević, Miloje M

    2005-06-01

    Our earlier studies of translation have led us to a specific numeric coding of nucleotides (A = 0, C = 1, G = 2, and U = 3), that is, a quaternary numeric system; to an ordering of digrams and codons (read right to left: .yx and Z.yx) as ordinal numbers from 000 to 111; and to seek a hypothetical transformation of mRNA to the 20 canonical amino acids. In this work, we show that amino acids match the ordinal number, that is, follow as transforms of their respective digrams and/or mRNA codons. Sixteen digrams and their respective amino acids appear as a parallel (discrete) array. A first approximation of translation in this view is demonstrated by a "twisted" spiral on the side of "phantom" codons and by ordering amino acids in the form of a cross on the other side, whereby the transformation of digrams and/or phantom codons to amino acids appears to be one-to-one! Classification of canonical amino acids derived from our dynamic model clarifies physicochemical criteria, such as purinity, pyrimidinity, and particularly codon rules. The system implies both the rules of Siemion and Siemion and of Davidov, as well as balances of atomic and nucleon numbers within groups of amino acids. Formalization in this system offers the possibility of extrapolating backward to the initial organization of heredity.
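
    The quaternary coding in the abstract can be made concrete. How the right-to-left reading assigns digit significance is our assumption here (we take the rightmost base as the least significant base-4 digit):

```python
# Nucleotide digits from the abstract: A = 0, C = 1, G = 2, U = 3.
DIGIT = {"A": 0, "C": 1, "G": 2, "U": 3}

def codon_ordinal(codon):
    """Ordinal number of a codon in base 4, leftmost base most significant."""
    value = 0
    for base in codon:
        value = value * 4 + DIGIT[base]
    return value
```

Under this reading the 64 codons map to ordinals 0 ("AAA") through 63 ("UUU"), the range the authors order their "phantom" codons over.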

  12. Astrophysical Plasmas: Codes, Models, and Observations

    NASA Astrophysics Data System (ADS)

    Canto, Jorge; Rodriguez, Luis F.

    2000-05-01

    The conference Astrophysical Plasmas: Codes, Models, and Observations was aimed at discussing the most recent advances, and some of the avenues for future work, in the field of cosmic plasmas. It was held during the week of October 25th to 29th, 1999, at the Centro Nacional de las Artes (CNA) in Mexico City, Mexico, a modern and impressive center of theaters and schools devoted to the performing arts. This was an excellent setting for reviewing the present status of observational (both on earth and in space) and theoretical research, as well as some of the recent advances of laboratory research that are relevant to astrophysics. The demography of the meeting was impressive: 128 participants from 12 countries on 4 continents; a large fraction of them, 29%, were women, and most were young (either recent Ph.D.s or graduate students). This created a very lively and friendly atmosphere that made it easy to move from the ionization of the Universe and high-redshift absorbers, to Active Galactic Nuclei (AGNs) and X-rays from galaxies, to the gas in the Magellanic Clouds and our Galaxy, to the evolution of H II regions and Planetary Nebulae (PNe), and to the details of plasmas in the Solar System and the lab. All these topics were well covered with 23 invited talks, 43 contributed talks, and 22 posters. Most of them are contained in these proceedings, in the same order of presentation.

  13. Spectral algorithm modifications for (alpha,n) reactions on beryllium in the SOURCES computer code

    NASA Astrophysics Data System (ADS)

    Shores, Erik Frederick

    Energy spectra calculations have been refined for the low energy (e.g. <2.5 MeV) neutron continuum produced in the alpha + Be reaction. As expected, the new distributions shift neutrons to maxima <1 MeV and support previous assertions that the continuum is the result of a sequential three-body decay process. Appropriate kinematic descriptions were added to the SOURCES computer code, thus providing for the first time a simple algorithm to include a break-up reaction. Difficulties associated with spectral calculations are discussed, and three calculations are compared to experimental data for AmBe, PuBe, and PoBe neutron sources. This work represents a new tool to predict neutron energy spectra in the lower energy regime and, upon submission to the Radiation Safety Information Computational Center (the code distributor), will ultimately upgrade SOURCES from version 4C to 5A.

  14. Review and verification of CARE 3 mathematical model and code

    NASA Technical Reports Server (NTRS)

    Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.

    1983-01-01

    This report documents the CARE-III mathematical model and code verification performed by Boeing Computer Services. The mathematical model was verified for permanent and intermittent faults; the transient fault model was not addressed. The code verification was performed on CARE-III, Version 3. A CARE-III Version 4, which corrects deficiencies identified in Version 3, is being developed.

  15. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Astrophysics Data System (ADS)

    Chitsomboon, Tawit

    1992-02-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix inversion is required even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for flow over a flat plate. Results of the validation studies are shown, and some difficulties in implementing the k-epsilon equations in the code are also discussed.

  16. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix inversion is required even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for flow over a flat plate. Results of the validation studies are shown, and some difficulties in implementing the k-epsilon equations in the code are also discussed.
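
    The closure being added to RPLUS3D relates the turbulent stresses to the two transported quantities k and epsilon through an eddy viscosity. A minimal sketch of the standard relations (C_mu = 0.09 is the conventional model constant; the shear-production form assumes a simple boundary-layer flow):

```python
C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity(rho, k, eps):
    """Turbulent (eddy) viscosity: mu_t = C_mu * rho * k**2 / eps."""
    return C_MU * rho * k * k / eps

def production(mu_t, dudy):
    """Shear production of k for a thin shear layer: P = mu_t * (du/dy)**2."""
    return mu_t * dudy * dudy
```

These algebraic source terms are what get added to the implicit chemistry Jacobian machinery the abstract describes, which is one source of the implementation difficulties mentioned.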

  17. Numerical MHD codes for modeling astrophysical flows

    NASA Astrophysics Data System (ADS)

    Koldoba, A. V.; Ustyugova, G. V.; Lii, P. S.; Comins, M. L.; Dyda, S.; Romanova, M. M.; Lovelace, R. V. E.

    2016-05-01

    We describe a Godunov-type magnetohydrodynamic (MHD) code based on the Miyoshi and Kusano (2005) solver which can be used to solve various astrophysical hydrodynamic and MHD problems. The energy equation is in the form of entropy conservation. The code has been implemented on several different coordinate systems: 2.5D axisymmetric cylindrical coordinates, 2D Cartesian coordinates, 2D plane polar coordinates, and fully 3D cylindrical coordinates. Viscosity and diffusivity are implemented in the code to control the accretion rate in the disk and the rate of penetration of the disk matter through the magnetic field lines. The code has been utilized for the numerical investigations of a number of different astrophysical problems, several examples of which are shown.

  18. A draft model aggregated code of ethics for bioethicists.

    PubMed

    Baker, Robert

    2005-01-01

    Bioethicists function in an environment in which their peers--healthcare executives, lawyers, nurses, physicians--assert the integrity of their fields through codes of professional ethics. Is it time for bioethics to assert its integrity by developing a code of ethics? Answering in the affirmative, this paper lays out a case by reviewing the historical nature and function of professional codes of ethics. Arguing that professional codes are aggregative enterprises growing in response to a field's historical experiences, it asserts that bioethics now needs to assert its integrity and independence and has already developed a body of formal statements that could be aggregated to create a comprehensive code of ethics for bioethics. A Draft Model Aggregated Code of Ethics for Bioethicists is offered in the hope that analysis and criticism of this draft code will promote further discussion of the nature and content of a code of ethics for bioethicists.

  19. Status report on the THROHPUT transient heat pipe modeling code

    SciTech Connect

    Hall, M.L.; Merrigan, M.A.; Reid, R.S.

    1993-11-01

    Heat pipes are structures which transport heat by the evaporation and condensation of a working fluid, giving them a high effective thermal conductivity. Many space-based uses for heat pipes have been suggested, and high temperature heat pipes using liquid metals as working fluids are especially attractive for these purposes. These heat pipes are modeled by the THROHPUT code (THROHPUT is an acronym for Thermal Hydraulic Response Of Heat Pipes Under Transients and is pronounced like "throughput"). Improvements have been made to the THROHPUT code which models transient thermohydraulic heat pipe behavior. The original code was developed as a doctoral thesis research code by Hall. The current emphasis has been shifted from research into the numerical modeling to the development of a robust production code. Several modeling obstacles that were present in the original code have been eliminated, and several additional features have been added.

  20. Early applications of the R-matrix SAMMY code for charged-particle induced reactions and related covariances

    NASA Astrophysics Data System (ADS)

    Pigni, Marco T.; Gauld, Ian C.; Croft, Stephen

    2017-09-01

    The SAMMY code system is mainly used in nuclear data evaluations for incident neutrons in the resolved resonance region (RRR), however, built-in capabilities also allow the code to describe the resonance structure produced by other incident particles, including charged particles. (α,n) data provide fundamental information that underpins nuclear modeling and simulation software, such as ORIGEN and SOURCES4C, used for the analysis of neutron emission and definition of source emission processes. The goal of this work is to carry out evaluations of charged-particle-induced reaction cross sections in the RRR. The SAMMY code was recently used in this regard to generate a Reich-Moore parameterization of the available 17,18O(α,n) experimental cross sections in order to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types. This paper provides a brief description of the SAMMY evaluation procedure for the treatment of 17,18O(α,n) reaction cross sections. The results are used to generate neutron source rates for a plutonium oxide matrix.
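As context for the resonance parameterizations mentioned above, the following is a minimal sketch of the single-level Breit-Wigner cross-section shape, the simplest relative of the Reich-Moore formalism used in R-matrix fits such as SAMMY's. All widths, the resonance energy, and the normalization are illustrative values, not evaluated 17,18O(α,n) parameters:

```python
import math

def breit_wigner(E, Er=1.0, gamma_n=0.01, gamma_other=0.04, g=1.0, k2=1.0):
    """Single-level Breit-Wigner resonance cross section (arbitrary units):
    sigma(E) = (pi/k^2) * g * Gamma_n * Gamma_other / ((E-Er)^2 + (Gamma/2)^2),
    peaking at E = Er with total width Gamma = Gamma_n + Gamma_other."""
    gamma = gamma_n + gamma_other
    return (math.pi / k2) * g * gamma_n * gamma_other / ((E - Er) ** 2 + (gamma / 2) ** 2)
```

The Lorentzian shape is symmetric about the resonance energy and falls off with the square of the detuning, which is why closely spaced resonances in the RRR require the full multi-level (Reich-Moore) treatment rather than a sum of isolated terms.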

  1. Code and model extensions of the THATCH code for modular high temperature gas-cooled reactors

    SciTech Connect

    Kroger, P.G.; Kennett, R.J. )

    1993-05-01

    This report documents several model extensions and improvements of the THATCH code, a code to model thermal and fluid flow transients in High Temperature Gas-Cooled Reactors. A heat exchanger model was added, which can be used to represent the steam generator of the main Heat Transport System or the auxiliary Shutdown Cooling System. This addition permits the modeling of forced flow cooldown transients with the THATCH code. An enhanced upper head model, considering the actual conical and spherical shape of the upper plenum and reactor upper head, was added, permitting more accurate modeling of the heat transfer in this region. The revised models are described, and the changes and additions to the input records are documented.

  2. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on the computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, researchers include applications of supercomputing to reacting flow Navier-Stokes equations including shock waves and turbulence and combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications on rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  3. Energy standards and model codes development, adoption, implementation, and enforcement

    SciTech Connect

    Conover, D.R.

    1994-08-01

    This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.

  4. Three Dimensional Thermal Abuse Reaction Model for Lithium Ion Batteries

    SciTech Connect

    Kim, Gi-Heon; Pesaran, Ahmad

    2006-06-29

    A three-dimensional computer model for simulating thermal runaway of lithium-ion batteries was developed. The three-dimensional model captures the shapes and dimensions of cell components and the spatial distributions of materials and temperatures, so the geometrical features, which are especially critical in large cells, can be considered. An array of possible exothermic reactions, such as solid-electrolyte-interface (SEI) layer decomposition, negative active/electrolyte reaction, and positive active/electrolyte reaction, was considered and formulated to fit experimental data from accelerating rate calorimetry and differential scanning calorimetry. User subroutine code was written to implement the NREL-developed approach and to utilize a commercially available solver. The model is proposed for simulating a variety of lithium-ion battery safety events, including thermal heating and short circuit.

  5. Secondary neutron source modelling using MCNPX and ALEPH codes

    NASA Astrophysics Data System (ADS)

    Trakas, Christos; Kerkar, Nordine

    2014-06-01

    Monitoring the subcritical state and divergence of reactors requires the presence of neutron sources. Mainly secondary neutrons from these sources feed the ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the level of subcriticality of the reactor. In cycle 1, primary neutrons are provided by sources activated outside of the reactor (e.g. Cf252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of fuel in the first cycle. In most reactors, both families of sources are not sufficient to efficiently monitor the divergence of the second and subsequent cycles. Secondary sources clusters (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of unstable Sb124), produces in subsequent cycles a photo-neutron source through the gamma (from Sb124)-neutron (on Be9) reaction. This paper presents a model of the process between irradiation in cycle 1 and the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with two experimental results from the PWR 1450 MWe-N4 reactors. A good agreement is observed between these results and the simulations. The subcriticality of the reactors is about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are of the order of 10%, lower than the combined uncertainty of the measurements and the code simulation. This comparison validates the AREVA methodology, which provides a best-estimate SRD counting rate for cycle 2 and subsequent cycles and allows optimizing the position of the SSC, depending on the geographic location of the sources, the main parameter for optimal monitoring of subcritical states.

  6. Modeling shock-driven reaction in low density PMDI foam

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Alexander, C. Scott; Reinhart, William; Peterson, David

    Shock experiments on low density polyurethane foams reveal evidence of reaction at low impact pressures. However, these reaction thresholds are not evident over the low pressures reported for historical Hugoniot data of highly distended polyurethane at densities below 0.1 g/cc. To fill this gap, impact data given in a companion paper for polymethylene diisocyanate (PMDI) foam with a density of 0.087 g/cc were acquired for model validation. An equation of state (EOS) was developed to predict the shock response of these highly distended materials over the full range of impact conditions representing compaction of the inert material, low-pressure decomposition, and compression of the reaction products. A tabular SESAME EOS of the reaction products was generated using the JCZS database in the TIGER equilibrium code. In particular, the Arrhenius Burn EOS, a two-state model which transitions from an unreacted to a reacted state using single step Arrhenius kinetics, as implemented in the shock physics code CTH, was modified to include a statistical distribution of states. Hence, a single EOS is presented that predicts the onset to reaction due to shock loading in PMDI-based polyurethane foams. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract DE-AC04-94AL85000.
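The Arrhenius Burn EOS described above transitions from an unreacted to a reacted state via single-step Arrhenius kinetics. The following is a minimal sketch of that rate law, integrating the reaction progress at a fixed post-shock temperature; the rate constant Z, activation temperature Ta, and time step are hypothetical, not the calibrated CTH values:

```python
import math

def arrhenius_burn(T, Z=1.0e6, Ta=1.2e4, dt=1.0e-8, t_end=1.0e-5):
    """Integrate single-step Arrhenius kinetics
    d(lam)/dt = (1 - lam) * Z * exp(-Ta / T)
    at a fixed post-shock temperature T (K); returns the reaction
    progress lam in [0, 1] after time t_end."""
    lam, t = 0.0, 0.0
    k = Z * math.exp(-Ta / T)  # temperature-dependent rate constant
    while t < t_end:
        lam += dt * (1.0 - lam) * k
        t += dt
    return min(lam, 1.0)
```

The strong temperature sensitivity of the exponential is what produces an apparent reaction threshold: below some shock temperature the progress over the experimental timescale is negligible, above it the material burns.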

  7. Capture and documentation of coded data on adverse drug reactions: an overview.

    PubMed

    Paul, Lindsay; Robinson, Kerin M

    2012-01-01

    Allergic responses to prescription drugs are largely preventable, and incur significant cost to the community both financially and in terms of healthcare outcomes. The capacity to minimise the effects of repeated events rests predominantly with the reliability of allergy documentation in medical records and computerised physician order entry systems (CPOES) with decision support such as allergy alerts. This paper presents an overview of the nature and extent of adverse drug reactions (ADRs) in Australia and other developed countries, a discussion and evaluation of strategies which have been devised to address this issue, and a commentary on the role of coded data in informing this patient safety issue. It is not concerned with pharmacovigilance systems that monitor ADRs on a global scale. There are conflicting reports regarding the efficacy of these strategies. Although in many cases allergy alerts are effective, lack of sensitivity and contextual relevance can often induce doctors to override alerts. Human factors such as user fatigue and inadequate adverse drug event reporting, including ADRs, are commonplace. The quality of and response to allergy documentation can be enhanced by the participation of nurses and pharmacists, particularly in medication reconciliation. The International Classification of Diseases (ICD) coding of drug allergies potentially yields valuable evidence, but the quality of local and national level coded data is hampered by under-documenting and under-coding.

  8. Connectionist and diffusion models of reaction time.

    PubMed

    Ratcliff, R; Van Zandt, T; McKoon, G

    1999-04-01

    Two connectionist frameworks, GRAIN (J. L. McClelland, 1993) and brain-state-in-a-box (J. A. Anderson, 1991), and R. Ratcliff's (1978) diffusion model were evaluated using data from a signal detection task. Dependent variables included response probabilities, reaction times for correct and error responses, and shapes of reaction-time distributions. The diffusion model accounted for all aspects of the data, including error reaction times that had previously been a problem for all response-time models. The connectionist models accounted for many aspects of the data adequately, but each failed to a greater or lesser degree in important ways except for one model that was similar to the diffusion model. The findings advance the development of the diffusion model and show that the long tradition of reaction-time research and theory is a fertile domain for development and testing of connectionist assumptions about how decisions are generated over time.
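The diffusion model evaluated above treats a decision as noisy accumulation of evidence between two absorbing boundaries, jointly producing a response and a reaction time. A minimal single-trial sketch follows; the drift rate, boundary separation, and starting point are illustrative, not fitted values:

```python
import random

def diffusion_trial(drift=0.2, boundary=1.0, start=0.5, dt=0.001, sigma=1.0, rng=None):
    """One trial of a drift-diffusion decision process: evidence x performs a
    drifting random walk until it crosses the upper or lower boundary;
    returns (response, reaction_time)."""
    rng = rng or random.Random()
    x, t = start, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return ("upper" if x >= boundary else "lower"), t
```

Running many such trials yields the response probabilities and the right-skewed reaction-time distributions, for both correct and error responses, that the paper uses to discriminate between models.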

  9. Chemical-reaction model for Mexican wave

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    2003-05-01

    We present a chemical-reaction model to describe the Mexican wave (La Ola) in football stadia. The spectator's action is described in terms of chemical reactions. The model is governed by three reaction rates k1, k2, and k3. We study the nonlinear waves on one- and two-dimensional lattices. The Mexican wave is formulated as a clockwise forwardly propagating wave. Waves grow or disappear depending on the values of the reaction rates. In the specific case of k1=k2=k3=1, the nonlinear-wave equation produces a propagating pulse resembling a soliton.
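The clockwise propagating pulse can be illustrated with a simple three-state excitable lattice in the spirit of the model (seated, standing, recovering); this sketch hard-codes the update rule rather than using the paper's rate constants k1, k2, k3:

```python
def ola_step(states):
    """One synchronous update of a 1D ring of spectators.
    0 = seated, 1 = standing (excited), 2 = recovering; a seated spectator
    stands when the neighbour on one side is standing, which drives the
    wave around the ring in one direction."""
    n = len(states)
    new = list(states)
    for i in range(n):
        left = states[(i - 1) % n]
        if states[i] == 0 and left == 1:
            new[i] = 1          # excitation propagates to the next seat
        elif states[i] == 1:
            new[i] = 2          # standing spectator sits back down
        elif states[i] == 2:
            new[i] = 0          # recovery completes, ready to stand again
    return new
```

Starting from a single standing spectator, the excitation travels one seat per step with a short refractory tail behind it, the discrete analogue of the soliton-like pulse.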

  10. A Networks Approach to Modeling Enzymatic Reactions.

    PubMed

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved, and the complex chemical and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps, a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape, consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.

  11. Automatic code generation from the OMT-based dynamic model

    SciTech Connect

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
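The mapping described above, where each state becomes a class and each event an operation, can be sketched as the kind of code such a generator might emit. The Idle/Running states and start/stop events are invented examples, and the sketch is in Python rather than the Java the system actually generates:

```python
class State:
    """Base class for generated states; unhandled events raise by default."""
    def handle(self, event):
        raise ValueError(f"event {event!r} not valid in {type(self).__name__}")

class Idle(State):
    def handle(self, event):
        # generated from the transition-table row: Idle --start--> Running
        return Running() if event == "start" else super().handle(event)

class Running(State):
    def handle(self, event):
        # generated from the transition-table row: Running --stop--> Idle
        return Idle() if event == "stop" else super().handle(event)

def run(events, state=None):
    """Drive the generated machine over a sequence of events."""
    state = state or Idle()
    for e in events:
        state = state.handle(e)
    return type(state).__name__
```

Because each event is dispatched as a method on the current state's class, invalid event/state combinations fail explicitly, mirroring how the state diagram constrains behavior.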

  12. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

    The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated with analytic solutions of kinetic equations. Condensation kinetic model is based on cloud particle growth equation, mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. The real values are used for condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.

  13. A model for reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Chinitz, W.; Evans, J. S.

    1984-01-01

    To account for turbulent temperature and species-concentration fluctuations, a model of their effects on chemical reaction rates is presented for use in computer analyses of turbulent reacting flows. The model results in two parameters which multiply the terms in the reaction-rate equations. For these two parameters, graphs are presented as functions of the mean values and intensity of the turbulent fluctuations of the temperature and species concentrations. These graphs will facilitate incorporation of the model into existing computer programs which describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental, supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.
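The effect such multiplying parameters capture, that fluctuations about a mean temperature amplify a strongly temperature-sensitive rate, can be illustrated with a Monte-Carlo average of an Arrhenius rate over an assumed Gaussian temperature PDF; the activation temperature and fluctuation moments here are hypothetical, not taken from the paper's graphs:

```python
import math, random

def rate_amplification(T_mean, T_rms, Ta=15000.0, n=200000, seed=0):
    """Monte-Carlo estimate of <k(T)> / k(T_mean) for an Arrhenius rate
    k ~ exp(-Ta/T) with Gaussian temperature fluctuations: the factor by
    which turbulence multiplies the mean-temperature reaction rate."""
    rng = random.Random(seed)
    k_at_mean = math.exp(-Ta / T_mean)
    acc = 0.0
    for _ in range(n):
        T = max(rng.gauss(T_mean, T_rms), 300.0)  # clip unphysical draws
        acc += math.exp(-Ta / T)
    return (acc / n) / k_at_mean
```

Because exp(-Ta/T) is convex in T over the relevant range, the fluctuation-averaged rate exceeds the rate evaluated at the mean temperature, which is precisely why naive use of mean quantities in reaction-rate equations misestimates turbulent combustion.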

  14. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    SciTech Connect

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  15. A model for astrophysical spallation reactions

    NASA Technical Reports Server (NTRS)

    Schmitt, W. F.; Ayres, C. L.; Merker, M.; Shen, B. S. P.

    1974-01-01

    A Monte-Carlo model (RENO) for spallation reactions is described which can treat both the spallations induced by a free nucleon and those induced by a complex nucleus. It differs from other such models in that it employs a discrete-nucleon representation of the nucleus and allows clusters of nucleons to form and to participate in the reaction. The RENO model is particularly suited for spallations involving the relatively light nuclei of astrophysical and cosmic-ray interest.

  16. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    SciTech Connect

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Burger, A.; Gorgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.

    2014-09-03

    Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, and using a spin cutoff parameter with a much weaker excitation-energy dependence than predicted by the Fermi-gas model.
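The two level-density forms compared above can be sketched as follows; the level-density parameter a, the temperature T, and the energy shift E0 are illustrative values, not the parameters fitted in this work, and the spin-dependent factor is omitted:

```python
import math

def rho_fermi_gas(E, a=18.0, delta=0.0):
    """Energy dependence of the Fermi-gas level density (MeV^-1, unnormalized):
    rho(U) ~ exp(2*sqrt(a*U)) / (a**0.25 * U**1.25), with U = E - delta the
    effective excitation energy; spin cutoff and constant factors dropped."""
    U = E - delta
    return math.exp(2.0 * math.sqrt(a * U)) / (a ** 0.25 * U ** 1.25)

def rho_const_temp(E, T=0.8, E0=-1.0):
    """Constant-temperature level density: rho(E) = exp((E - E0) / T) / T,
    i.e. a pure exponential with log-slope 1/T."""
    return math.exp((E - E0) / T) / T
```

The constant-temperature form rises with a fixed log-slope 1/T, while the Fermi-gas form has a slope that flattens with excitation energy; fits to resonance spacings and discrete levels decide which shape, and which spin cutoff, best reproduces the measured spectra.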

  17. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    DOE PAGES

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; ...

    2014-09-03

    Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, and using a spin cutoff parameter with a much weaker excitation-energy dependence than predicted by the Fermi-gas model.

  18. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed which allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2 or H atoms. Because the product of interest is particulate silicon, the processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.

  19. Conversations about Code-Switching: Contrasting Ideologies of Purity and Authenticity in Basque Bilinguals' Reactions to Bilingual Speech

    ERIC Educational Resources Information Center

    Lantto, Hanna

    2016-01-01

    This study examines the manifestations of purity and authenticity in 47 Basque bilinguals' reactions to code-switching. The respondents listened to two speech extracts with code-switching, filled in a short questionnaire and talked about the extracts in small groups. These conversations were then recorded. The respondents' beliefs can be…

  1. Modeling of turbulent chemical reaction

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.

    1995-01-01

    Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.
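Among the mixing models for PDF methods listed above, a common and simple closure is interaction by exchange with the mean (IEM); the following sketch is a generic illustration of that closure, not necessarily the model used in the viewgraphs, with an arbitrary relaxation time and time step:

```python
def iem_mixing(particles, tau, dt, steps):
    """Interaction-by-Exchange-with-the-Mean (IEM) mixing model used in
    PDF methods: each notional particle's scalar relaxes toward the
    ensemble mean, d(phi)/dt = -(phi - <phi>) / tau, which leaves the
    mean unchanged while decaying the scalar variance."""
    phis = list(particles)
    for _ in range(steps):
        mean = sum(phis) / len(phis)
        phis = [p - dt * (p - mean) / tau for p in phis]
    return phis
```

Conserving the mean while contracting the variance is the minimum requirement for any PDF mixing model; more elaborate closures differ mainly in how they shape the relaxing distribution.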

  2. Cavitation Modeling in Euler and Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation which allows the effect of subcooling in the vicinity of the cavity interface to be modeled to take into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional, turbomachinery codes, the emphasis in the present paper will center on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggests that these models are appropriate for incorporation in current generation turbomachinery codes.

  3. Two-dimensional MHD generator model. [GEN code

    SciTech Connect

    Geyer, H. K.; Ahluwalia, R. K.; Doss, E. D.

    1980-09-01

    A steady state, two-dimensional MHD generator code, GEN, is presented. The code solves the equations of conservation of mass, momentum, and energy, using a Von Mises transformation and a local linearization of the equations. By splitting the source terms into a part proportional to the axial pressure gradient and a part independent of the gradient, the pressure distribution along the channel is easily obtained to satisfy various criteria. Thus, the code can run effectively in both design modes, where the channel geometry is determined, and analysis modes, where the geometry is previously known. The code also employs a mixing length concept for turbulent flows, Cebeci and Chang's wall roughness model, and an extension of that model to the effective thermal diffusivities. Results on code validation, as well as comparisons of skin friction and Stanton number calculations with experimental results, are presented.

  4. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
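Particle swarm optimisation, as used above for calibrating reaction-rate coefficients, can be sketched generically: each particle is pulled toward its own best-known position and the swarm's best. Swarm size, inertia, and acceleration coefficients below are typical textbook values, not those used for EDC32:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser: particles track a personal best and
    are attracted to the global best; returns (best_position, best_value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep the particle inside the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            v = objective(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

For calibration, the objective would be a misfit between hydrocode simulations and shock-initiation data; here any cheap function of the coefficient vector stands in for it.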

  5. Animal models of idiosyncratic drug reactions.

    PubMed

    Ng, Winnie; Lobach, Alexandra R M; Zhu, Xu; Chen, Xin; Liu, Feng; Metushi, Imir G; Sharma, Amy; Li, Jinze; Cai, Ping; Ip, Julia; Novalen, Maria; Popovic, Marija; Zhang, Xiaochu; Tanino, Tadatoshi; Nakagawa, Tetsuya; Li, Yan; Uetrecht, Jack

    2012-01-01

    If we could predict and prevent idiosyncratic drug reactions (IDRs) it would have a profound effect on drug development and therapy. Given our present lack of mechanistic understanding, this goal remains elusive. Hypothesis testing requires valid animal models with characteristics similar to the idiosyncratic reactions that occur in patients. Although it has not been conclusively demonstrated, it appears that almost all IDRs are immune-mediated, and a dominant characteristic is a delay between starting the drug and the onset of the adverse reaction. In contrast, most animal models are acute and therefore involve a different mechanism than idiosyncratic reactions. There are, however, a few animal models such as the nevirapine-induced skin rash in rats that have characteristics very similar to the idiosyncratic reaction that occurs in humans and presumably have a very similar mechanism. These models have allowed testing hypotheses that would be impossible to test in any other way. In addition there are models in which there is a delayed onset of mild hepatic injury that resolves despite continued treatment similar to the "adaptation" reactions that are more common than severe idiosyncratic hepatotoxicity in humans. This probably represents the development of immune tolerance. However, most attempts to develop animal models by stimulating the immune system have been failures. A specific combination of MHC and T cell receptor may be required, but it is likely more complex. Animal studies that determine the requirements for an immune response would provide vital clues about risk factors for IDRs in patients.

  6. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Workman Mill Road, Whittier, California 90601. (2) National Electrical Code, NFPA 70, 1993 Edition... energy requirements for multifamily or care-type structures; and (iii) Those provisions of the model...

  7. EM modeling for GPIR using 3D FDTD modeling codes

    SciTech Connect

    Nelson, S.D.

    1994-10-01

An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system matched to the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, with the differences attributed to the unknown exact nature of the aggregate placement.
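The leapfrog update at the heart of an FDTD code like those described above can be sketched in one dimension. This is a minimal illustrative example, not the LLNL codes referenced in the abstract; the cell size, material constants, and source placement are hypothetical stand-ins for a concrete-like lossy dielectric.

```python
import numpy as np

def fdtd_1d(nz=400, nt=1000, eps_r=6.0, sigma=0.01):
    """1D Yee FDTD: Gaussian pulse entering a lossy dielectric half-space."""
    c, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
    dz = 1e-3                        # 1 mm cells (hypothetical)
    dt = dz / (2 * c)                # Courant-stable time step
    ez = np.zeros(nz)                # E field on integer grid points
    hy = np.zeros(nz - 1)            # H field on staggered half-grid points
    # free space for z < nz//2, lossy dielectric (concrete-like guess) beyond
    in_slab = np.arange(nz) >= nz // 2
    eps = np.where(in_slab, eps_r * eps0, eps0)
    cond = np.where(in_slab, sigma, 0.0)
    # standard update coefficients absorbing the conduction-loss term
    ca = (1 - cond * dt / (2 * eps)) / (1 + cond * dt / (2 * eps))
    cb = (dt / (eps * dz)) / (1 + cond * dt / (2 * eps))
    for n in range(nt):
        hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])          # H leapfrog step
        ez[1:-1] = ca[1:-1] * ez[1:-1] + cb[1:-1] * (hy[1:] - hy[:-1])
        ez[50] += np.exp(-((n - 60) / 20.0) ** 2)           # soft Gaussian source
    return ez
```

The endpoints `ez[0]` and `ez[-1]` are left at zero, i.e. perfectly conducting walls; a GPIR-oriented code would instead use absorbing boundaries and a dispersive material model.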

  8. Subgrid Combustion Modeling for the Next Generation National Combustion Code

    NASA Technical Reports Server (NTRS)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher

    2003-01-01

In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech has been provided to researchers at NASA/GRC for incorporation into the next-generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers molecular diffusion and reaction kinetics locally and exactly, without requiring closure, and thus provides an attractive means to simulate complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initializing and running LES are also addressed in the collaborative effort. In parallel to this ongoing technology-transfer effort, research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions in premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single- and two-phase mixing layers. The configurations were chosen such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will address spray combustion in gas turbine engines and supersonic scalar mixing.

  9. Fire modeling code comparisons. Final report

    SciTech Connect

    Mowrer, F.W.; Gautier, B.

    1998-09-01

There is a significant effort taking place in the US nuclear power industry to provide an option for risk-informed/performance-based fire protection programs. One of the requirements for such a program is the ability to deterministically model the characteristics and consequences of a postulated fire in terms of initiation, growth, and propagation. There are two general classes of methods to accomplish this, namely computational fluid dynamics models and the more simplified zone models. For many applications, zone models will provide adequate results. Zone models have been used for probabilistic risk assessments and fire hazard analyses. However, a lack of comparative verification and established confidence in the results has limited their application. This report compares the features and capabilities of four zone-type fire models: FIVE, CFAST, COMPBRNIIIe, and MAGIC. The main features of the models are documented in matrix form. The models are benchmarked against three series of existing large-scale fire tests.

  10. Quantization and psychoacoustic model in audio coding in advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2011-10-01

This paper presents a complete optimized architecture of Advanced Audio Coder quantization with Huffman coding. After that, psychoacoustic model theory is presented and a few algorithms are described: the standard Two-Loop Search and its modifications, Genetic, Just Noticeable Level Difference, Trellis-Based, and its modification, the Cascaded Trellis-Based Algorithm.

  11. Development of the Automatic Modeling System for Reaction Mechanisms Using REX+JGG

    NASA Astrophysics Data System (ADS)

    Takahashi, Takahiro; Kawai, Kohei; Nakai, Hiroyuki; Ema, Yoshinori

The identification of appropriate reaction models is very helpful for developing chemical vapor deposition (CVD) processes. In this study, we developed an automatic modeling system that analyzes experimental data on the cross-sectional shapes of films deposited on substrates with nanometer- or micrometer-sized trenches. The system then identifies a suitable reaction model to describe the film deposition. The inference engine used by the system to model the reaction mechanism was designed using real-coded genetic algorithms (RCGAs): a generation alternation model named "just generation gap" (JGG) and a real-coded crossover named "real-coded ensemble crossover" (REX). We studied the effect of REX+JGG on the system's performance, and found that the system with REX+JGG was the most accurate and reliable at model identification among the algorithms that we studied.
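The REX+JGG machinery named above can be sketched on a toy objective. The following is a minimal illustrative implementation, applied to the sphere function rather than the authors' CVD reaction models; the population size, child count, and the REX variance choice are assumptions, not the paper's settings.

```python
import numpy as np

def rex_jgg(obj, dim=3, pop_size=30, children=20, gens=1500, seed=0):
    """Minimize obj with a REX crossover + JGG generation-alternation sketch."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    mu = dim + 1                         # parents per crossover (common choice)
    for _ in range(gens):
        idx = rng.choice(pop_size, size=mu, replace=False)  # JGG: random family
        parents = pop[idx]
        center = parents.mean(axis=0)
        # REX: children = parent centroid + normally weighted parent deviations
        # (variance 1/mu is one common choice)
        xi = rng.normal(0.0, np.sqrt(1.0 / mu), size=(children, mu))
        kids = center + xi @ (parents - center)
        # JGG: the mu best children replace the selected parents
        best = kids[np.argsort([obj(k) for k in kids])[:mu]]
        pop[idx] = best
    return min(pop, key=obj)

sphere = lambda x: float(np.sum(x ** 2))
```

Because JGG replaces only the sampled family each generation, the rest of the population preserves diversity, which is the property the abstract credits for reliable model identification.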

  12. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    SciTech Connect

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-seconds during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high-voltage transmission-line applications such as shock or soliton lines, the dielectric operates in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission-line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of the electric field is used in a 3-D finite-difference time-domain (FDTD) code. In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented in 3-D FDTD codes and 1-D codes.

  13. A unified model of the standard genetic code

    PubMed Central

    Morgado, Eberto R.

    2017-01-01

    The Rodin–Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model. PMID:28405378
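The 6D-hypercube picture above rests on encoding each nucleotide as two bits, so that a codon is a vertex of the 6-cube and single-nucleotide changes move along edges. A small sketch of that mapping follows; the specific bit assignment is one common convention chosen here for illustration, not necessarily the one used in the paper.

```python
from itertools import product

# Each nucleotide -> 2 bits (illustrative assignment); a codon -> 6 bits,
# i.e. a vertex of the 6-dimensional hypercube.
BITS = {'C': (0, 0), 'U': (0, 1), 'G': (1, 0), 'A': (1, 1)}

def codon_vertex(codon):
    """Map a 3-letter RNA codon to its 6D hypercube vertex."""
    return tuple(b for nt in codon for b in BITS[nt])

def hamming(u, v):
    """Edges of the hypercube connect vertices at Hamming distance 1."""
    return sum(a != b for a, b in zip(u, v))

codons = [''.join(p) for p in product('ACGU', repeat=3)]
vertices = {c: codon_vertex(c) for c in codons}
```

All 64 codons land on distinct vertices, and transitions within a purine pair (A↔G) or pyrimidine pair (C↔U) change exactly one coordinate, which is what makes symmetry operations on the cube biologically interpretable.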

  14. A unified model of the standard genetic code.

    PubMed

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether these results cannot be attained, neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.

  15. RHOCUBE: 3D density distributions modeling code

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
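The core operation described above, sampling a parametric density on a Cartesian grid and collapsing it with ∫ dz, can be sketched briefly. This is an illustrative example of that kind of computation, not RHOCUBE's actual API; the grid size and shell parameters are hypothetical.

```python
import numpy as np

# Sample a truncated-Gaussian shell density on a Cartesian grid, then
# integrate along the z (line-of-sight) axis to get a 2D map.
n = 65
ax = np.linspace(-2.0, 2.0, n)
dz = ax[1] - ax[0]
x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')
r = np.sqrt(x**2 + y**2 + z**2)

# Gaussian shell peaking at radius 1.0 with width 0.2, truncated at r > 2
rho = np.exp(-0.5 * ((r - 1.0) / 0.2) ** 2)
rho[r > 2.0] = 0.0

image = rho.sum(axis=2) * dz        # discrete version of integral of rho dz
```

Projecting a thin shell this way reproduces the familiar limb brightening: sight lines grazing the shell tangentially accumulate more density than the one through the center, which is why shell-like LBV nebulae appear as rings.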

  16. Quasiglobal reaction model for ethylene combustion

    NASA Technical Reports Server (NTRS)

    Singh, D. J.; Jachimowski, Casimir J.

    1994-01-01

    The objective of this study is to develop a reduced mechanism for ethylene oxidation. The authors are interested in a model with a minimum number of species and reactions that still models the chemistry with reasonable accuracy for the expected combustor conditions. The model will be validated by comparing the results to those calculated with a detailed kinetic model that has been validated against the experimental data.

  18. Feasibility study of nuclear transmutation by negative muon capture reaction using the PHITS code

    NASA Astrophysics Data System (ADS)

    Abe, Shin-ichiro; Sato, Tatsuhiko

    2016-06-01

The feasibility of nuclear transmutation of fission products in high-level radioactive waste by the negative muon capture reaction is investigated using the Particle and Heavy Ion Transport code System (PHITS). It is found that about 80% of stopped negative muons contribute to transmuting the target nuclide into a stable or short-lived nuclide in the case of 135Cs, which is one of the most important nuclides for transmutation. The simulation results also indicate that the position of transmutation is controllable by changing the energy of the incident negative muons. Based on our simulation, it would take approximately 8.5 × 10⁸ years to transmute 500 g of 135Cs with the highest-intensity negative muon beam currently available.
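The quoted timescale follows from simple counting, sketched below. The 80% transmutation fraction is taken from the abstract; the stopped-muon rate of 10⁸ s⁻¹ is an assumed round number for the highest beam intensities currently available, so the result is an order-of-magnitude check rather than the paper's exact figure.

```python
# Back-of-the-envelope check of the ~8.5e8-year transmutation time for 500 g
# of 135Cs, one stopped muon transmuting at most one nucleus.
AVOGADRO = 6.022e23
mass_g, molar_mass = 500.0, 135.0           # 500 g of 135Cs
atoms = mass_g / molar_mass * AVOGADRO      # ~2.2e24 target nuclei
muon_rate = 1e8                             # assumed stopped-muon rate [1/s]
efficiency = 0.80                           # transmuting fraction (from PHITS)
seconds = atoms / (muon_rate * efficiency)
years = seconds / 3.156e7                   # seconds per year
```

With these inputs the estimate lands near 9 × 10⁸ years, consistent with the simulation result and illustrating why beam intensity, not capture physics, is the bottleneck.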

  19. Deuteron induced reactions on Ho and La: Experimental excitation functions and comparison with code results

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Adam-Rebeles, R.; Tarkanyi, F.; Takacs, S.; Csikai, J.; Takacs, M. P.; Ignatyuk, A.

    2013-09-01

    Activation products of rare earth elements are gaining importance in medical and technical applications. In stacked foil irradiations, followed by high resolution gamma spectroscopy, the cross-sections for production of 161,165Er, 166gHo on 165Ho and 135,137m,137g,139Ce, 140La, 133m,133g,cumBa and 136Cs on natLa targets were measured up to 50 MeV. Reduced uncertainty is obtained by simultaneous remeasurement of the 27Al(d,x)24,22Na monitor reactions over the whole energy range. A comparison with experimental literature values and results from updated theoretical codes (ALICE-D, EMPIRE-D and the TENDL2012 online library) is discussed.

  20. Spin-glass models as error-correcting codes

    NASA Astrophysics Data System (ADS)

    Sourlas, Nicolas

    1989-06-01

    DURING the transmission of information, errors may occur because of the presence of noise, such as thermal noise in electronic signals or interference with other sources of radiation. One wants to recover the information with the minimum error possible. In theory this is possible by increasing the power of the emitter source. But as the cost is proportional to the energy fed into the channel, it costs less to code the message before sending it, thus including redundant 'coding' bits, and to decode at the end. Coding theory provides rigorous bounds on the cost-effectiveness of any code. The explicit codes proposed so far for practical applications do not saturate these bounds; that is, they do not achieve optimal cost-efficiency. Here we show that theoretical models of magnetically disordered materials (spin glasses) provide a new class of error-correction codes. Their cost performance can be calculated using the methods of statistical mechanics, and is found to be excellent. These models can, under certain circumstances, constitute the first known codes to saturate Shannon's well-known cost-performance bounds.
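The spin-glass construction above can be made concrete at toy scale. The sketch below follows the Sourlas-style idea, encoding a message of ±1 spins as pairwise products (the "couplings") and decoding by minimizing the corresponding Ising energy; the brute-force ground-state search is only feasible because the example is tiny, and the sizes and noise level are illustrative.

```python
import itertools
import random

def encode(msg):
    """Encode a +/-1 message as all pairwise spin products (the couplings)."""
    n = len(msg)
    return {(i, j): msg[i] * msg[j] for i in range(n) for j in range(i + 1, n)}

def decode(J, n):
    """Decode by exhaustive search for the Ising ground state of -sum J s_i s_j."""
    def energy(s):
        return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return list(min(itertools.product((-1, 1), repeat=n), key=energy))

# Demo: flip 10% of the transmitted couplings as channel noise, then decode.
random.seed(1)
msg = [random.choice((-1, 1)) for _ in range(8)]
noisy = {k: (-v if random.random() < 0.1 else v) for k, v in encode(msg).items()}
decoded = decode(noisy, len(msg))
```

Because the energy is invariant under a global spin flip, the decoder recovers the message only up to overall sign; practical schemes fix this with a reference bit. Statistical-mechanics or message-passing methods replace the exhaustive search at realistic block lengths.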

  1. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-238U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  2. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1983-06-01

Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  3. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    SciTech Connect

Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.; Burger, Alexander; Gorgen, Andreas; Guttormsen, Magne; Larsen, Ann-Cecilie; Massey, Thomas N.; Siem, Sunniva

    2013-11-08

Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the EMPIRE reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Furthermore, the excitation energy dependencies were found to be inconsistent with the Fermi-gas model.

  4. NeuCode labels with parallel reaction monitoring for multiplexed, absolute protein quantification

    PubMed Central

    Potts, Gregory K.; Voigt, Emily A.; Bailey, Derek J.; Westphall, Michael S.; Hebert, Alexander S.; Yin, John; Coon, Joshua J.

    2016-01-01

    We introduce a new method to multiplex the throughput of samples for targeted mass spectrometry analysis. The current paradigm for obtaining absolute quantification from biological samples requires spiking isotopically heavy peptide standards into light biological lysates. Because each lysate must be run individually, this method places limitations on sample throughput and high demands on instrument time. When cell lines are first metabolically labeled with various neutron-encoded (NeuCode) lysine isotopologues possessing mDa mass differences from each other, heavy cell lysates may be mixed and spiked with an additional heavy peptide as an internal standard. We demonstrate that these NeuCode lysate peptides may be co-isolated with their internal standards, fragmented, and analyzed together using high resolving power parallel reaction monitoring (PRM). Instead of running each sample individually, these methods allow samples to be multiplexed to obtain absolute concentrations of target peptides in 5, 15, and even 25 biological samples at a time during single mass spectrometry experiments. PMID:26882330

  5. Simple models for reading neuronal population codes.

    PubMed Central

    Seung, H S; Sompolinsky, H

    1993-01-01

In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at an optimal width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models. PMID:8248166
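The population-vector readout described above can be sketched directly. This is a minimal illustrative model, not the paper's exact setup: neurons have cosine tuning with Poisson spiking, and the decoder sums each neuron's preferred-direction unit vector weighted by its response. The neuron count, firing rates, and noise model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons = 128
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

theta = 1.0                                     # true stimulus direction [rad]
rates = 20.0 * (1 + np.cos(preferred - theta))  # cosine tuning, with baseline
spikes = rng.poisson(rates)                     # noisy population response

# Population vector: responses weighted by unit vectors along preferred dirs.
vec = np.array([np.sum(spikes * np.cos(preferred)),
                np.sum(spikes * np.sin(preferred))])
estimate = np.arctan2(vec[1], vec[0])           # decoded direction
```

With broad cosine tuning this simple linear readout lands close to the true direction, consistent with the abstract's observation that the population vector approaches maximum-likelihood performance when tuning is broad.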

  6. Modeling Guidelines for Code Generation in the Railway Signaling Context

    NASA Technical Reports Server (NTRS)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

Modeling guidelines constitute one of the fundamental cornerstones for Model-Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. The introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at the code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not by itself ensure production of high-quality dependable code. Companies have addressed this issue through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revisited in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  7. A C library for retrieving specific reactions from the BioModels database

    PubMed Central

    Neal, M. L.; Galdzicki, M.; Gallimore, J. T.; Sauro, H. M.

    2014-01-01

    Summary: We describe libSBMLReactionFinder, a C library for retrieving specific biochemical reactions from the curated systems biology markup language models contained in the BioModels database. The library leverages semantic annotations in the database to associate reactions with human-readable descriptions, making the reactions retrievable through simple string searches. Our goal is to provide a useful tool for quantitative modelers who seek to accelerate modeling efforts through the reuse of previously published representations of specific chemical reactions. Availability and implementation: The library is open-source and dual licensed under the Mozilla Public License Version 2.0 and GNU General Public License Version 2.0. Project source code, downloads and documentation are available at http://code.google.com/p/lib-sbml-reaction-finder. Contact: mneal@uw.edu PMID:24078714

  8. DSD - A Particle Simulation Code for Modeling Dusty Plasmas

    NASA Astrophysics Data System (ADS)

    Joyce, Glenn; Lampe, Martin; Ganguli, Gurudas

    1999-11-01

    The NRL Dynamically Shielded Dust code (DSD) is a particle simulation code developed to study the behavior of strongly coupled, dusty plasmas. The model includes the electrostatic wake effects of plasma ions flowing through plasma electrons, collisions of dust and plasma particles with each other and with neutrals. The simulation model contains the short-range strong forces of a shielded Coulomb system, and the long-range forces that are caused by the wake. It also includes other effects of a flowing plasma such as drag forces. In order to model strongly coupled dust in plasmas, we make use of the techniques of molecular dynamics simulation, PIC simulation, and the "particle-particle/particle-mesh" (P3M) technique of Hockney and Eastwood. We also make use of the dressed test particle representation of Rostoker and Rosenbluth. Many of the techniques we use in the model are common to all PIC plasma simulation codes. The unique properties of the code follow from the accurate representation of both the short-range aspects of the interaction between dust grains, and long-range forces mediated by the complete plasma dielectric response. If the streaming velocity is zero, the potential used in the model reduces to the Debye-Huckel potential, and the simulation is identical to molecular dynamics models of the Yukawa potential. The plasma appears only implicitly through the plasma dispersion function, so it is not necessary in the code to resolve the fast plasma time scales.

  9. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using exactly the same physical models, material properties, and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO, but not in FIAT.

  10. Code System to Model Aqueous Geochemical Equilibria.

    SciTech Connect

    PETERSON, S. R.

    2001-08-23

Version: 00 MINTEQ is a geochemical program to model aqueous solutions and the interactions of aqueous solutions with hypothesized assemblages of solid phases. It was developed for the Environmental Protection Agency to perform the calculations necessary to simulate the contact of waste solutions with heterogeneous sediments or the interaction of ground water with solidified wastes. MINTEQ can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas-phase equilibria, and precipitation/dissolution of solid phases. MINTEQ can accept a finite mass for any solid considered for dissolution and will dissolve the specified solid phase only until its initial mass is exhausted. This ability enables MINTEQ to model flow-through systems. In these systems the masses of solid phases that precipitate at earlier pore volumes can be dissolved at later pore volumes according to thermodynamic constraints imposed by the solution composition and the solid phases present. The ability to model these systems permits evaluation of the geochemistry of dissolved trace metals, such as in low-level waste shallow land burial sites. MINTEQ was designed to solve geochemical equilibria for systems composed of one kilogram of water, various amounts of material dissolved in solution, and any solid materials that are present. Systems modeled using MINTEQ can exchange energy and material (open systems) or just energy (closed systems) with the surrounding environment. Each system is composed of a number of phases. Every phase is a region with distinct composition and physically definable boundaries. All of the material in the aqueous solution forms one phase. The gas phase is composed of any gaseous material present, and each compositionally and structurally distinct solid forms a separate phase.
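The kind of speciation problem such a code solves reduces, in the simplest case, to mass action plus mass balance plus electroneutrality. The fragment below illustrates that structure for a single weak acid; it is not MINTEQ's input format or algorithm, and the equilibrium constant and total concentration are hypothetical (acetic-acid-like) values.

```python
def speciate(ka=1.8e-5, total=0.01, kw=1e-14):
    """Find [H+] for a weak acid HA at total concentration `total` (mol/kg)."""
    def charge_imbalance(h):
        a = total * ka / (ka + h)   # [A-] from mass action + mass balance
        oh = kw / h                 # [OH-] from water autoionization
        return h - a - oh           # electroneutrality: [H+] = [A-] + [OH-]
    # charge_imbalance is monotone in h, so bisect; geometric midpoints suit
    # the log-scaled concentration range.
    lo, hi = 1e-14, 1.0
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if charge_imbalance(mid) > 0:
            hi = mid
        else:
            lo = mid
    return mid
```

A full code iterates the same balances over many components, complexes, and solid phases simultaneously, typically with a Newton-Raphson solver instead of scalar bisection.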

  11. Coupling extended magnetohydrodynamic fluid codes with radiofrequency ray tracing codes for fusion modeling

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Held, Eric D.

    2015-09-01

    Neoclassical tearing modes are macroscopic (L ∼ 1 m) instabilities in magnetic fusion experiments; if unchecked, these modes degrade plasma performance and may catastrophically destroy plasma confinement by inducing a disruption. Fortunately, the use of properly tuned and directed radiofrequency waves (λ ∼ 1 mm) can eliminate these modes. Numerical modeling of this difficult multiscale problem requires the integration of separate mathematical models for each length and time scale (Jenkins and Kruger, 2012 [21]); the extended MHD model captures macroscopic plasma evolution while the RF model tracks the flow and deposition of injected RF power through the evolving plasma profiles. The scale separation enables use of the eikonal (ray-tracing) approximation to model the RF wave propagation. In this work we demonstrate a technique, based on methods of computational geometry, for mapping the ensuing RF data (associated with discrete ray trajectories) onto the finite-element/pseudospectral grid that is used to model the extended MHD physics. In the new representation, the RF data can then be used to construct source terms in the equations of the extended MHD model, enabling quantitative modeling of RF-induced tearing mode stabilization. Though our specific implementation uses the NIMROD extended MHD (Sovinec et al., 2004 [22]) and GENRAY RF (Smirnov et al., 1994 [23]) codes, the approach presented can be applied more generally to any code coupling requiring the mapping of ray tracing data onto Eulerian grids.

  12. Model-building codes for membrane proteins.

    SciTech Connect

    Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S.; Slepoy, Alexander; Sale, Kenneth L.; Young, Malin M.; Faulon, Jean-Loup Michel; Gray, Genetha Anne

    2005-01-01

We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
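The distance-constraint term of such a penalty function has a simple generic form: squared violations of sparse pairwise distances, with experimental uncertainty absorbed into a tolerance. The sketch below illustrates that form only; the constraint list, tolerance, and coordinates are hypothetical, and the real penalty also includes statistical terms from solved structures.

```python
import numpy as np

def distance_penalty(coords, constraints, tol=1.0):
    """Sum of squared distance-constraint violations.

    coords: (n, 3) array of candidate atom/residue positions [Angstrom]
    constraints: list of (i, j, target_distance) sparse pairs
    tol: slack within which a constraint costs nothing
    """
    total = 0.0
    for i, j, target in constraints:
        d = np.linalg.norm(coords[i] - coords[j])
        violation = max(0.0, abs(d - target) - tol)  # flat-bottomed well
        total += violation ** 2
    return total

# Hypothetical example: three sites forming a right triangle.
coords = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
constraints = [(0, 1, 10.0), (0, 2, 10.0), (1, 2, 14.1)]
```

Conformations satisfying every constraint within tolerance score zero, so the global search only has to rank candidates by how badly they violate the sparse experimental distances.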

  13. Multiview coding mode decision with hybrid optimal stopping model.

    PubMed

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, as computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.
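The tradeoff described above, stopping the mode search once further testing is unlikely to pay for its computation, can be caricatured in a few lines. This is a toy illustration of the early-termination idea only, not the paper's hybrid optimal stopping model; the mode costs, probabilities, and threshold are all hypothetical.

```python
def mode_decision(costs, probs, stop_threshold=0.05):
    """Examine modes in descending predicted probability; stop early.

    costs: rate-distortion cost of each candidate mode
    probs: predicted probability that each mode is the best one
    """
    order = sorted(range(len(costs)), key=lambda m: -probs[m])
    best_mode, best_cost = order[0], costs[order[0]]
    remaining = 1.0 - probs[order[0]]     # prob. mass not yet examined
    for m in order[1:]:
        if remaining < stop_threshold:    # optimal-stopping-style cutoff:
            break                         # unlikely any untested mode wins
        if costs[m] < best_cost:
            best_mode, best_cost = m, costs[m]
        remaining -= probs[m]
    return best_mode
```

When the mode predictor is confident (one mode carries nearly all the probability mass), the search terminates after a single evaluation, which is where the computational-complexity savings come from.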

  14. Comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reaction flows

    SciTech Connect

    Magnotti, F.; Diskin, G.; Matulaitis, J.; Chinitz, W.

    1984-01-01

    The use of silane (SiH4) as an effective ignitor and flame-stabilizing pilot fuel is well documented. A reliable chemical kinetic mechanism for predicting its behavior under the conditions encountered in the combustor of a SCRAMJET engine was developed. The effects of hydrogen addition on hydrocarbon ignition and flame stabilization, as a means of reducing lengthy ignition delays and reaction times, were studied. The ranges of applicability of chemical kinetic models of hydrogen-air combustors were also investigated. The CHARNAL computer code was applied to the turbulent reaction rate modeling.
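
    The central effect studied in this class of models, that turbulent fluctuations change mean reaction rates, can be illustrated with a minimal sketch (not the CHARNAL model; rate parameters are assumed): because the Arrhenius rate k(T) = A·exp(−Ta/T) is convex in T for T < Ta/2, averaging over a fluctuating temperature yields a larger mean rate than evaluating at the mean temperature.

```python
import math

# Minimal illustration (not the CHARNAL turbulence model): averaging the
# convex Arrhenius rate over an assumed Gaussian temperature PDF raises the
# mean rate above the rate at the mean temperature.

def arrhenius(T, A=1.0e9, Ta=15000.0):      # Ta: activation temperature [K]
    return A * math.exp(-Ta / T)

def mean_rate(T_mean, T_rms, n=20001, A=1.0e9, Ta=15000.0):
    """Average k(T) over a Gaussian temperature PDF truncated at +/-4 sigma."""
    lo, hi = T_mean - 4.0 * T_rms, T_mean + 4.0 * T_rms
    dT = (hi - lo) / (n - 1)
    total = weight = 0.0
    for i in range(n):
        T = lo + i * dT
        w = math.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)   # Gaussian weight
        total += w * arrhenius(T, A, Ta)
        weight += w
    return total / weight
```

    For a flame-like mean temperature of 1500 K with 10% rms fluctuations, the fluctuation-averaged rate exceeds the rate at 1500 K, which is the qualitative effect such models must quantify for turbulent flows.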

  15. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reaction flows

    NASA Technical Reports Server (NTRS)

    Magnotti, F.; Diskin, G.; Matulaitis, J.; Chinitz, W.

    1984-01-01

    The use of silane (SiH4) as an effective ignitor and flame-stabilizing pilot fuel is well documented. A reliable chemical kinetic mechanism for predicting its behavior under the conditions encountered in the combustor of a SCRAMJET engine was developed. The effects of hydrogen addition on hydrocarbon ignition and flame stabilization, as a means of reducing lengthy ignition delays and reaction times, were studied. The ranges of applicability of chemical kinetic models of hydrogen-air combustors were also investigated. The CHARNAL computer code was applied to the turbulent reaction rate modeling.

  16. Heavy Ion Reaction Modeling for Hadrontherapy Applications

    SciTech Connect

    Cerutti, F.; Ferrari, A.; Enghardt, W.; Gadioli, E.; Mairani, A.; Parodi, K.; Sommerer, F.

    2007-10-26

    A comprehensive and reliable description of nucleus-nucleus interactions represents a crucial need in different interdisciplinary fields. In particular, hadrontherapy monitoring by means of in-beam positron emission tomography (PET) requires, in addition to measuring, the capability of calculating the activity of β+-decaying nuclei produced in the irradiated tissue. For this purpose, in view of treatment monitoring at the Heidelberg Ion Therapy (HIT) facility, the transport and interaction Monte Carlo code FLUKA is a promising candidate. It is provided with the description of heavy ion reactions at intermediate and low energies by two specific event generators. In-beam PET experiments performed at GSI for a few beam-target combinations have been simulated and first comparisons between the measured and calculated β+-activity are available.

  17. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs performing static analyses. Further, we report on important lessons learned about the benefits and drawbacks of the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  18. Data model description for the DESCARTES and CIDER codes

    SciTech Connect

    Miley, T.B.; Ouderkirk, S.J.; Nichols, W.E.; Eslinger, P.W.

    1993-01-01

    The primary objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. One of the major objectives of the HEDR Project is to develop several computer codes to model the airborne releases, transport and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In July 1992, the HEDR Project Manager determined that the computer codes being developed (DESCARTES, calculation of environmental accumulation from airborne releases, and CIDER, dose calculations from environmental accumulation) were not sufficient to create accurate models. A team of HEDR staff members developed a plan to assure that the computer codes would meet HEDR Project goals. The plan consists of five tasks: (1) code requirements definition, (2) scoping studies, (3) design specifications, (4) benchmarking, and (5) data modeling. This report defines the data requirements for the DESCARTES and CIDER codes.

  19. Benchmarking of numerical codes against analytical solutions for multidimensional multicomponent diffusive transport coupled with precipitation-dissolution reactions and porosity changes

    NASA Astrophysics Data System (ADS)

    Hayek, M.; Kosakowski, G.; Jakob, A.; Churakov, S.

    2012-04-01

    Numerical computer codes dealing with precipitation-dissolution reactions and porosity changes in multidimensional reactive transport problems are important tools in geoscience. Recent typical applications are related to CO2 sequestration, shallow and deep geothermal energy, remediation of contaminated sites or the safe underground storage of chemotoxic and radioactive waste. Although the agreement between codes using the same models and similar numerical algorithms is satisfactory, it is known that the numerical methods used in solving the transport equation, as well as different coupling schemes between transport and chemistry, may lead to systematic discrepancies. Moreover, due to their inability to describe subgrid pore space changes correctly, the numerical approaches predict discretization-dependent values of porosity changes and clogging times. In this context, analytical solutions become an essential tool to verify numerical simulations. We present a benchmark study where we compare a two-dimensional analytical solution for diffusive transport of two solutes coupled with a precipitation-dissolution reaction causing porosity changes with numerical solutions obtained with the COMSOL Multiphysics code and with the reactive transport code OpenGeoSys-GEMS. The analytical solution describes the spatio-temporal evolution of solutes and solid concentrations and porosity. We show that both numerical codes reproduce the analytical solution very well, although distinct differences in accuracy can be traced back to specific numerical implementations.

  20. Cost effectiveness of the 1995 model energy code in Massachusetts

    SciTech Connect

    Lucas, R.G.

    1996-02-01

    This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1995 Model Energy Code (MEC) building thermal-envelope requirements for single-family houses and multifamily housing units in Massachusetts. The goal was to compare the cost effectiveness of the 1995 MEC to the energy conservation requirements of the Massachusetts State Building Code, based on a comparison of the costs and benefits associated with complying with each. This comparison was performed for three cities representing three geographical regions of Massachusetts: Boston, Worcester, and Pittsfield. The analysis was done for two different scenarios: a "move-up" home buyer purchasing a single-family house and a "first-time" financially limited home buyer purchasing a multifamily condominium unit. Natural gas, oil, and electric resistance heating were examined. The Massachusetts state code has much more stringent requirements if electric resistance heating is used rather than other heating fuels and/or equipment types; the MEC requirements do not vary by fuel type. For single-family homes, the 1995 MEC has requirements that are more energy-efficient than the non-electric-resistance requirements of the current state code. For multifamily housing, the 1995 MEC has requirements that are approximately as energy-efficient as the non-electric-resistance requirements of the current state code. The 1995 MEC is generally not more stringent than the electric resistance requirements of the state code; in fact, for multifamily buildings the 1995 MEC is much less stringent.

  1. DSMC modeling of flows with recombination reactions

    NASA Astrophysics Data System (ADS)

    Gimelshein, Sergey; Wysong, Ingrid

    2017-06-01

    An empirical microscopic recombination model is developed for the direct simulation Monte Carlo method that complements the extended weak vibrational bias model of dissociation. The model maintains the correct equilibrium reaction constant over a wide range of temperatures by using collision theory to enforce the number of recombination events. It also strictly follows the detailed balance requirement for an equilibrium gas. The model and its implementation are verified with oxygen and nitrogen heat bath relaxation and compared with available experimental data on atomic oxygen recombination in argon and molecular nitrogen.
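
    The detailed-balance constraint mentioned above can be illustrated generically (this is not the DSMC implementation; the rate forms and parameters below are assumed for illustration): tying the recombination rate coefficient to the dissociation rate through the equilibrium constant, k_r(T) = k_d(T)/K_eq(T), guarantees the correct equilibrium reaction constant at every temperature.

```python
import math

# Generic detailed-balance sketch (not the DSMC model itself): at chemical
# equilibrium k_d(T)*[AB] = k_r(T)*[A][B], so defining k_r = k_d / K_eq
# enforces the equilibrium constant by construction.

def k_dissociation(T, A=1.0e18, Ta=59360.0):
    """Assumed modified-Arrhenius dissociation rate (illustrative params)."""
    return A * T ** -1.0 * math.exp(-Ta / T)

def K_equilibrium(T, B=1.0e6, Ta=59360.0):
    """Assumed Arrhenius-like equilibrium constant (illustrative params)."""
    return B * math.exp(-Ta / T)

def k_recombination(T):
    # Detailed balance: recombination is slaved to dissociation.
    return k_dissociation(T) / K_equilibrium(T)
```

    Any change to the dissociation model then propagates consistently into recombination, which is the property that keeps an equilibrium gas stationary.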

  2. Towards many-body based nuclear reaction modelling

    NASA Astrophysics Data System (ADS)

    Hilaire, Stéphane; Goriely, Stéphane

    2016-06-01

    The increasing need for cross sections far from the valley of stability poses a challenge for nuclear reaction models. So far, predictions of cross sections have relied on more or less phenomenological approaches, depending on parameters adjusted to available experimental data or deduced from systematic expressions. While such predictions are expected to be reliable for nuclei not too far from the experimentally known regions, it is clearly preferable to use more fundamental approaches, based on sound physical principles, when dealing with very exotic nuclei. Thanks to the high computer power available today, all the ingredients required to model a nuclear reaction can now be (and have been) microscopically (or semi-microscopically) determined starting from the information provided by a nucleon-nucleon effective interaction. This concerns nuclear masses, the optical model potential, nuclear level densities, photon strength functions, as well as fission barriers. All these nuclear model ingredients, traditionally given by phenomenological expressions, now have a microscopic counterpart implemented in the TALYS nuclear reaction code. We are thus now able to perform fully microscopic cross section calculations. The quality of these ingredients and the impact of using them instead of the usually adopted phenomenological parameters will be discussed. Perspectives on the improvements one can expect in the coming years will also be drawn.

  3. Theory and modeling of stereoselective organic reactions.

    PubMed

    Houk, K N; Paddon-Row, M N; Rondan, N G; Wu, Y D; Brown, F K; Spellmeyer, D C; Metz, J T; Li, Y; Loncharich, R J

    1986-03-07

    Theoretical investigations of the transition structures of additions and cycloadditions reveal details about the geometries of bond-forming processes that are not directly accessible by experiment. The conformational analysis of transition states has been developed from theoretical generalizations about the preferred angle of attack by reagents on multiple bonds and predictions of conformations with respect to partially formed bonds. Qualitative rules for the prediction of the stereochemistries of organic reactions have been devised, and semi-empirical computational models have also been developed to predict the stereoselectivities of reactions of large organic molecules, such as nucleophilic additions to carbonyls, electrophilic hydroborations and cycloadditions, and intramolecular radical additions and cycloadditions.

  4. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high-reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though there are implicit benefits for partial order reduction possible as a consequence of the API's strict interprocess communication policy.

  5. A combinatorial model for dentate gyrus sparse coding

    SciTech Connect

    Severa, William; Parekh, Ojas; James, Conrad D.; Aimone, James B.

    2016-12-29

    The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation—similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus’s (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Lastly, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
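
    A minimal numerical toy in the spirit of this framework (the paper's construction is combinatorial and exact; this random threshold version, with invented sizes and thresholds, is only illustrative): each of many output units sums a few randomly chosen input lines and fires when the sum crosses a threshold, producing a sparser binary code than the input.

```python
import random

# Toy sparse expansion via binary threshold units (illustrative only).
# Each output unit reads fan_in random input lines and fires if at least
# theta of them are active, which drives output sparsity well below the
# input activity level.

def sparse_code(x, n_out=200, fan_in=4, theta=3, seed=0):
    rng = random.Random(seed)
    conns = [rng.sample(range(len(x)), fan_in) for _ in range(n_out)]
    return [1 if sum(x[j] for j in idx) >= theta else 0 for idx in conns]

x = [1 if i < 30 else 0 for i in range(100)]   # input with 30% active lines
y = sparse_code(x)
sparsity = sum(y) / len(y)                     # fraction of active outputs
```

    With 30% of input lines active, a unit needs 3 of its 4 random inputs active to fire, so only a small fraction of output units turn on: the expansion is both larger and sparser than the input, the qualitative property the combinatorial model formalizes.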

  6. A combinatorial model for dentate gyrus sparse coding

    DOE PAGES

    Severa, William; Parekh, Ojas; James, Conrad D.; ...

    2016-12-29

    The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation—similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus’s (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Lastly, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.

  7. Theory and Modeling of Asymmetric Catalytic Reactions.

    PubMed

    Lam, Yu-Hong; Grayson, Matthew N; Holland, Mareike C; Simon, Adam; Houk, K N

    2016-04-19

    Modern density functional theory and powerful contemporary computers have made it possible to explore complex reactions of value in organic synthesis. We describe recent explorations of mechanisms and origins of stereoselectivities with density functional theory calculations. The specific functionals and basis sets that are routinely used in computational studies of stereoselectivities of organic and organometallic reactions in our group are described, followed by our recent studies that uncovered the origins of stereocontrol in reactions catalyzed by (1) vicinal diamines, including cinchona alkaloid-derived primary amines, (2) vicinal amidophosphines, and (3) organo-transition-metal complexes. Two common cyclic models account for the stereoselectivity of aldol reactions of metal enolates (Zimmerman-Traxler) or those catalyzed by the organocatalyst proline (Houk-List). Three other models were derived from computational studies described in this Account. Cinchona alkaloid-derived primary amines and other vicinal diamines are venerable asymmetric organocatalysts. For α-fluorinations and a variety of aldol reactions, vicinal diamines form enamines at one terminal amine and activate electrophilically with NH(+) or NF(+) at the other. We found that the stereocontrolling transition states are cyclic and that their conformational preferences are responsible for the observed stereoselectivity. In fluorinations, the chair seven-membered cyclic transition state is highly favored, just as the Zimmerman-Traxler chair six-membered aldol transition state controls stereoselectivity. In aldol reactions with vicinal diamine catalysts, the crown transition states are favored, both in the prototype and in an experimental example, shown in the graphic. We found that low-energy conformations of cyclic transition states occur and control stereoselectivities in these reactions.
Another class of bifunctional organocatalysts, the vicinal amidophosphines, catalyzes the (3 + 2) annulation

  8. Multisynaptic activity in a pyramidal neuron model and neural code.

    PubMed

    Ventriglia, Francesco; Di Maio, Vito

    2006-01-01

    The highly irregular firing of mammalian cortical pyramidal neurons is one of the most striking observations of brain activity. This result greatly affects the discussion of the neural code, i.e. how the brain codes information transmitted along the different cortical stages. In fact, it seems to favor one of the two main hypotheses about this issue, the rate code; but supporters of the contrasting hypothesis, the temporal code, consider this evidence inconclusive. We discuss here a leaky integrate-and-fire model of a hippocampal pyramidal neuron, intended to be biologically sound, used to investigate the genesis of the irregular pyramidal firing and to give useful information about the coding problem. To this aim, the complete set of excitatory and inhibitory synapses impinging on such a neuron has been taken into account. The firing activity of the neuron model has been studied by computer simulation both in basic conditions and allowing brief periods of over-stimulation in specific regions of its synaptic constellation. Our results show neuronal firing conditions similar to those observed in experimental investigations of pyramidal cortical neurons. In particular, the coefficient of variation (CV) computed from the inter-spike intervals (ISIs) in our simulations for basic conditions is close to unity, as is that computed from experimental data. Our simulations also show different firing-sequence behaviors for different frequencies of stimulation.
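
    A stripped-down version of such a simulation (far simpler than the paper's full synaptic constellation; all event rates and weights below are invented) shows how random excitatory and inhibitory synaptic events drive irregular firing, quantified by the CV of the inter-spike intervals.

```python
import math, random

# Minimal leaky integrate-and-fire sketch: the membrane potential leaks
# toward rest and receives random excitatory/inhibitory kicks; the CV of
# the inter-spike intervals measures firing irregularity (CV near 1 is
# Poisson-like; a deterministic pacemaker would give CV near 0).

def simulate_lif(steps=200000, dt=0.1, tau=20.0, v_th=1.0, seed=1):
    rng = random.Random(seed)
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau)            # leak toward resting potential
        if rng.random() < 0.04:         # excitatory synaptic event
            v += 0.2
        if rng.random() < 0.03:         # inhibitory synaptic event
            v -= 0.15
        if v >= v_th:                   # threshold crossing -> spike
            spikes.append(t * dt)
            v = 0.0                     # reset after spike
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean        # coefficient of variation of ISIs
```

    Because the mean drive keeps the membrane below threshold, spikes are triggered by fluctuations, and the resulting ISI distribution is broad, the regime in which CV values approaching unity arise.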

  9. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
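
    The kinetics-plus-sensitivity problem class can be illustrated by a toy example (this is not LSENS's numerical method): for a single first-order reaction dC/dt = -kC, the sensitivity s = dC/dk obeys ds/dt = -C - ks, and the two equations can be integrated together and checked against the exact solution C = C0·e^(-kt), s = -t·C0·e^(-kt).

```python
import math

# Toy kinetics-plus-sensitivity integration (not LSENS's algorithm):
# explicit Euler on the coupled system dC/dt = -k*C, ds/dt = -C - k*s,
# where s = dC/dk is the sensitivity of the concentration to the rate
# coefficient.

def integrate(k=2.0, c0=1.0, t_end=1.0, n=200000):
    dt = t_end / n
    c, s = c0, 0.0
    for _ in range(n):
        # Tuple assignment evaluates both right-hand sides with old values.
        c, s = c + dt * (-k * c), s + dt * (-c - k * s)
    return c, s
```

    At t = 1 with k = 2 the exact values are C = e^(-2) and s = -e^(-2); the Euler result matches them to the integration tolerance, the same kind of consistency check one applies to any kinetics-plus-sensitivity solver.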

  10. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.926b Section 200.926b Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND...

  11. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.925c Section 200.925c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND...

  12. Testing geochemical modeling codes using New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of selected portions of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will: (1) ensure that we are providing adequately for all significant processes occurring in natural systems; (2) determine the adequacy of the mathematical descriptions of the processes; (3) check the adequacy and completeness of thermodynamic data as a function of temperature for solids, aqueous species and gases; and (4) determine the sensitivity of model results to the manner in which the problem is conceptualized by the user and then translated into constraints in the code input. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions. The kinetics of silica precipitation in EQ6 will be tested using field data from silica-lined drain channels carrying hot water away from the Wairakei borefield.
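
    The basic fluid-mineral equilibrium test underlying such comparisons can be sketched as follows (a generic illustration; the species, activity, and log K values below are assumed, not Wairakei data): the saturation index SI = log10(Q/K) compares the ion activity product Q against the mineral's equilibrium constant K at the temperature of interest.

```python
import math

# Generic saturation-index sketch in the spirit of EQ3/6-style codes
# (hypothetical values, not field data). SI > 0: supersaturated, the
# mineral tends to precipitate; SI < 0: undersaturated, it tends to
# dissolve; SI = 0: equilibrium.

def saturation_index(activities, stoich, log_k):
    """activities: species -> activity; stoich: species -> stoichiometric
    coefficient in the dissolution reaction; log_k: log10 K at T."""
    log_q = sum(nu * math.log10(activities[sp]) for sp, nu in stoich.items())
    return log_q - log_k

# Quartz-like example: SiO2(s) = SiO2(aq), with an assumed log K of -2.7.
si = saturation_index({"SiO2(aq)": 1.0e-2}, {"SiO2(aq)": 1}, -2.7)
```

    Here log Q = -2 and log K = -2.7, so SI = 0.7 and the fluid is supersaturated with respect to the mineral, the kind of prediction that an affinity-temperature diagram then tracks as a function of T.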

  13. A bio-inspired sensor coupled with a bio-bar code and hybridization chain reaction for Hg(2+) assay.

    PubMed

    Xu, Huifeng; Zhu, Xi; Ye, Hongzhi; Yu, Lishuang; Chen, Guonan; Chi, Yuwu; Liu, Xianxiang

    2015-10-18

    In this article, a bio-inspired DNA sensor is developed, which is coupled with a bio-bar code and a hybridization chain reaction. This bio-inspired sensor has a high sensitivity toward Hg(2+) and has been used to assay Hg(2+) in an extract of Bauhinia championi with satisfactory results.

  14. Strontium Adsorption and Desorption Reactions in Model Drinking Water Distribution Systems

    DTIC Science & Technology

    2014-02-04

    Strontium (Sr2+) adsorption to and desorption from iron corrosion products were examined in two model drinking water distribution systems (DWDS) ... used to control Sr2+ desorption. Keywords: calcium carbonate; drinking water distribution system; α-FeOOH; iron; strontium; XANES

  15. Global Microscopic Models for Nuclear Reaction Calculations

    SciTech Connect

    Goriely, S.

    2005-05-24

    Important effort has been devoted in the last decades to measuring reaction cross sections. Despite such effort, many nuclear applications still require the use of theoretical predictions to estimate experimentally unknown cross sections. Most of the nuclear ingredients in the calculations of reaction cross sections need to be extrapolated in an energy and/or mass domain out of reach of laboratory simulations. In addition, some applications often involve a large number of unstable nuclei, so that only global approaches can be used. For these reasons, when the nuclear ingredients to the reaction models cannot be determined from experimental data, it is highly recommended to consider preferentially microscopic or semi-microscopic global predictions based on sound and reliable nuclear models which, in turn, can compete with more phenomenological highly-parameterized models in the reproduction of experimental data. The latest developments made in deriving such microscopic models for practical applications are reviewed. It mainly concerns nuclear structure properties (masses, deformations, radii, etc.), level densities at the equilibrium deformation, γ-ray strength, as well as fission barriers and level densities at the fission saddle points.

  16. Modeling of the EAST ICRF antenna with ICANT Code

    SciTech Connect

    Qin Chengming; Zhao Yanping; Colas, L.; Heuraux, S.

    2007-09-28

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from transmission line (TL) theory.

  17. Modeling of the EAST ICRF antenna with ICANT Code

    NASA Astrophysics Data System (ADS)

    Qin, Chengming; Zhao, Yanping; Colas, L.; Heuraux, S.

    2007-09-01

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from transmission line (TL) theory.

  18. Dual Cauchy rate-distortion model for video coding

    NASA Astrophysics Data System (ADS)

    Zeng, Huanqiang; Chen, Jing; Cai, Canhui

    2014-07-01

    A dual Cauchy rate-distortion model is proposed for video coding. In our approach, the coefficient distribution of the integer transform is first studied. Then, based on the observation that the rate-distortion model of the luminance and that of the chrominance can be well expressed by separate Cauchy functions, a dual Cauchy rate-distortion model is presented. Furthermore, simplified rate-distortion formulas are deduced to reduce the computational complexity of the proposed model without losing accuracy. Experimental results have shown that the proposed model is better able to approximate the actual rate-distortion curve for various sequences with different motion activities.
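
    The fitting step implied above can be sketched generically (synthetic data, not the paper's sequences): Cauchy-source analysis leads to power-law rate-quantization forms R(Q) = a·Q^(-alpha), which are linear in log-log space, so the luminance and chrominance components can each be fit independently by ordinary least squares.

```python
import math

# Generic power-law rate-model fit (illustrative, with synthetic data):
# taking logs turns R = a * Q**(-alpha) into log R = log a - alpha * log Q,
# a straight line fit by least squares on (log Q, log R) samples.

def fit_power_law(qs, rates):
    xs = [math.log(q) for q in qs]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope     # (a, alpha)

qs = [10, 20, 40]
a, alpha = fit_power_law(qs, [8.0 * q ** -1.2 for q in qs])  # synthetic data
```

    In a dual model one such fit would be maintained for luminance and another for chrominance, each with its own (a, alpha) pair.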

  19. Development of a fan model for the CONTAIN code

    SciTech Connect

    Pevey, R.E.

    1987-01-08

    A fan model has been added to the CONTAIN code with a minimum of disruption of the standard CONTAIN calculation sequence. The user is required to supply a simple pressure vs. flow rate curve for each fan in his model configuration. Inclusion of the fan model required modification to two CONTAIN subroutines, IFLOW and EXEQNX. The two modified routines and the resulting executable module are located on the LANL mass storage system as /560007/iflow, /560007/exeqnx, and /560007/cont01, respectively. The model has been initially validated using a very simple sample problem and is ready for a more complete workout using the SRP reactor models from the RSRD probabilistic risk analysis.
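
    A minimal sketch of how such a user-supplied pressure vs. flow-rate curve might be used (illustrative only, not CONTAIN's implementation; the fan data are invented): the curve is interpolated linearly and intersected with a quadratic system resistance dP = R·Q² by bisection to find the operating point.

```python
# Illustrative fan operating-point solver (not CONTAIN's implementation).
# The fan curve is a list of (flow, pressure-rise) points; the duct system
# is modeled as a quadratic resistance dP = R * Q**2.

def fan_dp(q, curve):
    """Piecewise-linear interpolation of the (flow, pressure-rise) curve."""
    pts = sorted(curve)
    if q <= pts[0][0]:
        return pts[0][1]
    for (q0, p0), (q1, p1) in zip(pts, pts[1:]):
        if q <= q1:
            return p0 + (p1 - p0) * (q - q0) / (q1 - q0)
    return pts[-1][1]

def operating_point(curve, R, tol=1e-9):
    """Bisect for the flow where fan pressure rise balances system loss."""
    lo, hi = min(q for q, _ in curve), max(q for q, _ in curve)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fan_dp(mid, curve) - R * mid ** 2 > 0:   # fan still wins: flow up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

curve = [(0.0, 100.0), (5.0, 80.0), (10.0, 0.0)]   # hypothetical fan data
q_op = operating_point(curve, R=1.0)
```

    With this hypothetical curve and R = 1, the balance 160 - 16Q = Q² on the second segment gives Q = (-16 + √896)/2 ≈ 6.97, which the bisection recovers.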

  20. Self-shielding models of MICROX-2 code

    SciTech Connect

    Hou, J.; Ivanov, K.; Choi, H.

    2013-07-01

    The MICROX-2 is a transport theory code that solves for the neutron slowing-down and thermalization equations of a two-region lattice cell. In the previous study, a new fine-group cross section library of the MICROX-2 was generated and tested against reference calculations and measurement data. In this study, existing physics models of the MICROX-2 are reviewed and updated to improve the physics calculation performance of the MICROX-2 code, including the resonance self-shielding model and spatial self-shielding factor. The updated self-shielding models have been verified through a series of benchmark calculations against the Monte Carlo code, using homogeneous and pin cell models selected for this study. The results have shown that the updates of the self-shielding factor calculation model are correct and improve the physics calculation accuracy even though the magnitude of error reduction is relatively small. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by approximately 0.1% and 0.2% for the homogeneous and pin cell models, respectively, considered in this study. (authors)

  1. Model representation in the PANCOR wall interference assessment code

    NASA Technical Reports Server (NTRS)

    Al-Saadi, Jassim A.

    1991-01-01

    An investigation into the aircraft model description requirements of a wall interference assessment and correction code known as PANCOR was conducted. The accuracy necessary in specifying various elements of the model description was defined. It was found that the specified lift coefficient is the most important model parameter in the wind tunnel simulation. An accurate specification of the model volume was also found to be important. Also developed was a partially automated technique for generating wing lift distributions that are required as input to PANCOR. An existing three dimensional transonic small disturbance code was modified to provide the necessary information. A group of auxiliary computer programs and procedures was developed to help generate the required input for PANCOR.

  2. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  3. Monte Carlo FLUKA code simulation for study of 68Ga production by direct proton-induced reaction

    NASA Astrophysics Data System (ADS)

    Mokhtari Oranj, Leila; Kakavand, Tayeb; Sadeghi, Mahdi; Aboudzadeh Rovias, Mohammadreza

    2012-06-01

    68Ga is an important radionuclide for positron emission tomography. 68Ga can be produced by the 68Zn(p,n)68Ga reaction in common biomedical cyclotrons. To facilitate optimization of target design and to study activation of materials, a Monte Carlo code can be used to simulate the irradiation of the target materials with charged hadrons. In this paper, a FLUKA code simulation was employed to prototype a Zn target for the production of 68Ga by proton irradiation. Furthermore, the experimental data were compared with the thick-target yields estimated by FLUKA for the given irradiation time. In conclusion, the FLUKA code can be used for estimation of the production yield.
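    The thick-target yield that the abstract compares against can be sketched as a simple integral of the excitation function over the stopping power. The cross-section curve and stopping power below are toy placeholders (the real work used FLUKA and measured data):

```python
import numpy as np

# Hypothetical 68Zn(p,n)68Ga excitation function and stopping power; in practice
# these come from evaluated nuclear data and stopping-power tables.
E = np.linspace(12.0, 25.0, 400)                       # proton energy (MeV)
sigma_mb = 800.0 * np.exp(-((E - 14.0) / 4.0) ** 2)    # toy peak near 14 MeV (mb)
dEdx = 20.0 + 0.5 * E                                  # toy mass stopping power (MeV cm^2/g)

# Thick-target yield (atoms per incident proton), integrating from the exit
# energy up to the beam energy:  Y = (N_A / A) * integral of sigma(E)/(dE/dx) dE
N_A, A_mass = 6.022e23, 68.0
yield_per_proton = (N_A / A_mass) * np.trapz(sigma_mb * 1e-27 / dEdx, E)
```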

  4. Enhancements to the SSME transfer function modeling code

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.

    1995-01-01

    This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes that extend their functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to a single unmeasured excitation source and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction to ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID), including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files, and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method.
In the third approach, the time data is low pass filtered prior to the modeling process in an
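    The ERA step mentioned in Section 2.2 can be sketched in a few lines: stack the Markov parameters into Hankel matrices, truncate the SVD, and read off a state-space realization. This is a generic ERA sketch on synthetic noise-free data, not the SSME codes themselves:

```python
import numpy as np

def era(h, order, rows=20, cols=20):
    """Eigensystem Realization Algorithm for a SISO impulse response.

    h[k] are the Markov parameters C A^k B; returns a state-space (A, B, C).
    """
    H0 = np.array([[h[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[h[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    sqrt_s = np.sqrt(s)
    A = (U / sqrt_s).T @ H1 @ (Vt.T / sqrt_s)   # Sigma^-1/2 U^T H1 V Sigma^-1/2
    B = (sqrt_s[:, None] * Vt)[:, :1]           # first column of Sigma^1/2 V^T
    C = (U * sqrt_s)[:1, :]                     # first row of U Sigma^1/2
    return A, B, C

# Identify a 2nd-order decaying oscillator from its noise-free impulse response.
k = np.arange(60)
h = (0.9 ** k) * np.sin(0.5 * k + 0.3)
A, B, C = era(h, order=2)
h_hat = [(C @ np.linalg.matrix_power(A, i) @ B).item() for i in range(10)]
```

    For noise-free data of exact rank, the identified model reproduces the Markov parameters to numerical precision; the report's methods add the machinery needed for noisy spectral estimates.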

  5. Delayed photo-emission model for beam optics codes

    DOE PAGES

    Jensen, Kevin L.; Petillo, John J.; Panagos, Dimitrios N.; ...

    2016-11-22

    Future advanced light sources and x-ray Free Electron Lasers require fast response from the photocathode to enable short electron pulse durations as well as pulse shaping, and so the ability to model delays in emission is needed for beam optics codes. The development of a time-dependent emission model accounting for delayed photoemission due to transport and scattering is given, and its inclusion in the Particle-in-Cell code MICHELLE results in changes to the pulse shape that are described. Furthermore, the model is applied to pulse elongation of a bunch traversing an rf injector, and to the smoothing of laser jitter on a short pulse.

  6. Using cryptology models for protecting PHP source code

    NASA Astrophysics Data System (ADS)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying, and modification is a significant problem today. Existing source-level solutions mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode the opcode are more secure, but they are commercial and require a closed-source, proprietary extension to the PHP interpreter. Additionally, encoded opcode is not compatible with future versions of the interpreter, which forces users to buy the encoders again. Finally, if the extension's source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from applying standard cryptology models to analyze the strengths and weaknesses of the existing solutions, where script protection is viewed as a secure communication channel in the cryptologic sense.
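    The "secure channel" view of script protection can be illustrated with a toy symmetric cipher. This sketch uses SHA-256 in counter mode as a keystream purely for illustration; a real deployment would use a vetted AEAD cipher, and the key would have to live outside the distributed script (the hard part the paper addresses):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic keystream: SHA-256 of key || counter, concatenated."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; applying it twice recovers the plaintext."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

script = b"<?php echo 'protected logic'; ?>"
enc = xor_crypt(script, b"secret-key")
```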

  7. Discovering binary codes for documents by learning deep generative models.

    PubMed

    Hinton, Geoffrey; Salakhutdinov, Ruslan

    2011-01-01

    We describe a deep generative model in which the lowest layer represents the word-count vector of a document and the top layer represents a learned binary code for that document. The top two layers of the generative model form an undirected associative memory and the remaining layers form a belief net with directed, top-down connections. We present efficient learning and inference procedures for this type of generative model and show that it allows more accurate and much faster retrieval than latent semantic analysis. By using our method as a filter for a much slower method called TF-IDF we achieve higher accuracy than TF-IDF alone and save several orders of magnitude in retrieval time. By using short binary codes as addresses, we can perform retrieval on very large document sets in a time that is independent of the size of the document set using only one word of memory to describe each document.
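    The retrieval-by-address idea in the last sentence can be sketched without the deep model: once each document has a short binary code, lookup probes the query code plus its Hamming-distance-1 neighbors, so the cost depends on the code length, not the corpus size. The codes below are hand-picked stand-ins for learned ones:

```python
# Toy semantic-hashing retrieval: each document is stored under its binary code;
# lookup probes the query code and all codes within Hamming distance 1.
BITS = 16
index = {}                                   # code (int) -> list of document ids

def add_document(doc_id, code):
    index.setdefault(code, []).append(doc_id)

def retrieve(code):
    hits = list(index.get(code, []))
    for b in range(BITS):                    # flip each bit for radius-1 probes
        hits.extend(index.get(code ^ (1 << b), []))
    return hits

add_document("doc-a", 0b1010101010101010)
add_document("doc-b", 0b1010101010101011)    # differs from doc-a in one bit
```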

  8. Code development for ITER edge modelling - SOLPS5.1

    NASA Astrophysics Data System (ADS)

    Bonnin, X.; Kukushkin, A. S.; Coster, D. P.

    2009-06-01

    Most ITER divertor modelling work to date has used the B2-EIRENE (SOLPS4) code package, coupling a 2D fluid description of the charged plasma species (B2) to a Monte-Carlo kinetic description of the neutrals (EIRENE). In recent years, the emphasis at ITER has been on completing the neutral model, including neutral-neutral collisions, opacity effects, radiation transport, etc. Elsewhere, new physics, numerics, and algorithmic improvements, such as E × B and diamagnetic drifts, electric currents, ion and neutral heat and particle flux limits, wall material mixing and surface temperature evolution, and bundling of heavy ion species, as well as switching to cell-centred velocities and using an internal energy instead of a total energy equation, gave birth to the B2.5 code, combined with EIRENE as SOLPS5. We report on work in progress to merge these advances with the ITER-specific model of the edge and divertor.

  9. Dual coding: a cognitive model for psychoanalytic research.

    PubMed

    Bucci, W

    1985-01-01

    Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model: perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, and thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code

  10. A spectral synthesis code for rapid modelling of supernovae

    NASA Astrophysics Data System (ADS)

    Kerzendorf, Wolfgang E.; Sim, Stuart A.

    2014-05-01

    We present TARDIS - an open-source code for rapid spectral modelling of supernovae (SNe). Our goal is to develop a tool that is sufficiently fast to allow exploration of the complex parameter spaces of models for SN ejecta. This can be used to analyse the growing number of high-quality SN spectra being obtained by transient surveys. The code uses Monte Carlo methods to obtain a self-consistent description of the plasma state and to compute a synthetic spectrum. It has a modular design to facilitate the implementation of a range of physical approximations that can be compared to assess both accuracy and computational expediency. This will allow users to choose a level of sophistication appropriate for their application. Here, we describe the operation of the code and make comparisons with alternative radiative transfer codes of differing levels of complexity (SYN++, PYTHON and ARTIS). We then explore the consequence of adopting simple prescriptions for the calculation of atomic excitation, focusing on four species of relevance to Type Ia SN spectra - Si II, S II, Mg II and Ca II. We also investigate the influence of three methods for treating line interactions on our synthetic spectra and the need for accurate radiative rate estimates in our scheme.

  11. Transform Coding for Point Clouds Using a Gaussian Process Model.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2017-04-28

    We propose using stationary Gaussian Processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Further, we propose using Gaussian Process Transforms (GPTs), which are Karhunen-Loève transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets.
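    The per-block transform can be sketched directly from the description: build a GP covariance matrix over the point positions, eigendecompose it to get the block's Karhunen-Loève basis, then quantize in that basis. The squared-exponential kernel and its length scale here are illustrative choices, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(32, 3))       # point positions within one block
colors = rng.uniform(0.0, 255.0, size=32)       # one color attribute per point

# GP covariance over the point positions; the Gaussian Process Transform is the
# eigenvector (Karhunen-Loeve) basis of this covariance matrix.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2.0 * 0.2 ** 2))              # squared-exponential kernel
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]               # sort coefficients by eigenvalue
gpt = eigvecs[:, order]

coeffs = gpt.T @ colors                         # analysis transform
step = 8.0                                      # uniform quantizer step
recon = gpt @ (np.round(coeffs / step) * step)  # dequantize and invert
```

    Because the transform is orthonormal, the reconstruction error is bounded by the quantization error, and energy concentrates in the leading (largest-eigenvalue) coefficients, which is what makes the entropy coding effective.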

  12. Chemical TOPAZ: Modifications to the heat transfer code TOPAZ: The addition of chemical reaction kinetics and chemical mixtures

    SciTech Connect

    Nichols, A.L. III.

    1990-06-07

    This is a report describing the modifications which have been made to the heat flow code TOPAZ to allow the inclusion of thermally controlled chemical kinetics. The report is broken into five parts. The first part is an introduction to the general assumptions and theoretical underpinnings that were used to develop the model. The second section describes the changes that have been implemented in the code. The third section is the users manual for the code input. The fourth section is a compilation of hints, common errors, and things to be aware of while getting started. The fifth section gives a sample problem using the new code. This manual addendum is written with the presumption that most readers are not fluent in chemical concepts. Therefore, in this section we describe the requirements that must be met before chemistry can occur and how the chemistry has been modeled in the code.
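    The core coupling the report describes, thermally controlled kinetics feeding heat back into the temperature field, can be sketched for a single lumped cell. All rate and thermal parameters below are hypothetical, and a real heat-flow code would do this per mesh element inside its conduction solve:

```python
import math

# Toy single-step exothermic reaction coupled to a lumped thermal mass:
#   dc/dt = -k(T) c,   k(T) = A exp(-Ea / (R T)),   dT/dt = (-dH) k(T) c / Cp
A_pre, Ea, R = 1e6, 8e4, 8.314     # 1/s, J/mol, J/(mol K)  (illustrative)
dH, Cp = -5e4, 100.0               # J/mol (exothermic), J/(mol K) effective
c, T, dt = 1.0, 500.0, 1e-3        # initial concentration, temperature, time step

for _ in range(20000):             # explicit Euler integration over 20 s
    k = A_pre * math.exp(-Ea / (R * T))
    rate = k * c
    c += -rate * dt                # fuel consumed by the reaction
    T += (-dH) * rate / Cp * dt    # exothermic heat release raises T
```

    The feedback is the interesting part: rising temperature accelerates the Arrhenius rate, which releases heat faster, capped here by fuel depletion (the adiabatic temperature rise is (-dH)/Cp = 500 K).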

  13. Reaction-contingency based bipartite Boolean modelling

    PubMed Central

    2013-01-01

    Background Intracellular signalling systems are highly complex, rendering mathematical modelling of large signalling networks infeasible or impractical. Boolean modelling provides one feasible approach to whole-network modelling, but at the cost of dequantification and decontextualisation of activation. That is, these models cannot distinguish between different downstream roles played by the same component activated in different contexts. Results Here, we address this with a bipartite Boolean modelling approach. Briefly, we use a state oriented approach with separate update rules based on reactions and contingencies. This approach retains contextual activation information and distinguishes distinct signals passing through a single component. Furthermore, we integrate this approach in the rxncon framework to support automatic model generation and iterative model definition and validation. We benchmark this method with the previously mapped MAP kinase network in yeast, showing that minor adjustments suffice to produce a functional network description. Conclusions Taken together, we (i) present a bipartite Boolean modelling approach that retains contextual activation information, (ii) provide software support for automatic model generation, visualisation and simulation, and (iii) demonstrate its use for iterative model generation and validation. PMID:23835289
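    The bipartite idea, separate update rules for reaction nodes and state nodes, with contingencies gating which reactions fire, can be shown on a three-component toy cascade. The reactions and contingencies here are invented for illustration and are not the rxncon yeast network:

```python
# Minimal bipartite Boolean sketch: reactions fire only when their source state
# and contingency states hold; states turn on when a producing reaction fired.
states = {"A_active": True, "B_active": False, "C_active": False}

reactions = {
    # reaction name: (required source state, list of contingency states)
    "A_phosphorylates_B": ("A_active", []),
    "B_phosphorylates_C": ("B_active", ["A_active"]),  # context-dependent firing
}
produces = {"A_phosphorylates_B": "B_active", "B_phosphorylates_C": "C_active"}

def step(states):
    fired = {r: states[src] and all(states[c] for c in ctg)
             for r, (src, ctg) in reactions.items()}
    new = dict(states)
    for r, ok in fired.items():
        if ok:
            new[produces[r]] = True
    return new

s = step(step(states))      # two synchronous updates propagate A -> B -> C
```

    Keeping the reaction layer explicit is what preserves the contextual information: the same component (B) can drive different downstream reactions under different contingencies.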

  14. A C-code for the double folding interaction potential for reactions involving deformed target nuclei

    NASA Astrophysics Data System (ADS)

    Gontchar, I. I.; Chushnyakova, M. V.

    2013-01-01

    We present a C-code designed to obtain the interaction potential between a spherical projectile nucleus and an axially symmetric deformed target nucleus, and in particular to find the Coulomb barrier, by using the double folding model (DFM). The program calculates the nucleus-nucleus potential as a function of the distance between the centers of mass of the colliding nuclei as well as of the angle between the symmetry axis of the target nucleus and the beam direction. The most important output parameters are the Coulomb barrier energy and radius. Since many researchers use a Woods-Saxon profile for the nuclear term of the potential, we provide an option in our code for fitting the DFM potential with such a profile near the barrier.
    Program summary
    Program title: DFMDEF
    Catalogue identifier: AENI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENI_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2245
    No. of bytes in distributed program, including test data, etc.: 215442
    Distribution format: tar.gz
    Programming language: C
    Computer: PC, Mac
    Operating system: Windows XP (with the GCC compiler, version 2), MacOS, Linux
    RAM: 100 MB with an average parameter set
    Classification: 17.9
    Nature of problem: The code calculates in a semimicroscopic way the bare interaction potential between a spherical projectile nucleus and a deformed but axially symmetric target nucleus as a function of the center-of-mass distance as well as of the angle between the symmetry axis of the target nucleus and the beam direction. The height and the position of the Coulomb barrier are found. The calculated potential is approximated by a conventional Woods-Saxon profile near the barrier. Dependence of the barrier parameters upon the characteristics of the effective NN forces (like, e
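    The barrier-finding step can be sketched for the spherical case by scanning a Woods-Saxon-plus-Coulomb potential for its maximum. The potential depth, radius constant, and diffuseness below are generic textbook-style values, not the DFM output, and the real code adds the orientation dependence of the deformed target:

```python
import numpy as np

# Toy spherical potential: Woods-Saxon nuclear term plus point Coulomb,
# scanned for the barrier height and radius.
Z1, A1, Z2, A2 = 8, 16, 82, 208                  # e.g. 16O + 208Pb
e2 = 1.44                                        # MeV fm
V0, a = 70.0, 0.65                               # MeV, fm (illustrative)
R0 = 1.2 * (A1 ** (1 / 3) + A2 ** (1 / 3))       # fm

r = np.linspace(8.0, 16.0, 4000)                 # center-of-mass distance (fm)
V = -V0 / (1.0 + np.exp((r - R0) / a)) + Z1 * Z2 * e2 / r

i = int(np.argmax(V))
barrier_height, barrier_radius = float(V[i]), float(r[i])
```

    The maximum sits where the attractive nuclear tail begins to overcome the rising Coulomb repulsion; with these toy parameters it lands near 75 MeV at about 12 fm, roughly where the 16O + 208Pb barrier is known to be.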

  15. No-Core Shell Model and Reactions

    SciTech Connect

    Navratil, Petr; Ormand, W. Erich; Caurier, Etienne; Bertulani, Carlos

    2005-10-14

    There has been significant progress in ab initio approaches to the structure of light nuclei. Starting from realistic two- and three-nucleon interactions, the ab initio no-core shell model (NCSM) can predict low-lying levels in p-shell nuclei. It is a challenging task to extend ab initio methods to describe nuclear reactions. In this contribution, we present a brief overview of the NCSM with examples of recent applications as well as the first steps taken toward nuclear reaction applications. In particular, we discuss cross section calculations of p+6Li and 6He+p scattering as well as a calculation of the astrophysically important 7Be(p,γ)8B S-factor.

  16. No-Core Shell Model and Reactions

    SciTech Connect

    Navratil, P; Ormand, W E; Caurier, E; Bertulani, C

    2005-04-29

    There has been significant progress in ab initio approaches to the structure of light nuclei. Starting from realistic two- and three-nucleon interactions, the ab initio no-core shell model (NCSM) can predict low-lying levels in p-shell nuclei. It is a challenging task to extend ab initio methods to describe nuclear reactions. In this contribution, we present a brief overview of the NCSM with examples of recent applications as well as the first steps taken toward nuclear reaction applications. In particular, we discuss cross section calculations of p+6Li and 6He+p scattering as well as a calculation of the astrophysically important 7Be(p,γ)8B S-factor.

  17. Theory and modeling of stereoselective organic reactions

    SciTech Connect

    Houk, K.N.; Paddon-Row, M.N.; Rondan, N.G.; Wu, Y.D.; Brown, F.K.; Spellmeyer, D.C.; Metz, J.T.; Li, Y.; Loncharich, R.J.

    1986-03-07

    Theoretical investigations of the transition structures of additions and cycloadditions reveal details about the geometries of bond-forming processes that are not directly accessible by experiment. The conformational analysis of transition states has been developed from theoretical generalizations about the preferred angle of attack by reagents on multiple bonds and predictions of conformations with respect to partially formed bonds. Qualitative rules for the prediction of the stereochemistries of organic reactions have been devised, and semi-empirical computational models have also been developed to predict the stereoselectivities of reactions of large organic molecules, such as nucleophilic additions to carbonyls, electrophilic hydroborations and cycloadditions, and intramolecular radical additions and cycloadditions. 52 references, 7 figures.

  18. Cracking the Dual Code: Toward a Unitary Model of Phoneme Identification

    PubMed Central

    Foss, Donald J.; Gernsbacher, Morton Ann

    2014-01-01

    The results of five experiments on the nature of the speech code and on the role of sentence context in speech processing are reported. The first three studies test predictions from the dual code model of phoneme identification (Foss, D. J., & Blank, M. A. Cognitive Psychology, 1980, 12, 1–31). According to that model, subjects in a phoneme monitoring experiment respond to a prelexical code when engaged in a relatively easy task, and to a postlexical code when the task is difficult. The experiments controlled ease of processing either by giving subjects multiple targets for which to monitor or by preceding the target with a similar-sounding phoneme that draws false alarms. The predictions from the model were not sustained. Furthermore, evidence for a paradoxical nonword superiority effect was observed. In Experiment IV reaction times (RTs) to all possible /d/-initial CVCs were gathered. RTs were unaffected by the target item's status as a word or nonword, but they were affected by the internal phonetic structure of the target-bearing item. Vowel duration correlated highly (0.627) with RTs. Experiment V examined previous work purporting to demonstrate that semantic predictability affects how the speech code is processed, in particular that semantic predictability leads to responses based upon a postlexical code. That study found "predictability" effects when words occurred in isolation; further, it found that vowel duration and other phonetic factors can account parsimoniously for the existing results. These factors also account for the apparent nonword superiority effects observed earlier. Implications of the present work for theoretical models that stress the interaction between semantic context and speech processing are discussed, as are implications for use of the phoneme monitoring task. PMID:25520528

  19. Improvement of Basic Fluid Dynamics Models for the COMPASS Code

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is a new next-generation safety analysis code, based on the moving particle semi-implicit (MPS) method, that provides local information for various key phenomena in core disruptive accidents of sodium-cooled fast reactors. In this study, improvement of the basic fluid dynamics models for the COMPASS code was carried out and verified with fundamental verification calculations. A fully implicit pressure solution algorithm was introduced to improve the numerical stability of MPS simulations. With a newly developed free surface model, the numerical difficulty caused by poor pressure solutions is overcome by involving free surface particles in the pressure Poisson equation. In addition, the applicability of the MPS method to interactions between fluid and multiple solid bodies was investigated in comparison with dam-break experiments with solid balls. It was found that the PISO algorithm and free surface model make simulations with the passively moving solid model numerically stable. The characteristic behavior of the solid balls was successfully reproduced by the present numerical simulations.

  20. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  1. Plutonium explosive dispersal modeling using the MACCS2 computer code

    SciTech Connect

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to establish a defensible methodology for explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of Project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
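    The Gaussian dispersion at the heart of such modeling can be sketched with the standard plume equation with ground reflection. The source rate, wind speed, release height, and dispersion parameters below are placeholders; MACCS2 samples weather statistically rather than using one fixed condition:

```python
import math

def plume_concentration(Q, u, y, z, H, sy, sz):
    """Gaussian plume concentration with ground reflection (toy parameters).

    Q: release rate, u: wind speed, H: effective release height,
    sy/sz: dispersion parameters evaluated at the downwind distance of interest.
    """
    lateral = math.exp(-y * y / (2.0 * sy * sy))
    vertical = (math.exp(-((z - H) ** 2) / (2.0 * sz * sz))
                + math.exp(-((z + H) ** 2) / (2.0 * sz * sz)))
    return Q / (2.0 * math.pi * u * sy * sz) * lateral * vertical

# Centerline vs. off-axis ground-level air concentration for an elevated release.
c_center = plume_concentration(Q=1.0, u=3.0, y=0.0, z=0.0, H=10.0, sy=80.0, sz=40.0)
c_offaxis = plume_concentration(Q=1.0, u=3.0, y=100.0, z=0.0, H=10.0, sy=80.0, sz=40.0)
```

    The second, mirror-image exponential is the "reflection" term that keeps material from diffusing through the ground plane.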

  2. New Mechanical Model for the Transmutation Fuel Performance Code

    SciTech Connect

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.
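    The three fuel/cladding interaction regimes enumerated above reduce to a simple classification; the function and threshold below are a hypothetical sketch of that branching logic, not the TRU code's actual interface:

```python
def fuel_clad_interface(gap, contact_pressure, slip_threshold):
    """Classify the three fuel/cladding interaction regimes in the model."""
    if gap > 0.0:
        return "open-gap"            # (1) no mechanical interaction
    if contact_pressure < slip_threshold:
        return "contact-slip"        # (2) axial slippage at the interface
    return "contact-stick"           # (3) slippage prevented
```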

  3. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  4. Exact energy conservation in hybrid meshless model/code

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.

    2008-11-01

    Energy conservation is an important issue for both PIC and hybrid models. In hybrid codes the ions are treated kinetically and the electrons are described as a massless charge-neutralizing fluid. Our recently developed Particle-In-Cloud-Of-Points (PICOP) approach [1], which uses an adaptive meshless technique to compute electromagnetic fields on a cloud of computational points, is applied to a hybrid model. An exact energy conservation numerical scheme, which describes the interaction between geometrical space, where the electromagnetic fields are computed, and particle/velocity space, is presented. Having been utilized in a new PICOP hybrid code, the algorithm demonstrated accurate energy conservation in the numerical simulation of a two counter-streaming plasma beam instability. [1] S. A. Galkin, B. P. Cluggish, J. S. Kim, S. Yu. Medvedev ``Advanced PICOP Algorithm with Adaptive Meshless Field Solver'', published in the IEEE PPPS/ICOP 2007 Conference proceedings, pp. 1445-1448, Albuquerque, New Mexico, June 17-22, 2007.

  5. Universal regularizers for robust sparse coding and modeling.

    PubMed

    Ramírez, Ignacio; Sapiro, Guillermo

    2012-09-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard l(0) or l(1) ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.

  6. The WARP Code: Modeling High Intensity Ion Beams

    SciTech Connect

    Grote, D P; Friedman, A; Vay, J L; Haber, I

    2004-12-09

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.
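
The explicit particle-in-cell cycle at Warp's core — deposit charge to a grid, solve for the fields, gather them back to the particles, and push — can be sketched in a few lines of numpy. This is a 1-D electrostatic toy with periodic boundaries and unit constants, not Warp's actual implementation:

```python
import numpy as np

def pic_step(x, v, q, L, ng, dt):
    """One explicit 1-D electrostatic PIC cycle: cloud-in-cell charge
    deposition, FFT Poisson solve on a periodic grid, field gather, and a
    simple push (unit mass and permittivity assumed throughout)."""
    dx = L / ng
    g = x / dx
    i0 = np.floor(g).astype(int) % ng          # left-hand grid node
    w1 = g - np.floor(g)                       # weight to the right-hand node
    rho = np.zeros(ng)
    np.add.at(rho, i0, q * (1.0 - w1) / dx)
    np.add.at(rho, (i0 + 1) % ng, q * w1 / dx)
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2         # solve -phi'' = rho, zero-mean phi
    E = np.fft.ifft(-1j * k * phi_k).real      # E = -dphi/dx
    Ep = E[i0] * (1.0 - w1) + E[(i0 + 1) % ng] * w1
    v = v + q * Ep * dt
    x = (x + v * dt) % L
    return x, v, rho

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 1000)
v = np.zeros(1000)
q = np.full(1000, 1.0 / 1000)                  # total charge 1
x, v, rho = pic_step(x, v, q, L=1.0, ng=64, dt=0.01)
```

Cloud-in-cell deposition conserves charge exactly: the grid charge sums to the particle charge, which is one of the invariants production codes like Warp monitor.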

  7. The WARP Code: Modeling High Intensity Ion Beams

    SciTech Connect

    Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving

    2005-03-15

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse 'slice' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.

  8. Statistical Model Code System to Calculate Particle Spectra from HMS Precompound Nucleus Decay.

    SciTech Connect

    Blann, Marshall

    2014-11-01

    Version 05 The HMS-ALICE/ALICE codes address the question: what happens when photons, nucleons, or clusters/heavy ions of a few hundred keV to several hundred MeV interact with nuclei? The ALICE codes (as they have evolved over 50 years) use several nuclear reaction models to answer this question, predicting the energies and angles of particles emitted (n, p, 2H, 3H, 3He, 4He, 6Li) in the reaction, and the residues, the spallation and fission products. The models used are principally Monte-Carlo formulations of the Hybrid/Geometry-Dependent Hybrid precompound, Weisskopf-Ewing evaporation, and Bohr-Wheeler fission models, and recently a Fermi-statistics break-up model (for light nuclei). The angular distribution calculation relies on the Chadwick-Oblozinsky linear-momentum-conservation model. Output gives residual product yields, and single and double differential cross sections for ejectiles in lab and CM frames. An option allows exclusive 1-3 particle-out output (in ENDF format) for all combinations of n, p, and alpha channels. Product yields include estimates of isomer yields where isomers exist. Earlier versions included the ability to compute coincident particle emission correlations, and much of this coding is still in place. Recoil-product double-differential cross sections are computed, but not presently written to output files. Code execution begins with an on-screen interrogation for input, with defaults available for many aspects. A menu of model options is available within the input interrogation screen. The input is saved to hard drive. Subsequent runs may use this file, use the file with line-editor changes, or begin again with the on-line interrogation.

  9. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as bystander signaling; these effects are ignored by, or intractable in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, the shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of hits for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
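
The event-based quantity mentioned above — the Poisson distribution of particle traversals for a specified cellular area — follows directly from fluence times area. A small illustrative calculation (not GERM itself; the numbers are invented) is:

```python
import math

def hit_probability(fluence_per_um2, area_um2, n_hits):
    """Poisson probability that a cell nucleus of the given area receives
    exactly n_hits particle traversals at the given fluence."""
    mean_hits = fluence_per_um2 * area_um2   # expected traversals per nucleus
    return math.exp(-mean_hits) * mean_hits ** n_hits / math.factorial(n_hits)

# Example: a 100 um^2 nucleus at 0.02 particles/um^2 averages 2 traversals
p_none = hit_probability(0.02, 100.0, 0)     # chance the nucleus is missed
p_one = hit_probability(0.02, 100.0, 1)      # chance of exactly one traversal
```

At low fluence most cells receive zero hits, which is why event-by-event (stochastic) bookkeeping differs from the average-dose picture of the prior art.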

  10. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the spectral and spatial characteristics of HS images differ from those of traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel-vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when applying HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
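
The core idea — predict the current spectral band from a previous one and code only the residual — can be sketched with a simple affine predictor standing in for the paper's Gaussian-mixture model (the function name and synthetic bands below are invented for illustration):

```python
import numpy as np

def predict_band(prev_band, cur_band):
    """Least-squares affine prediction of the current spectral band from the
    previous one (a simplified stand-in for Gaussian-mixture reflectance
    modelling). Returns the prediction and the residual to be coded."""
    x = prev_band.ravel().astype(float)
    y = cur_band.ravel().astype(float)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # fit y ~ a*x + b
    pred = a * prev_band + b
    return pred, cur_band - pred

rng = np.random.default_rng(1)
band1 = rng.uniform(0.0, 1.0, (32, 32))
# Adjacent HS bands are strongly correlated; simulate that here
band2 = 0.9 * band1 + 0.05 + rng.normal(0.0, 0.01, (32, 32))
pred, resid = predict_band(band1, band2)
# The residual has far less energy than the raw band, so it codes cheaply.
```

The same logic is why the predicted band makes a good extra reference frame for HEVC: the inter-band residual is much smaller than the band itself.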

  11. Modeling of transient dust events in fusion edge plasmas with DUSTT-UEDGE code

    NASA Astrophysics Data System (ADS)

    Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; Rognlien, T. D.

    2016-10-01

    It is well known that dust can be produced in fusion devices by various processes involving structural damage of plasma-exposed materials. Recent computational and experimental studies have demonstrated that dust production and the associated plasma contamination can present serious challenges to achieving sustained fusion reactions in future fusion devices such as ITER. To analyze the impact that dust can have on the performance of fusion plasmas, the authors use coupled dust and plasma transport modeling with the DUSTT-UEDGE code. In the past, only steady-state computational studies, presuming a continuous source of dust influx, were performed, owing to the iterative nature of the DUSTT-UEDGE code coupling. However, experimental observations demonstrate that intermittent injection of large quantities of dust, often associated with transient plasma events, may severely impact fusion plasma conditions and even lead to discharge termination. In this work we report on progress in coupling the DUSTT-UEDGE codes in a time-dependent regime, which allows modeling of transient dust-plasma transport processes. The methodology and details of the time-dependent code coupling, as well as examples of simulations of transient dust-plasma transport phenomena, will be presented. These include time-dependent modeling of the impact of short outbursts of different quantities of tungsten dust in the ITER divertor on the edge plasma parameters. The plasma response to outbursts of various durations, locations, and ejected dust sizes will be analyzed.

  12. Time-dependent recycling modeling with edge plasma transport codes

    NASA Astrophysics Data System (ADS)

    Pigarov, A.; Krasheninnikov, S.; Rognlien, T.; Taverniers, S.; Hollmann, E.

    2013-10-01

    First, we discuss extensions to the macroblob approach which allow more accurate simulation of ELM dynamics and of pedestal and edge transport with the UEDGE code. Second, we present UEDGE modeling results for an H-mode discharge with infrequent ELMs and large pedestal losses on DIII-D. In the modeled sequence of ELMs this discharge attains a dynamic equilibrium. The temporal evolution of pedestal plasma profiles, spectral line emission, and surface temperature matching experimental data over the ELM cycle is discussed. Analysis of the dynamic gas balance highlights the important role of material surfaces. We quantified the wall outgassing between ELMs as 3X the NBI fueling and the recycling coefficient as 0.8 for wall pumping via macroblob-wall interactions. Third, we present results from a multiphysics version of UEDGE with built-in, reduced, 1-D wall models and analyze the role of various PMI processes. Progress in the framework-coupled UEDGE/WALLPSI code is discussed. Finally, implicit coupling schemes are an important feature of multiphysics codes; we report results of a parametric analysis of convergence and performance for Picard and Newton iterations in a system of coupled deterministic-stochastic ODEs, and propose modifications enhancing convergence.
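
The Picard iteration analyzed in the last item is a plain fixed-point sweep over the coupled system. A minimal sketch for one implicit (backward-Euler) step of a toy linear ODE system follows; the matrix and step size are invented, and convergence requires the map to be a contraction (small enough dt):

```python
import numpy as np

def picard_solve(f, y0, dt, tol=1e-10, max_iter=100):
    """Picard (fixed-point) iteration for one backward-Euler step
    y_new = y0 + dt * f(y_new) of a coupled ODE system."""
    y = y0.copy()
    for k in range(max_iter):
        y_next = y0 + dt * f(y)
        if np.linalg.norm(y_next - y) < tol:
            return y_next, k + 1
        y = y_next
    return y, max_iter

# Two-component coupled linear system dy/dt = A y
A = np.array([[-1.0, 0.3],
              [0.2, -2.0]])
f = lambda y: A @ y
y, iters = picard_solve(f, np.array([1.0, 1.0]), dt=0.1)
```

Newton iteration replaces the fixed-point update with a linearized solve and typically converges in fewer iterations, at the cost of forming a Jacobian — the trade-off the abstract's parametric study addresses.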

  13. Implementing Subduction Models in the New Mantle Convection Code Aspect

    NASA Astrophysics Data System (ADS)

    Arredondo, Katrina; Billen, Magali

    2014-05-01

    The geodynamic community has utilized various numerical modeling codes as scientific questions arise and computer processing power increases. Citcom, a widely used mantle convection code, has limitations and vulnerabilities such as temperature overshoots of hundreds or thousands of kelvin (e.g., Kommu et al., 2013). Aspect, intended as a more powerful successor, is in active development, with additions such as Adaptive Mesh Refinement (AMR) and improved solvers (Kronbichler et al., 2012). The validity and ease of use of Aspect are important to its adoption as a possible upgrade and replacement for Citcom. The development of publishable models illustrates the capacity of Aspect. We present work on the addition of non-linear solvers and stress-dependent rheology to Aspect. Building on a solid foundational knowledge of C++, these additions were readily implemented in Aspect and tested against CitcomS. Time-dependent subduction models akin to those in Billen and Hirth (2007) are built and compared in CitcomS and Aspect. Comparison with CitcomS assists Aspect development and showcases its flexibility, usability and capabilities. References: Billen, M. I., and G. Hirth, 2007. Rheologic controls on slab dynamics. Geochemistry, Geophysics, Geosystems. Kommu, R., E. Heien, L. H. Kellogg, W. Bangerth, T. Heister, E. Studley, 2013. The Overshoot Phenomenon in Geodynamics Codes. American Geophysical Union Fall Meeting. Kronbichler, M., T. Heister, W. Bangerth, 2012. High Accuracy Mantle Convection Simulation through Modern Numerical Methods. Geophys. J. Int.
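
The overshoot phenomenon cited from Kommu et al. is characteristic of non-monotone advection schemes. A self-contained toy (not Citcom or Aspect code) shows a second-order Lax-Wendroff scheme overshooting a sharp temperature front while a monotone first-order upwind scheme does not:

```python
import numpy as np

def advect(T, c, scheme, steps):
    """Advect a profile on a periodic grid with Courant number c.
    'upwind' is monotone (creates no new extrema); 'lax-wendroff' is
    second-order accurate but oscillates and overshoots at sharp fronts."""
    for _ in range(steps):
        Tm, Tp = np.roll(T, 1), np.roll(T, -1)   # T[j-1], T[j+1]
        if scheme == "upwind":
            T = T - c * (T - Tm)
        else:  # Lax-Wendroff
            T = T - 0.5 * c * (Tp - Tm) + 0.5 * c**2 * (Tp - 2 * T + Tm)
    return T

T0 = np.where(np.arange(100) < 50, 1.0, 0.0)     # sharp temperature front
T_up = advect(T0.copy(), 0.5, "upwind", 40)
T_lw = advect(T0.copy(), 0.5, "lax-wendroff", 40)
# T_lw exceeds the initial maximum of 1.0 behind the front; T_up never does.
```

Production convection codes control this with flux limiters, entropy viscosity, or similar stabilization rather than by reverting to diffusive first-order schemes.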

  14. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    SciTech Connect

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.

  15. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
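
A parametric grain model of the sort described stores a few statistics and regenerates visually similar noise at the decoder. A toy autoregressive synthesizer is sketched below; the parameter set (variance and a single neighbor-correlation coefficient) is an invented simplification of the paper's model:

```python
import numpy as np

def synthesize_grain(shape, sigma, rho, rng):
    """Synthesize film-grain-like noise as a causal AR field: each sample
    mixes its left and upper neighbors (weight rho) with fresh white noise,
    then the field is renormalized to the target standard deviation sigma."""
    white = rng.normal(0.0, 1.0, shape)
    g = np.zeros(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            left = g[i, j - 1] if j > 0 else 0.0
            up = g[i - 1, j] if i > 0 else 0.0
            g[i, j] = rho * 0.5 * (left + up) + white[i, j]
    g *= sigma / g.std()                 # match the encoder-estimated variance
    return g

rng = np.random.default_rng(2)
grain = synthesize_grain((64, 64), sigma=3.0, rho=0.6, rng=rng)
# 'grain' would be added back to the decoded, denoised frame.
```

Only (sigma, rho) need be transmitted, which is the point of the parametric approach: a handful of parameters instead of the bits required to code the grain itself.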

  16. Modelling enzyme reaction mechanisms, specificity and catalysis.

    PubMed

    Mulholland, Adrian J

    2005-10-15

    Modern modelling methods can now give uniquely detailed understanding of enzyme-catalyzed reactions, including the analysis of mechanisms and the identification of determinants of specificity and catalytic efficiency. A new field of computational enzymology has emerged that has the potential to contribute significantly to structure-based design and to develop predictive models of drug metabolism and, for example, of the effects of genetic polymorphisms. This review outlines important techniques in this area, including quantum-chemical model studies and combined quantum-mechanics and molecular-mechanics (QM/MM) methods. Some recent applications to enzymes of pharmacological interest are also covered, showing the types of problems that can be tackled and the insight they can give.

  17. Photochemical reactions of various model protocell systems

    NASA Technical Reports Server (NTRS)

    Folsome, C. E.

    1986-01-01

    Models for the emergence of cellular life on the primitive Earth, and for the physical environments of that era, have been studied that embody these assumptions: (1) pregenetic cellular forms were phase-bounded systems, primarily photosynthetic in nature, and (2) the early Earth environment was anoxic (lacking appreciable amounts of free oxygen). It was found that organic structures can also be formed under anoxic conditions (N2, CO3=, H2O) by protracted long-wavelength UV radiation. Apparently these structures form initially as organic layers upon CaCO3 crystalloids. The question remains as to whether the UV photosynthetic ability of such phase-bounded structures is a curiosity, or a general property of phase-bounded systems that is of direct interest to the emergence of cellular life. The question of the requirement for, and salient features of, a phase boundary for UV photosynthetic abilities was addressed by searching for similar general physical properties which might be manifest in a variety of other simple protocell-like structures. Since it has been shown that laboratory protocell models can effect the UV photosynthesis of low-molecular-weight compounds, this reaction is being used as an assay to survey other types of structures for similar UV photosynthetic reactions. The kinds of structures surveyed are: (1) proteinoids; (2) liposomes; (3) reconstituted cell membrane spheroids; (4) coacervates; and (5) model protocells formed under anoxic conditions.

  19. An Interoceptive Predictive Coding Model of Conscious Presence

    PubMed Central

    Seth, Anil K.; Suzuki, Keisuke; Critchley, Hugo D.

    2011-01-01

    We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness. PMID:22291673

  20. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.

  1. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    PubMed

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding.
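
The runtime rescoring step can be sketched as a small fixed-point loop: each code's confidence is repeatedly blended with the co-occurrence support it receives from the other codes. The blending rule, matrix, and numbers below are invented for illustration and are not the paper's algorithm:

```python
import numpy as np

def rescore(scores, cooc, n_iter=5, alpha=0.3):
    """Iteratively blend each code's confidence with its co-occurrence
    support from the other codes. cooc[i, j] plays the role of an
    estimated P(code i assigned | code j assigned)."""
    s = np.asarray(scores, dtype=float)
    n = len(s)
    mask = 1.0 - np.eye(n)                 # exclude self-support
    for _ in range(n_iter):
        w = mask * s                       # weight support by current beliefs
        support = (cooc * w).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-9)
        s = (1 - alpha) * s + alpha * support
    return s

scores = [0.9, 0.6, 0.5]                   # primary auto-coder confidences
cooc = np.array([[1.0, 0.8, 0.1],          # codes 0 and 1 often co-occur;
                 [0.7, 1.0, 0.1],          # code 2 rarely occurs with them
                 [0.1, 0.1, 1.0]])
adjusted = rescore(scores, cooc)
# Code 2's confidence is pulled down because it lacks co-occurrence support.
```

This captures the paper's intuition that a code implausible in the context of the other likely codes should lose confidence even if the narrative weakly supports it.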

  2. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion: results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids, with two-way communication between the domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation, the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest-resolution Digital Elevation Models (DEMs) used by ATFM are 1/3 arc-second. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results, with the long-term aim of tsunami forecasts from source to high-resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs, and will make possible the future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  3. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Because the thermal radiation of coded masks is more severe than in visible imaging systems, burying the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of a mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: the coded-mask imaging with ideal focused lenses and the imperfect imaging with practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF across the field of view. The influence of mask cell size on the diffraction pattern is also analyzed. Numerical results show that mask patterns for direct imaging systems should have more random structures, while more periodic structures are needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, a desired diffraction pattern can be achieved.
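
The model's central relation — the system PSF as a convolution of the mask pattern's contribution with the practical-lens PSF — is easy to sketch numerically. The binary mask and the small blur kernel below are invented stand-ins for the mask diffraction pattern and lens PSF:

```python
import numpy as np

def system_psf(mask_pattern, lens_psf):
    """System PSF of a coded-mask imager modeled as the full 2-D convolution
    of the mask's (ideal-lens) pattern with the practical-lens PSF, done in
    plain numpy via zero padding and FFTs, then normalized to unit energy."""
    m, n = mask_pattern.shape
    p, q = lens_psf.shape
    shape = (m + p - 1, n + q - 1)          # full linear-convolution size
    F = np.fft.rfft2(mask_pattern, shape) * np.fft.rfft2(lens_psf, shape)
    psf = np.fft.irfft2(F, shape)
    return psf / psf.sum()

mask = (np.random.default_rng(3).uniform(size=(8, 8)) > 0.5).astype(float)
kern = np.array([0.25, 0.5, 0.25])
lens = np.outer(kern, kern)                 # small separable blur as lens PSF
psf = system_psf(mask, lens)                # 10x10 combined system PSF
```

Comparing such modeled PSFs for random versus periodic mask layouts is exactly the kind of pattern evaluation the abstract's criteria are designed for.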

  4. Modelling of LOCA Tests with the BISON Fuel Performance Code

    SciTech Connect

    Williamson, Richard L; Pastore, Giovanni; Novascone, Stephen Rhead; Spencer, Benjamin Whiting; Hales, Jason Dean

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  5. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Workman Mill Road, Whittier, California 90601. (2) National Electrical Code, NFPA 70, 1993 Edition... Building Officials and Code Administrators International, Inc., 4051 West Flossmoor Road, Country Club... the Southern Building Code Congress International, Inc., 900 Montclair Road, Birmingham, Alabama 35213...

  6. Leveraging Modeling Approaches: Reaction Networks and Rules

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2012-01-01

    We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349

  7. Leveraging modeling approaches: reaction networks and rules.

    PubMed

    Blinov, Michael L; Moraru, Ion I

    2012-01-01

    We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.

  8. A Mutation Model from First Principles of the Genetic Code.

    PubMed

    Thorvaldsen, Steinar

    2016-01-01

    The paper presents a neutral Codons Probability Mutations (CPM) model of molecular evolution and genetic decay of an organism. The CPM model uses a Markov process with a 20-dimensional state space of probability distributions over amino acids. The transition matrix of the Markov process includes the mutation rate and those single point mutations compatible with the genetic code. This is an alternative to the standard Point Accepted Mutation (PAM) and BLOcks of amino acid SUbstitution Matrix (BLOSUM). Genetic decay is quantified as a similarity between the amino acid distribution of proteins from a (group of) species on one hand, and the equilibrium distribution of the Markov chain on the other. Amino acid data for the eukaryote, bacterium, and archaea families are used to illustrate how both the CPM and PAM models predict their genetic decay towards the equilibrium value of 1. A family of bacteria is studied in more detail. It is found that warm environment organisms on average have a higher degree of genetic decay compared to those species that live in cold environments. The paper addresses a new codon-based approach to quantify genetic decay due to single point mutations compatible with the genetic code. The present work may be seen as a first approach to use codon-based Markov models to study how genetic entropy increases with time in an effectively neutral biological regime. Various extensions of the model are also discussed.
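
    The neutral mutate-to-equilibrium dynamic described above can be sketched as a simple Markov chain. The sketch below is a minimal stand-in, not the actual CPM transition matrix: it uses a toy 4-state alphabet in place of the 20 amino acids, builds a row-stochastic transition matrix from a hypothetical single-point-mutation adjacency and an assumed mutation rate, and measures "decay" as cosine similarity to the chain's equilibrium distribution.

```python
import numpy as np

# Toy 4-state stand-in for the 20 amino acids (NOT the actual CPM matrix):
# off-diagonal entries mark states reachable by a single point mutation in a
# hypothetical code, scaled by an assumed mutation rate mu.
mu = 0.01
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
P = mu * adj / adj.sum(axis=1, keepdims=True)      # row-stochastic transitions
np.fill_diagonal(P, 1.0 - P.sum(axis=1))

# Equilibrium distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def decay(p, pi):
    """'Genetic decay' as cosine similarity between p and the equilibrium pi."""
    return float(p @ pi / (np.linalg.norm(p) * np.linalg.norm(pi)))

p = np.array([0.7, 0.1, 0.1, 0.1])   # observed amino acid frequencies (toy)
for _ in range(5000):
    p = p @ P                        # one generation of neutral mutation
print(round(decay(p, pi), 3))        # climbs toward 1 as p relaxes to pi
```

    As in the paper's prediction, the similarity measure climbs toward the equilibrium value of 1 as the distribution relaxes.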

  9. Geochemical controls on shale groundwaters: Results of reaction path modeling

    SciTech Connect

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250°C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs.

  10. A model code for the radiative theta pinch

    SciTech Connect

    Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.

    2014-07-15

    A model for the theta pinch is presented with three modelled phases of radial inward shock phase, reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics and radiation and radiation-coupled dynamics in the pinch phase. A code is written incorporating correction for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated into the model, the coupling coefficient f between the primary loop current and the induced plasma current and the mass swept up factor fm. These values are taken from experiments carried out in the Chulalongkorn theta pinch.

  11. A model code for the radiative theta pinch

    NASA Astrophysics Data System (ADS)

    Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.

    2014-07-01

    A model for the theta pinch is presented with three modelled phases of radial inward shock phase, reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics and radiation and radiation-coupled dynamics in the pinch phase. A code is written incorporating correction for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated into the model, the coupling coefficient f between the primary loop current and the induced plasma current and the mass swept up factor fm. These values are taken from experiments carried out in the Chulalongkorn theta pinch.

  12. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    SciTech Connect

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-07-01

    A method of accounting for fluid-to-fluid shear between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
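
    As an illustration of the equivalent-hydraulic-diameter idea (a sketch under assumed laminar pipe-flow conditions, not the COBRA-TF implementation), one can pick, for each radial cell, the diameter for which a standard laminar wall-drag relation reproduces the shear implied by a prescribed parabolic velocity profile:

```python
import numpy as np

# Sketch only (assumed laminar pipe flow, not the COBRA-TF implementation).
mu, R, u_avg = 1.0e-3, 0.01, 0.5   # viscosity [Pa s], radius [m], mean vel [m/s]

r = np.linspace(0.2 * R, 0.9 * R, 5)        # cell-centre radii
u = 2.0 * u_avg * (1.0 - (r / R) ** 2)      # parabolic (laminar) profile
tau = mu * 4.0 * u_avg * r / R ** 2         # shear mu*|du/dr| from the profile

# The standard wall-drag form tau = (f/8)*rho*u^2 with f = 64*mu/(rho*u*D_h)
# reduces to tau = 8*mu*u/D_h, so the cell-wise equivalent diameter is:
D_h_eq = 8.0 * mu * u / tau
for ri, di in zip(r, D_h_eq):
    print(f"r = {ri:.4f} m  ->  D_h,eq = {di:.4f} m")
```

    The equivalent diameter shrinks toward the wall, where the profile shear is largest, independently of the cell's actual wetted perimeter.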

  13. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
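
    The four default discrimination criteria can be sketched in their common least-squares forms; the KIC term involving the Fisher information matrix is represented here by a hypothetical log-determinant argument, and the sample numbers are toy values, not MMA output:

```python
import numpy as np

# Common least-squares forms of the criteria named above. The KIC line needs
# the Fisher information matrix; a hypothetical log-determinant stands in here.
def criteria(sse, n, k, log_det_fisher=0.0):
    """sse: sum of squared weighted residuals; n: observations; k: parameters."""
    aic = n * np.log(sse / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)        # small-sample correction
    bic = n * np.log(sse / n) + k * np.log(n)
    kic = n * np.log(sse / n) + k * np.log(n / (2 * np.pi)) + log_det_fisher
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "KIC": kic}

# Two calibrated alternative models of the same system (toy numbers):
m1 = criteria(sse=12.0, n=30, k=4)
m2 = criteria(sse=10.5, n=30, k=7)
# Lower is better; rankings and posterior model probabilities follow from the
# differences in a chosen criterion across the alternative models:
delta = m2["AICc"] - m1["AICc"]
print({k: round(v, 2) for k, v in m1.items()}, round(delta, 2))
```

    Note how the extra parameters of the second model are penalized differently by each criterion, which is exactly why MMA exposes all four.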

  14. Auditory information coding by modeled cochlear nucleus neurons.

    PubMed

    Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner

    2011-06-01

    In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to a value of approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information at that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.

  15. Development and Validation of Reaction Wheel Disturbance Models: Empirical Model

    NASA Astrophysics Data System (ADS)

    Masterson, R. A.; Miller, D. W.; Grogan, R. L.

    2002-01-01

    Accurate disturbance models are necessary to predict the effects of vibrations on the performance of precision space-based telescopes, such as the Space Interferometry Mission (SIM). There are many possible disturbance sources on such spacecraft, but mechanical jitter from the reaction wheel assembly (RWA) is anticipated to be the largest. A method has been developed and implemented in the form of a MATLAB toolbox to extract parameters for an empirical disturbance model from RWA micro-vibration data. The disturbance model is based on one that was used to predict the vibration behaviour of the Hubble Space Telescope (HST) wheels and assumes that RWA disturbances consist of discrete harmonics of the wheel speed with amplitudes proportional to the wheel speed squared. The MATLAB toolbox allows the extension of this empirical disturbance model for application to any reaction wheel given steady state vibration data. The toolbox functions are useful for analyzing RWA vibration data, and the model provides a good estimate of the disturbances over most wheel speeds. However, it is shown that the disturbances are under-predicted by a model of this form over some wheel speed ranges. The poor correlation is due to the fact that the empirical model does not account for disturbance amplifications caused by interactions between the harmonics and the structural modes of the wheel. Experimental data from an ITHACO Space Systems E-type reaction wheel are used to illustrate the model development and validation process.
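
    The empirical disturbance form described above (discrete harmonics of wheel speed with amplitudes proportional to wheel speed squared) can be sketched as follows; the harmonic numbers, coefficients, and phases are hypothetical, not values extracted from ITHACO E-wheel data:

```python
import numpy as np

# Hypothetical harmonic numbers, coefficients, and phases (not measured data).
h = np.array([1.0, 2.0, 4.4])      # harmonic numbers (1.0 = static imbalance)
C = np.array([1e-4, 3e-5, 1e-5])   # amplitude coefficients [N/(rev/s)^2]
phi = np.array([0.0, 1.2, 2.5])    # phases [rad]

def rwa_force(t, omega_hz):
    """Disturbance force [N] at wheel speed omega_hz [rev/s]: a sum of
    wheel-speed harmonics with amplitudes proportional to speed squared."""
    return np.sum(C[:, None] * omega_hz ** 2
                  * np.sin(2 * np.pi * h[:, None] * omega_hz * t + phi[:, None]),
                  axis=0)

t = np.linspace(0.0, 1.0, 20001)
f_slow, f_fast = rwa_force(t, 20.0), rwa_force(t, 40.0)
# Doubling the wheel speed roughly quadruples the disturbance envelope:
print(round(np.abs(f_fast).max() / np.abs(f_slow).max(), 2))
```

    A model of this form necessarily misses the amplifications near wheel structural modes, which is the under-prediction the abstract points out.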

  16. Physics models in the toroidal transport code PROCTR

    SciTech Connect

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  17. Estimating neutron dose equivalent rates from heavy ion reactions around 10 MeV amu(-1) using the PHITS code.

    PubMed

    Iwamoto, Yosuke; Ronningen, R M; Niita, Koji

    2010-04-01

    It has sometimes been necessary for personnel to work in areas where low-energy heavy ions interact with targets or with beam transport equipment and thereby produce significant levels of radiation. Methods to predict doses and to assist shielding design are desirable. The Particle and Heavy Ion Transport code System (PHITS) has been typically used to predict radiation levels around high-energy (above 100 MeV amu(-1)) heavy ion accelerator facilities. However, predictions by PHITS of radiation levels around low-energy (around 10 MeV amu(-1)) heavy ion facilities to our knowledge have not yet been investigated. The influence of the "switching time" in PHITS calculations of low-energy heavy ion reactions, defined as the time when the JAERI Quantum Molecular Dynamics model (JQMD) calculation stops and the Generalized Evaporation Model (GEM) calculation begins, was studied using neutron energy spectra from 6.25 MeV amu(-1) and 10 MeV amu(-1) (12)C ions and 10 MeV amu(-1) (16)O ions incident on a copper target. Using a value of 100 fm c(-1) for the switching time, the calculated neutron energy spectra agree well with the experimental data. PHITS was then used with the switching time of 100 fm c(-1) to simulate an experimental study by Ohnesorge et al. by calculating neutron dose equivalent rates produced by 3 MeV amu(-1) to 16 MeV amu(-1) (12)C, (14)N, (16)O, and (20)Ne beams incident on iron, nickel and copper targets. The calculated neutron dose equivalent rates agree very well with the data and follow a general pattern which appears to be insensitive to the heavy ion species but is sensitive to the target material.

  18. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. 

  19. New Direction in Hydrogeochemical Transport Modeling: Incorporating Multiple Kinetic and Equilibrium Reaction Pathways

    SciTech Connect

    Steefel, C.I.

    2000-02-02

    At least two distinct kinds of hydrogeochemical models have evolved historically for use in analyzing contaminant transport, but each has important limitations. One kind, focusing on organic contaminants, treats biodegradation reactions as parts of relatively simple kinetic reaction networks with no or limited coupling to aqueous and surface complexation and mineral dissolution/precipitation reactions. A second kind, evolving out of the speciation and reaction path codes, is capable of handling a comprehensive suite of multicomponent complexation (aqueous and surface) and mineral precipitation and dissolution reactions, but has not been able to treat reaction networks characterized by partial redox disequilibrium and multiple kinetic pathways. More recently, various investigators have begun to consider biodegradation reactions in the context of comprehensive equilibrium and kinetic reaction networks (e.g. Hunter et al. 1998, Mayer 1999). Here we explore two examples of multiple equilibrium and kinetic reaction pathways using the reactive transport code GIMRT98 (Steefel, in prep.): (1) a computational example involving the generation of acid mine drainage due to oxidation of pyrite, and (2) a computational/field example where the rates of chlorinated VOC degradation are linked to the rates of major redox processes occurring in organic-rich wetland sediments overlying a contaminated aerobic aquifer.
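
    The flavor of mixing a kinetic pathway with an equilibrium step can be sketched for the acid-mine-drainage example. The rate constant and surface area below are assumed round numbers, not GIMRT98 inputs, and the speciation step is reduced to treating all generated acidity as free H+:

```python
import numpy as np

# Assumed round numbers, not GIMRT98 inputs.
# Kinetic pathway:  FeS2 + 3.5 O2 + H2O -> Fe2+ + 2 SO4^2- + 2 H+
k = 1.0e-12        # pyrite oxidation rate [mol/(m^2 s)] (assumed)
A = 100.0          # reactive surface area per litre of water [m^2/L] (assumed)
dt, nsteps = 3600.0, 24 * 365      # one year in hourly steps

h_plus = 1.0e-7    # initial H+ [mol/L], i.e. pH 7
for _ in range(nsteps):
    d_pyrite = k * A * dt          # kinetic step: mol FeS2 oxidised this hour
    h_plus += 2.0 * d_pyrite       # "equilibrium" step: all acidity as free H+
print(round(-np.log10(h_plus), 2)) # pH drifts from 7 toward ~2 over the year
```

    A full reactive transport code replaces the trivial equilibrium step with multicomponent aqueous, surface, and mineral equilibria solved at every time step.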

  20. Code-to-code benchmark tests for 3D simulation models dedicated to the extraction region in negative ion sources

    NASA Astrophysics Data System (ADS)

    Nishioka, S.; Mochalskyy, S.; Taccogna, F.; Hatayama, A.; Fantz, U.; Minelli, P.

    2017-08-01

    The development of kinetic particle models for the extraction region in negative hydrogen ion sources is indispensable for clarifying the physics of H- beam extraction. Recently, various 3D kinetic particle codes have been developed to study the extraction mechanism, but direct comparisons between them had not previously been made. We have therefore carried out a code-to-code benchmark activity to validate our codes. In the present study, the progress of this benchmark activity is summarized. At present, reasonable agreement between the codes has been obtained using realistic plasma parameters, at least for the following items: (1) the potential profile under vacuum conditions, and (2) the temporal evolution of the extracted current densities and the electric-potential profiles for a plasma consisting of only electrons and positive ions.

  1. Acoustic Gravity Wave Chemistry Model for the RAYTRACE Code.

    DTIC Science & Technology

    2014-09-26

    [Garbled OCR of the report front matter. Recoverable details: report DNA-TR-84-127, Mission Research Corp., Santa Barbara, CA; subject terms: High Frequency Radio Propagation, Acoustic Gravity Waves.]

  2. Multiphoton dissociation and thermal unimolecular reactions induced by infrared lasers. [REAMPA code

    SciTech Connect

    Dai, H.L.

    1981-04-01

    Multiphoton dissociation (MPD) of ethyl chloride was studied using a tunable 3.3 μm laser to excite CH stretches. The absorbed energy increases almost linearly with fluence, while for 10 μm excitation there is substantial saturation. Much higher dissociation yields were observed for 3.3 μm excitation than for 10 μm excitation, reflecting bottlenecking in the discrete region of 10 μm excitation. The resonant nature of the excitation allows the rate equations description for transitions in the quasicontinuum and continuum to be extended to the discrete levels. Absorption cross sections are estimated from ordinary ir spectra. A set of cross sections which is constant or slowly decreasing with increasing vibrational excitation gives good fits to both absorption and dissociation yield data. The rate equations model was also used to quantitatively calculate the pressure dependence of the MPD yield of SF6 caused by vibrational self-quenching. Between 1000-3000 cm(-1) of energy is removed from SF6 excited above approximately 60 kcal/mol by collision with a cold SF6 molecule at the gas kinetic rate. Calculation showed the fluence dependence of dissociation varies strongly with the gas pressure. Infrared multiphoton excitation was applied to study thermal unimolecular reactions. With SiF4 as the absorbing gas for the CO2 laser pulse, transient high-temperature pulses were generated in a gas mixture. IR fluorescence from the medium reflected the decay of the temperature. The activation energy and the preexponential factor of the reactant dissociation were obtained from a phenomenological model calculation. Results are presented in detail. (WHK)
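
    The rate-equations ladder description mentioned above can be caricatured in a few lines; the level count, dissociation threshold, and pumping rate are hypothetical, not the fitted ethyl chloride parameters:

```python
import numpy as np

# Hypothetical ladder: level count, threshold, and pumping rate are NOT the
# fitted ethyl chloride parameters from the text.
nlev, thresh = 60, 25        # vibrational levels, dissociation threshold index
W = 2.0e8                    # up-pumping rate sigma*I/(h*nu) [1/s] (assumed)
dt, nsteps = 1.0e-11, 10000  # 100 ns square pulse, 10 ps steps

p = np.zeros(nlev)
p[0] = 1.0                   # all molecules start in the ground level
for _ in range(nsteps):
    up = W * dt * p[:-1]     # fraction promoted one level this step
    p[:-1] -= up
    p[1:] += up              # ladder climbing (top level is absorbing)
print(round(p[thresh:].sum(), 2))   # MPD yield: population above threshold
```

    Letting the per-level cross section (here a single constant W) decrease slowly with excitation is the refinement the abstract describes for fitting both absorption and yield data.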

  3. The Overlap Model: A Model of Letter Position Coding

    ERIC Educational Resources Information Center

    Gomez, Pablo; Ratcliff, Roger; Perea, Manuel

    2008-01-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that…

  5. Crucial steps to life: From chemical reactions to code using agents.

    PubMed

    Witzany, Guenther

    2016-02-01

    The concepts of the origin of the genetic code and the definitions of life changed dramatically after the RNA world hypothesis. Main narratives in molecular biology and genetics, such as the "central dogma," "one gene one protein" and "non-coding DNA is junk," have since been falsified. RNA moved from being a transition intermediate molecule to centre stage. Additionally, the abundance of empirical data concerning non-random genetic change operators, such as the variety of mobile genetic elements, persistent viruses and defectives, does not fit with the dominant narrative of error replication events (mutations) as the main driving forces creating genetic novelty and diversity. The reductionistic and mechanistic views on the physico-chemical properties of the genetic code are no longer convincing as appropriate descriptions of the abundance of non-random genetic content operators which are active in natural genetic engineering and natural genome editing. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Computer codes for the evaluation of thermodynamic properties, transport properties, and equilibrium constants of an 11-species air model

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.

    1990-01-01

    The computer codes developed provide data to 30000 K for the thermodynamic and transport properties of individual species and reaction rates for the prominent reactions occurring in an 11-species nonequilibrium air model. These properties and the reaction-rate data are computed through the use of curve-fit relations which are functions of temperature (and number density for the equilibrium constant). The curve fits were made using the most accurate data believed available. A detailed review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1232.
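
    Such curve-fit relations for reaction rates are commonly stored in modified-Arrhenius form, k_f(T) = C·T^η·exp(-θ/T). A sketch with placeholder coefficients (not the values tabulated in NASA RP 1232):

```python
import numpy as np

# Placeholder coefficients, not the NASA RP-1232 curve-fit values.
def rate_coefficient(T, C, eta, theta):
    """Modified-Arrhenius curve fit k_f(T) = C * T**eta * exp(-theta/T)."""
    return C * T ** eta * np.exp(-theta / T)

# A hypothetical O2-dissociation-like fit evaluated across the model's range:
for T in (300.0, 5000.0, 30000.0):
    print(f"T = {T:7.0f} K   k_f = "
          f"{rate_coefficient(T, 2.0e21, -1.5, 59500.0):.3e}")
```

    Evaluating a stored fit like this at run time is far cheaper than recomputing the underlying kinetic theory, which is the point of curve-fitting the data up to 30000 K.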

  7. Model Experiment of Thermal Runaway Reactions Using the Aluminum-Hydrochloric Acid Reaction

    ERIC Educational Resources Information Center

    Kitabayashi, Suguru; Nakano, Masayoshi; Nishikawa, Kazuyuki; Koga, Nobuyoshi

    2016-01-01

    A laboratory exercise for the education of students about thermal runaway reactions based on the reaction between aluminum and hydrochloric acid as a model reaction is proposed. In the introductory part of the exercise, the induction period and subsequent thermal runaway behavior are evaluated via a simple observation of hydrogen gas evolution and…

  9. The overlap model: a model of letter position coding.

    PubMed

    Gomez, Pablo; Ratcliff, Roger; Perea, Manuel

    2008-07-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that the position of each letter within a word is perfectly encoded. Thus, these models are unable to explain the presence of effects of letter transposition (trial-trail), letter migration (beard-bread), repeated letters (moose-mouse), or subset/superset effects (faulty-faculty). The authors extend R. Ratcliff's (1981) theory of order relations for encoding of letter positions and show that the model can successfully deal with these effects. The basic assumption is that letters in the visual stimulus have distributions over positions so that the representation of one letter will extend into adjacent letter positions. To test the model, the authors conducted a series of forced-choice perceptual identification experiments. The overlap model produced very good fits to the empirical data, and even a simplified 2-parameter model was capable of producing fits for 104 observed data points with a correlation coefficient of .91. Copyright (c) 2008 APA, all rights reserved.
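
    The core assumption, letters represented as distributions over positions, can be sketched with a toy similarity measure; the sigma value and the normalization are free choices here, not the fitted 2-parameter model from the paper:

```python
import math

# sigma (position uncertainty) is a free choice here, not a fitted value.
def overlap(i, j, sigma=1.0):
    """Overlap of a letter at stimulus position i with word position j,
    modelled as a Gaussian spread of position information."""
    return math.exp(-((i - j) ** 2) / (2 * sigma ** 2))

def similarity(stimulus, word):
    """Sum of letter-identity matches weighted by position overlap (toy)."""
    s = sum(overlap(i, j)
            for i, a in enumerate(stimulus)
            for j, b in enumerate(word) if a == b)
    return s / len(word)

# A transposition ('trail') stays closer to 'trial' than a substitution
# ('triul'), mirroring the transposed-letter effects the model explains:
print(round(similarity("trial", "trail"), 3),
      round(similarity("trial", "triul"), 3))
```

    With a slot-coding scheme (perfectly encoded positions) the transposition would score *lower* than the substitution, which is exactly the failure of the classical models noted above.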

  10. Integrated Codes Model for Erosion-Deposition in Long Discharges

    SciTech Connect

    Hogan, John T

    2006-08-01

    There is increasing interest in understanding the mechanisms causing the deuterium retention rates which are observed in the longest high-power tokamak discharges, and their possible relation to near-term choices which must be made for plasma-facing components in next-generation devices [1]. Both co-deposition and bulk diffusion models are regarded as potentially relevant. This contribution describes a global model for the co-deposition axis of this dilemma, which includes as many of the relevant processes as is computationally feasible, following the 'maximal ordering / minimal simplification' strategy described in Kruskal's "Asymptotology" [2]. The global model is interpretative, meaning that some key information describing the bulk plasma is provided by experimental measurement, and the models for the impurity processes relevant to retention, given this measured background, are simulated and compared with other data. In particular, the model describes the carbon balance in near steady-state systems, in order to understand the relation between retention in present devices and the level which might be expected in fusion reactors, or precursor experiments such as ITER. The key modules of the global system describe impurity generation, transport in and through the SOL, and core impurity transport. The codes IMPFLU, BBQ, and ITC/MIST, in order of the appearance of the processes they describe, are used to calculate the balance: IMPFLU is an adaptation of the TOKAFLU module of CAST3M [3], developed by CEA, a 3-D, time-dependent finite-element code which determines the thermal and mechanical properties of plasma-facing components. BBQ [4, 5] is a Monte Carlo guiding-center code which describes trace impurity transport in a 3-D defined-plasma background, to calculate observables (line emission) for comparison with spectroscopy. ITC [6] and MIST [7] are radial core multi-species impurity transport codes. The modules are linked

  11. A hydrodynamics-reaction kinetics coupled model for evaluating bioreactors derived from CFD simulation.

    PubMed

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi

    2010-12-01

    Investigating how a bioreactor functions is a necessary precursor to successful reactor design and operation. Traditional methods of investigating the flow field cannot meet this challenge accurately and economically. A hydrodynamics model can solve this problem, but on its own it is often insufficient for understanding a bioreactor in depth. In this paper, a coupled hydrodynamics-reaction kinetics model was formulated from computational fluid dynamics (CFD) code to simulate a gas-liquid-solid three-phase biotreatment system for the first time. The hydrodynamics model predicts the flow field, and the reaction kinetics model then portrays the reaction conversion process. The coupled model is verified and used to simulate the behavior of an expanded granular sludge bed (EGSB) reactor for biohydrogen production. The flow patterns were visualized and analyzed. The coupled model also demonstrates a qualitative relationship between hydrodynamics and biohydrogen production. The advantages and limitations of applying this coupled model are discussed.
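
    As an illustration of how a reduced hydrodynamic description can be coupled to reaction kinetics, the sketch below links a tanks-in-series flow model (a common low-order stand-in for a CFD-derived flow field) to Monod-type substrate uptake with a hydrogen-yield term. The function name and every parameter value are invented for illustration; this is not the authors' CFD model.

```python
def simulate_egsb(n_tanks=4, q=1.0, v=10.0, s_in=5.0, mu_max=0.3,
                  ks=0.5, y_h2=0.8, x=2.0, dt=0.01, t_end=200.0):
    """Tanks-in-series flow (flow rate q, total volume v) coupled to
    Monod substrate uptake by a fixed biomass x.  Returns the per-tank
    substrate profile and a cumulative hydrogen-production proxy."""
    s = [s_in] * n_tanks
    v_tank = v / n_tanks
    h2 = 0.0
    for _ in range(int(t_end / dt)):
        upstream = s_in
        for i in range(n_tanks):
            rate = mu_max * s[i] / (ks + s[i]) * x      # Monod uptake
            ds = q / v_tank * (upstream - s[i]) - rate  # flow-through + reaction
            upstream = s[i]                             # old value feeds next tank
            s[i] += ds * dt
            h2 += y_h2 * rate * v_tank * dt
    return s, h2

profile, h2_total = simulate_egsb()
```

    The substrate profile decreases from inlet tank to outlet tank, which is the qualitative hydrodynamics-to-conversion link the abstract describes.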

  12. Data Evaluation Acquired Talys 1.0 Code to Produce 111In from Various Accelerator-Based Reactions

    NASA Astrophysics Data System (ADS)

    Alipoor, Zahra; Gholamzadeh, Zohreh; Sadeghi, Mahdi; Seyyedi, Solaleh; Aref, Morteza

    The physical decay parameters of the radionuclide 111In show potential for radiodiagnostic and radiotherapeutic purposes. Medical investigators have shown that 111In is an important radionuclide for locating and imaging certain tumors and for visualization of the lymphatic system, and thousands of labeling reactions have been suggested. The TALYS 1.0 code was used here to calculate excitation functions of the 112/114-118Sn+p, 110Cd+3He, 109Ag+3He, 111-114Cd+p, 110/111Cd+d and 109Ag+α reactions for producing 111In using low- and medium-energy accelerators. Calculations were performed up to 200 MeV. Appropriate target thicknesses were assumed based on energy-loss calculations with the SRIM code, and theoretical integral yields for all of these reactions were calculated. The TALYS 1.0 code predicts that the production of a few curies of 111In is feasible using a target of isotopically highly enriched 112Cd and a proton energy between 12 and 25 MeV, with a production rate of 248.97 MBq·μA-1·h-1. Minimal impurities would be produced during proton irradiation of an enriched 111Cd target, yielding a production rate for 111In of 67.52 MBq·μA-1·h-1.
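
    The integral (thick-target) yields mentioned above come from integrating the excitation function over the energy the beam loses in the target. A minimal sketch of that quadrature follows, with a made-up Gaussian excitation function and a made-up stopping-power law standing in for TALYS and SRIM output; only the relative yields are meaningful.

```python
import math

def sigma_mb(e_mev):
    """Hypothetical excitation function (mb): Gaussian peak near 20 MeV."""
    return 800.0 * math.exp(-((e_mev - 20.0) / 5.0) ** 2)

def stopping(e_mev):
    """Hypothetical stopping power (MeV per unit depth), falling with E."""
    return 0.05 / math.sqrt(e_mev)

def relative_thick_target_yield(e_in, e_out, n_steps=2000):
    """Trapezoidal integral of sigma(E)/(dE/dx) over the energy window
    the beam sweeps in the target; proportional to the integral yield."""
    h = (e_in - e_out) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        e = e_out + i * h
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * sigma_mb(e) / stopping(e)
    return total * h

y_12_25 = relative_thick_target_yield(25.0, 12.0)   # 12-25 MeV window
y_12_18 = relative_thick_target_yield(18.0, 12.0)   # narrower window
```

    A wider energy window covering the peak of the excitation function gives a larger yield, which is why the 12-25 MeV proton window is quoted above.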

  13. Building process knowledge using inline spectroscopy, reaction calorimetry and reaction modeling--the integrated approach.

    PubMed

    Tummala, Srinivas; Shabaker, John W; Leung, Simon S W

    2005-11-01

    For over two decades, reaction engineering tools and techniques such as reaction calorimetry, inline spectroscopy and, to a more limited extent, reaction modeling, have been employed within the pharmaceutical industry to ensure safe and robust scale-up of organic reactions. Although each of these techniques has had a significant impact on the landscape of process development, an effective integrated approach is now being realized that combines calorimetry and spectroscopy with predictive modeling tools. This paper reviews some recent advances in the use of these reaction engineering tools in process development within the pharmaceutical industry and discusses their potential impact on the effective application of the integrated approach.

  14. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...

  15. Model for reaction kinetics in pyrolysis of wood

    SciTech Connect

    Ahuja, P.; Singh, P.C.; Upadhyay, S.N.; Kumar, S.

    1996-12-31

    A reaction model for the pyrolysis of small and large particles of wood is developed. The chemical reactions that take place when biomass is pyrolyzed are the devolatilization reactions (primary) and those due to vapour-solid interactions (secondary). In the case of small particles, when the volatiles are immediately removed by the purge gas, only primary reactions occur, and the reaction model is described by weight-loss and char-forming reactions. Heterogeneous secondary reactions occur in the case of large particles due to the interaction between the volatiles and the hot nascent primary char. A chain-reaction mechanism of secondary char formation is proposed. The model takes into consideration the volatiles retention time, the cracking and repolymerization reactions of the vapours with the decomposing solid, and autocatalysis. 7 refs., 3 figs., 2 tabs.
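
    The primary/secondary scheme can be sketched as a pair of first-order steps, with a retention parameter standing in for particle size: retain=0 for small particles whose volatiles are swept away at once, retain=1 for large particles whose volatiles linger and form secondary char. All rate constants are invented; this is not the paper's calibrated mechanism.

```python
def pyrolyze(retain, k1=0.8, k2=0.2, k3=0.5, dt=1.0e-3, t_end=20.0):
    """First-order primary devolatilization (k1) and charring (k2),
    plus secondary charring (k3) of whatever volatiles are retained
    inside the particle.  Forward-Euler integration."""
    wood, vol, char, escaped = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        r1, r2, r3 = k1 * wood, k2 * wood, k3 * vol
        wood -= (r1 + r2) * dt
        vol += (r1 * retain - r3) * dt       # retained volatiles pool
        char += (r2 + r3) * dt               # primary + secondary char
        escaped += r1 * (1.0 - retain) * dt  # swept out by purge gas
    return {"wood": wood, "volatiles": vol + escaped, "char": char}

small = pyrolyze(retain=0.0)   # purge gas removes volatiles immediately
large = pyrolyze(retain=1.0)   # volatiles linger and repolymerize
```

    Mass is conserved exactly by construction, and the large-particle case ends with a higher char yield, reproducing the qualitative effect of secondary reactions.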

  16. A simple model of optimal population coding for sensory systems.

    PubMed

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  17. Kinetic models of gene expression including non-coding RNAs

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir P.

    2011-03-01

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
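
    The simplest mean-field model of ncRNA silencing via mutual degradation can be written as three coupled rate equations for mRNA (m), ncRNA (n) and protein (p), with a bilinear pairing term removing m and n together. A sketch with invented rate constants, integrated by forward Euler:

```python
def gene_expression(k_n, k_m=1.0, k_p=2.0, d_m=0.5, d_n=0.5, d_p=0.1,
                    k_pair=5.0, dt=1.0e-3, t_end=100.0):
    """Mean-field kinetics: ncRNA (n) pairs with mRNA (m) at rate
    k_pair*m*n and both are degraded; protein (p) is translated from m.
    All rate constants are invented for illustration."""
    m = n = p = 0.0
    for _ in range(int(t_end / dt)):
        pair = k_pair * m * n            # mutual-degradation flux
        m += (k_m - d_m * m - pair) * dt
        n += (k_n - d_n * n - pair) * dt
        p += (k_p * m - d_p * p) * dt
    return m, n, p

m0, _, p0 = gene_expression(k_n=0.0)   # silencing ncRNA absent
m1, _, p1 = gene_expression(k_n=2.0)   # silencing ncRNA transcribed
```

    Turning on ncRNA transcription lowers both the steady-state mRNA and protein levels, the basic silencing effect the review discusses.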

  18. Mixing models for the two-way-coupling of CFD codes and zero-dimensional multi-zone codes to model HCCI combustion

    SciTech Connect

    Barths, H.; Felsch, C.; Peters, N.

    2009-01-15

    The objective of this work is the development of a consistent mixing model for the two-way-coupling of a CFD code and a multi-zone code based on multiple zero-dimensional reactors. The two-way-coupling allows for a computationally efficient modeling of HCCI combustion. The physical domain in the CFD code is subdivided into multiple zones based on three phase variables (fuel mixture fraction, dilution, and total enthalpy). Those phase variables are sufficient for the description of the thermodynamic state of each zone, assuming that each zone is at the same pressure. Each zone in the CFD code is represented by a corresponding zone in the zero-dimensional code. The zero-dimensional code solves the chemistry for each zone, and the heat release is fed back into the CFD code. The difficulty with this kind of methodology is keeping the thermodynamic state of each zone consistent between the CFD code and the zero-dimensional code after the initialization of the zones in the multi-zone code has taken place. The thermodynamic state of each zone (and thereby the phase variables) will change in time due to mixing and source terms (e.g., vaporization of fuel, wall heat transfer). The focus of this work lies on a consistent description of the mixing between the zones in phase space in the zero-dimensional code, based on the solution of the CFD code. Two mixing models with different degrees of accuracy, complexity, and numerical effort are described. The most elaborate mixing model (and an appropriate treatment of the source terms) keeps the thermodynamic state of the zones in the CFD code and the zero-dimensional code identical. The models are applied to a test case of HCCI combustion in an engine. (author)
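
    The CFD-to-multi-zone mapping amounts to binning cells by their phase variables and forming mass-weighted zone states. A simplified sketch using only the fuel mixture fraction as the binning variable (the cell data, zone count and function name are invented; the paper bins on three phase variables):

```python
def build_zones(cells, n_zones=4):
    """Bin CFD cells into zones by fuel mixture fraction Z and return
    mass-weighted zone states.  Each cell is (mass, Z, enthalpy)."""
    z_min = min(c[1] for c in cells)
    z_max = max(c[1] for c in cells)
    width = (z_max - z_min) / n_zones or 1.0   # guard: all-equal Z
    zones = [{"mass": 0.0, "Z": 0.0, "h": 0.0} for _ in range(n_zones)]
    for mass, z, h in cells:
        i = min(int((z - z_min) / width), n_zones - 1)
        zones[i]["mass"] += mass
        zones[i]["Z"] += mass * z      # accumulate mass-weighted sums
        zones[i]["h"] += mass * h
    for zn in zones:
        if zn["mass"] > 0.0:
            zn["Z"] /= zn["mass"]      # convert sums to mass-weighted means
            zn["h"] /= zn["mass"]
    return zones

cells = [(1.0, 0.02, 300.0), (2.0, 0.05, 320.0),
         (1.5, 0.11, 350.0), (0.5, 0.19, 400.0)]
zones = build_zones(cells)
```

    The mapping conserves both total mass and the mass-weighted phase variable, which is the consistency requirement the two-way coupling has to maintain.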

  20. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  1. Development and application of a numerical model of kinetic and equilibrium microbiological and geochemical reactions (BIOKEMOD)

    NASA Astrophysics Data System (ADS)

    Salvage, Karen M.; Yeh, Gour-Tsyh

    1998-08-01

    This paper presents the conceptual and mathematical development of the numerical model titled BIOKEMOD, and verification simulations performed using the model. BIOKEMOD is a general computer model for simulation of geochemical and microbiological reactions in batch aqueous solutions. BIOKEMOD may be coupled with hydrologic transport codes for simulation of chemically and biologically reactive transport. The chemical systems simulated may include any mixture of kinetic and equilibrium reactions. The pH, pe, and ionic strength may be specified or simulated. Chemical processes included are aqueous complexation, adsorption, ion-exchange and precipitation/dissolution. Microbiological reactions address growth of biomass and degradation of chemicals by microbial metabolism of substrates, nutrients, and electron acceptors. Inhibition or facilitation of growth due to the presence of specific chemicals and a lag period for microbial acclimation to new substrates may be simulated if significant in the system of interest. Chemical reactions controlled by equilibrium are solved using the law of mass action relating the thermodynamic equilibrium constant to the activities of the products and reactants. Kinetic chemical reactions are solved using reaction rate equations based on collision theory. Microbiologically mediated reactions for substrate removal and biomass growth are assumed to follow Monod kinetics modified for the potentially limiting effects of substrate, nutrient, and electron acceptor availability. BIOKEMOD solves the ordinary differential and algebraic equations of mixed geochemical and biogeochemical reactions using the Newton-Raphson method with full matrix pivoting. Simulations may be either steady state or transient. Input to the program includes the stoichiometry and parameters describing the relevant chemical and microbiological reactions, initial conditions, and sources/sinks for each chemical species. Output includes the chemical and biomass concentrations.
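
    Modified Monod kinetics of the kind described above multiply the maximum growth rate by limitation factors for substrate and electron acceptor, and optionally by an inhibition factor. A batch-growth sketch with invented parameters (this is not BIOKEMOD's formulation or parameter set):

```python
def batch_growth(s=10.0, a=4.0, x=0.1, inhibitor=0.0, mu_max=0.4,
                 ks=0.5, ka=0.2, ki=1.0, y_xs=0.5, y_xa=1.0,
                 dt=0.01, t_end=40.0):
    """Monod growth limited by substrate s and electron acceptor a,
    with optional non-competitive inhibition (factor ki/(ki+I)).
    Returns final biomass, substrate and acceptor concentrations."""
    for _ in range(int(t_end / dt)):
        mu = (mu_max * s / (ks + s) * a / (ka + a)
              * ki / (ki + inhibitor))
        growth = mu * x
        x += growth * dt
        s = max(s - growth / y_xs * dt, 0.0)   # substrate consumed
        a = max(a - growth / y_xa * dt, 0.0)   # acceptor consumed
    return x, s, a

x_end, s_end, a_end = batch_growth()
x_inh, _, _ = batch_growth(inhibitor=5.0)   # growth-inhibiting chemical present
```

    With these numbers the acceptor is the limiting resource, and adding the inhibitor sharply reduces the biomass produced in the same time window.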

  2. A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace

    NASA Astrophysics Data System (ADS)

    Kruskopf, Ari; Visuri, Ville-Valtteri

    2017-08-01

    In modern steelmaking, hot metal is decarburized and converted into steel primarily in converter processes such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for a top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model, and for the liquid slag the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.
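
    The equilibrium step of such a model minimizes the total Gibbs energy over the reaction extent. A toy sketch for a single ideal-mixture reaction A <-> B, where the numerical minimizer reproduces the law of mass action (the ΔG° and temperature values are arbitrary; real converter models like the one above handle many species and non-ideal activities):

```python
import math

R_GAS = 8.314  # J / (mol K)

def gibbs(xi, dg0, temp):
    """Total Gibbs energy for A <-> B in an ideal 1-mol mixture:
    reaction term xi*dG0 plus the ideal entropy-of-mixing term."""
    g = xi * dg0
    for n in (1.0 - xi, xi):
        if n > 0.0:
            g += R_GAS * temp * n * math.log(n)
    return g

def equilibrium_extent(dg0, temp, iters=200):
    """Ternary search for the extent xi minimizing G (G is convex)."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if gibbs(m1, dg0, temp) < gibbs(m2, dg0, temp):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

xi_eq = equilibrium_extent(dg0=-5000.0, temp=1873.0)
k_eq = math.exp(5000.0 / (R_GAS * 1873.0))  # law-of-mass-action check value
```

    At the minimum, the mole-fraction ratio x_B/x_A equals exp(-ΔG°/RT), so the minimization recovers the familiar equilibrium constant.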

  3. A general paradigm to model reaction-based biogeochemical processes in batch systems

    NASA Astrophysics Data System (ADS)

    Fang, Yilin; Yeh, Gour-Tsyh; Burgos, William D.

    2003-04-01

    This paper presents the development and illustration of a numerical model of reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions. The objective is to provide a general paradigm for modeling reactive chemicals in batch systems, with expectations that it is applicable to reactive chemical transport problems. The unique aspects of the paradigm are to simultaneously (1) facilitate the segregation (isolation) of linearly independent kinetic reactions and thus enable the formulation and parameterization of individual rates one reaction by one reaction when linearly dependent kinetic reactions are absent, (2) enable the inclusion of virtually any type of equilibrium expressions and kinetic rates users want to specify, (3) reduce problem stiffness by eliminating all fast reactions from the set of ordinary differential equations governing the evolution of kinetic variables, (4) perform systematic operations to remove redundant fast reactions and irrelevant kinetic reactions, (5) systematically define chemical components and explicitly enforce mass conservation, (6) accomplish automation in decoupling fast reactions from slow reactions, and (7) increase the robustness of numerical integration of the governing equations with species switching schemes. None of the existing models to our knowledge has included these scopes simultaneously. This model (BIOGEOCHEM) is a general computer code to simulate biogeochemical processes in batch systems from a reaction-based mechanistic standpoint, and is designed to be easily coupled with transport models. To make the model applicable to a wide range of problems, programmed reaction types include aqueous complexation, adsorption-desorption, ion-exchange, oxidation-reduction, precipitation-dissolution, acid-base reactions, and microbial mediated reactions. In addition, user-specified reaction types can be programmed into the model. Any reaction can be treated as fast/equilibrium or slow
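
    Point (5), defining chemical components that every reaction conserves, amounts to finding the null space of the stoichiometric matrix, which Gauss-Jordan reduction provides. A small self-contained sketch for the toy network A + B -> C, C -> D (the network and function names are illustrative, not BIOGEOCHEM's implementation):

```python
def rref(mat):
    """Gauss-Jordan reduction; returns (reduced matrix, pivot columns)."""
    m = [row[:] for row in mat]
    pivots, r = [], 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-12), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        scale = m[r][c]
        m[r] = [v / scale for v in m[r]]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-12:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == len(m):
            break
    return m, pivots

def conserved_components(stoich):
    """Null-space basis of the stoichiometric matrix: vectors v with
    stoich @ v = 0, i.e. species totals conserved by every reaction."""
    red, pivots = rref(stoich)
    n = len(stoich[0])
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [0.0] * n
        v[free] = 1.0
        for r, p in enumerate(pivots):
            v[p] = -red[r][free]
        basis.append(v)
    return basis

# Toy network: reactions A + B -> C and C -> D (rows = stoichiometries,
# columns = species A, B, C, D)
S = [[-1.0, -1.0, 1.0, 0.0],
     [0.0, 0.0, -1.0, 1.0]]
components = conserved_components(S)
```

    Each basis vector defines a weighted species total that neither reaction can change, which is exactly the mass-conservation bookkeeping the paradigm enforces.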

  4. Sodium spray and jet fire model development within the CONTAIN-LMR code

    SciTech Connect

    Scholtyssek, W.; Murata, K.K.

    1993-12-31

    An assessment was made of the sodium spray fire model implemented in the CONTAIN code. The original droplet burn model, which was based on the NACOM code, was improved in several aspects, especially concerning evaluation of the droplet burning rate, reaction chemistry and heat balance, spray geometry and droplet motion, and consistency with CONTAIN standards of gas property evaluation. An additional droplet burning model based on a proposal by Krolikowski was made available to include the effect of the chemical equilibrium conditions at the flame temperature. The models were validated against single-droplet burn experiments as well as spray and jet fire experiments. Reasonable agreement was found between the two burn models and experimental data. When the gas temperature in the burning compartment reaches high values, the Krolikowski model seems to be preferable. Critical parameters for spray fire evaluation were found to be the spray characterization, especially the droplet size, which largely determines the burning efficiency, and heat transfer conditions at the interface between the atmosphere and structures, which controls the thermal hydraulic behavior in the burn compartment.
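
    Single-droplet burn models of this kind often reduce, in the simplest limit, to the classical d-squared law, in which the squared droplet diameter shrinks linearly at a burning-rate constant K until burnout. A sketch with invented values (this is not the NACOM or Krolikowski formulation, which include the reaction chemistry and heat balance described above):

```python
import math

def droplet_burnout(d0=1.0e-3, k_burn=1.0e-6, dt=1.0e-3):
    """d-squared law: d^2 decreases at constant rate k_burn [m^2/s].
    Returns (burnout time, diameter history)."""
    d2 = d0 * d0
    t, diam = 0.0, [d0]
    while d2 > 0.0:
        d2 -= k_burn * dt
        t += dt
        diam.append(math.sqrt(max(d2, 0.0)))
    return t, diam

t_burn, diam = droplet_burnout()
```

    The analytic burnout time is d0^2/K, which is why the abstract notes that droplet size largely determines the burning efficiency of a spray.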

  5. CODE's new solar radiation pressure model for GNSS orbit determination

    NASA Astrophysics Data System (ADS)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines are known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew creepingly with the increasing influence of the GLONASS system in recent years in the CODE analysis, which is based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by the GLONASS, which was reaching full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the direction Sun-satellite occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both, the vector Sun-satellite and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w. r. t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which

  6. Verification of thermal analysis codes for modeling solid rocket nozzles

    NASA Technical Reports Server (NTRS)

    Keyhani, M.

    1993-01-01

    One of the objectives of the Solid Propulsion Integrity Program (SPIP) at Marshall Space Flight Center (MSFC) is the development of thermal analysis codes capable of accurately predicting the temperature field, pore-pressure field and surface recession experienced by decomposing polymers which are used as thermal barriers in solid rocket nozzles. The objective of this study is to provide means for verification of thermal analysis codes developed for modeling of flow and heat transfer in solid rocket nozzles. To meet this objective, a test facility was designed and constructed for measurement of the transient temperature field in a sample composite subjected to a constant heat-flux boundary condition. The heating was provided via a steel thin foil with a thickness of 0.025 mm. The designed electrical circuit can provide a heating rate of 1800 W. The heater was sandwiched between two identical samples, ensuring equal power distribution between them. The samples were fitted with Type K thermocouples, and the exact locations of the thermocouples were determined via X-rays. The experiments were modeled via a one-dimensional code (UT1D) as a conduction and phase-change heat transfer process. Since the pyrolysis gas flow was in the direction normal to the heat flow, the numerical model could not account for the convective cooling effect of the pyrolysis gas flow. Therefore, the predicted values in the decomposition zone are considered to be an upper estimate of the temperature. From the analysis of the experimental and numerical results the following are concluded: (1) The virgin and char specific heat data for FM 5055 as reported by SoRI cannot be used to obtain any reasonable agreement between the measured temperatures and the predictions; however, use of the virgin and char specific heat data given in the Acurex report produced good agreement for most of the measured temperatures. (2) Constant heat flux heating process can produce a much higher

  7. A reaction-based river/stream water quality model: Model development and numerical schemes

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C.; Jardine, Philip M.

    2008-01-01

    This paper presents the conceptual and mathematical development of a numerical model of sediment and reactive chemical transport in rivers and streams. The distribution of mobile suspended sediments and immobile bed sediments is controlled by hydrologic transport as well as erosion and deposition processes. The fate and transport of water quality constituents involving a variety of chemical and physical processes is mathematically described by a system of reaction equations for immobile constituents and advective-dispersive-reactive transport equations for mobile constituents. To circumvent stiffness associated with equilibrium reactions, matrix decomposition is performed via Gauss-Jordan column reduction. After matrix decomposition, the system of water quality constituent reactive transport equations is transformed into a set of thermodynamic equations representing equilibrium reactions and a set of transport equations involving no equilibrium reactions. The decoupling of equilibrium and kinetic reactions enables robust numerical integration of the partial differential equations (PDEs) for non-equilibrium variables. Solving non-equilibrium-variable transport equations instead of individual water quality constituent transport equations also reduces the number of PDEs. A variety of numerical methods are investigated for solving the mixed differential and algebraic equations. Two verification examples are compared with analytical solutions to demonstrate the correctness of the code and to illustrate the importance of employing application-dependent numerical methods to solve specific problems.
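
    The mobile-constituent equations referred to above are advective-dispersive-reactive PDEs. A minimal explicit finite-difference sketch (upwind advection, central dispersion, first-order decay; the grid and coefficients are invented) illustrates the kind of equation being solved, not the paper's numerical schemes:

```python
def transport_step(c, u, diff, k, dx, dt, c_in):
    """One explicit step of upwind advection (velocity u), central
    dispersion (diff) and first-order decay (k) for a mobile
    constituent on a 1-D grid with fixed inlet concentration c_in."""
    new = c[:]
    for i in range(len(c)):
        up = c_in if i == 0 else c[i - 1]
        down = c[i + 1] if i < len(c) - 1 else c[i]  # zero-gradient outlet
        adv = -u * (c[i] - up) / dx
        disp = diff * (down - 2.0 * c[i] + up) / dx ** 2
        new[i] = c[i] + dt * (adv + disp - k * c[i])
    return new

c = [0.0] * 50
for _ in range(2000):   # march toward an approximate steady state
    c = transport_step(c, u=0.5, diff=0.01, k=0.05, dx=0.1, dt=0.02, c_in=1.0)
```

    With decay active, the steady-state profile falls off downstream of the inlet; stiffness enters once fast equilibrium reactions are added, which is what the Gauss-Jordan decoupling above removes before integration.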

  8. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    ERIC Educational Resources Information Center

    American Inst. of Architects, Washington, DC.

    A MODEL BUILDING CODE FOR FALLOUT SHELTERS WAS DRAWN UP FOR INCLUSION IN FOUR NATIONAL MODEL BUILDING CODES. DISCUSSION IS GIVEN OF FALLOUT SHELTERS WITH RESPECT TO--(1) NUCLEAR RADIATION, (2) NATIONAL POLICIES, AND (3) COMMUNITY PLANNING. FALLOUT SHELTER REQUIREMENTS FOR SHIELDING, SPACE, VENTILATION, CONSTRUCTION, AND SERVICES SUCH AS ELECTRICAL…

  9. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model consisting of hot-spot ignition, low-pressure slow-burning and high-pressure fast-reaction terms for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" over the explosive components; new expressions for the low-pressure slow-burning and high-pressure fast-reaction terms are also obtained by relating the reaction rate of the multi-component PBX explosive to those of its explosive components, starting from the corresponding terms of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical results for the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data.
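
    The three-term structure, an ignition term active at low reacted fraction plus pressure-switched slow-burn and fast-reaction terms, can be illustrated with an ignition-and-growth-style toy rate law for the reacted fraction λ. All constants and switch thresholds below are invented, not the calibrated DZK or PBX values:

```python
def ignition_growth(p, dt=1.0e-3, t_end=5.0):
    """Toy three-term rate for reacted fraction lam at constant
    pressure p: ignition below a reacted-fraction cutoff, slow burn
    below 2 pressure units, fast reaction at or above it."""
    lam, hist = 0.0, [0.0]
    for _ in range(int(t_end / dt)):
        r_ign = 0.5 * (1.0 - lam) if lam < 0.02 else 0.0
        r_slow = 0.8 * lam * (1.0 - lam) * p if p < 2.0 else 0.0
        r_fast = 4.0 * lam * (1.0 - lam) * p * p if p >= 2.0 else 0.0
        lam = min(lam + (r_ign + r_slow + r_fast) * dt, 1.0)
        hist.append(lam)
    return hist

slow = ignition_growth(p=1.0)   # low pressure: slow-burn term dominates
fast = ignition_growth(p=3.0)   # high pressure: fast-reaction term dominates
```

    The high-pressure case runs to essentially complete reaction within the window while the low-pressure case does not, the qualitative behavior the pressure-switched terms are built to capture.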

  10. Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

    Volume 2 of this three-volume technical memorandum contains detailed documentation of the GLAS fourth-order general circulation model: the CYBER 205 scalar and vector codes of the model, lists of variables, and cross references. A variable-name dictionary for the scalar code and code listings are also included.

  11. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are exchanged to convey information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied in the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for a diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
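
    For a single bit decided between two Gaussian likelihoods with equal priors (one mixture component per symbol, the simplest special case of the model above), the error-minimizing threshold sits where the two likelihoods cross, which a bisection search between the means finds directly. This is a sketch of that idea, not the thesis's iterative linear-approximation algorithm; all means and variances are invented:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def error_prob(t, mu0, sd0, mu1, sd1):
    """Average detection error for threshold t with equal priors."""
    return 0.5 * (q_func((t - mu0) / sd0) + 1.0 - q_func((t - mu1) / sd1))

def optimal_threshold(mu0, sd0, mu1, sd1, iters=60):
    """Bisect between the means for the likelihood crossing, which
    minimizes error_prob under equal priors."""
    lo, hi = mu0, mu1
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if pdf(mid, mu0, sd0) > pdf(mid, mu1, sd1):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_star = optimal_threshold(0.0, 1.0, 4.0, 1.5)
```

    For equal variances the threshold reduces to the midpoint of the means; with unequal variances (as ISI induces) it shifts toward the tighter distribution.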

  12. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  13. The Local Planner's Role Under the Proposed Model Land Development Code

    ERIC Educational Resources Information Center

    Bosselman, Fred P.

    1975-01-01

    The American Law Institute's Proposed Model Land Development Code would revise basic enabling legislation for local land development planning. The code would contain guidelines for local plans that would include both long-range and short-range elements. (Author)

  14. A Generalized Kinetic Model for Heterogeneous Gas-Solid Reactions

    SciTech Connect

    Xu, Zhijie; Sun, Xin; Khaleel, Mohammad A.

    2012-08-15

    We present a generalized kinetic model for gas-solid heterogeneous reactions taking place at the interface between two phases. The model studies the reaction kinetics by taking into account the reactions at the interface, as well as the transport process within the product layer. The standard unreacted shrinking core model relies on the assumption of quasi-static diffusion that results in a steady-state concentration profile of gas reactant in the product layer. By relaxing this assumption and resolving the entire problem, general solutions can be obtained for reaction kinetics, including the reaction front velocity and the conversion (volume fraction of reacted solid). The unreacted shrinking core model is shown to be accurate and in agreement with the generalized model for slow reaction (or fast diffusion), low concentration of gas reactant, and small solid size. Otherwise, a generalized kinetic model should be used.
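The unreacted-shrinking-core limit that the generalized model is compared against has a classical closed form. A minimal sketch, assuming the diffusion-controlled relation t/τ = 1 − 3(1−X)^(2/3) + 2(1−X) for a spherical particle (a textbook special case, not the generalized model of this record), inverts it numerically to recover conversion X as a function of dimensionless time:

```python
def g(X):
    """Diffusion-controlled shrinking-core relation: g(X) = t / tau."""
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

def conversion(t_over_tau, tol=1e-10):
    """Invert g by bisection; g is monotone increasing on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < t_over_tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The generalized model of the record relaxes the quasi-static assumption behind g(X); curves like this one are the baseline it reduces to for slow reaction or fast diffusion.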

  15. Mathematical description of complex chemical kinetics and application to CFD modeling codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.

  16. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.

  17. Mathematical description of complex chemical kinetics and application to CFD modeling codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.

  18. Lyo code generator: A model-based code generator for the development of OSLC-compliant tool interfaces

    NASA Astrophysics Data System (ADS)

    El-khoury, Jad

    To promote the newly emerging OSLC (Open Services for Lifecycle Collaboration) tool interoperability standard, an open source code generator is developed that allows for the specification of OSLC-compliant tool interfaces, and from which almost complete Java code of the interface can be generated. The software takes a model-based development approach to tool interoperability, with the aim of providing modeling support for the complete development cycle of a tool interface. The software targets both OSLC developers, as well as the interoperability research community, with proven capabilities to be extended to support their corresponding needs.

  19. A computational study of pyrolysis reactions of lignin model compounds

    Treesearch

    Thomas Elder

    2010-01-01

    Enthalpies of reaction for the initial steps in the pyrolysis of lignin have been evaluated at the CBS-4m level of theory using fully substituted β-O-4 dilignols. Values for competing unimolecular decomposition reactions are consistent with results previously published for phenethyl phenyl ether models, but with lowered selectivity. Chain propagating reactions of free...

  20. A simple reaction-rate model for turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.

    1975-01-01

    A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.
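A mixing-limited mean reaction rate of the kind this record describes can be sketched in the spirit of eddy-break-up closures. The functional form and the constant below are illustrative assumptions, not Bangert's exact expression (which also involves the mean-square fluctuation of mass fraction, omitted here):

```python
def reaction_rate(Y_fuel, Y_ox, stoich_ratio, k_turb, eps, C=4.0):
    """Mean reaction rate limited by turbulent mixing and the deficient reactant.

    eps/k_turb is the inverse turbulence time scale; the rate is proportional
    to it, scaled by whichever reactant is locally deficient.
    """
    mixing_rate = eps / k_turb                    # 1 / turbulence time scale
    deficient = min(Y_fuel, Y_ox / stoich_ratio)  # limiting reactant mass fraction
    return C * mixing_rate * deficient
```

For example, with a fuel-rich mixture the rate is set by the oxidizer mass fraction divided by the stoichiometric ratio, reproducing the intuition that a diffusion flame burns only as fast as turbulence can mix the reactants.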

  1. Polymerization as a Model Chain Reaction

    ERIC Educational Resources Information Center

    Morton, Maurice

    1973-01-01

    Describes the features of the free radical, anionic, and cationic mechanisms of chain addition polymerization. Indicates that the nature of chain reactions can be best taught through the study of macromolecules. (CC)

  2. Polymerization as a Model Chain Reaction

    ERIC Educational Resources Information Center

    Morton, Maurice

    1973-01-01

    Describes the features of the free radical, anionic, and cationic mechanisms of chain addition polymerization. Indicates that the nature of chain reactions can be best taught through the study of macromolecules. (CC)

  3. Modelling Chemical Reasoning to Predict and Invent Reactions.

    PubMed

    Segler, Marwin H S; Waller, Mark P

    2016-11-11

    The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph and because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery.

  4. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 X 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system `Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  5. Modeling Vortex Generators in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  6. Detailed reduction of reaction mechanisms for flame modeling

    NASA Technical Reports Server (NTRS)

    Wang, Hai; Frenklach, Michael

    1991-01-01

    A method for reduction of detailed chemical reaction mechanisms, introduced earlier for ignition system, was extended to laminar premixed flames. The reduction is based on testing the reaction and reaction-enthalpy rates of the 'full' reaction mechanism using a zero-dimensional model with the flame temperature profile as a constraint. The technique is demonstrated with numerical tests performed on the mechanism of methane combustion.

  7. ROBO: a model and a code for studying the interstellar medium

    NASA Astrophysics Data System (ADS)

    Grassi, T.; Krstic, P.; Merlin, E.; Buonomo, U.; Piovan, L.; Chiosi, C.

    2011-09-01

    We present robo, a model and its companion code for the study of the interstellar medium (ISM). The aim is to provide an accurate description of the physical evolution of the ISM and to set the ground for an ancillary tool to be inserted in NBody-Tree-SPH (NB-TSPH) simulations of large-scale structures in the cosmological context or of the formation and evolution of individual galaxies. The ISM model consists of gas and dust. The gas chemical composition is regulated by a network of reactions that includes a large number of species (hydrogen and deuterium-based molecules, helium, and metals). New reaction rates for the charge transfer in H+ and H2 collisions are presented. The dust contains the standard mixture of carbonaceous grains (graphite grains and PAHs) and silicates. In our model, dust is formed and destroyed by several processes. The model accurately treats the cooling process, based on several physical mechanisms, and cooling functions recently reported in the literature. The model is applied to a wide range of the input parameters, and the results for important quantities describing the physical state of the gas and dust are presented. The results are organized in a database suited to artificial neural networks (ANNs). Once trained, the ANNs yield the same results obtained by ROBO with great accuracy. We plan to develop ANNs suitably tailored for applications to NB-TSPH simulations of cosmological structures and/or galaxies.

  8. ROBO: a model and a code for studying the interstellar medium

    SciTech Connect

    Grassi, T; Krstic, Predrag S; Merlin, E; Buonomo, U; Piovan, L; Chiosi, C

    2011-01-01

    We present robo, a model and its companion code for the study of the interstellar medium (ISM). The aim is to provide an accurate description of the physical evolution of the ISM and to set the ground for an ancillary tool to be inserted in NBody-Tree-SPH (NB-TSPH) simulations of large-scale structures in the cosmological context or of the formation and evolution of individual galaxies. The ISM model consists of gas and dust. The gas chemical composition is regulated by a network of reactions that includes a large number of species (hydrogen and deuterium-based molecules, helium, and metals). New reaction rates for the charge transfer in H+ and H2 collisions are presented. The dust contains the standard mixture of carbonaceous grains (graphite grains and PAHs) and silicates. In our model, dust is formed and destroyed by several processes. The model accurately treats the cooling process, based on several physical mechanisms, and cooling functions recently reported in the literature. The model is applied to a wide range of the input parameters, and the results for important quantities describing the physical state of the gas and dust are presented. The results are organized in a database suited to artificial neural networks (ANNs). Once trained, the ANNs yield the same results obtained by ROBO with great accuracy. We plan to develop ANNs suitably tailored for applications to NB-TSPH simulations of cosmological structures and/or galaxies.

  9. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    SciTech Connect

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), i.e., the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.

  10. A grid-based coulomb collision model for PIC codes

    SciTech Connect

    Jones, M.E.; Lemons, D.S.; Mason, R.J.; Thomas, V.A.; Winske, D.

    1996-01-01

    A new method is presented to model the intermediate regime between collisionless and Coulomb-collision-dominated plasmas in particle-in-cell codes. Collisional processes between particles of different species are treated through the concept of a grid-based "collision field," which can be particularly efficient for multi-dimensional applications. In this method, particles are scattered using a force which is determined from the moments of the distribution functions accumulated on the grid. The form of the force is such as to reproduce the multi-fluid transport equations through the second (energy) moment. Collisions between particles of the same species require a separate treatment. For this, a Monte Carlo-like scattering method based on the Langevin equation is used. The details of both methods are presented, and their implementation in a new hybrid (particle ion, massless fluid electron) algorithm is described. Aspects of the collision model are illustrated through several one- and two-dimensional test problems as well as examples involving laser produced colliding plasmas.
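The Langevin treatment of same-species collisions mentioned above can be sketched as an Ornstein-Uhlenbeck update in velocity space: a drag toward the local drift plus a random kick sized by fluctuation-dissipation so the ensemble relaxes to a Maxwellian. The collision frequency, thermal speed, and time step below are illustrative assumptions, not the paper's parameters:

```python
import math
import random

def langevin_step(v, u, nu, vth, dt, rng=random):
    """One Euler step of dv = -nu*(v - u)*dt + sqrt(2*nu*vth^2*dt)*N(0,1).

    The noise amplitude is tied to the drag (fluctuation-dissipation) so the
    stationary velocity distribution has thermal speed vth around drift u.
    """
    drag = -nu * (v - u) * dt
    kick = math.sqrt(2.0 * nu * vth ** 2 * dt) * rng.gauss(0.0, 1.0)
    return v + drag + kick

random.seed(1)
vs = [10.0] * 2000            # a cold beam, far from equilibrium
for _ in range(500):          # integrate to t = 10 collision times
    vs = [langevin_step(v, 0.0, 1.0, 1.0, 0.02) for v in vs]

mean = sum(vs) / len(vs)
var = sum((v - mean) ** 2 for v in vs) / len(vs)
```

After many collision times the beam forgets its initial velocity: the ensemble mean approaches the drift (0 here) and the variance approaches vth² = 1.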

  11. Reaction chain modeling of denitrification reactions during a push-pull test.

    PubMed

    Boisson, A; de Anna, P; Bour, O; Le Borgne, T; Labasque, T; Aquilina, L

    2013-05-01

    Field quantitative estimation of reaction kinetics is required to enhance our understanding of biogeochemical reactions in aquifers. We extended the analytical solution developed by Haggerty et al. (1998) to model an entire 1st order reaction chain and estimate the kinetic parameters for each reaction step of the denitrification process. We then assessed the ability of this reaction chain to model biogeochemical reactions by comparing it with experimental results from a push-pull test in a fractured crystalline aquifer (Ploemeur, French Brittany). Nitrates were used as the reactive tracer, since denitrification involves the sequential reduction of nitrates to nitrogen gas through a chain reaction (NO3(-)→NO2(-)→NO→N2O→N2) under anaerobic conditions. The kinetics of nitrate consumption and by-product formation (NO2(-), N2O) during autotrophic denitrification were quantified by using a reactive tracer (NO3(-)) and a non-reactive tracer (Br(-)). The formation of reaction by-products (NO2(-), N2O, N2) has not been previously considered using a reaction chain approach. Comparison of Br(-) and NO3(-) breakthrough curves showed that 10% of the injected NO3(-) molar mass was transformed during the 12 h experiment (2% into NO2(-), 1% into N2O and the rest into N2 and NO). Similar results, but with slower kinetics, were obtained from laboratory experiments in reactors. The good agreement between the model and the field data shows that the complete denitrification process can be efficiently modeled as a sequence of first order reactions. The 1st order kinetics coefficients obtained through modeling were as follows: k1=0.023 h(-1), k2=0.59 h(-1), k3=16 h(-1), and k4=5.5 h(-1). A next step will be to assess the variability of field reactivity using the methodology developed for modeling push-pull tracer tests.
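The first-order chain NO3⁻ → NO2⁻ → NO → N2O → N2 has a closed-form (Bateman) solution, which is what makes the reaction-chain fit above tractable. A minimal sketch using the rate constants reported in the record (k1 = 0.023, k2 = 0.59, k3 = 16, k4 = 5.5 h⁻¹); the unit initial concentration is an illustrative normalization, and transport effects present in the push-pull test are deliberately ignored:

```python
import math

# Rate constants (1/h) for NO3- -> NO2- -> NO -> N2O -> N2, from the study
k = [0.023, 0.59, 16.0, 5.5]

def chain(t, c0=1.0):
    """Bateman solution for the four decaying species; N2 by mass balance."""
    c = [c0 * math.exp(-k[0] * t)]          # NO3-
    for n in range(1, 4):                   # NO2-, NO, N2O
        rates = k[: n + 1]
        total = 0.0
        for i in range(n + 1):
            denom = 1.0
            for j in range(n + 1):
                if j != i:
                    denom *= rates[j] - rates[i]
            total += math.exp(-rates[i] * t) / denom
        c.append(c0 * math.prod(rates[:n]) * total)
    c.append(c0 - sum(c))                   # stable end product N2
    return c
```

Because k3 and k4 are much larger than k1 and k2, NO and N2O stay at low transient levels, consistent with the small NO2⁻ and N2O fractions observed in the field experiment.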

  12. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  13. A New Approach to Model Pitch Perception Using Sparse Coding

    PubMed Central

    Furst, Miriam; Barak, Omri

    2017-01-01

    Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or a low and high-level amplitude stimulus with the same spectral content–these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cells responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments. PMID:28099436

  14. A New Approach to Model Pitch Perception Using Sparse Coding.

    PubMed

    Barzelay, Oded; Furst, Miriam; Barak, Omri

    2017-01-01

    Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or a low and high-level amplitude stimulus with the same spectral content-these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cells responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments.
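The sparse-coding (SC) step described in the two records above, representing a signal by a few atoms from a large dictionary, can be illustrated with a minimal greedy matching-pursuit sketch. The dictionary, signal, and stopping rule here are toy assumptions for a unit-norm dictionary, not the paper's spatiotemporal auditory-nerve atoms:

```python
def matching_pursuit(signal, dictionary, n_atoms):
    """Greedily pick the atoms most correlated with the residual.

    Assumes each atom in `dictionary` has unit norm. Returns the nonzero
    coefficients (atom index -> weight) and the final residual.
    """
    residual = list(signal)
    coeffs = {}
    for _ in range(n_atoms):
        best, best_dot = None, 0.0
        for idx, atom in enumerate(dictionary):
            d = sum(r * a for r, a in zip(residual, atom))
            if abs(d) > abs(best_dot):
                best, best_dot = idx, d
        if best is None:        # residual orthogonal to every atom
            break
        coeffs[best] = coeffs.get(best, 0.0) + best_dot
        residual = [r - best_dot * a for r, a in zip(residual, dictionary[best])]
    return coeffs, residual

# Toy example: an orthonormal 2-D dictionary recovers the signal exactly
coeffs, residual = matching_pursuit([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]], 2)
```

In the pitch model the analogous step selects a few active spatiotemporal templates whose coefficients play the role of active neurons feeding the harmonic sieve.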

  15. A realistic model under which the genetic code is optimal.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided in four such subgroups). The three approaches to explain robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.
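The permutation test described above can be sketched directly: score a code by the mean squared change in an amino-acid property over all single-base substitutions between sense codons, then compare against codes with randomly permuted amino-acid assignments. The polar-requirement values below are approximate classic (Woese-style) figures, not the updated values the paper uses, and the unrestricted shuffle ignores the paper's fixed assignments and subgroup subdivision:

```python
import random

BASES = "UCAG"
# Standard genetic code, positions nested in U, C, A, G order ('*' = stop)
CODE_STRING = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
               "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
CODE = dict(zip(CODONS, CODE_STRING))

# Approximate polar-requirement values (illustrative inputs only)
PR = {"A": 7.0, "R": 9.1, "N": 10.0, "D": 13.0, "C": 4.8, "Q": 8.6,
      "E": 12.5, "G": 7.9, "H": 8.4, "I": 4.9, "L": 4.9, "K": 10.1,
      "M": 5.3, "F": 5.0, "P": 6.6, "S": 7.5, "T": 6.6, "W": 5.2,
      "Y": 5.4, "V": 5.6}

def mean_square(code, prop):
    """Mean squared property change over all single-base sense-to-sense mutations."""
    total, count = 0.0, 0
    for codon in CODONS:
        aa = code[codon]
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                aa2 = code[codon[:pos] + b + codon[pos + 1:]]
                if aa2 == "*":
                    continue
                total += (prop[aa] - prop[aa2]) ** 2
                count += 1
    return total / count

ms_std = mean_square(CODE, PR)

rng = random.Random(42)
aas = list(PR)
better = 0
for _ in range(500):
    shuffled = aas[:]
    rng.shuffle(shuffled)
    remap = dict(zip(aas, shuffled))
    rand_code = {c: (aa if aa == "*" else remap[aa]) for c, aa in CODE.items()}
    if mean_square(rand_code, PR) < ms_std:
        better += 1
frac_better = better / 500
```

Even this crude version shows the standard code sitting far into the robust tail of the permutation distribution; the paper's contribution is that, with updated inputs and biologically motivated restrictions on the shuffle, no permuted code does better.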

  16. Semantic-preload video model based on VOP coding

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce semantic gap which exists between high-level semantics and low-level features of video when the human understanding image or video, people mostly try the method of video annotation where in signal's downstream, namely further (again) attach labels to the content in video-database. Few people focus on the idea that: Use limited interaction and the means of comprehensive segmentation (including optical technologies) from the front-end of collection of video information (i.e. video camera), with video semantics analysis technology and corresponding concepts sets (i.e. ontology) which belong in a certain domain, as well as story shooting script and the task description of scene shooting etc; Apply different-level semantic descriptions to enrich the attributes of video object and the attributes of image region, then forms a new video model which is based on Video Object Plan (VOP) Coding. This model has potential intellectualized features, and carries a large amount of metadata, and embedded intermediate-level semantic concept into every object. This paper focuses on the latter, and presents a framework of a new video model. At present, this new video model is temporarily named "Video Model of Semantic-Preloaded or Semantic-Preload Video Model (simplified into VMoSP or SPVM)". This model mainly researches how to add labeling to video objects and image regions in real time, here video object and image region are usually used intermediate semantic labeling, and this work is placed on signal's upstream (i.e. video capture production stage). Because of the research needs, this paper also tries to analyses the hierarchic structure of video, and divides the hierarchic structure into nine hierarchy semantic levels, of course, this nine hierarchy only involved in video production process. In addition, the paper also point out that here semantic level tagging work (i.e. semantic preloading) only refers to the four middle-level semantic. 

  17. Modeling the isotope effect in Walden inversion reactions

    NASA Astrophysics Data System (ADS)

    Schechter, Israel

    1991-05-01

    A simple model is suggested to explain the isotope effect in the Walden exchange reaction. It is developed in the spirit of line-of-centers models and considers a hard-sphere collision that transfers energy from the relative translation to the desired vibrational mode, as well as geometrical properties and steric requirements. The model reproduces the recently measured cross sections for the reactions of hydrogen with isotopic silanes, and older measurements of the substitution reactions of tritium atoms with isotopic methanes. Unlike previously given explanations, it accounts for the effect of the attacking atom as well as of the other participating atoms. The model also provides a qualitative explanation of the measured relative yields and thresholds of CH3T and CH2TF from the reaction T + CH3F. Predictions for isotope effects and cross sections of some unmeasured reactions are given.
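
    The line-of-centers picture invoked here has a classic closed form for the reactive cross section; a minimal sketch (function name and all numbers are illustrative, not taken from the paper):

```python
def line_of_centers_cross_section(energy, threshold, sigma_hs):
    """Reactive cross section in the classic line-of-centers picture:
    sigma(E) = sigma_hs * (1 - E0/E) for E > E0, and zero below threshold.

    energy    -- relative translational energy (same units as threshold)
    threshold -- minimum energy along the line of centers for reaction
    sigma_hs  -- hard-sphere collision cross section (area units)
    """
    if energy <= threshold:
        return 0.0
    return sigma_hs * (1.0 - threshold / energy)

# At twice the threshold energy, half the hard-sphere cross section is reactive.
print(line_of_centers_cross_section(2.0, 1.0, 10.0))  # 5.0
```

    The cross section rises from zero at threshold toward the hard-sphere limit at high energy, which is the behavior such models fit to measured excitation functions.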

  18. Cost effectiveness of the 1993 Model Energy Code in Colorado

    SciTech Connect

    Lucas, R.G.

    1995-06-01

    This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1993 Model Energy Code (MEC) building thermal-envelope requirements for single-family homes in Colorado. The goal of this analysis was to compare the cost effectiveness of the 1993 MEC to current construction practice in Colorado, based on an objective methodology that determined the total life-cycle cost associated with complying with the 1993 MEC. The analysis was performed for the range of Colorado climates. The costs and benefits of complying with the 1993 MEC were estimated from the consumer's perspective. The time when the homeowner realizes net cash savings (net positive cash flow) for homes built in accordance with the 1993 MEC was estimated to vary from 0.9 year in Steamboat Springs to 2.4 years in Denver. Compliance with the 1993 MEC was estimated to increase first costs by $1,190 to $2,274, resulting in an incremental down-payment increase of $119 to $227 (at 10% down). The net present value of all costs and benefits to the home buyer, accounting for the mortgage and taxes, varied from a savings of $1,772 in Springfield to a savings of $6,614 in Steamboat Springs. The ratio of benefits to costs ranged from 2.3 in Denver to 3.8 in Steamboat Springs.

  19. An Analytical Model for BDS B1 Spreading Code Self-Interference Evaluation Considering NH Code Effects

    PubMed Central

    Zhang, Xin; Zhan, Xingqun; Feng, Shaojun; Ochieng, Washington

    2017-01-01

    The short spreading codes used by the BeiDou Navigation Satellite System (BDS) B1-I signal and by GPS Coarse/Acquisition (C/A) can cause undesirable aggregate cross-correlation between signals within each constellation. This GPS-to-GPS or BDS-to-BDS correlation is referred to as self-interference. A GPS C/A code self-interference model is extended to propose a self-interference model for BDS B1, taking into account the unique feature of the B1-I signal transmitted by BDS medium Earth orbit (MEO) and inclined geosynchronous orbit (IGSO) satellites: an extra Neumann-Hoffmann (NH) code. As no analytical model for BDS self-interference currently exists, a simple three-parameter analytical model is proposed. The model is developed by calculating the spectral separation coefficient (SSC), converting the SSC to an equivalent white-noise power level, and then using this to calculate the effective carrier-to-noise density ratio. The cyclostationarity embedded in the signal gives the proposed model additional accuracy in predicting B1-I self-interference. Hardware-simulator data are used to validate the model. Software-simulator data are used to show the impact of self-interference on a typical BDS receiver, including the finding that the self-interference effect is most significant when the differential Doppler between the desired and undesired signals is zero. Simulation results show that the aggregate noise caused by just two undesired spreading codes acting on a single desired signal can lift the receiver noise floor by 3.83 dB under extreme C/N0 (carrier-to-noise density ratio) conditions (around 20 dB-Hz). This aggregate noise can increase the code-tracking standard deviation by 11.65 m under low-C/N0 (15-19 dB-Hz) conditions and should therefore be avoided in high-sensitivity applications. Although the findings refer to the BeiDou system, the principal weakness of short codes illuminated here applies to other satellite navigation systems as well. PMID:28333120
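
    The modeling pipeline summarized above (SSC → equivalent white-noise density → effective C/N0) follows the standard interference formula (C/N0)_eff = C / (N0 + Σ P_i·SSC_i). A sketch in Python, with all power levels and the flat SSC value chosen for illustration rather than taken from the paper:

```python
import math

def effective_cn0_dbhz(cn0_dbhz, signal_power_dbw, interferer_powers_dbw, ssc_dbhz):
    """Effective carrier-to-noise density ratio after spreading-code
    self-interference, using (C/N0)_eff = C / (N0 + sum_i P_i * SSC_i).

    The SSC (in dB/Hz here) converts each interferer's received power into
    an equivalent white-noise density. All numeric inputs in the example
    below are illustrative, not calibrated BDS B1 values.
    """
    c = 10.0 ** (signal_power_dbw / 10.0)            # carrier power, W
    n0 = c / 10.0 ** (cn0_dbhz / 10.0)               # thermal noise density, W/Hz
    i0 = (sum(10.0 ** (p / 10.0) for p in interferer_powers_dbw)
          * 10.0 ** (ssc_dbhz / 10.0))               # equivalent noise density, W/Hz
    return 10.0 * math.log10(c / (n0 + i0))

# Two equal-power interfering codes pull the effective C/N0 below 50 dB-Hz.
print(effective_cn0_dbhz(50.0, -160.0, [-160.0, -160.0], -60.0))
```

    With an empty interferer list the function returns the interference-free C/N0, which is a convenient sanity check on the unit conversions.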

  1. Model photo reaction centers via genetic engineering

    SciTech Connect

    Zhiyu Wang; DiMagno, T.J.; Popov, M.; Norris, J.R.; Chikin Chan; Fleming, G.; Jau Tang; Hanson, D.; Schiffer, M.

    1992-12-31

    A series of reaction centers of Rhodobacter capsulatus, isolated from a set of organisms modified by site-directed mutagenesis at residues M208 and L181, is described. Changes in the amino acid at these sites affect both the energetics of the system and the chemical kinetics of the initial electron-transfer (ET) event. Two empirical relations among the different mutants, for the reduction potential and for the ET rate, are presented.

  3. Effect of reactions in small eddies on biomass gasification with eddy dissipation concept - Sub-grid scale reaction model.

    PubMed

    Chen, Juhui; Yin, Weijie; Wang, Shuai; Meng, Cheng; Li, Jiuru; Qin, Bai; Yu, Guangbin

    2016-07-01

    A large-eddy simulation (LES) approach is used for the gas turbulence, and an eddy dissipation concept (EDC) sub-grid-scale (SGS) reaction model is employed for the reactions in small eddies. The simulated gas molar fractions are in better agreement with experimental data when the EDC-SGS reaction model is used. The effect of reactions in small eddies on biomass gasification is analyzed in detail with the EDC-SGS reaction model. The distributions of the SGS reaction rates, which represent the reactions in small eddies, are analyzed along with the particle concentration and temperature. The SGS reaction rates follow a trend similar to that of the total reaction rates, and their values account for about 15% of the total. The heterogeneous reaction rates predicted with the EDC-SGS reaction model are also improved for the biomass gasification process in a bubbling fluidized bed.

  4. Sodium/water pool-deposit bed model of the CONACS code. [LMFBR

    SciTech Connect

    Peak, R.D.

    1983-12-17

    A new pool-bed model of the CONACS (Containment Analysis Code System) code represents a major advance over the pool models of other containment analysis codes (the NABE code of France, the CEDAN code of Japan, and the CACECO and CONTAIN codes of the United States). The new model advances pool-bed modeling in the number of significant materials and processes that are included with appropriate rigor. The CONACS pool-bed model maintains material balances for eight chemical species (C, H2O, Na, NaH, Na2O, Na2O2, Na2CO3 and NaOH) that collect in the stationary liquid pool on the floor and in the deposit bed on the elevated shelf of the standard CONACS analysis cell.

  5. Modelling couplings between reaction, fluid flow and deformation: Kinetics

    NASA Astrophysics Data System (ADS)

    Malvoisin, Benjamin; Podladchikov, Yury Y.; Connolly, James A. D.

    2016-04-01

    Mineral assemblages out of equilibrium are commonly found in metamorphic rocks, testifying to the critical role of kinetics in metamorphic reactions. Because experimentally determined reaction rates in fluid-saturated systems generally indicate complete reaction in less than several years, i.e., several orders of magnitude faster than field-based estimates, metamorphic reaction kinetics are generally thought to be controlled by transport rather than by processes at the mineral surface. However, some geological processes, such as earthquakes or slow-slip events, have shorter characteristic timescales, and transport processes can be intimately related to mineral-surface processes. It is therefore important to take the kinetics of mineral-surface processes into account when modelling fluid/rock interactions. Here, a model coupling reaction, fluid flow and deformation is improved by introducing a delay in the achievement of equilibrium. The classical formalism for dissolution/precipitation reactions is used to describe the influence of the distance from equilibrium and of temperature on the reaction rate, and a porosity dependence is introduced to model the evolution of the reacting surface area during reaction. Fitting experimental data for three reactions typical of metamorphic systems (serpentine dehydration, muscovite dehydration and calcite decarbonation) indicates systematically faster kinetics close to equilibrium on the dehydration side than on the hydration side. This effect is amplified through the porosity term in the reaction rate, since porosity forms during dehydration. Numerical modelling indicates that this difference in reaction rate close to equilibrium plays a key role in microtexture formation. The developed model can be used in a wide variety of geological systems where couplings between reaction, deformation and fluid flow must be considered.
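
    A minimal sketch of a rate law with the ingredients described (Arrhenius temperature dependence, distance from equilibrium via the saturation ratio, and a porosity term standing in for the reacting surface area); the functional form and all constants are assumptions for illustration, not the paper's calibrated model:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def reaction_rate(k0, ea_j_mol, temp_k, omega, porosity, n=1.0):
    """Transition-state-theory style dissolution/precipitation rate law (sketch).

    k0        -- pre-exponential rate constant
    ea_j_mol  -- activation energy, J/mol (Arrhenius temperature dependence)
    temp_k    -- temperature, K
    omega     -- saturation ratio Q/K (omega < 1: dissolution; > 1: precipitation)
    porosity  -- proxy for the evolving reactive surface area (model assumption)
    n         -- exponent on the porosity term (assumed)
    """
    k = k0 * math.exp(-ea_j_mol / (R_GAS * temp_k))
    return k * porosity ** n * (1.0 - omega)  # sign encodes the direction

print(reaction_rate(1.0e3, 5.0e4, 600.0, 0.5, 0.1))  # undersaturated: rate > 0
print(reaction_rate(1.0e3, 5.0e4, 600.0, 1.5, 0.1))  # oversaturated: rate < 0
```

    The (1 - Ω) factor makes the rate vanish exactly at equilibrium, which is the "delay in the achievement of equilibrium" the abstract refers to; asymmetric kinetics on either side of equilibrium would be encoded by letting k0 or n differ between dissolution and precipitation.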

  6. Development of reaction models for ground-water systems

    USGS Publications Warehouse

    Plummer, L.N.; Parkhurst, D.L.; Thorstenson, D.C.

    1983-01-01

    Methods are described for developing geochemical reaction models from the observed chemical compositions of ground water along a hydrologic flow path. The roles of thermodynamic speciation programs, mass balance calculations, and reaction-path simulations in developing and testing reaction models are contrasted. Electron transfer is included in the mass balance equations to properly account for redox reactions in ground water. The mass balance calculations determine net mass-transfer models, which must be checked against the thermodynamic calculations of speciation and reaction-path programs. Although reaction-path simulations of ground-water chemistry are thermodynamically valid, they must be checked against the net mass transfer defined by the mass balance calculations. An example is given testing multiple reaction hypotheses along a flow path in the Floridan aquifer, where several reaction models are eliminated. Use of carbon and sulfur isotopic data with mass balance calculations indicates a net reaction of incongruent dissolution of dolomite (dolomite dissolution with calcite precipitation) driven irreversibly by gypsum dissolution, accompanied by minor sulfate reduction, ferric hydroxide dissolution, and pyrite precipitation in central Florida. Along the flow path, the aquifer appears to be open to CO2 initially, and open to organic carbon at more distant points down gradient. © 1983.
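
    The net mass-transfer calculation described amounts to solving element balances for the unknown mole transfers of candidate phases. A toy back-solve with a hypothetical three-phase set (not the paper's full redox-balanced system), echoing the Floridan dedolomitization example:

```python
def net_mass_transfer(delta_ca, delta_mg, delta_s):
    """Toy mass balance for a hypothetical three-phase model:
    dolomite CaMg(CO3)2, gypsum CaSO4.2H2O, calcite CaCO3.

    Inputs are observed changes in dissolved totals along the flow path
    (mmol/kg, illustrative values). Positive output = net dissolution,
    negative = net precipitation. Because only dolomite supplies Mg and
    only gypsum supplies sulfate, the system back-solves directly; the
    Ca balance then closes on calcite.
    """
    dolomite = delta_mg
    gypsum = delta_s
    calcite = delta_ca - dolomite - gypsum
    return calcite, dolomite, gypsum

calcite, dolomite, gypsum = net_mass_transfer(1.2, 0.5, 1.0)
print(calcite, dolomite, gypsum)  # calcite < 0: precipitation, as in the Floridan case
```

    Real applications solve a larger linear system (one balance per element plus an electron balance for redox) and then screen the resulting mass-transfer model against speciation calculations, as the abstract describes.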

  7. Modeling Second-Order Chemical Reactions using Cellular Automata

    NASA Astrophysics Data System (ADS)

    Hunter, N. E.; Barton, C. C.; Seybold, P. G.; Rizki, M. M.

    2012-12-01

    Cellular automata (CA) are discrete, agent-based, dynamic, iterated mathematical computational models used to describe complex physical, biological, and chemical systems. Unlike the more computationally demanding molecular dynamics and Monte Carlo approaches, which use "force fields" to model molecular interactions, CA models employ a set of local rules. The traditional approach for modeling chemical reactions is to solve a set of simultaneous differential rate equations to give deterministic outcomes; CA models yield statistical outcomes for a finite number of ingredients. The deterministic solutions appear as limiting cases for conditions such as a large number of ingredients, or a finite number of ingredients and many trials. Here we present a two-dimensional, probabilistic CA model of a second-order gas-phase reaction A + B → C, implemented in MATLAB. Beginning with a random distribution of ingredients A and B, formation of C emerges as the system evolves. The reaction rate can be varied based on the probability of favorable collisions of the reagents A and B. The model permits visualization of the conversion of reagents to products, and allows one to plot concentration vs. time for A, B and C. We test hypothetical reaction conditions such as limiting reagents and the effects of reaction probabilities and reagent concentrations on the reaction kinetics. The deterministic solutions of the reactions emerge as statistical averages in the limit of a large number of cells in the array. Modeling results for dynamic processes in the atmosphere will be presented.
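
    The abstract's model is implemented in MATLAB; here is a minimal Python analogue of the probabilistic-CA idea, in which random-walking A and B agents react on co-location with a tunable probability. Grid size, agent counts and probabilities are arbitrary choices, not the authors' parameters:

```python
import random

def simulate(n_a=200, n_b=200, size=50, p_react=0.5, steps=200, seed=1):
    """Toy probabilistic model of A + B -> C on a size x size toroidal grid.

    Agents take random-walk steps; when an A shares a cell with a B they
    react with probability p_react, consuming one of each and producing
    one C. Returns the (A, B, C) counts after every step.
    """
    rng = random.Random(seed)
    a = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_a)]
    b = [(rng.randrange(size), rng.randrange(size)) for _ in range(n_b)]
    c = 0
    history = []
    for _ in range(steps):
        def move(pos):
            return ((pos[0] + rng.choice((-1, 0, 1))) % size,
                    (pos[1] + rng.choice((-1, 0, 1))) % size)
        a = [move(pos) for pos in a]
        b = [move(pos) for pos in b]
        b_cells = {}
        for i, pos in enumerate(b):
            b_cells.setdefault(pos, []).append(i)
        used, survivors = set(), []
        for pos in a:
            partners = [i for i in b_cells.get(pos, []) if i not in used]
            if partners and rng.random() < p_react:
                used.add(partners[0])  # one A consumes one B -> one C
                c += 1
            else:
                survivors.append(pos)
        a = survivors
        b = [pos for i, pos in enumerate(b) if i not in used]
        history.append((len(a), len(b), c))
    return history

hist = simulate()
print(hist[-1])  # A and B are depleted as C accumulates
```

    Plotting the three counts against the step index gives the concentration-vs-time curves the abstract mentions; averaging many seeds recovers the deterministic second-order kinetics in the large-system limit.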

  8. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  9. Multicomponent transport with coupled geochemical and microbiological reactions: model description and example simulations

    NASA Astrophysics Data System (ADS)

    Tebes-Stevens, Caroline; J. Valocchi, Albert; VanBriesen, Jeanne M.; Rittmann, Bruce E.

    1998-08-01

    A reactive transport code (FEREACT) has been developed to examine the coupled effects of two-dimensional steady-state groundwater flow, equilibrium aqueous speciation reactions, and kinetically-controlled interphase reactions. The model uses an iterative two-step (SIA-1) solution algorithm to incorporate the effects of the geochemical and microbial reaction processes in the governing equation for solute transport in the subsurface. This SIA-1 method improves upon the convergence behavior of the traditional sequential iterative approach (SIA) through the inclusion of an additional first-order term from the Taylor Series expansion of the kinetic reaction rate expressions. The ability of FEREACT to simulate coupled reactive processes was demonstrated by modeling the transport of a radionuclide (cobalt, 60Co 2+) and an organic ligand (ethylenediaminetetraacetate, EDTA 4-) through a column packed with an iron oxide-coated sand. The reaction processes considered in this analysis included equilibrium aqueous speciation reactions and three types of kinetic reactions: adsorption, surface dissolution, and biodegradation.

  10. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
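
    The contract described (a self-describing model with three required entry points, the parent code owning the database) can be sketched as a class protocol. Everything below, including the names and the toy stress update, is a hypothetical illustration rather than the actual MIG specification:

```python
class MIGStyleModel:
    """Sketch of a MIG-like material-model contract (names hypothetical).

    The parent code owns the database; the model only declares its needs
    and transforms field variables passed via structured arguments.
    """
    required_inputs = ("strain",)
    extra_fields = ("plastic_work",)  # state the model asks the host to allocate

    def check_data(self, params):
        """Validate user parameters before the run starts."""
        if params.get("yield_stress", 0.0) <= 0.0:
            raise ValueError("yield_stress must be positive")

    def request_extra_variables(self):
        """Tell the parent code which extra state variables to allocate."""
        return self.extra_fields

    def run_physics(self, params, fields):
        """Toy 'model physics': elastic-perfectly-plastic stress update."""
        stress = params["modulus"] * fields["strain"]
        return min(stress, params["yield_stress"])

# The parent code drives the three entry points in order.
model = MIGStyleModel()
model.check_data({"yield_stress": 250.0})
host_fields = {"strain": 0.002, "plastic_work": 0.0}
print(model.run_physics({"modulus": 200e3, "yield_stress": 250.0}, host_fields))
```

    The point of such a contract is the one the abstract makes: the same model body can be dropped into any host that honors the three entry points and the declared field list, with no edits to the model itself.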

  11. STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python.

    PubMed

    Wils, Stefan; De Schutter, Erik

    2009-01-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code.

  12. (n,xnγ) cross sections on actinides versus reaction code calculations

    NASA Astrophysics Data System (ADS)

    Kerveno, Maëlle; Bacquias, Antoine; Belloni, Francesca; Borcea, Catalin; Capote, Roberto; Dessagne, Philippe; Dupuis, Marc; Henning, Greg; Hilaire, Stéphane; Kawano, Toshihiko; Nankov, Nicolas; Negret, Alexandru; Nyman, Markus; Party, Eliot; Plompen, Arjan; Romain, Pascal; Rouki, Charoula; Rudolf, Gérard; Stanoiu, Mihai

    2017-09-01

    The experimental setup GRAPhEME (GeRmanium array for Actinides PrEcise MEasurements) has been used at GELINA (EC-JRC, Geel, Belgium) to perform (n,xnγ) cross-section measurements. GRAPhEME was specially designed to cope with the specific difficulties posed by actinide samples. This work takes place in the context of new nuclear data measurements for nuclear reactor applications. Given the very tight accuracy requirements on new experimental data, special care has been taken to quantify as accurately as possible all uncertainties arising from the instruments and the analysis procedure. To obtain (n,xn) cross sections from the precise (n,xnγ) cross sections produced with GRAPhEME, model calculations are required; beyond the measurements, extensive work on theoretical models is necessary to achieve a better evaluation of the (n,xn) processes. In this paper we discuss the final step of the 238U data analysis and present some recent results obtained on 232Th, compared with TALYS modelling. A new measurement campaign on 233U has started recently; a first assessment of the recorded data will be presented.

  13. Chemical reactions simulated by ground-water-quality models

    USGS Publications Warehouse

    Grove, David B.; Stollenwerk, Kenneth G.

    1987-01-01

    Recent literature concerning the modeling of chemical reactions during transport in ground water is examined with emphasis on sorption reactions. The theory of transport and reactions in porous media has been well documented. Numerous equations have been developed from this theory, to provide both continuous and sequential or multistep models, with the water phase considered for both mobile and immobile phases. Chemical reactions can be either equilibrium or non-equilibrium, and can be quantified in linear or non-linear mathematical forms. Non-equilibrium reactions can be separated into kinetic and diffusional rate-limiting mechanisms. Solutions to the equations are available by either analytical expressions or numerical techniques. Saturated and unsaturated batch, column, and field studies are discussed with one-dimensional, laboratory-column experiments predominating. A summary table is presented that references the various kinds of models studied and their applications in predicting chemical concentrations in ground waters.
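
    For the linear-equilibrium sorption case surveyed above, transport reduces to a retarded advection-dispersion equation with the standard retardation factor R = 1 + (ρ_b/θ)K_d; a minimal sketch (parameter values illustrative):

```python
def retardation_factor(bulk_density, porosity, kd):
    """Linear-equilibrium sorption retardation: R = 1 + (rho_b / theta) * Kd.

    bulk_density -- dry bulk density, g/cm^3
    porosity     -- volumetric water content / porosity, dimensionless
    kd           -- linear distribution coefficient, cm^3/g
    """
    return 1.0 + (bulk_density / porosity) * kd

def retarded_velocity(pore_velocity, r):
    """Effective solute velocity under linear equilibrium sorption."""
    return pore_velocity / r

r = retardation_factor(1.6, 0.4, 0.5)  # illustrative values
print(r, retarded_velocity(1.0, r))    # R = 3: the solute front moves at 1/3 pore speed
```

    Non-linear (e.g. Freundlich or Langmuir) or kinetically limited sorption, as discussed in the abstract, makes R concentration- or time-dependent and generally requires a numerical rather than analytical treatment.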

  14. Reactive radical facilitated reaction-diffusion modeling for holographic photopolymerization

    SciTech Connect

    Liu Jianhua; Pu Haihui; Gao Bin; Gao Hongyue; Yin Dejin; Dai Haitao

    2010-02-08

    A phenomenological concentration of reactive radicals is proposed to take the role of curing-light intensity, in explicit proportion to the reaction rate, in the conventional reaction-diffusion model. This revision eliminates the theoretical defect of a null reaction rate when modeling the postcuring process, and extends the applicability of the model to the whole holographic photopolymerization process in a photocurable monomer/nematic liquid crystal blend system. Excellent consistency between the simulated and experimentally measured evolutions of the first-order diffraction efficiency of the formed composite Bragg gratings is obtained in both the curing and postcuring processes.

  15. Elementary reaction modeling of solid oxide electrolysis cells: Main zones for heterogeneous chemical/electrochemical reactions

    NASA Astrophysics Data System (ADS)

    Li, Wenying; Shi, Yixiang; Luo, Yu; Cai, Ningsheng

    2015-01-01

    A theoretical model of solid oxide electrolysis cells that considers heterogeneous elementary reactions, electrochemical reactions and the transport of mass and charge is applied to study the relative performance of H2O electrolysis, CO2 electrolysis and CO2/H2O co-electrolysis, and the competition between heterogeneous chemical and electrochemical reactions. In the cathode, heterogeneous chemical reactions occur near the outer surface, while the electrochemical reactions occur near the electrolyte. According to the mathematical analysis, the mass-transfer flux D∇c determines the size of the main zone of heterogeneous chemical reactions, while the charge-transfer flux σ∇V determines that of the electrochemical reactions. When the heterogeneous-chemistry zone is enlarged, more CO2 reacts through the heterogeneous chemical pathway, and the polarization curves of CO2/H2O co-electrolysis tend toward those of H2O electrolysis; when the electrochemistry zone is enlarged, more CO2 reacts through the electrochemical pathway, and the co-electrolysis polarization curves tend toward those of CO2 electrolysis. The polarization curves, the fractions of CO2 participating in electrolysis and in heterogeneous chemical reactions, the mass- and charge-transfer fluxes, and the main zones of heterogeneous chemical/electrochemical reactions are simulated to study the effects of cathode material characteristics (porosity, particle diameter and ionic conductivity) and operating conditions (gas composition and temperature).

  16. General Description of Fission Observables: GEF Model Code

    NASA Astrophysics Data System (ADS)

    Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.

    2016-01-01

    The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  18. Abundances in Astrophysical Environments: Reaction Network Simulations with Reaction Rates from Many-nucleon Modeling

    NASA Astrophysics Data System (ADS)

    Amason, Charlee; Dreyfuss, Alison; Launey, Kristina; Draayer, Jerry

    2017-01-01

    We use the ab initio (first-principles) symmetry-adapted no-core shell model (SA-NCSM) to calculate reaction rates of significance to type I X-ray burst nucleosynthesis. We consider the 18O(p,γ)19F reaction, which may influence the production of fluorine, as well as the 16O(α,γ)20Ne reaction, which is key to understanding the production of heavier elements in the universe. Results are compared to those obtained in the no-core symplectic shell model (NCSpM) with a schematic interaction. We discuss how these reaction rates affect the relevant elemental abundances. We thank the NSF for supporting this work through the REU Site in Physics & Astronomy (NSF grant #1560212) at Louisiana State University. This work was also supported by the U.S. NSF (OCI-0904874, ACI-1516338) and the U.S. DOE (DE-SC0005248).

  19. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for the prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e, which is the constant-rate limit. This value is used to estimate the flame NO(x) and, in turn, to predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production-rate plot by estimating the time to reach equilibrium through a differential analysis based on the reaction O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
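
    The rate-limiting Zeldovich step named above, O + N2 → NO + N, gives the familiar estimate d[NO]/dt ≈ 2 k1 [O][N2]; a sketch with an order-of-magnitude Arrhenius fit (the rate constant and concentrations are illustrative, not the paper's values):

```python
import math

def zeldovich_no_rate(temp_k, o_conc, n2_conc):
    """Thermal-NO formation rate from the rate-limiting Zeldovich step
    O + N2 -> NO + N, using d[NO]/dt ~ 2 * k1 * [O] * [N2].

    k1 uses an illustrative Arrhenius fit (order of magnitude only);
    concentrations in mol/cm^3, result in mol/(cm^3 s).
    """
    k1 = 1.8e14 * math.exp(-38370.0 / temp_k)  # cm^3/(mol s), illustrative
    return 2.0 * k1 * o_conc * n2_conc

# The large activation temperature makes the rate extremely sensitive to
# flame temperature, which is why thermal NO forms mainly in the hottest nodes.
print(zeldovich_no_rate(2200.0, 1e-9, 1e-5) / zeldovich_no_rate(1800.0, 1e-9, 1e-5))
```

    In a postprocessing model like the one described, this nodal rate would be integrated over each node's residence time and summed over the domain to give the total NO(x) level.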

  20. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    PubMed Central

    2011-01-01

    Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective on the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible
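
    The statistical approach can be sketched as follows. This is a minimal illustration under stated assumptions: the cost is the mean squared change in Kyte-Doolittle hydropathy over all single-nucleotide mutations, and the random codes merely permute amino acids among the canonical synonym blocks, a simpler randomization than the codon-reassignment model used in the paper.

```python
import itertools
import random

# Standard code, codons ordered with bases T,C,A,G at each position;
# '*' marks stop codons.
BASES = "TCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = dict(zip(CODONS, AA))

# Kyte-Doolittle hydropathy index per amino acid.
HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
         "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
         "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
         "Y": -1.3, "V": 4.2}

def cost(code):
    """Mean squared hydropathy change over all single-nucleotide mutations."""
    total, n = 0.0, 0
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = codon[:pos] + b + codon[pos + 1:]
                a1, a2 = code[codon], code[mut]
                if "*" in (a1, a2):
                    continue  # skip mutations to/from stop codons
                total += (HYDRO[a1] - HYDRO[a2]) ** 2
                n += 1
    return total / n

def random_code(rng):
    """Random code: permute amino acids among the canonical synonym blocks."""
    aas = sorted(set(AA) - {"*"})
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: (a if a == "*" else perm[a]) for c, a in CODE.items()}

rng = random.Random(0)
canonical = cost(CODE)
random_costs = [cost(random_code(rng)) for _ in range(200)]
frac_better = sum(c < canonical for c in random_costs) / len(random_costs)
print(canonical, frac_better)
```

    With this scoring, only a small fraction of random codes should beat the canonical one, which is the core observation of the statistical approach.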

  1. Comparison of DSMC reaction models with QCT reaction rates for nitrogen

    NASA Astrophysics Data System (ADS)

    Wysong, Ingrid J.; Gimelshein, Sergey F.

    2016-11-01

    Four empirical models of chemical reactions extensively used in the direct simulation Monte Carlo method in the past are analyzed via comparison of temperature and vibrational level dependent equilibrium and non-equilibrium reaction rates with available classical trajectory and direct molecular simulations for nitrogen dissociation. The considered models are total collision energy, quantum kinetic, vibration-dissociation favoring, and weak vibrational bias. The weak vibrational bias model was found to provide good agreement with benchmark vibrationally-specific dissociation rates, while significant differences were observed for the others.

  2. Model-based image coding using deformable 3D model for face-to-face communications

    NASA Astrophysics Data System (ADS)

    Cai, Defu; Liang, Huiying; Wang, Xiangwen

    1994-09-01

    The model-based image coding might be the potential method for very/ultra low bit rate visual communications. However, some problems still remain for video practice, such as a finer wireframe 3-D model, precise rule for facial expressions analyzing, and automatic feature points extraction for real time application, etc. This paper proposes a feasible scheme of model-based image coding based on a deformable model which would be suitable for very/ultra low bit rates transmission. Meanwhile, some key techniques, such as automatic face feature point extraction based on a priori knowledge for real time applications and the method of AUs separation of a face on various expressions, is given.

  3. Reading and a Diffusion Model Analysis of Reaction Time

    PubMed Central

    Naples, Adam; Katz, Leonard; Grigorenko, Elena L.

    2012-01-01

    Processing speed is associated with reading performance. However, the literature is not clear either on the definition of processing speed or on why and how it contributes to reading performance. In this study we demonstrated that processing speed, as measured by reaction time, is not a unitary construct. Using the diffusion model of two-choice reaction time, we assessed processing speed in a series of same-different reaction time tasks for letter and number strings. We demonstrated that the association between reaction time and reading performance is driven by processing speed for reading-related information, but not motor or sensory encoding speed. PMID:22612543
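
    A minimal sketch of the two-choice diffusion model referenced above: evidence accumulates with drift v and unit noise until it reaches the upper bound a (correct) or 0 (error), and a non-decision time ter captures the sensory and motor encoding components the study separates out. All parameter values are illustrative, not the fitted ones.

```python
import random

def diffusion_trial(rng, v=0.25, a=1.0, z=0.5, ter=0.3, dt=0.001, s=1.0):
    """Simulate one two-choice trial; return (reaction_time, correct).

    v: drift rate, a: boundary separation, z: relative start point,
    ter: non-decision time, s: diffusion coefficient (noise scale).
    """
    x, t = z * a, 0.0
    while 0.0 < x < a:
        # Euler-Maruyama step of the diffusion process
        x += v * dt + s * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return ter + t, x >= a

rng = random.Random(1)
trials = [diffusion_trial(rng) for _ in range(500)]
mean_rt = sum(rt for rt, _ in trials) / len(trials)
accuracy = sum(ok for _, ok in trials) / len(trials)
print(mean_rt, accuracy)
```

    Decomposing reaction time this way is what lets drift rate (information processing speed) be separated from non-decision components.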

  4. The Sugar Model: Autocatalytic Activity of the Triose Ammonia Reaction

    NASA Astrophysics Data System (ADS)

    Weber, Arthur L.

    2007-04-01

    Reaction of triose sugars with ammonia under anaerobic conditions yielded autocatalytic products. The autocatalytic behavior of the products was examined by measuring the effect of the crude triose ammonia reaction product on the kinetics of a second identical triose ammonia reaction. The reaction product showed autocatalytic activity by increasing both the rate of disappearance of triose and the rate of formation of pyruvaldehyde, the product of triose dehydration. This synthetic process is considered a reasonable model of origin-of-life chemistry because it uses plausible prebiotic substrates, and resembles modern biosynthesis by employing the energized carbon groups of sugars to drive the synthesis of autocatalytic molecules.
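
    A minimal kinetic sketch consistent with the reported behavior: triose S is consumed both through a slow uncatalyzed channel (k0) and through a channel catalyzed by the product P (k1), so seeding a fresh reaction with prior product accelerates triose disappearance. The rate law and constants are illustrative assumptions, not the paper's fitted mechanism.

```python
def simulate(S0=1.0, P0=0.0, k0=0.01, k1=0.5, dt=0.01, t_end=5.0):
    """Forward-Euler integration of dS/dt = -(k0 + k1*P)*S, dP/dt = +(k0 + k1*P)*S."""
    S, P, t = S0, P0, 0.0
    while t < t_end:
        rate = (k0 + k1 * P) * S  # autocatalytic: rate grows with product P
        S -= rate * dt
        P += rate * dt
        t += dt
    return S, P

S_unseeded, _ = simulate(P0=0.0)  # fresh reaction
S_seeded, _ = simulate(P0=0.2)    # seeded with crude product of a prior run
print(S_unseeded, S_seeded)       # seeding leaves less unreacted triose
```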

  5. A Kinetic Ladle Furnace Process Simulation Model: Effective Equilibrium Reaction Zone Model Using FactSage Macro Processing

    NASA Astrophysics Data System (ADS)

    Van Ende, Marie-Aline; Jung, In-Ho

    2017-02-01

    The ladle furnace (LF) is widely used in the secondary steelmaking process in particular for the de-sulfurization, alloying, and reheating of liquid steel prior to the casting process. The Effective Equilibrium Reaction Zone model using the FactSage macro processing code was applied to develop a kinetic LF process model. The slag/metal interactions, flux additions to slag, various metallic additions to steel, and arcing in the LF process were taken into account to describe the variations of chemistry and temperature of steel and slag. The LF operation data for several steel grades from different plants were accurately described using the present kinetic model.

  6. A Film Depositional Model of Permeability for Mineral Reactions in Unsaturated Media.

    SciTech Connect

    Freedman, Vicky L.; Saripalli, Prasad; Bacon, Diana H.; Meyer, Philip D.

    2004-11-15

    A new modeling approach based on the biofilm models of Taylor et al. (1990, Water Resources Research, 26, 2153-2159) has been developed for modeling changes in porosity and permeability in saturated porous media and implemented in an inorganic reactive transport code. Application of the film depositional models to mineral precipitation and dissolution reactions requires that calculations of mineral films change dynamically as a function of time-dependent reaction processes. Since calculations of film thicknesses do not consider mineral density, results show that the film porosity model does not adequately describe volumetric changes in the porous medium. These effects can be included in permeability calculations by coupling the film permeability models (Mualem and Childs and Collis-George) to a volumetric model that incorporates both mineral density and reactive surface area. Model simulations demonstrate that an important difference between the biofilm and mineral film models lies in the translation of changes in mineral radii to changes in pore space. Including the effect of tortuosity on pore radii changes improves the performance of the Mualem permeability model for both precipitation and dissolution. Results from simulations of simultaneous dissolution and secondary mineral precipitation provide reasonable estimates of porosity and permeability. Moreover, a comparison of experimental and simulated data shows that the model yields qualitatively reasonable results for permeability changes due to solid-aqueous phase reactions.
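
    The volumetric coupling of mineral density to porosity, and of porosity to permeability, can be sketched as follows. This is a generic illustration, not the paper's film models: a Kozeny-Carman relation stands in for the Mualem and Childs & Collis-George formulations, and the mineral data and reactor volume are assumed.

```python
def porosity_after_precipitation(phi0, mol_precipitated, molar_mass, density,
                                 bulk_volume):
    """Porosity loss = precipitated mineral volume / bulk volume.

    Mineral volume uses molar mass and density, the coupling the abstract
    notes is missing from pure film-thickness calculations.
    """
    mineral_volume = mol_precipitated * molar_mass / density  # m^3
    return phi0 - mineral_volume / bulk_volume

def kozeny_carman(k0, phi0, phi):
    """Permeability rescaled from a reference state (generic Kozeny-Carman)."""
    return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2

phi0, k0 = 0.30, 1.0e-12  # initial porosity and permeability (m^2), assumed
# Precipitate 50 mol of calcite (M = 0.1001 kg/mol, rho = 2710 kg/m^3)
# in 1 m^3 of porous medium:
phi = porosity_after_precipitation(phi0, 50.0, 0.1001, 2710.0, 1.0)
k = kozeny_carman(k0, phi0, phi)
print(phi, k)  # precipitation reduces both porosity and permeability
```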

  7. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    SciTech Connect

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.; Mahadevan, Sankaran

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements, producing high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and the factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code, based on the Multiphysics Object Oriented Simulation Environment. The implemented model is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how the ingress of sodium and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples, and the results are summarized.

  8. Formal modeling of a system of chemical reactions under uncertainty.

    PubMed

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of extracellular-signal-regulated kinase (ERK) pathway.
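
    The two approximations named above can be illustrated on a single mass-action rate. This is a generic interval-arithmetic sketch, not the authors' algorithm: imprecise rate constants and concentrations are carried as intervals, and the midpoint approximation collapses each interval to its center.

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Closed interval [lo, hi] with interval-arithmetic multiplication."""
    lo: float
    hi: float

    def __mul__(self, other):
        products = [a * b for a, b in itertools.product((self.lo, self.hi),
                                                        (other.lo, other.hi))]
        return Interval(min(products), max(products))

    def midpoint(self):
        return 0.5 * (self.lo + self.hi)

k = Interval(0.8, 1.2)  # imprecise rate constant
A = Interval(0.9, 1.1)  # imprecise concentration of species A
B = Interval(0.4, 0.6)  # imprecise concentration of species B

# Mass-action rate r = k*[A]*[B] under each approximation:
rate_interval = k * A * B
rate_midpoint = k.midpoint() * A.midpoint() * B.midpoint()
print(rate_interval, rate_midpoint)
```

    The interval result bounds every rate consistent with the data, while the midpoint result is a single representative value inside those bounds.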

  9. The APS SASE FEL : modeling and code comparison.

    SciTech Connect

    Biedron, S. G.

    1999-04-20

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons among these independent formulations show good agreement for the parameters of the APS SASE FEL.

  10. Code interoperability and standard data formats in quantum chemistry and quantum dynamics: The Q5/D5Cost data model.

    PubMed

    Rossi, Elda; Evangelisti, Stefano; Laganà, Antonio; Monari, Antonio; Rampino, Sergio; Verdicchio, Marco; Baldridge, Kim K; Bendazzoli, Gian Luigi; Borini, Stefano; Cimiraglia, Renzo; Angeli, Celestino; Kallay, Peter; Lüthi, Hans P; Ruud, Kenneth; Sanchez-Marin, José; Scemama, Anthony; Szalay, Peter G; Tajti, Attila

    2014-03-30

    Code interoperability and the search for domain-specific standard data formats represent critical issues in many areas of computational science. The advent of novel computing infrastructures such as computational grids and clouds make these issues even more urgent. The design and implementation of a common data format for quantum chemistry (QC) and quantum dynamics (QD) computer programs is discussed with reference to the research performed in the course of two Collaboration in Science and Technology Actions. The specific data models adopted, Q5Cost and D5Cost, are shown to work for a number of interoperating codes, regardless of the type and amount of information (small or large datasets) to be exchanged. The codes are either interfaced directly, or transfer data by means of wrappers; both types of data exchange are supported by the Q5/D5Cost library. Further, the exchange of data between QC and QD codes is addressed. As a proof of concept, the H + H2 reaction is discussed. The proposed scheme is shown to provide an excellent basis for cooperative code development, even across domain boundaries. Moreover, the scheme presented is found to be useful also as a production tool in the grid distributed computing environment. Copyright © 2013 Wiley Periodicals, Inc.

  11. Turing patterns in a reaction-diffusion model with the Degn-Harrison reaction scheme

    NASA Astrophysics Data System (ADS)

    Li, Shanbing; Wu, Jianhua; Dong, Yaying

    2015-09-01

    In this paper, we consider a reaction-diffusion model with the Degn-Harrison reaction scheme. Some fundamental analytic properties of nonconstant positive solutions are first investigated. We next study the stability of the constant steady-state solution for both the ODE and PDE models. Our results also indicate that if either the size of the reactor or the effective diffusion rate is large enough, then the system does not admit nonconstant positive solutions. Finally, we establish the global structure of steady-state bifurcations from simple eigenvalues by bifurcation theory, and the local structure of steady-state bifurcations from double eigenvalues by the techniques of space decomposition and the implicit function theorem.
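
    The constant steady state that the stability analysis starts from can be computed directly. The sketch below assumes the Degn-Harrison kinetics in the form commonly written as f(u,v) = a - u - u v/(1 + k u²) and g(u,v) = b - u v/(1 + k u²); this form and the parameter values are assumptions for illustration. Subtracting g from f gives u* = a - b, and g = 0 then gives v*.

```python
def f(u, v, a, b, k):
    """Kinetic term of the activator equation (assumed Degn-Harrison form)."""
    return a - u - u * v / (1.0 + k * u * u)

def g(u, v, a, b, k):
    """Kinetic term of the inhibitor equation (assumed Degn-Harrison form)."""
    return b - u * v / (1.0 + k * u * u)

def steady_state(a, b, k):
    """Constant steady state: f - g = 0 gives u* = a - b; g = 0 gives v*."""
    u = a - b
    v = b * (1.0 + k * u * u) / u
    return u, v

a, b, k = 3.0, 1.0, 0.5  # illustrative parameters (require a > b > 0)
u_star, v_star = steady_state(a, b, k)
# Both kinetic terms vanish at the steady state:
print(u_star, v_star, f(u_star, v_star, a, b, k), g(u_star, v_star, a, b, k))
```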

  12. The modeling of core melting and in-vessel corium relocation in the APRIL code

    SciTech Connect

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T.

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWRs). New models of core melting and in-vessel corium debris relocation, developed for implementation in the APRIL computer code, are presented. The results of model testing and validation are given, including comparisons against available experimental data and parametric/sensitivity studies. The application of these models, as parts of the APRIL code, to simulate accident progression in a typical BWR reactor is also presented.

  13. Chemical Reaction and Flow Modeling in Fullerene and Nanotube Production

    NASA Technical Reports Server (NTRS)

    Scott, Carl D.; Farhat, Samir; Greendyke, Robert B.

    2004-01-01

    addresses modeling of the arc process for fullerene and carbon nanotube production using 0-D, 1-D and 2-D fluid flow models. The third part addresses simulations of the pulsed laser ablation process using time-dependent techniques in 2-D, and a steady-state 2-D simulation of a continuous laser ablation process. The fourth part addresses steady-state modeling in 0-D and 2-D of the HiPco process. In each of the simulations, a variety of simplifications are made that enable one to concentrate on one aspect or another of the process. Some simplifications can be made to the chemical reaction models, e.g., reducing the number of species by lumping some of them together into a representative species. Other simulations are carried out by eliminating the chemistry altogether in order to concentrate on the fluid dynamics. When solving problems with a large number of species in more than one spatial dimension, it is almost imperative that the problem be decoupled by solving for the fluid dynamics to find the fluid motion and the temperature history of "particles" of fluid moving through a reactor. One can then solve the chemical rate equations with complex chemistry following the temperature and pressure history. One difficulty is that mixing with an ambient gas is often involved; therefore, one needs to take dilution and mixing into account, since they change the ratio of carbon species to background gas. Commercially available codes may have no provision for including dilution as part of the input, so one must write special solvers to include dilution in decoupled problems. The article addresses both fullerene production and single-walled carbon nanotube (SWNT) production. There are at least two schemes or concepts of SWNT growth. This article only addresses growth in the gas phase by carbon and catalyst cluster growth and SWNT formation by the addition of carbon. There are other models that conceive of SWNT growth as a phase separation process from clusters me

  14. Modified version of the combined model of photonucleon reactions

    SciTech Connect

    Ishkhanov, B. S.; Orlin, V. N.

    2015-07-15

    A refined version of the combined photonucleon-reaction model is described. This version makes it possible to take into account the effect of structural features of the doorway dipole state on photonucleon reactions in the energy range of E{sub γ} ≤ 30 MeV. In relation to the previous version of the model, the treatment of isospin effects at the preequilibrium and evaporation reaction stages is refined; in addition, the description of the semidirect effect caused by nucleon emission from the doorway dipole state is improved. The model in question is used to study photonucleon reactions on the isotopes {sup 35-56}Ca and {sup 102-134}Sn in the energy range indicated above.

  15. Quantum Chemical Modeling of the Dehalogenation Reaction of Haloalcohol Dehalogenase.

    PubMed

    Hopmann, Kathrin H; Himo, Fahmi

    2008-07-01

    The dehalogenation reaction of haloalcohol dehalogenase HheC from Agrobacterium radiobacter AD1 was investigated theoretically using hybrid density functional theory methods. HheC catalyzes the enantioselective conversion of halohydrins into their corresponding epoxides. The reaction is proposed to be mediated by a catalytic Ser132-Tyr145-Arg149 triad, and a distinct halide binding site is suggested to facilitate halide displacement by stabilizing the free ion. We investigated the HheC-mediated dehalogenation of (R)-2-chloro-1-phenylethanol using three quantum chemical models of various sizes. The calculated barriers and reaction energies give support to the suggested reaction mechanism. The dehalogenation occurs in a single concerted step, in which Tyr145 abstracts a proton from the halohydrin substrate and the substrate oxyanion displaces the chloride ion, forming the epoxide. Characterization of the involved stationary points is provided. Furthermore, by using three different models of the halide binding site, we are able to assess the adopted modeling methodology.

  16. Reaction-to-fire testing and modeling for wood products

    Treesearch

    Mark A. Dietenberger; Robert H. White

    2001-01-01

    In this review we primarily discuss our use of the oxygen consumption calorimeter (ASTM E1354 for cone calorimeter and ISO9705 for room/corner tests) and fire growth modeling to evaluate treated wood products. With recent development towards performance-based building codes, new methodology requires engineering calculations of various fire growth scenarios. The initial...

  17. Implicit solvation model for density-functional study of nanocrystal surfaces and reaction pathways

    NASA Astrophysics Data System (ADS)

    Mathew, Kiran; Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Arias, T. A.; Hennig, Richard G.

    2014-02-01

    Solid-liquid interfaces are at the heart of many modern-day technologies and provide a challenge to many materials simulation methods. A realistic first-principles computational study of such systems entails the inclusion of solvent effects. In this work, we implement an implicit solvation model that has a firm theoretical foundation into the widely used density-functional code VASP (Vienna Ab initio Simulation Package). The implicit solvation model follows the framework of joint density functional theory. We describe the framework, our algorithm and implementation, and benchmarks for small molecular systems. We apply the solvation model to study the surface energies of different facets of semiconducting and metallic nanocrystals and the SN2 reaction pathway. We find that solvation reduces the surface energies of the nanocrystals, especially for the semiconducting ones, and increases the energy barrier of the SN2 reaction.

  18. Regimes of chemical reaction waves initiated by nonuniform initial conditions for detailed chemical reaction models.

    PubMed

    Liberman, M A; Kiverin, A D; Ivanov, M F

    2012-05-01

    Regimes of chemical reaction wave propagation initiated by initial temperature nonuniformity in gaseous mixtures, whose chemistry is governed by chain-branching kinetics, are studied using a multispecies transport model and a detailed chemical model. Possible regimes of reaction wave propagation are identified for stoichiometric hydrogen-oxygen and hydrogen-air mixtures in a wide range of initial pressures and temperature levels, depending on the initial nonuniformity steepness. The limits of the regimes of reaction wave propagation depend upon the values of the spontaneous wave speed and the characteristic velocities of the problem. It is shown that one-step kinetics can reproduce neither the quantitative nor the qualitative features of the ignition process in real gaseous mixtures, because the difference between the induction time and the time when the exothermic reaction begins significantly affects the ignition, evolution, and coupling of the spontaneous reaction wave and the pressure wave, especially at lower temperatures. We show that all the regimes initiated by the temperature gradient occur for much shallower temperature gradients than predicted by a one-step model. The difference is very large for lower initial pressures and for slowly reacting mixtures. In this way the paper provides an answer to questions, important in practice, about the ignition energy, its distribution, and the scale of the initial nonuniformity required for ignition in one or another regime of combustion wave propagation.
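
    The spontaneous-wave concept these regimes build on can be sketched numerically: for an initial temperature profile T(x), the spontaneous reaction wave moves at u_sp = |dτ_ind/dx|⁻¹, where τ_ind is the induction time. The one-step Arrhenius induction-time fit below is purely illustrative; the paper's point is precisely that detailed chain-branching chemistry changes these numbers.

```python
import math

def tau_ind(T, A=1.0e-9, Ea_over_R=15000.0):
    """Assumed one-step Arrhenius induction-time fit (seconds)."""
    return A * math.exp(Ea_over_R / T)

def spontaneous_speed(x1, x2, T1, T2):
    """u_sp = |d tau_ind / dx|^(-1), finite-difference estimate (m/s)."""
    dtau_dx = (tau_ind(T2) - tau_ind(T1)) / (x2 - x1)
    return abs(1.0 / dtau_dx)

# A shallow temperature gradient gives a fast spontaneous wave; a steep one
# gives a slow wave that can couple to the pressure wave:
shallow = spontaneous_speed(0.0, 0.01, 1200.0, 1199.0)  # 1 K over 1 cm
steep = spontaneous_speed(0.0, 0.01, 1200.0, 1100.0)    # 100 K over 1 cm
print(shallow, steep)
```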

  19. Regimes of chemical reaction waves initiated by nonuniform initial conditions for detailed chemical reaction models

    NASA Astrophysics Data System (ADS)

    Liberman, M. A.; Kiverin, A. D.; Ivanov, M. F.

    2012-05-01

    Regimes of chemical reaction wave propagation initiated by initial temperature nonuniformity in gaseous mixtures, whose chemistry is governed by chain-branching kinetics, are studied using a multispecies transport model and a detailed chemical model. Possible regimes of reaction wave propagation are identified for stoichiometric hydrogen-oxygen and hydrogen-air mixtures in a wide range of initial pressures and temperature levels, depending on the initial nonuniformity steepness. The limits of the regimes of reaction wave propagation depend upon the values of the spontaneous wave speed and the characteristic velocities of the problem. It is shown that one-step kinetics can reproduce neither the quantitative nor the qualitative features of the ignition process in real gaseous mixtures, because the difference between the induction time and the time when the exothermic reaction begins significantly affects the ignition, evolution, and coupling of the spontaneous reaction wave and the pressure wave, especially at lower temperatures. We show that all the regimes initiated by the temperature gradient occur for much shallower temperature gradients than predicted by a one-step model. The difference is very large for lower initial pressures and for slowly reacting mixtures. In this way the paper provides an answer to questions, important in practice, about the ignition energy, its distribution, and the scale of the initial nonuniformity required for ignition in one or another regime of combustion wave propagation.

  20. Uncertainty Quantification and Learning in Geophysical Modeling: How Information is Coded into Dynamical Models

    NASA Astrophysics Data System (ADS)

    Gupta, H. V.

    2014-12-01

    There is a clear need for comprehensive quantification of simulation uncertainty when using geophysical models to support and inform decision-making. Further, it is clear that the nature of such uncertainty depends on the quality of information in (a) the forcing data (driver information), (b) the model code (prior information), and (c) the specific values of inferred model components that localize the model to the system of interest (inferred information). Of course, the relative quality of each varies with geophysical discipline and specific application. In this talk I will discuss a structured approach to characterizing how 'Information', and hence 'Uncertainty', is coded into the structures of physics-based geophysical models. I propose that a better understanding of what is meant by "Information", and how it is embodied in models and data, can offer a structured (less ad-hoc), robust and insightful basis for diagnostic learning through the model-data juxtaposition. In some fields, a natural consequence may be to emphasize the a priori role of System Architecture (Process Modeling) over that of the selection of System Parameterization, thereby emphasizing the more creative aspect of scientific investigation - the use of models for Discovery and Learning.

  1. A high burnup model developed for the DIONISIO code

    NASA Astrophysics Data System (ADS)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.

    2013-02-01

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burnup, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The good agreement with the data provided in the FUMEX II/III NEA data bank is readily apparent.

  2. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q. . Plasma Physics Lab.); Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L. )

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  3. Test code for the assessment and improvement of Reynolds stress models

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, comparisons with simulated channel flow and with flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  4. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock experiments, Kelvin-Helmholtz experiments, Rayleigh-Taylor experiments, plasma sheet experiments, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  5. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the greatly increased numerical problem size of pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared-memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions with the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)

  6. Molecular Detection of Methicillin-Resistant Staphylococcus aureus by Non-Protein Coding RNA-Mediated Monoplex Polymerase Chain Reaction

    PubMed Central

    Soo Yean, Cheryl Yeap; Selva Raju, Kishanraj; Xavier, Rathinam; Subramaniam, Sreeramanan; Gopinath, Subash C. B.; Chinni, Suresh V.

    2016-01-01

    Non-protein coding RNA (npcRNA) is a functional RNA molecule that is not translated into a protein. Bacterial npcRNAs are structurally diversified molecules, typically 50–200 nucleotides in length. They play a crucial physiological role in cellular networking, including stress responses, replication and bacterial virulence. In this study, by using an identified npcRNA gene (Sau-02) in Methicillin-resistant Staphylococcus aureus (MRSA), we identified the Gram-positive bacterium S. aureus. A Sau-02-mediated monoplex Polymerase Chain Reaction (PCR) assay was designed that displayed high sensitivity and specificity. Fourteen different bacteria and 18 S. aureus strains were tested, and the results showed that the Sau-02 gene is specific to S. aureus. The detection limit was tested against genomic DNA from MRSA and was found to be ~10 genome copies. Further, the detection was extended to whole-cell MRSA detection, where a detection limit of two bacterial cells was reached. The monoplex PCR assay demonstrated in this study is a novel detection method that can replicate other npcRNA-mediated detection assays. PMID:27367909

  7. Strong plasma screening in thermonuclear reactions: Electron drop model

    NASA Astrophysics Data System (ADS)

    Kravchuk, P. A.; Yakovlev, D. G.

    2014-01-01

    We analyze enhancement of thermonuclear fusion reactions due to strong plasma screening in dense matter using a simple electron drop model. In the model we assume fusion in a potential that is screened by an effective electron cloud around colliding nuclei (extended Salpeter ion-sphere model). We calculate the mean-field screened Coulomb potentials for atomic nuclei with equal and nonequal charges, appropriate astrophysical S factors, and enhancement factors of reaction rates. As a byproduct, we study the analytic behavior of the screening potential at small separations between the reactants. In this model, astrophysical S factors depend not only on nuclear physics but on plasma screening as well. The enhancement factors are in good agreement with calculations by other methods. This allows us to formulate a combined, pure analytic model of strong plasma screening in thermonuclear reactions. The results can be useful for simulating nuclear burning in white dwarfs and neutron stars.

  8. A vectorized Monte Carlo code for modeling photon transport in SPECT

    SciTech Connect

    Smith, M.F. ); Floyd, C.E. Jr.; Jaszczak, R.J. Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 )

    1993-07-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT.
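
    The history-to-event reordering described above can be sketched in a few lines. This toy (NumPy array operations standing in for the Stellar GS1000 vector unit, a uniform attenuator with an assumed radius and attenuation coefficient) is an illustration of the event-based idea, not the FORTRAN77 code itself:

```python
import numpy as np

def simulate_photons_event_based(n_photons, mu=0.15, radius=10.0, seed=0):
    """Event-based sketch: all photon histories advance together in array ops.

    mu: assumed linear attenuation coefficient (1/cm); radius: assumed
    attenuator size (cm). Hypothetical toy geometry, not the SPECT code above.
    Returns the fraction of photons escaping unscattered.
    """
    rng = np.random.default_rng(seed)
    # Sample a free path length for every history at once (vectorized),
    # instead of one scalar history at a time.
    path = -np.log(rng.random(n_photons)) / mu
    # Photons whose sampled path exceeds the attenuator escape unscattered.
    escaped = path > radius
    return escaped.mean()
```

The estimated escape fraction can be checked against the analytic value exp(-mu * radius) for this toy geometry.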

  9. Knockout reactions on p-shell nuclei for tests of structure and reaction models

    NASA Astrophysics Data System (ADS)

    Kuchera, A. N.; Bazin, D.; Babo, M.; Baumann, T.; Bowry, M.; Bradt, J.; Brown, J.; Deyoung, P. A.; Elman, B.; Finck, J. E.; Gade, A.; Grinyer, G. F.; Jones, M. D.; Lunderberg, E.; Redpath, T.; Rogers, W. F.; Stiefel, K.; Thoennessen, M.; Weisshaar, D.; Whitmore, K.

    2015-10-01

    A series of knockout reactions on p-shell nuclei were studied to extract exclusive cross sections and to investigate the neutron knockout mechanism. The measured cross sections provide stringent tests of shell model and ab initio calculations while measurements of neutron+residual coincidences test the accuracy and validity of reaction models used to predict cross sections. Six different beams ranging from A = 7 to 12 were produced at the NSCL totaling measurements of nine different reaction settings. The reaction settings were determined by the magnetic field of the Sweeper magnet which bends the residues into charged particle detectors. The reaction target was surrounded by the high efficiency CsI array, CAESAR, to tag gamma rays for cross section measurements of low-lying excited states. Additionally, knocked out neutrons were detected with MoNA-LISA in coincidence with the charged residuals. Preliminary results will be discussed. This work is partially supported by the National Science Foundation under Grant No. PHY11-02511 and the Department of Energy National Nuclear Security Administration under Award No. DE-NA0000979.

  10. A convolutional code-based sequence analysis model and its application.

    PubMed

    Liu, Xiao; Geng, Xiaoli

    2013-04-16

    A new approach for encoding DNA sequences as input for DNA sequence analysis is proposed using the error correction coding theory of communication engineering. The encoder was designed as a convolutional code model whose generator matrix is designed based on the degeneracy of codons, with a codon treated in the model as an informational unit. The utility of the proposed model was demonstrated through the analysis of twelve prokaryote and nine eukaryote DNA sequences having different GC contents. Distinct differences in code distances were observed near the initiation and termination sites in the open reading frame, which provided a well-regulated characterization of the DNA sequences. Clearly distinguished period-3 features appeared in the coding regions, and the characteristic average code distances of the analyzed sequences were approximately proportional to their GC contents, particularly in the selected prokaryotic organisms, presenting the potential utility as an added taxonomic characteristic for use in studying the relationships of living organisms.
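
    A hedged sketch of the idea: the paper derives its generator matrix from codon degeneracy, but the mechanics of convolutional encoding of DNA and a "code distance" between encoded windows can be illustrated with the classic rate-1/2 (7,5)-octal encoder and an assumed two-bit base mapping:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3.

    Illustrative stand-in: the paper designs its generator matrix from codon
    degeneracy; here we use the textbook (7,5) octal polynomials instead.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111       # shift register of 3 bits
        out.append(bin(state & g1).count("1") % 2)  # parity tap 1
        out.append(bin(state & g2).count("1") % 2)  # parity tap 2
    return out

# Assumed base-to-bit mapping (hypothetical, for illustration only).
BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def encode_dna(seq):
    bits = [b for base in seq for b in BASE_BITS[base]]
    return conv_encode(bits)

def code_distance(seq1, seq2):
    """Hamming distance between the encoded streams of two equal-length windows."""
    e1, e2 = encode_dna(seq1), encode_dna(seq2)
    return sum(x != y for x, y in zip(e1, e2))
```

Each input base yields two bits and each bit yields two output bits, so an encoded codon occupies 12 bits in this sketch.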

  11. A Lattice Boltzmann Model for Oscillating Reaction-Diffusion

    NASA Astrophysics Data System (ADS)

    Rodríguez-Romo, Suemi; Ibañez-Orozco, Oscar; Sosa-Herrera, Antonio

    2016-07-01

    A computational algorithm based on the lattice Boltzmann method (LBM) is proposed to model reaction-diffusion systems. In this paper, we focus on how nonlinear chemical oscillators like the Belousov-Zhabotinsky (BZ) and the chlorite-iodide-malonic acid (CIMA) reactions can be modeled by LBM, and provide new insight into the nature and applications of oscillating reactions. We use Gaussian pulse initial concentrations of sulfuric acid at different locations in a two-dimensional reactor with nondiffusive boundary walls. We show how these systems evolve to a chaotic attractor and produce specific pattern images, portrayed along the reactions' trajectories to the corresponding chaotic attractor, that can be used in robotic control.
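
    The LBM core that such models build on can be sketched for pure diffusion on a D2Q5 lattice; the BZ/CIMA chemistry would enter as an additional source term in the collision step. Weights and geometry here are the standard textbook choices, not the authors' exact setup:

```python
import numpy as np

def lbm_diffusion_step(f, tau):
    """One BGK collision + streaming step of a D2Q5 lattice Boltzmann solver
    for pure diffusion (minimal sketch; reaction kinetics would be added as
    an extra term in the collision). f has shape (5, nx, ny)."""
    w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])           # rest + 4 neighbours
    c = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]    # D2Q5 velocities
    C = f.sum(axis=0)                                  # concentration field
    for i in range(5):
        feq = w[i] * C
        f[i] += (feq - f[i]) / tau                     # BGK relaxation
        f[i] = np.roll(f[i], shift=c[i], axis=(0, 1))  # periodic streaming
    return f

def init_pulse(n=32):
    """Point pulse of concentration in the middle of an n x n lattice."""
    C = np.zeros((n, n)); C[n // 2, n // 2] = 1.0
    w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])
    return w[:, None, None] * C
```

With these weights the diffusivity is D = (tau - 1/2)/3 in lattice units; the scheme conserves total concentration while the pulse spreads.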

  12. Designer substrate library for quantitative, predictive modeling of reaction performance

    PubMed Central

    Bess, Elizabeth N.; Bischoff, Amanda J.; Sigman, Matthew S.

    2014-01-01

    Assessment of reaction substrate scope is often a qualitative endeavor that provides general indications of substrate sensitivity to a measured reaction outcome. Unfortunately, this field standard typically falls short of enabling the quantitative prediction of new substrates’ performance. The disconnection between a reaction’s development and the quantitative prediction of new substrates’ behavior limits the applicative usefulness of many methodologies. Herein, we present a method by which substrate libraries can be systematically developed to enable quantitative modeling of reaction systems and the prediction of new reaction outcomes. Presented in the context of rhodium-catalyzed asymmetric transfer hydrogenation, these models quantify the molecular features that influence enantioselection and, in so doing, lend mechanistic insight to the modes of asymmetric induction. PMID:25267648

  13. Uncertainty quantification for quantum chemical models of complex reaction networks.

    PubMed

    Proppe, Jonny; Husch, Tamara; Simm, Gregor N; Reiher, Markus

    2016-12-22

    For the quantitative understanding of complex chemical reaction mechanisms, it is, in general, necessary to accurately determine the corresponding free energy surface and to solve the resulting continuous-time reaction rate equations for a continuous state space. For a general (complex) reaction network, it is computationally hard to fulfill these two requirements. However, it is possible to approximately address these challenges in a physically consistent way. On the one hand, it may be sufficient to consider approximate free energies if a reliable uncertainty measure can be provided. On the other hand, a highly resolved time evolution may not be necessary to still determine quantitative fluxes in a reaction network if one is interested in specific time scales. In this paper, we present discrete-time kinetic simulations in discrete state space taking free energy uncertainties into account. The method builds upon thermo-chemical data obtained from electronic structure calculations in a condensed-phase model. Our kinetic approach supports the analysis of general reaction networks spanning multiple time scales, which is here demonstrated for the example of the formose reaction. An important application of our approach is the detection of regions in a reaction network which require further investigation, given the uncertainties introduced by both approximate electronic structure methods and kinetic models. Such cases can then be studied in greater detail with more sophisticated first-principles calculations and kinetic simulations.
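
    The discrete-time, uncertainty-aware kinetics can be illustrated on a hypothetical one-step A -> B network: the activation free energy is drawn from a Gaussian to mimic electronic-structure uncertainty, converted to a rate by transition-state theory, and each draw is propagated with a fixed time step. All numbers are assumptions for illustration, not the formose-network values of the paper:

```python
import math
import random

def eyring_rate(dG_kJ, T=298.15):
    """TST (Eyring) rate constant from an activation free energy in kJ/mol."""
    kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314462618
    return (kB * T / h) * math.exp(-dG_kJ * 1e3 / (R * T))

def sample_final_A(dG_mean=90.0, dG_sigma=5.0, t=3600.0,
                   n_steps=1000, n_samples=200, seed=1):
    """Discrete-time propagation of a toy A -> B step with the barrier drawn
    from a Gaussian; returns the distribution of remaining A after time t."""
    rng = random.Random(seed)
    dt = t / n_steps
    finals = []
    for _ in range(n_samples):
        k = eyring_rate(rng.gauss(dG_mean, dG_sigma))
        A = 1.0
        for _ in range(n_steps):
            A -= k * A * dt          # forward Euler for dA/dt = -k A
            A = max(A, 0.0)
        finals.append(A)
    return finals
```

The spread of the resulting concentrations is the kind of flux uncertainty that flags network regions needing more accurate electronic-structure data.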

  14. Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions.

    ERIC Educational Resources Information Center

    Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.

    1998-01-01

    As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…

  16. Mathematical models and illustrative results for the RINGBEARER II monopole/dipole beam-propagation code

    SciTech Connect

    Chambers, F.W.; Masamitsu, J.A.; Lee, E.P.

    1982-05-24

    RINGBEARER II is a linearized monopole/dipole particle simulation code for studying intense relativistic electron beam propagation in gas. In this report the mathematical models utilized for beam particle dynamics and pinch field computation are delineated. Difficulties encountered in code operations and some remedies are discussed. Sample output is presented detailing the diagnostics and the methods of display and analysis utilized.

  17. Modelling the biogeochemical cycle of silicon in soils using the reactive transport code MIN3P

    NASA Astrophysics Data System (ADS)

    Gerard, F.; Mayer, K. U.; Hodson, M. J.; Meunier, J.

    2006-12-01

    We investigated the biogeochemical cycling of Si in an acidic brown soil covered by a coniferous forest (Douglas fir) based on a comprehensive data set and reactive transport modelling. Both published and original data enabled us to construct a conceptual model on which the development of a numerical model was based. We modified the reactive transport code MIN3P, which solves thermodynamic and kinetic reactions coupled with vadose zone flow and solute transport. Simulations were performed for a one-dimensional heterogeneous soil profile and were constrained by observed data including daily soil temperature, plant transpiration, throughfall, and dissolved Si in solutions collected beneath the organic layer. Reactive transport modelling was first used to test the validity of the hypothesis that a dynamic balance between Si uptake by plants and release by weathering controls aqueous Si-concentrations. We were able to calibrate the model quite accurately by stepwise adjustment of the relevant parameters. The capability of the model to predict Si-concentrations was good. Mass balance calculations indicate that only 40% of the biogeochemical cycle of Si is controlled by weathering and that about 60% of Si-cycling is related to biological processes (i.e. Si uptake by plants and dissolution of biogenic Si). Such a large contribution of biological processes was not anticipated considering the temperate climate regime, but may be explained by the high biomass productivity of the planted coniferous species. The large contribution of passive Si-uptake by vegetation permits the conservation of seasonal concentration variations caused by temperature-induced weathering, although the modelling suggests that the latter process was of lesser importance relative to biological Si-cycling.

  18. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  19. Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation

    DTIC Science & Technology

    2009-05-20

    Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation. Shanna-Shaye Forbes, Electrical Engineering and Computer Sciences, May 2009. ...periodic and there are multiple modes of operation. Ptolemy II is a university-based open source modeling and simulation framework that supports model

  20. A practical guide to modelling enzyme-catalysed reactions

    PubMed Central

    Lonsdale, Richard; Harvey, Jeremy N.; Mulholland, Adrian J.

    2012-01-01

    Molecular modelling and simulation methods are increasingly at the forefront of elucidating mechanisms of enzyme-catalysed reactions, and shedding light on the determinants of specificity and efficiency of catalysis. These methods have the potential to assist in drug discovery and the design of novel protein catalysts. This Tutorial Review highlights some of the most widely used modelling methods and some successful applications. Modelling protocols commonly applied in studying enzyme-catalysed reactions are outlined here, and some practical implications are considered, with cytochrome P450 enzymes used as a specific example. PMID:22278388

  1. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code which is a pressure based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and the wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
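
    The reordering into long single arrays can be illustrated with a toy drag integrator (not FDNS's actual trajectory scheme): the scalar per-particle loop and the array form compute identical physics, but the latter exposes the whole particle population to pipelined/vector execution:

```python
import numpy as np

def step_scalar(x, v, drag, dt):
    """History-style update: one particle at a time (the slow FDNS-2DEL layout)."""
    for i in range(len(x)):
        v[i] += -drag * v[i] * dt   # simple Stokes-like drag, assumed physics
        x[i] += v[i] * dt
    return x, v

def step_vectorized(x, v, drag, dt):
    """Same physics reordered into long single arrays: the layout that lets a
    vector unit (or NumPy) process the whole particle population at once."""
    v += -drag * v * dt
    x += v * dt
    return x, v
```

Because there is no cross-particle coupling in this step, both orderings produce bit-for-bit comparable trajectories.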

  2. Computerized reduction of elementary reaction sets for CFD combustion modeling

    NASA Technical Reports Server (NTRS)

    Wikstrom, Carl V.

    1992-01-01

    Modeling of chemistry in Computational Fluid Dynamics can be the most time-consuming aspect of many applications. If the entire set of elementary reactions is to be solved, a set of stiff ordinary differential equations must be integrated. Some of the reactions take place at very high rates, requiring short time steps, while others take place more slowly and make little progress in the short time step integration.

  3. Recent Developments of the Liège Intranuclear Cascade Model in View of its Use into High-energy Transport Codes

    NASA Astrophysics Data System (ADS)

    Leray, S.; Boudard, A.; Braunn, B.; Cugnon, J.; David, J. C.; Leprince, A.; Mancusi, D.

    2014-04-01

    Recent extensions of the Liège Intranuclear Cascade model, INCL, at energies below 100 MeV and for light-ion (up to oxygen) induced reactions are reported. Comparisons with relevant experimental data are shown. The model has been implemented into several high-energy transport codes allowing simulations in a wide domain of applications. Examples of simulations performed for spallation targets with the model implemented into MCNPX and in the domain of medical applications with GEANT4 are presented.

  4. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    DTIC Science & Technology

    2014-02-05

    ...file is text and could be created by the user, the format is very exacting and difficult to get correct. This makes BEASY GiD very useful. Rhino3D...user can manually write the text material data file, but it is exceedingly difficult to get the format precisely right. ...code that is an add-on to the software SolidWorks [8]. It runs on Windows on a laptop, desktop, or workstation. It is not portable to Macintosh or

  5. An Investigation of Model Catalyzed Hydrocarbon Formation Reactions

    SciTech Connect

    Tysoe, W. T.

    2001-05-02

    Work was focused on two areas aimed at understanding the chemistry of realistic catalytic systems: (1) The synthesis and characterization of model supported olefin metathesis catalysts. (2) Understanding the role of the carbonaceous layer present on Pd(111) single crystal model catalysts during reaction.

  6. Simple Reaction Time and Statistical Facilitation: A Parallel Grains Model

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf

    2003-01-01

    A race-like model is developed to account for various phenomena arising in simple reaction time (RT) tasks. Within the model, each stimulus is represented by a number of grains of information or activation processed in parallel. The stimulus is detected when a criterion number of activated grains reaches a decision center. Using the concept of…

  7. A mathematical model for foreign body reactions in 2D

    PubMed Central

    Su, Jianzhong; Gonzales, Humberto Perez; Todorov, Michail; Kojouharov, Hristo; Tang, Liping

    2010-01-01

    Foreign body reactions commonly refer to the network of immune and inflammatory reactions of humans or animals to foreign objects placed in tissues. They are basic biological processes, and are also highly relevant to bioengineering applications in implants, as fibrotic tissue formations surrounding medical implants have been found to substantially reduce the effectiveness of devices. Despite intensive research on determining the mechanisms governing such complex responses, few mechanistic mathematical models have been developed to study such foreign body reactions. This study focuses on a kinetics-based predictive tool to analyze the outcomes of multiple interacting complex reactions of various cells/proteins and biochemical processes and to understand transient behavior during the entire period (up to several months). A computational model in two spatial dimensions is constructed to investigate the time dynamics as well as the spatial variation of foreign body reaction kinetics. The simulation results are consistent with experimental data, and the model can facilitate quantitative insights for the study of foreign body reaction processes in general. PMID:21532988

  8. OntoADR a semantic resource describing adverse drug reactions to support searching, coding, and information retrieval.

    PubMed

    Souvignet, Julien; Declerck, Gunnar; Asfari, Hadyl; Jaulent, Marie-Christine; Bousquet, Cédric

    2016-10-01

    Efficient searching and coding in databases that use terminological resources requires that they support efficient data retrieval. The Medical Dictionary for Regulatory Activities (MedDRA) is a reference terminology for several countries and organizations to code adverse drug reactions (ADRs) for pharmacovigilance. Ontologies that are available in the medical domain provide several advantages such as reasoning to improve data retrieval. The field of pharmacovigilance does not yet benefit from a fully operational ontology to formally represent the MedDRA terms. Our objective was to build a semantic resource based on formal description logic to improve MedDRA term retrieval and aid the generation of on-demand custom groupings by appropriately and efficiently selecting terms: OntoADR. The method consists of the following steps: (1) mapping between MedDRA terms and SNOMED-CT, (2) generation of semantic definitions using semi-automatic methods, (3) storage of the resource and (4) manual curation by pharmacovigilance experts. We built a semantic resource for ADRs enabling a new type of semantics-based term search. OntoADR adds new search capabilities relative to previous approaches, overcoming the usual limitations of computation using lightweight description logic, such as the intractability of unions or negation queries, bringing it closer to user needs. Our automated approach for defining MedDRA terms enabled the association of at least one defining relationship with 67% of preferred terms. The curation work performed on our sample showed an error level of 14% for this automated approach. We tested OntoADR in practice, which allowed us to build custom groupings for several medical topics of interest. The methods we describe in this article could be adapted and extended to other terminologies which do not benefit from a formal semantic representation, thus enabling better data retrieval performance. Our custom groupings of MedDRA terms were used while performing signal

  9. Calibration of reaction rates for the CREST reactive-burn model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline

    2015-06-01

    In recent years, the hydrocode-based CREST reactive-burn model has had success in modelling a range of shock initiation and detonation propagation phenomena in polymer bonded explosives. CREST uses empirical reaction rates that depend on a function of the entropy of the non-reacted explosive, allowing the effects of initial temperature, porosity and double-shock desensitisation to be simulated without any modifications to the model. Until now, the sixteen reaction-rate coefficients have been manually calibrated by trial and error, using hydrocode simulations of a subset of sustained-shock initiation gas-gun experiments and the detonation size-effect curve for the explosive. This paper will describe the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using the well-established Particle Swarm Optimisation (PSO) technique. The automatic method submits multiple hydrocode simulations for each "particle" and analyses the results to determine the "misfit" to gas-gun and size-effect data. Over ~40 "generations," the PSO code finds a best set of reaction-rate coefficients that minimises the misfit. The method will be demonstrated by developing a new CREST model for EDC32, a conventional high explosive.
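
    A bare-bones version of the PSO loop described above (hypothetical bounds and swarm settings; the real objective would wrap hydrocode runs and the gas-gun/size-effect misfit rather than a test function):

```python
import random

def pso_minimise(misfit, bounds, n_particles=20, n_generations=40,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser: each particle is a candidate vector
    (e.g. reaction-rate coefficients) scored by a user-supplied misfit."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [misfit(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(n_generations):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            f = misfit(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(xs[i]), f
    return gbest, gbest_f
```

In the calibration setting, each misfit evaluation is a batch of hydrocode simulations, which is why the method parallelises naturally over particles.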

  10. A Computer Code for the Calculation of NLTE Model Atmospheres Using ALI

    NASA Astrophysics Data System (ADS)

    Kubát, J.

    2003-01-01

    A code for calculation of NLTE model atmospheres in hydrostatic and radiative equilibrium in either spherically symmetric or plane parallel geometry is described. The method of accelerated lambda iteration is used for the treatment of radiative transfer. Other equations (hydrostatic equilibrium, radiative equilibrium, statistical equilibrium, optical depth) are solved using the Newton-Raphson method (linearization). In addition to the standard output of the model atmosphere (dependence of temperature, density, radius, and population numbers on column mass depth) the code enables optional additional outputs for better understanding of processes in the atmosphere. The code is able to calculate model atmospheres of plane-parallel and spherically symmetric semi-infinite atmospheres as well as models of plane parallel and spherical shells. There is also an option for solving a restricted NLTE line-formation problem (radiative transfer and statistical equilibrium for a given model atmosphere). The overall scheme of the code is presented.
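
    The accelerated lambda iteration step can be sketched on a toy discretized problem, with a matrix standing in for the full Lambda operator and its diagonal as the approximate operator (a common ALI choice; the numbers and the Jacobi-type preconditioner here are illustrative, not this code's actual scheme):

```python
import numpy as np

def ali_solve(Lam, B, eps, n_iter=200):
    """Accelerated Lambda Iteration for the toy two-level-atom problem
    S = (1 - eps) * Lam @ S + eps * B, where Lam stands in for the full
    radiative-transfer Lambda operator and eps is the thermalization
    parameter. The diagonal of Lam is used as the approximate operator."""
    S = B.copy()
    Lstar = np.diag(Lam)                         # diagonal approximate operator
    for _ in range(n_iter):
        residual = (1 - eps) * (Lam @ S) + eps * B - S
        S = S + residual / (1.0 - (1 - eps) * Lstar)   # preconditioned update
    return S
```

Dividing the residual by 1 - (1 - eps) * Lambda* is what accelerates convergence relative to plain lambda iteration when eps is small.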

  11. Model of defect reactions and the influence of clustering in pulse-neutron-irradiated Si

    SciTech Connect

    Myers, S. M.; Cooper, P. J.; Wampler, W. R.

    2008-08-15

    Transient reactions among irradiation defects, dopants, impurities, and carriers in pulse-neutron-irradiated Si were modeled taking into account the clustering of the primal defects in recoil cascades. Continuum equations describing the diffusion, field drift, and reactions of relevant species were numerically solved for a submicrometer spherical volume, within which the starting radial distributions of defects could be varied in accord with the degree of clustering. The radial profiles corresponding to neutron irradiation were chosen through pair-correlation-function analysis of vacancy and interstitial distributions obtained from the binary-collision code MARLOWE, using a spectrum of primary recoil energies computed for a fast-burst fission reactor. Model predictions of transient behavior were compared with a variety of experimental results from irradiated bulk Si, solar cells, and bipolar-junction transistors. The influence of defect clustering during neutron bombardment was further distinguished through contrast with electron irradiation, where the primal point defects are more uniformly dispersed.

  12. A model for non-monotonic intensity coding

    PubMed Central

    Nehrkorn, Johannes; Tanimoto, Hiromu; Herz, Andreas V. M.; Yarali, Ayse

    2015-01-01

    Peripheral neurons of most sensory systems increase their response with increasing stimulus intensity. Behavioural responses, however, can be specific to some intermediate intensity level whose particular value might be innate or associatively learned. Learning such a preference requires an adjustable transformation from a monotonic stimulus representation at the sensory periphery to a non-monotonic representation for the motor command. How do neural systems accomplish this task? We tackle this general question focusing on odour-intensity learning in the fruit fly, whose first- and second-order olfactory neurons show monotonic stimulus–response curves. Nevertheless, flies form associative memories specific to particular trained odour intensities. Thus, downstream of the first two olfactory processing layers, odour intensity must be re-coded to enable intensity-specific associative learning. We present a minimal, feed-forward, three-layer circuit, which implements the required transformation by combining excitation, inhibition, and, as a decisive third element, homeostatic plasticity. Key features of this circuit motif are consistent with the known architecture and physiology of the fly olfactory system, whereas alternative mechanisms are either not composed of simple, scalable building blocks or not compatible with physiological observations. The simplicity of the circuit and the robustness of its function under parameter changes make this computational motif an attractive candidate for tuneable non-monotonic intensity coding. PMID:26064666
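
    The excitation-minus-inhibition part of such a motif can be sketched as follows: subtracting a higher-threshold inhibitory sigmoid from an excitatory one turns two monotonic inputs into a response peaked at an intermediate intensity. Thresholds and weights are illustrative stand-ins, and the homeostatic-plasticity element of the paper is omitted:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def third_layer_response(intensity, exc_threshold=2.0, inh_threshold=4.0,
                         w_exc=1.0, w_inh=1.0):
    """Monotonic-to-non-monotonic transformation: both inputs rise with
    intensity, but the inhibitory unit kicks in at a higher threshold,
    carving out a bell-shaped tuning curve (parameters are hypothetical)."""
    exc = sigmoid(intensity - exc_threshold)   # monotonic excitatory drive
    inh = sigmoid(intensity - inh_threshold)   # delayed inhibitory drive
    return max(w_exc * exc - w_inh * inh, 0.0)
```

Shifting the inhibitory threshold moves the preferred intensity, which is the adjustable knob that associative learning would tune.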

  13. Pattern-based video coding with dynamic background modeling

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    The existing video coding standard H.264 could not provide expected rate-distortion (RD) performance for macroblocks (MBs) with both moving objects and static background and the MBs with uncovered background (previously occluded). The pattern-based video coding (PVC) technique partially addresses the first problem by separating and encoding moving area and skipping background area at block level using binary pattern templates. However, the existing PVC schemes could not outperform the H.264 with significant margin at high bit rates due to the least number of MBs classified using the pattern mode. Moreover, both H.264 and the PVC scheme could not provide the expected RD performance for the uncovered background areas due to the unavailability of the reference areas in the existing approaches. In this paper, we propose a new PVC technique which will use the most common frame in a scene (McFIS) as a reference frame to overcome the problems. Apart from the use of McFIS as a reference frame, we also introduce a content-dependent pattern generation strategy for better RD performance. The experimental results confirm the superiority of the proposed schemes in comparison with the existing PVC and the McFIS-based methods by achieving significant image quality gain at a wide range of bit rates.
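
    The pattern-mode decision can be caricatured as a per-macroblock classification against a reference (or McFIS) frame: blocks with no motion are skipped, blocks with a small moving region are candidates for pattern-template coding, and the rest get regular coding. The thresholds and three-way outcome here are assumptions for illustration, not the paper's actual mode-decision rule:

```python
import numpy as np

def classify_macroblock(current, reference, threshold=10, pattern_coverage=0.25):
    """Toy pattern-mode decision for a 16x16 macroblock.

    The moving region is the set of pixels whose frame difference against the
    reference exceeds a threshold; a small moving area suggests the block can
    be coded with a binary pattern template (illustrative thresholds)."""
    moving = np.abs(current.astype(int) - reference.astype(int)) > threshold
    frac = moving.mean()
    if frac == 0:
        return "skip"        # pure static background
    if frac <= pattern_coverage:
        return "pattern"     # small moving area: template candidate
    return "full"            # mostly moving: regular MB coding
```

Using a McFIS instead of the previous frame as `reference` is what gives uncovered-background pixels a usable prediction in the scheme described above.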

  14. Higher-order ionosphere modeling for CODE's next reprocessing activities

    NASA Astrophysics Data System (ADS)

    Lutz, S.; Schaer, S.; Meindl, M.; Dach, R.; Steigenberger, P.

    2009-12-01

    CODE (the Center for Orbit Determination in Europe) is a joint venture between the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland), the Federal Office of Topography (swisstopo, Wabern, Switzerland), the Federal Agency for Cartography and Geodesy (BKG, Frankfurt am Main, Germany), and the Institut für Astronomische und Physikalische Geodäsie of the Technische Universität München (IAPG/TUM, Munich, Germany). It acts as one of the global analysis centers of the International GNSS Service (IGS) and participates in the first IGS reprocessing campaign, a full reanalysis of GPS data collected since 1994. For a future reanalysis of the IGS data it is planned to consider not only first-order but also higher-order ionosphere terms in the space geodetic observations. Several works (e.g., Fritsche et al. 2005) have shown a significant and systematic influence of these effects on the analysis results. The development version of the Bernese Software used at CODE is expanded by the ability to assign additional (scaling) parameters to each considered higher-order ionosphere term. By this, each correction term can be switched on and off on normal-equation level and, moreover, the significance of each correction term may be verified on observation level for different ionosphere conditions.

  15. Molecular Modeling of the Reaction Pathway and Hydride Transfer Reactions of HMG-CoA Reductase

    PubMed Central

    Haines, Brandon E.; Steussy, C. Nicklaus; Stauffacher, Cynthia V.; Wiest, Olaf

    2012-01-01

    HMG-CoA reductase catalyzes the four-electron reduction of HMG-CoA to mevalonate and is an enzyme of considerable biomedical relevance due to the impact of its statin inhibitors on public health. Although the reaction has been studied extensively using X-ray crystallography, there are surprisingly no computational studies that test the mechanistic hypotheses suggested for this complex reaction. Theozyme and QM/MM calculations up to the B3LYP/6-31g(d,p)//B3LYP/6-311++g(2d,2p) level of theory were employed to generate an atomistic description of the enzymatic reaction process and its energy profile. The models generated here predict that the catalytically important Glu83 is protonated prior to hydride transfer and that it acts as the general acid/base in the reaction. With Glu83 protonated, the activation energies calculated for the sequential hydride transfer reactions, 21.8 and 19.3 kcal/mol, are in qualitative agreement with the experimentally determined rate constant for the entire reaction (1/s–1/min). When Glu83 is not protonated, the first hydride transfer reaction is predicted to be disfavored by over 20 kcal/mol, and the activation energy is predicted to be higher by over 10 kcal/mol. While not involved in the reaction as an acid/base, Lys267 is critical for stabilization of the transition state, forming an oxyanion hole with the protonated Glu83. Molecular dynamics simulations and MM/PBSA free energy calculations predict that the enzyme active site stabilizes the hemithioacetal intermediate better than the aldehyde intermediate. This suggests a mechanism in which cofactor exchange occurs before the breakdown of the hemithioacetal. Slowing the conversion to aldehyde would give the enzyme a mechanism to protect the intermediate from solvent and would explain why the free aldehyde is not observed experimentally. Our results support the hypothesis that the pKa of an active site acidic group is modulated by the redox state of the cofactor. The oxidized cofactor and
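
    As a rough plausibility check, transition-state theory converts such barriers into rate constants via the Eyring equation. The sketch below (assuming T = 298.15 K and no transmission or tunneling corrections, which are simplifications relative to the paper's QM/MM treatment) shows that barriers of 21.8 and 19.3 kcal/mol land near the quoted 1/s–1/min scale:

```python
import math

def eyring_rate(dg_kcal, T=298.15):
    """Transition-state-theory rate constant (1/s) for a free-energy barrier
    given in kcal/mol: k = (kB*T/h) * exp(-dG/(R*T))."""
    kB = 1.380649e-23      # Boltzmann constant, J/K
    h = 6.62607015e-34     # Planck constant, J*s
    R = 1.987204e-3        # gas constant, kcal/(mol*K)
    return (kB * T / h) * math.exp(-dg_kcal / (R * T))

k1 = eyring_rate(21.8)   # first hydride transfer barrier
k2 = eyring_rate(19.3)   # second hydride transfer barrier
```

    The lower barrier gives a rate on the order of 1/min–1/s; the higher one is slower, which is the qualitative agreement the abstract describes.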

  16. Dysregulation of REST-regulated coding and non-coding RNAs in a cellular model of Huntington's disease.

    PubMed

    Soldati, Chiara; Bithell, Angela; Johnston, Caroline; Wong, Kee-Yew; Stanton, Lawrence W; Buckley, Noel J

    2013-02-01

    Huntingtin (Htt) protein interacts with many transcriptional regulators, with widespread disruption to the transcriptome in Huntington's disease (HD) brought about by altered interactions with the mutant Htt (muHtt) protein. Repressor Element-1 Silencing Transcription Factor (REST) is a repressor whose association with Htt in the cytoplasm is disrupted in HD, leading to increased nuclear REST and concomitant repression of several neuronal-specific genes, including brain-derived neurotrophic factor (Bdnf). Here, we explored a wide set of HD dysregulated genes to identify direct REST targets whose expression is altered in a cellular model of HD but that can be rescued by knock-down of REST activity. We found many direct REST target genes encoding proteins important for nervous system development, including a cohort involved in synaptic transmission, at least two of which can be rescued at the protein level by REST knock-down. We also identified several microRNAs (miRNAs) whose aberrant repression is directly mediated by REST, including miR-137, which has not previously been shown to be a direct REST target in mouse. These data provide evidence of the contribution of inappropriate REST-mediated transcriptional repression to the widespread changes in coding and non-coding gene expression in a cellular model of HD that may affect normal neuronal function and survival.

  17. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code, which will enable the design of more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  18. Development and Calibration of Reaction Models for Multilayered Nanocomposites

    NASA Astrophysics Data System (ADS)

    Vohra, Manav

    This dissertation focuses on the development and calibration of reaction models for multilayered nanocomposites. The nanocomposites comprise sputter-deposited alternating layers of distinct metallic elements. Specifically, we focus on the equimolar Ni-Al and Zr-Al multilayered systems. Computational models are developed to capture the transient reaction phenomena as well as to understand the dependence of reaction properties on the microstructure, composition, and geometry of the multilayers. Together with the available experimental data, simulations are used to calibrate the models and enhance the accuracy of their predictions. Recent modeling efforts for the Ni-Al system have investigated the nature of self-propagating reactions in the multilayers. Model fidelity was enhanced by incorporating melting effects due to aluminum [Besnoin et al. (2002)]. Salloum and Knio formulated a reduced model to mitigate computational costs associated with multi-dimensional reaction simulations [Salloum and Knio (2010a)]. However, existing formulations relied on a single Arrhenius correlation for diffusivity, estimated for the self-propagating reactions, and cannot be used to quantify mixing rates at lower temperatures with reasonable accuracy [Fritz (2011)]. We thus develop a thermal model for a multilayer stack comprising a reactive Ni-Al bilayer (nanocalorimeter) and exploit temperature evolution measurements to calibrate the diffusion parameters associated with solid-state mixing (≈720-860 K) in the bilayer. The equimolar Zr-Al multilayered system, when reacted aerobically, is shown to exhibit slow aerobic oxidation of zirconium (in the intermetallic), sustained for about 2-10 seconds after completion of the formation reaction. In a collaborative effort, we aim to exploit the sustained heat release for bio-agent defeat applications. A simplified computational model is developed to capture the extended reaction regime characterized by oxidation of Zr-Al multilayers
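
    The diffusivity calibration described above rests on an Arrhenius law, D = D0·exp(-Ea/RT). The following minimal sketch shows the standard linearized fit that extracts such parameters from temperature-resolved mixing data; the data are synthetic, and the values of D0 and Ea are illustrative placeholders, not the calibrated Ni-Al parameters:

```python
import math

def fit_arrhenius(temps_K, diffusivities):
    """Least-squares fit of ln D = ln D0 - Ea/(R*T); returns (D0, Ea in J/mol)."""
    R = 8.314
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(d) for d in diffusivities]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    return math.exp(ybar - slope * xbar), -slope * R

# Synthetic "nanocalorimeter" data generated from a known law
# (D0 = 1e-6 m^2/s, Ea = 120 kJ/mol are illustrative, not calibrated values)
temps = [720.0, 760.0, 800.0, 860.0]
data = [1.0e-6 * math.exp(-120.0e3 / (8.314 * T)) for T in temps]
D0, Ea = fit_arrhenius(temps, data)
```

    With exact synthetic data the fit recovers the generating parameters; with real temperature-evolution measurements the same slope/intercept extraction gives the calibrated Ea and D0.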

  19. Chemical and mathematical modeling of asphaltene reaction pathways

    SciTech Connect

    Salvage, P.E.

    1986-01-01

    Precipitated asphaltene was subjected to pyrolysis and hydropyrolysis, both neat and in solvents, and to catalytic hydroprocessing. A solvent extraction procedure defined gas, maltene, asphaltene, and coke product fractions. The apparent first-order rate constant for asphaltene conversion at 400 °C was relatively insensitive to the particular reaction scheme. The yield of gases likewise showed little variation and was always less than 10%. On the other hand, the maltene and coke yields were about 20% and 60%, respectively, from neat pyrolysis, and about 60% and less than 5%, respectively, from catalytic reactions. The temporal variations of the product fractions allowed discernment of asphaltene reaction pathways. The primary reaction of asphaltene was to residual asphaltene, maltenes, and gases. The residual asphaltene reacted thermally to coke and catalytically to maltenes at the expense of coke. Secondary degradation of these primary products led to lighter compounds. Reaction mechanisms for the pyrolysis of asphaltene model compounds and alkylaromatics were determined. The model compound kinetics results were combined with a stochastic description of asphaltene structure in a mathematical model of asphaltene pyrolysis. Individual molecular products were assigned to the gas, maltene, asphaltene, or coke product fractions, and summation of the weights of each constituted the model's predictions. The temporal variation of the product fractions from simulated asphaltene pyrolysis compared favorably with experimental results.
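
    The lumped pathway picture above (first-order asphaltene conversion splitting into maltene, coke, and gas fractions) can be sketched as a simple integration. The rate constant and selectivities below are hypothetical placeholders, chosen only to echo the reported neat-pyrolysis ordering (coke > maltenes > gas):

```python
def pyrolysis_fractions(k=1.0e-2, s_maltene=0.25, s_coke=0.65, s_gas=0.10,
                        t_end=600.0, dt=0.1):
    """Forward-Euler integration of first-order asphaltene decay, with the
    converted mass split into product fractions by fixed selectivities."""
    A, M, C, G = 1.0, 0.0, 0.0, 0.0     # asphaltene, maltene, coke, gas
    t = 0.0
    while t < t_end:
        dA = k * A * dt                  # asphaltene converted this step
        A -= dA
        M += s_maltene * dA
        C += s_coke * dA
        G += s_gas * dA
        t += dt
    return A, M, C, G

A, M, C, G = pyrolysis_fractions()
```

    Mass is conserved by construction, and the final fractions follow the imposed selectivities; the paper's actual model replaced these fixed selectivities with a stochastic molecular description.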

  20. First principles based mean field model for oxygen reduction reaction.

    PubMed

    Jinnouchi, Ryosuke; Kodama, Kensaku; Hatanaka, Tatsuya; Morimoto, Yu

    2011-12-21

    A first principles-based mean field model was developed for the oxygen reduction reaction (ORR) taking account of the coverage- and material-dependent reversible potentials of the elementary steps. This model was applied to the simulation of single crystal surfaces of Pt, Pt alloy and Pt core-shell catalysts under Ar and O(2) atmospheres. The results are consistent with those shown by past experimental and theoretical studies on surface coverages under Ar atmosphere, the shape of the current-voltage curve for the ORR on Pt(111) and the material-dependence of the ORR activity. This model suggests that the oxygen associative pathway including HO(2)(ads) formation is the main pathway on Pt(111), and that the rate determining step (RDS) is the removal step of O(ads) on Pt(111). This RDS is accelerated on several highly active Pt alloys and core-shell surfaces, and this acceleration decreases the reaction intermediate O(ads). The increase in the partial pressure of O(2)(g) increases the surface coverage with O(ads) and OH(ads), and this coverage increase reduces the apparent reaction order with respect to the partial pressure to less than unity. This model shows details on how the reaction pathway, RDS, surface coverages, Tafel slope, reaction order and material-dependent activity are interrelated.
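
    The sub-unity reaction order reported above follows directly from site blocking: if adsorbate coverage rises with O2 pressure, the apparent order d ln r / d ln p drops below one. A minimal sketch, assuming a single lumped blocking species with a Langmuir isotherm (a deliberate simplification of the paper's mean field model; k and K are set to 1 for illustration):

```python
import math

def orr_rate(p, K=1.0):
    """Per-site rate when one lumped adsorbate blocks sites:
    r = k * p * (1 - theta), theta = K*p / (1 + K*p), with k = 1."""
    theta = K * p / (1.0 + K * p)
    return p * (1.0 - theta)

def apparent_order(p, K=1.0, eps=1.0e-6):
    """Numerical log-log slope d ln r / d ln p around pressure p."""
    r_hi, r_lo = orr_rate(p * (1 + eps), K), orr_rate(p * (1 - eps), K)
    return (math.log(r_hi) - math.log(r_lo)) / (math.log(1 + eps) - math.log(1 - eps))

n_low = apparent_order(1.0e-4)   # dilute limit: coverage negligible, order ~ 1
n_high = apparent_order(1.0)     # half coverage: apparent order drops to ~ 0.5
```

    Analytically the slope equals 1 − θ, so the apparent order falls continuously from one toward zero as coverage builds, which is the trend the mean field model predicts.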

  1. Catalysis by metallic nanoparticles in aqueous solution: model reactions.

    PubMed

    Hervés, Pablo; Pérez-Lorenzo, Moisés; Liz-Marzán, Luis M; Dzubiella, Joachim; Lu, Yan; Ballauff, Matthias

    2012-09-07

    Catalysis by metallic nanoparticles is certainly among the most intensely studied problems in modern nanoscience. However, reliable tests for the catalytic performance of such nanoparticles are often poorly defined, which makes comparison and benchmarking rather difficult. In this tutorial review we tackle a subset of well-studied reactions that take place in the aqueous phase and for which a comprehensive kinetic analysis is available. Two such catalytic model reactions are considered here, namely the reduction of (i) p-nitrophenol and (ii) hexacyanoferrate(III), both by borohydride ions. Both reactions take place at the surface of noble metal nanoparticles at room temperature and can be accurately monitored by UV-vis spectroscopy. Moreover, the total surface area of the nanoparticles in solution can be known with high precision and thus can be used directly in the kinetic analysis. Hence, these model reactions represent cases of heterogeneous catalysis that can be modelled with the accuracy typically available for homogeneous catalysis. Both model reactions allow us to discuss a number of important concepts and questions, namely the dependence of catalytic activity on the size of the nanoparticles, the electrochemistry of nanoparticles, surface restructuring, the use of carrier systems, and the role of diffusion control.
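
    Because the reactions are followed by UV-vis and the total nanoparticle surface area S is known, the usual analysis extracts a pseudo-first-order rate constant k_app (proportional to S) from the absorbance decay. A minimal sketch with synthetic data; the values of k1 (rate constant per unit surface area) and S are hypothetical:

```python
import math

def kapp_from_absorbance(times_s, absorbances):
    """Pseudo-first-order rate constant from the least-squares slope of
    ln(A/A0) versus time (UV-vis monitoring of the reactant decay)."""
    ys = [math.log(a / absorbances[0]) for a in absorbances]
    n = len(times_s)
    tbar = sum(times_s) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times_s, ys)) \
            / sum((t - tbar) ** 2 for t in times_s)
    return -slope

# Hypothetical values: k_app = k1 * S links the measured decay to the
# surface-area-normalized rate constant used for benchmarking.
k1, S = 2.0e-3, 1.5
ts = [0.0, 60.0, 120.0, 180.0, 240.0]
As = [math.exp(-k1 * S * t) for t in ts]
k_app = kapp_from_absorbance(ts, As)
```

    Normalizing k_app by S is what makes rate constants comparable across particle sizes and carrier systems, which is the benchmarking point the review emphasizes.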

  2. Stability Analysis of a Model for Foreign Body Fibrotic Reactions

    PubMed Central

    Ibraguimov, A.; Owens, L.; Su, J.; Tang, L.

    2012-01-01

    Implanted medical devices often trigger immunological and inflammatory reactions from surrounding tissues. The foreign body-mediated tissue responses may result in varying degrees of fibrotic tissue formation. There is intensive research interest in the area of wound healing modeling, and quantitative methods have been proposed to systematically study the behavior of this complex system of multiple cells, proteins, and enzymes. This paper introduces a kinetics-based model for analyzing reactions of various cells/proteins and biochemical processes as well as their transient behavior during the implant healing in 2-dimensional space. In particular, we provide a detailed modeling study of the different roles of macrophages (MΦ) and their effects on fibrotic reactions. The main mathematical result indicates that the stability of the inflamed steady state depends primarily on the reaction dynamics of the system. However, if that equilibrium is unstable in the reaction-only system, spatial diffusion and chemotactic effects can help to stabilize it when classical and regulatory macrophages dominate over the inflammatory macrophages. Mathematical proofs and counterexamples are given for these conclusions. PMID:23193430
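
    The stability argument amounts to comparing eigenvalues of the linearized system with and without spatial transport: for a spatial mode of wavenumber q, growth is governed by J − q²D. A toy two-species sketch (the Jacobian and diffusion entries below are hypothetical, not taken from the paper's model):

```python
import math

def max_growth_rate(J, D, q):
    """Largest real part of the eigenvalues of A = J - q^2 * diag(D),
    for a 2x2 reaction Jacobian J and diffusion coefficients D."""
    a = J[0][0] - q**2 * D[0]
    b = J[0][1]
    c = J[1][0]
    d = J[1][1] - q**2 * D[1]
    tr, det = a + d, a * d - b * c
    disc = tr * tr / 4.0 - det
    if disc >= 0.0:
        return tr / 2.0 + math.sqrt(disc)
    return tr / 2.0            # complex pair: real part is tr/2

# Hypothetical two-species Jacobian at the inflamed steady state
J = [[0.3, -1.0],
     [1.0, -0.1]]              # reaction-only system is unstable (trace > 0)
D = [1.0, 0.5]                 # diffusion coefficients

growth_q0 = max_growth_rate(J, D, q=0.0)   # reaction-only mode: grows
growth_q2 = max_growth_rate(J, D, q=2.0)   # diffusion damps this mode
```

    The q = 0 mode reproduces the reaction-only instability, while diffusion shifts the spectrum left for q > 0, illustrating how transport can stabilize an equilibrium that the reaction system alone cannot.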

  3. Model studies of heterogeneous catalytic hydrogenation reactions with gold.

    PubMed

    Pan, Ming; Brush, Adrian J; Pozun, Zachary D; Ham, Hyung Chul; Yu, Wen-Yueh; Henkelman, Graeme; Hwang, Gyeong S; Mullins, C Buddie

    2013-06-21

    Supported gold nanoparticles have recently been shown to possess intriguing catalytic activity for hydrogenation reactions, particularly selective hydrogenations. However, fundamental studies that can provide insight into the reaction mechanisms responsible for this activity have been largely lacking. In this tutorial review, we highlight several recent model experiments and theoretical calculations on a well-structured gold surface that provide some of these insights. In addition to the behavior of hydrogen on a model gold surface, we review its reactivity with regard to NO2 reduction, chemoselective C=O bond hydrogenation, ether formation, and O-H bond dissociation in water and alcohols. These studies indicate that atomic hydrogen interacts weakly with gold surfaces, which likely plays a key role in the unique hydrogenative chemistry of classical gold catalysts.

  4. Polymerase chain reaction-mediated gene synthesis: synthesis of a gene coding for isozyme c of horseradish peroxidase.

    PubMed

    Jayaraman, K; Fingar, S A; Shah, J; Fyles, J

    1991-05-15

    The synthesis of a gene coding for horseradish peroxidase (HRP, isozyme c; EC 1.11.1.7) is described using a polymerase chain reaction (PCR)-mediated gene synthesis approach developed in our laboratory. In this approach, all the oligonucleotides making up the gene are ligated in a single step by using the two outer oligonucleotides as PCR primers and the crude ligation mixture as the target. The PCR facilitates synthesis and purification of the gene simultaneously. The gene for HRP was synthesized by ligating all 40 oligonucleotides in a single step followed by PCR amplification. The gene was also synthesized from its fragments by using an overlap extension method similar to the procedure as described [Horton, R. M., Hunt, H. D., Ho, S. N., Pullen, J. K. & Pease, L. R. (1989) Gene 77, 61-68]. A method for combining different DNA fragments, in-frame, by using the PCR was also developed and used to synthesize the HRP gene from its gene fragments. This method is applicable to the synthesis of even larger genes and to combine any DNA fragments in-frame. After the synthesis, preliminary characterization of the HRP gene was also carried out by the PCR to confirm the arrangement of oligonucleotides in the gene. This was done by carrying out the PCR with several sets of primers along the gene and comparing the product sizes with the expected sizes. The gene and the fragments generated by PCR were cloned in Escherichia coli and the sequence was confirmed by manual and automated DNA sequencing.
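
    The joining step of the overlap-extension approach can be illustrated by a toy assembler that merges fragments at exact suffix/prefix overlaps (the sequences below are short illustrative placeholders, not the HRP gene, and real PCR assembly of course works through annealing and extension rather than string concatenation):

```python
def assemble(fragments, min_overlap=10):
    """Join fragments in order at their longest exact suffix/prefix overlap,
    mimicking the in-frame joining of overlap-extension PCR."""
    seq = fragments[0]
    for frag in fragments[1:]:
        best = 0
        for k in range(min(len(seq), len(frag)), min_overlap - 1, -1):
            if seq.endswith(frag[:k]):     # fragment's 5' end anneals here
                best = k
                break
        if best == 0:
            raise ValueError("no overlap of at least %d nt" % min_overlap)
        seq += frag[best:]                 # extend past the overlap
    return seq

# Toy oligos with engineered overlaps (illustrative sequences only)
f1 = "ATGGCTGATTACGATCGTAGCT"
f2 = "TACGATCGTAGCTGGCCATTAGC"
f3 = "GGCCATTAGCAA"
gene = assemble([f1, f2, f3])
```

    The same logic extends to combining larger gene fragments in-frame, which is the generalization the paper describes.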

  6. Synthesis of superheavy elements: Uncertainty analysis to improve the predictive power of reaction models

    NASA Astrophysics Data System (ADS)

    Lü, Hongliang; Boilley, David; Abe, Yasuhisa; Shen, Caiwan

    2016-09-01

    Background: Synthesis of superheavy elements is performed by heavy-ion fusion-evaporation reactions. However, fusion is known to be hindered with respect to what can be observed with lighter ions. Thus some delicate ambiguities remain on the fusion mechanism that eventually lead to severe discrepancies in the calculated formation probabilities coming from different fusion models. Purpose: In the present work, we propose a general framework based upon uncertainty analysis in the hope of constraining fusion models. Method: To quantify uncertainty associated with the formation probability, we propose to propagate uncertainties in data and parameters using the Monte Carlo method in combination with a cascade code called kewpie2, with the aim of determining the associated uncertainty, namely the 95 % confidence interval. We also investigate the impact of different models or options, which cannot be modeled by continuous probability distributions, on the final results. An illustrative example is presented in detail and then a systematic study is carried out for a selected set of cold-fusion reactions. Results: It is rigorously shown that, at the 95 % confidence level, the total uncertainty of the empirical formation probability appears comparable to the discrepancy between calculated values. Conclusions: The results obtained from the present study provide direct evidence for predictive limitations of the existing fusion-evaporation models. It is thus necessary to find other ways to assess such models for the purpose of establishing a more reliable reaction theory, which is expected to guide future experiments on the production of superheavy elements.
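
    The Monte Carlo propagation described in the Method section can be sketched generically: sample the uncertain inputs, push each sample through the model, and read the 95% confidence interval off the output percentiles. The input distributions and the toy model below are hypothetical stand-ins, not the kewpie2 physics:

```python
import math
import random

def mc_confidence_interval(model, n_samples=20000, seed=42):
    """Sample uncertain inputs, evaluate `model`, and return the 95% CI
    (2.5th / 97.5th percentiles) of the output distribution."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        barrier = rng.gauss(6.0, 0.5)   # hypothetical uncertain input 1
        width = rng.gauss(1.0, 0.1)     # hypothetical uncertain input 2
        outputs.append(model(barrier, width))
    outputs.sort()
    return outputs[int(0.025 * n_samples)], outputs[int(0.975 * n_samples)]

def toy_formation_probability(barrier, width):
    """Exponential barrier suppression: a stand-in for P_formation."""
    return math.exp(-barrier) * abs(width)

lo, hi = mc_confidence_interval(toy_formation_probability)
```

    Because the output depends exponentially on the barrier, a modest input uncertainty produces an interval spanning roughly an order of magnitude, mirroring the paper's finding that the empirical uncertainty is comparable to the spread between fusion models.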

  7. Implementation of a vibrationally linked chemical reaction model for DSMC

    NASA Technical Reports Server (NTRS)

    Carlson, A. B.; Bird, Graeme A.

    1994-01-01

    A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.

  8. The fundamental diagram of pedestrian model with slow reaction

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Qin, Zheng; Hu, Hao; Xu, Zhaohui; Li, Huan

    2012-12-01

    The slow-to-start models are classical cellular automata for simulating vehicle traffic. To our knowledge, however, the slow-to-start effect has not been considered in modeling pedestrian dynamics. We verify that pedestrians exhibit behavior similar to vehicles and propose a new lattice gas (LG) model, called the slow reaction (SR) model, to describe the pedestrian's delayed reaction in single-file movement. We simulate and reproduce Seyfried's field experiments at the Research Centre Jülich and use their empirical data to validate the SR model. We compare the SR model with the standard LG model. We tested different slow-reaction probabilities ps in the SR model and found that the simulation data for ps=0.3 fit the empirical data best. The RMS error of the mean velocity of the SR model is smaller than that of the standard LG model. In the range ps=0.1-0.3, the simulated fundamental diagram of velocity versus density coincides with the field experiments. The distribution of individual velocities in the fundamental diagram of the SR model agrees with the empirical data better than that of the standard LG model. In addition, we observe stop-and-go waves and phase separation in the simulated pedestrian flow. The SR model reproduces the uneven distribution of interspaces, which the standard LG model does not, and captures the evolution of the spatio-temporal structures of pedestrian flow with higher fidelity to Seyfried's experiments than the standard LG model.
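
    The slow-reaction rule can be sketched as a one-dimensional lattice gas on a ring: a pedestrian blocked in the previous step restarts only with probability 1 − ps. This is a simplified reading of the SR model; the update order and boundary handling here are assumptions, not the paper's exact scheme:

```python
import random

def sr_step(positions, stopped, L, ps, rng):
    """One update of single-file pedestrians on a ring of L cells: move ahead
    if the next cell was free, but a pedestrian blocked last step reacts
    slowly and stays put with probability ps (slow-reaction rule)."""
    occupied = set(positions)                       # occupancy before the update
    new_positions, new_stopped = [], []
    for x, was_stopped in zip(positions, stopped):
        target = (x + 1) % L
        if target in occupied:                      # blocked by the walker ahead
            new_positions.append(x); new_stopped.append(True)
        elif was_stopped and rng.random() < ps:     # delayed restart
            new_positions.append(x); new_stopped.append(True)
        else:
            new_positions.append(target); new_stopped.append(False)
    return new_positions, new_stopped

rng = random.Random(0)
L, N, ps, steps = 100, 30, 0.3, 200
positions = rng.sample(range(L), N)     # distinct starting cells
stopped = [False] * N
moves = 0
for _ in range(steps):
    new_positions, stopped = sr_step(positions, stopped, L, ps, rng)
    moves += sum(a != b for a, b in zip(positions, new_positions))
    positions = new_positions
mean_velocity = moves / (N * steps)     # cells per step per pedestrian
```

    Sweeping ps and the density N/L in this sketch traces out a fundamental diagram; the delayed restarts are what seed the stop-and-go waves the paper reports.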

  9. An Advanced simulation Code for Modeling Inductive Output Tubes

    SciTech Connect

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic, field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  10. Modeling of Ionization Physics with the PIC Code OSIRIS

    SciTech Connect

    Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC

    2005-09-27

    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.

  11. Comparison of DSMC Reaction Models with QCT Reaction Rates for Nitrogen

    DTIC Science & Technology

    2016-07-17

    Distribution A: Approved for Public Release, Distribution Unlimited (PA #16299). Non-equilibrium reaction rates for N2-N2 with Tv ≠ Tt = Tr: TCE, QK, and VFD are over an… The bias model provides a much better fit than the Park two-temperature (2-T) model; higher vibrational favoring for lower Tt (λ > 2 works better than λ = 4 for Tt > 20,000 K).

  12. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.

  13. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.

  14. Documentation for grants equal to tax model: Volume 3, Source code

    SciTech Connect

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.

  15. Modeling pore collapse and chemical reactions in shock-loaded HMX crystals

    NASA Astrophysics Data System (ADS)

    Austin, Ryan; Barton, Nathan; Howard, William; Fried, Laurence

    2013-06-01

    The collapse of micron-sized pores in crystalline high explosives is the primary route to initiating thermal decomposition reactions under shock wave loading. Given the difficulty of resolving such processes in experiments, it is useful to study pore collapse using numerical simulation. A significant challenge that is encountered in such calculations is accounting for anisotropic mechanical responses and the effects of highly exothermic chemical reactions. In this work, we focus on simulating the shock-wave-induced collapse of a single pore in crystalline HMX using a multiphysics finite element code (ALE3D). The constitutive model set includes a crystal-mechanics-based model of thermoelasto-viscoplasticity and a single-step decomposition reaction with empirically determined kinetics. The model is exercised for shock stresses up to ~10 GPa to study the localization of energy about the collapsing pore and the early stages of reaction initiation. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-618941).
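
    The single-step decomposition model referred to above is commonly written as first-order Arrhenius kinetics for a reacted fraction λ. A minimal constant-temperature sketch; the prefactor and activation energy below are hypothetical, HMX-like orders of magnitude only, not the empirically calibrated values used in the ALE3D model:

```python
import math

def induction_time(T_K, Z=5.0e19, Ea=2.2e5, threshold=0.5, dt=1.0e-9):
    """Time for the reacted fraction lam to reach `threshold` at a fixed
    temperature, integrating dlam/dt = Z*(1 - lam)*exp(-Ea/(R*T)) (Euler)."""
    R = 8.314  # J/(mol*K)
    k = Z * math.exp(-Ea / (R * T_K))
    lam, t = 0.0, 0.0
    while lam < threshold:
        lam += k * (1.0 - lam) * dt
        t += dt
    return t

t_hot = induction_time(1000.0)   # hot material near a collapsed pore
t_cool = induction_time(800.0)   # cooler bulk reacts far more slowly
```

    The steep Arrhenius temperature dependence is why the localized heating around a collapsing pore, rather than the bulk shock temperature, controls where reaction initiates.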

  16. Thrust Chamber Modeling Using Navier-Stokes Equations: Code Documentation and Listings. Volume 2

    NASA Technical Reports Server (NTRS)

    Daley, P. L.; Owens, S. F.

    1988-01-01

    A copy of the PHOENICS input files and FORTRAN code developed for the modeling of thrust chambers is given. These copies are contained in the Appendices. The listings are contained in Appendices A through E. Appendix A describes the input statements relevant to thrust chamber modeling as well as the FORTRAN code developed for the Satellite program. Appendix B describes the FORTRAN code developed for the Ground program. Appendices C through E contain copies of the Q1 (input) file, the Satellite program, and the Ground program respectively.

  17. CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows

    SciTech Connect

    Winters, W.S.; Evans, G.H.; Moen, C.D.

    1996-10-01

    This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows including the effects of surface chemistry. CURRENT is a finite volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry is modeled using the CHEMKIN software libraries. The CURRENT user-interface has been designed to be compatible with the Sandia-developed mesh generator and post processor ANTIPASTO and the post processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.

  18. A chain reaction approach to modelling gene pathways

    PubMed Central

    Cheng, Gary C.; Chen, Dung-Tsa; Chen, James J.; Soong, Seng-jaw; Lamartiniere, Coral; Barnes, Stephen

    2012-01-01

    Background Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis, etc.) to identify genes and their associated pathways that are affected by nutrient and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we have proposed a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by the species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, transformation of genes over time with a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment which involved the effects of three polyphenols (nutrient treatments), epigallo-catechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty. Results In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model. 
By
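    The chain-reaction idea above, where gene-expression changes are treated as species in sequential reactions, can be sketched as a small mass-action ODE system. The pathway, rate constants, and time scale below are illustrative assumptions, not values from the study.

    ```python
    # Hedged sketch: a gene pathway as a chain of first-order reactions
    # g1 -> g2 -> g3, solved as mass-action ODEs (rate constants assumed).
    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2 = 0.8, 0.3   # hypothetical rate constants (1/h)

    def rhs(t, y):
        g1, g2, g3 = y
        return [-k1 * g1, k1 * g1 - k2 * g2, k2 * g2]

    sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0, 0.0])
    g1, g2, g3 = sol.y[:, -1]
    print(round(g1 + g2 + g3, 6))  # total "expression mass" conserved -> 1.0
    ```

    Tracking the species over time gives the numerically observed "transformation of genes" under a treatment that the abstract describes.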

  19. A chain reaction approach to modelling gene pathways.

    PubMed

    Cheng, Gary C; Chen, Dung-Tsa; Chen, James J; Soong, Seng-Jaw; Lamartiniere, Coral; Barnes, Stephen

    2012-08-01

    BACKGROUND: Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis, etc.) to identify genes and their associated pathways that are affected by nutrient and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we have proposed a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by the species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, transformation of genes over time with a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment which involved the effects of three polyphenols (nutrient treatments), epigallo-catechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty. 
RESULTS: In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model. By

  20. Modeling human behaviors and reactions under dangerous environment.

    PubMed

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

    This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real-time in virtual environments. The development of the system includes: classification on the conscious/subconscious behaviors and reactions of different people; capturing different motion postures by the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling character's perceptions, modeling character's decision making, modeling character's movements, modeling character's interaction with environment and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories, the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas to integrate perception and intelligence into virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence, the accurate modeling of human's vision, smell, touch and hearing, the diversity and effects of emotion and personality in decision making. There are three types of software platforms which could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.

  1. Mathematical properties of models of the reaction-diffusion type

    NASA Astrophysics Data System (ADS)

    Beccaria, M.; Soliani, G.

    Nonlinear systems of the reaction-diffusion (RD) type, including Gierer-Meinhardt models of autocatalysis, are studied using Lie algebras coming from their prolongation structure. Depending on the form of the functions of the fields characterizing the reactions among them, we consider both quadratic and cubic RD equations. On the basis of the prolongation algebra associated with a given RD model, we distinguish the model as a completely linearizable or a partially linearizable system. In this classification a crucial role is played by the relative sign of the diffusion coefficients, which strongly influence the properties of the system. In correspondence to the above situations, different algebraic characterizations, together with exact and approximate solutions, are found. Interesting examples are the quadratic RD model, which admits an exact solution in terms of the elliptic Weierstrass function, and the cubic Gierer-Meinhardt model, whose prolongation algebra leads to the similitude group in the plane.

  2. Numerical modelling of spallation in 2D hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Maw, J. R.; Giles, A. R.

    1996-05-01

    A model for spallation based on the void growth model of Johnson has been implemented in 2D Lagrangian and Eulerian hydrocodes. The model has been extended to treat complete separation of material when voids coalesce and to describe the effects of elevated temperatures and melting. The capabilities of the model are illustrated by comparison with data from explosively generated spall experiments. Particular emphasis is placed on the prediction of multiple spall effects in weak, low melting point, materials such as lead. The correlation between the model predictions and observations on the strain rate dependence of spall strength is discussed.

  3. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    SciTech Connect

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations.

  4. Transport-reaction model for defect and carrier behavior within displacement cascades in gallium arsenide

    SciTech Connect

    Wampler, William R.; Myers, Samuel M.

    2014-02-01

    A model is presented for recombination of charge carriers at displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers and defects within a representative spherically symmetric cluster. The initial radial defect profiles within the cluster were chosen through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Charging of the defects can produce high electric fields within the cluster which may influence transport and reaction of carriers and defects, and which may enhance carrier recombination through band-to-trap tunneling. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to pulsed neutron irradiation.

  5. Modelling population growth with delayed nonlocal reaction in 2-dimensions.

    PubMed

    Liang, Dong; Wu, Jianhong; Zhang, Fan

    2005-01-01

    In this paper, we consider the population growth of a single species living in a two-dimensional spatial domain. New reaction-diffusion equation models with delayed nonlocal reaction are developed in two-dimensional bounded domains combining different boundary conditions. The important feature of the models is the reflection of the joint effect of the diffusion dynamics and the nonlocal maturation delayed effect. We consider and analyze numerical solutions of the mature population dynamics with some well-known birth functions. In particular, we observe and study the occurrences of asymptotically stable steady state solutions and periodic waves for the two-dimensional problems with nonlocal delayed reaction. We also investigate numerically the effects of various parameters on the period, the peak and the shape of the periodic wave as well as the shape of the asymptotically stable steady state solution.
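    The role of the maturation delay can be illustrated in the spatially homogeneous limit, where the model reduces to a delayed logistic equation. This sketch is not the paper's 2-D nonlocal model; the equation, parameters, and time-stepping scheme are illustrative.

    ```python
    # Hedged sketch: Hutchinson's delayed logistic equation,
    #   u'(t) = r * u(t) * (1 - u(t - tau)),
    # stepped with explicit Euler and a stored history buffer.
    r, tau, dt, T = 1.8, 1.0, 0.001, 60.0   # r*tau > pi/2 -> oscillations
    n_delay = int(tau / dt)
    hist = [0.1] * (n_delay + 1)            # constant initial history
    for step in range(int(T / dt)):
        u = hist[-1]
        u_lag = hist[-1 - n_delay]
        hist.append(u + dt * r * u * (1.0 - u_lag))
    peak = max(hist[len(hist) // 2:])       # look past the transient
    print(peak > 1.0)  # delayed feedback overshoots the steady state -> True
    ```

    The overshoot past the carrying capacity is the homogeneous analogue of the periodic waves the paper observes for large delay.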

  6. Including Rebinding Reactions in Well-Mixed Models of Distributive Biochemical Reactions.

    PubMed

    Lawley, Sean D; Keener, James P

    2016-11-15

    The behavior of biochemical reactions requiring repeated enzymatic substrate modification depends critically on whether the enzymes act processively or distributively. Whereas processive enzymes bind only once to a substrate before carrying out a sequence of modifications, distributive enzymes release the substrate after each modification and thus require repeated bindings. Recent experimental and computational studies have revealed that distributive enzymes can act processively due to rapid rebindings (so-called quasi-processivity). In this study, we derive an analytical estimate of the probability of rapid rebinding and show that well-mixed ordinary differential equation models can use this probability to quantitatively replicate the behavior of spatial models. Importantly, rebinding requires that connections be added to the well-mixed reaction network; merely modifying rate constants is insufficient. We then use these well-mixed models to suggest experiments to 1) detect quasi-processivity and 2) test the theory. Finally, we show that rapid rebindings drastically alter the reaction's Michaelis-Menten rate equations.
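    The key claim above is that a well-mixed model must gain a new connection, not just new rate constants, to capture rapid rebinding. A toy version of that idea: in a two-step distributive modification S0 -> S1 -> S2, a rebinding probability routes some flux directly to S2. All parameters here are illustrative assumptions.

    ```python
    # Hedged sketch: adding a "rapid rebinding" connection to a well-mixed
    # ODE model of a two-step distributive modification S0 -> S1 -> S2.
    from scipy.integrate import solve_ivp

    k_cat = 1.0   # effective per-step modification rate (illustrative)
    p_rb = 0.4    # assumed rapid-rebinding probability

    def rhs(t, y):
        s0, s1, s2 = y
        # a fraction p_rb of first modifications is immediately rebound
        # and modified again, i.e. an effective direct S0 -> S2 channel
        return [-k_cat * s0,
                (1 - p_rb) * k_cat * s0 - k_cat * s1,
                p_rb * k_cat * s0 + k_cat * s1]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0])
    print(round(float(sum(sol.y[:, -1])), 6))  # substrate conserved -> 1.0
    ```

    Setting p_rb = 0 recovers the purely distributive network, which is why tuning rate constants alone cannot mimic rebinding.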

  7. A predictive coding account of bistable perception - a model-based fMRI study.

    PubMed

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together, our current work
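    The transition mechanism described above, prediction error from residual evidence for the suppressed percept accumulating until it forces a switch, can be caricatured with a leaky accumulator. This is a deliberately simplified sketch; the parameters are illustrative, not fitted values from the study.

    ```python
    # Hedged sketch: prediction error for the suppressed percept accrues
    # (with leak) until it crosses a threshold and triggers a transition.
    dt, leak, gain, threshold = 0.01, 0.5, 1.0, 1.0   # assumed parameters
    percept, err, transitions, t = 0, 0.0, [], 0.0
    while t < 20.0:
        err += dt * (gain - leak * err)   # residual evidence drives error up
        if err >= threshold:              # error crosses threshold: switch
            percept = 1 - percept
            transitions.append(t)
            err = 0.0
        t += dt
    print(len(transitions) > 1)  # regular endogenous alternations -> True
    ```

    Because gain/leak exceeds the threshold, the accumulator always reaches it, producing the endogenous alternations characteristic of bistable perception.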

  8. Publicly Available Numerical Codes for Modeling the X-ray and Microwave Emissions from Solar and Stellar Activity

    NASA Technical Reports Server (NTRS)

    Holman, Gordon D.; Mariska, John T.; McTiernan, James M.; Ofman, Leon; Petrosian, Vahe; Ramaty, Reuven; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    We have posted numerical codes on the Web for modeling the bremsstrahlung x-ray emission and the gyrosynchrotron radio emission from solar and stellar activity. In addition to radiation codes, steady-state and time-dependent Fokker-Planck codes are provided for computing the distribution and evolution of accelerated electrons. A 1-D hydrodynamics code computes the response of the stellar atmosphere (chromospheric evaporation). A code for modeling gamma-ray line spectra is also available. On-line documentation is provided for each code. These codes have been developed for modeling results from the High Energy Solar Spectroscopic Imager (HESSI) along with related microwave observations of solar flares. Comprehensive codes for modeling images and spectra of solar flares are under development. The posted codes can be obtained on NASA/Goddard's HESSI Web Site at http://hesperia.gsfc.nasa.gov/hessi/modelware.htm. This work is supported in part by the NASA Sun-Earth Connection Program.

  10. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
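    The core problem PEST and PEST++ address is adjusting model parameters to minimize residuals between simulated and observed values. The sketch below shows that calibration loop on a toy stand-in model; it is not PEST++'s algorithm or API, and all names and values are illustrative.

    ```python
    # Hedged sketch of model calibration: fit parameters of a toy model
    # to synthetic observations via nonlinear least squares.
    import numpy as np
    from scipy.optimize import least_squares

    def model(params, x):
        a, b = params
        return a * np.exp(-b * x)

    x_obs = np.linspace(0.0, 4.0, 20)
    true = np.array([2.0, 0.7])
    y_obs = model(true, x_obs)          # synthetic, noise-free observations

    res = least_squares(lambda p: model(p, x_obs) - y_obs, x0=[1.0, 0.1])
    print(np.round(res.x, 3))  # best-fit parameters; should recover [2.0, 0.7]
    ```

    Highly parameterized inversion scales this same residual-minimization idea to hundreds or thousands of parameters, which is where run management and regularization become essential.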

  11. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon, were developed. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within the Westinghouse reactor; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls; (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor; (4) calculations relating to the collection efficiency of the new AeroChem reactor; and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.

  12. LWR codes capability to address SFR BDBA scenarios: Modeling of the ABCOVE tests

    SciTech Connect

    Herranz, L. E.; Garcia, M.; Morandi, S.

    2012-07-01

    The sound background built up in LWR source term analysis in case of a severe accident makes it worthwhile to check the capability of LWR safety analysis codes to model SFR accident scenarios, at least in some areas. This paper gives a snapshot of such predictability in the area of aerosol behavior in containment. To do so, the AB-5 test of the ABCOVE program has been modeled with 3 LWR codes: ASTEC, ECART and MELCOR. Through the search for a best-estimate scenario and its comparison to data, it is concluded that even in the specific case of in-containment aerosol behavior, some enhancements would be needed in the LWR codes and/or their application, particularly with respect to consideration of particle shape. Nonetheless, much of the modeling presently embodied in LWR codes might be applicable to SFR scenarios. These conclusions should be seen as preliminary as long as comparisons are not extended to more experimental scenarios. (authors)

  13. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  14. Parallel Spectral Transform Shallow Water Model: A runtime-tunable parallel benchmark code

    SciTech Connect

    Worley, P.H.; Foster, I.T.

    1994-05-01

    Fairness is an important issue when benchmarking parallel computers using application codes. The best parallel algorithm on one platform may not be the best on another. While it is not feasible to reevaluate parallel algorithms and reimplement large codes whenever new machines become available, it is possible to embed algorithmic options into codes that allow them to be "tuned" for a particular machine without requiring code modifications. In this paper, we describe a code in which such an approach was taken. PSTSWM was developed for evaluating parallel algorithms for the spectral transform method in atmospheric circulation models. Many levels of runtime-selectable algorithmic options are supported. We discuss these options and our evaluation methodology. We also provide empirical results from a number of parallel machines, indicating the importance of tuning for each platform before making a comparison.
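    The runtime-tunable pattern the abstract describes, algorithmic variants registered behind one interface and selected per platform without code changes, can be sketched as follows. The variants and names here are illustrative, not PSTSWM's actual options.

    ```python
    # Hedged sketch: runtime-selectable algorithmic variants behind a
    # single interface, so a benchmark can be "tuned" per platform.
    import time

    def sum_loop(xs):       # variant 1: explicit accumulation loop
        total = 0.0
        for x in xs:
            total += x
        return total

    def sum_builtin(xs):    # variant 2: library reduction
        return sum(xs)

    VARIANTS = {"loop": sum_loop, "builtin": sum_builtin}

    def run(variant, xs):
        f = VARIANTS[variant]
        t0 = time.perf_counter()
        result = f(xs)
        return result, time.perf_counter() - t0

    xs = list(range(1000))
    r1, _ = run("loop", xs)
    r2, _ = run("builtin", xs)
    print(r1 == r2)  # variants must agree before timings are compared -> True
    ```

    The essential property is that every variant computes the same answer, so the per-platform choice affects only performance, never results.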

  15. Improved carbon migration modelling with the ERO code

    NASA Astrophysics Data System (ADS)

    Van Hoey, Olivier; Kirschner, Andreas; Björkas, Carolina; Borodin, Dmitry; Matveev, Dmitry; Uytdenhouwen, Inge; Van Oost, Guido

    2013-07-01

    Material migration is a crucial issue in thermonuclear fusion devices. To study carbon migration, 13CH4 has been injected through a polished graphite roof-like test limiter in the TEXTOR scrape-off layer. The interpretation of the experimental 13C deposition patterns on the roof limiter surface has been done with the ERO impurity transport code. To reproduce the very low experimental 13C deposition efficiencies with ERO, an enhanced re-erosion mechanism for re-deposited carbon had to be assumed in previous studies. However, erosion by hydrogenic species produced during dissociation of injected 13CH4 was not taken into account by ERO in these studies. This additional erosion could maybe explain the very low experimental 13C deposition efficiencies. Therefore, it is now taken into account in ERO. Also more realistic physical sputtering yields and hydrocarbon reflection probabilities have been implemented in ERO. The simulations with these improvements included clearly confirm the need for enhanced re-erosion of re-deposited carbon.

  16. Recommended requirements to code officials for solar heating, cooling, and hot water systems. Model document for code officials on solar heating and cooling of buildings

    SciTech Connect

    1980-06-01

    These recommended requirements include provisions for electrical, building, mechanical, and plumbing installations for active and passive solar energy systems used for space or process heating and cooling, and domestic water heating. The provisions in these recommended requirements are intended to be used in conjunction with the existing building codes in each jurisdiction. Where a solar relevant provision is adequately covered in an existing model code, the section is referenced in the Appendix. Where a provision has been drafted because there is no counterpart in the existing model code, it is found in the body of these recommended requirements. Commentaries are included in the text explaining the coverage and intent of present model code requirements and suggesting alternatives that may, at the discretion of the building official, be considered as providing reasonable protection to the public health and safety. Also included is an Appendix which is divided into a model code cross reference section and a reference standards section. The model code cross references are a compilation of the sections in the text and their equivalent requirements in the applicable model codes. (MHR)

  17. Modeling the Reaction of Fe Atoms with CCl4

    SciTech Connect

    Camaioni, Donald M.; Ginovska, Bojana; Dupuis, Michel

    2009-01-05

    The reaction of zero-valent iron with carbon tetrachloride (CCl4) in the gas phase was studied using density functional theory. Temperature-programmed desorption experiments over a range of Fe and CCl4 coverages on an FeO(111) surface demonstrate a rich surface chemistry with several reaction products (C2Cl4, C2Cl6, OCCl2, CO, FeCl2, FeCl3) observed. The reactivity of Fe and CCl4 was studied under three stoichiometries: one Fe with one CCl4, one Fe with two CCl4 molecules, and two Fe with one CCl4, modeling the environment of the experimental work. The electronic structure calculations give insight into the reactions leading to the experimentally observed products and suggest that novel Fe-C-Cl containing species are important intermediates in these reactions. The intermediate complexes are formed in highly exothermic reactions, in agreement with the experimentally observed reactivity with the surface at low temperature (30 K). This initial survey of the reactivity of Fe with CCl4 identifies some potential reaction pathways that are important in the effort to use Fe nano-particles to differentiate harmful pathways that lead to the formation of contaminants like chloroform (CHCl3) from harmless pathways that lead to products such as formate (HCO2-) or carbon oxides in water and soil. The Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.

  18. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  20. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    SciTech Connect

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC.

  1. A computer code for calculations in the algebraic collective model of the atomic nucleus

    NASA Astrophysics Data System (ADS)

    Welsh, T. A.; Rowe, D. J.

    2016-03-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_{LM}. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  2. A predictive coding account of bistable perception - a model-based fMRI study

    PubMed Central

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido

    2017-01-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we turned to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal comparison with established models of bistable perception based on mutual inhibition and adaptation, noise, or a combination of adaptation and noise was used to validate the predictive coding model. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current
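The central mechanism (residual evidence for the suppressed percept accumulating as prediction error until it forces a transition) can be caricatured in a few lines. All rates and thresholds below are arbitrary illustrative choices, not fitted parameters from the study:

```python
import random

def simulate_bistable(n_transitions=200, gain=0.12, noise=0.05,
                      threshold=1.0, seed=0):
    """Toy predictive-coding account of bistability: the suppressed percept
    keeps receiving sensory evidence, so its prediction error accumulates
    (with noise) until it exceeds a threshold and forces a perceptual
    transition. Returns the dominance durations (in time steps)."""
    rng = random.Random(seed)
    durations = []
    error = 0.0   # accumulated prediction error for the suppressed percept
    t = 0
    while len(durations) < n_transitions:
        t += 1
        error = max(error + gain + rng.gauss(0.0, noise), 0.0)
        if error >= threshold:      # error overwhelms current interpretation
            durations.append(t)     # record dominance duration, then switch
            t, error = 0, 0.0
    return durations

durations = simulate_bistable()
mean_duration = sum(durations) / len(durations)
```

With noisy accumulation toward a threshold, the dominance durations scatter around threshold/gain, qualitatively resembling the stochastic dominance periods seen in bistable perception.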

  3. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    NASA Astrophysics Data System (ADS)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named the Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
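In conventional subchannel practice, a spacer-grid pressure loss model of the kind named above reduces to a form-loss term applied at each grid location. A minimal sketch, with a made-up loss coefficient rather than a CFD-informed value from the thesis:

```python
def grid_pressure_loss(k_loss, mass_flux, density):
    """Form-loss pressure drop at a spacer grid, dP = K * G^2 / (2 * rho),
    the kind of term a subchannel code applies at grid axial locations.
    In a CFD-informed workflow, K would come from data files derived from
    STAR-CCM+ runs; the value used below is purely illustrative."""
    return k_loss * mass_flux**2 / (2.0 * density)

# Hypothetical PWR-like conditions: G = 3000 kg/(m^2 s), rho = 700 kg/m^3
dp = grid_pressure_loss(k_loss=1.1, mass_flux=3000.0, density=700.0)  # Pa
```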

  4. Anisotropic Resistivity Forward Modelling Using Automatic Generated Higher-order Finite Element Codes

    NASA Astrophysics Data System (ADS)

    Wang, W.; Liu, J.

    2016-12-01

    Forward modelling is the general way to obtain the responses of geoelectrical structures. Field investigators might find it useful for planning surveys and choosing optimal electrode configurations with respect to their targets. During the past few decades much effort has been put into the development of numerical forward codes based on the integral equation method, the finite difference method and the finite element method. Nowadays, most researchers prefer the finite element method (FEM) for its flexible meshing scheme, which can handle models with complex geometry. Resistivity modelling with commercial software such as ANSYS and COMSOL is convenient, but it is like working with a black box, and modifying existing codes or developing new ones can take a long time. We present a new way to obtain resistivity forward modelling codes quickly, based on the commercial software FEPG (Finite element Program Generator). With only a few short scripts, FEPG can generate a FORTRAN program framework that can easily be altered to suit our targets. By assuming the electric potential is quadratic in each element of a two-layer model, we obtain quite accurate results with errors of less than 1%, whereas errors of more than 5% can appear with linear FE codes. The anisotropic half-space model is intended to represent vertically distributed fractures. The apparent resistivities measured along the fractures are larger than those measured in the orthogonal direction, which is the opposite of the ordering of the true resistivities; interpretations can be misleading if this anisotropic paradox is ignored. The technique we used can produce scientific codes in a short time. The generated FORTRAN codes reach accurate results through the higher-order assumption and can handle anisotropy, enabling better interpretations. The method could easily be extended to other domains where FE codes are needed.
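The linear-versus-quadratic accuracy gap that motivates the higher-order codes is easy to reproduce on a 1D model problem. The sketch below uses standard P1 and P2 elements for −u″ = f with a known exact solution (it is not FEPG-generated code, and the mesh size is arbitrary):

```python
import numpy as np

def solve_p1(n):
    """Linear (P1) finite elements for -u'' = f, u(0)=u(1)=0, uniform mesh."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x)          # manufactured source term
    A = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    b = h * f[1:-1]                            # lumped (trapezoid) load vector
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

def solve_p2(n):
    """Quadratic (P2) elements: nodes at element ends and midpoints."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, 2 * n + 1)       # global nodes, spacing h/2
    f = np.pi**2 * np.sin(np.pi * x)
    N = 2 * n + 1
    A = np.zeros((N, N))
    b = np.zeros(N)
    # Standard P2 element stiffness matrix for -u'' on an element of length h
    Ke = np.array([[7., -8., 1.], [-8., 16., -8.], [1., -8., 7.]]) / (3.0 * h)
    for e in range(n):
        idx = [2 * e, 2 * e + 1, 2 * e + 2]
        A[np.ix_(idx, idx)] += Ke
        # Simpson-rule consistent load vector
        b[idx] += (h / 6.0) * np.array([f[idx[0]], 4 * f[idx[1]], f[idx[2]]])
    u = np.zeros(N)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])
    return x, u

n = 8                                          # same coarse mesh for both
x1, u1 = solve_p1(n)
x2, u2 = solve_p2(n)
err_p1 = np.max(np.abs(u1 - np.sin(np.pi * x1)))   # exact solution sin(pi x)
err_p2 = np.max(np.abs(u2 - np.sin(np.pi * x2)))
```

On the same mesh, the quadratic elements give a nodal error that is orders of magnitude below the linear ones, which is the same effect exploited by the auto-generated higher-order resistivity codes.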

  5. Examining the role of finite reaction times in swarming models

    NASA Astrophysics Data System (ADS)

    Copenhagen, Katherine; Quint, David; Gopinathan, Ajay

    2015-03-01

    Modeling collective behavior in biological and artificial systems has had much success in recent years at predicting and mimicking real systems by utilizing techniques borrowed from modelling many-particle systems interacting through physical forces. However, unlike inert particles interacting through instantaneous forces, living organisms have finite reaction times and behaviors that vary from individual to individual. What constraints do these physiological effects place on the interactions between individuals in order to sustain a robust ordered state? We use a self-propelled agent-based model in continuous space, based on previous models by Vicsek and Couzin with alignment and separation-maintaining interactions, to examine the behavior of a single cohesive group of organisms. We found that for very short reaction times the system is able to form an ordered state even in the presence of heterogeneities. However, for larger, more physiological reaction times, organisms need a buffer zone with no cohesive interactions in order to maintain an ordered state. Finally, swarms with finite reaction times and behavioral heterogeneities are able to dynamically sort out individuals with impaired function and sustain order.
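A stripped-down, mean-field caricature of delayed alignment is sketched below; it illustrates only the polar order parameter and the reaction-delay machinery (reproducing the buffer-zone result would require the full spatial model with separation interactions). All parameters are arbitrary:

```python
import cmath
import random

def polarization(headings):
    """|mean heading vector|: 1 = fully aligned, near 0 = disordered."""
    v = sum(cmath.exp(1j * h) for h in headings) / len(headings)
    return abs(v)

def simulate(n=100, steps=200, delay=0, noise=0.1, seed=1):
    """Mean-field alignment with a reaction delay: each agent steers toward
    the group heading as it was `delay` steps ago, plus angular noise."""
    rng = random.Random(seed)
    headings = [rng.uniform(-3.14159, 3.14159) for _ in range(n)]
    history = [list(headings)]
    for _ in range(steps):
        past = history[max(0, len(history) - 1 - delay)]   # delayed information
        target = cmath.phase(sum(cmath.exp(1j * h) for h in past))
        headings = [target + rng.gauss(0.0, noise) for _ in headings]
        history.append(list(headings))
    return polarization(headings)

ordered = simulate(delay=0, noise=0.1)      # fast reactions: high order
delayed = simulate(delay=20, noise=0.1)     # stale information, same noise
disordered = simulate(delay=0, noise=3.0)   # heavy noise destroys order
```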

  6. Transactional Model of Coping, Appraisals, and Emotional Reactions to Stress.

    ERIC Educational Resources Information Center

    Brack, Greg; McCarthy, Christopher J.

    A study investigated the relationship of transactional models of stress management and appraisal-emotion relationships to emotions produced by taking a new job. The participants, 231 graduate students, completed measures of cognitive appraisals, stress coping resources, and emotional reactions at the time of taking a new job and some time later.…

  7. A Multiple Reaction Modelling Framework for Microbial Electrochemical Technologies.

    PubMed

    Oyetunde, Tolutola; Sarma, Priyangshu M; Ahmad, Farrukh; Rodríguez, Jorge

    2017-01-04

    A mathematical model for the theoretical evaluation of microbial electrochemical technologies (METs) is presented that incorporates a detailed physico-chemical framework, includes multiple reactions (both at the electrodes and in the bulk phase) and involves a variety of microbial functional groups. The model is applied to two theoretical case studies: (i) A microbial electrolysis cell (MEC) for continuous anodic volatile fatty acids (VFA) oxidation and cathodic VFA reduction to alcohols, for which the theoretical system response to changes in applied voltage and VFA feed ratio (anode-to-cathode) as well as membrane type are investigated. This case involves multiple parallel electrode reactions in both anode and cathode compartments; (ii) A microbial fuel cell (MFC) for cathodic perchlorate reduction, in which the theoretical impact of feed flow rates and concentrations on the overall system performance are investigated. This case involves multiple electrode reactions in series in the cathode compartment. The model structure captures interactions between important system variables based on first principles and provides a platform for the dynamic description of METs involving electrode reactions both in parallel and in series and in both MFC and MEC configurations. Such a theoretical modelling approach, largely based on first principles, appears promising in the development and testing of MET control and optimization strategies.
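Electrode reactions in MET models of this kind are commonly assigned Butler-Volmer kinetics, with parallel reactions at one electrode summing their currents at the shared electrode potential. A sketch with invented exchange current densities and overpotentials (the paper's actual rate expressions may differ):

```python
import math

F = 96485.0    # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol K)

def butler_volmer(i0, eta, alpha_a=0.5, alpha_c=0.5, temp=298.15):
    """Current density (A/m^2) of one electrode reaction as a function of
    its overpotential eta (V), via the standard Butler-Volmer rate law."""
    f = F / (R * temp)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

# Two hypothetical parallel anode reactions (e.g. two VFA oxidations) share
# the electrode potential; their currents simply add.
i_total = butler_volmer(1e-2, 0.10) + butler_volmer(5e-3, 0.05)
```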

  8. A Multiple Reaction Modelling Framework for Microbial Electrochemical Technologies

    PubMed Central

    Oyetunde, Tolutola; Sarma, Priyangshu M.; Ahmad, Farrukh; Rodríguez, Jorge

    2017-01-01

    A mathematical model for the theoretical evaluation of microbial electrochemical technologies (METs) is presented that incorporates a detailed physico-chemical framework, includes multiple reactions (both at the electrodes and in the bulk phase) and involves a variety of microbial functional groups. The model is applied to two theoretical case studies: (i) A microbial electrolysis cell (MEC) for continuous anodic volatile fatty acids (VFA) oxidation and cathodic VFA reduction to alcohols, for which the theoretical system response to changes in applied voltage and VFA feed ratio (anode-to-cathode) as well as membrane type are investigated. This case involves multiple parallel electrode reactions in both anode and cathode compartments; (ii) A microbial fuel cell (MFC) for cathodic perchlorate reduction, in which the theoretical impact of feed flow rates and concentrations on the overall system performance are investigated. This case involves multiple electrode reactions in series in the cathode compartment. The model structure captures interactions between important system variables based on first principles and provides a platform for the dynamic description of METs involving electrode reactions both in parallel and in series and in both MFC and MEC configurations. Such a theoretical modelling approach, largely based on first principles, appears promising in the development and testing of MET control and optimization strategies. PMID:28054959

  9. Renormalized reaction and relaxation rates for harmonic oscillator model

    NASA Astrophysics Data System (ADS)

    Gorbachev, Yuriy E.

    2017-07-01

    The thermal dissociation process is considered within the method of solving the kinetic equations for spatially inhomogeneous reactive gas mixtures developed in previous papers. For the harmonic oscillator model, explicit expressions for the reaction and relaxation rates are derived in renormalized form.

  10. Numerical modeling of humic colloid borne americium (III) migration in column experiments using the transport/speciation code K1D and the KICAM model.

    PubMed

    Schüssler, W; Artinger, R; Kim, J I; Bryan, N D; Griffin, D

    2001-02-01

    The humic colloid borne Am(III) transport was investigated in column experiments for Gorleben groundwater/sand systems. It was found that the interaction of Am with humic colloids is kinetically controlled, which strongly influences the migration behavior of Am(III). These kinetic effects have to be taken into account for transport/speciation modeling. The kinetically controlled availability model (KICAM) was developed to describe actinide sorption and transport in laboratory batch and column experiments. Application of the KICAM requires a chemical transport/speciation code, which simultaneously models both kinetically controlled processes and equilibrium reactions. Therefore, the code K1D was developed as a flexible research code that allows the inclusion of kinetic data in addition to transport features and chemical equilibrium. This paper presents the verification of K1D and its application to model column experiments investigating unimpeded humic colloid borne Am migration. Parameters for reactive transport simulations were determined for a Gorleben groundwater system of high humic colloid concentration (GoHy 2227). A single set of parameters was used to model a series of column experiments. Model results correspond well to experimental data for the unretarded humic borne Am breakthrough.
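The kernel of a kinetically controlled availability description is first-order exchange between an "available" species and a kinetically stabilized colloid-bound species, integrated alongside any equilibrium chemistry. A batch-system sketch with invented rate constants (not the fitted Gorleben values):

```python
# Hypothetical first-order rate constants (1/h) for Am(III) exchange between
# an available pool and a humic-colloid-stabilized pool; illustrative only.
K_AB = 0.5    # available -> kinetically stabilized
K_BA = 0.01   # stabilized -> available (slow back-reaction)

def integrate(t_end=200.0, dt=0.01):
    """Explicit Euler integration of the two-pool exchange kinetics."""
    a, b = 1.0, 0.0          # fractions: available vs. stabilized Am(III)
    t = 0.0
    while t < t_end:
        da = -K_AB * a + K_BA * b
        a += dt * da
        b -= dt * da         # mass conservation: what leaves a enters b
        t += dt
    return a, b

a_eq, b_eq = integrate()
# Analytical equilibrium fraction for comparison: b/(a+b) = K_AB/(K_AB+K_BA)
predicted = K_AB / (K_AB + K_BA)
```

Because the back-reaction is slow, most Am ends up in the stabilized pool, which is the behaviour that makes the colloid-bound fraction migrate essentially unretarded in the column.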

  11. Numerical modeling of humic colloid borne Americium (III) migration in column experiments using the transport/speciation code K1D and the KICAM model

    NASA Astrophysics Data System (ADS)

    Schüßler, W.; Artinger, R.; Kim, J. I.; Bryan, N. D.; Griffin, D.

    2001-02-01

    The humic colloid borne Am(III) transport was investigated in column experiments for Gorleben groundwater/sand systems. It was found that the interaction of Am with humic colloids is kinetically controlled, which strongly influences the migration behavior of Am(III). These kinetic effects have to be taken into account for transport/speciation modeling. The kinetically controlled availability model (KICAM) was developed to describe actinide sorption and transport in laboratory batch and column experiments. Application of the KICAM requires a chemical transport/speciation code, which simultaneously models both kinetically controlled processes and equilibrium reactions. Therefore, the code K1D was developed as a flexible research code that allows the inclusion of kinetic data in addition to transport features and chemical equilibrium. This paper presents the verification of K1D and its application to model column experiments investigating unimpeded humic colloid borne Am migration. Parameters for reactive transport simulations were determined for a Gorleben groundwater system of high humic colloid concentration (GoHy 2227). A single set of parameters was used to model a series of column experiments. Model results correspond well to experimental data for the unretarded humic borne Am breakthrough.

  12. Experimental study and nuclear model calculations of 3He-induced nuclear reactions on zinc

    NASA Astrophysics Data System (ADS)

    Al-Abyad, M.; Mohamed, Gehan Y.; Ditrói, F.; Takács, S.; Tárkányi, F.

    2017-05-01

    Excitation functions of 3He-induced nuclear reactions on natural zinc were measured using the standard stacked-foil technique and high-resolution gamma-ray spectrometry. From their threshold energies up to 27 MeV, the cross-sections for the natZn(3He,xn)69Ge, natZn(3He,xnp)66,67,68Ga, and natZn(3He,x)62,65Zn reactions were measured. The nuclear model codes TALYS-1.6, EMPIRE-3.2 and ALICE-IPPE were used to describe the formation of these products. The present data were compared with the theoretical results and with the available experimental data. Integral yields for some important radioisotopes were determined.
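Stacked-foil measurements rest on the standard activation equation relating a gamma-peak area to a cross-section. The self-consistency sketch below uses wholly illustrative beam, foil and decay parameters, none taken from this experiment:

```python
import math

def peak_counts(sigma, flux, n_atoms, eff, i_gamma, lam, t_irr, t_cool, t_meas):
    """Forward model: expected gamma-peak area for a cross-section sigma
    (cm^2), particle flux (1/cm^2/s), number of target atoms, detector
    efficiency, gamma intensity, and decay constant lam (1/s)."""
    return (sigma * n_atoms * flux * eff * i_gamma / lam
            * (1.0 - math.exp(-lam * t_irr))     # build-up during irradiation
            * math.exp(-lam * t_cool)            # decay while cooling
            * (1.0 - math.exp(-lam * t_meas)))   # decay integrated over counting

def cross_section(counts, flux, n_atoms, eff, i_gamma, lam, t_irr, t_cool, t_meas):
    """Invert the activation equation to recover the cross-section."""
    denom = (n_atoms * flux * eff * i_gamma
             * (1.0 - math.exp(-lam * t_irr))
             * math.exp(-lam * t_cool)
             * (1.0 - math.exp(-lam * t_meas)))
    return counts * lam / denom

# Round trip with purely hypothetical numbers
pars = dict(flux=1e10, n_atoms=1e20, eff=0.05, i_gamma=0.9,
            lam=math.log(2) / 3600.0, t_irr=7200.0, t_cool=1800.0, t_meas=3600.0)
sigma_true = 1e-27                    # 1 mb, hypothetical
sigma_rec = cross_section(peak_counts(sigma_true, **pars), **pars)
```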

  13. Modeling of transmittance degradation caused by optical surface contamination by atomic oxygen reaction with adsorbed silicones

    NASA Astrophysics Data System (ADS)

    Snyder, Aaron; Banks, Bruce A.; Miller, Sharon K.; Stueber, Thomas; Sechkar, Edward

    2000-09-01

    A numerical procedure is presented to calculate the transmittance degradation caused by contaminant films on spacecraft surfaces produced through the interaction of orbital atomic oxygen (AO) with volatile silicones and hydrocarbons from spacecraft components. In the model, contaminant accretion depends on the adsorption of species, depletion reactions due to gas-surface collisions, desorption, and surface reactions between AO and silicone producing SiOx (where x is near 2). A detailed description of the procedure used to calculate the constituents of the contaminant layer is presented, including the equations that govern the evolution of fractional coverage by species type. As an illustrative example of film growth, calculation results using a prototype code that computes the evolution of surface coverage by species type are presented and discussed. An example of the transmittance degradation caused by surface interaction of AO with deposited contaminant is presented for the case of an exponentially decaying contaminant flux. These examples use hypothetical values for the process parameters.
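The governing idea (coverage fractions driven by adsorption from a decaying flux, desorption, and AO-driven conversion to a permanent SiOx deposit) can be sketched as a two-species rate model. Like the paper's own examples, every parameter below is hypothetical, as is the toy optics at the end:

```python
import math

# Illustrative rate parameters (per second), not fitted values.
S0, TAU = 1e-3, 2e4        # initial silicone arrival rate and flux decay time
K_DES = 2e-4               # desorption rate of adsorbed silicone
K_OX = 5e-4                # AO-driven conversion of silicone to SiOx
ALPHA = 0.8                # absorptive loss per unit SiOx coverage (toy optics)

def grow_film(t_end=2e5, dt=10.0):
    """Euler-integrate fractional coverages of silicone and SiOx."""
    th_si, th_ox = 0.0, 0.0
    t = 0.0
    while t < t_end:
        arrival = S0 * math.exp(-t / TAU)            # exponentially decaying flux
        d_si = arrival * (1 - th_si - th_ox) - (K_DES + K_OX) * th_si
        d_ox = K_OX * th_si                          # SiOx is a permanent deposit
        th_si += dt * d_si
        th_ox += dt * d_ox
        t += dt
    return th_si, th_ox

th_si, th_ox = grow_film()
transmittance = math.exp(-ALPHA * th_ox)   # Beer-Lambert-style degradation
```

Once the contaminant flux has decayed, the volatile silicone fraction drains away (desorbed or oxidized), while the accumulated SiOx coverage permanently depresses the transmittance.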

  14. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced, thus simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. 
UCODE is intended for use on any computer operating
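UCODE's core loop (run a black-box application model, compute finite-difference sensitivities, take a Gauss-Newton step on a weighted least-squares objective) can be sketched in miniature. The model function, parameter values and data below are invented for illustration; UCODE itself exchanges these quantities through the model's text input and output files:

```python
import numpy as np

def application_model(p, x):
    """Stand-in for the black-box application model UCODE would execute:
    any code mapping parameters to simulated equivalents would do."""
    return p[0] * np.exp(-p[1] * x)

def gauss_newton(obs, x, p0, weights, tol=1e-10, max_iter=100):
    """Weighted nonlinear regression with forward-difference sensitivities,
    in the spirit of UCODE's modified Gauss-Newton scheme (sketch only)."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(max_iter):
        r = obs - application_model(p, x)                 # weighted residuals
        J = np.empty((len(x), len(p)))                    # sensitivity matrix
        for j in range(len(p)):                           # forward differences
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (application_model(p + dp, x)
                       - application_model(p, x)) / dp[j]
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)  # normal equations
        p += step
        if np.max(np.abs(step)) < tol:
            break
    return p

x = np.linspace(0.0, 4.0, 20)
true_p = np.array([2.0, 0.7])
obs = application_model(true_p, x)         # synthetic, noise-free observations
p_hat = gauss_newton(obs, x, p0=[1.5, 0.5], weights=np.ones(len(x)))
```

On this zero-residual problem the iteration recovers the generating parameters; with real, noisy observations the same machinery yields the estimates, residual statistics and sensitivities that UCODE reports.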

  15. UCODE, a computer code for universal inverse modeling

    NASA Astrophysics Data System (ADS)

    Poeter, Eileen P.; Hill, Mary C.

    1999-05-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced, thus simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. 
UCODE is intended for use on any computer operating

  16. Dense Coding in a Two-Spin Squeezing Model with Intrinsic Decoherence

    NASA Astrophysics Data System (ADS)

    Zhang, Bing-Bing; Yang, Guo-Hui

    2016-11-01

    Quantum dense coding in a two-spin squeezing model under intrinsic decoherence is investigated for different initial states (the Werner state and the Bell state). It is shown that the dense coding capacity χ oscillates with time and finally reaches different stable values. χ can be enhanced by decreasing the magnetic field Ω and the intrinsic decoherence γ or by increasing the squeezing interaction μ; moreover, one can obtain a valid dense coding capacity (χ > 1) by modulating these parameters. The stable value of χ reveals that decoherence cannot entirely destroy the dense coding capacity. In addition, decreasing Ω or increasing μ can not only enhance the stable value of χ but also weaken the effects of decoherence. When the initial state is the Werner state, the purity r of the initial state plays a key role in adjusting the value of the dense coding capacity, and χ can be significantly increased by improving the purity of the initial state. When the initial state is the Bell state, a spin squeezing interaction that is large compared with the magnetic field guarantees optimal dense coding. One cannot always achieve a valid dense coding capacity for the Werner state, while for the Bell state the dense coding capacity χ always remains greater than 1.
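The role of purity can be reproduced with the standard closed-form dense coding capacity χ = log2 d + S(ρ_B) − S(ρ_AB), assuming that formula (common in this literature) is the one intended; for two qubits log2 d = 1:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                    # drop numerically zero eigenvalues
    return float(-np.sum(w * np.log2(w)))

def dense_coding_capacity(rho):
    """chi = log2(d_A) + S(rho_B) - S(rho_AB) for a two-qubit state rho;
    chi > 1 signals an advantage over sending one classical bit."""
    rho_b = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)  # partial trace
    return 1.0 + entropy(rho_b) - entropy(rho)

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)   # |Phi+> Bell state
bell = np.outer(phi, phi)

def werner(r):
    """Werner state: purity parameter r mixes |Phi+> with white noise."""
    return r * bell + (1.0 - r) * np.eye(4) / 4.0

chi_bell = dense_coding_capacity(bell)           # maximally entangled: chi = 2
chi_w_low = dense_coding_capacity(werner(0.5))   # low purity: chi < 1
chi_w_high = dense_coding_capacity(werner(0.9))  # high purity: chi > 1
```

Raising the Werner purity r pushes χ from below 1 (no valid dense coding) to above 1, matching the abstract's observation that the purity of the initial state controls whether a valid capacity is achievable.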

  17. Code modernization and modularization of APEX and SWAT watershed simulation models

    USDA-ARS?s Scientific Manuscript database

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  18. Modeling the Dynamics of Chemical Reactions Involving Multidimensional Tunneling

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Ping

    The direct dynamics approach is employed to study prototype reactions including hydrogen and hydride transfer. The dynamics are treated with variational transition state theory including multidimensional semiclassical tunneling corrections, and the force field is modeled with semiempirical molecular orbital theory. The primary kinetic isotope effect for the (1,5) sigmatropic rearrangement reaction of cis-1,3-pentadiene is predicted and compared to experiment. The force field is obtained by molecular orbital theory with the AM1, PM3, and MINDO/3 parameterizations. The kinetic isotope effects calculated with the MINDO/3 and PM3 Hamiltonians agree with those calculated by AM1 within 13%, and the latter agree with experiment within 13%. The tunneling contributions to the kinetic isotope effects are analyzed, and the nature of the vibrationally assisted tunneling process is elucidated. The kinetic isotope effects of the reactions of CF_3 with CD_3H are studied including all internal degrees of freedom. The force field necessary for the dynamics calculations is evaluated using the neglect of diatomic differential overlap (NDDO) molecular orbital theory with semiempirical specific -reaction parameters (SRP), which are based on the standard AM1 parameterization adjusted to improve the agreement between experiment and the calculated quantities such as the vibrational frequencies of reactants and products and the classical barrier. The kinetic isotope effects are calculated using two different SRP force fields, and they are in good agreement with the experimental measurements. The picture of the corner cutting tunneling process that emerges is discussed graphically. The two NDDO-SRP models are further used to study the hydrogen abstraction reactions of CF_3 with CH_4, CD_4, and C_2 H_6, and very good agreement with experiment is obtained. 
Finally, a simple model hydride transfer reaction of formic acid is investigated using the AM1 and PM3 Hamiltonians, and the results are
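For a rough sense of how tunneling inflates a kinetic isotope effect, the lowest-order Wigner correction can stand in for the multidimensional semiclassical treatment used in the thesis (it captures only the leading effect and fails for deep tunneling). The barrier frequencies below are hypothetical:

```python
import math

KB_CM = 0.6950       # Boltzmann constant in cm^-1 per kelvin (k_B / h c)

def wigner_kappa(freq_cm, temp_k):
    """Wigner tunneling correction kappa = 1 + (hbar*omega / k_B T)^2 / 24,
    with the imaginary barrier frequency given in cm^-1. This is only the
    leading-order correction, not the multidimensional semiclassical
    tunneling used in the thesis."""
    u = freq_cm / (KB_CM * temp_k)     # hbar*omega / k_B T, dimensionless
    return 1.0 + u * u / 24.0

# Hypothetical barrier frequencies: D transfer scaled by ~1/sqrt(2) vs H
kappa_h = wigner_kappa(1000.0, 300.0)
kappa_d = wigner_kappa(1000.0 / math.sqrt(2.0), 300.0)
kie_tunneling = kappa_h / kappa_d     # tunneling contribution to the KIE
```

Even at this crude level, the lighter isotope tunnels more, so the correction multiplies the classical KIE by a factor greater than one, the effect the thesis quantifies properly with corner-cutting semiclassical paths.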

  19. Kinetic modelling of GlmU reactions - prioritization of reaction for therapeutic application.

    PubMed

    Singh, Vivek K; Das, Kaveri; Seshadri, Kothandaraman

    2012-01-01

    Mycobacterium tuberculosis (Mtu), a successful pathogen, has developed resistance against the existing anti-tubercular drugs, necessitating the discovery of drugs with novel action. Enzymes involved in peptidoglycan biosynthesis are attractive targets for antibacterial drug discovery. The bifunctional enzyme mycobacterial GlmU (glucosamine-1-phosphate N-acetyltransferase / N-acetylglucosamine-1-phosphate uridyltransferase) has been a target enzyme for drug discovery. Its C- and N-terminal domains catalyze the acetyltransferase (rxn-1) and uridyltransferase (rxn-2) activities respectively, and the final product is involved in peptidoglycan synthesis. However, the bifunctional nature of GlmU poses difficulty in deciding which function should be intervened for therapeutic advantage. Genetic analysis showed this to be an essential gene, but it is still unclear whether one or both of the activities are critical for cell survival. Often the enzymatic activity with a suitable high-throughput assay is chosen for random screening, which may not be the appropriate biological function to inhibit for maximal effect. Prediction of the rate-limiting function by dynamic network analysis of the reactions could be an option to identify the appropriate function. With a view to providing insights into biochemical assays with the appropriate activity for inhibitor screening, kinetic modelling studies on GlmU were undertaken. A kinetic model of the Mtu GlmU-catalyzed reactions was built based on the available kinetic data on Mtu and deductions from Escherichia coli data. Several model variants were constructed, including coupled/decoupled reactions, varying metabolite concentrations and the presence/absence of product inhibition. This study demonstrates that in the coupled model at low metabolite concentrations, inhibition of either of the GlmU reactions causes a significant decrement in the overall GlmU rate. However, at higher metabolite concentrations, rxn-2 showed a higher decrement. Moreover, with available intracellular concentration of the
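Which of two sequential reactions is worth inhibiting can be probed with a toy two-step Michaelis-Menten pathway. All constants below are invented, so the flux-control outcome illustrates the method only, not the paper's Mtu result (which depends on the fitted GlmU parameters and metabolite levels):

```python
def mm_rate(vmax, km, s):
    """Michaelis-Menten rate."""
    return vmax * s / (km + s)

def pathway_flux(v1max, v2max, s=10.0, k1=2.0, k2=1.0, dt=0.01, steps=200000):
    """Steady-state flux through rxn-1 -> intermediate -> rxn-2, obtained by
    integrating the intermediate pool to (near) steady state. All kinetic
    constants are hypothetical, not the Mtu GlmU parameters."""
    i_pool, flux = 0.0, 0.0
    for _ in range(steps):
        v1 = mm_rate(v1max, k1, s)        # substrate held constant
        v2 = mm_rate(v2max, k2, i_pool)   # consumes the intermediate
        i_pool += dt * (v1 - v2)
        flux = v2
    return flux

base = pathway_flux(1.0, 2.0)
inhibit_rxn1 = pathway_flux(0.5, 2.0)   # 50% inhibition of the first step
inhibit_rxn2 = pathway_flux(1.0, 1.0)   # 50% inhibition of the second step
```

Under these made-up constants the first step carries all the flux control: halving its Vmax halves the pathway flux, while halving the second step's Vmax barely changes it because the intermediate pool simply rises to compensate. This parameter dependence is exactly why the paper's dynamic analysis is needed to rank rxn-1 versus rxn-2.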

  20. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    SciTech Connect

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  1. Comparing the line broadened quasilinear model to Vlasov code

    SciTech Connect

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-15

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009); M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve better agreement with the results of the Vlasov solver, both in regard to a mode amplitude's time evolution to a saturated state and in regard to its final steady-state amplitude, within the parameter space of the model's applicability. However, the regions of stability predicted by the LBQ model and by BOT are found to differ significantly from each other: the BOT simulations exhibit a larger region of instability than the LBQ simulations.

  2. Engine structures modeling software system: Computer code. User's manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.

  3. Comparing the line broadened quasilinear model to Vlasov code

    NASA Astrophysics Data System (ADS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve better agreement with the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and to its final steady-state amplitude, within the parameter space of the model's applicability. However, the regions of stability predicted by the LBQ model and by BOT are found to differ significantly from each other: the BOT simulations exhibit a larger region of instability than the LBQ simulations.

  4. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    SciTech Connect

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  5. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot easily be migrated to run on a GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity, at low cost for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maɪlʌv]. We do not support the use of the code for military purposes.

  6. Complex reaction noise in a molecular quasispecies model

    NASA Astrophysics Data System (ADS)

    Hochberg, David; Zorzano, María-Paz; Morán, Federico

    2006-05-01

    We have derived exact Langevin equations for a model of quasispecies dynamics. The inherent multiplicative reaction noise is complex and its statistical properties are specified completely. The numerical simulation of the complex Langevin equations is carried out using the Cholesky decomposition for the noise covariance matrix. This internal noise, which is due to diffusion-limited reactions, produces unavoidable spatio-temporal density fluctuations about the mean field value. In two dimensions, this noise strictly vanishes only in the perfectly mixed limit, a situation difficult to attain in practice.
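    The Cholesky construction mentioned above can be sketched briefly: to generate noise with a prescribed covariance C, factor C = L·Lᵀ and multiply independent unit Gaussians by L. The covariance matrix below is illustrative, not taken from the paper.

```python
import numpy as np

# Sketch: correlated reaction noise via Cholesky decomposition of an
# (illustrative) noise covariance matrix C. Requires C symmetric
# positive definite.
rng = np.random.default_rng(0)

C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])

L = np.linalg.cholesky(C)          # C = L @ L.T

# Independent unit-variance Gaussians -> correlated noise with covariance C
xi = rng.standard_normal((3, 100_000))
eta = L @ xi

empirical = np.cov(eta)            # should approximate C
print(np.allclose(empirical, C, atol=0.05))
```

    The same factorization applies when the covariance varies in space or time, at the cost of one decomposition per evaluation.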

  7. Once-through CANDU reactor models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; Bjerke, M.A.

    1980-11-01

    Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.

  8. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
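    A toy version of the pairwise maximum-entropy idea described above can be written down directly for a small population, where the distribution over binary spike words can be enumerated. The fields h and couplings J below are arbitrary illustrative values; the paper's SDME model additionally makes the fields stimulus-dependent.

```python
import numpy as np
from itertools import product

# Toy pairwise (Ising-like) maximum-entropy model over binary spike words:
# P(sigma) proportional to exp(h.sigma + 0.5 * sigma^T J sigma).
rng = np.random.default_rng(2)
n = 5
h = rng.normal(size=n)
J = rng.normal(scale=0.3, size=(n, n))
J = (J + J.T) / 2                  # symmetric couplings
np.fill_diagonal(J, 0.0)           # no self-coupling

words = np.array(list(product([0, 1], repeat=n)), dtype=float)
energy = words @ h + 0.5 * np.einsum('wi,ij,wj->w', words, J, words)
P = np.exp(energy)
P /= P.sum()                       # normalize over all 2**n codewords

print(len(P), P.sum())
```

    For real populations the normalization cannot be enumerated and must be estimated, which is where the sampling machinery of such papers comes in.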

  9. New higher-order Godunov code for modelling performance of two-stage light gas guns

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.; Miller, R. J.

    1995-01-01

    A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
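    The Godunov idea underlying the gas-gun code can be illustrated on a far simpler problem than the one above: first-order linear advection, where the exact Riemann solution at each cell interface is just the upwind value. This sketch is an assumption-laden illustration, not the third-order scheme of the paper.

```python
import numpy as np

# First-order Godunov (upwind) scheme for u_t + a u_x = 0, periodic domain.
a, L, N = 1.0, 1.0, 200
dx = L / N
dt = 0.5 * dx / a                     # CFL number 0.5
x = (np.arange(N) + 0.5) * dx
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse

for _ in range(100):
    # Riemann solution at interface i-1/2 for a > 0: the left (upwind) state
    flux_left = a * np.roll(u, 1)
    u = u - dt / dx * (a * u - flux_left)

# The scheme is monotone: no new extrema, and mass is conserved exactly.
print(u.min(), u.max(), u.sum())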

  10. On models of the genetic code generated by binary dichotomic algorithms.

    PubMed

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher.
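    The basic 32/32 split described above is easy to sketch. This is an illustrative stand-in, not the authors' Beady-A tool: the dichotomic question here (is the base at a given position a strong, G/C base?) is one simple example of a BDA-style partition.

```python
from itertools import product

# All 64 RNA codons.
BASES = "ACGU"
codons = ["".join(c) for c in product(BASES, repeat=3)]

def dichotomy(codon, position=0, strong=frozenset("GC")):
    """Example dichotomic question: is the base at `position` strong (G/C)?"""
    return codon[position] in strong

class0 = [c for c in codons if dichotomy(c)]
class1 = [c for c in codons if not dichotomy(c)]
print(len(codons), len(class0), len(class1))   # 64 32 32
```

    Applying several such questions sequentially refines the partition, which is how the model reaches class counts between 2 and 64.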

  11. Code package "SVECHA": Modeling of core degradation phenomena at severe accidents

    SciTech Connect

    Veshchunov, M.S.; Kisselev, A.E.; Palagin, A.V.

    1995-09-01

    The code package SVECHA for the modeling of in-vessel core degradation (CD) phenomena in severe accidents is being developed in the Nuclear Safety Institute, Russian Academy of Science (NSI RAS). The code package presents a detailed mechanistic description of the phenomenology of severe accidents in a reactor core. The modules of the package were developed and validated on separate effect test data. These modules were then successfully implemented in the ICARE2 code and validated against a wide range of integral tests. Validation results have shown good agreement with separate effect tests data and with the integral tests CORA-W1/W2, CORA-13, PHEBUS-B9+.

  12. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    SciTech Connect

    Smith, A.B.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS work station and/or the IBM-compatible personal computer.

  13. Spatiotemporal patterns in a reaction-diffusion model with the Degn-Harrison reaction scheme

    NASA Astrophysics Data System (ADS)

    Peng, Rui; Yi, Feng-qi; Zhao, Xiao-qiang

    Spatial and temporal patterns generated in ecological and chemical systems have become a central object of research in recent decades. In this work, we are concerned with a reaction-diffusion model with the Degn-Harrison reaction scheme, which accounts for the qualitative features of the respiratory process in a Klebsiella aerogenes bacterial culture. We study the global stability of the constant steady state, existence and nonexistence of nonconstant steady states, as well as the Hopf and steady state bifurcations. In particular, our results show the existence of Turing patterns and inhomogeneous periodic oscillatory patterns even though the system parameters are all spatially homogeneous. These results also exhibit the critical role of the system parameters in leading to the formation of spatiotemporal patterns.
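    A minimal numerical sketch of the kind of reaction-diffusion dynamics discussed above, using a generic Fisher-KPP reaction term u(1-u) rather than the Degn-Harrison scheme itself, shows how a spatial structure (here a traveling front) emerges from a homogeneous-parameter model:

```python
import numpy as np

# Explicit finite differences for u_t = D u_xx + u(1 - u) on [0, 10]
# with no-flux ends. Parameters are illustrative; stability needs
# dt < dx**2 / (2 D).
D, dx, dt = 1.0, 0.1, 0.004
N, steps = 100, 1000
u = np.zeros(N)
u[:10] = 1.0                          # initial front on the left

for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2    # one-sided (no-flux) ends
    lap[-1] = (u[-2] - u[-1]) / dx**2
    u = u + dt * (D * lap + u * (1 - u))

print(u.min(), u.mean())              # front has invaded rightward
```

    Turing patterns as in the paper require at least two species with unequal diffusivities, but the time-stepping machinery is the same.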

  14. Summary of the models and methods for the FEHM application - a finite-element heat- and mass-transfer code

    SciTech Connect

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The mathematical models and numerical methods employed by the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multi-component flow in porous media, are described. The use of this code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The component models of FEHM are discussed. The first major component, Flow- and Energy-Transport Equations, deals with heat conduction; heat and mass transfer with pressure- and temperature-dependent properties, relative permeabilities and capillary pressures; isothermal air-water transport; and heat and mass transfer with noncondensible gas. The second component, Dual-Porosity and Double-Porosity/Double-Permeability Formulation, is designed for problems dominated by fracture flow. Another component, The Solute-Transport Models, includes both a reactive-transport model that simulates transport of multiple solutes with chemical reaction and a particle-tracking model. Finally, the component, Constitutive Relationships, deals with pressure- and temperature-dependent fluid/air/gas properties, relative permeabilities and capillary pressures, stress dependencies, and reactive and sorbing solutes. Each of these components is discussed in detail, including purpose, assumptions and limitations, derivation, applications, numerical method type, derivation of numerical model, location in the FEHM code flow, numerical stability and accuracy, and alternative approaches to modeling the component.

  15. Implementation of a new model for gravitational collision cross sections in nuclear aerosol codes

    SciTech Connect

    Buckley, R.L.; Loyalka, S.K.

    1995-03-01

    Models currently used in aerosol source codes for the gravitational collision efficiency are deficient in not accounting fully for two-particle hydrodynamics (interception and inertia), which become important for larger particles. A computer code that accounts for these effects in calculating particle trajectories is used to find values of the efficiency for a range of particle sizes. Simple fits to these data as a function of large-particle diameter for a given particle diameter ratio are then obtained using standard linear regression, and a new model is constructed. This model is then implemented in two computer codes, AEROMECH and CONTAIN, Version 1.2. For a test problem, concentration distributions obtained with the new model and with the standard efficiency model are found to be markedly different.
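    The fitting step described above can be sketched with synthetic numbers (the data points below are made up, not from the paper): given computed efficiencies versus large-particle diameter at one diameter ratio, a simple power-law model is obtained by linear regression in log space.

```python
import numpy as np

# Synthetic efficiency data versus large-particle diameter (micrometres).
d_large = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
eff = np.array([0.01, 0.03, 0.10, 0.22, 0.45])

# Fit log(eff) = a*log(d) + b, i.e. eff ~ exp(b) * d**a.
a, b = np.polyfit(np.log(d_large), np.log(eff), 1)
print(a, b)
```

    A family of such fits, one per diameter ratio, gives a fast lookup model suitable for insertion into an aerosol code.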

  16. Savannah River Laboratory DOSTOMAN code: a compartmental pathways computer model of contaminant transport

    SciTech Connect

    King, C M; Wilhite, E L; Root, Jr, R W; Fauth, D J; Routt, K R; Emslie, R H; Beckmeyer, R R; Fjeld, R A; Hutto, G A; Vandeven, J A

    1985-01-01

    The Savannah River Laboratory DOSTOMAN code has been used since 1978 for environmental pathway analysis of potential migration of radionuclides and hazardous chemicals. The DOSTOMAN work is reviewed including a summary of historical use of compartmental models, the mathematical basis for the DOSTOMAN code, examples of exact analytical solutions for simple matrices, methods for numerical solution of complex matrices, and mathematical validation/calibration of the SRL code. The review includes the methodology for application to nuclear and hazardous chemical waste disposal, examples of use of the model in contaminant transport and pathway analysis, a user's guide for computer implementation, peer review of the code, and use of DOSTOMAN at other Department of Energy sites. 22 refs., 3 figs.
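    Compartmental pathway models of this kind reduce to linear ODE systems with exact matrix-exponential solutions, which is what the "exact analytical solutions for simple matrices" above refers to. The two-compartment chain and rate constants below are illustrative, not DOSTOMAN's.

```python
import numpy as np
from scipy.linalg import expm

# Two compartments in series: 1 -> 2 -> out, first-order transfers.
# dx/dt = A x has the exact solution x(t) = expm(A t) x0.
k12, k2out = 0.3, 0.1               # illustrative rates (per year)
A = np.array([[-k12,  0.0],
              [ k12, -k2out]])

x0 = np.array([100.0, 0.0])          # initial inventory (arbitrary units)
t = 5.0
x = expm(A * t) @ x0

# Compartment 1 decays independently: x1(t) = 100 * exp(-k12 t).
# Total inventory can only decrease (loss through k2out).
print(x, x.sum())
```

    For large or stiff matrices the same model is solved numerically, which is the path such codes take for complex pathway networks.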

  17. A model of a code of ethics for tissue banks operating in developing countries.

    PubMed

    Morales Pedraza, Jorge

    2012-12-01

    Ethical practice in the field of tissue banking requires the setting of principles, the identification of possible deviations and the establishment of mechanisms that will detect and hinder abuses that may occur during the procurement, processing and distribution of tissues for transplantation. This model of a Code of Ethics has been prepared for use in elaborating a Code of Ethics for tissue banks operating in the Latin America and the Caribbean, Asia and the Pacific, and African regions, in order to guide the day-to-day operation of these banks. The purpose of this model Code of Ethics is to assist interested tissue banks in preparing their own Code of Ethics, towards ensuring that tissue bank staff support with their actions the mission and values associated with tissue banking.

  18. User manual for ATILA, a finite-element code for modeling piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Decarpigny, Jean-Noel; Debus, Jean-Claude

    1987-09-01

    This manual for the user of the finite-element code ATILA provides instruction for entering information and running the code on a VAX computer. The manual does not include the code. The finite-element code ATILA has been specifically developed to aid the design of piezoelectric devices, mainly for sonar applications. Thus, it is able to perform the modal analyses of both axisymmetrical and fully three-dimensional piezoelectric transducers. It can also provide their harmonic response under radiating conditions: nearfield and farfield pressure, transmitting voltage response, directivity pattern, electrical impedance, as well as displacement field, nodal plane positions, stress field and various stress criteria. Its accuracy and its ability to describe the physical behavior of various transducers (Tonpilz transducers, double headmass symmetrical length expanders, free flooded rings, flextensional transducers, bender bars, cylindrical and trilaminar hydrophones...) have been checked by modelling more than twenty different structures and comparing numerical and experimental results.

  19. Description of codes and models to be used in risk assessment

    SciTech Connect

    Not Available

    1991-09-01

    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer-encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used whenever fate and transport analysis is required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environmental-related modeling at the Hanford Site and provide background for future model development. The document also summarizes the modeling capabilities that are desirable for the Hanford Site and the codes that were evaluated. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site, and provide the rationale for the codes selected. 27 refs., 3 figs., 1 tab.

  20. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
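    The sensitivity coefficients described above can be illustrated on the smallest possible mechanism. This is a hedged sketch, not LSENS itself: for a single first-order reaction A → B with rate coefficient k, the kinetics ODE is augmented with the sensitivity s = d[A]/dk and both are integrated together, exactly as sensitivity codes do for much larger mechanisms.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, A0 = 2.0, 1.0

def rhs(t, y):
    A, s = y
    dA = -k * A          # kinetics:     dA/dt = -k A
    ds = -A - k * s      # sensitivity:  ds/dt = d(dA/dt)/dk = -A - k s
    return [dA, ds]

sol = solve_ivp(rhs, (0.0, 1.0), [A0, 0.0], rtol=1e-10, atol=1e-12)
A_end, s_end = sol.y[:, -1]

# Analytic check: A = A0 e^{-kt}, so dA/dk = -A0 t e^{-kt}.
print(A_end, s_end)
```

    The augmented-system approach scales to full mechanisms by differentiating the right-hand side with respect to each rate parameter.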

  1. Extension of the Liège intranuclear-cascade model to reactions induced by light nuclei

    NASA Astrophysics Data System (ADS)

    Mancusi, Davide; Boudard, Alain; Cugnon, Joseph; David, Jean-Christophe; Kaitaniemi, Pekka; Leray, Sylvie

    2014-11-01

    The purpose of this paper is twofold. First, we present the extension of the Liège intranuclear-cascade model to reactions induced by light ions. We describe here the ideas upon which we built our treatment of nucleus-nucleus reactions and we compare the model predictions against a vast set of heterogeneous experimental data. In spite of the discussed limitations of the intranuclear-cascade scheme, we find that our model yields valid predictions for a number of observables and positions itself as one of the most attractive alternatives available to geant4 users for the simulation of light-ion-induced reactions. Second, we describe the c++ version of the code, which is physics-wise equivalent to the legacy version, is available in geant4, and will serve as the basis for all future development of the model.

  2. Fluid dynamic modeling of nano-thermite reactions

    SciTech Connect

    Martirosyan, Karen S.; Zyskin, Maxim; Jenkins, Charles M.; Horie, Yasuyuki

    2014-03-14

    This paper presents a direct numerical method based on gas dynamic equations to predict pressure evolution during the discharge of nanoenergetic materials. The direct numerical method provides for modeling reflections of the shock waves from the reactor walls that generate pressure-time fluctuations. The results of gas pressure prediction are consistent with the experimental evidence and with estimates based on the self-similar solution. Artificial viscosity provides sufficient smoothing of the shock wave discontinuity for the numerical procedure. The direct numerical method is more computationally demanding but also more flexible than the self-similar solution; in particular, it allows study of a shock wave in its early stage of reaction and the investigation of “slower” reactions, which may produce weaker shock waves. Moreover, numerical results indicate that the peak pressure is not very sensitive to the initial density and reaction time, provided that all the material reacts well before the shock wave arrives at the end of the reactor.

  3. Simple model for lambda-doublet propensities in bimolecular reactions

    NASA Technical Reports Server (NTRS)

    Bronikowski, Michael J.; Zare, Richard N.

    1990-01-01

    A simple geometric model is presented to account for lambda-doublet propensities in bimolecular reactions A + BC → AB + C. It applies to reactions in which AB is formed in a pi state, and in which the unpaired molecular orbital responsible for lambda-doubling arises from breaking the B-C bond. The lambda-doublet population ratio is predicted to be 2:1 provided that: (1) the motion of A in the transition state determines the plane of rotation of AB; (2) the unpaired pi orbital lying initially along the B-C bond may be resolved into a projection onto the AB plane of rotation and a projection perpendicular to this plane; (3) there is no preferred geometry for dissociation of ABC. The 2:1 lambda-doublet ratio is the 'unconstrained dynamics prior' lambda-doublet distribution for such reactions.

  4. Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples

    SciTech Connect

    Lorence, L.J. Jr.; Beutler, D.E.

    1997-09-01

    This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices.

  5. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  6. Modelling biochemical reaction systems by stochastic differential equations with reflection.

    PubMed

    Niu, Yuanling; Burrage, Kevin; Chen, Luonan

    2016-05-07

    In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, constructed in a mathematical rather than heuristic way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to previous models. The domain D is obtained from the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method is employed to solve the reflected stochastic differential equation model, and it comprises three simple steps: the Euler-Maruyama method is applied to the equations first; the resulting point is then checked for membership in the domain D; and if it lies outside, an orthogonal projection is performed. It is found that the projection onto the closure D¯ is the solution to a convex quadratic programming problem, so existing methods for convex quadratic programming can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach.
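    The three-step scheme described above (Euler-Maruyama step, domain check, orthogonal projection) can be sketched for a one-species birth-death system, where the physical domain is D = [0, ∞) and the convex-QP projection reduces to clipping. The rates and step sizes below are illustrative, not from the paper.

```python
import numpy as np

# Reflected chemical Langevin equation for births (rate b) and deaths
# (rate d*x), one independent Wiener increment per reaction channel.
rng = np.random.default_rng(1)
b, d = 5.0, 0.5
x, dt, T = 10.0, 1e-3, 10.0

for _ in range(int(T / dt)):
    a1, a2 = b, d * x                      # reaction propensities
    drift = (a1 - a2) * dt
    noise = (np.sqrt(a1) * rng.standard_normal()
             - np.sqrt(a2) * rng.standard_normal()) * np.sqrt(dt)
    x = x + drift + noise                  # Euler-Maruyama step
    x = max(x, 0.0)                        # projection onto D = [0, inf)

print(x)
```

    For box constraints the projection is componentwise clipping; for general polyhedral domains it becomes the quadratic program mentioned in the abstract.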

  7. Classic and contemporary approaches to modeling biochemical reactions

    PubMed Central

    Chen, William W.; Niepel, Mario; Sorger, Peter K.

    2010-01-01

    Recent interest in modeling biochemical networks raises questions about the relationship between often complex mathematical models and familiar arithmetic concepts from classical enzymology, and also about connections between modeling and experimental data. This review addresses both topics by familiarizing readers with key concepts (and terminology) in the construction, validation, and application of deterministic biochemical models, with particular emphasis on a simple enzyme-catalyzed reaction. Networks of coupled ordinary differential equations (ODEs) are the natural language for describing enzyme kinetics in a mass action approximation. We illustrate this point by showing how the familiar Briggs-Haldane formulation of Michaelis-Menten kinetics derives from the outer (or quasi-steady-state) solution of a dynamical system of ODEs describing a simple reaction under special conditions. We discuss how parameters in the Michaelis-Menten approximation and in the underlying ODE network can be estimated from experimental data, with a special emphasis on the origins of uncertainty. Finally, we extrapolate from a simple reaction to complex models of multiprotein biochemical networks. The concepts described in this review, hitherto of interest primarily to practitioners, are likely to become important for a much broader community of cellular and molecular biologists attempting to understand the promise and challenges of “systems biology” as applied to biochemical mechanisms. PMID:20810646
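    The quasi-steady-state derivation summarized above is easy to check numerically. The sketch below (with illustrative parameter values) integrates the full mass-action system E + S ⇌ ES → E + P and compares the instantaneous product formation rate against the Briggs-Haldane prediction v = Vmax·S/(Km + S).

```python
import numpy as np
from scipy.integrate import solve_ivp

kf, kr, kcat = 10.0, 1.0, 1.0
E0, S0 = 0.1, 10.0                  # E0 << S0, so the QSSA should hold
Km = (kr + kcat) / kf
Vmax = kcat * E0

def mass_action(t, y):
    E, S, ES, P = y
    v1 = kf * E * S - kr * ES       # binding/unbinding
    v2 = kcat * ES                  # catalysis
    return [-v1 + v2, -v1, v1 - v2, v2]

sol = solve_ivp(mass_action, (0, 50), [E0, S0, 0, 0],
                rtol=1e-9, atol=1e-12, dense_output=True)

# After the fast transient, dP/dt = kcat*[ES] should track Vmax*S/(Km+S).
E, S, ES, P = sol.sol(10.0)
v_exact = kcat * ES
v_mm = Vmax * S / (Km + S)
print(v_exact, v_mm)
```

    The agreement degrades as E0 approaches S0 + Km, which is exactly the regime where the review warns the quasi-steady-state outer solution breaks down.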

  8. Modeling of tungsten transport in the linear plasma device PSI-2 with the 3D Monte-Carlo code ERO

    NASA Astrophysics Data System (ADS)

    Marenkov, E.; Eksaeva, A.; Borodin, D.; Kirschner, A.; Laengner, M.; Kurnaev, V.; Kreter, A.; Coenen, J. W.; Rasinski, M.

    2015-08-01

    The ERO code was modified for modeling plasma-surface interactions and impurity transport in the PSI-2 installation. Results of experiments on tungsten target irradiation with argon plasma were taken as a benchmark for the new version of the code. Spectroscopy data modeled with the code are in good agreement with the experimental ones. The main factors contributing to the observed discrepancies are discussed.

  9. A model for reaction-assisted polymer dissolution in LIGA.

    SciTech Connect

    Larson, Richard S.

    2004-05-01

    A new chemically oriented mathematical model for the development step of the LIGA process is presented. The key assumption is that the developer can react with the polymeric resist material in order to increase the solubility of the latter, thereby partially overcoming the need to reduce the polymer size. The ease with which this reaction takes place is assumed to be determined by the number of side chain scissions that occur during the x-ray exposure phase of the process. The dynamics of the dissolution process are simulated by solving the reaction-diffusion equations for this three-component, two-phase system, the three species being the unreacted and reacted polymers and the solvent. The mass fluxes are described by the multicomponent diffusion (Stefan-Maxwell) equations, and the chemical potentials are assumed to be given by the Flory-Huggins theory. Sample calculations are used to determine the dependence of the dissolution rate on key system parameters such as the reaction rate constant, polymer size, solid-phase diffusivity, and Flory-Huggins interaction parameters. A simple photochemistry model is used to relate the reaction rate constant and the polymer size to the absorbed x-ray dose. The resulting formula for the dissolution rate as a function of dose and temperature is fit to an extensive experimental database in order to evaluate a set of unknown global parameters. The results suggest that reaction-assisted dissolution is very important at low doses and low temperatures, the solubility of the unreacted polymer being too small for it to be dissolved at an appreciable rate. However, at high doses or at higher temperatures, the solubility is such that the reaction is no longer needed, and dissolution can take place via the conventional route. These results provide an explanation for the observed dependences of both the dissolution rate and its activation energy on the absorbed dose.
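
    The coupling described above, a developer that diffuses into the resist while reacting with it, can be illustrated with a deliberately simplified one-dimensional toy model: Fickian diffusion plus a bilinear reaction term stand in for the paper's Stefan-Maxwell fluxes and Flory-Huggins thermodynamics, and all parameter values are hypothetical:

```python
# Toy 1-D stand-in for the reaction-assisted dissolution model: developer
# c(x,t) diffuses into the film and converts unreacted polymer u(x,t) to a
# soluble form at rate k*c*u. Everything here is illustrative only.
D, k = 1.0, 5.0                      # developer diffusivity, reaction rate
nx, dx, dt, n_steps = 50, 0.02, 5e-5, 4000

c = [0.0] * nx; c[0] = 1.0           # developer at unit activity at surface
u = [1.0] * nx                       # film initially all unreacted polymer

for _ in range(n_steps):
    lap = [0.0] * nx
    for i in range(1, nx - 1):
        lap[i] = (c[i - 1] - 2 * c[i] + c[i + 1]) / dx ** 2
    lap[-1] = (c[-2] - c[-1]) / dx ** 2        # no-flux deep boundary
    for i in range(nx):
        r = k * c[i] * u[i]                    # reaction-assisted conversion
        u[i] -= r * dt
        if i > 0:                              # c[0] held fixed (Dirichlet)
            c[i] += (D * lap[i] - r) * dt

converted = 1.0 - sum(u) / nx                  # fraction made soluble so far
print(f"fraction of polymer converted: {converted:.2f}")
```

    Even in this stripped-down form, conversion proceeds as a front: polymer near the developer interface reacts first, while deep material stays untouched until the diffusing developer reaches it.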

  10. Modeling Spatiotemporal Contextual Dynamics with Sparse-Coded Transfer Learning

    DTIC Science & Technology

    2012-08-08

    this work is the idea that causality of action units can be encoded as a Probabilistic Suffix Tree (PST) with variable temporal scale, while the...it can encode richer and more flexible causal relationships. Here, we model complex human activity as a Probabilistic Suffix Tree (PST) which

  11. A model study of sequential enzyme reactions and electrostatic channeling.

    PubMed

    Eun, Changsun; Kekenes-Huskey, Peter M; Metzger, Vincent T; McCammon, J Andrew

    2014-03-14

    We study models of two sequential enzyme-catalyzed reactions as a basic functional building block for coupled biochemical networks. We investigate the influence of enzyme distributions and long-range molecular interactions on reaction kinetics, which have been exploited in biological systems to maximize metabolic efficiency and signaling effects. Specifically, we examine how the maximal rate of product generation in a series of sequential reactions is dependent on the enzyme distribution and the electrostatic composition of its participant enzymes and substrates. We find that close proximity between enzymes does not guarantee optimal reaction rates, as the benefit of decreasing enzyme separation is countered by the volume excluded by adjacent enzymes. We further quantify the extent to which the electrostatic potential increases the efficiency of transferring substrate between enzymes, which supports the existence of electrostatic channeling in nature. Here, a major finding is that the role of attractive electrostatic interactions in confining intermediate substrates in the vicinity of the enzymes can contribute more to net reactive throughput than the directional properties of the electrostatic fields. These findings shed light on the interplay of long-range interactions and enzyme distributions in coupled enzyme-catalyzed reactions, and their influence on signaling in biological systems.
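
    The "basic functional building block" above can be written down concretely. A minimal rate-equation sketch of two sequential Michaelis-Menten reactions S → I → P (ignoring the spatial organization and electrostatics that are the paper's focus) looks like this; all constants are illustrative:

```python
# Two sequential Michaelis-Menten reactions: S --E1--> I --E2--> P.
Vmax1, Km1 = 1.0, 0.5      # enzyme 1 parameters (illustrative)
Vmax2, Km2 = 0.8, 0.3      # enzyme 2 parameters (illustrative)

def simulate(s0=1.0, dt=1e-3, t_end=20.0):
    """Forward-Euler time course of substrate, intermediate, and product."""
    s, inter, p = s0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v1 = Vmax1 * s / (Km1 + s)           # E1 turns S into intermediate
        v2 = Vmax2 * inter / (Km2 + inter)   # E2 turns intermediate into P
        s += -v1 * dt
        inter += (v1 - v2) * dt
        p += v2 * dt
    return s, inter, p

s, inter, p = simulate()
print(f"S={s:.4f}  I={inter:.4f}  P={p:.4f}  (mass conserved: {s+inter+p:.4f})")
```

    The transient build-up of the intermediate I is the quantity that channeling mechanisms act on: anything that hands I to the second enzyme faster raises the net throughput to P.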

  12. Reaction-diffusion processes and metapopulation models on duplex networks

    NASA Astrophysics Data System (ADS)

    Xuan, Qi; Du, Fang; Yu, Li; Chen, Guanrong

    2013-03-01

    Reaction-diffusion processes, used to model various spatially distributed dynamics such as epidemics, have been studied mostly on regular lattices or on complex networks with simplex links that are identical and invariant in transferring different kinds of particles. However, in many self-organized systems, different particles may have their own private channels that preserve their purity. Such division of links often significantly influences the underlying reaction-diffusion dynamics and thus needs to be carefully investigated. This article studies a special reaction-diffusion process, namely susceptible-infected-susceptible (SIS) dynamics, given by the reaction steps β→α and α+β→2β, on duplex networks where links are classified into two groups: α links and β links, used to transfer α and β particles, which, together with the corresponding nodes, form an α subnetwork and a β subnetwork, respectively. It is found that the critical particle density needed to sustain reaction activity is independent of the network topology if there is no correlation between the degree sequences of the two subnetworks, and that this critical value is lowered or raised if the two degree sequences are positively or negatively correlated, respectively. Based on these results, it is predicted that epidemic spreading may be promoted on positively correlated traffic networks but suppressed on networks with modules composed of different types of diffusion links.
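
    The duplex scheme above can be sketched with a toy Monte-Carlo simulation in which recovery β→α and infection α+β→2β occur on nodes, while each particle type diffuses only along its own link type. Both subnetwork topologies and all rates below are illustrative, not from the study:

```python
import random

# Toy duplex SIS reaction-diffusion: alpha (susceptible) and beta (infected)
# particles react on nodes but hop along separate link sets.
random.seed(1)
N = 100
alpha_links = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}    # ring
beta_links = {i: random.sample([j for j in range(N) if j != i], 4)
              for i in range(N)}                                   # random

alpha = [5] * N                     # susceptible particles per node
beta = [1] * N                      # infected particles per node
mu, lam = 0.2, 0.1                  # recovery / per-contact infection prob.

for _ in range(200):
    # reaction step, node by node
    for i in range(N):
        recovered = sum(1 for _ in range(beta[i]) if random.random() < mu)
        p_inf = 1 - (1 - lam) ** beta[i]
        infected = sum(1 for _ in range(alpha[i]) if random.random() < p_inf)
        beta[i] += infected - recovered
        alpha[i] += recovered - infected
    # diffusion step: every particle hops along a link of its own type
    new_a, new_b = [0] * N, [0] * N
    for i in range(N):
        for _ in range(alpha[i]):
            new_a[random.choice(alpha_links[i])] += 1
        for _ in range(beta[i]):
            new_b[random.choice(beta_links[i])] += 1
    alpha, beta = new_a, new_b

total = sum(alpha) + sum(beta)
print("particles:", total, " infected fraction:", round(sum(beta) / total, 3))
```

    Because the reactions only convert particles between types and diffusion only moves them, the total particle count is conserved; the quantity of interest is whether the infected (β) fraction dies out or is sustained at the given density.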

  13. Light radioactive nuclei capture reactions with phenomenological potential models

    SciTech Connect

    Guimaraes, V.; Bertulani, C. A.

    2010-05-21

    Light radioactive nuclei play an important role in many astrophysical environments. Due to the very low cross sections of some neutron and proton capture reactions on these radioactive nuclei at energies of astrophysical interest, direct laboratory measurements are very difficult. For radioactive nuclei such as 8Li and 8B, the direct measurement of neutron capture reactions is impossible. Indirect methods have been applied to overcome these difficulties. In this work we report on the results and discussion of phenomenological potential models used to determine some proton and neutron capture reactions. As a test we show the results for the 16O(p,γ)17F(gs)(5/2+) and 16O(p,γ)17F(ex)(1/2+) capture reactions. We also computed the nucleosynthesis cross sections for the 7Li(n,γ)8Li(gs), 8Li(n,γ)9Li(gs) and 8B(p,γ)9C(gs) capture reactions.

  14. A model study of sequential enzyme reactions and electrostatic channeling

    PubMed Central

    Eun, Changsun; Kekenes-Huskey, Peter M.; Metzger, Vincent T.; McCammon, J. Andrew

    2014-01-01

    We study models of two sequential enzyme-catalyzed reactions as a basic functional building block for coupled biochemical networks. We investigate the influence of enzyme distributions and long-range molecular interactions on reaction kinetics, which have been exploited in biological systems to maximize metabolic efficiency and signaling effects. Specifically, we examine how the maximal rate of product generation in a series of sequential reactions is dependent on the enzyme distribution and the electrostatic composition of its participant enzymes and substrates. We find that close proximity between enzymes does not guarantee optimal reaction rates, as the benefit of decreasing enzyme separation is countered by the volume excluded by adjacent enzymes. We further quantify the extent to which the electrostatic potential increases the efficiency of transferring substrate between enzymes, which supports the existence of electrostatic channeling in nature. Here, a major finding is that the role of attractive electrostatic interactions in confining intermediate substrates in the vicinity of the enzymes can contribute more to net reactive throughput than the directional properties of the electrostatic fields. These findings shed light on the interplay of long-range interactions and enzyme distributions in coupled enzyme-catalyzed reactions, and their influence on signaling in biological systems. PMID:24628210

  15. Statistical model analysis of α-induced reaction cross sections of 64Zn at low energies

    NASA Astrophysics Data System (ADS)

    Mohr, P.; Gyürky, Gy.; Fülöp, Zs.

    2017-01-01

    Background: α-nucleus potentials play an essential role in the calculation of α-induced reaction cross sections at low energies in the statistical model. Uncertainties of these calculations are related to ambiguities in the adjustment of the potential parameters to experimental elastic scattering angular distributions (typically at higher energies) and to the energy dependence of the effective α-nucleus potentials. Purpose: The present work studies cross sections of α-induced reactions for 64Zn at low energies and their dependence on the chosen input parameters of the statistical model calculations. The new experimental data from the recent Atomki experiments allow for a χ2-based estimate of the uncertainties of calculated cross sections at very low energies. Method: Recently measured data for the (α,γ), (α,n), and (α,p) reactions on 64Zn are compared to calculations in the statistical model. A survey of the parameter space of the widely used computer code TALYS is given, and the properties of the obtained χ2 landscape are discussed. Results: The best fit to the experimental data at low energies shows χ2/F ≈ 7.7 per data point, which corresponds to an average deviation of about 30% between the best fit and the experimental data. Several combinations of the various ingredients of the statistical model are able to reach a reasonably small χ2/F, not exceeding the best-fit result by more than a factor of 2. Conclusions: The present experimental data for 64Zn in combination with the statistical model calculations allow us to constrain the astrophysical reaction rate within about a factor of 2. However, the significant excess of the best-fit χ2/F above unity demands further improvement of the statistical model calculations and, in particular, of the α-nucleus potential.
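
    The two summary numbers quoted above are linked by simple bookkeeping: for F data points with relative uncertainties of roughly 10% (an assumed figure for illustration, not taken from the paper), χ2/F ≈ 7.7 implies a typical relative deviation of √7.7 × 10% ≈ 28%, i.e. the "about 30%" quoted. A synthetic illustration:

```python
import math

# chi2/F = (1/F) * sum(((y_i - m_i)/sigma_i)**2) over F data points.
# The data below are synthetic, not Atomki measurements: three points with
# 10% uncertainties and model values each off by roughly 28%.
data = [(1.0, 0.10), (1.3, 0.13), (0.8, 0.08)]   # (value, 10% uncertainty)
model = [1.28, 1.66, 1.02]                        # model predictions

chi2 = sum(((y - m) / s) ** 2 for (y, s), m in zip(data, model))
F = len(data)
print(f"chi2/F = {chi2 / F:.1f}")
print(f"typical relative deviation ~ {math.sqrt(chi2 / F) * 0.10:.0%}")
```

    The same arithmetic explains why a factor-of-2 spread in χ2/F between acceptable parameter sets translates into the roughly factor-of-2 constraint on the astrophysical reaction rate.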

  16. A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model

    ERIC Educational Resources Information Center

    Sadoski, Mark; McTigue, Erin M.; Paivio, Allan

    2012-01-01

    In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…

  18. The non-power model of the genetic code: a paradigm for interpreting genomic information.

    PubMed

    Gonzalez, Diego Luis; Giannerini, Simone; Rosa, Rodolfo

    2016-03-13

    In this article, we present a mathematical framework based on redundant (non-power) representations of integer numbers as a paradigm for the interpretation of genomic information. The core of the approach relies on modelling the degeneracy of the genetic code. The model allows one to explain many features and symmetries of the genetic code and to uncover hidden symmetries. It also provides new tools for the analysis of genomic sequences. We briefly review three main areas: (i) the Euplotid nuclear code, (ii) the vertebrate mitochondrial code, and (iii) the main coding/decoding strategies used in the three domains of life. In every case, we show how the non-power model is a natural unified framework for describing degeneracy and deriving sound biological hypotheses on protein coding. The approach is rooted in number theory and group theory; nevertheless, we have kept the technical level to a minimum by focusing on key concepts and on the biological implications. © 2016 The Author(s).

  19. CAST2D: A finite element computer code for casting process modeling

    SciTech Connect

    Shapiro, A.B.; Hallquist, J.O.

    1991-10-01

    CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).

  20. Modeling Large Zinc Isotope Fractionations Associated with Reaction Kinetics

    NASA Astrophysics Data System (ADS)

    Black, J. R.; John, S.; Kavner, A.

    2009-12-01

    Often multiple processes govern isotope fractionation during a chemical reaction, such as mass transport and the equilibrium and reaction kinetics between reactants and products. Here we use experimental electrochemical techniques to control the mass transport and reaction kinetics of the electrodeposition reaction Zn2+ + 2e- = Zn(s), and model the measured isotopic fractionation to resolve the underlying mechanisms. Zn was plated on a rotating disc electrode at various applied overpotentials, rotation rates, and temperatures. Electroplated Zn was recovered in acid for analysis of the stable isotope composition by multi-collector ICP-MS. The isotopic composition (66Zn/64Zn) of the plated metal is reported relative to the stock solution. A large range of Δ66Zn(sample-stock) is observed, with increasing fractionation at higher rotation rates (Fig. 1A) and lower overpotentials (Fig. 1B). Models of electrochemical kinetics show that these variables control the relative rates of precipitation and diffusion, as defined by the Koutecky-Levich equation [Bard and Faulkner (2001), Electrochemical Methods, John Wiley & Sons]. Using this equation to interpret our data shows that all the data fall along a systematic trend (Fig. 1C), with smaller fractionations in the mass-transport-limited regime and larger fractionations produced under electrochemical kinetic control. We extend the Koutecky-Levich model to predict isotope fractionation during electrochemical processes as a function of mass-dependent diffusion (Fig. 2A); standard rate constant (Fig. 2B); and transfer coefficient, describing reaction barrier symmetry (Fig. 2C). A comparison of the model and our data shows that diffusive and equilibrium fractionation alone (Fig. 2A-B) cannot reproduce the trends in our data, but a mass dependence in α can explain our observations. However, the predicted fractionation is very sensitive to the magnitude of k0 (Fig. 2C). This model predicts how isotope fractionation
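
    The Koutecky-Levich relation invoked above combines a rotation-independent kinetic current i_k with the Levich mass-transport-limited current: 1/i = 1/i_k + 1/i_lev, where i_lev = 0.620·n·F·A·D^(2/3)·ω^(1/2)·ν^(−1/6)·C (Bard and Faulkner, 2001). A sketch with illustrative parameter values, not the paper's experimental conditions:

```python
import math

# Koutecky-Levich sketch for a rotating disc electrode (illustrative values).
n, F, A = 2, 96485.0, 0.2e-4        # electrons per Zn2+, C/mol, area in m^2
D, nu, C = 7e-10, 1.0e-6, 10.0      # diffusivity m^2/s, viscosity m^2/s, mol/m^3

def levich_current(omega):          # mass-transport-limited current (A)
    return 0.620 * n * F * A * D ** (2 / 3) * omega ** 0.5 * nu ** (-1 / 6) * C

def koutecky_levich(i_k, omega):    # total current combining the two limits
    return 1.0 / (1.0 / i_k + 1.0 / levich_current(omega))

i_k = 1e-3                          # kinetic current at some fixed overpotential
for rpm in (100, 900, 2500):
    omega = 2 * math.pi * rpm / 60  # rotation rate in rad/s
    print(f"{rpm:4d} rpm: i = {koutecky_levich(i_k, omega):.2e} A"
          f"  (i_lev = {levich_current(omega):.2e} A)")
```

    Raising the rotation rate increases i_lev, moving the reaction from mass-transport control (i ≈ i_lev) toward kinetic control (i ≈ i_k), which mirrors the regime transition along which the fractionation data above are organized.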