Science.gov

Sample records for reaction model code

  1. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Palumbo, A.; Herman, M.; Capote, R.

    2014-06-01

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron-, charged-particle- and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present the application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  2. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (∼ keV) up to several hundred MeV for heavy-ion-induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound-nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation-dependent multi-step direct model (ORION + TRISTAN), by a NVWY multi-step compound model, by a pre-equilibrium exciton model with cluster emission (PCROSS), or by one with full angular momentum coupling (DEGAS). Finally, the compound-nucleus decay is described by the full-featured Hauser-Feshbach model with γ-cascade and width fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast-rotating nucleus, the classical Gilbert-Cameron approach, and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground-state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the

  3. MOMDIS: a Glauber model computer code for knockout reactions

    NASA Astrophysics Data System (ADS)

    Bertulani, C. A.; Gade, A.

    2006-09-01

    A computer program is described to calculate momentum distributions in stripping and diffraction dissociation reactions. A Glauber model is used with the scattering wavefunctions calculated in the eikonal approximation. The program is appropriate for knockout reactions at intermediate-energy collisions (30 MeV ⩽ E/nucleon ⩽ 2000 MeV). It is particularly useful for reactions involving unstable nuclear beams, or exotic nuclei (e.g., neutron-rich nuclei), and for studies of single-particle occupancy probabilities (spectroscopic factors) and other related physical observables. Such studies are an essential part of the scientific program of radioactive beam facilities, such as the proposed RIA (Rare Isotope Accelerator) facility in the US.

    Program summary
    Title of program: MOMDIS (MOMentum DIStributions)
    Catalogue identifier: ADXZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXZ_v1_0
    Computers: The code has been created on an IBM-PC, but also runs on UNIX or LINUX machines
    Operating systems: WINDOWS or UNIX
    Program language used: Fortran-77
    Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space
    No. of lines in distributed program, including test data, etc.: 6255
    No. of bytes in distributed program, including test data, etc.: 63 568
    Distribution format: tar.gz
    Nature of physical problem: The program calculates bound wavefunctions, eikonal S-matrices, total cross-sections and momentum distributions of interest in nuclear knockout reactions at intermediate energies.
    Method of solution: Solves the radial Schrödinger equation for bound states. A Numerov integration is used outwardly and inwardly, and a matching at the nuclear surface is done to obtain the energy and the bound-state wavefunction with good accuracy. The S-matrices are obtained using eikonal wavefunctions and the "t-ρρ" method to obtain the eikonal phase-shifts. The momentum distributions are obtained by means of a Gaussian expansion of
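
    The Numerov integration named in the solution method is a standard fourth-order recurrence for equations of the form y'' = f(x)·y, such as the radial Schrödinger equation. As an illustrative sketch only (not MOMDIS's Fortran implementation), checked against y'' = -y whose solution is sin(x):

```python
import math

def numerov(f, y0, y1, x0, h, n_steps):
    """Propagate y'' = f(x) * y with the fourth-order Numerov recurrence,
    the scheme used for outward/inward bound-state integration."""
    xs = [x0 + i * h for i in range(n_steps + 1)]
    ys = [y0, y1]
    for i in range(1, n_steps):
        fm, f0, fp = f(xs[i - 1]), f(xs[i]), f(xs[i + 1])
        num = (2.0 * ys[i] * (1.0 + 5.0 * h * h * f0 / 12.0)
               - ys[i - 1] * (1.0 - h * h * fm / 12.0))
        ys.append(num / (1.0 - h * h * fp / 12.0))
    return ys

# Sanity check on y'' = -y (f(x) = -1): starting from sin(0) and sin(h),
# the propagated solution should track sin(x) very closely.
h, n = 0.01, 157
ys = numerov(lambda x: -1.0, 0.0, math.sin(h), 0.0, h, n)
```

    In a bound-state search, the same recurrence is run outward from the origin and inward from large radius, and the energy is adjusted until the logarithmic derivatives match at the nuclear surface.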

  4. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    SciTech Connect

    Rakhno, I. L.; Mokhov, N. V.; Gudima, K. K.

    2015-04-25

    An implementation of both the ALICE code and the TENDL evaluated nuclear data library, used to describe nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15, is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary-particle distributions are shown.

  5. SurfKin: an ab initio kinetic code for modeling surface reactions.

    PubMed

    Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K

    2014-10-05

    In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using classical collision theory and transition state theory. Methane decomposition and the water-gas shift reaction on the Ni(111) surface were chosen as test cases to validate the code implementation. The good agreement with literature data suggests that SurfKin is a powerful tool for facilitating the analysis of complex reactions on surfaces, and thus helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts.
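
    Transition-state-theory rate constants of the kind mentioned above typically follow the Eyring form k = (kB·T/h)·exp(-ΔG‡/RT). A minimal sketch of that formula (illustrative only; SurfKin also treats adsorption and desorption via collision theory, and the 100 kJ/mol barrier below is an invented number):

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(delta_g_act, temperature):
    """Eyring transition-state-theory rate constant in 1/s, given an
    activation free energy in J/mol and a temperature in K."""
    prefactor = KB * temperature / H  # ~1e13 1/s at ordinary temperatures
    return prefactor * math.exp(-delta_g_act / (R * temperature))

# Hypothetical surface elementary step: 100 kJ/mol barrier at 800 K
k = tst_rate(100.0e3, 800.0)
```

    Mechanisms are then assembled by evaluating such rate constants for every elementary step and feeding them into the coupled coverage rate equations.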

  6. The Quantum-Kinetic Chemical Reaction Model for Navier-Stokes Codes

    NASA Astrophysics Data System (ADS)

    Gallis, Michael A.; Wagnild, Ross M.; Torczynski, John R.

    2013-11-01

    The Quantum-Kinetic chemical reaction model of Bird is formulated as a non-equilibrium chemical reaction model for Navier-Stokes codes. The model is based solely on thermophysical, molecular-level information and is capable of reproducing measured equilibrium reaction rates without using any experimentally measured reaction-rate information. The model recognizes the principal role of vibrational energy in overcoming the reaction energy threshold. The effect of rotational non-equilibrium is introduced as a perturbation to the effect of vibrational non-equilibrium. Since the model uses only molecular-level properties, it is inherently able to predict reaction rates for arbitrary non-equilibrium conditions. This ability is demonstrated in the context of both Navier-Stokes and DSMC codes. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  7. PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations

    NASA Astrophysics Data System (ADS)

    Elmaghraby, Elsayed K.

    2009-09-01

    The present work focuses on a pre-equilibrium nuclear reaction code based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions. In the PHASE-OTI code, pre-equilibrium decays are assumed to be single-nucleon emissions, and the statistical probabilities come from the independence of nuclei decay. The code has proved to be a good tool for providing predictions of energy-differential cross sections. The probability of emission was calculated statistically using the bases of the hybrid model and the exciton model; however, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one-nucleon emission.

    Program summary
    Program title: PHASE-OTI
    Catalogue identifier: AEDN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 5858
    No. of bytes in distributed program, including test data, etc.: 149 405
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: Pentium 4 and Centrino Duo
    Operating system: MS Windows
    RAM: 128 MB
    Classification: 17.12
    Nature of problem: Calculation of the differential cross section for nucleon-induced nuclear reactions in the framework of a pre-equilibrium emission model.
    Solution method: Single-neutron emission was treated by assuming the reaction occurs in successive steps. Each step is called a phase because of the phase-transition nature of the theory. The probability of emission was calculated statistically using the bases of the hybrid model [1] and the exciton model [2]; however, a more precise depletion factor was used in the calculations. The exciton configuration used in the code is that described in earlier work [3].
    Restrictions: The program is restricted to single nucleon emission and nucleon

  8. Applications of Transport/Reaction Codes to Problems in Cell Modeling

    SciTech Connect

    MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.

    2001-11-01

    We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.

  9. THERM: a computer code for estimating thermodynamic properties for species important to combustion and reaction modeling.

    PubMed

    Ritter, E R

    1991-08-01

    A computer package has been developed called THERM, an acronym for THermodynamic property Estimation for Radicals and Molecules. THERM is a versatile computer code designed to automate the estimation of ideal gas phase thermodynamic properties for radicals and molecules important to combustion and reaction-modeling studies. Thermodynamic properties calculated include heat of formation and entropies at 298 K and heat capacities from 300 to 1500 K. Heat capacity estimates are then extrapolated to above 5000 K, and NASA format polynomial thermodynamic property representations valid from 298 to 5000 K are generated. This code is written in Microsoft Fortran version 5.0 for use on machines running under MSDOS. THERM uses group additivity principles of Benson and current best values for bond strengths, changes in entropy, and loss of vibrational degrees of freedom to estimate properties for radical species from parent molecules. This ensemble of computer programs can be used to input literature data, estimate data when not available, and review, update, and revise entries to reflect improvements and modifications to the group contribution and bond dissociation databases. All input and output files are ASCII so that they can be easily edited, updated, or expanded. In addition, heats of reaction, entropy changes, Gibbs free-energy changes, and equilibrium constants can be calculated as functions of temperature from a NASA format polynomial database.
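
    The NASA-format polynomials mentioned above express Cp/R as a fourth-order polynomial in temperature, with two extra integration constants for enthalpy and entropy. A sketch of evaluating one temperature range (the coefficients below are the trivial monatomic-ideal-gas case with the integration constants zeroed for illustration, not values from THERM's database):

```python
def cp_over_R(T, a):
    """Cp/R from a NASA 7-coefficient set: a1 + a2*T + ... + a5*T^4."""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

def h_over_RT(T, a):
    """H/(R*T), the integrated form; a[5] is the enthalpy constant."""
    return (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
            + a[4]*T**4/5 + a[5]/T)

# A monatomic ideal gas has Cp/R = 5/2 with no temperature dependence,
# so its first five coefficients are simply [2.5, 0, 0, 0, 0]; the two
# integration constants are set to zero here purely for illustration.
mono = [2.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

    Real databases supply two such coefficient sets per species, typically one for 300-1000 K and one for 1000-5000 K, which is the representation THERM generates.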

  10. Uncertainty evaluation of nuclear reaction model parameters using integral and microscopic measurements. Covariances evaluation with CONRAD code

    NASA Astrophysics Data System (ADS)

    de Saint Jean, C.; Habert, B.; Archier, P.; Noguere, G.; Bernard, D.; Tommasi, J.; Blaise, P.

    2010-10-01

    In the [eV; MeV] energy range, modelling of neutron-induced reactions is based on parameterized nuclear reaction models. Estimating covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Major breakthroughs have been requested by nuclear reactor physicists so that proper uncertainties can be assigned for use in applications. In this paper, mathematical methods developed in the CONRAD code [2] are presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and their propagation to nuclear reaction model parameters or cross sections. The marginalization procedure is then presented, using analytical or Monte Carlo solutions. Furthermore, one major drawback identified by reactor physicists is that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g. ICSBEP, …) were not taken into account early enough in the evaluation process to remove discrepancies. We describe a mathematical framework to take this kind of information into account properly.
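
    The Monte Carlo route to uncertainty propagation amounts to sampling the model parameters from their assumed distribution and recomputing the observable. A toy illustration with an invented one-parameter cross-section model (this is only the sampling idea, not CONRAD's formalism or marginalization procedure):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def model_xs(p, energy):
    """Hypothetical cross-section model sigma = p / sqrt(E), standing in
    for a real nuclear reaction model with parameter p."""
    return p / np.sqrt(energy)

p_mean, p_std = 10.0, 0.5            # parameter value and 5% uncertainty
samples = rng.normal(p_mean, p_std, size=20000)
xs = model_xs(samples, energy=4.0)   # sigma = p / 2 at E = 4

# The sampled observable inherits the parameter's relative uncertainty
xs_mean, xs_std = xs.mean(), xs.std()
```

    With correlated parameters the same idea applies, drawing from a multivariate normal with the full covariance matrix instead of independent normals.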

  11. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  12. An interactive code (NETPATH) for modeling NET geochemical reactions along a flow PATH, version 2.0

    USGS Publications Warehouse

    Plummer, L. Niel; Prestemon, Eric C.; Parkhurst, David L.

    1994-01-01

    NETPATH is an interactive Fortran 77 computer program used to interpret net geochemical mass-balance reactions between an initial and final water along a hydrologic flow path. Alternatively, NETPATH computes the mixing proportions of two to five initial waters and net geochemical reactions that can account for the observed composition of a final water. The program utilizes previously defined chemical and isotopic data for waters from a hydrochemical system. For a set of mineral and (or) gas phases hypothesized to be the reactive phases in the system, NETPATH calculates the mass transfers in every possible combination of the selected phases that accounts for the observed changes in the selected chemical and (or) isotopic compositions observed along the flow path. The calculations are of use in interpreting geochemical reactions, mixing proportions, evaporation and (or) dilution of waters, and mineral mass transfer in the chemical and isotopic evolution of natural and environmental waters. Rayleigh distillation calculations are applied to each mass-balance model that satisfies the constraints to predict carbon, sulfur, nitrogen, and strontium isotopic compositions at the end point, including radiocarbon dating. DB is an interactive Fortran 77 computer program used to enter analytical data into NETPATH and calculate the distribution of species in aqueous solution. This report describes the types of problems that can be solved, the methods used to solve problems, and the features available in the program to facilitate these solutions. Examples are presented to demonstrate most of the applications and features of NETPATH. The codes DB and NETPATH can be executed in the UNIX or DOS environment. This report replaces U.S. Geological Survey Water-Resources Investigations Report 91-4078, by Plummer and others, which described the original release of NETPATH, version 1.0 (dated December 1991), and documents revisions and enhancements that are included in version 2.0.
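
    At its core, a NETPATH-style mass-balance step solves a linear system: a stoichiometric matrix for the chosen phases times the unknown mass transfers must reproduce the observed change in each constraint. A two-phase toy example (the water chemistry below is made up, not a NETPATH dataset):

```python
import numpy as np

# Rows are constraints (Ca, total dissolved C); columns are candidate
# phases (calcite CaCO3, CO2 gas), in moles of constraint per mole of phase.
stoich = np.array([
    [1.0, 0.0],  # Ca: 1 per calcite, 0 per CO2
    [1.0, 1.0],  # C : 1 per calcite, 1 per CO2
])

# Observed change, final minus initial water (mmol per kg of water)
delta = np.array([0.8, 1.5])

# Positive transfer = dissolution/ingassing, negative = precipitation
transfers = np.linalg.solve(stoich, delta)
```

    Here the unique answer is 0.8 mmol of calcite dissolved and 0.7 mmol of CO2 gained; NETPATH enumerates such solutions over every possible combination of the selected phases and reports each one that satisfies the constraints.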

  13. Transfer reaction code with nonlocal interactions

    DOE PAGES

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

    Here, we present a suite of codes (NLAT for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon-target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second-order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable for deuteron-induced reactions in the range Ed = 10-70 MeV, and provides cross sections with 4% accuracy.

  14. Transfer reaction code with nonlocal interactions

    SciTech Connect

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-07-14

    Here, we present a suite of codes (NLAT for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon-target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second-order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable for deuteron-induced reactions in the range Ed = 10-70 MeV, and provides cross sections with 4% accuracy.

  15. Transfer reaction code with nonlocal interactions

    NASA Astrophysics Data System (ADS)

    Titus, L. J.; Ross, A.; Nunes, F. M.

    2016-10-01

    We present a suite of codes (NLAT for nonlocal adiabatic transfer) to calculate the transfer cross section for single-nucleon transfer reactions, (d,N) or (N,d), including nonlocal nucleon-target interactions, within the adiabatic distorted wave approximation. For this purpose, we implement an iterative method for solving the second order nonlocal differential equation, for both scattering and bound states. The final observables that can be obtained with NLAT are differential angular distributions for the cross sections of A(d,N)B or B(N,d)A. Details on the implementation of the T-matrix to obtain the final cross sections within the adiabatic distorted wave approximation method are also provided. This code is suitable to be applied for deuteron induced reactions in the range of Ed = 10-70 MeV, and provides cross sections with 4% accuracy.

  16. Molecular codes in biological and chemical reaction networks.

    PubMed

    Görlich, Dennis; Dittrich, Peter

    2013-01-01

    Shannon's theory of communication has been very successfully applied for the analysis of biological information. However, the theory neglects semantic and pragmatic aspects and thus cannot directly be applied to distinguish between (bio-) chemical systems able to process "meaningful" information from those that do not. Here, we present a formal method to assess a system's semantic capacity by analyzing a reaction network's capability to implement molecular codes. We analyzed models of chemical systems (martian atmosphere chemistry and various combustion chemistries), biochemical systems (gene expression, gene translation, and phosphorylation signaling cascades), an artificial chemistry, and random reaction networks. Our study suggests that different chemical systems possess different semantic capacities. No semantic capacity was found in the model of the martian atmosphere chemistry, the studied combustion chemistries, and highly connected random networks, i.e. with these chemistries molecular codes cannot be implemented. High semantic capacity was found in the studied biochemical systems and in random reaction networks where the number of second order reactions is twice the number of species. We conclude that our approach can be applied to evaluate the information processing capabilities of a chemical system and may thus be a useful tool to understand the origin and evolution of meaningful information, e.g. in the context of the origin of life.

  17. A chemical reaction network solver for the astrophysics code NIRVANA

    NASA Astrophysics Data System (ADS)

    Ziegler, U.

    2016-02-01

    Context. Chemistry often plays an important role in astrophysical gases. It regulates thermal properties by changing species abundances and via ionization processes. In this way, time-dependent cooling mechanisms and other chemistry-related energy sources can have a profound influence on the dynamical evolution of an astrophysical system. Modeling these effects with the underlying chemical kinetics in realistic magneto-gasdynamical simulations provides the basis for a better link to observations. Aims: The present work describes the implementation of a chemical reaction network solver in the magneto-gasdynamical code NIRVANA. For this purpose a multispecies structure is installed, and a new module for evolving the rate equations of chemical kinetics is developed and coupled to the dynamical part of the code. A small chemical network for a hydrogen-helium plasma, including associated thermal processes, was constructed and is used in test problems. Methods: Evolving a chemical network within time-dependent simulations requires the additional solution of a set of coupled advection-reaction equations for the species and gas temperature. Second-order Strang splitting is used to separate the advection part from the reaction part. The ordinary differential equation (ODE) system representing the reaction part is solved with a fourth-order generalized Runge-Kutta method applicable to the stiff systems inherent to astrochemistry. Results: A series of tests was performed to check the correctness of the numerical and technical implementation. Tests include well-known stiff ODE problems from the mathematical literature, to confirm the accuracy properties of the solver, as well as problems combining gasdynamics and chemistry. Overall, very satisfactory results are achieved. Conclusions: The NIRVANA code is now ready to handle astrochemical processes in time-dependent simulations. An easy-to-use interface allows implementation of complex networks including thermal processes.
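
    The operator splitting described in the methods can be sketched in a few lines: a half step of advection, a full step of reaction, and another half step of advection gives second-order accuracy. A schematic version with first-order upwind advection and a single linear decay standing in for the chemical network (NIRVANA itself couples a full multispecies network solved by a stiff Runge-Kutta method):

```python
import numpy as np

def advect(u, v, dt, dx):
    """First-order upwind advection for constant v > 0, periodic domain."""
    return u - v * dt / dx * (u - np.roll(u, 1))

def react(u, k, dt):
    """Reaction substep; here a single linear decay du/dt = -k*u,
    standing in for a full chemical-kinetics ODE system."""
    return u * np.exp(-k * dt)

def strang_step(u, v, k, dt, dx):
    """One second-order Strang-split step: A(dt/2) R(dt) A(dt/2)."""
    u = advect(u, v, dt / 2.0, dx)
    u = react(u, k, dt)
    return advect(u, v, dt / 2.0, dx)

# On a spatially uniform field the advection substeps are no-ops, so one
# split step reduces exactly to the reaction decay factor.
u0 = np.ones(16)
u1 = strang_step(u0, v=1.0, k=0.5, dt=0.1, dx=1.0)
```

    The appeal of the splitting is modularity: the advection substep can reuse the code's existing hydrodynamic update, while the reaction substep treats each cell as an independent stiff ODE system.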

  18. PLATYPUS: A code for reaction dynamics of weakly-bound nuclei at near-barrier energies within a classical dynamical model

    NASA Astrophysics Data System (ADS)

    Diaz-Torres, Alexis

    2011-04-01

    A self-contained Fortran-90 program based on a three-dimensional classical dynamical reaction model with stochastic breakup is presented, which is a useful tool for quantifying complete and incomplete fusion, and breakup in reactions induced by weakly-bound two-body projectiles near the Coulomb barrier. The code calculates (i) integrated complete and incomplete fusion cross sections and their angular momentum distribution, (ii) the excitation energy distribution of the primary incomplete-fusion products, (iii) the asymptotic angular distribution of the incomplete-fusion products and the surviving breakup fragments, and (iv) breakup observables, such as angle, kinetic energy and relative energy distributions.

    Program summary
    Program title: PLATYPUS
    Catalogue identifier: AEIG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 332 342
    No. of bytes in distributed program, including test data, etc.: 344 124
    Distribution format: tar.gz
    Programming language: Fortran-90
    Computer: Any Unix/Linux workstation or PC with a Fortran-90 compiler
    Operating system: Linux or Unix
    RAM: 10 MB
    Classification: 16.9, 17.7, 17.8, 17.11
    Nature of problem: The program calculates a wide range of observables in reactions induced by weakly-bound two-body nuclei near the Coulomb barrier. These include integrated complete and incomplete fusion cross sections and their spin distribution, as well as breakup observables (e.g. the angle, kinetic energy, and relative energy distributions of the fragments).
    Solution method: All the observables are calculated using a three-dimensional classical dynamical model combined with Monte Carlo sampling of probability-density distributions. See Refs. [1,2] for further details.
    Restrictions: The

  19. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    SciTech Connect

    Iwamoto, O. Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-15

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates the various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to the alpha particle, and photons. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has since been used extensively in nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions, and has also been used for estimating β-delayed neutron emission. This paper describes the CCONE code system, outlining the concept and design of the code and its inputs. Details of the formulation for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations, such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission, are mentioned.

  20. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    NASA Astrophysics Data System (ADS)

    Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-01

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates the various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to the alpha particle, and photons. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has since been used extensively in nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions, and has also been used for estimating β-delayed neutron emission. This paper describes the CCONE code system, outlining the concept and design of the code and its inputs. Details of the formulation for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations, such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission, are mentioned.

  1. Modeling of surface reactions

    SciTech Connect

    Ray, T.R.

    1993-01-01

    Mathematical models are used to elucidate properties of the monomer-monomer and monomer-dimer type chemical reactions on a two-dimensional surface. The authors use mean-field and lattice-gas models, detailing similarities and differences due to correlations in the lattice-gas model. The monomer-monomer, or AB, surface reaction model with no diffusion is investigated for various reaction rates k. Study of the exact rate equations reveals that poisoning always occurs if the adsorption rates of the reactants are unequal. If the adsorption rates of the reactants are equal, simulations show slow poisoning, associated with clustering of reactants. This behavior is also shown for the two-dimensional voter model. The authors analyze precisely the slow poisoning kinetics by an analytic treatment for the AB reaction with infinitesimal reaction rate, and by direct comparison with the voter model. They extend the results to incorporate the effects of place-exchange diffusion, and they compare the AB reaction with infinitesimal reaction rate and no diffusion to the voter model with diffusion at rate 1/2. They also consider the relationship of the voter model to the monomer-dimer model, and investigate the latter model for small reaction rates. The monomer-dimer, or AB2, surface reaction model is also investigated. Specifically, they consider the ZGB model for CO oxidation, and generalizations of this model which include adspecies diffusion. A theory of nucleation is derived to describe properties of non-equilibrium first-order transitions, specifically the evolution between "reactive" steady states and trivial adsorbing states. The behavior of the "epidemic" survival probability, P_s, for a non-poisoned patch surrounded by a poisoned background is determined below the poisoning transition.
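
    The mean-field version of the AB model is a pair of coverage rate equations; integrating them reproduces the poisoning behavior described above when the adsorption rates are unequal. A rough sketch (the rate values are arbitrary, and the lattice-gas correlations the abstract discusses are exactly what this mean-field picture misses):

```python
def ab_mean_field(p_a, p_b, k, dt=0.01, n_steps=10000):
    """Euler-integrate mean-field rate equations for the AB surface
    reaction: each species adsorbs onto empty sites at its own rate,
    and A-B pairs react at rate k and desorb, freeing two sites."""
    theta_a = theta_b = 0.0
    for _ in range(n_steps):
        empty = 1.0 - theta_a - theta_b
        rxn = k * theta_a * theta_b
        theta_a += dt * (p_a * empty - rxn)
        theta_b += dt * (p_b * empty - rxn)
    return theta_a, theta_b

# Unequal adsorption rates: the majority species poisons the surface
ta, tb = ab_mean_field(p_a=0.6, p_b=0.4, k=1.0)
```

    With p_a = p_b this mean field predicts a mixed steady state, whereas the lattice simulations cited above still poison slowly through reactant clustering, which is the key correlation effect the mean-field equations cannot capture.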

  2. Cheetah: Starspot modeling code

    NASA Astrophysics Data System (ADS)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.

  3. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include a static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reactions, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
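
    The 'stiff' systems mentioned above arise when rate constants span many orders of magnitude; implicit methods remain stable at step sizes far beyond the fastest timescale. A toy illustration using backward Euler on a linear A → B → C chain (the rate constants are invented, and LSENS itself uses a more sophisticated implicit integrator):

```python
import numpy as np

# Hypothetical rate constants separated by six orders of magnitude: an
# explicit method would need dt ~ 1e-4 for stability, yet backward Euler
# below takes dt = 1.0 without blowing up.
k1, k2 = 1.0e4, 1.0e-2

# Jacobian of the linear rate equations d[A,B,C]/dt = J @ [A,B,C]
J = np.array([[-k1, 0.0, 0.0],
              [ k1, -k2, 0.0],
              [0.0,  k2, 0.0]])

def backward_euler(y0, dt, n_steps):
    """Implicit step: solve (I - dt*J) y_{n+1} = y_n at each iteration."""
    lhs = np.eye(3) - dt * J
    y = np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        y = np.linalg.solve(lhs, y)
    return y

# Integrate to t = 100: A is fully consumed, mass is conserved exactly
y = backward_euler([1.0, 0.0, 0.0], dt=1.0, n_steps=100)
```

    Because the column sums of J vanish, the implicit update conserves total mass to solver precision, one reason implicit schemes are attractive for kinetics.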

  4. Code System to Calculate Integral Parameters with Reaction Rates from WIMS Output.

    SciTech Connect

    LESZCZYNSKI, FRANCISCO

    1994-10-25

    Version 00 of REACTION calculates various integral parameters related to neutron reactions on reactor lattices from reaction rates calculated with the WIMSD4 code, and compares them with experimental values.

  5. Serpentinization reaction pathways: implications for modeling approach

    SciTech Connect

    Janecky, D.R.

    1986-01-01

    Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C, 500 bars, can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.

  6. Visualized kinematics code for two-body nuclear reactions

    NASA Astrophysics Data System (ADS)

    Lee, E. J.; Chae, K. Y.

    2016-05-01

    The one- or few-nucleon transfer reaction has been a great tool for investigating the single-particle properties of a nucleus. Both stable and exotic beams are utilized to study transfer reactions in normal and inverse kinematics, respectively. Because many energy levels of the heavy recoil from a two-body nuclear reaction can be populated using a single beam energy, identifying each populated state, which is often nontrivial owing to the high level density of the nucleus, is essential. For identification of the energy levels, a visualized kinematics code called VISKIN has been developed using the Java programming language. The development procedure, usage, and application of VISKIN are reported.
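
    The two-body kinematics such a code visualizes follow the standard non-relativistic formula; a sketch (the function name and unit conventions are assumptions, not VISKIN's code):

```python
import math

def ejectile_energy(m1, m2, m3, m4, e_beam, q_value, theta_lab):
    """Non-relativistic two-body kinematics for m1 + m2 -> m3 + m4.

    Returns the lab-frame kinetic energy of the ejectile m3 emitted at
    lab angle theta_lab (radians), for beam kinetic energy e_beam and
    reaction Q-value q_value (same energy units; masses in amu).
    Only the forward ('+') kinematic branch is returned.
    """
    r = math.sqrt(m1 * m3 * e_beam) * math.cos(theta_lab) / (m3 + m4)
    s = (e_beam * (m4 - m1) + m4 * q_value) / (m3 + m4)
    disc = r * r + s
    if disc < 0:
        raise ValueError("reaction kinematically forbidden at this angle")
    return (r + math.sqrt(disc)) ** 2

# elastic p + 12C scattering at 0 deg: the ejectile keeps the full beam energy
e3 = ejectile_energy(1.0, 12.0, 1.0, 12.0, 10.0, 0.0, 0.0)
```

    Scanning theta_lab for each excited state of the recoil (i.e., each Q-value) produces the energy-versus-angle curves used to identify populated levels.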

  7. Biogeochemical Transport and Reaction Model (BeTR) v1

    SciTech Connect

    TANG, JINYUN

    2016-04-18

    The Biogeochemical Transport and Reaction Model (BeTR) is a Fortran 90 code that enables reactive transport modeling in the land modules of earth system models (e.g., CESM, ACME). The code adopts an object-oriented design and allows users to plug in their own biogeochemical (BGC) formulations/codes and compare them with other existing BGC codes in those ESMs. The code takes information on soil physics variables, such as temperature, moisture, soil density profile, and water flow, from a land model to track the movement of different chemicals in the presence of biogeochemical reactions.

  8. Impacts of Model Building Energy Codes

    SciTech Connect

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.; Liu, Bing; Bartlett, Rosemarie

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or that do not have statewide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  9. Dual-code quantum computation model

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.
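
    The cost trade-off the abstract analyzes can be illustrated with a toy arithmetic model (the cost numbers and the two-teleportation accounting are illustrative assumptions, not figures from the paper):

```python
def dual_code_saving(cost_nontransversal, cost_teleport, n_gates):
    """Toy cost model for a dual-code scheme.

    In a single code, each non-transversal gate costs cost_nontransversal.
    In the dual-code scheme it is replaced by a transversal gate in the
    partner code plus two code teleportations (convert there and back),
    whose cost dominates.  Returns (single_code_cost, dual_code_cost).
    """
    single = n_gates * cost_nontransversal
    dual = n_gates * 2 * cost_teleport
    return single, dual

# the scheme pays off only when teleportation is cheap relative to the
# fault-tolerant non-transversal gate (placeholder numbers)
single, dual = dual_code_saving(cost_nontransversal=50, cost_teleport=10,
                                n_gates=100)
```

    This matches the abstract's conclusion: the dual-code model helps only while the non-transversal gate is relatively expensive compared with code conversion.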

  10. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes proposed for predicting percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that solve unsaturated flow (Richards’ equation). A literature review of the HELP model and the proposed codes resulted in two codes being recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. The further evaluation involved performing simulations on a simple model and comparing the results with those obtained with the HELP code and with field data. From the results of this work, we conclude that the two new codes perform nearly the same; moving forward, we recommend HYDRUS-2D3D.

  11. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  12. Efficiency of a model human image code

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1987-01-01

    Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection of artifacts, the image-code size is about 1 bit/pixel.
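
    The entropy bound mentioned above is a direct Shannon-entropy computation over the quantized samples; a sketch with an invented toy layer:

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy (bits/sample) of quantized transform samples,
    the lower bound on code size mentioned in the abstract."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# toy 'layer': coefficients quantized to a few levels, as after
# contrast-masking-based quantization (values are illustrative)
layer = [0] * 80 + [1] * 10 + [-1] * 8 + [2] * 2
h = entropy_bits(layer)
```

    Skewed distributions like this one, where most coefficients quantize to zero, are what drive the code size down toward ~1 bit/pixel.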

  13. Genetic coding and gene expression - new Quadruplet genetic coding model

    NASA Astrophysics Data System (ADS)

    Shankar Singh, Rama

    2012-07-01

    The successful completion of the Human Genome Project has opened the door not only to developing personalized medicine and cures for genetic diseases, but may also help answer the complex and difficult question of the origin of life; it may make the 21st century a century of the biological sciences as well. Based on the central dogma of biology, genetic codons, in conjunction with tRNA, play a key role in translating RNA bases into a sequence of amino acids leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new quadruplet genetic coding model proposed and developed here involves all four RNA bases, which, in conjunction with tRNA, will synthesize the right protein. The transcription and translation processes used will be the same, but the quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new quadruplet genetic coding model and its potential applications, including relevance to the origin of life, will be presented.

  14. Model Policy on Student Publications Code.

    ERIC Educational Resources Information Center

    Iowa State Dept. of Education, Des Moines.

    In 1989, the Iowa Legislature created a new code section that defines and regulates student exercise of free expression in "official school publications." Also, the Iowa State Department of Education was directed to develop a model publication code that includes reasonable provisions for regulating the time, place, and manner of student…

  15. Transmutation Fuel Performance Code Thermal Model Verification

    SciTech Connect

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculations agree with those of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.

  16. Stochastic Modeling Of Biochemical Reactions

    DTIC Science & Technology

    2006-11-01

    chemical reactions. Often for these reactions, the dynamics of the first M-order statistical moments of the species populations do not form a closed... results a stochastic model for gene expression is investigated. We show that in gene expression mechanisms, in which a protein inhibits its own... chemical reactions [7, 8, 4, 9, 10]. Since one is often interested in only the first and second order statistical moments for the number of molecules of

  17. Turbulent group reaction model of spray dryer

    SciTech Connect

    Ma, H.K.; Huang, H.S.; Chiu, H.H.

    1987-01-01

    A turbulent group reaction model consisting of several sub-models was developed for the prediction of SO₂ removal efficiency in spray dryers. Mathematical models are developed on the basis of Eulerian-type turbulent Navier-Stokes equations for both gas and condensed phases with interphase transport considerations. The group reaction number, G, is defined as the ratio of the SO₂ absorption rate to a reference convective mass flux. This number represents the fraction of SO₂ absorbed into the lime slurry. The model is incorporated into a computer code which permits the investigation of spray dryer design concepts and operating conditions. Hence, it provides a theoretical basis for spray dryer performance optimization and scale-up. This investigation can be a practical guide to achieve high SO₂ removal efficiency in a spray dryer.

  18. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Grüneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme at the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
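
    The JWL product EOS named above has a standard closed form; a sketch (the constants are representative HMX-product values from the open literature, used purely for illustration, not the paper's calibration):

```python
import math

def jwl_pressure(v_rel, e_vol, a=778.3, b=7.071, r1=4.2, r2=1.0, omega=0.30):
    """JWL equation of state for gaseous detonation products.

    p = A(1 - w/(R1 V))exp(-R1 V) + B(1 - w/(R2 V))exp(-R2 V) + w E / V,
    where V is the relative volume and E the internal energy per unit
    initial volume.  Default constants are representative HMX-product
    values (pressure in GPa) used here only as illustrative placeholders.
    """
    term1 = a * (1 - omega / (r1 * v_rel)) * math.exp(-r1 * v_rel)
    term2 = b * (1 - omega / (r2 * v_rel)) * math.exp(-r2 * v_rel)
    return term1 + term2 + omega * e_vol / v_rel

# pressure near detonation conditions vs. after expansion
p_cj_like = jwl_pressure(v_rel=0.75, e_vol=10.5)
```

    The first two exponential terms dominate at high compression; the ideal-gas-like third term takes over as the products expand.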

  19. Reduction of chemical reaction models

    NASA Technical Reports Server (NTRS)

    Frenklach, Michael

    1991-01-01

    An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.

  20. Modelling binary rotating stars by new population synthesis code bonnfires

    NASA Astrophysics Data System (ADS)

    Lau, H. H. B.; Izzard, R. G.; Schneider, F. R. N.

    2013-02-01

    bonnfires, a new-generation population synthesis code, can calculate nuclear reactions, various mixing processes, and binary interactions in a timely fashion. We use this new population synthesis code to study the interplay between binary mass transfer and rotation. We aim to compare theoretical models with observations, in particular the surface nitrogen abundance and rotational velocity. Preliminary results show that binary interactions may explain the formation of nitrogen-rich slow rotators and nitrogen-poor fast rotators, but more work needs to be done to estimate whether the observed frequencies of those stars can be matched.

  1. Propulsive Reaction Control System Model

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Phan, Linh H.; Serricchio, Frederick; San Martin, Alejandro M.

    2011-01-01

    This software models a propulsive reaction control system (RCS) for guidance, navigation, and control simulation purposes. The model includes the drive electronics, the electromechanical valve dynamics, the combustion dynamics, and thrust. This innovation follows the Mars Science Laboratory entry reaction control system design, and has been created to meet the Mars Science Laboratory (MSL) entry, descent, and landing simulation needs. It has been built to be plug-and-play on multiple MSL testbeds [analysis, Monte Carlo, flight software development, hardware-in-the-loop, and ATLO (assembly, test and launch operations) testbeds]. This RCS model is a C language program. It contains two main functions: the RCS electronics model function that models the RCS FPGA (field-programmable-gate-array) processing and commanding of the RCS valve, and the RCS dynamic model function that models the valve and combustion dynamics. In addition, this software provides support functions to initialize the model states, set parameters, access model telemetry, and access calculated thruster forces.

  2. NUCLEAR REACTION MODELING FOR RIA ISOL TARGET DESIGN

    SciTech Connect

    S. MASHNIK; ET AL

    2001-03-01

    Los Alamos scientists are collaborating with researchers at Argonne and Oak Ridge on the development of improved nuclear reaction physics for modeling radionuclide production in ISOL targets. This is being done in the context of the MCNPX simulation code, which is a merger of MCNP and the LAHET intranuclear cascade code, and simulates both nuclear reaction cross sections and radiation transport in the target. The CINDER code is also used to calculate the time-dependent nuclear decays for estimating induced radioactivities. They give an overview of the reaction physics improvements they are addressing, including intranuclear cascade (INC) physics, where recent high-quality inverse-kinematics residue data from GSI have led to INC spallation and fission model improvements; and preequilibrium reactions important in modeling (p,xn) and (p,xnyp) cross sections for the production of nuclides far from stability.

  3. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model code provisions for use in... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code jurisdictions. If a lender or other interested party is notified that a State or local building code has...

  4. Dynamical model of surrogate reactions

    SciTech Connect

    Aritomo, Y.; Chiba, S.; Nishio, K.

    2011-08-15

    A new dynamical model is developed to describe the whole process of surrogate reactions: Transfer of several nucleons at an initial stage, thermal equilibration of residues leading to washing out of shell effects, and decay of populated compound nuclei are treated in a unified framework. Multidimensional Langevin equations are employed to describe time evolution of collective coordinates with a time-dependent potential energy surface corresponding to different stages of surrogate reactions. The new model is capable of calculating spin distributions of the compound nuclei, one of the most important quantities in the surrogate technique. Furthermore, various observables of surrogate reactions can be calculated, for example, energy and angular distribution of ejectile and mass distributions of fission fragments. These features are important to assess validity of the proposed model itself, to understand mechanisms of the surrogate reactions, and to determine unknown parameters of the model. It is found that spin distributions of compound nuclei produced in the ¹⁸O + ²³⁸U → ¹⁶O + ²⁴⁰U* and ¹⁸O + ²³⁶U → ¹⁶O + ²³⁸U* reactions are equivalent and much less than 10ℏ, and therefore satisfy conditions proposed by Chiba and Iwamoto [Phys. Rev. C 81, 044604 (2010)] if they are used as a pair in the surrogate ratio method.

  5. PP: A graphics post-processor for the EQ6 reaction path code

    SciTech Connect

    Stockman, H.W.

    1994-09-01

    The PP code is a graphics post-processor and plotting program for EQ6, a popular reaction-path code. PP runs on personal computers, allocates memory dynamically, and can handle very large reaction path runs. Plots of simple variable groups, such as fluid and solid phase composition, can be obtained with as few as two keystrokes. Navigation through the list of reaction path variables is simple and efficient. Graphics files can be exported for inclusion in word processing documents and spreadsheets, and experimental data may be imported and superposed on the reaction path runs. The EQ6 thermodynamic database can be searched from within PP, to simplify interpretation of complex plots.

  6. Modelling reaction kinetics inside cells

    PubMed Central

    Grima, Ramon; Schnell, Santiago

    2009-01-01

    In the past decade, advances in molecular biology such as the development of non-invasive single molecule imaging techniques have given us a window into the intricate biochemical activities that occur inside cells. In this article we review four distinct theoretical and simulation frameworks: (1) non-spatial and deterministic, (2) spatial and deterministic, (3) non-spatial and stochastic and (4) spatial and stochastic. Each framework can be suited to modelling and interpreting intracellular reaction kinetics. By estimating the fundamental length scales, one can roughly determine which models are best suited for the particular reaction pathway under study. We discuss differences in prediction between the four modelling methodologies. In particular we show that taking into account noise and space does not simply add quantitative predictive accuracy but may also lead to qualitatively different physiological predictions, unaccounted for by classical deterministic models. PMID:18793122
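
    The contrast between the deterministic and stochastic non-spatial frameworks can be illustrated with a minimal Gillespie simulation of a single decay reaction (a generic textbook sketch, not the authors' code):

```python
import random

def gillespie_decay(n0=100, k=1.0, t_end=5.0, seed=42):
    """Gillespie stochastic simulation of the reaction A -> B.

    Illustrates the 'non-spatial and stochastic' framework: molecule
    numbers change by discrete jumps at exponentially distributed times,
    in contrast to the deterministic rate equation dA/dt = -k*A.
    """
    rng = random.Random(seed)
    n, t, history = n0, 0.0, [(0.0, n0)]
    while n > 0:
        rate = k * n                     # total propensity
        t += rng.expovariate(rate)       # waiting time to the next reaction
        if t > t_end:
            break
        n -= 1
        history.append((t, n))
    return history

traj = gillespie_decay()
final_n = traj[-1][1]
```

    For small molecule numbers the stochastic trajectory fluctuates visibly around the deterministic exponential decay, which is exactly the regime where the two frameworks can diverge qualitatively.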

  7. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... codes are free of coding errors and produce stable solutions; (v) Conceptual models have undergone...

  8. 28 CFR 36.608 - Guidance concerning model codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidance concerning model codes. 36.608... Codes § 36.608 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...

  9. Rapid installation of numerical models in multiple parent codes

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of 'model interface guidelines', called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.

  10. Dynamic Alignment Models for Neural Coding

    PubMed Central

    Kollmorgen, Sepp; Hahnloser, Richard H. R.

    2014-01-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448

  11. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted...

  12. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces, or APIs, currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
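
    The generation step can be sketched in miniature (a toy illustration of the idea, not the Memops framework; the attribute-spec format and generated method names are invented):

```python
def generate_accessors(class_name, attributes):
    """Sketch of model-driven code generation: from an abstract attribute
    list (a stand-in for a UML model), emit a Python data-access class
    with validity checking."""
    lines = [f"class {class_name}:"]
    lines.append("    def __init__(self):")
    for name, typ in attributes:
        lines.append(f"        self._{name} = None")
    for name, typ in attributes:
        lines.append(f"    def set_{name}(self, value):")
        lines.append(f"        if not isinstance(value, {typ.__name__}):")
        lines.append(f"            raise TypeError('{name} must be {typ.__name__}')")
        lines.append(f"        self._{name} = value")
        lines.append(f"    def get_{name}(self):")
        lines.append(f"        return self._{name}")
    return "\n".join(lines)

# generate and compile an API from the abstract model definition
source = generate_accessors("Spectrum", [("dimension", int), ("name", str)])
namespace = {}
exec(source, namespace)
spec = namespace["Spectrum"]()
spec.set_dimension(2)
```

    The point of the pattern is that regenerating the accessors from the model keeps every API consistent with the standard as it evolves, instead of hand-maintaining parsing and validation code.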

  13. Modeling the complex bromate-iodine reaction.

    PubMed

    Machado, Priscilla B; Faria, Roberto B

    2009-05-07

    In this article, it is shown that the FLEK model (ref 5) is able to model the experimental results of the bromate-iodine clock reaction. Five different complex chemical systems, the bromate-iodide clock and oscillating reactions, the bromite-iodide clock and oscillating reactions, and now the bromate-iodine clock reaction are adequately accounted for by the FLEK model.

  14. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... Chapter 3. (e) Materials standards Chapter 26. (f) Construction components Part III. (g) Glass Chapter 2... dwellings (NFPA 70A-1990)....

  15. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
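
    The sensitivity coefficients described above can be approximated by finite differences on a toy problem (LSENS integrates sensitivity equations directly; this sketch only illustrates what a coefficient like ∂y(t)/∂y₀ means):

```python
import math

def integrate_decay(y0, k, t, n=10000):
    """Forward-Euler integration of dy/dt = -k*y (adequate for this
    non-stiff illustration)."""
    dt = t / n
    y = y0
    for _ in range(n):
        y = y + dt * (-k * y)
    return y

def sensitivity_wrt_y0(y0, k, t, eps=1e-6):
    """Finite-difference estimate of the sensitivity coefficient
    dy(t)/dy0: how the solution at time t responds to a perturbation
    of the initial condition."""
    return (integrate_decay(y0 + eps, k, t) - integrate_decay(y0, k, t)) / eps

s = sensitivity_wrt_y0(y0=1.0, k=2.0, t=1.0)
analytic = math.exp(-2.0)   # for linear decay, dy(t)/dy0 = exp(-k*t)
```

    In a real kinetics code the same quantity is computed for every species with respect to every initial value and rate parameter, which is why dedicated sensitivity machinery matters.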

  16. Population Coding of Visual Space: Modeling

    PubMed Central

    Lehky, Sidney R.; Sereno, Anne B.

    2011-01-01

    We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
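
    The modeling pipeline described above (unlabeled Gaussian-RF responses analyzed with multidimensional scaling) can be sketched as follows, assuming NumPy; the population sizes, RF width, and use of classical MDS are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def population_response(stim_xy, centers, sigma):
    """Firing rates of a population of neurons with Gaussian spatial RFs."""
    d2 = ((stim_xy[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def classical_mds(responses, n_dims=2):
    """Recover an intrinsic spatial map from unlabeled population activity
    via classical multidimensional scaling on response-pattern distances."""
    d2 = ((responses[:, None, :] - responses[None, :, :]) ** 2).sum(-1)
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j                       # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]     # top eigen-directions
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

rng = np.random.default_rng(0)
stimuli = rng.uniform(-1, 1, size=(40, 2))      # stimulus positions
centers = rng.uniform(-1, 1, size=(200, 2))     # RF centres, broad dispersion
rates = population_response(stimuli, centers, sigma=1.0)   # large RFs
embedding = classical_mds(rates)                # intrinsic 2D map
```

    Because the embedding uses only firing-rate patterns, never the RF centers themselves, it recovers relative positions (intrinsic coding) rather than absolute ones, mirroring the distinction the abstract draws.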

  17. Reaction Wheel Disturbance Model Extraction Software - RWDMES

    NASA Technical Reports Server (NTRS)

    Blaurock, Carl

    2009-01-01

    densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
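The harmonic-tuning step described above (harmonic amplitudes defined as specified functions of wheel speed, typically speed-squared) reduces, in its simplest form, to a least-squares fit. The sketch below invents harmonic numbers, coefficients, and a noise level; RWDMES's actual simultaneous nonlinear tuning of harmonic, broadband, and structural models is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical harmonic numbers and amplitude coefficients (illustrative only)
harmonics = np.array([1.0, 2.0, 5.2])   # disturbance frequency / wheel speed ratios
C_true = np.array([3e-6, 1e-6, 4e-7])   # force per speed-squared, N/(rad/s)^2

speeds = np.linspace(50.0, 300.0, 40)   # wheel speeds, rad/s

# Simulated "extracted harmonic" amplitudes: F_i = C_i * speed^2 + noise
amps = C_true[None, :] * speeds[:, None] ** 2
amps += rng.normal(0.0, 1e-4, amps.shape)

# Tuning step: least-squares fit of each coefficient against speed-squared
A = (speeds ** 2)[:, None]              # one-column design matrix
C_fit = np.array([np.linalg.lstsq(A, amps[:, i], rcond=None)[0][0]
                  for i in range(len(harmonics))])
print(C_fit)
```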

  18. 49 CFR 41.120 - Acceptable model codes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false Acceptable model codes. 41.120 Section 41.120 Transportation Office of the Secretary of Transportation SEISMIC SAFETY § 41.120 Acceptable model codes. (a) This... of this part. (b)(1) The following are model codes which have been found to provide a level...

  19. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in...

  20. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA...

  1. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of {sup 64}Cu and {sup 67}Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    SciTech Connect

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied to simulate the production of these radioisotopes with the TALYS, EMPIRE and LISE++ reaction codes; parameters and different models of nuclear level density, one of the most important components of statistical reaction models, are then adjusted for optimum production of the desired radioactive yields.

  2. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
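LSENS uses sophisticated stiff-ODE machinery; as a minimal illustration of why stiff kinetics demands implicit integration, here is a backward-Euler solve of a toy linear mechanism A -> B -> C with widely separated rates. The rate values and step size are invented, and this is not LSENS's algorithm.

```python
import numpy as np

# A -> B -> C with widely separated rates: a stiff linear kinetics problem
k1, k2 = 1.0e4, 1.0                     # fast and slow rate coefficients (1/s)
J = np.array([[-k1, 0.0, 0.0],
              [ k1, -k2, 0.0],
              [0.0,  k2, 0.0]])          # dy/dt = J y; columns sum to zero (mass conserved)

y = np.array([1.0, 0.0, 0.0])            # start with pure A
dt, t_end = 1.0e-3, 5.0                  # dt >> 1/k1: explicit Euler would blow up
M = np.linalg.inv(np.eye(3) - dt * J)    # backward-Euler step matrix
for _ in range(int(round(t_end / dt))):
    y = M @ y                            # implicit (A-stable) update

print(y)                                 # nearly all mass ends up in C
```

Because the columns of J sum to zero, the backward-Euler map conserves total mass exactly, while remaining stable at a step size thousands of times larger than the fast timescale.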

  3. Computerized reduction of elementary reaction sets for combustion modeling

    NASA Technical Reports Server (NTRS)

    Wikstrom, Carl V.

    1991-01-01

    If the entire set of elementary reactions is to be solved in the modeling of chemistry in computational fluid dynamics, a set of stiff ordinary differential equations must be integrated. Some of the reactions take place at very high rates, requiring short time steps, while others take place more slowly and make little progress in the short time step integration. The goal is to develop a procedure to automatically obtain sets of finite rate equations, consistent with partial equilibrium assumptions, from an elementary set appropriate to local conditions. The possibility of computerized reaction reduction was demonstrated. However, the ability to use the reduced reaction set depends on the ability of the CFD approach to incorporate partial equilibrium calculations into the computer code. Therefore, the results should be tested on a code with partial equilibrium capability.
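The screening idea behind such an automated reduction can be caricatured as a timescale comparison: reactions much faster than the flow time step are candidates for a partial equilibrium assumption, the rest stay finite-rate. The reactions and pseudo-first-order rates below are invented, and the real procedure is considerably more involved.

```python
# Classify each elementary reaction as near-equilibrium (fast) or finite-rate
# (slow) by comparing its characteristic time to the CFD time step.
# All names and rate values are illustrative.

cfd_dt = 1.0e-6  # flow-solver time step, s

# Hypothetical elementary reactions with pseudo-first-order rates (1/s)
reactions = {
    "H + O2 -> OH + O":   5.0e9,
    "OH + H2 -> H2O + H": 2.0e8,
    "N2 + O -> NO + N":   4.0e2,   # slow NOx chemistry
}

partial_equilibrium, finite_rate = [], []
for name, k in reactions.items():
    tau = 1.0 / k                  # characteristic reaction time
    (partial_equilibrium if tau < cfd_dt else finite_rate).append(name)

print(partial_equilibrium, finite_rate)
```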

  4. Direct containment heating models in the CONTAIN code

    SciTech Connect

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  5. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Guidance concerning model codes. 36.607... BY PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of...

  6. ER@CEBAF: Modeling code developments

    SciTech Connect

    Meot, F.; Roblin, Y.

    2016-04-13

    A proposal for a multiple-pass, high energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant, a version of CEBAF is being developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line, where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  7. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable (such as legacy and COTS software) and programs that use features (such as pointers, structures, and object-orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  8. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  9. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  10. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  11. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  12. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e.,...

  13. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    DOE PAGES

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.; ...

    2013-11-08

    Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the EMPIRE reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model best reproduces the experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in the continuum) have been obtained with a Monte Carlo technique. Furthermore, the excitation energy dependencies were found to be inconsistent with the Fermi-gas model.
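For reference, the Gilbert-Cameron model singled out above is a composite level density: a constant-temperature form below a matching energy and a Fermi-gas form above it. The sketch below uses invented parameter values (they are not fits to 62Ni or 60Co) and the standard textbook expressions.

```python
import math

# Gilbert-Cameron composite level density, illustrative parameters only
a, delta = 7.0, 1.5          # level density parameter (1/MeV), pairing shift (MeV)
T, E0, Ex = 1.3, -0.5, 7.0   # CT temperature, CT shift, matching energy (MeV)

def fermi_gas(E, spin_cutoff=3.0):
    """Back-shifted Fermi-gas level density (MeV^-1)."""
    U = E - delta
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * spin_cutoff * a ** 0.25 * U ** 1.25)

def constant_temperature(E):
    """Constant-temperature form used below the matching energy (MeV^-1)."""
    return math.exp((E - E0) / T) / T

def level_density(E):
    return constant_temperature(E) if E < Ex else fermi_gas(E)

for E in (3.0, 6.0, 9.0, 12.0):
    print(E, level_density(E))
```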

  14. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    NASA Astrophysics Data System (ADS)

    Plante, Ianik; Devroye, Luc

    2015-09-01

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort to develop models for understanding the role of chemical reactions in the effects of radiation on cells and tissues, and it may eventually be included in event-based models of space radiation risks. Moreover, since many reactions in biological systems are of this type, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.
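The Green's function of the reversible ABCD reaction itself is involved, but the flavor of Green's-function-based sampling in the IRT spirit can be shown on the simplest textbook case: an irreversible, fully diffusion-controlled pair, for which the reaction probability by time t is W(t) = (R/r0) erfc((r0 - R)/sqrt(4 D t)). Inverting W by bisection gives a direct sampler; the numerical values are arbitrary and this is not the paper's algorithm.

```python
import math
import random

random.seed(2)

R, r0, D = 0.5, 1.0, 1.0   # encounter radius, initial separation, diffusion coeff (arbitrary units)

def erfc_inv(y, lo=0.0, hi=10.0):
    """Invert math.erfc on [0, 10] by bisection (erfc is decreasing)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_reaction_time():
    """Draw a reaction time; math.inf means the pair escapes (prob. 1 - R/r0)."""
    u = random.random()
    if u >= R / r0:
        return math.inf
    x = erfc_inv(u * r0 / R)               # solve W(t) = u for the scaled argument
    return ((r0 - R) / x) ** 2 / (4.0 * D)

times = [sample_reaction_time() for _ in range(20000)]
finite = [t for t in times if t != math.inf]
print(len(finite) / len(times))            # ultimate reaction probability, near R/r0 = 0.5
```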

  15. On the Green's function of the partially diffusion-controlled reversible ABCD reaction for radiation chemistry codes

    SciTech Connect

    Plante, Ianik; Devroye, Luc

    2015-09-15

    Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort to develop models for understanding the role of chemical reactions in the effects of radiation on cells and tissues, and it may eventually be included in event-based models of space radiation risks. Moreover, since many reactions in biological systems are of this type, this algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.

  16. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR...

  17. Genetic code: an alternative model of translation.

    PubMed

    Damjanović, Zvonimir M; Rakocević, Miloje M

    2005-06-01

    Our earlier studies of translation have led us to a specific numeric coding of nucleotides (A = 0, C = 1, G = 2, and U = 3), that is, a quaternary numeric system; to an ordering of digrams and codons (read right to left: .yx and Z.yx) as ordinal numbers from 000 to 111; and to seek a hypothetical transformation of mRNA to the 20 canonical amino acids. In this work, we show that amino acids match the ordinal number, that is, follow as transforms of their respective digrams and/or mRNA codons. Sixteen digrams and their respective amino acids appear as a parallel (discrete) array. A first approximation of translation in this view is demonstrated by a "twisted" spiral on the side of "phantom" codons and by an ordering of amino acids in the form of a cross on the other side, whereby the transformation of digrams and/or phantom codons to amino acids appears to be one-to-one! Classification of the canonical amino acids derived from our dynamic model clarifies physicochemical criteria, such as purinity, pyrimidinity, and particularly codon rules. The system implies both the rules of Siemion and Siemion and of Davidov, as well as balances of atomic and nucleon numbers within groups of amino acids. Formalization in this system offers the possibility of extrapolating backward to the initial organization of heredity.
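The quaternary numbering described above (A = 0, C = 1, G = 2, U = 3) can be made concrete in a few lines. Treating the right-to-left reading as "first base is least significant" is our interpretation of the ".yx / Z.yx" notation, so the digit order here is illustrative.

```python
# Codons as base-4 ordinals under the coding A=0, C=1, G=2, U=3.
DIGIT = {"A": 0, "C": 1, "G": 2, "U": 3}

def codon_ordinal(codon: str) -> int:
    """Value of a codon as a base-4 number, first base least significant."""
    return sum(DIGIT[b] * 4 ** i for i, b in enumerate(codon))

print(codon_ordinal("AAA"))  # 0
print(codon_ordinal("UUU"))  # 63
```

Whatever digit order is chosen, the mapping is a bijection: the 64 codons map exactly onto the ordinals 0 through 63.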

  18. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix inversion is needed even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.

  19. Review and verification of CARE 3 mathematical model and code

    NASA Technical Reports Server (NTRS)

    Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.

    1983-01-01

    The CARE-III mathematical model and code verification performed by Boeing Computer Services were documented. The mathematical model was verified for permanent and intermittent faults. The transient fault model was not addressed. The code verification was performed on CARE-III, Version 3. A CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.

  20. A draft model aggregated code of ethics for bioethicists.

    PubMed

    Baker, Robert

    2005-01-01

    Bioethicists function in an environment in which their peers--healthcare executives, lawyers, nurses, physicians--assert the integrity of their fields through codes of professional ethics. Is it time for bioethics to assert its integrity by developing a code of ethics? Answering in the affirmative, this paper lays out a case by reviewing the historical nature and function of professional codes of ethics. Arguing that professional codes are aggregative enterprises growing in response to a field's historical experiences, it asserts that bioethics now needs to assert its integrity and independence and has already developed a body of formal statements that could be aggregated to create a comprehensive code of ethics for bioethics. A Draft Model Aggregated Code of Ethics for Bioethicists is offered in the hope that analysis and criticism of this draft code will promote further discussion of the nature and content of a code of ethics for bioethicists.

  1. Numerical MHD codes for modeling astrophysical flows

    NASA Astrophysics Data System (ADS)

    Koldoba, A. V.; Ustyugova, G. V.; Lii, P. S.; Comins, M. L.; Dyda, S.; Romanova, M. M.; Lovelace, R. V. E.

    2016-05-01

    We describe a Godunov-type magnetohydrodynamic (MHD) code based on the Miyoshi and Kusano (2005) solver which can be used to solve various astrophysical hydrodynamic and MHD problems. The energy equation is in the form of entropy conservation. The code has been implemented in several different coordinate systems: 2.5D axisymmetric cylindrical coordinates, 2D Cartesian coordinates, 2D plane polar coordinates, and fully 3D cylindrical coordinates. Viscosity and diffusivity are implemented in the code to control the accretion rate in the disk and the rate of penetration of the disk matter through the magnetic field lines. The code has been utilized for the numerical investigation of a number of different astrophysical problems, several examples of which are shown.
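The flux-differencing structure of a Godunov-type finite-volume code can be shown on the simplest possible case, first-order upwind for linear advection on a periodic grid. This is emphatically not the Miyoshi-Kusano MHD solver (which solves a full multi-state MHD Riemann problem); it only illustrates the conservative interface-flux update such codes share.

```python
import numpy as np

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)

u = np.exp(-200.0 * (x - 0.3) ** 2)       # initial Gaussian pulse
mass0 = u.sum() * dx

for _ in range(int(round(0.4 / dt))):
    # Riemann solution at each interface for a > 0 is the upwind state:
    # F_{i+1/2} = a * u_i, so du_i = (dt/dx) * (F_{i-1/2} - F_{i+1/2})
    u += (dt / dx) * (a * np.roll(u, 1) - a * u)

print(x[np.argmax(u)], u.sum() * dx)      # pulse advected to ~0.7, mass conserved
```

Because the update is written as a difference of interface fluxes, total mass is conserved to machine precision, the defining property of this class of schemes.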

  2. Status report on the THROHPUT transient heat pipe modeling code

    SciTech Connect

    Hall, M.L.; Merrigan, M.A.; Reid, R.S.

    1993-11-01

    Heat pipes are structures which transport heat by the evaporation and condensation of a working fluid, giving them a high effective thermal conductivity. Many space-based uses for heat pipes have been suggested, and high temperature heat pipes using liquid metals as working fluids are especially attractive for these purposes. These heat pipes are modeled by the THROHPUT code (THROHPUT is an acronym for Thermal Hydraulic Response Of Heat Pipes Under Transients and is pronounced like "throughput"). Improvements have been made to the THROHPUT code which models transient thermohydraulic heat pipe behavior. The original code was developed as a doctoral thesis research code by Hall. The current emphasis has been shifted from research into the numerical modeling to the development of a robust production code. Several modeling obstacles that were present in the original code have been eliminated, and several additional features have been added.

  3. Energy standards and model codes development, adoption, implementation, and enforcement

    SciTech Connect

    Conover, D.R.

    1994-08-01

    This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.

  4. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, the researchers include applications of supercomputing to reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  5. Modeling shock-driven reaction in low density PMDI foam

    NASA Astrophysics Data System (ADS)

    Brundage, Aaron; Alexander, C. Scott; Reinhart, William; Peterson, David

    Shock experiments on low density polyurethane foams reveal evidence of reaction at low impact pressures. However, these reaction thresholds are not evident over the low pressures reported for historical Hugoniot data of highly distended polyurethane at densities below 0.1 g/cc. To fill this gap, impact data given in a companion paper for polymethylene diisocyanate (PMDI) foam with a density of 0.087 g/cc were acquired for model validation. An equation of state (EOS) was developed to predict the shock response of these highly distended materials over the full range of impact conditions, representing compaction of the inert material, low-pressure decomposition, and compression of the reaction products. A tabular SESAME EOS of the reaction products was generated using the JCZS database in the TIGER equilibrium code. In particular, the Arrhenius Burn EOS, a two-state model which transitions from an unreacted to a reacted state using single-step Arrhenius kinetics, as implemented in the shock physics code CTH, was modified to include a statistical distribution of states. Hence, a single EOS is presented that predicts the onset of reaction due to shock loading in PMDI-based polyurethane foams. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract DE-AC04-94AL85000.
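The single-step Arrhenius kinetics underlying a two-state (unreacted to reacted) burn model amounts to dF/dt = (1 - F) Z exp(-Ta/T) for the reaction progress F. The frequency factor, activation temperature, and test temperatures below are invented, not CTH inputs, and the constant-temperature integration is only a sketch.

```python
import math

Z, Ta = 1.0e7, 1.2e4      # frequency factor (1/s), activation temperature (K); illustrative

def burn_fraction(T, t, dt=1.0e-7):
    """Integrate reaction progress F at constant temperature T for duration t."""
    F = 0.0
    for _ in range(int(round(t / dt))):
        F += dt * (1.0 - F) * Z * math.exp(-Ta / T)
    return F

print(burn_fraction(600.0, 1.0e-3))    # cool material: negligible reaction
print(burn_fraction(1500.0, 1.0e-3))   # shock-heated material: mostly reacted
```

The steep exponential in temperature is what produces the threshold-like onset of reaction with impact pressure that the EOS model captures.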

  6. Secondary neutron source modelling using MCNPX and ALEPH codes

    NASA Astrophysics Data System (ADS)

    Trakas, Christos; Kerkar, Nordine

    2014-06-01

    Monitoring the subcritical state and divergence of reactors requires the presence of neutron sources, and it is mainly secondary neutrons from these sources that feed the ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the level of subcriticality of the reactor. In cycle 1, primary neutrons are provided by sources activated outside of the reactor (e.g. Cf252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of fuel in the first cycle. In most reactors, neither family of sources is sufficient to efficiently monitor the divergence of the second and subsequent cycles. Secondary source clusters (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of unstable Sb124), produces in subsequent cycles a photo-neutron source through the gamma (from Sb124)-neutron (on Be9) reaction. This paper presents a model of the process between irradiation in cycle 1 and the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with two experimental results from the PWR 1450 MWe-N4 reactors, and good agreement is observed. The subcriticality of the reactors is about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are of the order of 10%, lower than the combined uncertainty of the measurements and the code simulation. This comparison validates the AREVA methodology, which provides a best-estimate SRD counting rate for cycle 2 and subsequent cycles and allows optimizing the position of the SSC depending on the geographic location of the sources, the main parameter for optimal monitoring of subcritical states.
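The Sb124 bookkeeping behind such a model can be sketched as buildup at a constant activation rate during cycle 1 followed by free decay. The activation rate below is a made-up number, and a ~60.2-day Sb124 half-life is assumed; MCNPX/ALEPH of course resolve the full spatial flux and depletion chains rather than a single rate.

```python
import math

half_life = 60.2 * 86400.0              # assumed Sb124 half-life, s
lam = math.log(2.0) / half_life
P = 1.0e12                              # activation rate, atoms/s (illustrative)

def n_sb124(t_irradiation, t_decay):
    """Sb124 atoms after irradiating for t_irradiation, then decaying for t_decay."""
    built_up = (P / lam) * (1.0 - math.exp(-lam * t_irradiation))
    return built_up * math.exp(-lam * t_decay)

cycle = 365.0 * 86400.0                 # one-year cycle, s
print(n_sb124(cycle, 0.0))              # inventory at end of cycle 1
print(n_sb124(cycle, 90.0 * 86400.0))   # after a 90-day outage, start of cycle 2
```

The inventory saturates toward P/lam during irradiation (a one-year cycle is several half-lives), then decays through the outage, which is why the photo-neutron source strength at the start of cycle 2 depends on both histories.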

  7. Capture and documentation of coded data on adverse drug reactions: an overview.

    PubMed

    Paul, Lindsay; Robinson, Kerin M

    2012-01-01

    Allergic responses to prescription drugs are largely preventable, and incur significant cost to the community both financially and in terms of healthcare outcomes. The capacity to minimise the effects of repeated events rests predominantly with the reliability of allergy documentation in medical records and computerised physician order entry systems (CPOES) with decision support such as allergy alerts. This paper presents an overview of the nature and extent of adverse drug reactions (ADRs) in Australia and other developed countries, a discussion and evaluation of strategies which have been devised to address this issue, and a commentary on the role of coded data in informing this patient safety issue. It is not concerned with pharmacovigilance systems that monitor ADRs on a global scale. There are conflicting reports regarding the efficacy of these strategies. Although in many cases allergy alerts are effective, lack of sensitivity and contextual relevance can often induce doctors to override alerts. Human factors such as user fatigue and inadequate adverse drug event reporting, including ADRs, are commonplace. The quality of and response to allergy documentation can be enhanced by the participation of nurses and pharmacists, particularly in medication reconciliation. The International Classification of Diseases (ICD) coding of drug allergies potentially yields valuable evidence, but the quality of local and national level coded data is hampered by under-documenting and under-coding.

  8. A Networks Approach to Modeling Enzymatic Reactions.

    PubMed

    Imhof, P

    2016-01-01

    Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes.
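The graph idea described above can be sketched with a toy transition network: nodes are intermediate states, edge weights are barrier heights, and a shortest-path search identifies the energetically cheapest route. The network and barrier values are invented; minimizing the cumulative barrier is one possible criterion, and real studies may instead minimize the highest barrier along the path.

```python
import heapq

# Toy transition network: reactant R, intermediates I1-I3, product P.
# Edge weights are hypothetical barrier heights (kcal/mol).
edges = {
    "R":  [("I1", 12.0), ("I2", 18.0)],
    "I1": [("I3", 10.0)],
    "I2": [("P", 5.0)],
    "I3": [("P", 4.0)],
    "P":  [],
}

def cheapest_path(start, goal):
    """Dijkstra search for the path minimizing the summed barrier heights."""
    heap, settled = [(0.0, start, [start])], {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if cost >= settled.get(node, float("inf")):
            continue
        settled[node] = cost
        for nxt, w in edges[node]:
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(cheapest_path("R", "P"))  # the R -> I2 -> P route wins here
```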

  9. A model for reaction rates in turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Chinitz, W.; Evans, J. S.

    1984-01-01

To account for turbulent temperature and species-concentration fluctuations, a model of their effects on chemical reaction rates is presented for use in computer analyses of turbulent reacting flows. The model yields two parameters which multiply the terms in the reaction-rate equations. Graphs of these two parameters are presented as functions of the mean values and the intensity of the turbulent fluctuations of the temperature and species concentrations. These graphs facilitate incorporation of the model into existing computer programs which describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.
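
The abstract does not reproduce the two model parameters themselves, but the underlying idea, that temperature fluctuations change the mean reaction rate relative to the rate evaluated at the mean temperature, can be sketched with a hypothetical Arrhenius rate and a Monte Carlo average over an assumed Gaussian temperature PDF:

```python
import math, random

def arrhenius(T, A=1.0, Ta=15000.0):
    """Hypothetical Arrhenius rate k = A * exp(-Ta / T); Ta in kelvin."""
    return A * math.exp(-Ta / T)

def turbulence_factor(T_mean, T_rms, n=50_000, seed=1):
    """Estimate <k(T)> / k(<T>) for Gaussian temperature fluctuations
    by Monte Carlo; samples are clipped at 300 K to stay physical."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        T = max(300.0, rng.gauss(T_mean, T_rms))
        total += arrhenius(T)
    return (total / n) / arrhenius(T_mean)

print(turbulence_factor(1500.0, 0.0))    # -> 1.0 (no fluctuations)
print(turbulence_factor(1500.0, 150.0))  # > 1: fluctuations raise the mean rate
```

Because exp(-Ta/T) is convex in T at these conditions, fluctuations enhance the mean rate; a multiplicative factor of this kind is what the model's parameters capture in precomputed form.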

  10. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously; the method is calibrated against analytic solutions of the kinetic equations. The condensation kinetic model is based on the cloud particle growth equation and the mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. Realistic values are used for the condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
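
A minimal sketch of the splitting idea on a toy two-process model (not the AERFORM equations; the growth laws and constants are assumed for illustration):

```python
def condensation_step(m, dt, G=1.0e-3):
    """Toy condensation: constant mass uptake dm/dt = G (assumed form)."""
    return m + G * dt

def coagulation_step(n, dt, K=1.0e-2):
    """Toy coagulation loss dn/dt = -K n^2, advanced with its exact
    one-step solution."""
    return n / (1.0 + K * n * dt)

def lie_split(n, m, t_end, dt):
    """First-order (Lie) splitting: each sub-step advances one process
    while holding the other fixed, so each process can use its own
    best integrator."""
    for _ in range(round(t_end / dt)):
        m = condensation_step(m, dt)
        n = coagulation_step(n, dt)
    return n, m

n, m = lie_split(n=100.0, m=1.0e-12, t_end=10.0, dt=0.1)
print(n, m)  # number density decays toward 100/11; mass grows by G * t_end
```

Calibrating such a scheme against analytic solutions of the individual kinetic equations, as the abstract describes, amounts to checking each sub-step against its known exact solution.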

  11. System Data Model (SDM) Source Code

    DTIC Science & Technology

    2012-08-23

    subject 407: ecode pointer to current position in compiled code 408: mstart pointer to the current match start position (can be...repeated call or recursion limit) 425: */ 426: 427: static int 428: match(REGISTER USPTR eptr, REGISTER const uschar * ecode , const uschar *mstart...variables */ 453: 454: frame->Xeptr = eptr; 455: frame->Xecode = ecode ; 456: frame->Xmstart = mstart; 457: frame->Xoffset_top = offset_top; 458

  12. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    SciTech Connect

    1995-08-01

The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  13. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    SciTech Connect

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Burger, A.; Gorgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.

    2014-09-03

Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacing and to discrete levels, and using a spin cutoff parameter with a much weaker excitation energy dependence than is predicted by the Fermi-gas model.
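
For context, a hedged sketch of the Fermi-gas level density referred to above; the spin-cutoff parameterization below is one common choice and not necessarily the one used in the paper:

```python
import math

def fermi_gas_rho(U, a, A):
    """Fermi-gas total level density (MeV^-1) at excitation energy U (MeV),
    with level density parameter a (MeV^-1) and mass number A.
    Spin cutoff: sigma^2 = 0.0888 * sqrt(a*U) * A**(2/3) (one common choice)."""
    sigma = math.sqrt(0.0888 * math.sqrt(a * U) * A ** (2.0 / 3.0))
    return math.exp(2.0 * math.sqrt(a * U)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * U ** 1.25)

# The level density rises steeply with excitation energy, while this
# spin cutoff grows only as (a*U)**(1/4); the paper's fits favor an
# even weaker energy dependence for the spin cutoff.
print(fermi_gas_rho(5.0, 6.5, 58))
print(fermi_gas_rho(10.0, 6.5, 58))
```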

  14. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    DOE PAGES

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; ...

    2014-09-03

Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacing and to discrete levels, and using a spin cutoff parameter with a much weaker excitation energy dependence than is predicted by the Fermi-gas model.

  15. Modeling of turbulent chemical reaction

    NASA Technical Reports Server (NTRS)

    Chen, J.-Y.

    1995-01-01

    Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.

  16. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models were developed which allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2 or H atoms. Because the product of interest is particulate silicon, processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.

  17. Conversations about Code-Switching: Contrasting Ideologies of Purity and Authenticity in Basque Bilinguals' Reactions to Bilingual Speech

    ERIC Educational Resources Information Center

    Lantto, Hanna

    2016-01-01

    This study examines the manifestations of purity and authenticity in 47 Basque bilinguals' reactions to code-switching. The respondents listened to two speech extracts with code-switching, filled in a short questionnaire and talked about the extracts in small groups. These conversations were then recorded. The respondents' beliefs can be…

  18. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
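
A minimal particle swarm optimiser of the kind described, applied to a hypothetical two-coefficient misfit function (the actual CREST coefficients and the hydrocode-based objective are not reproduced here):

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal global-best particle swarm optimiser.
    bounds: list of (lo, hi) per coefficient."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical two-coefficient rate law fitted to synthetic data:
# the misfit is the squared distance to the "true" coefficients.
target = (0.35, 1.8)
def misfit(c):
    return (c[0] - target[0]) ** 2 + (c[1] - target[1]) ** 2

best, err = pso(misfit, bounds=[(0.0, 1.0), (0.0, 5.0)])
```

In the paper's setting, `misfit` would instead run a hydrocode simulation with candidate coefficients and score it against the shock-initiation and detonation data.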

  19. Animal models of idiosyncratic drug reactions.

    PubMed

    Ng, Winnie; Lobach, Alexandra R M; Zhu, Xu; Chen, Xin; Liu, Feng; Metushi, Imir G; Sharma, Amy; Li, Jinze; Cai, Ping; Ip, Julia; Novalen, Maria; Popovic, Marija; Zhang, Xiaochu; Tanino, Tadatoshi; Nakagawa, Tetsuya; Li, Yan; Uetrecht, Jack

    2012-01-01

    If we could predict and prevent idiosyncratic drug reactions (IDRs) it would have a profound effect on drug development and therapy. Given our present lack of mechanistic understanding, this goal remains elusive. Hypothesis testing requires valid animal models with characteristics similar to the idiosyncratic reactions that occur in patients. Although it has not been conclusively demonstrated, it appears that almost all IDRs are immune-mediated, and a dominant characteristic is a delay between starting the drug and the onset of the adverse reaction. In contrast, most animal models are acute and therefore involve a different mechanism than idiosyncratic reactions. There are, however, a few animal models such as the nevirapine-induced skin rash in rats that have characteristics very similar to the idiosyncratic reaction that occurs in humans and presumably have a very similar mechanism. These models have allowed testing hypotheses that would be impossible to test in any other way. In addition there are models in which there is a delayed onset of mild hepatic injury that resolves despite continued treatment similar to the "adaptation" reactions that are more common than severe idiosyncratic hepatotoxicity in humans. This probably represents the development of immune tolerance. However, most attempts to develop animal models by stimulating the immune system have been failures. A specific combination of MHC and T cell receptor may be required, but it is likely more complex. Animal studies that determine the requirements for an immune response would provide vital clues about risk factors for IDRs in patients.

  20. Cavitation Modeling in Euler and Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation, which allows the effect of subcooling in the vicinity of the cavity interface to be modeled, taking into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional turbomachinery codes, the emphasis in the present paper will center on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggests that these models are appropriate for incorporation in current generation turbomachinery codes.

  1. Two-dimensional MHD generator model. [GEN code

    SciTech Connect

    Geyer, H. K.; Ahluwalia, R. K.; Doss, E. D.

    1980-09-01

A steady state, two-dimensional MHD generator code, GEN, is presented. The code solves the equations of conservation of mass, momentum, and energy, using a Von Mises transformation and a local linearization of the equations. By splitting the source terms into a part proportional to the axial pressure gradient and a part independent of the gradient, the pressure distribution along the channel is easily obtained to satisfy various criteria. Thus, the code can run effectively in both design modes, where the channel geometry is determined, and analysis modes, where the geometry is previously known. The code also employs a mixing length concept for turbulent flows, Cebeci and Chang's wall roughness model, and an extension of that model to the effective thermal diffusivities. Results on code validation, as well as comparisons of skin friction and Stanton number calculations with experimental results, are presented.

  2. EM modeling for GPIR using 3D FDTD modeling codes

    SciTech Connect

    Nelson, S.D.

    1994-10-01

An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system to match the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.

  3. Subgrid Combustion Modeling for the Next Generation National Combustion Code

    NASA Technical Reports Server (NTRS)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher

    2003-01-01

In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech has been provided to researchers at NASA/GRC for incorporation into the next generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers locally molecular diffusion and reaction kinetics exactly without requiring closure and thus provides an attractive feature to simulate complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initialization and running LES are also addressed in the collaborative effort. In parallel to this technology transfer effort (which is continuously ongoing), research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will be in spray combustion in gas turbine engines and supersonic scalar mixing.

  4. Quantization and psychoacoustic model in audio coding in advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2011-10-01

This paper presents a complete optimized architecture of Advanced Audio Coder quantization with Huffman coding. After that, psychoacoustic model theory is presented and a few algorithms are described: the standard Two Loop Search, its modifications, Genetic, Just Noticeable Level Difference, Trellis-Based, and its modification, the Cascaded Trellis-Based Algorithm.

  5. Quasiglobal reaction model for ethylene combustion

    NASA Technical Reports Server (NTRS)

    Singh, D. J.; Jachimowski, Casimir J.

    1994-01-01

    The objective of this study is to develop a reduced mechanism for ethylene oxidation. The authors are interested in a model with a minimum number of species and reactions that still models the chemistry with reasonable accuracy for the expected combustor conditions. The model will be validated by comparing the results to those calculated with a detailed kinetic model that has been validated against the experimental data.

  6. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    SciTech Connect

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-sec during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines the dielectric is operating in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite difference time domain (FDTD) code. In the case of magnetic materials, both rate independent and rate dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes and 1-D codes.

  7. A unified model of the standard genetic code

    PubMed Central

    Morgado, Eberto R.

    2017-01-01

The Rodin–Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether, these results can be attained neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.

  8. A unified model of the standard genetic code.

    PubMed

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether, these results can be attained neither in two nor in three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.
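
The 6D hypercube picture can be illustrated by encoding each nucleotide as two bits, so that a codon becomes a vertex of (Z2)^6 and single-attribute base changes correspond to hypercube edges; the particular bit assignment below is for illustration only and may differ from the paper's:

```python
# Each nucleotide maps to 2 bits, so a codon becomes a vertex of the
# 6D hypercube; Hamming distance then counts elementary attribute changes.
# This bit assignment is illustrative, not necessarily the paper's.
BITS = {"C": (0, 0), "U": (0, 1), "G": (1, 0), "A": (1, 1)}

def codon_to_vertex(codon):
    """Concatenate the 2-bit codes of the three bases into a 6-bit vertex."""
    return tuple(b for base in codon for b in BITS[base])

def hamming(u, v):
    """Number of differing coordinates between two hypercube vertices."""
    return sum(a != b for a, b in zip(u, v))

v1 = codon_to_vertex("AUG")  # (1, 1, 0, 1, 1, 0)
v2 = codon_to_vertex("AUA")
print(hamming(v1, v2))  # 1: AUG -> AUA changes one bit of the third base
```

In this representation the automorphisms the paper works with act as symmetries of the 6-bit vector space.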

  9. Feasibility study of nuclear transmutation by negative muon capture reaction using the PHITS code

    NASA Astrophysics Data System (ADS)

    Abe, Shin-ichiro; Sato, Tatsuhiko

    2016-06-01

Feasibility of nuclear transmutation of fission products in high-level radioactive waste by the negative muon capture reaction is investigated using the Particle and Heavy Ion Transport code System (PHITS). It is found that about 80% of stopped negative muons contribute to transmuting the target nuclide into a stable or short-lived nuclide in the case of 135Cs, which is one of the most important nuclides in the transmutation. The simulation result also indicates that the position of transmutation is controllable by changing the energy of the incident negative muon. Based on our simulation, it would take approximately 8.5 × 10^8 years to transmute 500 g of 135Cs by a negative muon beam with the highest intensity currently available.

  10. Deuteron induced reactions on Ho and La: Experimental excitation functions and comparison with code results

    NASA Astrophysics Data System (ADS)

    Hermanne, A.; Adam-Rebeles, R.; Tarkanyi, F.; Takacs, S.; Csikai, J.; Takacs, M. P.; Ignatyuk, A.

    2013-09-01

    Activation products of rare earth elements are gaining importance in medical and technical applications. In stacked foil irradiations, followed by high resolution gamma spectroscopy, the cross-sections for production of 161,165Er, 166gHo on 165Ho and 135,137m,137g,139Ce, 140La, 133m,133g,cumBa and 136Cs on natLa targets were measured up to 50 MeV. Reduced uncertainty is obtained by simultaneous remeasurement of the 27Al(d,x)24,22Na monitor reactions over the whole energy range. A comparison with experimental literature values and results from updated theoretical codes (ALICE-D, EMPIRE-D and the TENDL2012 online library) is discussed.

  11. Experimental differential cross sections, level densities, and spin cutoffs as a testing ground for nuclear reaction codes

    SciTech Connect

    Voinov, Alexander V.; Grimes, Steven M.; Brune, Carl R.; Burger, Alexander; Gorgen, Andreas; Guttormsen, Magne; Larsen, Ann -Cecilie; Massey, Thomas N.; Siem, Sunniva

    2013-11-08

    Proton double-differential cross sections from 59Co(α,p)62Ni, 57Fe(α,p)60Co, 56Fe(7Li,p)62Ni, and 55Mn(6Li,p)60Co reactions have been measured with 21-MeV α and 15-MeV lithium beams. Cross sections have been compared against calculations with the empire reaction code. Different input level density models have been tested. It was found that the Gilbert and Cameron [A. Gilbert and A. G. W. Cameron, Can. J. Phys. 43, 1446 (1965)] level density model is best to reproduce experimental data. Level densities and spin cutoff parameters for 62Ni and 60Co above the excitation energy range of discrete levels (in continuum) have been obtained with a Monte Carlo technique. Furthermore, excitation energy dependencies were found to be inconsistent with the Fermi-gas model.

  12. RHOCUBE: 3D density distributions modeling code

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
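
A sketch of the general approach (not the RHOCUBE API): sample a density distribution on a Cartesian grid and integrate along z to obtain the projected 2D map. The grid size and shell parameters below are arbitrary:

```python
import math

def gaussian_shell(r, r0=0.6, width=0.1):
    """Spherical Gaussian shell: rho(r) = exp(-(r - r0)^2 / (2 width^2))."""
    return math.exp(-((r - r0) ** 2) / (2.0 * width ** 2))

def projected_map(n=41, extent=1.0):
    """Sample rho on an n^3 Cartesian grid and integrate along z,
    giving a 2D projected map (an emission-measure-like image)."""
    ax = [-extent + 2.0 * extent * i / (n - 1) for i in range(n)]
    dz = ax[1] - ax[0]
    column = [[0.0] * n for _ in range(n)]
    for i, x in enumerate(ax):
        for j, y in enumerate(ax):
            s = 0.0
            for z in ax:
                s += gaussian_shell(math.sqrt(x * x + y * y + z * z))
            column[i][j] = s * dz
    return ax, column

ax, column = projected_map()
center = column[20][20]                      # line of sight through the middle
peak = max(max(row) for row in column)
print(peak > center)  # True: a shell projects to a limb-brightened ring
```

Combining several distributions into one model cube, as RHOCUBE supports, amounts to summing their sampled densities before the z integration.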

  13. A C library for retrieving specific reactions from the BioModels database

    PubMed Central

    Neal, M. L.; Galdzicki, M.; Gallimore, J. T.; Sauro, H. M.

    2014-01-01

    Summary: We describe libSBMLReactionFinder, a C library for retrieving specific biochemical reactions from the curated systems biology markup language models contained in the BioModels database. The library leverages semantic annotations in the database to associate reactions with human-readable descriptions, making the reactions retrievable through simple string searches. Our goal is to provide a useful tool for quantitative modelers who seek to accelerate modeling efforts through the reuse of previously published representations of specific chemical reactions. Availability and implementation: The library is open-source and dual licensed under the Mozilla Public License Version 2.0 and GNU General Public License Version 2.0. Project source code, downloads and documentation are available at http://code.google.com/p/lib-sbml-reaction-finder. Contact: mneal@uw.edu PMID:24078714

  14. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-238U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  15. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1983-06-01

Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  16. Spin-glass models as error-correcting codes

    NASA Astrophysics Data System (ADS)

    Sourlas, Nicolas

    1989-06-01

    DURING the transmission of information, errors may occur because of the presence of noise, such as thermal noise in electronic signals or interference with other sources of radiation. One wants to recover the information with the minimum error possible. In theory this is possible by increasing the power of the emitter source. But as the cost is proportional to the energy fed into the channel, it costs less to code the message before sending it, thus including redundant 'coding' bits, and to decode at the end. Coding theory provides rigorous bounds on the cost-effectiveness of any code. The explicit codes proposed so far for practical applications do not saturate these bounds; that is, they do not achieve optimal cost-efficiency. Here we show that theoretical models of magnetically disordered materials (spin glasses) provide a new class of error-correction codes. Their cost performance can be calculated using the methods of statistical mechanics, and is found to be excellent. These models can, under certain circumstances, constitute the first known codes to saturate Shannon's well-known cost-performance bounds.
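
The construction can be demonstrated at toy scale (this sketch is illustrative, not from the paper): encode a message as products of pairs of its ±1 bits, corrupt a few of the transmitted couplings, and decode by finding the ground state of the corresponding spin Hamiltonian:

```python
import itertools, random

def encode(bits, pairs):
    """Sourlas-style encoding: transmit the products J_ij = xi_i * xi_j."""
    return [bits[i] * bits[j] for i, j in pairs]

def decode(J, pairs, n):
    """Decode by finding the spin ground state of H(s) = -sum J_ij s_i s_j
    (brute force; feasible only for small n). The ground state is
    degenerate under a global spin flip, so we fix s[0] = +1."""
    best_s, best_e = None, float("inf")
    for s in itertools.product([-1, 1], repeat=n):
        if s[0] != 1:
            continue
        e = -sum(Jk * s[i] * s[j] for Jk, (i, j) in zip(J, pairs))
        if e < best_e:
            best_s, best_e = s, e
    return list(best_s)

rng = random.Random(42)
n = 6
message = [1] + [rng.choice([-1, 1]) for _ in range(n - 1)]  # first bit fixed = +1
pairs = list(itertools.combinations(range(n), 2))            # 15 couplings, 6 bits
J = encode(message, pairs)
J[3] = -J[3]; J[7] = -J[7]                                   # channel flips two couplings
assert decode(J, pairs, n) == message                        # both errors corrected
```

The redundancy (15 couplings for 6 bits) is what makes the noise correctable; statistical-mechanics methods evaluate how far this redundancy can be reduced, which is the paper's point about approaching the Shannon bound.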

  17. NeuCode labels with parallel reaction monitoring for multiplexed, absolute protein quantification

    PubMed Central

    Potts, Gregory K.; Voigt, Emily A.; Bailey, Derek J.; Westphall, Michael S.; Hebert, Alexander S.; Yin, John; Coon, Joshua J.

    2016-01-01

    We introduce a new method to multiplex the throughput of samples for targeted mass spectrometry analysis. The current paradigm for obtaining absolute quantification from biological samples requires spiking isotopically heavy peptide standards into light biological lysates. Because each lysate must be run individually, this method places limitations on sample throughput and high demands on instrument time. When cell lines are first metabolically labeled with various neutron-encoded (NeuCode) lysine isotopologues possessing mDa mass differences from each other, heavy cell lysates may be mixed and spiked with an additional heavy peptide as an internal standard. We demonstrate that these NeuCode lysate peptides may be co-isolated with their internal standards, fragmented, and analyzed together using high resolving power parallel reaction monitoring (PRM). Instead of running each sample individually, these methods allow samples to be multiplexed to obtain absolute concentrations of target peptides in 5, 15, and even 25 biological samples at a time during single mass spectrometry experiments. PMID:26882330

  18. Modeling Guidelines for Code Generation in the Railway Signaling Context

    NASA Technical Reports Server (NTRS)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  19. Simple models for reading neuronal population codes.

    PubMed Central

    Seung, H S; Sompolinsky, H

    1993-01-01

    In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at a finite tuning width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models. PMID:8248166
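    The population-vector readout described above can be sketched in a few lines: each neuron votes for its preferred direction, weighted by its firing rate, and the decoded direction is the angle of the summed vector. The cosine tuning curve, population size, and noise level below are illustrative assumptions, not the paper's exact model.

```python
import math, random

def population_vector(rates, preferred):
    # Sum each neuron's firing rate along its preferred direction;
    # the decoded direction is the angle of the resulting vector.
    x = sum(r * math.cos(p) for r, p in zip(rates, preferred))
    y = sum(r * math.sin(p) for r, p in zip(rates, preferred))
    return math.atan2(y, x)

# Toy population: cosine tuning around a true stimulus direction,
# with additive Gaussian noise (tuning shape and parameters are illustrative).
random.seed(0)
true_dir = 1.0  # radians
preferred = [2 * math.pi * i / 64 for i in range(64)]
rates = [max(0.0, 10 * (1 + math.cos(p - true_dir)) + random.gauss(0, 1))
         for p in preferred]
est = population_vector(rates, preferred)
```

    With broad cosine tuning and many neurons, the estimate lands close to the true direction, consistent with the abstract's observation that the population vector approaches maximum-likelihood performance for broad tuning.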

  20. DSD - A Particle Simulation Code for Modeling Dusty Plasmas

    NASA Astrophysics Data System (ADS)

    Joyce, Glenn; Lampe, Martin; Ganguli, Gurudas

    1999-11-01

    The NRL Dynamically Shielded Dust code (DSD) is a particle simulation code developed to study the behavior of strongly coupled, dusty plasmas. The model includes the electrostatic wake effects of plasma ions flowing through plasma electrons, collisions of dust and plasma particles with each other and with neutrals. The simulation model contains the short-range strong forces of a shielded Coulomb system, and the long-range forces that are caused by the wake. It also includes other effects of a flowing plasma such as drag forces. In order to model strongly coupled dust in plasmas, we make use of the techniques of molecular dynamics simulation, PIC simulation, and the "particle-particle/particle-mesh" (P3M) technique of Hockney and Eastwood. We also make use of the dressed test particle representation of Rostoker and Rosenbluth. Many of the techniques we use in the model are common to all PIC plasma simulation codes. The unique properties of the code follow from the accurate representation of both the short-range aspects of the interaction between dust grains, and long-range forces mediated by the complete plasma dielectric response. If the streaming velocity is zero, the potential used in the model reduces to the Debye-Hückel potential, and the simulation is identical to molecular dynamics models of the Yukawa potential. The plasma appears only implicitly through the plasma dispersion function, so it is not necessary in the code to resolve the fast plasma time scales.
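    In the zero-streaming limit mentioned above, the pair interaction reduces to the screened Coulomb (Yukawa/Debye-Hückel) form. A minimal sketch of the corresponding pair force in dimensionless units (not DSD's actual implementation):

```python
import math

def yukawa_force(r, kappa=1.0):
    """Magnitude of the pair force for the screened Coulomb (Yukawa)
    potential phi(r) = exp(-kappa*r)/r in dimensionless units, where
    kappa is the inverse screening length:
    F = -dphi/dr = (1/r**2 + kappa/r) * exp(-kappa*r)."""
    return (1.0 / r**2 + kappa / r) * math.exp(-kappa * r)

# With no screening (kappa = 0) this recovers the bare Coulomb 1/r^2 law;
# with screening, the force at a given separation is always weaker.
```

    A molecular-dynamics loop over dust-grain pairs using this force is exactly the Yukawa MD model the abstract says DSD reproduces when the ion flow vanishes.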

  1. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties, and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO but not in FIAT.

  2. Code System to Model Aqueous Geochemical Equilibria.

    SciTech Connect

    PETERSON, S. R.

    2001-08-23

    Version: 00 MINTEQ is a geochemical program to model aqueous solutions and the interactions of aqueous solutions with hypothesized assemblages of solid phases. It was developed for the Environmental Protection Agency to perform the calculations necessary to simulate the contact of waste solutions with heterogeneous sediments or the interaction of ground water with solidified wastes. MINTEQ can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. MINTEQ can accept a finite mass for any solid considered for dissolution and will dissolve the specified solid phase only until its initial mass is exhausted. This ability enables MINTEQ to model flow-through systems. In these systems the masses of solid phases that precipitate at earlier pore volumes can be dissolved at later pore volumes according to thermodynamic constraints imposed by the solution composition and solid phases present. The ability to model these systems permits evaluation of the geochemistry of dissolved trace metals, such as those released from low-level waste in shallow land burial sites. MINTEQ was designed to solve geochemical equilibria for systems composed of one kilogram of water, various amounts of material dissolved in solution, and any solid materials that are present. Systems modeled using MINTEQ can exchange energy and material (open systems) or just energy (closed systems) with the surrounding environment. Each system is composed of a number of phases. Every phase is a region with distinct composition and physically definable boundaries. All of the material in the aqueous solution forms one phase. The gas phase is composed of any gaseous material present, and each compositionally and structurally distinct solid forms a separate phase.
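    The finite-mass bookkeeping described above (a solid dissolves only until its initial mass is exhausted) can be sketched with a toy flow-through mass balance. This is an illustrative simplification, not MINTEQ's thermodynamic solver: each pore volume of fresh water dissolves solid up to an assumed solubility limit.

```python
def flow_through(initial_mass, solubility, n_pore_volumes):
    """Toy mass balance for a finite-mass dissolving solid: each pore
    volume of fresh water dissolves the solid up to its solubility limit,
    until the solid's mass is exhausted (illustrative only)."""
    mass = initial_mass
    history = []
    for _ in range(n_pore_volumes):
        dissolved = min(solubility, mass)  # cannot dissolve more than remains
        mass -= dissolved
        history.append(dissolved)
    return mass, history

# A solid with 2.5 mass units and a solubility of 1.0 per pore volume
# supplies 1.0, 1.0, 0.5, then 0.0 once exhausted.
remaining, conc = flow_through(initial_mass=2.5, solubility=1.0, n_pore_volumes=4)
```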

  3. A comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reaction flows

    NASA Technical Reports Server (NTRS)

    Magnotti, F.; Diskin, G.; Matulaitis, J.; Chinitz, W.

    1984-01-01

    The use of silane (SiH4) as an effective igniter and flame-stabilizing pilot fuel is well documented. A reliable chemical kinetic mechanism for predicting its behavior at the conditions encountered in the combustor of a SCRAMJET engine was developed. The effects of hydrogen addition on hydrocarbon ignition and flame stabilization as a means for reduction of lengthy ignition delays and reaction times were studied. The ranges of applicability of chemical kinetic models of hydrogen-air combustors were also investigated. The CHARNAL computer code was applied to the turbulent reaction rate modeling.

  4. Comprehensive model to determine the effects of temperature and species fluctuations on reaction rates in turbulent reaction flows

    SciTech Connect

    Magnotti, F.; Diskin, G.; Matulaitis, J.; Chinitz, W.

    1984-01-01

    The use of silane (SiH4) as an effective igniter and flame-stabilizing pilot fuel is well documented. A reliable chemical kinetic mechanism for predicting its behavior at the conditions encountered in the combustor of a SCRAMJET engine was developed. The effects of hydrogen addition on hydrocarbon ignition and flame stabilization as a means for reduction of lengthy ignition delays and reaction times were studied. The ranges of applicability of chemical kinetic models of hydrogen-air combustors were also investigated. The CHARNAL computer code was applied to the turbulent reaction rate modeling.

  5. Coupling extended magnetohydrodynamic fluid codes with radiofrequency ray tracing codes for fusion modeling

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Held, Eric D.

    2015-09-01

    Neoclassical tearing modes are macroscopic (L ∼ 1 m) instabilities in magnetic fusion experiments; if unchecked, these modes degrade plasma performance and may catastrophically destroy plasma confinement by inducing a disruption. Fortunately, the use of properly tuned and directed radiofrequency waves (λ ∼ 1 mm) can eliminate these modes. Numerical modeling of this difficult multiscale problem requires the integration of separate mathematical models for each length and time scale (Jenkins and Kruger, 2012 [21]); the extended MHD model captures macroscopic plasma evolution while the RF model tracks the flow and deposition of injected RF power through the evolving plasma profiles. The scale separation enables use of the eikonal (ray-tracing) approximation to model the RF wave propagation. In this work we demonstrate a technique, based on methods of computational geometry, for mapping the ensuing RF data (associated with discrete ray trajectories) onto the finite-element/pseudospectral grid that is used to model the extended MHD physics. In the new representation, the RF data can then be used to construct source terms in the equations of the extended MHD model, enabling quantitative modeling of RF-induced tearing mode stabilization. Though our specific implementation uses the NIMROD extended MHD (Sovinec et al., 2004 [22]) and GENRAY RF (Smirnov et al., 1994 [23]) codes, the approach presented can be applied more generally to any code coupling requiring the mapping of ray tracing data onto Eulerian grids.
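    The core coupling step described above is a point-location-and-deposit operation: each discrete ray point carries some deposited RF power, and that power must be accumulated into the cell of the fluid grid that contains it. A minimal sketch on a uniform Cartesian grid (the actual codes use finite-element point location; names and the grid layout are illustrative):

```python
def deposit_ray(power_points, nx, ny, lx, ly):
    """Accumulate power deposited by ray points (x, y, dP) onto a uniform
    nx-by-ny grid covering [0,lx] x [0,ly]. A stand-in for the point-location
    step on the finite-element/pseudospectral grid described in the abstract."""
    grid = [[0.0] * nx for _ in range(ny)]
    for x, y, dp in power_points:
        i = min(int(nx * x / lx), nx - 1)  # column index of containing cell
        j = min(int(ny * y / ly), ny - 1)  # row index of containing cell
        grid[j][i] += dp
    return grid

# Three ray points: two land in the lower-left cell, one in the upper-right.
ray = [(0.1, 0.1, 1.0), (0.1, 0.1, 0.5), (0.9, 0.9, 2.0)]
g = deposit_ray(ray, nx=2, ny=2, lx=1.0, ly=1.0)
```

    The gridded power can then feed source terms in the fluid equations, which is the role the mapped RF data plays in the extended MHD model.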

  6. Heavy Ion Reaction Modeling for Hadrontherapy Applications

    SciTech Connect

    Cerutti, F.; Ferrari, A.; Enghardt, W.; Gadioli, E.; Mairani, A.; Parodi, K.; Sommerer, F.

    2007-10-26

    A comprehensive and reliable description of nucleus-nucleus interactions represents a crucial need in different interdisciplinary fields. In particular, hadrontherapy monitoring by means of in-beam positron emission tomography (PET) requires, in addition to measurement, the capability of calculating the activity of β+-decaying nuclei produced in the irradiated tissue. For this purpose, in view of treatment monitoring at the Heidelberg Ion Therapy (HIT) facility, the transport and interaction Monte Carlo code FLUKA is a promising candidate. It is provided with the description of heavy ion reactions at intermediate and low energies by two specific event generators. In-beam PET experiments performed at GSI for a few beam-target combinations have been simulated and first comparisons between the measured and calculated β+-activity are available.

  7. Model-building codes for membrane proteins.

    SciTech Connect

    Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S.; Slepoy, Alexander; Sale, Kenneth L.; Young, Malin M.; Faulon, Jean-Loup Michel; Gray, Genetha Anne

    2005-01-01

    We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
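    The distance-constraint part of such a penalty function can be sketched as a sum of squared violations with a tolerance window, a common form in constraint-based structure refinement. This is an illustrative sketch, not the paper's actual penalty function, and the flat-bottom tolerance is an assumption.

```python
import math

def distance(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def constraint_penalty(coords, constraints, tol=1.0):
    """Sum of squared violations of sparse distance constraints.
    coords: {atom_id: (x, y, z)}; constraints: [(id1, id2, target_dist)].
    Distances within +/- tol of the target incur no penalty (flat bottom)."""
    penalty = 0.0
    for i, j, target in constraints:
        d = distance(coords[i], coords[j])
        violation = max(0.0, abs(d - target) - tol)
        penalty += violation ** 2
    return penalty

coords = {"A": (0.0, 0.0, 0.0), "B": (5.0, 0.0, 0.0)}
ok = constraint_penalty(coords, [("A", "B", 5.5)])   # within tolerance: no cost
bad = constraint_penalty(coords, [("A", "B", 9.0)])  # violated by 3 units
```

    Candidate helical bundles that minimize such a penalty (combined with statistical terms) are the refined structures the abstract describes.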

  8. Multiview coding mode decision with hybrid optimal stopping model.

    PubMed

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and the time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.
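    The basic tradeoff can be illustrated with an early-terminating mode search: candidate modes are tested in order of predicted probability, and the search stops as soon as a rate-distortion cost is "good enough". This is a generic illustration of optimal-stopping-style mode decision, not the paper's hybrid model; mode names and costs are invented.

```python
def decide_mode(costs, order, threshold):
    """Early-terminating mode decision: test modes in order of predicted
    probability and stop when the R-D cost falls below the threshold.
    Returns (chosen_mode, number_of_modes_evaluated)."""
    best_mode, best_cost = None, float("inf")
    tested = 0
    for mode in order:
        tested += 1
        if costs[mode] < best_cost:
            best_mode, best_cost = mode, costs[mode]
        if best_cost <= threshold:  # stopping rule: accept and quit early
            break
    return best_mode, tested

# Hypothetical R-D costs; the encoder skips INTRA entirely because INTER
# already satisfies the stopping threshold.
costs = {"SKIP": 12.0, "INTER": 8.0, "INTRA": 30.0}
mode, evaluated = decide_mode(costs, ["SKIP", "INTER", "INTRA"], threshold=10.0)
```

    Raising the threshold saves more time at the cost of occasionally missing the truly best mode, which is exactly the performance/time tradeoff that optimal stopping theory formalizes.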

  9. Towards many-body based nuclear reaction modelling

    NASA Astrophysics Data System (ADS)

    Hilaire, Stéphane; Goriely, Stéphane

    2016-06-01

    The increasing need for cross sections far from the valley of stability poses a challenge for nuclear reaction models. So far, predictions of cross sections have relied on more or less phenomenological approaches, depending on parameters adjusted to available experimental data or deduced from systematic expressions. While such predictions are expected to be reliable for nuclei not too far from the experimentally known regions, it is clearly preferable to use more fundamental approaches, based on sound physical principles, when dealing with very exotic nuclei. Thanks to the high computer power available today, all the ingredients required to model a nuclear reaction can now be (and have been) microscopically (or semi-microscopically) determined starting from the information provided by a nucleon-nucleon effective interaction. This concerns nuclear masses, optical model potential, nuclear level densities, photon strength functions, as well as fission barriers. All these nuclear model ingredients, traditionally given by phenomenological expressions, now have a microscopic counterpart implemented in the TALYS nuclear reaction code. We are thus now able to perform fully microscopic cross section calculations. The quality of these ingredients and the impact of using them instead of the usually adopted phenomenological parameters will be discussed. Perspectives for the coming years will be drawn on the improvements one can expect.

  10. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  11. Theory and modeling of stereoselective organic reactions.

    PubMed

    Houk, K N; Paddon-Row, M N; Rondan, N G; Wu, Y D; Brown, F K; Spellmeyer, D C; Metz, J T; Li, Y; Loncharich, R J

    1986-03-07

    Theoretical investigations of the transition structures of additions and cycloadditions reveal details about the geometries of bond-forming processes that are not directly accessible by experiment. The conformational analysis of transition states has been developed from theoretical generalizations about the preferred angle of attack by reagents on multiple bonds and predictions of conformations with respect to partially formed bonds. Qualitative rules for the prediction of the stereochemistries of organic reactions have been devised, and semi-empirical computational models have also been developed to predict the stereoselectivities of reactions of large organic molecules, such as nucleophilic additions to carbonyls, electrophilic hydroborations and cycloadditions, and intramolecular radical additions and cycloadditions.

  12. Benchmarking of numerical codes against analytical solutions for multidimensional multicomponent diffusive transport coupled with precipitation-dissolution reactions and porosity changes

    NASA Astrophysics Data System (ADS)

    Hayek, M.; Kosakowski, G.; Jakob, A.; Churakov, S.

    2012-04-01

    Numerical computer codes dealing with precipitation-dissolution reactions and porosity changes in multidimensional reactive transport problems are important tools in geoscience. Recent typical applications are related to CO2 sequestration, shallow and deep geothermal energy, remediation of contaminated sites or the safe underground storage of chemotoxic and radioactive waste. Although the agreement between codes using the same models and similar numerical algorithms is satisfactory, it is known that the numerical methods used in solving the transport equation, as well as different coupling schemes between transport and chemistry, may lead to systematic discrepancies. Moreover, due to their inability to describe subgrid pore space changes correctly, the numerical approaches predict discretization-dependent values of porosity changes and clogging times. In this context, analytical solutions become an essential tool to verify numerical simulations. We present a benchmark study where we compare a two-dimensional analytical solution for diffusive transport of two solutes coupled with a precipitation-dissolution reaction causing porosity changes with numerical solutions obtained with the COMSOL Multiphysics code and with the reactive transport code OpenGeoSys-GEMS. The analytical solution describes the spatio-temporal evolution of solutes and solid concentrations and porosity. We show that both numerical codes reproduce the analytical solution very well, although distinct differences in accuracy can be traced back to specific numerical implementations.
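    The coupled process being benchmarked can be illustrated with a deliberately minimal 1-D explicit finite-difference sketch: one solute diffuses, supersaturation above a solubility limit precipitates out, and precipitation reduces the local porosity, which in turn scales the effective diffusivity. This is an illustrative toy, not the two-solute analytical benchmark of the abstract; all parameters are invented.

```python
def diffuse_precipitate(c, phi, d0, dx, dt, steps, csat, molar_vol):
    """1-D explicit finite-difference sketch of diffusion coupled to
    precipitation and porosity feedback (illustrative only).
    c: concentrations, phi: porosities, effective diffusivity = d0*phi."""
    n = len(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            deff = d0 * phi[i]  # clogging pores slow the diffusive transport
            new[i] = c[i] + dt * deff * (c[i-1] - 2*c[i] + c[i+1]) / dx**2
        for i in range(n):
            excess = max(0.0, new[i] - csat)   # supersaturation precipitates
            new[i] -= excess
            phi[i] = max(0.0, phi[i] - molar_vol * excess)  # porosity loss
        c = new
    return c, phi

# High concentration at the left boundary diffuses in; the boundary cell
# precipitates down to the solubility limit and loses porosity.
c, phi = diffuse_precipitate([2.0] + [0.0] * 9, [0.5] * 10, d0=1.0, dx=1.0,
                             dt=0.2, steps=50, csat=1.0, molar_vol=0.1)
```

    The discretization-dependence of clogging mentioned in the abstract shows up in exactly this kind of scheme: the porosity update acts on whole cells, so refining dx changes where and when pores close.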

  13. Cost effectiveness of the 1995 model energy code in Massachusetts

    SciTech Connect

    Lucas, R.G.

    1996-02-01

    This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1995 Model Energy Code (MEC) building thermal-envelope requirements for single-family houses and multifamily housing units in Massachusetts. The goal was to compare the cost effectiveness of the 1995 MEC to the energy conservation requirements of the Massachusetts State Building Code, based on a comparison of the costs and benefits associated with complying with each. This comparison was performed for three cities representing three geographical regions of Massachusetts: Boston, Worcester, and Pittsfield. The analysis was done for two different scenarios: a "move-up" home buyer purchasing a single-family house and a "first-time" financially limited home buyer purchasing a multifamily condominium unit. Natural gas, oil, and electric resistance heating were examined. The Massachusetts state code has much more stringent requirements if electric resistance heating is used rather than other heating fuels and/or equipment types. The MEC requirements do not vary by fuel type. For single-family homes, the 1995 MEC has requirements that are more energy-efficient than the non-electric-resistance requirements of the current state code. For multifamily housing, the 1995 MEC has requirements that are approximately as energy-efficient as the non-electric-resistance requirements of the current state code. The 1995 MEC is generally not more stringent than the electric-resistance requirements of the state code; in fact, for multifamily buildings the 1995 MEC is much less stringent.

  14. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high-reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though the API's strict interprocess communication policy offers implicit benefits for partial order reduction.

  15. Multisynaptic activity in a pyramidal neuron model and neural code.

    PubMed

    Ventriglia, Francesco; Di Maio, Vito

    2006-01-01

    The highly irregular firing of mammalian cortical pyramidal neurons is one of the most striking observations of brain activity. This observation bears strongly on the discussion of the neural code, i.e. how the brain codes information transmitted along the different cortical stages. In fact, it seems to favor one of the two main hypotheses about this issue, the rate code; but supporters of the contrasting hypothesis, the temporal code, consider this evidence inconclusive. We discuss here a leaky integrate-and-fire model of a hippocampal pyramidal neuron, intended to be biologically sound, to investigate the genesis of the irregular pyramidal firing and to give useful information about the coding problem. To this aim, the complete set of excitatory and inhibitory synapses impinging on such a neuron has been taken into account. The firing activity of the neuron model has been studied by computer simulation, both in basic conditions and allowing brief periods of over-stimulation in specific regions of its synaptic constellation. Our results show neuronal firing conditions similar to those observed in experimental investigations of pyramidal cortical neurons. In particular, the coefficient of variation (CV) computed from the inter-spike intervals (ISIs) in our simulations for basic conditions is close to unity, as is that computed from experimental data. Our simulations also show different firing behaviors for different frequencies of stimulation.
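    The CV-of-ISIs measurement can be reproduced with a bare-bones leaky integrate-and-fire neuron driven by balanced excitatory and inhibitory Poisson input. All parameters below are illustrative assumptions (the paper models a full synaptic constellation); the point is only the mechanism: when threshold crossings are fluctuation-driven, the ISI distribution is broad and the CV approaches unity.

```python
import math, random

def lif_cv(n_steps=200000, dt=0.1, tau=20.0, v_th=1.0, v_reset=0.0,
           rate_e=1.0, rate_i=0.8, w_e=0.1, w_i=0.1, seed=1):
    """Leaky integrate-and-fire neuron (time in ms) with balanced excitatory
    and inhibitory Poisson input. Returns the coefficient of variation of
    the inter-spike intervals. Parameters are illustrative only."""
    random.seed(seed)
    v, last_spike, isis = 0.0, None, []
    for step in range(n_steps):
        v += dt * (-v / tau)                          # membrane leak
        if random.random() < rate_e * dt: v += w_e    # excitatory event
        if random.random() < rate_i * dt: v -= w_i    # inhibitory event
        if v >= v_th:                                 # spike and reset
            t = step * dt
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike, v = t, v_reset
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean

cv = lif_cv()
```

    Because the mean drive alone keeps the membrane well below threshold, spikes are triggered by input fluctuations, and the resulting CV is of order one rather than the near-zero CV of a regularly driven integrator.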

  16. Strontium Adsorption and Desorption Reactions in Model Drinking Water Distribution Systems

    DTIC Science & Technology

    2014-02-04

    11-04-2014; Journal Article. Strontium (Sr2+) adsorption to and desorption from iron corrosion products were examined in two model drinking water distribution systems (DWDS) … used to control Sr2+ desorption. Keywords: calcium carbonate; drinking water distribution system; α-FeOOH; iron; strontium; XANES. Unclassified.

  17. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
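    The sensitivity coefficients described above measure how a computed concentration responds to a rate-coefficient parameter. LSENS integrates sensitivity equations alongside the kinetics; as a minimal illustration (not LSENS's method), a toy first-order reaction A → B has a closed form against which a finite-difference sensitivity d[A]/dk can be checked:

```python
import math

def concentration_A(a0, k, t):
    """First-order reaction A -> B: [A](t) = A0 * exp(-k t)."""
    return a0 * math.exp(-k * t)

def sensitivity_fd(a0, k, t, rel=1e-6):
    """Central finite-difference estimate of the sensitivity coefficient
    d[A]/dk (illustrative check; LSENS solves sensitivity ODEs directly)."""
    dk = rel * k
    return (concentration_A(a0, k + dk, t)
            - concentration_A(a0, k - dk, t)) / (2 * dk)

a0, k, t = 1.0, 2.0, 0.5
s_fd = sensitivity_fd(a0, k, t)
s_exact = -t * a0 * math.exp(-k * t)   # analytic d[A]/dk for this system
```

    The negative sign has a direct reading: increasing the rate coefficient depletes A faster, so [A] at any fixed time decreases.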

  18. Global Microscopic Models for Nuclear Reaction Calculations

    SciTech Connect

    Goriely, S.

    2005-05-24

    Considerable effort has been devoted in recent decades to measuring reaction cross sections. Despite such effort, many nuclear applications still require the use of theoretical predictions to estimate experimentally unknown cross sections. Most of the nuclear ingredients in the calculations of reaction cross sections need to be extrapolated in an energy and/or mass domain out of reach of laboratory experiments. In addition, some applications often involve a large number of unstable nuclei, so that only global approaches can be used. For these reasons, when the nuclear ingredients to the reaction models cannot be determined from experimental data, it is highly recommended to consider preferentially microscopic or semi-microscopic global predictions based on sound and reliable nuclear models which, in turn, can compete with more phenomenological highly-parameterized models in the reproduction of experimental data. The latest developments made in deriving such microscopic models for practical applications are reviewed. They mainly concern nuclear structure properties (masses, deformations, radii, etc.), level densities at the equilibrium deformation, γ-ray strength, as well as fission barriers and level densities at the fission saddle points.

  19. A bio-inspired sensor coupled with a bio-bar code and hybridization chain reaction for Hg(2+) assay.

    PubMed

    Xu, Huifeng; Zhu, Xi; Ye, Hongzhi; Yu, Lishuang; Chen, Guonan; Chi, Yuwu; Liu, Xianxiang

    2015-10-18

    In this article, a bio-inspired DNA sensor is developed that couples a bio-bar code with the hybridization chain reaction. This bio-inspired sensor has a high sensitivity toward Hg(2+) and has been used to assay Hg(2+) in extracts of Bauhinia championi with satisfactory results.

  20. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.926b Section 200.926b Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND...

  1. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.925c Section 200.925c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND...

  2. 49 CFR 41.120 - Acceptable model codes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Transportation Office of the Secretary of Transportation SEISMIC SAFETY § 41.120 Acceptable model codes. (a) This section describes the standards that must be used to meet the seismic design and construction requirements... seismic safety substantially equivalent to that provided by use of the 1988 National Earthquake...

  3. 49 CFR 41.120 - Acceptable model codes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Transportation Office of the Secretary of Transportation SEISMIC SAFETY § 41.120 Acceptable model codes. (a) This section describes the standards that must be used to meet the seismic design and construction requirements... seismic safety substantially equivalent to that provided by use of the 1988 National Earthquake...

  4. 49 CFR 41.120 - Acceptable model codes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Transportation Office of the Secretary of Transportation SEISMIC SAFETY § 41.120 Acceptable model codes. (a) This section describes the standards that must be used to meet the seismic design and construction requirements... seismic safety substantially equivalent to that provided by use of the 1988 National Earthquake...

  5. Modeling of the EAST ICRF antenna with ICANT Code

    SciTech Connect

    Qin Chengming; Zhao Yanping; Colas, L.; Heuraux, S.

    2007-09-28

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.
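    The transmission-line side of such a comparison rests on the standard lossless-line input-impedance formula, whose imaginary part gives the reactive contribution at the feed. A minimal sketch (generic TL theory, not ICANT's calculation; impedance values are illustrative):

```python
import math

def input_impedance(z0, zl, beta_l):
    """Input impedance of a lossless transmission line of characteristic
    impedance z0 and electrical length beta_l, terminated in load zl:
    Zin = z0 * (zl + j*z0*tan(beta_l)) / (z0 + j*zl*tan(beta_l))."""
    t = math.tan(beta_l)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

# A quarter-wave section (beta_l = pi/2) transforms the load to z0**2 / zl,
# while a matched load (zl = z0) is invariant for any line length.
zin = input_impedance(50.0, 100.0, math.pi / 2)
```

    With the feed current known, Im(Zin) yields the reactive power at the antenna input, the quantity the abstract compares against the ICANT result.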

  6. Dual Cauchy rate-distortion model for video coding

    NASA Astrophysics Data System (ADS)

    Zeng, Huanqiang; Chen, Jing; Cai, Canhui

    2014-07-01

    A dual Cauchy rate-distortion model is proposed for video coding. In our approach, the coefficient distribution of the integer transform is first studied. Then, based on the observation that the rate-distortion model of the luminance and that of the chrominance can be well expressed by separate Cauchy functions, a dual Cauchy rate-distortion model is presented. Furthermore, simplified rate-distortion formulas are deduced to reduce the computational complexity of the proposed model without losing accuracy. Experimental results show that the proposed model is better able to approximate the actual rate-distortion curve for various sequences with different motion activities.
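    Cauchy-density rate analyses commonly lead to a power-law rate model of the form R(Q) = a * Q^(-alpha), whose parameters can be fitted in log space. The sketch below is an illustrative assumption about that fitting step, not the paper's dual luma/chroma model; the synthetic data are invented.

```python
import math

def fit_power_law(qs, rates):
    """Least-squares fit of R(Q) = a * Q**(-alpha) in log space
    (log R = log a - alpha * log Q), returning (a, alpha)."""
    xs = [math.log(q) for q in qs]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = -slope
    a = math.exp(my + alpha * mx)
    return a, alpha

# Synthetic rates generated from R = 3 * Q**(-1.2): the fit recovers both
# parameters, confirming the log-linear procedure.
qs = [4, 8, 16, 32]
rates = [3 * q ** -1.2 for q in qs]
a, alpha = fit_power_law(qs, rates)
```

    Fitting separate (a, alpha) pairs for luminance and chrominance coefficients is the "dual" aspect the abstract describes.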

  7. Development of a fan model for the CONTAIN code

    SciTech Connect

    Pevey, R.E.

    1987-01-08

    A fan model has been added to the CONTAIN code with minimal disruption of the standard CONTAIN calculation sequence. The user is required to supply a simple pressure vs. flow rate curve for each fan in the model configuration. Inclusion of the fan model required modification of two CONTAIN subroutines, IFLOW and EXEQNX. The two modified routines and the resulting executable module are located on the LANL mass storage system as /560007/iflow, /560007/exeqnx, and /560007/cont01, respectively. The model has been initially validated using a very simple sample problem and is ready for a more complete workout using the SRP reactor models from the RSRD probabilistic risk analysis.
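    A user-supplied pressure-vs-flow curve of the kind described is typically evaluated by table interpolation. The sketch below is an illustrative stand-in for that lookup (not CONTAIN's Fortran implementation); the curve points are invented.

```python
def fan_pressure(flow, curve):
    """Linearly interpolate a user-supplied fan curve at a given flow rate.
    curve: list of (flow, delta_p) points sorted by flow; values outside
    the table are clamped to the end points."""
    if flow <= curve[0][0]:
        return curve[0][1]
    if flow >= curve[-1][0]:
        return curve[-1][1]
    for (q0, p0), (q1, p1) in zip(curve, curve[1:]):
        if q0 <= flow <= q1:
            frac = (flow - q0) / (q1 - q0)
            return p0 + frac * (p1 - p0)

# Illustrative fan curve: (flow in m3/s, pressure rise in Pa).
curve = [(0.0, 500.0), (5.0, 400.0), (10.0, 0.0)]
dp = fan_pressure(7.5, curve)
```

    Intersecting this fan curve with the flow network's resistance curve yields the operating point, which is how such a model plugs into a cell-to-cell flow solver.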

  8. Model representation in the PANCOR wall interference assessment code

    NASA Technical Reports Server (NTRS)

    Al-Saadi, Jassim A.

    1991-01-01

    An investigation into the aircraft model description requirements of a wall interference assessment and correction code known as PANCOR was conducted. The accuracy necessary in specifying the various elements of the model description was defined. It was found that the specified lift coefficient is the most important model parameter in the wind tunnel simulation. An accurate specification of the model volume was also found to be important. Also developed was a partially automated technique for generating the wing lift distributions that are required as input to PANCOR. An existing three-dimensional transonic small-disturbance code was modified to provide the necessary information. A group of auxiliary computer programs and procedures was developed to help generate the required input for PANCOR.

  9. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite-volume code for solving two-dimensional, compressible turbulent flows is described. Second-order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero- and two-equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  10. Enhancements to the SSME transfer function modeling code

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.

    1995-01-01

    This report details the results of a one-year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements that extend the functionality of the transfer function modeling codes are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction to ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID), including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the earlier report (and is repeated here for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files; the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. 
In the third approach, the time data is low pass filtered prior to the modeling process in an
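    The core ERA step referenced in Section 2.2 can be sketched briefly: stack the Markov (impulse-response) parameters into a Hankel matrix, factor it by SVD, and read off a state-space realization. A minimal single-input, single-output sketch; the Hankel dimensions and ordering here are assumptions, not the SSME code's:

```python
import numpy as np

# Minimal ERA: realize (A, B, C) of a chosen order from scalar Markov
# parameters h[0], h[1], ... via the shifted-Hankel construction.

def era(markov, order, rows=5, cols=5):
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :order], Vt[:order, :]
    Sr = np.diag(np.sqrt(s[:order]))
    P, Q = Ur @ Sr, Sr @ Vr              # observability/controllability factors
    A = np.linalg.pinv(P) @ H1 @ np.linalg.pinv(Q)
    B, C = Q[:, :1], P[:1, :]            # first input column, first output row
    return A, B, C
```

    Fed the impulse response of a first-order system, the sketch recovers its pole; with noisy spectral data, the singular values of the Hankel matrix also suggest a suitable model order.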

  11. Reaction-contingency based bipartite Boolean modelling

    PubMed Central

    2013-01-01

    Background Intracellular signalling systems are highly complex, rendering mathematical modelling of large signalling networks infeasible or impractical. Boolean modelling provides one feasible approach to whole-network modelling, but at the cost of dequantification and decontextualisation of activation. That is, these models cannot distinguish between different downstream roles played by the same component activated in different contexts. Results Here, we address this with a bipartite Boolean modelling approach. Briefly, we use a state oriented approach with separate update rules based on reactions and contingencies. This approach retains contextual activation information and distinguishes distinct signals passing through a single component. Furthermore, we integrate this approach in the rxncon framework to support automatic model generation and iterative model definition and validation. We benchmark this method with the previously mapped MAP kinase network in yeast, showing that minor adjustments suffice to produce a functional network description. Conclusions Taken together, we (i) present a bipartite Boolean modelling approach that retains contextual activation information, (ii) provide software support for automatic model generation, visualisation and simulation, and (iii) demonstrate its use for iterative model generation and validation. PMID:23835289
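    The bipartite update scheme can be illustrated with a toy two-step cascade. This is a sketch of the idea only; the species and reaction names are invented, and the rxncon implementation differs:

```python
# Bipartite Boolean updating: reaction nodes fire when all of their
# contingency states hold; state nodes are recomputed from the reactions
# that produce them. All names here are illustrative.

def update(states, contingencies, producers):
    """One synchronous step: evaluate reactions, then the produced states."""
    fired = {r: all(states[c] for c in req) for r, req in contingencies.items()}
    new = dict(states)
    for state, rxns in producers.items():
        new[state] = any(fired[r] for r in rxns)
    return new
```

    In a ligand, receptor, kinase toy cascade, one update binds the receptor and a second phosphorylates the kinase, so each activation stays traceable to its producing reaction rather than to a bare "active" flag.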

  12. Chemical TOPAZ: Modifications to the heat transfer code TOPAZ: The addition of chemical reaction kinetics and chemical mixtures

    SciTech Connect

    Nichols, A.L. III.

    1990-06-07

    This report describes the modifications that have been made to the heat flow code TOPAZ to allow the inclusion of thermally controlled chemical kinetics. The report is broken into five parts. The first part is an introduction to the general assumptions and theoretical underpinnings that were used to develop the model. The second section describes the changes that have been implemented in the code. The third section is the user's manual for the code's input. The fourth section is a compilation of hints, common errors, and things to be aware of while getting started. The fifth section gives a sample problem using the new code. This manual addendum is written with the presumption that most readers are not fluent in chemical concepts. Therefore, in this section we describe the requirements that must be met before chemistry can occur and how we have modeled the chemistry in the code.
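    The coupling of heat flow and kinetics that such modifications introduce can be sketched with a single cell and a first-order Arrhenius reaction. Every number below is made up for illustration and is not a TOPAZ input:

```python
import math

# One explicit step of thermally controlled first-order kinetics: the
# Arrhenius rate depends on temperature, and the exothermic reaction in
# turn feeds heat back into the cell. All parameters are illustrative.

def react(T, conc, dt, A=1e10, Ea=1.2e5, dH=-5e4, cp=1e3, R=8.314):
    k = A * math.exp(-Ea / (R * T))        # Arrhenius rate constant
    dc = min(conc, k * conc * dt)          # first-order consumption
    T_new = T - dH * dc / cp               # dH < 0 (exothermic) raises T
    return T_new, conc - dc
```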

  13. Using cryptology models for protecting PHP source code

    NASA Astrophysics Data System (ADS)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying, and modification is a big issue today. Existing source-code-level solutions mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode opcode are more secure, but they are commercial and require a closed-source proprietary extension of the PHP interpreter. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-buying encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from the use of standard cryptology models to analyze the strengths and weaknesses of the existing solutions, when script protection is viewed as a secure communication channel in cryptology.

  14. Discovering binary codes for documents by learning deep generative models.

    PubMed

    Hinton, Geoffrey; Salakhutdinov, Ruslan

    2011-01-01

    We describe a deep generative model in which the lowest layer represents the word-count vector of a document and the top layer represents a learned binary code for that document. The top two layers of the generative model form an undirected associative memory and the remaining layers form a belief net with directed, top-down connections. We present efficient learning and inference procedures for this type of generative model and show that it allows more accurate and much faster retrieval than latent semantic analysis. By using our method as a filter for a much slower method called TF-IDF we achieve higher accuracy than TF-IDF alone and save several orders of magnitude in retrieval time. By using short binary codes as addresses, we can perform retrieval on very large document sets in a time that is independent of the size of the document set using only one word of memory to describe each document.
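    The code-as-address retrieval step can be sketched independently of the deep model that learns the codes. A minimal sketch with hand-picked 4-bit codes:

```python
# Retrieval with short binary codes as memory addresses: look up the
# query code and every address within a small Hamming radius, in time
# independent of collection size. The codes here are hand-picked, not
# learned by the paper's generative model.

def index(docs_with_codes):
    table = {}
    for doc, code in docs_with_codes:
        table.setdefault(code, []).append(doc)
    return table

def query(table, code, nbits, radius=1):
    """Probe the exact address plus all addresses within radius-1 bit flips."""
    hits = list(table.get(code, []))
    if radius >= 1:
        for i in range(nbits):
            hits += table.get(code ^ (1 << i), [])
    return hits
```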

  15. No-Core Shell Model and Reactions

    SciTech Connect

    Navratil, P; Ormand, W E; Caurier, E; Bertulani, C

    2005-04-29

    There has been significant progress in ab initio approaches to the structure of light nuclei. Starting from realistic two- and three-nucleon interactions, the ab initio no-core shell model (NCSM) can predict low-lying levels in p-shell nuclei. It is a challenging task to extend ab initio methods to describe nuclear reactions. In this contribution, we present a brief overview of the NCSM with examples of recent applications as well as the first steps taken toward nuclear reaction applications. In particular, we discuss cross section calculations of p+{sup 6}Li and {sup 6}He+p scattering as well as a calculation of the astrophysically important {sup 7}Be(p, {gamma}){sup 8}B S-factor.

  16. Theory and modeling of stereoselective organic reactions

    SciTech Connect

    Houk, K.N.; Paddon-Row, M.N.; Rondan, N.G.; Wu, Y.D.; Brown, F.K.; Spellmeyer, D.C.; Metz, J.T.; Li, Y.; Loncharich, R.J.

    1986-03-07

    Theoretical investigations of the transition structures of additions and cycloadditions reveal details about the geometries of bond-forming processes that are not directly accessible by experiment. The conformational analysis of transition states has been developed from theoretical generalizations about the preferred angle of attack by reagents on multiple bonds and predictions of conformations with respect to partially formed bonds. Qualitative rules for the prediction of the stereochemistries of organic reactions have been devised, and semi-empirical computational models have also been developed to predict the stereoselectivities of reactions of large organic molecules, such as nucleophilic additions to carbonyls, electrophilic hydroborations and cycloadditions, and intramolecular radical additions and cycloadditions. 52 references, 7 figures.

  17. No-Core Shell Model and Reactions

    SciTech Connect

    Navratil, Petr; Ormand, W. Erich; Caurier, Etienne; Bertulani, Carlos

    2005-10-14

    There has been significant progress in ab initio approaches to the structure of light nuclei. Starting from realistic two- and three-nucleon interactions, the ab initio no-core shell model (NCSM) can predict low-lying levels in p-shell nuclei. It is a challenging task to extend ab initio methods to describe nuclear reactions. In this contribution, we present a brief overview of the NCSM with examples of recent applications as well as the first steps taken toward nuclear reaction applications. In particular, we discuss cross section calculations of p+6Li and 6He+p scattering as well as a calculation of the astrophysically important 7Be(p,{gamma})8B S-factor.

  18. Cracking the Dual Code: Toward a Unitary Model of Phoneme Identification

    PubMed Central

    Foss, Donald J.; Gernsbacher, Morton Ann

    2014-01-01

    The results of five experiments on the nature of the speech code and on the role of sentence context in speech processing are reported. The first three studies test predictions from the dual code model of phoneme identification (Foss, D. J., & Blank, M. A. Cognitive Psychology, 1980, 12, 1–31). According to that model, subjects in a phoneme monitoring experiment respond to a prelexical code when engaged in a relatively easy task, and to a postlexical code when the task is difficult. The experiments controlled ease of processing either by giving subjects multiple targets for which to monitor or by preceding the target with a similar-sounding phoneme that draws false alarms. The predictions from the model were not sustained. Furthermore, evidence for a paradoxical nonword superiority effect was observed. In Experiment IV reaction times (RTs) to all possible /d/-initial CVCs were gathered. RTs were unaffected by the target item's status as a word or nonword, but they were affected by the internal phonetic structure of the target-bearing item. Vowel duration correlated highly (0.627) with RTs. Experiment V examined previous work purporting to demonstrate that semantic predictability affects how the speech code is processed, in particular that semantic predictability leads to responses based upon a postlexical code. That study found “predictability” effects when words occurred in isolation; further, it found that vowel duration and other phonetic factors can account parsimoniously for the existing results. These factors also account for the apparent nonword superiority effects observed earlier. Implications of the present work for theoretical models that stress the interaction between semantic context and speech processing are discussed, as are implications for use of the phoneme monitoring task. PMID:25520528

  19. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time-dependent, Favre-averaged, finite-volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher-order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulence models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  20. Improvement of Basic Fluid Dynamics Models for the COMPASS Code

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is a new next-generation safety analysis code, based on the moving particle semi-implicit (MPS) method, that provides local information for various key phenomena in core disruptive accidents of sodium-cooled fast reactors. In this study, improvements to the basic fluid dynamics models for the COMPASS code were carried out and verified with fundamental verification calculations. A fully implicit pressure solution algorithm was introduced to improve the numerical stability of MPS simulations. With a newly developed free surface model, the numerical difficulty caused by poor pressure solutions is overcome by involving free surface particles in the pressure Poisson equation. In addition, the applicability of the MPS method to interactions between fluid and multiple solid bodies was investigated in comparison with dam-break experiments with solid balls. It was found that the PISO algorithm and the free surface model make simulations with the passively moving solid model numerically stable. The characteristic behavior of the solid balls was successfully reproduced by the present numerical simulations.

  1. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum-scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum-scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum-scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum-scale phenomena.

  2. Plutonium explosive dispersal modeling using the MACCS2 computer code

    SciTech Connect

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of Project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
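    The Gaussian family of dispersion models used by MACCS2 can be illustrated with its simplest member, a ground-level, ground-release, straight-line plume. The sigma parameterization below is a toy power law, not the code's tabulated stability-class curves:

```python
import math

# chi/Q (s/m^3): concentration per unit release rate for a ground-level
# Gaussian plume at downwind distance x (m), crosswind offset y (m), and
# wind speed u (m/s). The sigma coefficients are illustrative only.

def chi_over_q(x, y, u, sy_coef=0.08, sz_coef=0.06):
    sy, sz = sy_coef * x, sz_coef * x
    return math.exp(-0.5 * (y / sy) ** 2) / (math.pi * sy * sz * u)
```

    The sketch reproduces the expected qualitative behavior: concentration falls with downwind distance and with crosswind offset from the plume centerline.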

  3. New Mechanical Model for the Transmutation Fuel Performance Code

    SciTech Connect

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.
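    The three fuel/clad interaction situations enumerated above reduce to a simple branch on gap and contact pressure; a sketch of the classification only, not the model's mechanics, with units and the threshold value left to the caller:

```python
# Classify the fuel/clad mechanical regime described in the abstract.
# This is only the three-way branch, not the TRU fuel performance model.

def contact_state(gap, contact_pressure, threshold):
    if gap > 0.0:
        return "open"            # (1) open gap: no contact
    if contact_pressure < threshold:
        return "slipping"        # (2) contact, axial slippage occurs
    return "locked"              # (3) contact, axial slippage prevented
```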

  4. Exact energy conservation in hybrid meshless model/code

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.

    2008-11-01

    Energy conservation is an important issue for both PIC and hybrid models. In hybrid codes the ions are treated kinetically and the electrons are described as a massless charge-neutralizing fluid. Our recently developed Particle-In-Cloud-Of-Points (PICOP) approach [1], which uses an adaptive meshless technique to compute electromagnetic fields on a cloud of computational points, is applied to a hybrid model. An exact energy conservation numerical scheme, which describes the interaction between geometrical space, where the electromagnetic fields are computed, and particle/velocity space, is presented. Implemented in a new PICOP hybrid code, the algorithm has demonstrated accurate energy conservation in the numerical simulation of a two counter-streaming plasma beam instability. [1] S. A. Galkin, B. P. Cluggish, J. S. Kim, S. Yu. Medvedev ``Advansed PICOP Algorithm with Adaptive Meshless Field Solver'', Published in the IEEE PPPS/ICOP 2007 Conference proceedings, pp. 1445-1448, Albuquerque, New Mexico, June 17-22, 2007.

  5. The WARP Code: Modeling High Intensity Ion Beams

    SciTech Connect

    Grote, D P; Friedman, A; Vay, J L; Haber, I

    2004-12-09

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse 'slice' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.

  6. The WARP Code: Modeling High Intensity Ion Beams

    SciTech Connect

    Grote, David P.; Friedman, Alex; Vay, Jean-Luc; Haber, Irving

    2005-03-15

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse 'slice' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.

  7. Photochemical reactions of various model protocell systems

    NASA Technical Reports Server (NTRS)

    Folsome, C. E.

    1986-01-01

    Models for the emergence of cellular life on the primitive Earth, and for the physical environments of that era, have been studied that embody these assumptions: (1) pregenetic cellular forms were phase-bounded systems primarily photosynthetic in nature, and (2) the early Earth environment was anoxic (lacking appreciable amounts of free oxygen). It was found that organic structures can also be formed under anoxic conditions (N2, CO3=, H2O) by protracted long-wavelength UV radiation. Apparently these structures form initially as organic layers upon CaCO3 crystalloids. The question remains as to whether the UV photosynthetic ability of such phase-bounded structures is a curiosity, or a general property of phase-bounded systems which is of direct interest to the emergence of cellular life. The question of the requirement for, and salient features of, a phase boundary for UV photosynthetic abilities was addressed by searching for similar general physical properties which might be manifest in a variety of other simple protocell-like structures. Since it has been shown that laboratory protocell models can effect the UV photosynthesis of low molecular weight compounds, this reaction is being used as an assay to survey other types of structures for similar UV photosynthetic reactions. The various kinds of structures surveyed are: (1) proteinoids; (2) liposomes; (3) reconstituted cell membrane spheroids; (4) coacervates; and (5) model protocells formed under anoxic conditions.

  8. Modeling of transient dust events in fusion edge plasmas with DUSTT-UEDGE code

    NASA Astrophysics Data System (ADS)

    Smirnov, R. D.; Krasheninnikov, S. I.; Pigarov, A. Yu.; Rognlien, T. D.

    2016-10-01

    It is well known that dust can be produced in fusion devices by various processes involving structural damage of plasma-exposed materials. Recent computational and experimental studies have demonstrated that dust production and the associated plasma contamination can present serious challenges to achieving sustained fusion reactions in future fusion devices, such as ITER. To analyze the impact that dust can have on the performance of fusion plasmas, the authors use modeling of coupled dust and plasma transport with the DUSTT-UEDGE code. In the past, only steady-state computational studies, presuming a continuous source of dust influx, were performed due to the iterative nature of the DUSTT-UEDGE code coupling. However, experimental observations demonstrate that intermittent injection of large quantities of dust, often associated with transient plasma events, may severely impact fusion plasma conditions and even lead to discharge termination. In this work we report on progress in coupling the DUSTT-UEDGE codes in a time-dependent regime, which allows modeling of transient dust-plasma transport processes. The methodology and details of the time-dependent code coupling, as well as examples of simulations of transient dust-plasma transport phenomena, will be presented. These include time-dependent modeling of the impact of short outbursts of different quantities of tungsten dust in the ITER divertor on the edge plasma parameters. The plasma response to outbursts with various durations, locations, and ejected dust sizes will be analyzed.

  9. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as bystander signaling. These effects are ignored or impossible to treat in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of beam lines, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of particle traversals for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
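    One of the biophysical quantities mentioned, the Poisson distribution of particle traversals for a specified cellular area, is simple to state. A sketch, where the fluence and area values in the example are illustrative, not GERM outputs:

```python
import math

# Poisson probability of exactly k particle traversals of a sensitive
# area A (um^2) at fluence F (particles/um^2); the mean count is F*A.

def p_traversals(k, fluence, area):
    mean = fluence * area
    return math.exp(-mean) * mean ** k / math.factorial(k)
```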

  10. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the characteristics of HS images in their spectral and shape domains differ from those of traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data, where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply the HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102
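    The "predict, then code the residual" idea is independent of the specific predictor. A sketch with a plain least-squares predictor standing in for the paper's Gaussian mixture model:

```python
# Predict the current spectral band from the previous one and keep only
# the residual for coding. A single linear least-squares fit stands in
# for the Gaussian mixture modelling used in the paper.

def fit_predictor(prev_band, cur_band):
    n = len(prev_band)
    mx, my = sum(prev_band) / n, sum(cur_band) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(prev_band, cur_band))
    var = sum((x - mx) ** 2 for x in prev_band)
    a = cov / var
    return a, my - a * mx

def residual(prev_band, cur_band):
    a, b = fit_predictor(prev_band, cur_band)
    return [y - (a * x + b) for x, y in zip(prev_band, cur_band)]
```

    When adjacent bands are strongly correlated, the residual carries far less energy than the band itself, which is what makes the video coder's job easier.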

  11. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
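
    The decoder-side re-synthesis step can be illustrated with a minimal sketch: white noise is spectrally shaped so that its power spectral density matches a target, which is the core of a parametric grain model. The target PSD here is hypothetical, not the paper's fitted model.

```python
import numpy as np

def synthesize_grain(shape, target_psd, rng):
    """Generate film-grain-like noise whose power spectral density
    matches a target, by spectrally shaping white noise. A sketch of
    decoder-side grain re-synthesis; `target_psd` is a hypothetical
    measured magnitude-squared spectrum, not the paper's model."""
    white = rng.standard_normal(shape)
    shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(target_psd)).real
    # Normalise to unit variance; grain strength would be controlled
    # separately by a transmitted gain parameter.
    return shaped / shaped.std()

rng = np.random.default_rng(1)
# Hypothetical low-pass PSD: grain energy concentrated at low frequencies.
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
psd = 1.0 / (1.0 + 50.0 * (fx**2 + fy**2))

grain = synthesize_grain((64, 64), psd, rng)
print(grain.shape, round(float(grain.std()), 3))
```

    In the full scheme described above, the synthesized grain would be added back to the decoded (denoised) video as a post-processing step.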

  12. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    SciTech Connect

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.

  13. Implementing Subduction Models in the New Mantle Convection Code Aspect

    NASA Astrophysics Data System (ADS)

    Arredondo, Katrina; Billen, Magali

    2014-05-01

    The geodynamic community has utilized various numerical modeling codes as scientific questions arise and computer processing power increases. Citcom, a widely used mantle convection code, has limitations and vulnerabilities, such as temperature overshoots of hundreds or thousands of kelvins (Kommu et al., 2013). Aspect, intended as a more powerful cousin, is in active development with additions such as Adaptive Mesh Refinement (AMR) and improved solvers (Kronbichler et al., 2012). The validity and ease of use of Aspect are important to its survival and to its role as a possible upgrade and replacement for Citcom. Development of publishable models illustrates the capacity of Aspect. We present work on the addition of non-linear solvers and stress-dependent rheology to Aspect. With a solid foundational knowledge of C++, these features were readily added to Aspect and tested against CitcomS. Time-dependent subduction models akin to those in Billen and Hirth (2007) are built and compared in CitcomS and Aspect. Comparison with CitcomS assists in Aspect development and showcases its flexibility, usability and capabilities. References: Billen, M. I., and G. Hirth, 2007. Rheologic controls on slab dynamics. Geochemistry, Geophysics, Geosystems. Kommu, R., E. Heien, L. H. Kellogg, W. Bangerth, T. Heister, E. Studley, 2013. The Overshoot Phenomenon in Geodynamics Codes. American Geophysical Union Fall Meeting. Kronbichler, M., T. Heister, W. Bangerth, 2012. High Accuracy Mantle Convection Simulation through Modern Numerical Methods. Geophys. J. Int.

  14. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.

  15. Leveraging modeling approaches: reaction networks and rules.

    PubMed

    Blinov, Michael L; Moraru, Ion I

    2012-01-01

    We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.
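
    The combinatorial explosion motivating rule-based methods can be shown with a toy calculation. For a hypothetical scaffold protein with n independent binding sites, explicit-species modeling must enumerate every bound/unbound configuration (2^n species), whereas a rule-based description needs only one binding rule per site.

```python
from itertools import product

def explicit_species(n_sites):
    """Enumerate every bound/unbound state of a scaffold with n
    independent binding sites -- the explicit-species approach."""
    return list(product((0, 1), repeat=n_sites))

def rule_count(n_sites):
    """A rule-based description needs only one binding rule per site,
    regardless of the state of the other sites."""
    return n_sites

# Species count grows exponentially; rule count grows linearly.
for n in (2, 4, 8):
    print(n, rule_count(n), len(explicit_species(n)))
```

    With multivalent binding partners and modification states the explicit count grows far faster still, which is exactly the regime where reaction rules and species patterns pay off.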

  16. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  17. Modelling of LOCA Tests with the BISON Fuel Performance Code

    SciTech Connect

    Williamson, Richard L; Pastore, Giovanni; Novascone, Stephen Rhead; Spencer, Benjamin Whiting; Hales, Jason Dean

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  18. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Because the thermal radiation of coded masks is considerably more severe than in visible imaging systems, burying the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of a mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: coded mask imaging with ideal focused lenses, and imperfection imaging with practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to characterize the variation of the PSF across the field of view. The influence of mask cell size on the diffraction pattern is also analyzed. Numerical results show that mask patterns for direct imaging systems should have more random structure, while more periodic structure is needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, a desired diffraction pattern can be achieved.
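
    The two-component model above reduces, once thermal radiation is ignored, to a convolution. A minimal numerical sketch, with a hypothetical point diffraction pattern and a Gaussian stand-in for the practical lens PSF:

```python
import numpy as np

def system_psf(mask_diffraction, lens_psf):
    """System PSF as the convolution of the mask's diffraction pattern
    with the practical-lens PSF, per the paper's two-component model
    (thermal radiation of the mask ignored). FFT-based circular
    convolution is adequate for this periodic sketch."""
    return np.fft.ifft2(np.fft.fft2(mask_diffraction) *
                        np.fft.fft2(lens_psf)).real

n = 16
# Hypothetical diffraction pattern: an ideal point response.
diff = np.zeros((n, n))
diff[0, 0] = 1.0
# Hypothetical lens PSF: a normalised Gaussian blur.
y, x = np.mgrid[:n, :n]
gauss = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 4.0)
gauss /= gauss.sum()

psf = system_psf(diff, gauss)
# Convolving with a delta reproduces the lens PSF; energy is preserved.
print(np.isclose(psf.sum(), 1.0))
```

    Replacing `diff` with the actual Fresnel diffraction pattern of a candidate mask would let one compare patterns under the evaluation criteria the paper proposes.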

  19. Geochemical controls on shale groundwaters: Results of reaction path modeling

    SciTech Connect

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250 °C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs.

  20. A Mutation Model from First Principles of the Genetic Code.

    PubMed

    Thorvaldsen, Steinar

    2016-01-01

    The paper presents a neutral Codons Probability Mutations (CPM) model of molecular evolution and genetic decay of an organism. The CPM model uses a Markov process with a 20-dimensional state space of probability distributions over amino acids. The transition matrix of the Markov process includes the mutation rate and those single point mutations compatible with the genetic code. This is an alternative to the standard Point Accepted Mutation (PAM) and BLOcks of amino acid SUbstitution Matrix (BLOSUM). Genetic decay is quantified as a similarity between the amino acid distribution of proteins from a (group of) species on one hand, and the equilibrium distribution of the Markov chain on the other. Amino acid data for eukaryotic, bacterial, and archaeal families are used to illustrate how both the CPM and PAM models predict their genetic decay towards the equilibrium value of 1. A family of bacteria is studied in more detail. It is found that warm-environment organisms on average have a higher degree of genetic decay compared to those species that live in cold environments. The paper addresses a new codon-based approach to quantify genetic decay due to single point mutations compatible with the genetic code. The present work may be seen as a first approach to use codon-based Markov models to study how genetic entropy increases with time in an effectively neutral biological regime. Various extensions of the model are also discussed.
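
    The decay-towards-equilibrium idea can be sketched with a small Markov chain. Instead of the paper's 20-dimensional amino acid space, this uses a hypothetical 3-state chain: the stationary distribution is the eigenvector of the transposed transition matrix for eigenvalue 1, and iterating the chain from any start drifts towards it.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic transition matrix:
    the left eigenvector for eigenvalue 1, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Hypothetical 3-state stand-in for the 20-amino-acid chain; rows hold
# single-step mutation probabilities compatible with a toy "code".
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

pi = stationary(P)
# Iterating the chain from any start drifts towards pi -- the
# "genetic decay towards equilibrium" the paper quantifies.
p = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    p = p @ P
print(np.allclose(p, pi, atol=1e-6))
```

    In the CPM model the analogous comparison is between an observed amino acid distribution and this equilibrium vector, with similarity approaching 1 as decay proceeds.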

  1. Development and Validation of Reaction Wheel Disturbance Models: Empirical Model

    NASA Astrophysics Data System (ADS)

    Masterson, R. A.; Miller, D. W.; Grogan, R. L.

    2002-01-01

    Accurate disturbance models are necessary to predict the effects of vibrations on the performance of precision space-based telescopes, such as the Space Interferometry Mission (SIM). There are many possible disturbance sources on such spacecraft, but mechanical jitter from the reaction wheel assembly (RWA) is anticipated to be the largest. A method has been developed and implemented in the form of a MATLAB toolbox to extract parameters for an empirical disturbance model from RWA micro-vibration data. The disturbance model is based on one that was used to predict the vibration behaviour of the Hubble Space Telescope (HST) wheels and assumes that RWA disturbances consist of discrete harmonics of the wheel speed with amplitudes proportional to the wheel speed squared. The MATLAB toolbox allows the extension of this empirical disturbance model for application to any reaction wheel given steady state vibration data. The toolbox functions are useful for analyzing RWA vibration data, and the model provides a good estimate of the disturbances over most wheel speeds. However, it is shown that the disturbances are under-predicted by a model of this form over some wheel speed ranges. The poor correlation is due to the fact that the empirical model does not account for disturbance amplifications caused by interactions between the harmonics and the structural modes of the wheel. Experimental data from an ITHACO Space Systems E-type reaction wheel are used to illustrate the model development and validation process.
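
    The empirical model form described above - discrete harmonics of the wheel speed with amplitudes proportional to wheel speed squared - can be sketched directly. The harmonic numbers, coefficients and phases below are illustrative placeholders, not fitted values from the ITHACO wheel data.

```python
import numpy as np

def rwa_disturbance(t, omega_hz, harmonics, coeffs, phases):
    """Empirical reaction wheel disturbance: a sum of harmonics of the
    wheel speed, each with amplitude proportional to the wheel speed
    squared (the HST-heritage model form described in the paper)."""
    f = np.zeros_like(t)
    for h, c, p in zip(harmonics, coeffs, phases):
        f = f + c * omega_hz**2 * np.sin(2 * np.pi * h * omega_hz * t + p)
    return f

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
harmonics = [1.0, 2.0, 5.6]   # fundamental, 2nd harmonic, a bearing tone
coeffs = [1e-3, 4e-4, 2e-4]   # N/Hz^2, hypothetical
phases = [0.0, 0.7, 1.3]

f10 = rwa_disturbance(t, 10.0, harmonics, coeffs, phases)
f20 = rwa_disturbance(t, 20.0, harmonics, coeffs, phases)

# Doubling the wheel speed quadruples the disturbance level (RMS).
rms = lambda x: float(np.sqrt(np.mean(x**2)))
print(rms(f20) / rms(f10))
```

    The under-prediction noted in the abstract arises because this form omits amplification when a harmonic sweeps through a structural mode of the wheel; capturing that requires coupling the harmonics to a wheel structural model.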

  2. A model code for the radiative theta pinch

    SciTech Connect

    Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.

    2014-07-15

    A model for the theta pinch is presented with three modelled phases of radial inward shock phase, reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics and radiation and radiation-coupled dynamics in the pinch phase. A code is written incorporating correction for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated into the model, the coupling coefficient f between the primary loop current and the induced plasma current and the mass swept-up factor f_m. These values are taken from experiments carried out in the Chulalongkorn theta pinch.

  3. New Direction in Hydrogeochemical Transport Modeling: Incorporating Multiple Kinetic and Equilibrium Reaction Pathways

    SciTech Connect

    Steefel, C.I.

    2000-02-02

    At least two distinct kinds of hydrogeochemical models have evolved historically for use in analyzing contaminant transport, but each has important limitations. One kind, focusing on organic contaminants, treats biodegradation reactions as parts of relatively simple kinetic reaction networks with no or limited coupling to aqueous and surface complexation and mineral dissolution/precipitation reactions. A second kind, evolving out of the speciation and reaction path codes, is capable of handling a comprehensive suite of multicomponent complexation (aqueous and surface) and mineral precipitation and dissolution reactions, but has not been able to treat reaction networks characterized by partial redox disequilibrium and multiple kinetic pathways. More recently, various investigators have begun to consider biodegradation reactions in the context of comprehensive equilibrium and kinetic reaction networks (e.g. Hunter et al. 1998, Mayer 1999). Here we explore two examples of multiple equilibrium and kinetic reaction pathways using the reactive transport code GIMRT98 (Steefel, in prep.): (1) a computational example involving the generation of acid mine drainage due to oxidation of pyrite, and (2) a computational/field example where the rates of chlorinated VOC degradation are linked to the rates of major redox processes occurring in organic-rich wetland sediments overlying a contaminated aerobic aquifer.
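
    The coupling of kinetic pathways to equilibrium that the abstract describes is commonly expressed through a transition-state-theory style rate law, rate = k(1 - Q/K), where the kinetic rate is throttled by the distance from equilibrium. A minimal sketch with hypothetical constants (not GIMRT98's actual rate laws), approximating the activity quotient Q by a single dissolved concentration:

```python
import numpy as np

def dissolve(c0, k, K, dt, steps):
    """Kinetic mineral dissolution with an equilibrium driving force,
    rate = k * (1 - Q/K): the TST-form rate law widely used in
    reactive transport codes. Constants are hypothetical; Q is
    approximated here by the dissolved concentration c."""
    c = c0
    history = [c]
    for _ in range(steps):
        rate = k * (1.0 - c / K)   # slows to zero as c approaches K
        c = c + rate * dt          # explicit Euler step
        history.append(c)
    return np.array(history)

c = dissolve(c0=0.0, k=1e-5, K=2e-3, dt=1.0, steps=2000)
# The kinetic pathway relaxes monotonically towards the equilibrium
# solubility K, linking the two modeling traditions in one rate law.
print(np.isclose(c[-1], 2e-3, rtol=1e-3))
```

    In a full multicomponent code this rate feeds one kinetic pathway among many, while aqueous and surface complexation are solved as equilibrium constraints at each step.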

  4. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    SciTech Connect

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-07-01

    A method of accounting for fluid-to-fluid shear between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)

  5. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
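
    Three of MMA's four default discrimination criteria, and the conversion of criterion values into posterior model probabilities (Akaike-weight form), can be sketched as follows. The least-squares forms of AIC/AICc/BIC are standard; the residuals and parameter counts below are hypothetical, and KIC is omitted since it additionally requires the Fisher information matrix.

```python
import numpy as np

def aic(ssr, n, k):
    """Akaike Information Criterion, least-squares form."""
    return n * np.log(ssr / n) + 2 * k

def aicc(ssr, n, k):
    """Second-order-bias-corrected AIC for small samples."""
    return aic(ssr, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(ssr, n, k):
    """Bayesian Information Criterion."""
    return n * np.log(ssr / n) + k * np.log(n)

def model_weights(criteria):
    """Posterior model probabilities from criterion values
    (Akaike-weight form used in multi-model averaging)."""
    d = np.array(criteria) - np.min(criteria)
    w = np.exp(-0.5 * d)
    return w / w.sum()

# Three hypothetical calibrated models: (sum of squared residuals,
# number of parameters) from nonlinear regression, n = 40 observations.
n = 40
models = [(12.0, 3), (9.5, 5), (9.4, 9)]
crit = [aicc(ssr, n, k) for ssr, k in models]
w = model_weights(crit)
# Model 2 wins: it balances fit against parsimony; model 3 fits
# marginally better but is penalised for its extra parameters.
print(int(np.argmax(w)))
```

    The resulting weights are exactly what MMA would use for model ranking and for model-averaged parameter estimates and predictions.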

  6. Physics models in the toroidal transport code PROCTR

    SciTech Connect

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  7. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. 
Many applications of MMA will

  8. Model Experiment of Thermal Runaway Reactions Using the Aluminum-Hydrochloric Acid Reaction

    ERIC Educational Resources Information Center

    Kitabayashi, Suguru; Nakano, Masayoshi; Nishikawa, Kazuyuki; Koga, Nobuyoshi

    2016-01-01

    A laboratory exercise for the education of students about thermal runaway reactions based on the reaction between aluminum and hydrochloric acid as a model reaction is proposed. In the introductory part of the exercise, the induction period and subsequent thermal runaway behavior are evaluated via a simple observation of hydrogen gas evolution and…

  9. A hydrodynamics-reaction kinetics coupled model for evaluating bioreactors derived from CFD simulation.

    PubMed

    Wang, Xu; Ding, Jie; Guo, Wan-Qian; Ren, Nan-Qi

    2010-12-01

    Investigating how a bioreactor functions is a necessary precursor for successful reactor design and operation. Traditional methods used to investigate the flow field cannot meet this challenge accurately and economically. A hydrodynamics model can address the flow field, but on its own it is often insufficient for understanding a bioreactor in depth. In this paper, a coupled hydrodynamics-reaction kinetics model was formulated from computational fluid dynamics (CFD) code to simulate a gas-liquid-solid three-phase biotreatment system for the first time. The hydrodynamics model is used to predict the flow field and the reaction kinetics model then portrays the reaction conversion process. The coupled model is verified and used to simulate the behavior of an expanded granular sludge bed (EGSB) reactor for biohydrogen production. The flow patterns were visualized and analyzed. The coupled model also demonstrates a qualitative relationship between hydrodynamics and biohydrogen production. The advantages and limitations of applying this coupled model are discussed.

  10. Multiphoton dissociation and thermal unimolecular reactions induced by infrared lasers. [REAMPA code

    SciTech Connect

    Dai, H.L.

    1981-04-01

    Multiphoton dissociation (MPD) of ethyl chloride was studied using a tunable 3.3 µm laser to excite CH stretches. The absorbed energy increases almost linearly with fluence, while for 10 µm excitation there is substantial saturation. Much higher dissociation yields were observed for 3.3 µm excitation than for 10 µm excitation, reflecting bottlenecking in the discrete region of 10 µm excitation. The resonant nature of the excitation allows the rate equations description for transitions in the quasicontinuum and continuum to be extended to the discrete levels. Absorption cross sections are estimated from ordinary IR spectra. A set of cross sections which is constant or slowly decreasing with increasing vibrational excitation gives good fits to both absorption and dissociation yield data. The rate equations model was also used to quantitatively calculate the pressure dependence of the MPD yield of SF₆ caused by vibrational self-quenching. Between 1000-3000 cm⁻¹ of energy is removed from SF₆ excited above approximately 60 kcal/mole by collision with a cold SF₆ molecule at the gas kinetic rate. Calculation showed the fluence dependence of dissociation varies strongly with the gas pressure. Infrared multiphoton excitation was applied to study thermal unimolecular reactions. With SiF₄ as the absorbing gas for the CO₂ laser pulse, transient high temperature pulses were generated in a gas mixture. IR fluorescence from the medium reflected the decay of the temperature. The activation energy and the preexponential factor of the reactant dissociation were obtained from a phenomenological model calculation. Results are presented in detail. (WHK)
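
    The rate-equations description of stepwise multiphoton excitation can be sketched as populations climbing a ladder of vibrational levels. This is a strong simplification of the abstract's model: up- and down-pumping rates are taken equal and constant (echoing the constant-cross-section fits mentioned), the level count and rates are illustrative, and dissociation from the top level is omitted.

```python
import numpy as np

def ladder_populations(n_levels, pump, t_steps, dt):
    """Rate-equation model of stepwise multiphoton excitation: each
    level exchanges population with its neighbours at a constant rate
    `pump` (a toy stand-in for cross section times photon flux).
    Explicit Euler integration; all parameters are illustrative."""
    p = np.zeros(n_levels)
    p[0] = 1.0                       # all molecules start cold
    for _ in range(t_steps):
        dp = np.zeros(n_levels)
        for i in range(n_levels - 1):
            up = pump * p[i]         # absorption, level i -> i+1
            down = pump * p[i + 1]   # stimulated emission, i+1 -> i
            dp[i] += down - up
            dp[i + 1] += up - down
        p = p + dp * dt
    return p

p = ladder_populations(n_levels=6, pump=1.0, t_steps=5000, dt=0.01)
# With equal up/down rates the ladder relaxes to a flat distribution
# over the levels, and total population is conserved.
print(round(float(p.sum()), 6))
```

    Adding fluence-dependent pumping, level-dependent cross sections, and a loss term from the top level recovers the kind of model the abstract fits to absorption and dissociation-yield data.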

  11. Acoustic Gravity Wave Chemistry Model for the RAYTRACE Code.

    DTIC Science & Technology

    2014-09-26

    DNA-TR-84-127, Mission Research Corp., Santa Barbara, CA (contract DNA001-80-C-0022). Subject terms: High Frequency Radio Propagation; Acoustic Gravity Waves.

  12. Computer codes for the evaluation of thermodynamic properties, transport properties, and equilibrium constants of an 11-species air model

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.

    1990-01-01

    The computer codes developed provide data to 30000 K for the thermodynamic and transport properties of individual species and reaction rates for the prominent reactions occurring in an 11-species nonequilibrium air model. These properties and the reaction-rate data are computed through the use of curve-fit relations which are functions of temperature (and number density for the equilibrium constant). The curve fits were made using the most accurate data believed available. A detailed review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1232.
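The curve-fit approach this abstract describes, with properties expressed as functions of temperature (and number density for the equilibrium constant), can be sketched as below. The polynomial forms are typical of such property codes, but the coefficients are hypothetical placeholders, not the values from NASA RP 1232.

```python
import math

def cp_over_R(T, a):
    """Dimensionless specific heat Cp/R from a 5-term polynomial fit in T (K)."""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

def Keq(T, b):
    """Equilibrium constant from a curve fit in a reduced temperature variable.
    (In the actual code, number density enters as an additional variable.)"""
    z = 10000.0 / T
    return math.exp(b[0] + b[1]*math.log(z) + b[2]*z + b[3]*z**2 + b[4]*z**3)

a = [3.5, -1.0e-4, 3.0e-7, -1.0e-10, 1.0e-14]   # illustrative fit coefficients
b = [1.0, 0.5, -2.0, 0.05, -0.001]
print(cp_over_R(300.0, a), Keq(6000.0, b))
```

In practice such codes store one coefficient set per species and per temperature sub-range, switching sets at the fitted range boundaries.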

  13. The Overlap Model: A Model of Letter Position Coding

    ERIC Educational Resources Information Center

    Gomez, Pablo; Ratcliff, Roger; Perea, Manuel

    2008-01-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that…

  14. Building process knowledge using inline spectroscopy, reaction calorimetry and reaction modeling--the integrated approach.

    PubMed

    Tummala, Srinivas; Shabaker, John W; Leung, Simon S W

    2005-11-01

    For over two decades, reaction engineering tools and techniques such as reaction calorimetry, inline spectroscopy and, to a more limited extent, reaction modeling, have been employed within the pharmaceutical industry to ensure safe and robust scale-up of organic reactions. Although each of these techniques has had a significant impact on the landscape of process development, an effective integrated approach is now being realized that combines calorimetry and spectroscopy with predictive modeling tools. This paper reviews some recent advances in the use of these reaction engineering tools in process development within the pharmaceutical industry and discusses their potential impact on the effective application of the integrated approach.

  15. Model for reaction kinetics in pyrolysis of wood

    SciTech Connect

    Ahuja, P.; Singh, P.C.; Upadhyay, S.N.; Kumar, S.

    1996-12-31

    A reaction model for the pyrolysis of small and large particles of wood is developed. The chemical reactions that take place when biomass is pyrolyzed are the devolatilization (primary) reactions and those due to vapour-solid interactions (secondary). In the case of small particles, when the volatiles are immediately removed by the purge gas, only primary reactions occur and the reaction model is described by weight-loss and char-forming reactions. Heterogeneous secondary reactions occur in the case of large particles due to the interaction between the volatiles and the hot nascent primary char. A chain-reaction mechanism of secondary char formation is proposed. The model takes into consideration the volatiles retention time, the cracking and repolymerization reactions of the vapours with the decomposing solid, and autocatalysis. 7 refs., 3 figs., 2 tabs.

  16. Data Evaluation Acquired Talys 1.0 Code to Produce 111In from Various Accelerator-Based Reactions

    NASA Astrophysics Data System (ADS)

    Alipoor, Zahra; Gholamzadeh, Zohreh; Sadeghi, Mahdi; Seyyedi, Solaleh; Aref, Morteza

    The physical decay parameters of Indium-111, a β-emitter radionuclide, show potential for radiodiagnostic and radiotherapeutic purposes. Medical investigators have shown that 111In is an important radionuclide for locating and imaging certain tumors and for visualization of the lymphatic system, and thousands of labeling reactions have been suggested. The TALYS 1.0 code was used here to calculate excitation functions of the 112/114-118Sn+p, 110Cd+3He, 109Ag+3He, 111-114Cd+p, 110/111Cd+d, and 109Ag+α reactions for producing 111In with low- and medium-energy accelerators. Calculations were performed up to 200 MeV. Appropriate target thicknesses were assumed based on energy-loss calculations with the SRIM code. Theoretical integral yields for all of these reactions were calculated. The TALYS 1.0 code predicts that production of a few curies of 111In is feasible using a target of isotopically highly enriched 112Cd and a proton energy between 12 and 25 MeV, with a production rate of 248.97 MBq·μA-1·h-1. Minimal impurities would be produced during proton irradiation of an enriched 111Cd target, yielding a production rate for 111In of 67.52 MBq·μA-1·h-1.
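The integral-yield calculation described here combines an excitation function (from TALYS) with a stopping power (from SRIM): the thick-target yield is proportional to the integral of σ(E)/(dE/dx) over the energy window spanned by the target. The sketch below uses a hypothetical Gaussian-shaped excitation function and a crude stopping-power law as stand-ins, not TALYS or SRIM output.

```python
import math

def integral_yield(sigma, stopping, E_in, E_out, n=1000):
    """Trapezoidal integral of sigma(E)/S(E) from E_out up to E_in (MeV).
    Proportional to the thick-target production rate per unit beam current."""
    dE = (E_in - E_out) / n
    total = 0.0
    for i in range(n + 1):
        E = E_out + i * dE
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * sigma(E) / stopping(E) * dE
    return total

# Hypothetical excitation function peaking near 18 MeV (mb) and a crudely
# decreasing stopping power (MeV per unit depth):
sigma = lambda E: 800.0 * math.exp(-((E - 18.0) / 4.0) ** 2)
stopping = lambda E: 30.0 / math.sqrt(E)
print(integral_yield(sigma, stopping, E_in=25.0, E_out=12.0))
```

Choosing the entrance/exit energies to bracket the peak of the excitation function, as the 12-25 MeV window does in the abstract, maximizes the yield per unit beam current.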

  17. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...

  18. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...

  19. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...

  20. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...

  1. Development and application of a numerical model of kinetic and equilibrium microbiological and geochemical reactions (BIOKEMOD)

    NASA Astrophysics Data System (ADS)

    Salvage, Karen M.; Yeh, Gour-Tsyh

    1998-08-01

    This paper presents the conceptual and mathematical development of the numerical model titled BIOKEMOD, and verification simulations performed using the model. BIOKEMOD is a general computer model for simulation of geochemical and microbiological reactions in batch aqueous solutions. BIOKEMOD may be coupled with hydrologic transport codes for simulation of chemically and biologically reactive transport. The chemical systems simulated may include any mixture of kinetic and equilibrium reactions. The pH, pe, and ionic strength may be specified or simulated. Chemical processes included are aqueous complexation, adsorption, ion-exchange and precipitation/dissolution. Microbiological reactions address growth of biomass and degradation of chemicals by microbial metabolism of substrates, nutrients, and electron acceptors. Inhibition or facilitation of growth due to the presence of specific chemicals and a lag period for microbial acclimation to new substrates may be simulated if significant in the system of interest. Chemical reactions controlled by equilibrium are solved using the law of mass action relating the thermodynamic equilibrium constant to the activities of the products and reactants. Kinetic chemical reactions are solved using reaction rate equations based on collision theory. Microbiologically mediated reactions for substrate removal and biomass growth are assumed to follow Monod kinetics modified for the potentially limiting effects of substrate, nutrient, and electron acceptor availability. BIOKEMOD solves the ordinary differential and algebraic equations of mixed geochemical and biogeochemical reactions using the Newton-Raphson method with full matrix pivoting. Simulations may be either steady state or transient. Input to the program includes the stoichiometry and parameters describing the relevant chemical and microbiological reactions, initial conditions, and sources/sinks for each chemical species. Output includes the chemical and biomass concentrations.
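The modified Monod kinetics mentioned in this abstract, with multiplicative limitation by substrate, nutrient, and electron-acceptor availability, can be sketched in a few lines. All parameter values are illustrative, not BIOKEMOD inputs, and nutrient/acceptor consumption is omitted for brevity.

```python
def step(B, S, N, A, dt, mu_max=0.5, Y=0.4, Ks=1.0, Kn=0.1, Ka=0.2):
    """One explicit time step of Monod-type biomass growth.
    B: biomass, S: substrate, N: nutrient, A: electron acceptor (conc. units);
    mu_max: max specific growth rate (1/h), Y: biomass yield per substrate."""
    mu = mu_max * (S/(Ks+S)) * (N/(Kn+N)) * (A/(Ka+A))  # limited growth rate
    dB = mu * B * dt               # biomass growth
    dS = -(mu / Y) * B * dt        # substrate consumption via the yield
    return B + dB, max(S + dS, 0.0), N, A

B, S, N, A = 0.1, 10.0, 5.0, 5.0
for _ in range(1000):              # 10 h at dt = 0.01 h
    B, S, N, A = step(B, S, N, A, dt=0.01)
print(B, S)
```

Biomass grows roughly exponentially until the substrate is exhausted, after which the Monod factor S/(Ks+S) drives the growth rate to zero.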

  2. Mixing models for the two-way-coupling of CFD codes and zero-dimensional multi-zone codes to model HCCI combustion

    SciTech Connect

    Barths, H.; Felsch, C.; Peters, N.

    2009-01-15

    The objective of this work is the development of a consistent mixing model for the two-way-coupling of a CFD code and a multi-zone code based on multiple zero-dimensional reactors. The two-way-coupling allows for a computationally efficient modeling of HCCI combustion. The physical domain in the CFD code is subdivided into multiple zones based on three phase variables (fuel mixture fraction, dilution, and total enthalpy). Those phase variables are sufficient for the description of the thermodynamic state of each zone, assuming that each zone is at the same pressure. Each zone in the CFD code is represented by a corresponding zone in the zero-dimensional code. The zero-dimensional code solves the chemistry for each zone, and the heat release is fed back into the CFD code. The difficulty in facing this kind of methodology is to keep the thermodynamic state of each zone consistent between the CFD code and the zero-dimensional code after the initialization of the zones in the multi-zone code has taken place. The thermodynamic state of each zone (and thereby the phase variables) will change in time due to mixing and source terms (e.g., vaporization of fuel, wall heat transfer). The focus of this work lies on a consistent description of the mixing between the zones in phase space in the zero-dimensional code, based on the solution of the CFD code. Two mixing models with different degrees of accuracy, complexity, and numerical effort are described. The most elaborate mixing model (and an appropriate treatment of the source terms) keeps the thermodynamic state of the zones in the CFD code and the zero-dimensional code identical. The models are applied to a test case of HCCI combustion in an engine. (author)

  3. Mixing models for the two-way-coupling of CFD codes and zero-dimensional multi-zone codes to model HCCI combustion

    SciTech Connect

    Barths, H.; Felsch, C.; Peters, N.

    2008-11-15

    The objective of this work is the development of a consistent mixing model for the two-way-coupling of a CFD code and a multi-zone code based on multiple zero-dimensional reactors. The two-way-coupling allows for a computationally efficient modeling of HCCI combustion. The physical domain in the CFD code is subdivided into multiple zones based on three phase variables (fuel mixture fraction, dilution, and total enthalpy). Those phase variables are sufficient for the description of the thermodynamic state of each zone, assuming that each zone is at the same pressure. Each zone in the CFD code is represented by a corresponding zone in the zero-dimensional code. The zero-dimensional code solves the chemistry for each zone, and the heat release is fed back into the CFD code. The difficulty in facing this kind of methodology is to keep the thermodynamic state of each zone consistent between the CFD code and the zero-dimensional code after the initialization of the zones in the multi-zone code has taken place. The thermodynamic state of each zone (and thereby the phase variables) will change in time due to mixing and source terms (e.g., vaporization of fuel, wall heat transfer). The focus of this work lies on a consistent description of the mixing between the zones in phase space in the zero-dimensional code, based on the solution of the CFD code. Two mixing models with different degrees of accuracy, complexity, and numerical effort are described. The most elaborate mixing model (and an appropriate treatment of the source terms) keeps the thermodynamic state of the zones in the CFD code and the zero-dimensional code identical. The models are applied to a test case of HCCI combustion in an engine. (author)

  4. A simple model of optimal population coding for sensory systems.

    PubMed

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  5. A general paradigm to model reaction-based biogeochemical processes in batch systems

    NASA Astrophysics Data System (ADS)

    Fang, Yilin; Yeh, Gour-Tsyh; Burgos, William D.

    2003-04-01

    This paper presents the development and illustration of a numerical model of reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions. The objective is to provide a general paradigm for modeling reactive chemicals in batch systems, with the expectation that it is applicable to reactive chemical transport problems. The unique aspects of the paradigm are to simultaneously (1) facilitate the segregation (isolation) of linearly independent kinetic reactions and thus enable the formulation and parameterization of individual rates reaction by reaction when linearly dependent kinetic reactions are absent, (2) enable the inclusion of virtually any type of equilibrium expressions and kinetic rates users want to specify, (3) reduce problem stiffness by eliminating all fast reactions from the set of ordinary differential equations governing the evolution of kinetic variables, (4) perform systematic operations to remove redundant fast reactions and irrelevant kinetic reactions, (5) systematically define chemical components and explicitly enforce mass conservation, (6) accomplish automation in decoupling fast reactions from slow reactions, and (7) increase the robustness of numerical integration of the governing equations with species switching schemes. None of the existing models to our knowledge has included these scopes simultaneously. This model (BIOGEOCHEM) is a general computer code to simulate biogeochemical processes in batch systems from a reaction-based mechanistic standpoint, and is designed to be easily coupled with transport models. To make the model applicable to a wide range of problems, programmed reaction types include aqueous complexation, adsorption-desorption, ion-exchange, oxidation-reduction, precipitation-dissolution, acid-base reactions, and microbial mediated reactions. In addition, user-specified reaction types can be programmed into the model. Any reaction can be treated as fast/equilibrium or slow

  6. Sodium spray and jet fire model development within the CONTAIN-LMR code

    SciTech Connect

    Scholtyssek, W.; Murata, K.K.

    1993-12-31

    An assessment was made of the sodium spray fire model implemented in the CONTAIN code. The original droplet burn model, which was based on the NACOM code, was improved in several aspects, especially concerning evaluation of the droplet burning rate, reaction chemistry and heat balance, spray geometry and droplet motion, and consistency with CONTAIN standards of gas property evaluation. An additional droplet burning model based on a proposal by Krolikowski was made available to include the effect of the chemical equilibrium conditions at the flame temperature. The models were validated against single-droplet burn experiments as well as spray and jet fire experiments. Reasonable agreement was found between the two burn models and experimental data. When the gas temperature in the burning compartment reaches high values, the Krolikowski model seems to be preferable. Critical parameters for spray fire evaluation were found to be the spray characterization, especially the droplet size, which largely determines the burning efficiency, and heat transfer conditions at the interface between the atmosphere and structures, which controls the thermal hydraulic behavior in the burn compartment.

  7. A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.

    PubMed

    Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L

    2016-11-05

    The primary goal of this research is to develop a three-term mesoscopic reaction rate model that consists of a hot-spot ignition term, a low-pressure slow-burning term, and a high-pressure fast-reaction term for shock initiation of multi-component Plastic Bonded Explosives (PBX). Specifically, based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" over the explosive components; new expressions for both the low-pressure slow-burning term and the high-pressure fast-reaction term are also obtained by establishing relationships between the reaction rate of the multi-component PBX explosive and those of its explosive components, based on the corresponding terms of a mesoscopic reaction rate model. Furthermore, for verification, the new reaction rate model is incorporated into the DYNA2D code to simulate numerically the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the numerical results for the pressure histories at different Lagrange locations in the explosive are found to be in good agreement with previous experimental data.
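A generic sketch of the three-term structure described here (hot-spot ignition plus low-pressure slow burning plus high-pressure fast reaction) is shown below, in the style of ignition-and-growth reaction-rate models. The functional forms, cutoffs, and constants are illustrative stand-ins, not the paper's calibrated DZK-based terms.

```python
def dlambda_dt(lam, P, mu):
    """Total reaction rate for reacted mass fraction lam (0..1).
    P: pressure (GPa); mu: compression (rho/rho0 - 1). Each term is active
    only over its own range of lam, as in ignition-and-growth models."""
    ign = 40.0 * (1.0 - lam)**(2/9) * max(mu - 0.02, 0.0)**4 if lam < 0.3 else 0.0
    slow = 100.0 * (1.0 - lam)**(2/3) * lam**(1/9) * P if lam < 0.5 else 0.0
    fast = 400.0 * (1.0 - lam)**(1/9) * lam**(2/3) * P**2 if lam > 0.1 else 0.0
    return ign + slow + fast

print(dlambda_dt(0.2, 5.0, 0.1))
```

In a hydrocode coupling such as the DYNA2D one described, this rate would be evaluated per cell per cycle, with P and mu supplied by the hydrodynamic solution.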

  8. Modeling Planet-Building Stellar Disks with Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Swearingen, Jeremy R.; Sitko, Michael L.; Whitney, Barbara; Grady, Carol A.; Wagner, Kevin Robert; Champney, Elizabeth H.; Johnson, Alexa N.; Warren, Chelsea C.; Russell, Ray W.; Hammel, Heidi B.; Lisse, Casey M.; Cure, Michel; Kraus, Stefan; Fukagawa, Misato; Calvet, Nuria; Espaillat, Catherine; Monnier, John D.; Millan-Gabet, Rafael; Wilner, David J.

    2015-01-01

    Understanding the nature of the many planetary systems found outside of our own solar system cannot be complete without knowledge of the beginnings of these systems. By detecting planets in very young systems and modeling the disks of material around the stars from which they form, we can gain a better understanding of planetary origin and evolution. The efforts presented here have been in modeling two pre-transitional disk systems using a radiative transfer code. With the first of these systems, V1247 Ori, a model has been achieved that fits the spectral energy distribution (SED) well and whose parameters are consistent with existing interferometry data (Kraus et al. 2013). The second of these two systems, SAO 206462, has presented a different set of challenges, but encouraging SED agreement between the model and known data gives hope that the model can produce images that can be used in future interferometry work. This work was supported by NASA ADAP grant NNX09AC73G, and the IR&D program at The Aerospace Corporation.

  9. Verification of thermal analysis codes for modeling solid rocket nozzles

    NASA Technical Reports Server (NTRS)

    Keyhani, M.

    1993-01-01

    One of the objectives of the Solid Propulsion Integrity Program (SPIP) at Marshall Space Flight Center (MSFC) is the development of thermal analysis codes capable of accurately predicting the temperature field, pore pressure field, and surface recession experienced by decomposing polymers which are used as thermal barriers in solid rocket nozzles. The objective of this study is to provide means for verification of thermal analysis codes developed for modeling of flow and heat transfer in solid rocket nozzles. In order to meet the stated objective, a test facility was designed and constructed for measurement of the transient temperature field in a sample composite subjected to a constant heat flux boundary condition. The heating was provided via a steel thin-foil with a thickness of 0.025 mm. The designed electrical circuit can provide a heating rate of 1800 W. The heater was sandwiched between two identical samples, thus ensuring equal power distribution between them. The samples were fitted with Type K thermocouples, and the exact locations of the thermocouples were determined via X-rays. The experiments were modeled via a one-dimensional code (UT1D) as a conduction and phase change heat transfer process. Since the pyrolysis gas flow was in the direction normal to the heat flow, the numerical model could not account for the convection cooling effect of the pyrolysis gas flow. Therefore, the predicted values in the decomposition zone are considered to be an upper estimate of the temperature. From the analysis of the experimental and numerical results the following are concluded: (1) The virgin and char specific heat data for FM 5055 as reported by SoRI cannot be used to obtain any reasonable agreement between the measured temperatures and the predictions. However, use of virgin and char specific heat data given in the Acurex report produced good agreement for most of the measured temperatures. (2) Constant heat flux heating process can produce a much higher

  10. CODE's new solar radiation pressure model for GNSS orbit determination

    NASA Astrophysics Data System (ADS)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and these could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system in recent years on the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the Sun-satellite direction occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
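The harmonic structure the abstract identifies can be sketched as an ECOM-style empirical SRP acceleration in the Sun-oriented (D, Y, B) frame: only even-order harmonics along the Sun-satellite direction D and only odd-order harmonics along B. The coefficients below (nominally m/s²) are illustrative, not estimated values from the CODE analysis.

```python
import math

def srp_accel(u, D0=-1.0e-7, D2=1.0e-9, D4=5.0e-10, Y0=1.0e-10,
              B1=1.0e-9, B3=2.0e-10):
    """Empirical SRP acceleration components in the (D, Y, B) frame.
    u: argument of latitude of the satellite relative to the Sun (rad)."""
    D = D0 + D2 * math.cos(2*u) + D4 * math.cos(4*u)   # even orders only
    B = B1 * math.cos(u) + B3 * math.cos(3*u)          # odd orders only
    return D, Y0, B

print(srp_accel(0.5))
```

A consequence of this parity structure is that D is unchanged and B changes sign when the satellite moves half an orbit (u → u + π), which constrains which empirical terms a future ECOM should estimate.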

  11. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    ERIC Educational Resources Information Center

    American Inst. of Architects, Washington, DC.

    A MODEL BUILDING CODE FOR FALLOUT SHELTERS WAS DRAWN UP FOR INCLUSION IN FOUR NATIONAL MODEL BUILDING CODES. DISCUSSION IS GIVEN OF FALLOUT SHELTERS WITH RESPECT TO--(1) NUCLEAR RADIATION, (2) NATIONAL POLICIES, AND (3) COMMUNITY PLANNING. FALLOUT SHELTER REQUIREMENTS FOR SHIELDING, SPACE, VENTILATION, CONSTRUCTION, AND SERVICES SUCH AS ELECTRICAL…

  12. A Generalized Kinetic Model for Heterogeneous Gas-Solid Reactions

    SciTech Connect

    Xu, Zhijie; Sun, Xin; Khaleel, Mohammad A.

    2012-08-15

    We present a generalized kinetic model for gas-solid heterogeneous reactions taking place at the interface between two phases. The model studies the reaction kinetics by taking into account the reactions at the interface, as well as the transport process within the product layer. The standard unreacted shrinking core model relies on the assumption of quasi-static diffusion that results in a steady-state concentration profile of gas reactant in the product layer. By relaxing this assumption and resolving the entire problem, general solutions can be obtained for reaction kinetics, including the reaction front velocity and the conversion (volume fraction of reacted solid). The unreacted shrinking core model is shown to be accurate and in agreement with the generalized model for slow reaction (or fast diffusion), low concentration of gas reactant, and small solid size. Otherwise, a generalized kinetic model should be used.
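The unreacted shrinking-core limits this abstract compares against have closed-form conversion-time relations for a spherical particle: reaction control gives t/τ = 1 - (1-X)^(1/3), and product-layer diffusion control gives t/τ = 1 - 3(1-X)^(2/3) + 2(1-X). The sketch below evaluates both; the characteristic times τ are illustrative, not values from the paper.

```python
def t_reaction(X, tau_r=100.0):
    """Time (s) to reach conversion X when interface reaction controls."""
    return tau_r * (1.0 - (1.0 - X)**(1.0/3.0))

def t_diffusion(X, tau_d=100.0):
    """Time (s) to reach conversion X when product-layer diffusion controls."""
    return tau_d * (1.0 - 3.0*(1.0 - X)**(2.0/3.0) + 2.0*(1.0 - X))

for X in (0.25, 0.5, 0.9, 1.0):
    print(X, t_reaction(X), t_diffusion(X))
```

The generalized model of the abstract reduces to these limits for slow reaction (or fast diffusion), low gas-reactant concentration, and small solid size; outside that regime the full transient concentration profile in the product layer must be resolved.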

  13. Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

    Volume 2 of a 3-volume technical memorandum contains detailed documentation of the GLAS fourth order general circulation model: the CYBER 205 scalar and vector codes of the model, lists of variables, and cross references. A variable name dictionary for the scalar code and the code listings are also provided.

  14. The Local Planner's Role Under the Proposed Model Land Development Code

    ERIC Educational Resources Information Center

    Bosselman, Fred P.

    1975-01-01

    The American Law Institute's Proposed Model Land Development Code would revise basic enabling legislation for local land development planning. The code would contain guidelines for local plans that would include both long-range and short-range elements. (Author)

  15. A simple reaction-rate model for turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.

    1975-01-01

    A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.
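A mixing-limited reaction rate of the kind this abstract proposes, proportional to the turbulence mixing rate and dependent on the reactant mass fractions, can be sketched in the generic eddy-breakup form below. This is a stand-in for the paper's closure, not its exact expression, and the constants are illustrative.

```python
def fuel_rate(rho, eps, k, Y_fuel, Y_ox, s, C=4.0):
    """Mean fuel consumption rate (mass / volume / time).
    rho: density; eps/k: turbulence mixing rate (1/s); Y_fuel, Y_ox: mean
    mass fractions; s: stoichiometric oxidizer-to-fuel mass ratio;
    C: illustrative model constant."""
    return C * rho * (eps / k) * min(Y_fuel, Y_ox / s)

print(fuel_rate(rho=1.2, eps=50.0, k=2.0, Y_fuel=0.05, Y_ox=0.2, s=3.5))
```

The min() makes the rate vanish wherever either reactant is absent, so the flame sits where fuel and oxidizer are simultaneously mixed by turbulence, which is the physical picture behind such models.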

  16. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Zamiri-Jafarian, Yeganeh

    Communications between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for multiple nano-devices communicating with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
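The threshold-detection idea in this abstract can be illustrated for the simplest case: a received signal modeled as a two-component Gaussian mixture (symbol 0 vs symbol 1). The error-minimizing threshold lies where the prior-weighted densities cross; the iterative search below (plain bisection, not the thesis's algorithm) and all parameters are illustrative.

```python
import math

def gauss(x, mu, var):
    """Gaussian probability density."""
    return math.exp(-(x - mu)**2 / (2.0*var)) / math.sqrt(2.0*math.pi*var)

def optimal_threshold(mu0, var0, mu1, var1, p0=0.5):
    """Bisect between the means for the root of p0*f0(t) - p1*f1(t),
    i.e. the crossing of the weighted mixture components."""
    p1 = 1.0 - p0
    lo, hi = mu0, mu1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if p0 * gauss(mid, mu0, var0) > p1 * gauss(mid, mu1, var1):
            lo = mid          # still in the "decide 0" region
        else:
            hi = mid
    return 0.5 * (lo + hi)

th = optimal_threshold(mu0=0.0, var0=1.0, mu1=4.0, var1=1.0)
print(th)   # midpoint of the means for equal variances and equal priors
```

With unequal priors or ISI-dependent variances the crossing shifts away from the midpoint, which is why an adaptable threshold (as in the MCDMA chip detector) pays off.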

  17. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
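The simplest instance of the reduced mechanisms this paper motivates is a one-step global fuel oxidation with an Arrhenius rate, integrated explicitly alongside its heat release. The rate constants and heat-release parameter below are illustrative placeholders, not a fitted mechanism from the paper.

```python
import math

def burn(Y_fuel, T, dt, n_steps, A=1.0e9, Ea=1.5e5, R=8.314, q=2.0e3):
    """Advance fuel mass fraction Y_fuel and temperature T (K).
    A: pre-exponential factor (1/s); Ea: activation energy (J/mol);
    q: temperature rise (K) per unit fuel mass fraction consumed."""
    for _ in range(n_steps):
        w = A * Y_fuel * math.exp(-Ea / (R * T))   # global reaction rate (1/s)
        dY = min(w * dt, Y_fuel)                   # never consume more than remains
        Y_fuel -= dY
        T += q * dY                                # heat release raises T
    return Y_fuel, T

Y, T = burn(Y_fuel=0.05, T=1200.0, dt=1e-5, n_steps=20000)
print(Y, T)
```

The positive feedback between temperature and the exponential rate reproduces thermal runaway, the qualitative heat-release behavior a reduced mechanism must capture cheaply inside a CFD code.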

  18. Mathematical description of complex chemical kinetics and application to CFD modeling codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.

  19. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  20. Polymerization as a Model Chain Reaction

    ERIC Educational Resources Information Center

    Morton, Maurice

    1973-01-01

    Describes the features of the free radical, anionic, and cationic mechanisms of chain addition polymerization. Indicates that the nature of chain reactions can be best taught through the study of macromolecules. (CC)

  1. Modelling Chemical Reasoning to Predict and Invent Reactions.

    PubMed

    Segler, Marwin H S; Waller, Mark P

    2016-11-11

    The ability to reason beyond established knowledge allows organic chemists to solve synthetic problems and invent novel transformations. Herein, we propose a model that mimics chemical reasoning, and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180 000 randomly selected binary reactions. The data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-)discovering novel transformations (even including transition metal-catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph and because each single reaction prediction is typically achieved in a sub-second time frame, the model can be used as a high-throughput generator of reaction hypotheses for reaction discovery.
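
    The paper formalises reaction prediction as finding missing links in a knowledge graph. A classic baseline for missing-link prediction, shown here on a toy "reacts-with" graph, is common-neighbour scoring; this is a hypothetical illustration of the task framing, not the authors' actual model or their 14.4-million-molecule graph.

```python
from itertools import combinations

def common_neighbor_scores(edges):
    """Score every non-adjacent node pair by the number of shared
    neighbours -- a standard baseline for missing-link prediction."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

# Toy graph: A-B, A-C, B-C, B-D, C-D; the unobserved pair A-D shares
# two neighbours, so it is ranked as the most plausible missing link.
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
scores = common_neighbor_scores(edges)
best = max(scores.items(), key=lambda kv: kv[1])
```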

  2. Detailed reduction of reaction mechanisms for flame modeling

    NASA Technical Reports Server (NTRS)

    Wang, Hai; Frenklach, Michael

    1991-01-01

    A method for the reduction of detailed chemical reaction mechanisms, introduced earlier for ignition systems, was extended to laminar premixed flames. The reduction is based on testing the reaction and reaction-enthalpy rates of the 'full' reaction mechanism using a zero-dimensional model with the flame temperature profile as a constraint. The technique is demonstrated with numerical tests performed on the mechanism of methane combustion.

  3. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 × 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system 'Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  4. Modeling Vortex Generators in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  5. ROBO: a model and a code for studying the interstellar medium

    SciTech Connect

    Grassi, T; Krstic, Predrag S; Merlin, E; Buonomo, U; Piovan, L; Chiosi, C

    2011-01-01

    We present robo, a model and its companion code for the study of the interstellar medium (ISM). The aim is to provide an accurate description of the physical evolution of the ISM and to set the ground for an ancillary tool to be inserted in NBody-Tree-SPH (NB-TSPH) simulations of large-scale structures in the cosmological context or of the formation and evolution of individual galaxies. The ISM model consists of gas and dust. The gas chemical composition is regulated by a network of reactions that includes a large number of species (hydrogen and deuterium-based molecules, helium, and metals). New reaction rates for the charge transfer in H+ and H2 collisions are presented. The dust contains the standard mixture of carbonaceous grains (graphite grains and PAHs) and silicates. In our model dust is formed and destroyed by several processes. The model accurately treats the cooling process, based on several physical mechanisms, and cooling functions recently reported in the literature. The model is applied to a wide range of the input parameters, and the results for important quantities describing the physical state of the gas and dust are presented. The results are organized in a database suited to artificial neural networks (ANNs). Once trained, the ANNs yield the same results obtained by ROBO with great accuracy. We plan to develop ANNs suitably tailored for applications to NB-TSPH simulations of cosmological structures and/or galaxies.

  6. ROBO: a model and a code for studying the interstellar medium

    NASA Astrophysics Data System (ADS)

    Grassi, T.; Krstic, P.; Merlin, E.; Buonomo, U.; Piovan, L.; Chiosi, C.

    2011-09-01

    We present robo, a model and its companion code for the study of the interstellar medium (ISM). The aim is to provide an accurate description of the physical evolution of the ISM and to set the ground for an ancillary tool to be inserted in NBody-Tree-SPH (NB-TSPH) simulations of large-scale structures in the cosmological context or of the formation and evolution of individual galaxies. The ISM model consists of gas and dust. The gas chemical composition is regulated by a network of reactions that includes a large number of species (hydrogen and deuterium-based molecules, helium, and metals). New reaction rates for the charge transfer in H+ and H2 collisions are presented. The dust contains the standard mixture of carbonaceous grains (graphite grains and PAHs) and silicates. In our model dust is formed and destroyed by several processes. The model accurately treats the cooling process, based on several physical mechanisms, and cooling functions recently reported in the literature. The model is applied to a wide range of the input parameters, and the results for important quantities describing the physical state of the gas and dust are presented. The results are organized in a database suited to artificial neural networks (ANNs). Once trained, the ANNs yield the same results obtained by ROBO with great accuracy. We plan to develop ANNs suitably tailored for applications to NB-TSPH simulations of cosmological structures and/or galaxies.

  7. Reaction chain modeling of denitrification reactions during a push-pull test.

    PubMed

    Boisson, A; de Anna, P; Bour, O; Le Borgne, T; Labasque, T; Aquilina, L

    2013-05-01

    Field quantitative estimation of reaction kinetics is required to enhance our understanding of biogeochemical reactions in aquifers. We extended the analytical solution developed by Haggerty et al. (1998) to model an entire first-order reaction chain and estimate the kinetic parameters for each reaction step of the denitrification process. We then assessed the ability of this reaction chain to model biogeochemical reactions by comparing it with experimental results from a push-pull test in a fractured crystalline aquifer (Ploemeur, Brittany, France). Nitrate was used as the reactive tracer, since denitrification involves the sequential reduction of nitrate to nitrogen gas through a chain reaction (NO3(-)→NO2(-)→NO→N2O→N2) under anaerobic conditions. The kinetics of nitrate consumption and by-product formation (NO2(-), N2O) during autotrophic denitrification were quantified by using a reactive tracer (NO3(-)) and a non-reactive tracer (Br(-)). The formation of reaction by-products (NO2(-), N2O, N2) had not previously been considered using a reaction chain approach. Comparison of Br(-) and NO3(-) breakthrough curves showed that 10% of the injected NO3(-) molar mass was transformed during the 12 h experiment (2% into NO2(-), 1% into N2O and the rest into N2 and NO). Similar results, but with slower kinetics, were obtained from laboratory experiments in reactors. The good agreement between the model and the field data shows that the complete denitrification process can be efficiently modeled as a sequence of first-order reactions. The first-order kinetic coefficients obtained through modeling were as follows: k1=0.023 h(-1), k2=0.59 h(-1), k3=16 h(-1), and k4=5.5 h(-1). A next step will be to assess the variability of field reactivity using the methodology developed for modeling push-pull tracer tests.
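
    A first-order reaction chain like the one above can be sketched by direct numerical integration. The sketch below uses the rate constants reported in the abstract but a simple Euler scheme for a closed batch system, not the Haggerty et al. analytical solution and without the transport effects of the field push-pull test, so the conversion it predicts is only indicative.

```python
def chain_decay(ks, c0=1.0, t_end=12.0, dt=1e-3):
    """Euler integration of a linear first-order chain
    A1 -k1-> A2 -k2-> ... -> A_{n+1}; returns final concentrations."""
    n = len(ks)
    c = [c0] + [0.0] * n          # species A1 .. A_{n+1}
    for _ in range(int(t_end / dt)):
        rates = [ks[i] * c[i] for i in range(n)]
        c[0] -= rates[0] * dt
        for i in range(1, n):
            c[i] += (rates[i - 1] - rates[i]) * dt
        c[n] += rates[n - 1] * dt
    return c

# Fitted coefficients from the study (h^-1): NO3- -> NO2- -> NO -> N2O -> N2
c = chain_decay([0.023, 0.59, 16.0, 5.5])
```

    Because every unit lost by one species is gained by the next, total molar mass is conserved exactly at each step, which makes a convenient sanity check on the integration.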

  8. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    SciTech Connect

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses the particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
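
    The combinatory evaluation described above amounts to multiplying stage factors along each serial leak path and summing over parallel pathways weighted by the fraction of material entering each. The numbers below are illustrative only, not values from the MELCOR analysis.

```python
from math import prod

def path_lpf(stage_factors):
    """LPF of one serial leak path = product of its stage factors."""
    return prod(stage_factors)

def total_lpf(paths, fractions):
    """Combine parallel paths, weighting each path's LPF by the fraction
    of the source term that enters it (fractions sum to <= 1)."""
    return sum(f * path_lpf(p) for f, p in zip(fractions, paths))

# The assumed 0.5 x 0.5 serial pathway gives an LPF of 0.25;
# a filtered route (illustrative: 0.9 to the filter, 0.001 through it)
# contributes far less per unit of material routed through it.
assumed = path_lpf([0.5, 0.5])
combined = total_lpf([[0.5, 0.5], [0.9, 0.001]], [0.7, 0.3])
```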

  9. A grid-based coulomb collision model for PIC codes

    SciTech Connect

    Jones, M.E.; Lemons, D.S.; Mason, R.J.; Thomas, V.A.; Winske, D.

    1996-01-01

    A new method is presented to model the intermediate regime between collisionless and Coulomb-collision-dominated plasmas in particle-in-cell codes. Collisional processes between particles of different species are treated through the concept of a grid-based "collision field," which can be particularly efficient for multi-dimensional applications. In this method, particles are scattered using a force which is determined from the moments of the distribution functions accumulated on the grid. The form of the force is chosen to reproduce the multi-fluid transport equations through the second (energy) moment. Collisions between particles of the same species require a separate treatment. For this, a Monte Carlo-like scattering method based on the Langevin equation is used. The details of both methods are presented, and their implementation in a new hybrid (particle ion, massless fluid electron) algorithm is described. Aspects of the collision model are illustrated through several one- and two-dimensional test problems as well as examples involving laser-produced colliding plasmas.

  10. A New Approach to Model Pitch Perception Using Sparse Coding

    PubMed Central

    Furst, Miriam; Barak, Omri

    2017-01-01

    Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or a low- and a high-amplitude stimulus with the same spectral content, all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner-hair-cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments. PMID:28099436
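
    The core of sparse coding, representing a signal by a few atoms from an overcomplete dictionary, can be sketched with greedy matching pursuit. The 1-D toy dictionary below is a hypothetical illustration, not the paper's spatiotemporal auditory-nerve atoms.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

def matching_pursuit(signal, dictionary, n_atoms=2):
    """Greedy sparse coding: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its projection."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_atoms):
        best_i = max(range(len(dictionary)),
                     key=lambda i: abs(dot(residual, dictionary[i])))
        c = dot(residual, dictionary[best_i])
        coeffs[best_i] = coeffs.get(best_i, 0.0) + c
        residual = [r - c * d for r, d in zip(residual, dictionary[best_i])]
    return coeffs, residual

# Unit-norm toy dictionary: two "templates" plus a spread-out distractor
D = [[1, 0, 0, 0], [0, 1, 0, 0], [0.5, 0.5, 0.5, 0.5]]
sig = [3.0, 0.0, 0.0, 0.0]
coeffs, res = matching_pursuit(sig, D, n_atoms=1)
```

    A single non-zero coefficient suffices here, which mirrors the paper's point: the percept is carried by which few atoms are active, not by the full dense representation.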

  11. Modeling the isotope effect in Walden inversion reactions

    NASA Astrophysics Data System (ADS)

    Schechter, Israel

    1991-05-01

    A simple model to explain the isotope effect in the Walden exchange reaction is suggested. It is developed in the spirit of the line-of-centers models, and considers a hard-sphere collision that transfers energy from the relative translation to the desired vibrational mode, as well as geometrical properties and steric requirements. This model reproduces the recently measured cross sections for the reactions of hydrogen with isotopic silanes and older measurements of the substitution reactions of tritium atoms with isotopic methanes. Unlike previously given explanations, this model explains the effect of the attacking atom as well as of the other participating atoms. The model also provides a qualitative explanation of the measured relative yields and thresholds of CH3T and CH2TF from the reaction T + CH3F. Predictions for isotope effects and cross sections of some unmeasured reactions are given.
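
    The line-of-centers form such models build on gives the reactive cross section as sigma(E) = pi d^2 (1 - E0/E) for collision energy E above a threshold E0, and zero below it. The parameters in the sketch are illustrative, not fitted values from the paper.

```python
import math

def line_of_centers_sigma(E, E0, d):
    """Hard-sphere line-of-centers reactive cross section:
    sigma(E) = pi * d**2 * (1 - E0/E) for E > E0, else 0."""
    if E <= E0:
        return 0.0
    return math.pi * d * d * (1.0 - E0 / E)
```

    The cross section rises from zero at threshold and saturates at the geometric value pi d^2 at high collision energy; isotope effects enter through the threshold and the efficiency of translational-to-vibrational energy transfer.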

  12. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... each standard code and the phrase “or fire retardant treated wood” in reference note (a) of table 600... Part I—Administrative, and the reference to fire retardant treated plywood in section 2504(c)3 and...

  13. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... each standard code and the phrase “or fire retardant treated wood” in reference note (a) of table 600... Part I—Administrative, and the reference to fire retardant treated plywood in section 2504(c)3 and...

  14. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... each standard code and the phrase “or fire retardant treated wood” in reference note (a) of table 600... Part I—Administrative, and the reference to fire retardant treated plywood in section 2504(c)3 and...

  15. Semantic-preload video model based on VOP coding

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce semantic gap which exists between high-level semantics and low-level features of video when the human understanding image or video, people mostly try the method of video annotation where in signal's downstream, namely further (again) attach labels to the content in video-database. Few people focus on the idea that: Use limited interaction and the means of comprehensive segmentation (including optical technologies) from the front-end of collection of video information (i.e. video camera), with video semantics analysis technology and corresponding concepts sets (i.e. ontology) which belong in a certain domain, as well as story shooting script and the task description of scene shooting etc; Apply different-level semantic descriptions to enrich the attributes of video object and the attributes of image region, then forms a new video model which is based on Video Object Plan (VOP) Coding. This model has potential intellectualized features, and carries a large amount of metadata, and embedded intermediate-level semantic concept into every object. This paper focuses on the latter, and presents a framework of a new video model. At present, this new video model is temporarily named "Video Model of Semantic-Preloaded or Semantic-Preload Video Model (simplified into VMoSP or SPVM)". This model mainly researches how to add labeling to video objects and image regions in real time, here video object and image region are usually used intermediate semantic labeling, and this work is placed on signal's upstream (i.e. video capture production stage). Because of the research needs, this paper also tries to analyses the hierarchic structure of video, and divides the hierarchic structure into nine hierarchy semantic levels, of course, this nine hierarchy only involved in video production process. In addition, the paper also point out that here semantic level tagging work (i.e. semantic preloading) only refers to the four middle-level semantic. 
All in

  16. Effect of reactions in small eddies on biomass gasification with eddy dissipation concept - Sub-grid scale reaction model.

    PubMed

    Chen, Juhui; Yin, Weijie; Wang, Shuai; Meng, Cheng; Li, Jiuru; Qin, Bai; Yu, Guangbin

    2016-07-01

    A large-eddy simulation (LES) approach is used for gas turbulence, and an eddy dissipation concept (EDC)-sub-grid scale (SGS) reaction model is employed for reactions in small eddies. The simulated gas molar fractions are in better agreement with experimental data with the EDC-SGS reaction model. The effect of reactions in small eddies on biomass gasification is analyzed in detail with the EDC-SGS reaction model. The distributions of the SGS reaction rates, which represent the reactions in small eddies, are analyzed together with particle concentration and temperature. The distributions of the SGS reaction rates follow a trend similar to those of the total reaction rates, and their values account for about 15% of the total reaction rates. The heterogeneous reaction rates with the EDC-SGS reaction model are also improved during the biomass gasification process in the bubbling fluidized bed.

  17. Cost effectiveness of the 1993 Model Energy Code in Colorado

    SciTech Connect

    Lucas, R.G.

    1995-06-01

    This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1993 Model Energy Code (MEC) building thermal-envelope requirements for single-family homes in Colorado. The goal of this analysis was to compare the cost effectiveness of the 1993 MEC to current construction practice in Colorado based on an objective methodology that determined the total life-cycle cost associated with complying with the 1993 MEC. This analysis was performed for the range of Colorado climates. The costs and benefits of complying with the 1993 MEC were estimated from the consumer's perspective. The time when the homeowner realizes net cash savings (net positive cash flow) for homes built in accordance with the 1993 MEC was estimated to vary from 0.9 year in Steamboat Springs to 2.4 years in Denver. Compliance with the 1993 MEC was estimated to increase first costs by $1190 to $2274, resulting in an incremental down payment increase of $119 to $227 (at 10% down). The net present value of all costs and benefits to the home buyer, accounting for the mortgage and taxes, varied from a savings of $1772 in Springfield to a savings of $6614 in Steamboat Springs. The ratio of benefits to costs ranged from 2.3 in Denver to 3.8 in Steamboat Springs.
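
    The core arithmetic of such a cost-effectiveness study, discounting a stream of annual energy savings against an increased first cost, can be sketched as below. The inputs are illustrative round numbers, not the study's actual fuel prices, mortgage terms, or tax treatment.

```python
def npv(annual_saving, extra_first_cost, discount_rate, years):
    """Net present value of annual energy savings minus the first-cost
    increase, discounted over the analysis period."""
    pv = sum(annual_saving / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - extra_first_cost

def benefit_cost_ratio(annual_saving, extra_first_cost, discount_rate, years):
    """Ratio of discounted savings to the first-cost increase."""
    pv = sum(annual_saving / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv / extra_first_cost

value = npv(300.0, 2000.0, 0.05, 30)
ratio = benefit_cost_ratio(300.0, 2000.0, 0.05, 30)
```

    With these illustrative inputs the measure pays back (positive NPV, benefit-cost ratio above 2), which is the same kind of result the report states for the Colorado climates.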

  18. Modelling couplings between reaction, fluid flow and deformation: Kinetics

    NASA Astrophysics Data System (ADS)

    Malvoisin, Benjamin; Podladchikov, Yury Y.; Connolly, James A. D.

    2016-04-01

    Mineral assemblages out of equilibrium are commonly found in metamorphic rocks, testifying to the critical role of kinetics in metamorphic reactions. As experimentally determined reaction rates in fluid-saturated systems generally indicate complete reaction in less than several years, i.e. several orders of magnitude faster than field-based estimates, metamorphic reaction kinetics are generally thought to be controlled by transport rather than by processes at the mineral surface. However, some geological processes like earthquakes or slow-slip events have shorter characteristic timescales, and transport processes can be intimately related to mineral surface processes. Therefore, it is important to take into account the kinetics of mineral surface processes when modelling fluid/rock interactions. Here, a model coupling reaction, fluid flow and deformation was improved by introducing a delay in the achievement of equilibrium. The classical formalism for dissolution/precipitation reactions was used to consider the influence of the distance from equilibrium and of temperature on the reaction rate, and a dependence on porosity was introduced to model the evolution of the reacting surface area during reaction. The fitting of experimental data for three reactions typically occurring in metamorphic systems (serpentine dehydration, muscovite dehydration and calcite decarbonation) indicates systematically faster kinetics close to equilibrium on the dehydration side than on the hydration side. This effect is amplified through the porosity term in the reaction rate, since porosity is formed during dehydration. Numerical modelling indicates that this difference in reaction rate close to equilibrium plays a key role in microtexture formation. The developed model can be used in a wide variety of geological systems where couplings between reaction, deformation and fluid flow have to be considered.

  19. Modeling Second-Order Chemical Reactions using Cellular Automata

    NASA Astrophysics Data System (ADS)

    Hunter, N. E.; Barton, C. C.; Seybold, P. G.; Rizki, M. M.

    2012-12-01

    Cellular automata (CA) are discrete, agent-based, dynamic, iterated, mathematical computational models used to describe complex physical, biological, and chemical systems. Unlike the more computationally demanding molecular dynamics and Monte Carlo approaches, which use "force fields" to model molecular interactions, CA models employ a set of local rules. The traditional approach for modeling chemical reactions is to solve a set of simultaneous differential rate equations to give deterministic outcomes. CA models yield statistical outcomes for a finite number of ingredients. The deterministic solutions appear as limiting cases for conditions such as a large number of ingredients or a finite number of ingredients and many trials. Here we present a 2-dimensional, probabilistic CA model of a second-order gas phase reaction A + B → C, implemented in MATLAB. Beginning with a random distribution of ingredients A and B, formation of C emerges as the system evolves. The reaction rate can be varied based on the probability of favorable collisions of the reagents A and B. The model permits visualization of the conversion of reagents to products, and allows one to plot concentration vs. time for A, B and C. We test hypothetical reaction conditions such as limiting reagents, the effects of reaction probabilities, and reagent concentrations on the reaction kinetics. The deterministic solutions of the reactions emerge as statistical averages in the limit of the large number of cells in the array. Modeling results for dynamic processes in the atmosphere will be presented.
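
    A minimal version of such a probabilistic CA can be written in a few lines (in Python here, whereas the authors used MATLAB): A and B particles random-walk on a periodic grid, and when an A and a B share a cell they react to form C with probability p. Grid size, particle counts, and p below are illustrative.

```python
import random

def step(agents, size, p_react, rng):
    """One CA iteration: every particle takes a random-walk step, then
    co-located A/B pairs react to C with probability p_react."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    for a in agents:
        dx, dy = rng.choice(moves)
        a[1] = (a[1] + dx) % size
        a[2] = (a[2] + dy) % size
    cells = {}
    for a in agents:
        cells.setdefault((a[1], a[2]), []).append(a)
    for occupants in cells.values():
        As = [a for a in occupants if a[0] == "A"]
        Bs = [a for a in occupants if a[0] == "B"]
        for a, b in zip(As, Bs):
            if rng.random() < p_react:
                a[0] = "C"   # the A particle becomes the product C...
                b[0] = "X"   # ...and the B particle is consumed
    agents[:] = [a for a in agents if a[0] != "X"]

rng = random.Random(0)
size = 20
agents = ([["A", rng.randrange(size), rng.randrange(size)] for _ in range(60)]
          + [["B", rng.randrange(size), rng.randrange(size)] for _ in range(60)])
for _ in range(200):
    step(agents, size, 0.5, rng)
nA = sum(a[0] == "A" for a in agents)
nC = sum(a[0] == "C" for a in agents)
```

    Each reaction converts exactly one A into one C and removes one B, so the count nA + nC stays fixed at the initial A population; averaging many such runs recovers the deterministic second-order kinetics, as the abstract describes.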

  20. An Analytical Model for BDS B1 Spreading Code Self-Interference Evaluation Considering NH Code Effects.

    PubMed

    Zhang, Xin; Zhan, Xingqun; Feng, Shaojun; Ochieng, Washington

    2017-03-23

    The short spreading codes used by the BeiDou Navigation Satellite System (BDS) B1-I or GPS Coarse/Acquisition (C/A) signals can cause undesirable aggregate cross-correlation between signals within each single constellation. This GPS-to-GPS or BDS-to-BDS correlation is referred to as self-interference. A GPS C/A code self-interference model is extended to propose a self-interference model for BDS B1, taking into account the unique feature of the B1-I signal transmitted by BDS medium Earth orbit (MEO) and inclined geosynchronous orbit (IGSO) satellites: an extra Neumann-Hoffmann (NH) code. Currently there is no analytical model for BDS self-interference, so a simple three-parameter analytical model is proposed. The model is developed by calculating the spectral separation coefficient (SSC), converting the SSC to an equivalent white noise power level, and then using this to calculate the effective carrier-to-noise density ratio. The cyclostationarity embedded in the signal offers the proposed model additional accuracy in predicting B1-I self-interference. Hardware simulator data are used to validate the model. Software simulator data are used to show the impact of self-interference on a typical BDS receiver, including the finding that the self-interference effect is most significant when the differential Doppler between the desired and undesired signals is zero. Simulation results show that the aggregate noise caused by just two undesirable spreading codes on a single desirable signal could lift the receiver noise floor by 3.83 dB under extreme C/N₀ (carrier-to-noise density ratio) conditions (around 20 dB-Hz). This aggregate noise has the potential to increase the code tracking standard deviation by 11.65 m under low C/N₀ (15-19 dB-Hz) conditions and should therefore be avoided for high-sensitivity applications. Although the findings refer to the BeiDou system, the principal weaknesses of the short codes highlighted here are valid for other satellite navigation systems.
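
    The SSC-to-noise-floor conversion the model performs follows the standard effective carrier-to-noise form, (C/N0)_eff = C / (N0 + sum_i C_i * SSC_i), where C_i is each interfering signal's power and SSC_i its spectral separation coefficient. The values below are illustrative, not the paper's fitted three-parameter model.

```python
import math

def effective_cn0_dbhz(cn0_dbhz, interferers):
    """Effective C/N0 after adding aggregate self-interference as an
    equivalent white noise density.
    interferers: list of (C_i/N0 in dB-Hz, SSC_i in dB/Hz) pairs."""
    n0 = 1.0                               # normalise N0 = 1 W/Hz
    c = 10 ** (cn0_dbhz / 10) * n0         # desired carrier power (linear)
    i0 = sum(10 ** (ci / 10) * n0 * 10 ** (ssc / 10)   # C_i * SSC_i
             for ci, ssc in interferers)
    return 10 * math.log10(c / (n0 + i0))

clean = effective_cn0_dbhz(45.0, [])
degraded = effective_cn0_dbhz(45.0, [(45.0, -70.0), (45.0, -70.0)])
```

    With no interferers the effective value reduces to the nominal C/N0; each interferer raises the effective noise floor and the degradation grows as the nominal C/N0 falls, which is why the paper's worst cases occur around 20 dB-Hz.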

  1. Sodium/water pool-deposit bed model of the CONACS code. [LMFBR

    SciTech Connect

    Peak, R.D.

    1983-12-17

    A new Pool-Bed model of the CONACS (Containment Analysis Code System) code represents a major advance over the pool models of other containment analysis codes (the NABE code of France, the CEDAN code of Japan, and the CACECO and CONTAIN codes of the United States). This new model advances pool-bed modeling because of the number of significant materials and processes which are included with appropriate rigor. This CONACS pool-bed model maintains material balances for eight chemical species (C, H2O, Na, NaH, Na2O, Na2O2, Na2CO3 and NaOH) that collect in the stationary liquid pool on the floor and in the deposit bed on the elevated shelf of the standard CONACS analysis cell.

  2. STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python.

    PubMed

    Wils, Stefan; De Schutter, Erik

    2009-01-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation, and that became increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code.

  3. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  4. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc., which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
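
    The three required entry points named above (check data, request extra field variables, perform model physics) can be sketched as an abstract interface. The class and method names below are hypothetical illustrations of the contract, not actual MIG routine names, and the toy linear-elastic model is an assumption for demonstration only.

    ```python
    from abc import ABC, abstractmethod

    class MIGModel(ABC):
        """Sketch of a three-entry-point model contract (names hypothetical)."""

        @abstractmethod
        def check_data(self, params):
            """Validate and store precharacterized material data."""

        @abstractmethod
        def request_extra_variables(self):
            """Return the names of field variables the parent code must supply."""

        @abstractmethod
        def run_physics(self, state, dt):
            """Advance the model physics over one step of size dt."""

    class ElasticModel(MIGModel):
        """Toy one-parameter elastic model obeying the contract."""

        def check_data(self, params):
            assert params["youngs_modulus"] > 0.0
            self.E = params["youngs_modulus"]

        def request_extra_variables(self):
            return ["strain"]

        def run_physics(self, state, dt):
            # Stress follows strain linearly; dt is unused in this toy model.
            state["stress"] = self.E * state["strain"]
            return state
    ```

    A parent code that knows only the abstract interface can drive any compliant model, which is the portability property the guidelines aim for.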

  5. Chemical reactions simulated by ground-water-quality models

    USGS Publications Warehouse

    Grove, David B.; Stollenwerk, Kenneth G.

    1987-01-01

    Recent literature concerning the modeling of chemical reactions during transport in ground water is examined with emphasis on sorption reactions. The theory of transport and reactions in porous media has been well documented. Numerous equations have been developed from this theory, to provide both continuous and sequential or multistep models, with the water phase considered for both mobile and immobile phases. Chemical reactions can be either equilibrium or non-equilibrium, and can be quantified in linear or non-linear mathematical forms. Non-equilibrium reactions can be separated into kinetic and diffusional rate-limiting mechanisms. Solutions to the equations are available by either analytical expressions or numerical techniques. Saturated and unsaturated batch, column, and field studies are discussed with one-dimensional, laboratory-column experiments predominating. A summary table is presented that references the various kinds of models studied and their applications in predicting chemical concentrations in ground waters.
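
    For the simplest case surveyed here, linear equilibrium sorption during one-dimensional transport, sorption reduces to a retardation factor applied to the advection-dispersion equation. A minimal sketch using the standard Ogata-Banks closed-form solution (all parameter values in the test are illustrative, not from any study cited above):

    ```python
    import math

    def retardation_factor(bulk_density, kd, theta):
        """Linear equilibrium sorption retardation: R = 1 + rho_b * Kd / theta."""
        return 1.0 + bulk_density * kd / theta

    def ogata_banks(c0, x, t, v, D, R=1.0):
        """Ogata-Banks solution of 1-D advection-dispersion with retardation.

        Concentration at distance x and time t for a continuous inlet at
        concentration c0, pore velocity v, dispersion coefficient D.
        Note: the exp() term can overflow for extreme v*x/D; this is a
        sketch, not a production solver.
        """
        vr, Dr = v / R, D / R                      # retarded velocity and dispersion
        denom = 2.0 * math.sqrt(Dr * t)
        arg1 = (x - vr * t) / denom
        arg2 = (x + vr * t) / denom
        return 0.5 * c0 * (math.erfc(arg1) +
                           math.exp(vr * x / Dr) * math.erfc(arg2))
    ```

    Behind the solute front the solution approaches the inlet concentration, and far ahead of it the concentration is near zero, as expected.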

  6. Abundances in Astrophysical Environments: Reaction Network Simulations with Reaction Rates from Many-nucleon Modeling

    NASA Astrophysics Data System (ADS)

    Amason, Charlee; Dreyfuss, Alison; Launey, Kristina; Draayer, Jerry

    2017-01-01

    We use the ab initio (first-principles) symmetry-adapted no-core shell model (SA-NCSM) to calculate reaction rates of significance to type I X-ray burst nucleosynthesis. We consider the 18O(p,γ)19F reaction, which may influence the production of fluorine, as well as the 16O(α,γ)20Ne reaction, which is key to understanding the production of heavier elements in the universe. Results are compared to those obtained in the no-core symplectic shell model (NCSpM) with a schematic interaction. We discuss how these reaction rates affect the relevant elemental abundances. We thank the NSF for supporting this work through the REU Site in Physics & Astronomy (NSF grant #1560212) at Louisiana State University. This work was also supported by the U.S. NSF (OCI-0904874, ACI-1516338) and the U.S. DOE (DE-SC0005248).

  7. Comparison of DSMC reaction models with QCT reaction rates for nitrogen

    NASA Astrophysics Data System (ADS)

    Wysong, Ingrid J.; Gimelshein, Sergey F.

    2016-11-01

    Four empirical models of chemical reactions extensively used in the direct simulation Monte Carlo method in the past are analyzed via comparison of temperature and vibrational level dependent equilibrium and non-equilibrium reaction rates with available classical trajectory and direct molecular simulations for nitrogen dissociation. The considered models are total collision energy, quantum kinetic, vibration-dissociation favoring, and weak vibrational bias. The weak vibrational bias model was found to provide good agreement with benchmark vibrationally-specific dissociation rates, while significant differences were observed for the others.

  8. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
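
    The rate-limiting step quoted above, O + N2 = NO + N, gives the familiar thermal-NO estimate d[NO]/dt ≈ 2·k1·[O][N2] under the quasi-steady N-atom assumption. A sketch with approximate Arrhenius constants (commonly quoted literature values, not taken from this paper; verify before use):

    ```python
    import math

    def zeldovich_no_rate(T, conc_O, conc_N2):
        """Thermal-NO formation rate d[NO]/dt ~ 2 * k1 * [O] * [N2].

        T in K, concentrations in mol/cm^3; k1 uses approximate
        literature Arrhenius constants (illustrative assumption).
        """
        k1 = 1.8e14 * math.exp(-38370.0 / T)   # cm^3 mol^-1 s^-1, approximate
        return 2.0 * k1 * conc_O * conc_N2
    ```

    The steep exponential in k1 is why thermal NO is confined to the hottest regions of the flow field, which is what the nodal postprocessing approach above exploits.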

  9. Reading and a Diffusion Model Analysis of Reaction Time

    PubMed Central

    Naples, Adam; Katz, Leonard; Grigorenko, Elena L.

    2012-01-01

    Processing speed is associated with reading performance. However, the literature is not clear either on the definition of processing speed or on why and how it contributes to reading performance. In this study we demonstrated that processing speed, as measured by reaction time, is not a unitary construct. Using the diffusion model of two-choice reaction time, we assessed processing speed in a series of same-different reaction time tasks for letter and number strings. We demonstrated that the association between reaction time and reading performance is driven by processing speed for reading-related information, but not motor or sensory encoding speed. PMID:22612543
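
    A two-choice diffusion model of the kind used here accumulates noisy evidence toward one of two boundaries; the decision time plus a non-decision (sensory/motor encoding) component gives the predicted reaction time. A minimal simulation sketch (the parameter values are illustrative, not the study's fitted values):

    ```python
    import random

    def diffusion_model_trial(drift, boundary=1.0, nondecision=0.3,
                              dt=1e-3, sigma=1.0, rng=random.Random(42)):
        """Simulate one two-choice diffusion-model trial.

        Evidence starts at 0 and drifts at rate `drift` with Gaussian
        noise of scale `sigma` until it crosses +/- `boundary`.
        Returns (choice, reaction_time), where choice 1 is the upper
        boundary and the RT includes the non-decision component.
        """
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        return (1 if x >= boundary else 0, t + nondecision)
    ```

    Separating the drift rate (information-processing speed) from the non-decision time is exactly what lets the model distinguish reading-related processing from motor or sensory encoding speed, as the abstract describes.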

  10. A Film Depositional Model of Permeability for Mineral Reactions in Unsaturated Media.

    SciTech Connect

    Freedman, Vicky L.; Saripalli, Prasad; Bacon, Diana H.; Meyer, Philip D.

    2004-11-15

    A new modeling approach based on the biofilm models of Taylor et al. (1990, Water Resources Research, 26, 2153-2159) has been developed for modeling changes in porosity and permeability in saturated porous media and implemented in an inorganic reactive transport code. Application of the film depositional models to mineral precipitation and dissolution reactions requires that calculations of mineral films change dynamically as a function of time-dependent reaction processes. Since calculations of film thicknesses do not consider mineral density, results show that the film porosity model does not adequately describe volumetric changes in the porous medium. These effects can be included in permeability calculations by coupling the film permeability models (Mualem; Childs and Collis-George) to a volumetric model that incorporates both mineral density and reactive surface area. Model simulations demonstrate that an important difference between the biofilm and mineral film models is in the translation of changes in mineral radii to changes in pore space. Including the effect of tortuosity on pore radii changes improves the performance of the Mualem permeability model for both precipitation and dissolution. Results from the simulation of simultaneous dissolution and secondary mineral precipitation provide reasonable estimates of porosity and permeability. Moreover, a comparison of experimental and simulated data shows that the model yields qualitatively reasonable results for permeability changes due to solid-aqueous phase reactions.

  11. A Kinetic Ladle Furnace Process Simulation Model: Effective Equilibrium Reaction Zone Model Using FactSage Macro Processing

    NASA Astrophysics Data System (ADS)

    Van Ende, Marie-Aline; Jung, In-Ho

    2017-02-01

    The ladle furnace (LF) is widely used in the secondary steelmaking process in particular for the de-sulfurization, alloying, and reheating of liquid steel prior to the casting process. The Effective Equilibrium Reaction Zone model using the FactSage macro processing code was applied to develop a kinetic LF process model. The slag/metal interactions, flux additions to slag, various metallic additions to steel, and arcing in the LF process were taken into account to describe the variations of chemistry and temperature of steel and slag. The LF operation data for several steel grades from different plants were accurately described using the present kinetic model.

  12. General Description of Fission Observables: GEF Model Code

    NASA Astrophysics Data System (ADS)

    Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.

    2016-01-01

    The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  13. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    SciTech Connect

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.; Mahadevan, Sankaran

    2015-09-01

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements, to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and the factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code, based on the Multiphysics Object Oriented Simulation Environment. The implemented model is used to randomly initiate ASR in 2D and 3D lattices to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how the ingress of sodium and hydroxide ions at elevated temperature impacts concrete samples embedded with glass. A thermal camera was used to monitor the changes in the concrete samples, and the results are summarized.

  14. Formal modeling of a system of chemical reactions under uncertainty.

    PubMed

    Ghosh, Krishnendu; Schlipf, John

    2014-10-01

    We describe a novel formalism representing a system of chemical reactions, with imprecise rates of reactions and concentrations of chemicals, and describe a model reduction method, pruning, based on the chemical properties. We present two algorithms, midpoint approximation and interval approximation, for the construction of efficient model abstractions with uncertainty in data. We evaluate computational feasibility by posing queries in computation tree logic (CTL) on a prototype of the extracellular signal-regulated kinase (ERK) pathway.

  15. Turing patterns in a reaction-diffusion model with the Degn-Harrison reaction scheme

    NASA Astrophysics Data System (ADS)

    Li, Shanbing; Wu, Jianhua; Dong, Yaying

    2015-09-01

    In this paper, we consider a reaction-diffusion model with the Degn-Harrison reaction scheme. Some fundamental analytic properties of nonconstant positive solutions are first investigated. We next study the stability of the constant steady-state solution to both the ODE and PDE models. Our results also indicate that if either the size of the reactor or the effective diffusion rate is large enough, then the system does not admit nonconstant positive solutions. Finally, we establish the global structure of steady-state bifurcations from simple eigenvalues by bifurcation theory, and the local structure of the steady-state bifurcations from double eigenvalues by the techniques of space decomposition and the implicit function theorem.

  16. The APS SASE FEL: modeling and code comparison.

    SciTech Connect

    Biedron, S. G.

    1999-04-20

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  17. Chemical Reaction and Flow Modeling in Fullerene and Nanotube Production

    NASA Technical Reports Server (NTRS)

    Scott, Carl D.; Farhat, Samir; Greendyke, Robert B.

    2004-01-01

    addresses modeling of the arc process for fullerene and carbon nanotube production using 0-D, 1-D and 2-D fluid flow models. The third part addresses simulations of the pulsed laser ablation process using time-dependent techniques in 2-D, and a steady-state 2-D simulation of a continuous laser ablation process. The fourth part addresses steady-state modeling in 0-D and 2-D of the HiPco process. In each of the simulations, a variety of simplifications are made that enable one to concentrate on one aspect or another of the process. Simplifications can be made to the chemical reaction models, e.g., reducing the number of species by lumping some of them together into a representative species. Other simulations are carried out by eliminating the chemistry altogether in order to concentrate on the fluid dynamics. When solving problems with a large number of species in more than one spatial dimension, it is almost imperative that the problem be decoupled: one first solves for the fluid dynamics to find the fluid motion and temperature history of "particles" of fluid moving through a reactor, and then solves the chemical rate equations with complex chemistry following the temperature and pressure history. One difficulty is that mixing with an ambient gas is often involved, so dilution and mixing must be taken into account; these change the ratio of carbon species to background gas. Commercially available codes may have no provision for including dilution as part of the input, so one must write special solvers to include dilution in decoupled problems. The article addresses both fullerene production and single-walled carbon nanotube (SWNT) production. There are at least two schemes or concepts of SWNT growth. This article only addresses growth in the gas phase by carbon and catalyst cluster growth and SWNT formation by the addition of carbon.
There are other models that conceive of SWNT growth as a phase separation process from clusters me
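
    The decoupled strategy described above — integrating the chemistry along a fluid particle's precomputed temperature history, with an extra loss term for dilution by ambient gas — can be sketched for a single lumped species. The one-species first-order rate law and the Arrhenius constants below are illustrative assumptions, not values from the article.

    ```python
    import math

    def integrate_along_history(c0, times, temps, dilution_rate=0.0,
                                A=1.0e6, Ea_over_R=1.5e4):
        """Explicit-Euler integration of dC/dt = -k(T(t))*C - dilution_rate*C
        along a precomputed particle temperature history (times, temps).

        A and Ea_over_R are hypothetical Arrhenius constants; returns the
        concentration history sampled at the given times.
        """
        c = c0
        history = [c]
        for i in range(1, len(times)):
            dt = times[i] - times[i - 1]
            k = A * math.exp(-Ea_over_R / temps[i - 1])   # Arrhenius rate, 1/s
            c += dt * (-k * c - dilution_rate * c)        # reaction + dilution loss
            history.append(c)
        return history
    ```

    Because the fluid solution is frozen, this chemistry integration can use arbitrarily detailed kinetics at each step without re-solving the flow, which is the point of the decoupling.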

  18. Code interoperability and standard data formats in quantum chemistry and quantum dynamics: The Q5/D5Cost data model.

    PubMed

    Rossi, Elda; Evangelisti, Stefano; Laganà, Antonio; Monari, Antonio; Rampino, Sergio; Verdicchio, Marco; Baldridge, Kim K; Bendazzoli, Gian Luigi; Borini, Stefano; Cimiraglia, Renzo; Angeli, Celestino; Kallay, Peter; Lüthi, Hans P; Ruud, Kenneth; Sanchez-Marin, José; Scemama, Anthony; Szalay, Peter G; Tajti, Attila

    2014-03-30

    Code interoperability and the search for domain-specific standard data formats represent critical issues in many areas of computational science. The advent of novel computing infrastructures such as computational grids and clouds make these issues even more urgent. The design and implementation of a common data format for quantum chemistry (QC) and quantum dynamics (QD) computer programs is discussed with reference to the research performed in the course of two Collaboration in Science and Technology Actions. The specific data models adopted, Q5Cost and D5Cost, are shown to work for a number of interoperating codes, regardless of the type and amount of information (small or large datasets) to be exchanged. The codes are either interfaced directly, or transfer data by means of wrappers; both types of data exchange are supported by the Q5/D5Cost library. Further, the exchange of data between QC and QD codes is addressed. As a proof of concept, the H + H2 reaction is discussed. The proposed scheme is shown to provide an excellent basis for cooperative code development, even across domain boundaries. Moreover, the scheme presented is found to be useful also as a production tool in the grid distributed computing environment.

  19. Modified version of the combined model of photonucleon reactions

    SciTech Connect

    Ishkhanov, B. S.; Orlin, V. N.

    2015-07-15

    A refined version of the combined photonucleon-reaction model is described. This version makes it possible to take into account the effect of structural features of the doorway dipole state on photonucleon reactions in the energy range of Eγ ≤ 30 MeV. In relation to the previous version of the model, the treatment of isospin effects at the preequilibrium and evaporation reaction stages is refined; in addition, the description of the semidirect effect caused by nucleon emission from the doorway dipole state is improved. The model in question is used to study photonucleon reactions on the isotopes ³⁵⁻⁵⁶Ca and ¹⁰²⁻¹³⁴Sn in the energy range indicated above.

  20. Implicit solvation model for density-functional study of nanocrystal surfaces and reaction pathways

    NASA Astrophysics Data System (ADS)

    Mathew, Kiran; Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Arias, T. A.; Hennig, Richard G.

    2014-02-01

    Solid-liquid interfaces are at the heart of many modern-day technologies and provide a challenge to many materials simulation methods. A realistic first-principles computational study of such systems entails the inclusion of solvent effects. In this work, we implement an implicit solvation model that has a firm theoretical foundation into the widely used density-functional code Vienna Ab initio Simulation Package (VASP). The implicit solvation model follows the framework of joint density functional theory. We describe the framework, our algorithm and implementation, and benchmarks for small molecular systems. We apply the solvation model to study the surface energies of different facets of semiconducting and metallic nanocrystals and the SN2 reaction pathway. We find that solvation reduces the surface energies of the nanocrystals, especially for the semiconducting ones, and increases the energy barrier of the SN2 reaction.

  1. Regimes of chemical reaction waves initiated by nonuniform initial conditions for detailed chemical reaction models.

    PubMed

    Liberman, M A; Kiverin, A D; Ivanov, M F

    2012-05-01

    Regimes of chemical reaction wave propagation initiated by initial temperature nonuniformity in gaseous mixtures, whose chemistry is governed by chain-branching kinetics, are studied using a multispecies transport model and a detailed chemical model. Possible regimes of reaction wave propagation are identified for stoichiometric hydrogen-oxygen and hydrogen-air mixtures in a wide range of initial pressures and temperature levels, depending on the initial non-uniformity steepness. The limits of the regimes of reaction wave propagation depend upon the values of the spontaneous wave speed and the characteristic velocities of the problem. It is shown that one-step kinetics cannot reproduce either the quantitative or the qualitative features of the ignition process in real gaseous mixtures, because the difference between the induction time and the time when the exothermic reaction begins significantly affects the ignition, evolution, and coupling of the spontaneous reaction wave and the pressure wave, especially at lower temperatures. We show that all the regimes initiated by the temperature gradient occur for much shallower temperature gradients than predicted by a one-step model. The difference is very large for lower initial pressures and for slowly reacting mixtures. In this way the paper provides an answer to questions, important in practice, about the ignition energy, its distribution, and the scale of the initial nonuniformity required for ignition in one or another regime of combustion wave propagation.

  2. Regimes of chemical reaction waves initiated by nonuniform initial conditions for detailed chemical reaction models

    NASA Astrophysics Data System (ADS)

    Liberman, M. A.; Kiverin, A. D.; Ivanov, M. F.

    2012-05-01

    Regimes of chemical reaction wave propagation initiated by initial temperature nonuniformity in gaseous mixtures, whose chemistry is governed by chain-branching kinetics, are studied using a multispecies transport model and a detailed chemical model. Possible regimes of reaction wave propagation are identified for stoichiometric hydrogen-oxygen and hydrogen-air mixtures in a wide range of initial pressures and temperature levels, depending on the initial non-uniformity steepness. The limits of the regimes of reaction wave propagation depend upon the values of the spontaneous wave speed and the characteristic velocities of the problem. It is shown that one-step kinetics cannot reproduce either the quantitative or the qualitative features of the ignition process in real gaseous mixtures, because the difference between the induction time and the time when the exothermic reaction begins significantly affects the ignition, evolution, and coupling of the spontaneous reaction wave and the pressure wave, especially at lower temperatures. We show that all the regimes initiated by the temperature gradient occur for much shallower temperature gradients than predicted by a one-step model. The difference is very large for lower initial pressures and for slowly reacting mixtures. In this way the paper provides an answer to questions, important in practice, about the ignition energy, its distribution, and the scale of the initial nonuniformity required for ignition in one or another regime of combustion wave propagation.

  3. The modeling of core melting and in-vessel corium relocation in the APRIL code

    SciTech Connect

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T.

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  4. Strong plasma screening in thermonuclear reactions: Electron drop model

    NASA Astrophysics Data System (ADS)

    Kravchuk, P. A.; Yakovlev, D. G.

    2014-01-01

    We analyze enhancement of thermonuclear fusion reactions due to strong plasma screening in dense matter using a simple electron drop model. In the model we assume fusion in a potential that is screened by an effective electron cloud around colliding nuclei (extended Salpeter ion-sphere model). We calculate the mean-field screened Coulomb potentials for atomic nuclei with equal and nonequal charges, appropriate astrophysical S factors, and enhancement factors of reaction rates. As a byproduct, we study the analytic behavior of the screening potential at small separations between the reactants. In this model, astrophysical S factors depend not only on nuclear physics but on plasma screening as well. The enhancement factors are in good agreement with calculations by other methods. This allows us to formulate a combined, purely analytic model of strong plasma screening in thermonuclear reactions. The results can be useful for simulating nuclear burning in white dwarfs and neutron stars.

  5. Test code for the assessment and improvement of Reynolds stress models

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible-flow Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  6. Molecular Detection of Methicillin-Resistant Staphylococcus aureus by Non-Protein Coding RNA-Mediated Monoplex Polymerase Chain Reaction

    PubMed Central

    Soo Yean, Cheryl Yeap; Selva Raju, Kishanraj; Xavier, Rathinam; Subramaniam, Sreeramanan; Gopinath, Subash C. B.; Chinni, Suresh V.

    2016-01-01

    Non-protein coding RNA (npcRNA) is a functional RNA molecule that is not translated into a protein. Bacterial npcRNAs are structurally diversified molecules, typically 50–200 nucleotides in length. They play a crucial physiological role in cellular networking, including stress responses, replication and bacterial virulence. In this study, by using an identified npcRNA gene (Sau-02) in Methicillin-resistant Staphylococcus aureus (MRSA), we identified the Gram-positive bacterium S. aureus. A Sau-02-mediated monoplex Polymerase Chain Reaction (PCR) assay was designed that displayed high sensitivity and specificity. Fourteen different bacteria and 18 S. aureus strains were tested, and the results showed that the Sau-02 gene is specific to S. aureus. The detection limit was tested against genomic DNA from MRSA and was found to be ~10 genome copies. Further, the detection was extended to whole-cell MRSA detection, and we reached a detection limit of two bacterial cells. The monoplex PCR assay demonstrated in this study is a novel detection method that can replicate other npcRNA-mediated detection assays. PMID:27367909

  7. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q.; Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the package's capabilities, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be given in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  8. Knockout reactions on p-shell nuclei for tests of structure and reaction models

    NASA Astrophysics Data System (ADS)

    Kuchera, A. N.; Bazin, D.; Babo, M.; Baumann, T.; Bowry, M.; Bradt, J.; Brown, J.; Deyoung, P. A.; Elman, B.; Finck, J. E.; Gade, A.; Grinyer, G. F.; Jones, M. D.; Lunderberg, E.; Redpath, T.; Rogers, W. F.; Stiefel, K.; Thoennessen, M.; Weisshaar, D.; Whitmore, K.

    2015-10-01

    A series of knockout reactions on p-shell nuclei were studied to extract exclusive cross sections and to investigate the neutron knockout mechanism. The measured cross sections provide stringent tests of shell model and ab initio calculations while measurements of neutron+residual coincidences test the accuracy and validity of reaction models used to predict cross sections. Six different beams ranging from A = 7 to 12 were produced at the NSCL totaling measurements of nine different reaction settings. The reaction settings were determined by the magnetic field of the Sweeper magnet which bends the residues into charged particle detectors. The reaction target was surrounded by the high efficiency CsI array, CAESAR, to tag gamma rays for cross section measurements of low-lying excited states. Additionally, knocked out neutrons were detected with MoNA-LISA in coincidence with the charged residuals. Preliminary results will be discussed. This work is partially supported by the National Science Foundation under Grant No. PHY11-02511 and the Department of Energy National Nuclear Security Administration under Award No. DE-NA0000979.

  9. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared-memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions applying the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)

  10. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high-energy-density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock, Kelvin-Helmholtz, Rayleigh-Taylor, plasma sheet, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  11. A Lattice Boltzmann Model for Oscillating Reaction-Diffusion

    NASA Astrophysics Data System (ADS)

    Rodríguez-Romo, Suemi; Ibañez-Orozco, Oscar; Sosa-Herrera, Antonio

    2016-07-01

    A computational algorithm based on the lattice Boltzmann method (LBM) is proposed to model reaction-diffusion systems. In this paper, we focus on how nonlinear chemical oscillators like the Belousov-Zhabotinsky (BZ) and the chlorite-iodide-malonic acid (CIMA) reactions can be modeled by LBM, and we provide new insight into the nature and applications of oscillating reactions. We use Gaussian-pulse initial concentrations of sulfuric acid in different places of a bidimensional reactor with nondiffusive boundary walls. We clearly show how these systems evolve to a chaotic attractor and produce specific pattern images, portrayed along the reactions' trajectories toward the corresponding chaotic attractor, that can be used in robotic control.
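    The diffusion-LBM scheme referred to in the abstract can be illustrated compactly. Below is a minimal D2Q5 sketch for a single reacting, diffusing species, assuming a toy logistic source term and periodic boundaries rather than the paper's BZ/CIMA kinetics and nondiffusive walls; all parameter values are illustrative.

    ```python
    import numpy as np

    # Minimal D2Q5 lattice Boltzmann sketch for one reacting-diffusing species.
    # Toy logistic reaction and periodic boundaries -- illustrative assumptions,
    # NOT the BZ/CIMA kinetics or wall conditions of the paper.

    W = np.array([1/3, 1/6, 1/6, 1/6, 1/6])                    # D2Q5 weights
    C = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])   # lattice velocities
    TAU = 1.0                                                  # relaxation time
    K = 0.01                                                   # reaction rate

    def step(f):
        rho = f.sum(axis=0)                        # macroscopic concentration
        feq = W[:, None, None] * rho               # equilibrium for diffusion LBM
        react = K * rho * (1.0 - rho)              # logistic source term
        f = f - (f - feq) / TAU + W[:, None, None] * react
        # streaming: shift each population along its lattice velocity
        for i, (cx, cy) in enumerate(C):
            f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
        return f

    n = 32
    rho0 = np.zeros((n, n)); rho0[n // 2, n // 2] = 1.0   # pulse-like seed
    f = W[:, None, None] * rho0
    for _ in range(100):
        f = step(f)
    rho = f.sum(axis=0)
    ```

    The collision step relaxes each population toward a concentration-weighted equilibrium (pure diffusion); the reaction enters as a weighted source, which is the standard way chemistry is grafted onto a diffusion LBM.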

  12. Uncertainty quantification for quantum chemical models of complex reaction networks.

    PubMed

    Proppe, Jonny; Husch, Tamara; Simm, Gregor N; Reiher, Markus

    2016-12-22

    For the quantitative understanding of complex chemical reaction mechanisms, it is, in general, necessary to accurately determine the corresponding free energy surface and to solve the resulting continuous-time reaction rate equations for a continuous state space. For a general (complex) reaction network, it is computationally hard to fulfill these two requirements. However, it is possible to approximately address these challenges in a physically consistent way. On the one hand, it may be sufficient to consider approximate free energies if a reliable uncertainty measure can be provided. On the other hand, a highly resolved time evolution may not be necessary to still determine quantitative fluxes in a reaction network if one is interested in specific time scales. In this paper, we present discrete-time kinetic simulations in discrete state space taking free energy uncertainties into account. The method builds upon thermo-chemical data obtained from electronic structure calculations in a condensed-phase model. Our kinetic approach supports the analysis of general reaction networks spanning multiple time scales, which is here demonstrated for the example of the formose reaction. An important application of our approach is the detection of regions in a reaction network which require further investigation, given the uncertainties introduced by both approximate electronic structure methods and kinetic models. Such cases can then be studied in greater detail with more sophisticated first-principles calculations and kinetic simulations.
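    The workflow described above (approximate free energies with an uncertainty measure, pushed through discrete-time kinetics) can be sketched on a toy network. The A → B → C chain, the Eyring rate form, the barrier heights, and the 2 kJ/mol noise below are illustrative assumptions, not the paper's formose network.

    ```python
    import numpy as np

    KB_T = 2.479      # k_B*T in kJ/mol at 298 K
    PREFAC = 6.2e12   # k_B*T/h in 1/s

    rng = np.random.default_rng(0)

    def rate(dg_kjmol):
        """Eyring rate constant from an activation free energy (kJ/mol)."""
        return PREFAC * np.exp(-dg_kjmol / KB_T)

    def yield_c(dg1, dg2, dt=1e-5, steps=2000):
        """Discrete-time kinetics for A -> B -> C with a stable per-step decay."""
        a, b, c = 1.0, 0.0, 0.0
        k1, k2 = rate(dg1), rate(dg2)
        for _ in range(steps):
            da = a * -np.expm1(-k1 * dt)   # A consumed this step
            db = b * -np.expm1(-k2 * dt)   # B consumed this step
            a, b, c = a - da, b + da - db, c + db
        return c

    # Monte Carlo over the free-energy uncertainty (sigma = 2 kJ/mol, assumed)
    samples = np.array([yield_c(60 + rng.normal(0, 2), 70 + rng.normal(0, 2))
                        for _ in range(200)])
    lo, hi = np.percentile(samples, [2.5, 97.5])   # 95% interval on yield of C
    ```

    The width of the resulting interval flags which barriers dominate the flux uncertainty and therefore deserve refinement with more accurate electronic-structure methods, which is the detection idea the abstract describes.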

  13. A practical guide to modelling enzyme-catalysed reactions

    PubMed Central

    Lonsdale, Richard; Harvey, Jeremy N.; Mulholland, Adrian J.

    2012-01-01

    Molecular modelling and simulation methods are increasingly at the forefront of elucidating mechanisms of enzyme-catalysed reactions, and shedding light on the determinants of specificity and efficiency of catalysis. These methods have the potential to assist in drug discovery and the design of novel protein catalysts. This Tutorial Review highlights some of the most widely used modelling methods and some successful applications. Modelling protocols commonly applied in studying enzyme-catalysed reactions are outlined here, and some practical implications are considered, with cytochrome P450 enzymes used as a specific example. PMID:22278388

  14. A convolutional code-based sequence analysis model and its application.

    PubMed

    Liu, Xiao; Geng, Xiaoli

    2013-04-16

    A new approach for encoding DNA sequences as input for DNA sequence analysis is proposed using the error correction coding theory of communication engineering. The encoder was designed as a convolutional code model whose generator matrix is designed based on the degeneracy of codons, with a codon treated in the model as an informational unit. The utility of the proposed model was demonstrated through the analysis of twelve prokaryote and nine eukaryote DNA sequences having different GC contents. Distinct differences in code distances were observed near the initiation and termination sites in the open reading frame, which provided a well-regulated characterization of the DNA sequences. Clearly distinguished period-3 features appeared in the coding regions, and the characteristic average code distances of the analyzed sequences were approximately proportional to their GC contents, particularly in the selected prokaryotic organisms, presenting the potential utility as an added taxonomic characteristic for use in studying the relationships of living organisms.
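    A hedged sketch of the encoding idea: nucleotides are mapped to 2-bit symbols and passed through a rate-1/2 convolutional encoder, and a "code distance" between two reads is the Hamming distance of their coded streams. The (7,5)-octal generators are a standard textbook choice, not the codon-degeneracy generator matrix the paper designs.

    ```python
    # Map nucleotides to 2-bit symbols, encode with a rate-1/2 convolutional
    # encoder (generators 7,5 octal -- an illustrative choice, not the paper's
    # codon-based generator matrix), and compare coded streams.

    NUC = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}

    def encode(seq):
        """Convolutionally encode a DNA string; returns the coded bit list."""
        bits = [b for n in seq for b in NUC[n]]
        s1 = s2 = 0                      # two-element shift register
        out = []
        for b in bits:
            out.append(b ^ s1 ^ s2)      # generator 1 + D + D^2 (octal 7)
            out.append(b ^ s2)           # generator 1 + D^2     (octal 5)
            s1, s2 = b, s1
        return out

    def code_distance(seq_a, seq_b):
        """Hamming distance between the coded streams of two equal-length reads."""
        ca, cb = encode(seq_a), encode(seq_b)
        return sum(x != y for x, y in zip(ca, cb))

    d = code_distance("ATGGCT", "ATGGTT")
    ```

    Because the encoder is linear, a single-bit change in the input spreads over the constraint length: the two six-nucleotide reads above differ in one bit, yet their coded streams differ in five.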

  15. Computerized reduction of elementary reaction sets for CFD combustion modeling

    NASA Technical Reports Server (NTRS)

    Wikstrom, Carl V.

    1992-01-01

    Modeling of chemistry in Computational Fluid Dynamics can be the most time-consuming aspect of many applications. If the entire set of elementary reactions is to be solved, a set of stiff ordinary differential equations must be integrated. Some of the reactions take place at very high rates, requiring short time steps, while others take place more slowly and make little progress in the short time step integration.
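    The stiffness problem described above can be made concrete with a two-reaction chain A → B → C whose rate constants differ by six orders of magnitude; an implicit (backward Euler) step stays stable at time steps far larger than the fast time scale, which is the usual motivation for stiff solvers or reduced reaction sets. Rate constants and step sizes are illustrative.

    ```python
    import numpy as np

    # Fast reaction A -> B (K1 large) feeding a slow one B -> C (K2 small).
    # Explicit Euler would require dt << 1/K1; backward Euler does not.
    K1, K2 = 1.0e6, 1.0   # illustrative rate constants (1/s)

    def f(y):
        a, b, c = y
        return np.array([-K1 * a, K1 * a - K2 * b, K2 * b])

    def jac(y):
        return np.array([[-K1, 0, 0], [K1, -K2, 0], [0, K2, 0]])

    def backward_euler(y0, dt, steps):
        """Implicit Euler: solve y_new - y - dt*f(y_new) = 0 by Newton."""
        y = np.array(y0, float)
        I = np.eye(3)
        for _ in range(steps):
            y_new = y.copy()
            for _ in range(20):                      # Newton iterations
                r = y_new - y - dt * f(y_new)
                y_new -= np.linalg.solve(I - dt * jac(y_new), r)
            y = y_new
        return y

    # dt = 1e-2 s is 10^4 times larger than the fast time scale 1/K1
    y = backward_euler([1.0, 0.0, 0.0], dt=1e-2, steps=100)   # integrate to t = 1 s
    ```

    At t = 1 s the fast species A is exhausted, roughly a third of B remains, and mass is conserved; an explicit step at this dt would diverge immediately.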

  16. Simple Reaction Time and Statistical Facilitation: A Parallel Grains Model

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf

    2003-01-01

    A race-like model is developed to account for various phenomena arising in simple reaction time (RT) tasks. Within the model, each stimulus is represented by a number of grains of information or activation processed in parallel. The stimulus is detected when a criterion number of activated grains reaches a decision center. Using the concept of…
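    A minimal simulation of a grains-style race model, assuming exponential grain latencies and an arbitrary criterion count (both illustrative choices, not the paper's fitted parameters): RT on each trial is the time at which the K-th activated grain reaches the decision center, i.e. the K-th order statistic of the latencies.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_rt(n_grains=100, criterion=10, trials=5000):
        """RT = arrival time of the criterion-th grain (K-th order statistic)."""
        lat = rng.exponential(scale=50.0, size=(trials, n_grains))  # latencies, ms
        lat.sort(axis=1)
        return lat[:, criterion - 1]

    single = simulate_rt()
    # Crude stand-in for a redundant target: two stimuli contribute grains to
    # the same race, so the criterion count is reached sooner on average.
    double = simulate_rt(n_grains=200)
    ```

    Doubling the number of grains in the race shifts the whole RT distribution earlier, which is the statistical facilitation the title refers to.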

  17. An Investigation of Model Catalyzed Hydrocarbon Formation Reactions

    SciTech Connect

    Tysoe, W. T.

    2001-05-02

    Work was focused on two areas aimed at understanding the chemistry of realistic catalytic systems: (1) The synthesis and characterization of model supported olefin metathesis catalysts. (2) Understanding the role of the carbonaceous layer present on Pd(111) single crystal model catalysts during reaction.

  18. Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions.

    ERIC Educational Resources Information Center

    Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.

    1998-01-01

    As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…

  19. Mathematical models and illustrative results for the RINGBEARER II monopole/dipole beam-propagation code

    SciTech Connect

    Chambers, F.W.; Masamitsu, J.A.; Lee, E.P.

    1982-05-24

    RINGBEARER II is a linearized monopole/dipole particle simulation code for studying intense relativistic electron beam propagation in gas. In this report the mathematical models utilized for beam particle dynamics and pinch field computation are delineated. Difficulties encountered in code operations and some remedies are discussed. Sample output is presented detailing the diagnostics and the methods of display and analysis utilized.

  20. A mathematical model for foreign body reactions in 2D

    PubMed Central

    Su, Jianzhong; Gonzales, Humberto Perez; Todorov, Michail; Kojouharov, Hristo; Tang, Liping

    2010-01-01

    Foreign body reactions commonly refer to the network of immune and inflammatory reactions of humans or animals to foreign objects placed in tissues. They are basic biological processes and are also highly relevant to bioengineering applications in implants, as fibrotic tissue formations surrounding medical implants have been found to substantially reduce the effectiveness of devices. Despite intensive research on determining the mechanisms governing such complex responses, few mechanistic mathematical models have been developed to study such foreign body reactions. This study focuses on a kinetics-based predictive tool in order to analyze outcomes of multiple interactive complex reactions of various cells/proteins and biochemical processes and to understand transient behavior during the entire period (up to several months). A computational model in two spatial dimensions is constructed to investigate the time dynamics as well as spatial variation of foreign body reaction kinetics. The simulation results have been consistent with experimental data, and the model can provide quantitative insights for the study of the foreign body reaction process in general. PMID:21532988
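    The kinetics-plus-diffusion structure of such models can be sketched with a single mediator species: it is secreted at an implant site, diffuses through a 2D domain, and decays. The real model couples many cell and protein species; the one-species version below, with assumed parameters and periodic boundaries, shows only the numerical skeleton (explicit finite differences).

    ```python
    import numpy as np

    # One diffusing, decaying mediator secreted at an "implant" site.
    # Illustrative single-species skeleton, not the paper's coupled system.
    D, SECRETE, DECAY = 0.2, 1.0, 0.05   # diffusion, source, decay (lattice units)
    N, DT, STEPS = 41, 0.5, 400          # grid size, time step, steps

    c = np.zeros((N, N))
    src = np.zeros((N, N)); src[N // 2, N // 2] = SECRETE   # implant site

    for _ in range(STEPS):
        # 5-point Laplacian with periodic boundaries
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c)
        c = c + DT * (D * lap + src - DECAY * c)
    ```

    The concentration field settles toward a peak at the implant site that falls off with distance; coupling several such fields through nonlinear reaction terms is what turns this skeleton into a foreign-body-reaction model.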

  1. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  2. Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation

    DTIC Science & Technology

    2009-05-20

    Shanna-Shaye Forbes, Electrical Engineering and Computer Sciences... periodic and there are multiple modes of operation. Ptolemy II is a university based open source modeling and simulation framework that supports model

  3. Recent Developments of the Liège Intranuclear Cascade Model in View of its Use into High-energy Transport Codes

    NASA Astrophysics Data System (ADS)

    Leray, S.; Boudard, A.; Braunn, B.; Cugnon, J.; David, J. C.; Leprince, A.; Mancusi, D.

    2014-04-01

    Recent extensions of the Liège Intranuclear Cascade model, INCL, at energies below 100 MeV and for light-ion (up to oxygen) induced reactions are reported. Comparisons with relevant experimental data are shown. The model has been implemented into several high-energy transport codes allowing simulations in a wide domain of applications. Examples of simulations performed for spallation targets with the model implemented into MCNPX and in the domain of medical applications with GEANT4 are presented.

  4. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
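    The vectorization strategy the abstract describes (long single arrays over all particles instead of a per-particle loop) can be sketched as follows; the linear-drag law, the implicit velocity update, and the uniform gas stream are illustrative stand-ins for FDNS's actual submodels.

    ```python
    import numpy as np

    def advance(pos, vel, gas_vel, tau_p, dt, steps):
        """Advance all particles at once with whole-array operations.
        Implicit-in-drag update: v_new = (v + dt/tau * u_gas) / (1 + dt/tau)."""
        for _ in range(steps):
            u = gas_vel(pos)                                  # gas velocity at particles
            vel = (vel + (dt / tau_p) * u) / (1.0 + dt / tau_p)
            pos = pos + dt * vel
        return pos, vel

    # Uniform gas stream in +x as an illustrative stand-in flow field
    gas = lambda p: np.tile([10.0, 0.0, 0.0], (len(p), 1))

    n = 1000                                # all particle states in long arrays
    pos = np.zeros((n, 3))
    vel = np.zeros((n, 3))
    pos, vel = advance(pos, vel, gas, tau_p=1e-3, dt=1e-4, steps=200)
    ```

    After many drag times the particles relax to the gas velocity; the point of the array layout is that each update is one vectorized operation over all particles rather than a scalar loop.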

  5. Model of defect reactions and the influence of clustering in pulse-neutron-irradiated Si

    SciTech Connect

    Myers, S. M.; Cooper, P. J.; Wampler, W. R.

    2008-08-15

    Transient reactions among irradiation defects, dopants, impurities, and carriers in pulse-neutron-irradiated Si were modeled taking into account the clustering of the primal defects in recoil cascades. Continuum equations describing the diffusion, field drift, and reactions of relevant species were numerically solved for a submicrometer spherical volume, within which the starting radial distributions of defects could be varied in accord with the degree of clustering. The radial profiles corresponding to neutron irradiation were chosen through pair-correlation-function analysis of vacancy and interstitial distributions obtained from the binary-collision code MARLOWE, using a spectrum of primary recoil energies computed for a fast-burst fission reactor. Model predictions of transient behavior were compared with a variety of experimental results from irradiated bulk Si, solar cells, and bipolar-junction transistors. The influence of defect clustering during neutron bombardment was further distinguished through contrast with electron irradiation, where the primal point defects are more uniformly dispersed.

  6. Seepage and Piping through Levees and Dikes using 2D and 3D Modeling Codes

    DTIC Science & Technology

    2016-06-01

    Coastal and Hydraulics Laboratory. Hwai-Ping Cheng, Stephen M. England, and Clarissa M. Murray. June 2016. Flood & Coastal Storm Damage Reduction Program, ERDC/CHL TR-16-6.

  7. A Computer Code for the Calculation of NLTE Model Atmospheres Using ALI

    NASA Astrophysics Data System (ADS)

    Kubát, J.

    2003-01-01

    A code for the calculation of NLTE model atmospheres in hydrostatic and radiative equilibrium, in either spherically symmetric or plane-parallel geometry, is described. The method of accelerated lambda iteration is used for the treatment of radiative transfer. The other equations (hydrostatic equilibrium, radiative equilibrium, statistical equilibrium, optical depth) are solved using the Newton-Raphson method (linearization). In addition to the standard output of the model atmosphere (dependence of temperature, density, radius, and population numbers on column mass depth), the code enables optional additional outputs for better understanding of processes in the atmosphere. The code is able to calculate model atmospheres of plane-parallel and spherically symmetric semi-infinite atmospheres as well as models of plane-parallel and spherical shells. There is also an option for solving the restricted problem of NLTE line formation (solution of radiative transfer and statistical equilibrium for a given model atmosphere). The overall scheme of the code is presented.

  8. Development and Calibration of Reaction Models for Multilayered Nanocomposites

    NASA Astrophysics Data System (ADS)

    Vohra, Manav

    This dissertation focuses on the development and calibration of reaction models for multilayered nanocomposites. The nanocomposites comprise sputter-deposited alternating layers of distinct metallic elements. Specifically, we focus on the equimolar Ni-Al and Zr-Al multilayered systems. Computational models are developed to capture the transient reaction phenomena as well as understand the dependence of reaction properties on the microstructure, composition and geometry of the multilayers. Together with the available experimental data, simulations are used to calibrate the models and enhance the accuracy of their predictions. Recent modeling efforts for the Ni-Al system have investigated the nature of self-propagating reactions in the multilayers. Model fidelity was enhanced by incorporating melting effects due to aluminum [Besnoin et al. (2002)]. Salloum and Knio formulated a reduced model to mitigate computational costs associated with multi-dimensional reaction simulations [Salloum and Knio (2010a)]. However, existing formulations relied on a single Arrhenius correlation for diffusivity, estimated for the self-propagating reactions, and cannot be used to quantify mixing rates at lower temperatures within reasonable accuracy [Fritz (2011)]. We thus develop a thermal model for a multilayer stack comprising a reactive Ni-Al bilayer (nanocalorimeter) and exploit temperature evolution measurements to calibrate the diffusion parameters associated with solid-state mixing (≈720-860 K) in the bilayer. The equimolar Zr-Al multilayered system, when reacted aerobically, is shown to exhibit slow aerobic oxidation of zirconium (in the intermetallic), sustained for about 2-10 seconds after completion of the formation reaction. In a collaborative effort, we aim to exploit the sustained heat release for bio-agent defeat applications. A simplified computational model is developed to capture the extended reaction regime characterized by oxidation of Zr-Al multilayers

  9. Chemical and mathematical modeling of asphaltene reaction pathways

    SciTech Connect

    Savage, P.E.

    1986-01-01

    Precipitated asphaltene was subjected to pyrolysis and hydropyrolysis, both neat and in solvents, and catalytic hydroprocessing. A solvent extraction procedure defined gas, maltene, asphaltene, and coke product fractions. The apparent first-order rate constant for asphaltene conversion at 400 °C was relatively insensitive to the particular reaction scheme. The yield of gases likewise showed little variation and was always less than 10%. On the other hand, the maltene and coke yields were about 20% and 60%, respectively, from neat pyrolysis, and about 60% and less than 5%, respectively, from catalytic reactions. The temporal variations of the product fractions allowed discernment of asphaltene reaction pathways. The primary reaction of asphaltene was to residual asphaltene, maltenes, and gases. The residual asphaltene reacted thermally to coke and catalytically to maltenes at the expense of coke. Secondary degradation of these primary products led to lighter compounds. Reaction mechanisms for pyrolysis of asphaltene model compounds and alkylaromatics were determined. The model compound kinetics results were combined with a stochastic description of asphaltene structure in a mathematical model of asphaltene pyrolysis. Individual molecular products were assigned to either the gas, maltene, asphaltene, or coke product fractions, and summation of the weights of each constituted the model's predictions. The temporal variation of the product fractions from simulated asphaltene pyrolysis compared favorably with experimental results.

  10. First principles based mean field model for oxygen reduction reaction.

    PubMed

    Jinnouchi, Ryosuke; Kodama, Kensaku; Hatanaka, Tatsuya; Morimoto, Yu

    2011-12-21

    A first principles-based mean field model was developed for the oxygen reduction reaction (ORR) taking account of the coverage- and material-dependent reversible potentials of the elementary steps. This model was applied to the simulation of single crystal surfaces of Pt, Pt alloy and Pt core-shell catalysts under Ar and O(2) atmospheres. The results are consistent with those shown by past experimental and theoretical studies on surface coverages under Ar atmosphere, the shape of the current-voltage curve for the ORR on Pt(111) and the material-dependence of the ORR activity. This model suggests that the oxygen associative pathway including HO(2)(ads) formation is the main pathway on Pt(111), and that the rate determining step (RDS) is the removal step of O(ads) on Pt(111). This RDS is accelerated on several highly active Pt alloys and core-shell surfaces, and this acceleration decreases the reaction intermediate O(ads). The increase in the partial pressure of O(2)(g) increases the surface coverage with O(ads) and OH(ads), and this coverage increase reduces the apparent reaction order with respect to the partial pressure to less than unity. This model shows details on how the reaction pathway, RDS, surface coverages, Tafel slope, reaction order and material-dependent activity are interrelated.
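    The mean-field ingredients above can be illustrated with a one-adsorbate sketch: adsorption and desorption rates take a Butler-Volmer potential dependence, and the steady-state coverage follows from their balance, giving a Langmuir/Nernstian coverage-potential curve. The k0, E0, and alpha values are illustrative assumptions, not the paper's first-principles parameters.

    ```python
    import numpy as np

    # One-adsorbate mean-field sketch with Butler-Volmer rate constants.
    # k0, E0, alpha below are illustrative, not first-principles values.
    F_RT = 38.94   # F/(R*T) at 298 K, in 1/V

    def steady_coverage(E, k0=1.0, E0=0.8, alpha=0.5):
        """Solve k_f*(1 - theta) = k_b*theta for theta at electrode potential E."""
        kf = k0 * np.exp(-alpha * F_RT * (E - E0))        # adsorption (reduction)
        kb = k0 * np.exp((1 - alpha) * F_RT * (E - E0))   # desorption (oxidation)
        return kf / (kf + kb)

    E = np.linspace(0.5, 1.1, 7)   # potentials in V
    theta = steady_coverage(E)
    ```

    Coverage falls from near 1 to near 0 as the potential sweeps through E0, with theta = 0.5 exactly at E0; in the full model several such coupled balances (O, OH, HO2 intermediates) determine the Tafel slope and reaction order discussed in the abstract.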

  11. Stability Analysis of a Model for Foreign Body Fibrotic Reactions

    PubMed Central

    Ibraguimov, A.; Owens, L.; Su, J.; Tang, L.

    2012-01-01

    Implanted medical devices often trigger immunological and inflammatory reactions from surrounding tissues. The foreign body-mediated tissue responses may result in varying degrees of fibrotic tissue formation. There is intensive research interest in the area of wound healing modeling, and quantitative methods have been proposed to systematically study the behavior of this complex system of multiple cells, proteins, and enzymes. This paper introduces a kinetics-based model for analyzing reactions of various cells/proteins and biochemical processes as well as their transient behavior during implant healing in 2-dimensional space. In particular, we provide a detailed modeling study of the different roles of macrophages (MΦ) and their effects on fibrotic reactions. The main mathematical result indicates that the stability of the inflamed steady state depends primarily on the reaction dynamics of the system. However, if that equilibrium is unstable in the reaction-only system, the spatial diffusion and chemotactic effects can help to stabilize it when the model is dominated by classical and regulatory macrophages over the inflammatory macrophages. Mathematical proofs and counterexamples are given for these conclusions. PMID:23193430

  12. Higher-order ionosphere modeling for CODE's next reprocessing activities

    NASA Astrophysics Data System (ADS)

    Lutz, S.; Schaer, S.; Meindl, M.; Dach, R.; Steigenberger, P.

    2009-12-01

    CODE (the Center for Orbit Determination in Europe) is a joint venture between the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland), the Federal Office of Topography (swisstopo, Wabern, Switzerland), the Federal Agency for Cartography and Geodesy (BKG, Frankfurt am Main, Germany), and the Institut für Astronomische und Physikalische Geodäsie of the Technische Universität München (IAPG/TUM, Munich, Germany). It acts as one of the global analysis centers of the International GNSS Service (IGS) and participates in the first IGS reprocessing campaign, a full reanalysis of GPS data collected since 1994. For a future reanalysis of the IGS data it is planned to consider not only first-order but also higher-order ionosphere terms in the space geodetic observations. Several works (e.g., Fritsche et al. 2005) have shown a significant and systematic influence of these effects on the analysis results. The development version of the Bernese Software used at CODE has been extended with the ability to assign additional (scaling) parameters to each considered higher-order ionosphere term. In this way, each correction term can be switched on and off at normal-equation level and, moreover, the significance of each correction term may be verified at observation level for different ionosphere conditions.

  13. Pattern-based video coding with dynamic background modeling

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    The existing video coding standard H.264 could not provide expected rate-distortion (RD) performance for macroblocks (MBs) with both moving objects and static background and the MBs with uncovered background (previously occluded). The pattern-based video coding (PVC) technique partially addresses the first problem by separating and encoding moving area and skipping background area at block level using binary pattern templates. However, the existing PVC schemes could not outperform the H.264 with significant margin at high bit rates due to the least number of MBs classified using the pattern mode. Moreover, both H.264 and the PVC scheme could not provide the expected RD performance for the uncovered background areas due to the unavailability of the reference areas in the existing approaches. In this paper, we propose a new PVC technique which will use the most common frame in a scene (McFIS) as a reference frame to overcome the problems. Apart from the use of McFIS as a reference frame, we also introduce a content-dependent pattern generation strategy for better RD performance. The experimental results confirm the superiority of the proposed schemes in comparison with the existing PVC and the McFIS-based methods by achieving significant image quality gain at a wide range of bit rates.
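    One simple way to build a McFIS-like long-term reference (a hedged sketch; the paper's actual McFIS construction may differ) is a per-pixel median over a window of frames, which suppresses transient foreground and retains the static or previously occluded background:

    ```python
    import numpy as np

    def background_frame(frames):
        """frames: (T, H, W) uint8 stack -> per-pixel median background estimate."""
        return np.median(np.asarray(frames), axis=0).astype(np.uint8)

    # Synthetic scene: a static background crossed by a small moving object
    bg = np.full((16, 16), 100, np.uint8)
    frames = np.repeat(bg[None], 9, axis=0).copy()
    for t in range(9):
        frames[t, 5, t] = 255        # object occupies each pixel in only 1 of 9 frames

    est = background_frame(frames)
    ```

    Because the moving object covers any given pixel in only a minority of frames, the median recovers the background exactly there, giving the encoder a reference for uncovered-background regions that no recent frame provides.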

  14. Synthesis of superheavy elements: Uncertainty analysis to improve the predictive power of reaction models

    NASA Astrophysics Data System (ADS)

    Lü, Hongliang; Boilley, David; Abe, Yasuhisa; Shen, Caiwan

    2016-09-01

    Background: Synthesis of superheavy elements is performed by heavy-ion fusion-evaporation reactions. However, fusion is known to be hindered with respect to what can be observed with lighter ions. Thus some delicate ambiguities remain on the fusion mechanism that eventually lead to severe discrepancies in the calculated formation probabilities coming from different fusion models. Purpose: In the present work, we propose a general framework based upon uncertainty analysis in the hope of constraining fusion models. Method: To quantify uncertainty associated with the formation probability, we propose to propagate uncertainties in data and parameters using the Monte Carlo method in combination with a cascade code called kewpie2, with the aim of determining the associated uncertainty, namely the 95 % confidence interval. We also investigate the impact of different models or options, which cannot be modeled by continuous probability distributions, on the final results. An illustrative example is presented in detail and then a systematic study is carried out for a selected set of cold-fusion reactions. Results: It is rigorously shown that, at the 95 % confidence level, the total uncertainty of the empirical formation probability appears comparable to the discrepancy between calculated values. Conclusions: The results obtained from the present study provide direct evidence for predictive limitations of the existing fusion-evaporation models. It is thus necessary to find other ways to assess such models for the purpose of establishing a more reliable reaction theory, which is expected to guide future experiments on the production of superheavy elements.
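    The uncertainty-propagation recipe in the abstract (sample uncertain inputs with Monte Carlo, push each sample through the model, read the 95% confidence interval off the percentiles) can be sketched with a toy stand-in for the kewpie2 calculation; the exponential "formation probability" and its parameter values are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def formation_probability(barrier, temperature):
        """Toy stand-in for the cascade calculation: P = exp(-B/T)."""
        return np.exp(-barrier / temperature)

    B_MEAN, B_SIGMA, T = 6.0, 1.0, 1.0   # illustrative (assumed) values

    # Monte Carlo propagation: sample the uncertain barrier, evaluate the model
    samples = formation_probability(rng.normal(B_MEAN, B_SIGMA, 10000), T)
    lo, hi = np.percentile(samples, [2.5, 97.5])   # 95% confidence interval
    ```

    Even this modest barrier uncertainty spreads the 95% interval over more than an order of magnitude, mirroring the paper's point that calculated formation probabilities carry uncertainties comparable to the discrepancies between fusion models.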

  15. The fundamental diagram of pedestrian model with slow reaction

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Qin, Zheng; Hu, Hao; Xu, Zhaohui; Li, Huan

    2012-12-01

    The slow-to-start models are classical cellular automata models for simulating vehicle traffic. However, to our knowledge, the slow-to-start effect has not been considered in modeling pedestrian dynamics. We verify similar behavior between pedestrians and vehicles, and propose a new lattice gas (LG) model, called the slow reaction (SR) model, to describe the pedestrian's delayed reaction in single-file movement. We simulate and reproduce Seyfried's field experiments at the Research Centre Jülich, and use their empirical data to validate our SR model. We compare the SR model with the standard LG model. We tested different probabilities of slow reaction ps in the SR model and found that the simulation data with ps=0.3 fit the empirical data best. The RMS error of the mean velocity of the SR model is smaller than that of the standard LG model. In the range ps=0.1-0.3, our simulated fundamental diagram between velocity and density coincides with the field experiments. The distribution of individual velocities in the fundamental diagram of the SR model agrees with the empirical data better than that of the standard LG model. In addition, we observe stop-and-go waves and phase separation in pedestrian flow by simulation. The SR model reproduced the uneven distribution of interspaces, while the standard LG model did not. The SR model can reproduce the evolution of spatio-temporal structures of pedestrian flow with higher fidelity to Seyfried's experiments than the standard LG model.
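    The slow-reaction rule can be sketched as a 1D lattice gas on a ring: each pedestrian advances one cell per parallel update if the cell ahead is empty, except that with probability ps the move is skipped (the delayed reaction). The ring geometry, density values, and ps below are illustrative choices, not Seyfried's experimental setup.

    ```python
    import random
    import numpy as np

    def step(occ, ps, rng):
        """One parallel update on a ring; each pedestrian advances one cell if
        the cell ahead is empty, except with probability ps (slow reaction)."""
        n = len(occ)
        new = occ.copy()
        moved = 0
        for i in np.flatnonzero(occ):
            ahead = (i + 1) % n
            if not occ[ahead] and rng.random() >= ps:
                new[i], new[ahead] = False, True
                moved += 1
        return new, moved

    def mean_velocity(density, ps, n=100, steps=500, seed=0):
        """Average cells moved per pedestrian per step at a given density."""
        rng = random.Random(seed)
        occ = np.zeros(n, bool)
        occ[rng.sample(range(n), int(density * n))] = True
        total = 0
        for _ in range(steps):
            occ, moved = step(occ, ps, rng)
            total += moved
        return total / (steps * occ.sum())

    v_free = mean_velocity(0.1, ps=0.3)   # dilute: near free flow
    v_jam = mean_velocity(0.8, ps=0.3)    # dense: jammed
    ```

    Scanning the density while recording the mean velocity traces out the qualitative fundamental-diagram shape: velocities near the free-flow value at low density and a sharp drop once pedestrians block one another.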

  16. Dysregulation of REST-regulated coding and non-coding RNAs in a cellular model of Huntington's disease.

    PubMed

    Soldati, Chiara; Bithell, Angela; Johnston, Caroline; Wong, Kee-Yew; Stanton, Lawrence W; Buckley, Noel J

    2013-02-01

    Huntingtin (Htt) protein interacts with many transcriptional regulators, with widespread disruption to the transcriptome in Huntington's disease (HD) brought about by altered interactions with the mutant Htt (muHtt) protein. Repressor Element-1 Silencing Transcription Factor (REST) is a repressor whose association with Htt in the cytoplasm is disrupted in HD, leading to increased nuclear REST and concomitant repression of several neuronal-specific genes, including brain-derived neurotrophic factor (Bdnf). Here, we explored a wide set of HD dysregulated genes to identify direct REST targets whose expression is altered in a cellular model of HD but that can be rescued by knock-down of REST activity. We found many direct REST target genes encoding proteins important for nervous system development, including a cohort involved in synaptic transmission, at least two of which can be rescued at the protein level by REST knock-down. We also identified several microRNAs (miRNAs) whose aberrant repression is directly mediated by REST, including miR-137, which has not previously been shown to be a direct REST target in mouse. These data provide evidence of the contribution of inappropriate REST-mediated transcriptional repression to the widespread changes in coding and non-coding gene expression in a cellular model of HD that may affect normal neuronal function and survival.

  17. Implementation of a vibrationally linked chemical reaction model for DSMC

    NASA Technical Reports Server (NTRS)

    Carlson, A. B.; Bird, Graeme A.

    1994-01-01

    A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates that agree remarkably well with published rates derived from experiment. The results for gas and surface properties also agree well with those produced by earlier DSMC models, equilibrium air calculations, and experiment.

  18. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable the design of more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  19. Polymerase chain reaction-mediated gene synthesis: synthesis of a gene coding for isozyme c of horseradish peroxidase.

    PubMed Central

    Jayaraman, K; Fingar, S A; Shah, J; Fyles, J

    1991-01-01

    The synthesis of a gene coding for horseradish peroxidase (HRP, isozyme c; EC 1.11.1.7) is described using a polymerase chain reaction (PCR)-mediated gene synthesis approach developed in our laboratory. In this approach, all the oligonucleotides making up the gene are ligated in a single step by using the two outer oligonucleotides as PCR primers and the crude ligation mixture as the target. The PCR facilitates synthesis and purification of the gene simultaneously. The gene for HRP was synthesized by ligating all 40 oligonucleotides in a single step followed by PCR amplification. The gene was also synthesized from its fragments by using an overlap extension method similar to the procedure as described [Horton, R. M., Hunt, H. D., Ho, S. N., Pullen, J. K. & Pease, L. R. (1989) Gene 77, 61-68]. A method for combining different DNA fragments, in-frame, by using the PCR was also developed and used to synthesize the HRP gene from its gene fragments. This method is applicable to the synthesis of even larger genes and to combine any DNA fragments in-frame. After the synthesis, preliminary characterization of the HRP gene was also carried out by the PCR to confirm the arrangement of oligonucleotides in the gene. This was done by carrying out the PCR with several sets of primers along the gene and comparing the product sizes with the expected sizes. The gene and the fragments generated by PCR were cloned in Escherichia coli and the sequence was confirmed by manual and automated DNA sequencing. PMID:1851991

  20. Polymerase chain reaction-mediated gene synthesis: synthesis of a gene coding for isozyme c of horseradish peroxidase.

    PubMed

    Jayaraman, K; Fingar, S A; Shah, J; Fyles, J

    1991-05-15

    The synthesis of a gene coding for horseradish peroxidase (HRP, isozyme c; EC 1.11.1.7) is described using a polymerase chain reaction (PCR)-mediated gene synthesis approach developed in our laboratory. In this approach, all the oligonucleotides making up the gene are ligated in a single step by using the two outer oligonucleotides as PCR primers and the crude ligation mixture as the target. The PCR facilitates synthesis and purification of the gene simultaneously. The gene for HRP was synthesized by ligating all 40 oligonucleotides in a single step followed by PCR amplification. The gene was also synthesized from its fragments by using an overlap extension method similar to the procedure as described [Horton, R. M., Hunt, H. D., Ho, S. N., Pullen, J. K. & Pease, L. R. (1989) Gene 77, 61-68]. A method for combining different DNA fragments, in-frame, by using the PCR was also developed and used to synthesize the HRP gene from its gene fragments. This method is applicable to the synthesis of even larger genes and to combine any DNA fragments in-frame. After the synthesis, preliminary characterization of the HRP gene was also carried out by the PCR to confirm the arrangement of oligonucleotides in the gene. This was done by carrying out the PCR with several sets of primers along the gene and comparing the product sizes with the expected sizes. The gene and the fragments generated by PCR were cloned in Escherichia coli and the sequence was confirmed by manual and automated DNA sequencing.

  1. A chain reaction approach to modelling gene pathways

    PubMed Central

    Cheng, Gary C.; Chen, Dung-Tsa; Chen, James J.; Soong, Seng-jaw; Lamartiniere, Coral; Barnes, Stephen

    2012-01-01

    Background Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis) to identify genes and their associated pathways that are affected by nutrients and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we have proposed a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, the transformation of genes over time under a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment on the effects of three polyphenols (nutrient treatments), epigallo-catechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty. Results In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model.
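The idea of representing pathway changes as species in a set of chemical reactions can be sketched with a hypothetical linear chain A -> B -> C integrated by explicit Euler time stepping. The species names, rate constants, and step size here are illustrative assumptions, not values from the study:

```python
def simulate_chain(k1=0.5, k2=0.3, a0=1.0, dt=0.01, t_end=20.0):
    """Toy chain-reaction pathway: A -> B -> C with first-order rate
    constants k1, k2. Gene-expression levels are treated as species
    concentrations evolving over time, as in the abstract."""
    a, b, c = a0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        da = -k1 * a            # A is consumed
        db = k1 * a - k2 * b    # B is produced from A, consumed into C
        dc = k2 * b             # C accumulates
        a += da * dt
        b += db * dt
        c += dc * dt
        t += dt
    return a, b, c
```

Because the rate terms sum to zero at every step, total "mass" is conserved, a useful sanity check when extending the chain to branched pathways with many species.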

  2. A chain reaction approach to modelling gene pathways.

    PubMed

    Cheng, Gary C; Chen, Dung-Tsa; Chen, James J; Soong, Seng-Jaw; Lamartiniere, Coral; Barnes, Stephen

    2012-08-01

    BACKGROUND: Of great interest in cancer prevention is how nutrient components affect gene pathways associated with the physiological events of puberty. Nutrient-gene interactions may cause changes in breast or prostate cells and, therefore, may result in cancer risk later in life. Analysis of gene pathways can lead to insights about nutrient-gene interactions and the development of more effective prevention approaches to reduce cancer risk. To date, researchers have relied heavily upon experimental assays (such as microarray analysis) to identify genes and their associated pathways that are affected by nutrients and diets. However, the vast number of genes and combinations of gene pathways, coupled with the expense of the experimental analyses, has delayed the progress of gene-pathway research. The development of an analytical approach based on available test data could greatly benefit the evaluation of gene pathways, and thus advance the study of nutrient-gene interactions in cancer prevention. In the present study, we have proposed a chain reaction model to simulate gene pathways, in which the gene expression changes through the pathway are represented by species undergoing a set of chemical reactions. We have also developed a numerical tool to solve for the species changes due to the chain reactions over time. Through this approach we can examine the impact of nutrient-containing diets on the gene pathway; moreover, the transformation of genes over time under a nutrient treatment can be observed numerically, which is very difficult to achieve experimentally. We apply this approach to microarray analysis data from an experiment on the effects of three polyphenols (nutrient treatments), epigallo-catechin-3-O-gallate (EGCG), genistein, and resveratrol, in a study of nutrient-gene interaction in the estrogen synthesis pathway during puberty.
    RESULTS: In this preliminary study, the estrogen synthesis pathway was simulated by a chain reaction model.

  3. Modeling human behaviors and reactions under dangerous environment.

    PubMed

    Kang, J; Wright, D K; Qin, S F; Zhao, Y

    2005-01-01

    This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers, in real-time, in virtual environments. The development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling the character's perceptions, decision making, movements and interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, safety planning in chemical factories, and the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch and hearing; and the diversity and effects of emotion and personality in decision making. Three types of software platform could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.

  4. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.

  5. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.

  6. Documentation for grants equal to tax model: Volume 3, Source code

    SciTech Connect

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III/trademark/, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III/trademark/ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.

  7. An Advanced simulation Code for Modeling Inductive Output Tubes

    SciTech Connect

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D time-harmonic Helmholtz field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions, which provides time-dependent fields for the time-stepping particle pusher as well as the self-electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self-magnetic field caused by the time-changing current density in the output cavity gap. The goal function for optimizing an IOT cavity was also formulated, and optimization methodologies were investigated.

  8. Modeling of Ionization Physics with the PIC Code OSIRIS

    SciTech Connect

    Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC

    2005-09-27

    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.

  9. Mathematical properties of models of the reaction-diffusion type

    NASA Astrophysics Data System (ADS)

    Beccaria, M.; Soliani, G.

    Nonlinear systems of the reaction-diffusion (RD) type, including Gierer-Meinhardt models of autocatalysis, are studied using Lie algebras coming from their prolongation structure. Depending on the form of the functions of the fields characterizing the reactions among them, we consider both quadratic and cubic RD equations. On the basis of the prolongation algebra associated with a given RD model, we distinguish the model as a completely linearizable or a partially linearizable system. In this classification a crucial role is played by the relative sign of the diffusion coefficients, which strongly influence the properties of the system. In correspondence to the above situations, different algebraic characterizations, together with exact and approximate solutions, are found. Interesting examples are the quadratic RD model, which admits an exact solution in terms of the elliptic Weierstrass function, and the cubic Gierer-Meinhardt model, whose prolongation algebra leads to the similitude group in the plane.
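For orientation, one common form of the Gierer-Meinhardt activator-inhibitor system mentioned above is the following; notation and exponents vary between authors, and the paper's exact quadratic and cubic variants are not reproduced in the abstract:

```latex
\begin{aligned}
\frac{\partial a}{\partial t} &= D_a \nabla^2 a + \rho\,\frac{a^2}{h} - \mu_a a,\\[2pt]
\frac{\partial h}{\partial t} &= D_h \nabla^2 h + \rho'\,a^2 - \mu_h h,
\end{aligned}
```

where $a$ is the autocatalytic activator and $h$ the inhibitor. The relative sign and magnitude of the diffusion coefficients $D_a$ and $D_h$ are exactly the quantities the prolongation-algebra classification above identifies as decisive.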

  10. Thrust Chamber Modeling Using Navier-Stokes Equations: Code Documentation and Listings. Volume 2

    NASA Technical Reports Server (NTRS)

    Daley, P. L.; Owens, S. F.

    1988-01-01

    A copy of the PHOENICS input files and FORTRAN code developed for the modeling of thrust chambers is given. These copies are contained in the Appendices. The listings are contained in Appendices A through E. Appendix A describes the input statements relevant to thrust chamber modeling as well as the FORTRAN code developed for the Satellite program. Appendix B describes the FORTRAN code developed for the Ground program. Appendices C through E contain copies of the Q1 (input) file, the Satellite program, and the Ground program respectively.

  11. CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows

    SciTech Connect

    Winters, W.S.; Evans, G.H.; Moen, C.D.

    1996-10-01

    This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows including the effects of surface chemistry. CURRENT is a finite volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry is modeled using the CHEMKIN software libraries. The CURRENT user interface has been designed to be compatible with the Sandia-developed mesh generator and post-processor ANTIPASTO and the post-processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.

  12. Transport-reaction model for defect and carrier behavior within displacement cascades in gallium arsenide

    SciTech Connect

    Wampler, William R.; Myers, Samuel M.

    2014-02-01

    A model is presented for recombination of charge carriers at displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers and defects within a representative spherically symmetric cluster. The initial radial defect profiles within the cluster were chosen through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Charging of the defects can produce high electric fields within the cluster which may influence transport and reaction of carriers and defects, and which may enhance carrier recombination through band-to-trap tunneling. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to pulsed neutron irradiation.

  13. Including Rebinding Reactions in Well-Mixed Models of Distributive Biochemical Reactions.

    PubMed

    Lawley, Sean D; Keener, James P

    2016-11-15

    The behavior of biochemical reactions requiring repeated enzymatic substrate modification depends critically on whether the enzymes act processively or distributively. Whereas processive enzymes bind only once to a substrate before carrying out a sequence of modifications, distributive enzymes release the substrate after each modification and thus require repeated bindings. Recent experimental and computational studies have revealed that distributive enzymes can act processively due to rapid rebindings (so-called quasi-processivity). In this study, we derive an analytical estimate of the probability of rapid rebinding and show that well-mixed ordinary differential equation models can use this probability to quantitatively replicate the behavior of spatial models. Importantly, rebinding requires that connections be added to the well-mixed reaction network; merely modifying rate constants is insufficient. We then use these well-mixed models to suggest experiments to 1) detect quasi-processivity and 2) test the theory. Finally, we show that rapid rebindings drastically alter the reaction's Michaelis-Menten rate equations.
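The notion of rapid rebinding can be illustrated with a toy Monte Carlo estimate: a substrate released one lattice site away from the enzyme performs an unbiased 1-D random walk, and we count the walks that return within a cutoff number of steps. This is only a stand-in for the analytical rebinding estimate derived in the paper, with all parameters chosen for illustration:

```python
import random

def rebinding_probability(n_steps=100, n_trials=20000, seed=3):
    """Estimate the probability that a substrate released at the lattice
    site adjacent to the enzyme (x = 1) returns to the enzyme (x = 0)
    within n_steps of an unbiased 1-D random walk."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        x = 1                       # released just next to the enzyme
        for _ in range(n_steps):
            x += rng.choice((-1, 1))
            if x == 0:              # rebinding event
                hits += 1
                break
    return hits / n_trials
```

Even this crude sketch shows why rebinding matters: the return probability grows with the allowed time window, so a well-mixed model that ignores these fast recapture events (rather than adding a rebinding connection to the network) misses most of the effective processivity.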

  14. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    SciTech Connect

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations.

  15. Numerical modelling of spallation in 2D hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Maw, J. R.; Giles, A. R.

    1996-05-01

    A model for spallation based on the void growth model of Johnson has been implemented in 2D Lagrangian and Eulerian hydrocodes. The model has been extended to treat complete separation of material when voids coalesce and to describe the effects of elevated temperatures and melting. The capabilities of the model are illustrated by comparison with data from explosively generated spall experiments. Particular emphasis is placed on the prediction of multiple spall effects in weak, low melting point, materials such as lead. The correlation between the model predictions and observations on the strain rate dependence of spall strength is discussed.

  16. Modeling the Reaction of Fe Atoms with CCl4

    SciTech Connect

    Camaioni, Donald M.; Ginovska, Bojana; Dupuis, Michel

    2009-01-05

    The reaction of zero-valent iron with carbon tetrachloride (CCl4) in the gas phase was studied using density functional theory. Temperature-programmed desorption experiments over a range of Fe and CCl4 coverages on an FeO(111) surface demonstrate a rich surface chemistry, with several reaction products (C2Cl4, C2Cl6, OCCl2, CO, FeCl2, FeCl3) observed. The reactivity of Fe and CCl4 was studied at three stoichiometries: one Fe with one CCl4, one Fe with two CCl4 molecules, and two Fe with one CCl4, modeling the environment of the experimental work. The electronic structure calculations give insight into the reactions leading to the experimentally observed products and suggest that novel Fe-C-Cl-containing species are important intermediates in these reactions. The intermediate complexes are formed in highly exothermic reactions, in agreement with the experimentally observed reactivity of the surface at low temperature (30 K). This initial survey of the reactivity of Fe with CCl4 identifies potential reaction pathways that are important in the effort to use Fe nanoparticles to differentiate harmful pathways, which lead to the formation of contaminants like chloroform (CHCl3), from harmless pathways, which lead to products such as formate (HCO2-) or carbon oxides in water and soil. The Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy.

  17. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  18. Publicly Available Numerical Codes for Modeling the X-ray and Microwave Emissions from Solar and Stellar Activity

    NASA Technical Reports Server (NTRS)

    Holman, Gordon D.; Mariska, John T.; McTiernan, James M.; Ofman, Leon; Petrosian, Vahe; Ramaty, Reuven; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    We have posted numerical codes on the Web for modeling the bremsstrahlung X-ray emission and the gyrosynchrotron radio emission from solar and stellar activity. In addition to radiation codes, steady-state and time-dependent Fokker-Planck codes are provided for computing the distribution and evolution of accelerated electrons. A 1-D hydrodynamics code computes the response of the stellar atmosphere (chromospheric evaporation). A code for modeling gamma-ray line spectra is also available. On-line documentation is provided for each code. These codes have been developed for modeling results from the High Energy Solar Spectroscopic Imager (HESSI) along with related microwave observations of solar flares. Comprehensive codes for modeling images and spectra of solar flares are under development. The posted codes can be obtained on NASA/Goddard's HESSI Web site at http://hesperia.gsfc.nasa.gov/hessi/modelware.htm. This work is supported in part by the NASA Sun-Earth Connection Program.

  19. A Multiple Reaction Modelling Framework for Microbial Electrochemical Technologies

    PubMed Central

    Oyetunde, Tolutola; Sarma, Priyangshu M.; Ahmad, Farrukh; Rodríguez, Jorge

    2017-01-01

    A mathematical model for the theoretical evaluation of microbial electrochemical technologies (METs) is presented that incorporates a detailed physico-chemical framework, includes multiple reactions (both at the electrodes and in the bulk phase) and involves a variety of microbial functional groups. The model is applied to two theoretical case studies: (i) A microbial electrolysis cell (MEC) for continuous anodic volatile fatty acids (VFA) oxidation and cathodic VFA reduction to alcohols, for which the theoretical system response to changes in applied voltage and VFA feed ratio (anode-to-cathode) as well as membrane type are investigated. This case involves multiple parallel electrode reactions in both anode and cathode compartments; (ii) A microbial fuel cell (MFC) for cathodic perchlorate reduction, in which the theoretical impact of feed flow rates and concentrations on the overall system performance are investigated. This case involves multiple electrode reactions in series in the cathode compartment. The model structure captures interactions between important system variables based on first principles and provides a platform for the dynamic description of METs involving electrode reactions both in parallel and in series and in both MFC and MEC configurations. Such a theoretical modelling approach, largely based on first principles, appears promising in the development and testing of MET control and optimization strategies. PMID:28054959

  1. Examining the role of finite reaction times in swarming models

    NASA Astrophysics Data System (ADS)

    Copenhagen, Katherine; Quint, David; Gopinathan, Ajay

    2015-03-01

    Modeling collective behavior in biological and artificial systems has had much success in recent years at predicting and mimicking real systems by utilizing techniques borrowed from modeling many-particle systems interacting with physical forces. However, unlike inert particles interacting with instantaneous forces, living organisms have finite reaction times and behaviors that vary from individual to individual. What constraints do these physiological effects place on the interactions between individuals in order to sustain a robust ordered state? We use a self-propelled agent-based model in continuous space, based on previous models by Vicsek and Couzin and including alignment and separation-maintaining interactions, to examine the behavior of a single cohesive group of organisms. We found that for very short reaction times the system is able to form an ordered state even in the presence of heterogeneities. However, for larger, more physiological reaction times, organisms need a buffer zone with no cohesive interactions in order to maintain an ordered state. Finally, swarms with finite reaction times and behavioral heterogeneities are able to dynamically sort out individuals with impaired function and sustain order.
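    The alignment-with-delay mechanism described above can be sketched as a minimal Vicsek-style simulation. All names and parameter values below are illustrative, not taken from the paper's model (which additionally includes separation-maintaining interactions and behavioral heterogeneity):

```python
import numpy as np

def polarization_after(n=50, steps=200, speed=0.03, radius=0.5,
                       delay=0, noise=0.0, seed=0):
    """Minimal Vicsek-style alignment model with a finite reaction time:
    agents steer toward the mean heading their neighbours had `delay`
    steps in the past. Returns the polarization order parameter
    (1 = fully aligned, ~0 = disordered)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n, 2))          # unit periodic box
    theta = rng.uniform(-np.pi, np.pi, n)
    history = [theta.copy()]
    for _ in range(steps):
        past = history[max(0, len(history) - 1 - delay)]
        diff = pos[:, None, :] - pos[None, :, :]
        diff -= np.round(diff)                   # minimum-image distances
        neigh = (diff ** 2).sum(-1) < radius ** 2
        # mean neighbour heading, evaluated on the delayed state
        mx = (neigh * np.cos(past)).sum(axis=1)
        my = (neigh * np.sin(past)).sum(axis=1)
        theta = np.arctan2(my, mx) + noise * rng.uniform(-np.pi, np.pi, n)
        pos = (pos + speed * np.column_stack((np.cos(theta), np.sin(theta)))) % 1.0
        history.append(theta.copy())
    return float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))

order = polarization_after(delay=0, noise=0.0)   # instantaneous reactions
```

    With instantaneous reactions and no noise the group aligns almost completely; increasing `delay` and `noise` in this sketch is the knob the abstract's question turns.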

  2. Transactional Model of Coping, Appraisals, and Emotional Reactions to Stress.

    ERIC Educational Resources Information Center

    Brack, Greg; McCarthy, Christopher J.

    A study investigated the relationship of transactional models of stress management and appraisal-emotion relationships to emotions produced by taking a new job. The participants, 231 graduate students, completed measures of cognitive appraisals, stress coping resources, and emotional reactions at the time of taking a new job and some time later.…

  3. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  4. LWR codes capability to address SFR BDBA scenarios: Modeling of the ABCOVE tests

    SciTech Connect

    Herranz, L. E.; Garcia, M.; Morandi, S.

    2012-07-01

    The sound background built up in LWR source-term analysis of severe accidents makes it worthwhile to check the capability of LWR safety analysis codes to model SFR accident scenarios, at least in some areas. This paper gives a snapshot of such predictability in the area of aerosol behavior in containment. To do so, the AB-5 test of the ABCOVE program has been modeled with three LWR codes: ASTEC, ECART and MELCOR. Through the search for a best-estimate scenario and its comparison to data, it is concluded that even in the specific case of in-containment aerosol behavior, some enhancements would be needed in the LWR codes and/or their application, particularly with respect to the consideration of particle shape. Nonetheless, much of the modeling presently embodied in LWR codes might be applicable to SFR scenarios. These conclusions should be seen as preliminary as long as the comparisons are not extended to more experimental scenarios. (authors)

  5. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed to allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls; (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor; (4) calculations relating to the collection efficiency of the new AeroChem reactor; and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.

  6. Modeling of transmittance degradation caused by optical surface contamination by atomic oxygen reaction with adsorbed silicones

    NASA Astrophysics Data System (ADS)

    Snyder, Aaron; Banks, Bruce A.; Miller, Sharon K.; Stueber, Thomas; Sechkar, Edward

    2000-09-01

    A numerical procedure is presented to calculate transmittance degradation caused by contaminant films on spacecraft surfaces produced through the interaction of orbital atomic oxygen (AO) with volatile silicones and hydrocarbons from spacecraft components. In the model, contaminant accretion depends on the adsorption of species, depletion reactions due to gas-surface collisions, desorption, and surface reactions between AO and silicones producing SiOx (where x is near 2). A detailed description of the procedure used to calculate the constituents of the contaminant layer is presented, including the equations that govern the evolution of fractional coverage by species type. As an illustrative example of film growth, calculation results using a prototype code that calculates the evolution of surface coverage by species type are presented and discussed. An example of the transmittance degradation caused by surface interaction of AO with the deposited contaminant is presented for the case of exponentially decaying contaminant flux. These examples are performed using hypothetical values for the process parameters.

  7. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  8. Recommended requirements to code officials for solar heating, cooling, and hot water systems. Model document for code officials on solar heating and cooling of buildings

    SciTech Connect

    1980-06-01

    These recommended requirements include provisions for electrical, building, mechanical, and plumbing installations for active and passive solar energy systems used for space or process heating and cooling, and domestic water heating. The provisions in these recommended requirements are intended to be used in conjunction with the existing building codes in each jurisdiction. Where a solar relevant provision is adequately covered in an existing model code, the section is referenced in the Appendix. Where a provision has been drafted because there is no counterpart in the existing model code, it is found in the body of these recommended requirements. Commentaries are included in the text explaining the coverage and intent of present model code requirements and suggesting alternatives that may, at the discretion of the building official, be considered as providing reasonable protection to the public health and safety. Also included is an Appendix which is divided into a model code cross reference section and a reference standards section. The model code cross references are a compilation of the sections in the text and their equivalent requirements in the applicable model codes. (MHR)

  9. Numerical modeling of humic colloid borne americium (III) migration in column experiments using the transport/speciation code K1D and the KICAM model.

    PubMed

    Schüssler, W; Artinger, R; Kim, J I; Bryan, N D; Griffin, D

    2001-02-01

    The humic colloid borne Am(III) transport was investigated in column experiments for Gorleben groundwater/sand systems. It was found that the interaction of Am with humic colloids is kinetically controlled, which strongly influences the migration behavior of Am(III). These kinetic effects have to be taken into account for transport/speciation modeling. The kinetically controlled availability model (KICAM) was developed to describe actinide sorption and transport in laboratory batch and column experiments. Application of the KICAM requires a chemical transport/speciation code, which simultaneously models both kinetically controlled processes and equilibrium reactions. Therefore, the code K1D was developed as a flexible research code that allows the inclusion of kinetic data in addition to transport features and chemical equilibrium. This paper presents the verification of K1D and its application to model column experiments investigating unimpeded humic colloid borne Am migration. Parameters for reactive transport simulations were determined for a Gorleben groundwater system of high humic colloid concentration (GoHy 2227). A single set of parameters was used to model a series of column experiments. Model results correspond well to experimental data for the unretarded humic borne Am breakthrough.
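    The essence of kinetically controlled speciation during transport, the behavior KICAM/K1D are built to capture, can be reduced to a toy operator-split step: upwind advection of both the free and the colloid-bound species plus first-order exchange between them. The rate constants, names, and boundary treatment below are illustrative assumptions, not the published parameters:

```python
import numpy as np

def transport_step(c_free, c_bound, v, dx, dt, k_f, k_b, steps):
    """Toy operator-split transport with kinetic speciation: both the free
    and the colloid-bound species advect with velocity v (upwind scheme),
    and exchange c_free <-> c_bound at first-order rates k_f, k_b."""
    for _ in range(steps):
        if v > 0:                                 # advection sub-step
            for c in (c_free, c_bound):
                c[1:] -= v * dt / dx * (c[1:] - c[:-1])
                c[0] = 0.0                        # clean water at the inlet
        ex = dt * (k_f * c_free - k_b * c_bound)  # kinetic exchange sub-step
        c_free -= ex
        c_bound += ex
    return c_free, c_bound

# With v = 0 the exchange relaxes to the equilibrium ratio k_f/k_b.
free, bound = transport_step(np.ones(10), np.zeros(10), v=0.0,
                             dx=0.1, dt=0.01, k_f=2.0, k_b=1.0, steps=2000)
```

    When the advection time scale is short compared with 1/(k_f + k_b), the species are flushed through before equilibrating, which is exactly the kinetic effect the abstract says must be modeled.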

  11. A computer code for calculations in the algebraic collective model of the atomic nucleus

    NASA Astrophysics Data System (ADS)

    Welsh, T. A.; Rowe, D. J.

    2016-03-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_{LM}. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  12. Kinetic modelling of GlmU reactions - prioritization of reaction for therapeutic application.

    PubMed

    Singh, Vivek K; Das, Kaveri; Seshadri, Kothandaraman

    2012-01-01

    Mycobacterium tuberculosis (Mtu), a successful pathogen, has developed resistance against the existing anti-tubercular drugs, necessitating discovery of drugs with novel action. Enzymes involved in peptidoglycan biosynthesis are attractive targets for antibacterial drug discovery. The bifunctional enzyme mycobacterial GlmU (glucosamine-1-phosphate N-acetyltransferase / N-acetylglucosamine-1-phosphate uridyltransferase) has been a target enzyme for drug discovery. Its C- and N-terminal domains catalyze acetyltransferase (rxn-1) and uridyltransferase (rxn-2) activities, respectively, and the final product is involved in peptidoglycan synthesis. However, the bifunctional nature of GlmU poses difficulty in deciding which function should be intervened for therapeutic advantage. Genetic analysis showed this to be an essential gene, but it is still unclear whether any one or both of the activities are critical for cell survival. Often an enzymatic activity with a suitable high-throughput assay is chosen for random screening, which may not be the appropriate biological function to inhibit for maximal effect. Prediction of the rate-limiting function by dynamic network analysis of reactions could be an option to identify the appropriate function. With a view to providing insights into biochemical assays with the appropriate activity for inhibitor screening, kinetic modelling studies on GlmU were undertaken. A kinetic model of Mtu GlmU-catalyzed reactions was built based on the available kinetic data on Mtu and deductions from Escherichia coli data. Several model variants were constructed, including coupled/decoupled, varying metabolite concentrations and presence/absence of product inhibition. This study demonstrates that in the coupled model at low metabolite concentrations, inhibition of either of the GlmU reactions causes a significant decrement in the overall GlmU rate. However, at higher metabolite concentrations, rxn-2 showed a higher decrement. Moreover, with available intracellular concentration of the

  13. Dense Coding in a Two-Spin Squeezing Model with Intrinsic Decoherence

    NASA Astrophysics Data System (ADS)

    Zhang, Bing-Bing; Yang, Guo-Hui

    2016-11-01

    Quantum dense coding in a two-spin squeezing model under intrinsic decoherence with different initial states (Werner state and Bell state) is investigated. It shows that the dense coding capacity χ oscillates with time and finally reaches different stable values. χ can be enhanced by decreasing the magnetic field Ω and the intrinsic decoherence γ or by increasing the squeezing interaction μ; moreover, one can obtain a valid dense coding capacity (χ > 1) by modulating these parameters. The stable value of χ reveals that the decoherence cannot entirely destroy the dense coding capacity. In addition, decreasing Ω or increasing μ can not only enhance the stable value of χ but also impair the effects of decoherence. When the initial state is the Werner state, the purity r of the initial state plays a key role in adjusting the value of the dense coding capacity, and χ can be significantly increased by improving the purity of the initial state. When the initial state is the Bell state, a spin squeezing interaction that is large compared with the magnetic field guarantees optimal dense coding. One cannot always achieve a valid dense coding capacity for the Werner state, while for the Bell state the dense coding capacity χ remains above 1.

  14. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    DTIC Science & Technology

    2014-02-05

    Elsyca CP Master was selected as the best basis for the Underwater Hull Analysis Model; however, additional work performed with COMSOL Multiphysics since the selection indicates that COMSOL should be re-evaluated if the Underwater Hull Analysis Model program is renewed at some future date.

  15. Complex reaction noise in a molecular quasispecies model

    NASA Astrophysics Data System (ADS)

    Hochberg, David; Zorzano, María-Paz; Morán, Federico

    2006-05-01

    We have derived exact Langevin equations for a model of quasispecies dynamics. The inherent multiplicative reaction noise is complex and its statistical properties are specified completely. The numerical simulation of the complex Langevin equations is carried out using the Cholesky decomposition for the noise covariance matrix. This internal noise, which is due to diffusion-limited reactions, produces unavoidable spatio-temporal density fluctuations about the mean field value. In two dimensions, this noise strictly vanishes only in the perfectly mixed limit, a situation difficult to attain in practice.
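    The Cholesky device mentioned above, drawing noise with a prescribed covariance by factoring Σ = LLᵀ and multiplying unit normals by L, can be sketched as follows; the covariance matrix here is illustrative, not the model's complex reaction-noise covariance:

```python
import numpy as np

def correlated_gaussian(cov, n_samples, seed=0):
    """Draw zero-mean Gaussian samples whose covariance is `cov`, by
    multiplying independent unit normals with the Cholesky factor L
    (cov = L @ L.T). Columns of the result are the correlated draws."""
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal((len(cov), n_samples))

cov = [[1.0, 0.6],
       [0.6, 1.0]]
samples = correlated_gaussian(cov, 200_000)
empirical = np.cov(samples)          # should approximate cov
```

    The same factor-and-multiply step, applied to the (complex) noise covariance at each time step, is what drives a Langevin integration of the kind described in the abstract.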

  16. Code modernization and modularization of APEX and SWAT watershed simulation models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  17. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
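    The core loop of such a regression, finite-difference sensitivities feeding a damped Gauss-Newton update on a weighted least-squares objective, can be sketched as below. The constant damping term and all parameter values are illustrative stand-ins, not UCODE's actual modifications:

```python
import numpy as np

def gauss_newton(model, p0, obs, weights, h=1e-6, lam=1e-3, iters=20):
    """Weighted nonlinear regression by a damped Gauss-Newton iteration.
    Sensitivities are approximated by forward differences."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        r = obs - model(p)                        # residual vector
        J = np.empty((len(obs), len(p)))          # sensitivity (Jacobian) matrix
        for j in range(len(p)):
            step = h * max(1.0, abs(p[j]))
            dp = p.copy(); dp[j] += step
            J[:, j] = (model(dp) - model(p)) / step
        A = J.T @ W @ J + lam * np.eye(len(p))    # damped normal equations
        p = p + np.linalg.solve(A, J.T @ W @ r)
    return p

# Usage: recover the parameters of an exponential decay from exact "observations".
t = np.linspace(0.0, 4.0, 30)
f = lambda p: p[0] * np.exp(-p[1] * t)
p_true = np.array([2.0, 0.7])
p_est = gauss_newton(f, [1.0, 1.0], f(p_true), np.ones_like(t))
```

    In the UCODE setting `model` would be a wrapper that writes the application model's input files, runs it, and reads the simulated equivalents back from its output files.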

  18. Spatiotemporal patterns in a reaction-diffusion model with the Degn-Harrison reaction scheme

    NASA Astrophysics Data System (ADS)

    Peng, Rui; Yi, Feng-qi; Zhao, Xiao-qiang

    Spatial and temporal patterns generated in ecological and chemical systems have become a central object of research in recent decades. In this work, we are concerned with a reaction-diffusion model with the Degn-Harrison reaction scheme, which accounts for the qualitative feature of the respiratory process in a Klebsiella aerogenes bacterial culture. We study the global stability of the constant steady state, existence and nonexistence of nonconstant steady states as well as the Hopf and steady state bifurcations. In particular, our results show the existence of Turing patterns and inhomogeneous periodic oscillatory patterns while the system parameters are all spatially homogeneous. These results also exhibit the critical role of the system parameters in leading to the formation of spatiotemporal patterns.
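    A minimal numerical companion to this analysis is an explicit finite-difference integration of the two-species system. The kinetics f = a − u − uv/(1+u²), g = b − uv/(1+u²) used below are a commonly quoted form of the Degn-Harrison scheme, and the grid and parameter values are illustrative, not the paper's:

```python
import numpy as np

def dh_step(u, v, a, b, d1, d2, dx, dt):
    """One explicit Euler step of a 1-D reaction-diffusion system with
    Degn-Harrison-type kinetics on a periodic grid."""
    lap = lambda w: (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx ** 2
    react = u * v / (1.0 + u ** 2)
    u_new = u + dt * (d1 * lap(u) + a - u - react)
    v_new = v + dt * (d2 * lap(v) + b - react)
    return u_new, v_new

a, b, d1, d2, dx, dt = 10.0, 4.0, 1.0, 0.1, 0.1, 1e-4
u_star = a - b                            # constant steady state: f = g = 0
v_star = b * (1.0 + u_star ** 2) / u_star
u = np.full(100, u_star)
v = np.full(100, v_star)
for _ in range(1000):
    u, v = dh_step(u, v, a, b, d1, d2, dx, dt)
```

    Started exactly at the constant steady state the solution stays there; the paper's Turing-pattern results concern parameter ranges where a small perturbation of this state grows instead of decaying.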

  19. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    SciTech Connect

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  20. Extension of the Liège intranuclear-cascade model to reactions induced by light nuclei

    NASA Astrophysics Data System (ADS)

    Mancusi, Davide; Boudard, Alain; Cugnon, Joseph; David, Jean-Christophe; Kaitaniemi, Pekka; Leray, Sylvie

    2014-11-01

    The purpose of this paper is twofold. First, we present the extension of the Liège intranuclear-cascade model to reactions induced by light ions. We describe here the ideas upon which we built our treatment of nucleus-nucleus reactions and we compare the model predictions against a vast set of heterogeneous experimental data. In spite of the discussed limitations of the intranuclear-cascade scheme, we find that our model yields valid predictions for a number of observables and positions itself as one of the most attractive alternatives available to geant4 users for the simulation of light-ion-induced reactions. Second, we describe the c++ version of the code, which is physics-wise equivalent to the legacy version, is available in geant4, and will serve as the basis for all future development of the model.

  1. Once-through CANDU reactor models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; Bjerke, M.A.

    1980-11-01

    Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.

  2. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    SciTech Connect

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remedial Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  3. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of the GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot be easily migrated to run on a GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maɪlʌv]. We do not support the use of the code for military purposes.

  4. On models of the genetic code generated by binary dichotomic algorithms.

    PubMed

    Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz

    2015-02-01

    In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher.
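    Rumer's dichotomy, which the abstract names as the prototypical 32/32 partition that BDAs generalize, splits the 64 codons by whether their first two bases form a "whole" family box (all four codons of the box encode one amino acid). A direct sketch (the set of whole-box doublets is the standard one; the function names are ours):

```python
from itertools import product

# Doublets b1b2 whose four codons b1b2N all encode the same amino acid
# ("whole" family boxes); Rumer's dichotomy separates their 32 codons
# from the 32 codons of the split boxes.
WHOLE_BOXES = {"CC", "CG", "CU", "GC", "GG", "GU", "AC", "UC"}

def rumer_class(codon):
    """Return 1 for codons in a whole family box, 0 otherwise."""
    return 1 if codon[:2] in WHOLE_BOXES else 0

codons = ["".join(c) for c in product("UCAG", repeat=3)]
class_sizes = [sum(1 for c in codons if rumer_class(c) == k) for k in (0, 1)]
```

    Applying further dichotomic questions of this kind sequentially refines the 32/32 split into the finer class structures (up to 64 classes) that the paper's Beady-A scan explores.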

  5. New higher-order Godunov code for modelling performance of two-stage light gas guns

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.; Miller, R. J.

    1995-01-01

    A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
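
    The basic structure of such a finite-volume Godunov scheme can be sketched in a few lines. The illustration below is only first-order and uses a simple Rusanov (local Lax-Friedrichs) flux on Sod's shock-tube problem; it stands in for, but does not reproduce, the third-order scheme with an exact Riemann solver, wall friction, heat transfer, and powder burn described in the abstract:

```python
import math

GAMMA = 1.4  # ideal-gas ratio of specific heats

def prim(U):
    # conserved state U = [rho, rho*u, E] -> primitive (rho, u, p)
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return rho, u, p

def flux(U):
    rho, u, p = prim(U)
    return [rho * u, rho * u * u + p, (U[2] + p) * u]

def wave_speed(U):
    rho, u, p = prim(U)
    return abs(u) + math.sqrt(GAMMA * p / rho)

def rusanov(UL, UR):
    # diffusive approximate Riemann flux at a cell interface
    a = max(wave_speed(UL), wave_speed(UR))
    FL, FR = flux(UL), flux(UR)
    return [0.5 * (FL[k] + FR[k]) - 0.5 * a * (UR[k] - UL[k]) for k in range(3)]

# Sod initial data: (rho, p) = (1, 1) on the left, (0.125, 0.1) on the right
n = 200
dx = 1.0 / n
cells = []
for i in range(n):
    rho, p = (1.0, 1.0) if (i + 0.5) * dx < 0.5 else (0.125, 0.1)
    cells.append([rho, 0.0, p / (GAMMA - 1.0)])  # u = 0 initially

t, t_end, cfl = 0.0, 0.1, 0.4
while t < t_end:
    dt = min(cfl * dx / max(wave_speed(U) for U in cells), t_end - t)
    F = [rusanov(cells[i], cells[i + 1]) for i in range(n - 1)]
    for i in range(1, n - 1):  # first/last cells act as fixed boundary states
        for k in range(3):
            cells[i][k] -= dt / dx * (F[i][k] - F[i - 1][k])
    t += dt
```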

  6. Engine structures modeling software system: Computer code. User's manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed in both interactive and batch modes. Interactive mode is the default and is assumed throughout this document unless stated otherwise.

  7. Comparing the line broadened quasilinear model to Vlasov code

    NASA Astrophysics Data System (ADS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009); M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve better agreement with the Vlasov solver, both in the time evolution of a mode amplitude to its saturated state and in its final steady-state amplitude, across the parameter space of the model's applicability. However, the regions of stability predicted by the LBQ model and by BOT are found to differ significantly: the BOT simulations exhibit a larger region of instability than the LBQ simulations.

  8. Summary of the models and methods for the FEHM application - a finite-element heat- and mass-transfer code

    SciTech Connect

    Zyvoloski, G.A.; Robinson, B.A.; Dash, Z.V.; Trease, L.L.

    1997-07-01

    The mathematical models and numerical methods employed by the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase, multicomponent flow in porous media, are described. The code is applicable to natural-state studies of geothermal systems and groundwater flow. A primary use of the FEHM application will be to assist in the understanding of flow fields and mass transport in the saturated and unsaturated zones below the proposed Yucca Mountain nuclear waste repository in Nevada. The component models of FEHM are discussed. The first major component, Flow- and Energy-Transport Equations, deals with heat conduction; heat and mass transfer with pressure- and temperature-dependent properties, relative permeabilities and capillary pressures; isothermal air-water transport; and heat and mass transfer with noncondensible gas. The second component, Dual-Porosity and Double-Porosity/Double-Permeability Formulation, is designed for problems dominated by fracture flow. Another component, The Solute-Transport Models, includes both a reactive-transport model that simulates transport of multiple solutes with chemical reaction and a particle-tracking model. Finally, the component, Constitutive Relationships, deals with pressure- and temperature-dependent fluid/air/gas properties, relative permeabilities and capillary pressures, stress dependencies, and reactive and sorbing solutes. Each of these components is discussed in detail, including purpose, assumptions and limitations, derivation, applications, numerical method type, derivation of numerical model, location in the FEHM code flow, numerical stability and accuracy, and alternative approaches to modeling the component.

  9. Classic and contemporary approaches to modeling biochemical reactions

    PubMed Central

    Chen, William W.; Niepel, Mario; Sorger, Peter K.

    2010-01-01

    Recent interest in modeling biochemical networks raises questions about the relationship between often complex mathematical models and familiar arithmetic concepts from classical enzymology, and also about connections between modeling and experimental data. This review addresses both topics by familiarizing readers with key concepts (and terminology) in the construction, validation, and application of deterministic biochemical models, with particular emphasis on a simple enzyme-catalyzed reaction. Networks of coupled ordinary differential equations (ODEs) are the natural language for describing enzyme kinetics in a mass action approximation. We illustrate this point by showing how the familiar Briggs-Haldane formulation of Michaelis-Menten kinetics derives from the outer (or quasi-steady-state) solution of a dynamical system of ODEs describing a simple reaction under special conditions. We discuss how parameters in the Michaelis-Menten approximation and in the underlying ODE network can be estimated from experimental data, with a special emphasis on the origins of uncertainty. Finally, we extrapolate from a simple reaction to complex models of multiprotein biochemical networks. The concepts described in this review, hitherto of interest primarily to practitioners, are likely to become important for a much broader community of cellular and molecular biologists attempting to understand the promise and challenges of “systems biology” as applied to biochemical mechanisms. PMID:20810646
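
    The relationship between the ODE network and the Michaelis-Menten approximation can be checked numerically with a few lines of stdlib Python; the rate constants below are illustrative, chosen so that total enzyme is much smaller than substrate (the quasi-steady-state regime):

```python
# Integrate the mass-action ODEs for E + S <-> ES -> E + P and compare the
# quasi-steady-state production rate with the Briggs-Haldane/Michaelis-Menten
# formula v = Vmax*[S]/(Km + [S]). All parameter values are made up.

k1, km1, k2 = 0.01, 1.0, 1.0   # association, dissociation, catalysis
E0, S0 = 1.0, 1000.0           # total enzyme << substrate (QSSA regime)

Km = (km1 + k2) / k1           # Michaelis constant
Vmax = k2 * E0

S, ES, P = S0, 0.0, 0.0
dt = 1e-4
for _ in range(10000):         # integrate to t = 1, past the fast transient
    E = E0 - ES                # conservation of total enzyme
    dS = -k1 * E * S + km1 * ES
    dES = k1 * E * S - (km1 + k2) * ES
    dP = k2 * ES
    S, ES, P = S + dS * dt, ES + dES * dt, P + dP * dt

v_numeric = k2 * ES                 # instantaneous production rate from the ODEs
v_mm = Vmax * S / (Km + S)          # Michaelis-Menten prediction
# once the quasi-steady state is established the two agree closely
```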

  10. Modelling biochemical reaction systems by stochastic differential equations with reflection.

    PubMed

    Niu, Yuanling; Burrage, Kevin; Chen, Luonan

    2016-05-07

    In this paper, we give a new framework for modelling and simulating biochemical reaction systems by stochastic differential equations with reflection, derived in a mathematical rather than heuristic way. The model is computationally efficient compared with the discrete-state Markov chain approach, and it ensures that both analytic and numerical solutions remain in a biologically plausible region. Specifically, our model mathematically ensures that species numbers lie in the domain D, which is a physical constraint for biochemical reactions, in contrast to previous models. The domain D is obtained from the structure of the corresponding chemical Langevin equations, i.e., the boundary is inherent in the biochemical reaction system. A variant of the projection method is employed to solve the reflected stochastic differential equation model in three simple steps: the Euler-Maruyama method is applied to the equations; the resulting point is checked for membership in the domain D; and if it lies outside, an orthogonal projection is performed. The projection onto the closure D¯ is the solution to a convex quadratic programming problem, so existing methods for convex quadratic programming can be employed for the orthogonal projection map. Numerical tests on several important problems in biological systems confirmed the efficiency and accuracy of this approach.
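
    In one dimension the scheme reduces to something very simple, since the orthogonal projection onto a half-line is just clipping at the boundary. A hypothetical one-species illustration of the three steps (Euler-Maruyama step, domain check, projection), with made-up drift and noise terms:

```python
import math
import random

# 1-D sketch of the projected Euler-Maruyama idea: a chemical-Langevin-like
# equation for a species count X constrained to D = [0, inf). In 1-D the
# orthogonal projection onto the closure of D reduces to clipping at 0.

def projected_em(x0, drift, sigma, dt, n_steps, rng):
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + drift(x) * dt + sigma(x) * dw  # Euler-Maruyama step
        x = max(x, 0.0)                        # projection onto D if outside
        path.append(x)
    return path

rng = random.Random(42)
# toy birth-death drift and noise (illustrative, not from the paper)
path = projected_em(x0=5.0,
                    drift=lambda x: 2.0 - 0.5 * x,
                    sigma=lambda x: math.sqrt(2.0 + 0.5 * x),
                    dt=0.01, n_steps=2000, rng=rng)
# every point of the path satisfies the constraint X >= 0 by construction
```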

  11. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    SciTech Connect

    Smith, A.B.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized, with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should quickly become fluent in the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS workstation and/or the IBM-compatible personal computer.

  12. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339

  13. Implementation of a new model for gravitational collision cross sections in nuclear aerosol codes

    SciTech Connect

    Buckley, R.L.; Loyalka, S.K.

    1995-03-01

    Models currently used in aerosol source codes for the gravitational collision efficiency are deficient in not fully accounting for two-particle hydrodynamics (interception and inertia), which become important for larger particles. A computer code that accounts for these effects in calculating particle trajectories is used to find values of the efficiency for a range of particle sizes. Simple fits to these data as a function of large-particle diameter for a given particle diameter ratio are then obtained using standard linear regression, and a new model is constructed. This model is then implemented in two computer codes, AEROMECH and CONTAIN, Version 1.2. For a test problem, concentration distributions obtained with the new model and with the standard efficiency model are found to be markedly different.

  14. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary-layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
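
    The forward-sensitivity idea can be illustrated on a one-reaction toy problem: for dy/dt = -k*y, differentiating the ODE with respect to k gives ds/dt = -y - k*s for the sensitivity s = dy/dk, which can be integrated alongside the original equation and checked against the analytic answer. This sketch shows only the principle, not LSENS's actual algorithms:

```python
import math

# Toy forward-sensitivity analysis (parameters invented for illustration):
#   dy/dt = -k*y,          y(0) = y0
#   ds/dt = -y - k*s,      s(0) = 0,   where s = dy/dk
# The exact solution is y = y0*exp(-k*t) and s = -t*y0*exp(-k*t).

k, y0, dt, T = 0.7, 2.0, 1e-4, 3.0
y, s = y0, 0.0
for _ in range(int(T / dt)):      # simple explicit Euler integration
    dy = -k * y
    ds = -y - k * s
    y, s = y + dy * dt, s + ds * dt

s_exact = -T * y0 * math.exp(-k * T)  # analytic d/dk [y0*exp(-k*t)] at t = T
# the numerically integrated sensitivity matches the analytic value closely
```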

  15. Savannah River Laboratory DOSTOMAN code: a compartmental pathways computer model of contaminant transport

    SciTech Connect

    King, C M; Wilhite, E L; Root, Jr, R W; Fauth, D J; Routt, K R; Emslie, R H; Beckmeyer, R R; Fjeld, R A; Hutto, G A; Vandeven, J A

    1985-01-01

    The Savannah River Laboratory DOSTOMAN code has been used since 1978 for environmental pathway analysis of potential migration of radionuclides and hazardous chemicals. The DOSTOMAN work is reviewed including a summary of historical use of compartmental models, the mathematical basis for the DOSTOMAN code, examples of exact analytical solutions for simple matrices, methods for numerical solution of complex matrices, and mathematical validation/calibration of the SRL code. The review includes the methodology for application to nuclear and hazardous chemical waste disposal, examples of use of the model in contaminant transport and pathway analysis, a user's guide for computer implementation, peer review of the code, and use of DOSTOMAN at other Department of Energy sites. 22 refs., 3 figs.
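
    A compartmental pathways model of this kind is, at its core, a set of coupled first-order transfer equations. A hypothetical three-box chain (rates and compartment names invented for illustration, not taken from DOSTOMAN) shows the structure and the mass-conservation property such models must satisfy:

```python
# Linear three-compartment chain (e.g., source -> soil -> aquifer) with
# first-order transfer coefficients, integrated explicitly. With no decay
# or loss terms the total inventory must be conserved.

k12, k23 = 0.3, 0.1        # transfer rates between compartments (1/yr)
C = [100.0, 0.0, 0.0]      # initial inventory in each compartment

dt, T = 0.01, 50.0
for _ in range(int(T / dt)):
    f12 = k12 * C[0]       # flux from compartment 1 to 2
    f23 = k23 * C[1]       # flux from compartment 2 to 3
    C[0] -= f12 * dt
    C[1] += (f12 - f23) * dt
    C[2] += f23 * dt

total = sum(C)  # conserved: every loss from one box is a gain for another
```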

  16. User manual for ATILA, a finite-element code for modeling piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Decarpigny, Jean-Noel; Debus, Jean-Claude

    1987-09-01

    This manual for the user of the finite-element code ATILA provides instructions for entering information and running the code on a VAX computer. The manual does not include the code. The finite-element code ATILA has been specifically developed to aid the design of piezoelectric devices, mainly for sonar applications. Thus, it is able to perform the modal analyses of both axisymmetrical and fully three-dimensional piezoelectric transducers. It can also provide their harmonic response under radiating conditions: nearfield and farfield pressure, transmitting voltage response, directivity pattern, electrical impedance, as well as displacement field, nodal plane positions, stress field and various stress criteria. Its accuracy and its ability to describe the physical behavior of various transducers (Tonpilz transducers, double headmass symmetrical length expanders, free-flooded rings, flextensional transducers, bender bars, cylindrical and trilaminar hydrophones...) have been checked by modelling more than twenty different structures and comparing numerical and experimental results.

  17. A model for reaction-assisted polymer dissolution in LIGA.

    SciTech Connect

    Larson, Richard S.

    2004-05-01

    A new chemically-oriented mathematical model for the development step of the LIGA process is presented. The key assumption is that the developer can react with the polymeric resist material in order to increase the solubility of the latter, thereby partially overcoming the need to reduce the polymer size. The ease with which this reaction takes place is assumed to be determined by the number of side chain scissions that occur during the x-ray exposure phase of the process. The dynamics of the dissolution process are simulated by solving the reaction-diffusion equations for this three-component, two-phase system, the three species being the unreacted and reacted polymers and the solvent. The mass fluxes are described by the multicomponent diffusion (Stefan-Maxwell) equations, and the chemical potentials are assumed to be given by the Flory-Huggins theory. Sample calculations are used to determine the dependence of the dissolution rate on key system parameters such as the reaction rate constant, polymer size, solid-phase diffusivity, and Flory-Huggins interaction parameters. A simple photochemistry model is used to relate the reaction rate constant and the polymer size to the absorbed x-ray dose. The resulting formula for the dissolution rate as a function of dose and temperature is fit to an extensive experimental database in order to evaluate a set of unknown global parameters. The results suggest that reaction-assisted dissolution is very important at low doses and low temperatures, the solubility of the unreacted polymer being too small for it to be dissolved at an appreciable rate. However, at high doses or at higher temperatures, the solubility is such that the reaction is no longer needed, and dissolution can take place via the conventional route. These results provide an explanation for the observed dependencies of both the dissolution rate and its activation energy on the absorbed dose.

  18. Description of codes and models to be used in risk assessment

    SciTech Connect

    Not Available

    1991-09-01

    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer-encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used when fate and transport analyses are required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environmental-related modeling at the Hanford Site and provide background for future model development. The modeling capabilities that are desirable for the Hanford Site are described, along with the codes that were evaluated. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site, and provide the rationale for the codes selected. 27 refs., 3 figs., 1 tab.

  19. Reaction-diffusion processes and metapopulation models on duplex networks

    NASA Astrophysics Data System (ADS)

    Xuan, Qi; Du, Fang; Yu, Li; Chen, Guanrong

    2013-03-01

    Reaction-diffusion processes, used to model various spatially distributed dynamics such as epidemics, have been studied mostly on regular lattices or complex networks with simplex links that are identical and invariant in transferring different kinds of particles. However, in many self-organized systems, different particles may have their own private channels to keep their purities. Such division of links often significantly influences the underlying reaction-diffusion dynamics and thus needs to be carefully investigated. This article studies a special reaction-diffusion process, named susceptible-infected-susceptible (SIS) dynamics, given by the reaction steps β→α and α+β→2β, on duplex networks where links are classified into two groups: α and β links used to transfer α and β particles, which, along with the corresponding nodes, consist of an α subnetwork and a β subnetwork, respectively. It is found that the critical point of particle density to sustain reaction activity is independent of the network topology if there is no correlation between the degree sequences of the two subnetworks, and this critical value is suppressed or extended if the two degree sequences are positively or negatively correlated, respectively. Based on the obtained results, it is predicted that epidemic spreading may be promoted on positive correlated traffic networks but may be suppressed on networks with modules composed of different types of diffusion links.
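
    A toy agent-based version of this process is straightforward to write down. The sketch below uses two small ring topologies as stand-ins for the α and β subnetworks and invented rates; it only illustrates the "private channels" idea, not the article's analysis:

```python
import random

# Reaction steps beta -> alpha and alpha + beta -> 2*beta on a duplex
# network: alpha particles diffuse only along alpha-links, beta particles
# only along beta-links. Topologies and probabilities are illustrative.

rng = random.Random(0)
N = 20
alpha_nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # alpha subnetwork
beta_nbrs = {i: [(i - 2) % N, (i + 2) % N] for i in range(N)}   # beta subnetwork

alphas = [rng.randrange(N) for _ in range(100)]  # node occupied by each alpha
betas = [rng.randrange(N) for _ in range(20)]    # node occupied by each beta

p_recover, p_infect = 0.1, 0.5
for _ in range(200):
    # diffusion: each species hops along its own private channel
    alphas = [rng.choice(alpha_nbrs[v]) for v in alphas]
    betas = [rng.choice(beta_nbrs[v]) for v in betas]

    # reaction beta -> alpha (recovery)
    new_betas = []
    for v in betas:
        if rng.random() < p_recover:
            alphas.append(v)
        else:
            new_betas.append(v)
    betas = new_betas

    # reaction alpha + beta -> 2*beta (infection at shared nodes)
    beta_nodes = set(betas)
    still_alpha = []
    for v in alphas:
        if v in beta_nodes and rng.random() < p_infect:
            betas.append(v)
        else:
            still_alpha.append(v)
    alphas = still_alpha
# both reactions conserve the total particle number
```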

  20. Light radioactive nuclei capture reactions with phenomenological potential models

    SciTech Connect

    Guimaraes, V.; Bertulani, C. A.

    2010-05-21

    Light radioactive nuclei play an important role in many astrophysical environments. Due to the very low cross sections of some neutron- and proton-capture reactions on these radioactive nuclei at energies of astrophysical interest, direct laboratory measurements are very difficult. For radioactive nuclei such as ⁸Li and ⁸B, direct measurement of neutron-capture reactions is impossible. Indirect methods have been applied to overcome these difficulties. In this work we report results and discussion of phenomenological potential models used to determine some proton- and neutron-capture reactions. As a test we show the results for the ¹⁶O(p,γ)¹⁷F_gs(5/2⁺) and ¹⁶O(p,γ)¹⁷F_ex(1/2⁺) capture reactions. We also computed the nucleosynthesis cross sections for the ⁷Li(n,γ)⁸Li_gs, ⁸Li(n,γ)⁹Li_gs, and ⁸B(p,γ)⁹C_gs capture reactions.

  1. A model study of sequential enzyme reactions and electrostatic channeling.

    PubMed

    Eun, Changsun; Kekenes-Huskey, Peter M; Metzger, Vincent T; McCammon, J Andrew

    2014-03-14

    We study models of two sequential enzyme-catalyzed reactions as a basic functional building block for coupled biochemical networks. We investigate the influence of enzyme distributions and long-range molecular interactions on reaction kinetics, which have been exploited in biological systems to maximize metabolic efficiency and signaling effects. Specifically, we examine how the maximal rate of product generation in a series of sequential reactions is dependent on the enzyme distribution and the electrostatic composition of its participant enzymes and substrates. We find that close proximity between enzymes does not guarantee optimal reaction rates, as the benefit of decreasing enzyme separation is countered by the volume excluded by adjacent enzymes. We further quantify the extent to which the electrostatic potential increases the efficiency of transferring substrate between enzymes, which supports the existence of electrostatic channeling in nature. Here, a major finding is that the role of attractive electrostatic interactions in confining intermediate substrates in the vicinity of the enzymes can contribute more to net reactive throughput than the directional properties of the electrostatic fields. These findings shed light on the interplay of long-range interactions and enzyme distributions in coupled enzyme-catalyzed reactions, and their influence on signaling in biological systems.

  2. Statistical model analysis of α -induced reaction cross sections of 64Zn at low energies

    NASA Astrophysics Data System (ADS)

    Mohr, P.; Gyürky, Gy.; Fülöp, Zs.

    2017-01-01

    Background: α-nucleus potentials play an essential role in the calculation of α-induced reaction cross sections at low energies in the statistical model. Uncertainties of these calculations are related to ambiguities in the adjustment of the potential parameters to experimental elastic-scattering angular distributions (typically at higher energies) and to the energy dependence of the effective α-nucleus potentials. Purpose: The present work studies cross sections of α-induced reactions for ⁶⁴Zn at low energies and their dependence on the chosen input parameters of the statistical model calculations. The new experimental data from the recent Atomki experiments allow for a χ²-based estimate of the uncertainties of calculated cross sections at very low energies. Method: Recently measured data for the (α,γ), (α,n), and (α,p) reactions on ⁶⁴Zn are compared to calculations in the statistical model. A survey of the parameter space of the widely used computer code talys is given, and the properties of the obtained χ² landscape are discussed. Results: The best fit to the experimental data at low energies shows χ²/F ≈ 7.7 per data point, which corresponds to an average deviation of about 30% between the best fit and the experimental data. Several combinations of the various ingredients of the statistical model are able to reach a reasonably small χ²/F, not exceeding the best-fit result by more than a factor of 2. Conclusions: The present experimental data for ⁶⁴Zn in combination with the statistical model calculations allow us to constrain the astrophysical reaction rate within about a factor of 2. However, the significant deviation of the best-fit χ²/F from unity demands further improvement of the statistical model calculations and, in particular, of the α-nucleus potential.
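
    For reference, the χ²-per-data-point figure of merit used above is simply the uncertainty-weighted squared deviation averaged over the F data points; the cross sections and errors below are invented for illustration:

```python
# chi^2 / F = (1/F) * sum_i (sigma_exp_i - sigma_calc_i)^2 / dsigma_i^2

def chi2_per_point(sigma_exp, dsigma_exp, sigma_calc):
    """Average chi-square per data point for F measured cross sections."""
    assert len(sigma_exp) == len(dsigma_exp) == len(sigma_calc)
    chi2 = sum((e - c) ** 2 / d ** 2
               for e, d, c in zip(sigma_exp, dsigma_exp, sigma_calc))
    return chi2 / len(sigma_exp)

# hypothetical measured vs. calculated cross sections (arbitrary units)
exp = [1.0, 2.0, 4.0]
err = [0.2, 0.4, 0.8]
calc = [1.2, 1.8, 4.4]
result = chi2_per_point(exp, err, calc)  # about 0.5 for these numbers
```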

  3. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  4. Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples

    SciTech Connect

    Lorence, L.J. Jr.; Beutler, D.E.

    1997-09-01

    This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices.

  5. A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model

    ERIC Educational Resources Information Center

    Sadoski, Mark; McTigue, Erin M.; Paivio, Allan

    2012-01-01

    In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…

  6. Modeling reaction fronts of separated condensed phase reactants

    NASA Astrophysics Data System (ADS)

    Koundinyan, Sushilkumar; Stewart, D. Scott; Matalon, Moshe

    2017-01-01

    We present a Gibbs free energy approach to modeling reaction fronts in condensed phase reactive materials. The current interest is in chemical reactions of condensed phase reactants that are initially separated. In energetic materials such reactions are observed to occur extremely fast and at relatively sharp fronts. The condensed phase combustion process differs in several aspects from classical gaseous combustion due to the disparity between the characteristic thermal-conduction length and the mass-diffusion lengths, and to a volume-temperature-stress-mass-fraction equation of state that depends principally on the component reference volumes and the current mixture composition. To retain a simple planar configuration, we consider two reactants, in the solid phase, moving toward each other in a counter-flow geometry. We apply the model to a simplified Titanium-Boron system and present the analysis of reaction zone length for various strain rates. The numerical results are validated with asymptotic approximations at the Burke-Schumann (complete combustion) limit.

  7. Modeling reaction fronts of separated condensed phase reactants

    NASA Astrophysics Data System (ADS)

    Koundinyan, Sushilkumar; Matalon, Moshe; Stewart, D. Scott; Bdzil, John

    2015-06-01

    We present a Gibbs free energy approach to modeling reaction fronts in condensed phase reactive materials. The current interest is in chemical reactions of condensed phase reactants that are initially separated. In energetic materials such reactions are observed to occur extremely fast and at relatively sharp fronts. The solid-to-solid combustion process differs in several aspects from classical gaseous combustion due to the disparity between the characteristic thermal-conduction length and the mass-diffusion lengths, and to a volume-temperature-stress-mass-fraction equation of state that depends principally on the component reference volumes and the current mixture composition. To retain a simple planar configuration, we consider two reactants, in the solid phase, moving toward each other in a counter-flow geometry. We apply the model to a simplified Titanium-Boron system and present the analysis of reaction zone length for various strain rates. The numerical results are validated with asymptotic approximations at the Burke-Schumann limit. Supported by HDTRA1-10-1-0020 (DTRA), AF Sub MO C00039417-1 (AFOSR/TRE).

  8. CAST2D: A finite element computer code for casting process modeling

    SciTech Connect

    Shapiro, A.B.; Hallquist, J.O.

    1991-10-01

    CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).
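    The "delta function method" for isothermal phase change named in the abstract belongs to the family of effective-heat-capacity schemes: the latent heat is represented as a narrow spike added to the heat capacity around the melt temperature, so an ordinary transient conduction solver absorbs and releases it automatically. The sketch below shows only that generic idea, not CAST2D's actual algorithm; the material values and window width are invented for illustration.

```python
def effective_cp(T, cp_solid=1.0, cp_liquid=1.2, Tm=1000.0, L=300.0, eps=5.0):
    """Effective heat capacity with a latent-heat spike near the melt point Tm.

    The latent heat L is spread uniformly over [Tm - eps, Tm + eps], so the
    integral of the spike over that window equals L.
    """
    if T < Tm - eps:
        return cp_solid
    if T > Tm + eps:
        return cp_liquid
    return 0.5 * (cp_solid + cp_liquid) + L / (2.0 * eps)

# Integrating cp*dT across the mushy window recovers sensible + latent heat:
steps = 1000
dT = 10.0 / steps                       # sweep T from 995 K to 1005 K
H = sum(effective_cp(995.0 + i * dT) * dT for i in range(steps))
sensible = 0.5 * (1.0 + 1.2) * 10.0     # sensible part over the 10 K window
latent_recovered = H - sensible         # ~ 300.0, the assumed latent heat L
```

The attraction of the approach, as the abstract notes, is robustness: no explicit interface tracking is needed, at the cost of smearing the front over the chosen temperature window.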

  9. Modeling Spatiotemporal Contextual Dynamics with Sparse-Coded Transfer Learning

    DTIC Science & Technology

    2012-08-08

    this work is the idea that causality of action units can be encoded as a Probabilistic Suffix Tree (PST) with variable temporal scale, while the...it can encode richer and more flexible causal relationships. Here, we model complex human activity as a Probabilistic Suffix Tree (PST) which

  10. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation Modelling

    DTIC Science & Technology

    2009-10-01

    Beattie - Bridgeman Virial expansion The above equations are suitable for moderate pressures and are usually based on either empirical constants...CR 2010-013 October 2009 A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation...Defence R&D Canada. A Review of Equation of State Models, Chemical Equilibrium Calculations and CERV Code Requirements for SHS Detonation

  11. The Nuremberg Code subverts human health and safety by requiring animal modeling

    PubMed Central

    2012-01-01

    Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234

  12. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation will use new and improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  13. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    SciTech Connect

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  14. Reduced Fast Ion Transport Model For The Tokamak Transport Code TRANSP

    SciTech Connect

    Podestà, Mario; Gorelenkova, Marina; White, Roscoe

    2014-02-28

    Fast ion transport models presently implemented in the tokamak transport code TRANSP [R. J. Hawryluk, in Physics of Plasmas Close to Thermonuclear Conditions, CEC Brussels, 1, 19 (1980)] do not capture important aspects of the physics associated with resonant transport caused by instabilities such as Toroidal Alfvén Eigenmodes (TAEs). This work describes the implementation of a fast ion transport model consistent with the basic mechanisms of resonant mode-particle interaction. The model is formulated in terms of a probability distribution function for the particle's steps in phase space, which is consistent with the Monte Carlo approach used in TRANSP. The proposed model is based on the analysis of fast ion response to TAE modes through the ORBIT code [R. B. White et al., Phys. Fluids 27, 2455 (1984)], but it can be generalized to higher frequency modes (e.g. Compressional and Global Alfvén Eigenmodes) and to other numerical codes or theories.

  15. Potential capabilities of Reynolds stress turbulence model in the COMMIX-RSM code

    NASA Technical Reports Server (NTRS)

    Chang, F. C.; Bottoni, M.

    1994-01-01

    A Reynolds stress turbulence model has been implemented in the COMMIX code, together with transport equations describing turbulent heat fluxes, variance of temperature fluctuations, and dissipation of turbulence kinetic energy. The model has been verified partially by simulating homogeneous turbulent shear flow, and stable and unstable stratified shear flows with strong buoyancy-suppressing or enhancing turbulence. This article outlines the model, explains the verifications performed thus far, and discusses potential applications of the COMMIX-RSM code in several domains, including, but not limited to, analysis of thermal striping in engineering systems, simulation of turbulence in combustors, and predictions of bubbly and particulate flows.

  16. Modeling Code Is Helping Cleveland Develop New Products

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Master Builders, Inc., is a 350-person company in Cleveland, Ohio, that develops and markets specialty chemicals for the construction industry. Developing new products involves creating many potential samples and running numerous tests to characterize the samples' performance. Company engineers enlisted NASA's help to replace cumbersome physical testing with computer modeling of the samples' behavior. Since the NASA Lewis Research Center's Structures Division develops mathematical models and associated computation tools to analyze the deformation and failure of composite materials, its researchers began a two-phase effort to modify Lewis' Integrated Composite Analyzer (ICAN) software for Master Builders' use. Phase I has been completed, and Master Builders is pleased with the results. The company is now working to begin implementation of Phase II.

  17. Estimation of relative biological effectiveness for boron neutron capture therapy using the PHITS code coupled with a microdosimetric kinetic model.

    PubMed

    Horiguchi, Hironori; Sato, Tatsuhiko; Kumada, Hiroaki; Yamamoto, Tetsuya; Sakae, Takeji

    2015-03-01

    The absorbed doses deposited by boron neutron capture therapy (BNCT) can be categorized into four components: α and (7)Li particles from the (10)B(n, α)(7)Li reaction, 0.54-MeV protons from the (14)N(n, p)(14)C reaction, the recoiled protons from the (1)H(n, n) (1)H reaction, and photons from the neutron beam and (1)H(n, γ)(2)H reaction. For evaluating the irradiation effect in tumors and the surrounding normal tissues in BNCT, it is of great importance to estimate the relative biological effectiveness (RBE) for each dose component in the same framework. We have, therefore, established a new method for estimating the RBE of all BNCT dose components on the basis of the microdosimetric kinetic model. This method employs the probability density of lineal energy, y, in a subcellular structure as the index for expressing RBE, which can be calculated using the microdosimetric function implemented in the particle transport simulation code (PHITS). The accuracy of this method was tested by comparing the calculated RBE values with corresponding measured data in a water phantom irradiated with an epithermal neutron beam. The calculation technique developed in this study will be useful for biological dose estimation in treatment planning for BNCT.
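    One standard microdosimetric quantity that such frameworks build on is the saturation-corrected dose-mean lineal energy y*, computed from the probability density of lineal energy y mentioned in the abstract. The sketch below computes y* from a discretized toy spectrum; the saturation parameter value and the exponential spectrum shape are assumptions for illustration only, and this is not the paper's full PHITS-coupled microdosimetric kinetic model.

```python
import math

def y_star(y_vals, f_vals, y0=150.0):
    """Saturation-corrected dose-mean lineal energy on a uniform y grid.

    y* = y0^2 * Int[(1 - exp(-(y/y0)^2)) f(y) dy] / Int[y f(y) dy]
    (normalization of f cancels between numerator and denominator).
    """
    dy = y_vals[1] - y_vals[0]
    num = sum((1.0 - math.exp(-(y / y0) ** 2)) * f * dy
              for y, f in zip(y_vals, f_vals))
    den = sum(y * f * dy for y, f in zip(y_vals, f_vals))
    return y0 ** 2 * num / den

ys = [1.0 + 0.5 * k for k in range(400)]   # grid: 1 to 200.5 keV/um
fy = [math.exp(-y / 50.0) for y in ys]     # toy spectrum (assumed shape)

# Without saturation, y* reduces to the ordinary dose-mean lineal energy y_D;
# with a finite y0 the correction lowers the effective value.
y_d = (sum(y * y * f for y, f in zip(ys, fy)) /
       sum(y * f for y, f in zip(ys, fy)))
```

As y0 grows large, 1 - exp(-(y/y0)^2) tends to (y/y0)^2 and y* tends to y_D, which makes a convenient self-check for an implementation.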

  18. Estimation of relative biological effectiveness for boron neutron capture therapy using the PHITS code coupled with a microdosimetric kinetic model

    PubMed Central

    Horiguchi, Hironori; Sato, Tatsuhiko; Kumada, Hiroaki; Yamamoto, Tetsuya; Sakae, Takeji

    2015-01-01

    The absorbed doses deposited by boron neutron capture therapy (BNCT) can be categorized into four components: α and 7Li particles from the 10B(n, α)7Li reaction, 0.54-MeV protons from the 14N(n, p)14C reaction, the recoiled protons from the 1H(n, n) 1H reaction, and photons from the neutron beam and 1H(n, γ)2H reaction. For evaluating the irradiation effect in tumors and the surrounding normal tissues in BNCT, it is of great importance to estimate the relative biological effectiveness (RBE) for each dose component in the same framework. We have, therefore, established a new method for estimating the RBE of all BNCT dose components on the basis of the microdosimetric kinetic model. This method employs the probability density of lineal energy, y, in a subcellular structure as the index for expressing RBE, which can be calculated using the microdosimetric function implemented in the particle transport simulation code (PHITS). The accuracy of this method was tested by comparing the calculated RBE values with corresponding measured data in a water phantom irradiated with an epithermal neutron beam. The calculation technique developed in this study will be useful for biological dose estimation in treatment planning for BNCT. PMID:25428243

  19. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  20. Boundary effects in a surface reaction model for CO oxidation

    NASA Astrophysics Data System (ADS)

    Brosilow, Benjamin J.; Gulari, Erdogan; Ziff, Robert M.

    1993-01-01

    The surface reaction model of Ziff, Gulari, and Barshad (ZGB) is investigated on finite systems with ``hard'' oxygen boundary conditions. The rate of production of CO2 is calculated as a function of the CO adsorption rate y and the system size. When y is above the first-order transition value y2, the reactive region is found to extend into the system a distance ξ which scales as (y-y2)^(-0.40) as y→y2.
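    The ZGB dynamics referred to above are simple to state: with probability y a CO molecule adsorbs on an empty site, otherwise O2 adsorbs dissociatively onto two adjacent empty sites, and any adjacent CO-O pair reacts instantly to CO2. A minimal lattice Monte Carlo sketch of those rules is below; it uses periodic boundaries rather than the paper's "hard" oxygen boundaries, picks the O2 partner site only among empty neighbors, and all parameter values are illustrative.

```python
import random

random.seed(1)
L, y = 20, 0.40                 # y inside the reactive window (~0.39 to ~0.526)
grid = [[0] * L for _ in range(L)]   # 0 = empty, 1 = CO, 2 = O

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def react(i, j):
    """Remove one adjacent CO-O pair involving site (i, j), producing CO2."""
    partner = 3 - grid[i][j]         # CO (1) seeks O (2) and vice versa
    for a, b in neighbors(i, j):
        if grid[a][b] == partner:
            grid[i][j] = grid[a][b] = 0
            return 1
    return 0

co2 = 0
for _ in range(20000):
    i, j = random.randrange(L), random.randrange(L)
    if grid[i][j] != 0:              # occupied site: adsorption attempt fails
        continue
    if random.random() < y:          # CO adsorption
        grid[i][j] = 1
        co2 += react(i, j)
    else:                            # dissociative O2 adsorption needs 2 sites
        empty = [(a, b) for a, b in neighbors(i, j) if grid[a][b] == 0]
        if empty:
            a, b = random.choice(empty)
            grid[i][j], grid[a][b] = 2, 2
            co2 += react(i, j)
            co2 += react(a, b)
```

Sweeping y and recording the CO2 rate reproduces the model's well-known phase diagram: an O-poisoned phase at low y, a CO-poisoned phase above y2, and the reactive window between them.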

  1. The Role of Coding Time in Estimating and Interpreting Growth Curve Models.

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Deeb-Sossa, Natalia; Papadakis, Alison A.; Bollen, Kenneth A.; Curran, Patrick J.

    2004-01-01

    The coding of time in growth curve models has important implications for the interpretation of the resulting model that are sometimes not transparent. The authors develop a general framework that includes predictors of growth curve components to illustrate how parameter estimates and their standard errors are exactly determined as a function of…
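    The core point of the abstract, that the choice of time coding changes what the growth-curve intercept means while leaving the slope alone, can be seen with plain least squares on a single straight-line trajectory. The data below are made up for illustration and this is only the fixed-effects intuition, not the authors' general framework with predictors of growth components.

```python
def ols_line(t, x):
    """Return (intercept, slope) of the least-squares line x = b0 + b1*t."""
    n = len(t)
    tbar, xbar = sum(t) / n, sum(x) / n
    b1 = (sum((a - tbar) * (b - xbar) for a, b in zip(t, x)) /
          sum((a - tbar) ** 2 for a in t))
    return xbar - b1 * tbar, b1

waves = [0, 1, 2, 3]                  # time coded from the first wave
scores = [10.0, 12.1, 13.9, 16.0]

b0_first, b1_first = ols_line(waves, scores)
b0_last, b1_last = ols_line([w - 3 for w in waves], scores)  # recenter at last wave

# The slope is identical under both codings; the intercept shifts by 3*slope,
# i.e. from the predicted score at wave 0 to the predicted score at wave 3.
```

In a multilevel growth model the same recoding also changes the meaning (and standard errors) of the random intercept and the intercept-slope covariance, which is the subtlety the article works out.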

  2. Diffusion-controlled reactions modeling in Geant4-DNA

    SciTech Connect

    Karamitros, M.; Luan, S.; Bernal, M.A.; Allison, J.; Baldacchino, G.; Davidkova, M.; Francis, Z.; Friedland, W.; Ivantchenko, V.; Ivantchenko, A.; Mantero, A.; Nieminen, P.; Santin, G.; Tran, H.N.; Stepan, V.; Incerti, S.

    2014-10-01

    Context: Under irradiation, a biological system undergoes a cascade of chemical reactions that can lead to an alteration of its normal operation. There are different types of radiation and many competing reactions, so the kinetics of the chemical species is extremely complex. Simulation then becomes a powerful tool which, by describing the basic principles of chemical reactions, can reveal the dynamics of the macroscopic system. To understand the dynamics of biological systems under radiation, there have been ongoing efforts since the 1980s by several research groups to establish a mechanistic model describing all the physical, chemical and biological phenomena following the irradiation of single cells. This approach is generally divided into a succession of stages that follow each other in time: (1) the physical stage, where the ionizing particles interact directly with the biological material; (2) the physico-chemical stage, where the targeted molecules release their energy by dissociating, creating new chemical species; (3) the chemical stage, where the new chemical species interact with each other or with the biomolecules; (4) the biological stage, where the repair mechanisms of the cell come into play. This article focuses on the modeling of the chemical stage. Method: This article presents a general method of speeding up chemical reaction simulations in fluids based on the Smoluchowski equation and Monte Carlo methods, where all molecules are explicitly simulated and the solvent is treated as a continuum. The model describes diffusion-controlled reactions. This method has been implemented in Geant4-DNA. The keys to the new algorithm include: (1) the combination of a method to compute time steps dynamically with a Brownian bridge process to account for chemical reactions, which avoids costly fixed-time-step simulations; (2) a k-d tree data structure for quickly locating, for a given molecule, its closest reactants. The

  3. Diffusion-controlled reactions modeling in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Karamitros, M.; Luan, S.; Bernal, M. A.; Allison, J.; Baldacchino, G.; Davidkova, M.; Francis, Z.; Friedland, W.; Ivantchenko, V.; Ivantchenko, A.; Mantero, A.; Nieminen, P.; Santin, G.; Tran, H. N.; Stepan, V.; Incerti, S.

    2014-10-01

    Context: Under irradiation, a biological system undergoes a cascade of chemical reactions that can lead to an alteration of its normal operation. There are different types of radiation and many competing reactions, so the kinetics of the chemical species is extremely complex. Simulation then becomes a powerful tool which, by describing the basic principles of chemical reactions, can reveal the dynamics of the macroscopic system. To understand the dynamics of biological systems under radiation, there have been ongoing efforts since the 1980s by several research groups to establish a mechanistic model describing all the physical, chemical and biological phenomena following the irradiation of single cells. This approach is generally divided into a succession of stages that follow each other in time: (1) the physical stage, where the ionizing particles interact directly with the biological material; (2) the physico-chemical stage, where the targeted molecules release their energy by dissociating, creating new chemical species; (3) the chemical stage, where the new chemical species interact with each other or with the biomolecules; (4) the biological stage, where the repair mechanisms of the cell come into play. This article focuses on the modeling of the chemical stage. Method: This article presents a general method of speeding up chemical reaction simulations in fluids based on the Smoluchowski equation and Monte Carlo methods, where all molecules are explicitly simulated and the solvent is treated as a continuum. The model describes diffusion-controlled reactions. This method has been implemented in Geant4-DNA. The keys to the new algorithm include: (1) the combination of a method to compute time steps dynamically with a Brownian bridge process to account for chemical reactions, which avoids costly fixed-time-step simulations; (2) a k-d tree data structure for quickly locating, for a given molecule, its closest reactants. The
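    The dynamic-time-step idea described in the abstract can be sketched in a few lines: explicitly simulated Brownian molecules, a reaction radius R, and a step size chosen from the current closest A-B separation so that no pair can jump across a reaction event. The paper additionally uses a Brownian-bridge correction and a k-d tree for the nearest-reactant search; here a brute-force search stands in, one near-contact pair is planted so the short demo produces at least one reaction, and all numerical values are illustrative assumptions.

```python
import math, random

random.seed(2)
D, R, BOX = 1.0, 0.05, 1.0        # diffusion coeff., reaction radius, box size
A = [[random.random() for _ in range(3)] for _ in range(10)]
B = [[random.random() for _ in range(3)] for _ in range(10)]
A[0], B[0] = [0.5, 0.5, 0.5], [0.5, 0.5, 0.549]   # planted near-contact pair

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

reactions, t = 0, 0.0
for _ in range(800):
    if not A or not B:
        break
    d_min = min(dist(a, b) for a in A for b in B)   # brute-force nearest pair
    # Dynamic step: keep the rms jump sqrt(6*D*dt) well below the closest gap.
    dt = max(1e-6, (max(d_min - R, 0.0) / 2.0) ** 2 / (6.0 * D))
    sigma = math.sqrt(2.0 * D * dt)                 # per-axis Brownian step
    for p in A + B:
        for k in range(3):
            p[k] = min(max(p[k] + random.gauss(0.0, sigma), 0.0), BOX)
    for a in A[:]:                                  # absorb pairs within R
        for b in B[:]:
            if b in B and dist(a, b) < R:
                A.remove(a)
                B.remove(b)
                reactions += 1
                break
    t += dt
```

The payoff of the adaptive step is visible in the dt values: when all reactants are far apart the simulation takes large strides, and it only refines the step near an imminent encounter, which is what makes fixed-time-step schemes comparatively costly.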

  4. The modelling of wall condensation with noncondensable gases for the containment codes

    SciTech Connect

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H.

    1995-09-01

    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. The lumped-parameter modelling and the local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with a turbulent diffusion modelling are necessary since the diffusion controls the local condensation whereas the wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh modelling of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural convection or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate-effects tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows a reasonable agreement with data.

  5. Model for transport and reaction of defects and carriers within displacement cascades in gallium arsenide

    SciTech Connect

    Wampler, William R.; Myers, Samuel M.

    2015-01-28

    A model is presented for recombination of charge carriers at evolving displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with the details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers, and defects within a representative spherically symmetric cluster of defects. The initial radial defect profiles within the cluster were determined through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to displacement damage from energetic particle irradiation.

  6. Model for transport and reaction of defects and carriers within displacement cascades in gallium arsenide

    NASA Astrophysics Data System (ADS)

    Wampler, William R.; Myers, Samuel M.

    2015-01-01

    A model is presented for recombination of charge carriers at evolving displacement damage in gallium arsenide, which includes clustering of the defects in atomic displacement cascades produced by neutron or ion irradiation. The carrier recombination model is based on an atomistic description of capture and emission of carriers by the defects with time evolution resulting from the migration and reaction of the defects. The physics and equations on which the model is based are presented, along with the details of the numerical methods used for their solution. The model uses a continuum description of diffusion, field-drift and reaction of carriers, and defects within a representative spherically symmetric cluster of defects. The initial radial defect profiles within the cluster were determined through pair-correlation-function analysis of the spatial distribution of defects obtained from the binary-collision code MARLOWE, using recoil energies for fission neutrons. Properties of the defects are discussed and values for their parameters are given, many of which were obtained from density functional theory. The model provides a basis for predicting the transient response of III-V heterojunction bipolar transistors to displacement damage from energetic particle irradiation.

  7. Assessment of Turbulence-Chemistry Interaction Models in the National Combustion Code (NCC) - Part I

    NASA Technical Reports Server (NTRS)

    Wey, Thomas Changju; Liu, Nan-suey

    2011-01-01

    This paper describes the implementations of the linear-eddy model (LEM) and an Eulerian FDF/PDF model in the National Combustion Code (NCC) for the simulation of turbulent combustion. The impacts of these two models, along with the so-called laminar chemistry model, are then illustrated via preliminary results from two combustion systems: a nine-element gas-fueled combustor and a single-element liquid-fueled combustor.

  8. Comparison of current state residential energy codes with the 1992 model energy code for one- and two-family dwellings; 1994

    SciTech Connect

    Klevgard, L.A.; Taylor, Z.T.; Lucas, R.G.

    1995-01-01

    This report is one in a series of documents describing research activities in support of the US Department of Energy (DOE) Building Energy Codes Program. The Pacific Northwest Laboratory (PNL) leads the program for DOE. The goal of the program is to develop and support the adoption, implementation, and enforcement of Federal, State, and local energy codes for new buildings. The program approach to meeting the goal is to initiate and manage individual research and standards and guidelines development efforts that are planned and conducted in cooperation with representatives from throughout the buildings community. Projects under way involve practicing architects and engineers, professional societies and code organizations, industry representatives, and researchers from the private sector and national laboratories. Research results and technical justifications for standards criteria are provided to standards development and model code organizations and to Federal, State, and local jurisdictions as a basis to update their codes and standards. This effort helps to ensure that building standards incorporate the latest research results to achieve maximum energy savings in new buildings, yet remain responsive to the needs of the affected professions, organizations, and jurisdictions. Also supported are the implementation, deployment, and use of energy-efficient codes and standards. This report documents findings from an analysis conducted by PNL of the States' building codes to determine if the codes meet or exceed the 1992 MEC energy efficiency requirements (CABO 1992a).

  9. Comparison of dust transport modelling codes in a tokamak plasma

    NASA Astrophysics Data System (ADS)

    Uccello, Andrea; Gervasini, Gabriele; Ghezzi, Francesco; Lazzaro, Enzo; Bacharis, Minas; Flanagan, Joanne; Matthews, Guy; Järvinen, Aaro; Sertoli, Marco

    2016-10-01

    Since the installation on the Joint European Torus of the ITER-like Wall (ILW), intense radiation spikes have been observed, especially in the discharges following a disruption, and have been associated with possible sudden injection of tungsten (W) impurities consequent to full ablation of W dust particles. The problem of dust production, mobilization, and interaction both with the plasma and the vessel tiles is therefore of great concern and requires the setting up of dedicated and validated numerical modeling tools. Among these, a useful role is played by the dust trajectory calculators, which can present in a relatively clear way the qualitative and quantitative description of the mobilization and fate of selected bunches of dust grains.

  10. Flux extrapolation models used in the DOT IV discrete ordinates neutron transport code

    SciTech Connect

    Tomlinson, E.T.; Rhoades, W.A.; Engle, W.W. Jr.

    1980-05-01

    The DOT IV code solves the Boltzmann transport equation in two dimensions using the method of discrete ordinates. Special techniques have been incorporated in this code to mitigate the effects of flux extrapolation error in space meshes of practical size. This report presents the flux extrapolation models as they appear in DOT IV. A sample problem is also presented to illustrate the effects of the various models on the resultant flux. Convergence of the various models to a single result as the mesh is refined is also examined. A detailed comparison with the widely used TWOTRAN II code is reported. The features which cause DOT and TWOTRAN to differ in the converged results are identified and explained.
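    The flux extrapolation problem in question is easy to exhibit in one dimension: the diamond-difference relation psi_out = 2*psi_avg - psi_in can extrapolate to negative, nonphysical angular fluxes on coarse meshes. DOT IV carries its own corrective models; in the sketch below the common set-to-zero ("step") fixup stands in for them, with a single angle, a pure absorber, and made-up data.

```python
import math

def sweep(n_cells, dx, mu, sigma_t, q, psi_in):
    """March one angular flux across a 1-D mesh; return the exiting flux."""
    a = mu / dx
    for _ in range(n_cells):
        # Diamond difference: cell-average from balance + psi_avg=(in+out)/2.
        psi_avg = (q + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
        psi_out = 2.0 * psi_avg - psi_in
        if psi_out < 0.0:            # extrapolation overshot on a coarse cell
            psi_out = 0.0            # set-to-zero fixup...
            psi_avg = (q + a * psi_in) / sigma_t   # ...and rebalance the cell
        psi_in = psi_out
    return psi_in

# One 4-mean-free-path cell: the fixup fires and the exit flux clamps to 0.
coarse = sweep(1, 4.0, 1.0, 1.0, 0.0, 1.0)
# 40 cells over the same slab: converges toward the analytic exp(-4).
fine = sweep(40, 0.1, 1.0, 1.0, 0.0, 1.0)
err = abs(fine - math.exp(-4.0))
```

Refining the mesh makes the fixup unnecessary and recovers the analytic attenuation, which mirrors the report's observation that the different extrapolation models converge to a single result as the mesh is refined.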

  11. Modeling Reaction Control System Effects on Mars Odyssey

    NASA Technical Reports Server (NTRS)

    Hanna, Jill L.; Chavis, Zachary Q.; Wilmoth, Richard G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    During the Mars 2001 Odyssey aerobraking mission, NASA Langley Research Center performed 6 degree of freedom (6-DOF) simulations to determine the rotational motion of the spacecraft. The main objective of this study was to assess the reaction control system models and their effects on the atmospheric flight of Odyssey. Based on these models, a comparison was made between data derived from flight measurements and the simulated rotational motion of the spacecraft during aerobraking at Mars. The differences between the simulation and the flight-derived Odyssey data were then used to adjust the aerodynamic parameters to achieve a better correlation.

  12. An analytical model of gene evolution with 9 mutation parameters: an application to the amino acids coded by the common circular code.

    PubMed

    Michel, Christian J

    2007-02-01

    We develop here an analytical evolutionary model based on a trinucleotide mutation matrix 64 x 64 with nine substitution parameters associated with the three types of substitutions in the three trinucleotide sites. It generalizes the previous models based on the nucleotide mutation matrices 4 x 4 and the trinucleotide mutation matrix 64 x 64 with three and six parameters. It determines at some time t the exact occurrence probabilities of trinucleotides mutating randomly according to these nine substitution parameters. An application of this model allows an evolutionary study of the common circular code [Formula: see text] of eukaryotes and prokaryotes and its 12 coded amino acids. The main property of this code [Formula: see text] is the retrieval of the reading frames in genes, both locally, i.e. anywhere in genes and in particular without a start codon, and automatically with a window of a few nucleotides. However, since its identification in 1996, the amino acid information coded by [Formula: see text] has never been studied. Very unexpectedly, this evolutionary model demonstrates that random substitutions in this code [Formula: see text], with particular values for the nine substitution parameters, retrieve after a certain time of evolution a frequency distribution of these 12 amino acids very close to the one coded by the actual genes.
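    The 64 x 64 matrix structure described above can be assembled directly: each trinucleotide is connected to the nine trinucleotides differing from it in exactly one site, with a rate taken from a site-by-type parameter table. In the sketch below the "type" of a substitution is simply an assumed fixed ordering of the three alternative bases at a site (the article's actual classification may differ), the rate values are invented, and one discrete evolution step P <- M P stands in for the article's exact analytical solution in continuous time.

```python
import itertools

BASES = "ACGT"
TRI = ["".join(t) for t in itertools.product(BASES, repeat=3)]
IDX = {w: k for k, w in enumerate(TRI)}
r = [[0.002, 0.004, 0.006],     # site 1: rates of the 3 substitution types
     [0.001, 0.003, 0.005],     # site 2
     [0.002, 0.002, 0.002]]     # site 3  (all values are made up)

# Off-diagonal entries: rate of the single-site substitution w -> v.
M = [[0.0] * 64 for _ in range(64)]
for w in TRI:
    for site in range(3):
        alts = [b for b in BASES if b != w[site]]
        for typ, b in enumerate(alts):
            v = w[:site] + b + w[site + 1:]
            M[IDX[v]][IDX[w]] += r[site][typ]
# Diagonal: probability of no substitution, so each column sums to 1.
for k in range(64):
    M[k][k] = 1.0 - sum(M[j][k] for j in range(64) if j != k)

# One step from a population that is pure AAA: mass leaks to the nine
# single-substitution neighbours of AAA.
p = [0.0] * 64
p[IDX["AAA"]] = 1.0
p = [sum(M[i][j] * p[j] for j in range(64)) for i in range(64)]
```

Iterating the step (or exponentiating the corresponding rate matrix, as the analytical model does) then yields the trinucleotide, and hence amino-acid, frequency distribution at any time t.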

  13. Transport Corrections in Nodal Diffusion Codes for HTR Modeling

    SciTech Connect

    Abderrafi M. Ougouag; Frederick N. Gleicher

    2010-08-01

    The cores and reflectors of High Temperature Reactors (HTRs) of the Next Generation Nuclear Plant (NGNP) type are dominantly diffusive media from the point of view of behavior of the neutrons and their migration between the various structures of the reactor. This means that neutron diffusion theory is sufficient for modeling most features of such reactors and transport theory may not be needed for most applications. Of course, the above statement assumes the availability of homogenized diffusion theory data. The statement is true for most situations but not all. Two features of NGNP-type HTRs require that the diffusion theory-based solution be corrected for local transport effects. These two cases are the treatment of burnable poisons (BP) in the case of the prismatic block reactors and, for both pebble bed reactor (PBR) and prismatic block reactor (PMR) designs, that of control rods (CR) embedded in non-multiplying regions near the interface between fueled zones and said non-multiplying zones. The need for transport correction arises because diffusion theory-based solutions appear not to provide sufficient fidelity in these situations.

  14. A Discrete Model to Study Reaction-Diffusion-Mechanics Systems

    PubMed Central

    Weise, Louis D.; Nash, Martyn P.; Panfilov, Alexander V.

    2011-01-01

    This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework employs a FitzHugh-Nagumo type RD model coupled to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are described by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference approach for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity previously found with a continuous RD mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependency on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects. PMID:21804911

  15. A discrete model to study reaction-diffusion-mechanics systems.

    PubMed

    Weise, Louis D; Nash, Martyn P; Panfilov, Alexander V

    2011-01-01

    This article introduces a discrete reaction-diffusion-mechanics (dRDM) model to study the effects of deformation on reaction-diffusion (RD) processes. The dRDM framework employs a FitzHugh-Nagumo type RD model coupled to a mass-lattice model that undergoes finite deformations. The dRDM model describes a material whose elastic properties are described by a generalized Hooke's law for finite deformations (Seth material). Numerically, the dRDM approach combines a finite difference approach for the RD equations with a Verlet integration scheme for the equations of the mass-lattice system. Using this framework, results on self-organized pacemaking activity previously found with a continuous RD mechanics model were reproduced. Mechanisms that determine the period of pacemakers and its dependency on the medium size are identified. Finally, it is shown how the drift direction of pacemakers in RDM systems is related to the spatial distribution of deformation and curvature effects.
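
    The RD half of the framework can be sketched as an explicit finite-difference step for a 1D FitzHugh-Nagumo medium; the mass-lattice mechanics and Verlet integration of the dRDM model are omitted here, and all parameter values are illustrative:

```python
import numpy as np

def fhn_step(u, v, dt=0.01, dx=0.5, D=1.0, a=0.1, eps=0.01, b=0.5):
    """One explicit step of a 1D FitzHugh-Nagumo medium (periodic domain)."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # discrete Laplacian
    du = D * lap + u * (u - a) * (1 - u) - v                # excitation variable
    dv = eps * (u - b * v)                                  # slow recovery variable
    return u + dt * du, v + dt * dv

n = 200
u, v = np.zeros(n), np.zeros(n)
u[:10] = 1.0                       # suprathreshold stimulus at one end
for _ in range(2000):
    u, v = fhn_step(u, v)
```

    Coupling to mechanics would additionally deform the lattice spacing `dx` locally at each step, which is the feedback the dRDM model is built to study.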

  16. Trichloramine Removal with Activated Carbon Is Governed by Two Reductive Reactions: A Theoretical Approach with Diffusion-Reaction Models.

    PubMed

    Matsushita, Taku; Matsui, Yoshihiko; Ikekame, Shohei; Sakuma, Miki; Shirasaki, Nobutaka

    2017-04-06

    Mechanisms underlying trichloramine removal by activated carbon treatment were elucidated by batch experiments and theoretical analysis with diffusion-reaction models. The observed values of trichloramine and free chlorine were explained only by the model in which (1) both trichloramine and free chlorine were involved as reactants, (2) the removal of the reactants was affected both by intraparticle diffusion and by reaction with the activated carbon, and (3) trichloramine decomposition was governed by two distinct reductive reactions. One reductive reaction was expressed as a first-order reaction: the reduction of trichloramine by the basal plane of the powdered activated carbon (PAC), which consists of graphene sheets. The other was expressed as a second-order reaction: the reduction of trichloramine by active functional groups located on the edge of the basal plane. Free chlorine reacted competitively with both the basal plane and the active functional groups. The fact that the model prediction succeeded even in experiments with different activated carbon doses, different initial trichloramine concentrations, and different sizes of activated carbon particles showed that the mechanisms described in the model are reasonable for explaining trichloramine removal with activated carbon treatment.
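
    The two-pathway kinetics can be sketched as a lumped ODE system, with intraparticle diffusion and the free-chlorine competition omitted; the rate constants and edge-site capacity below are hypothetical values, not the paper's fitted parameters:

```python
def trichloramine_decay(c0=1.0, s0=0.5, k1=0.05, k2=0.8, dt=0.01, t_end=60.0):
    """c: trichloramine concentration; s: reactive edge-site capacity.

    Forward-Euler integration of the two reductive pathways:
    a first-order reaction on the basal plane and a second-order
    reaction with edge functional groups that are consumed.
    """
    c, s = c0, s0
    for _ in range(int(t_end / dt)):
        r1 = k1 * c        # first-order: reduction on the graphene basal plane
        r2 = k2 * c * s    # second-order: reduction at edge functional groups
        c -= dt * (r1 + r2)
        s -= dt * r2       # edge groups are used up by the second pathway
    return c, s

c_end, s_end = trichloramine_decay()
```

    In the full model, free chlorine would appear as a second reactant drawing down the same basal-plane and edge-site terms, which is what makes the competition observable.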

  17. Transalkylation reactions in fossil fuels and related model compounds

    SciTech Connect

    Farcasiu, M.; Forbus, T.R.; LaPierre, R.B.

    1983-02-01

    The alkyl substituents of high molecular weight polycyclic aromatic constituents of petroleum residues are transferable to exogenous monocyclic aromatics (benzene, toluene, o-xylene, etc.) by acid-catalyzed (CF₃SO₃H) Friedel-Crafts transalkylation. Analysis (GC-MS) of the volatile alkylated monocyclic aromatic products provides a method for determining the alkyl group content/structure of the starting fossil fuel mixture. Both model systems (using alkylated naphthalenes, phenanthrenes, pyrenes and dibenzothiophenes) and demineralized shale oil or petroleum resid were studied. The model studies (alkyl chain length 2-10 carbons) revealed the following reaction pathways to predominate: (1) transalkylation rates/equilibria are independent of chain length; (2) n-alkyl groups are transferred without rearrangement or fragmentation; (3) reaction rate depends upon the aromatic moiety; (4) formation of dixylylmethanes via benzyl carbenium ions is significant (12 to 25% of product); and (5) significant minor products at longer reaction times are alkyl tetralins, tetralins, naphthalenes and alkylated acceptors having a chain length reduced by (-CH₂-)₄.

  18. Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code

    NASA Technical Reports Server (NTRS)

    Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.

    2003-01-01

    Numerical modeling of the Pulsed Inductive Thruster with the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation with the experimental data for a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and the mass-injection scheme were investigated and shown to produce only trivial changes in the overall performance. An idealized model for these energy levels and propellants deduces that the energy expended on the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.

  19. TOPICAL REVIEW: The CRONOS suite of codes for integrated tokamak modelling

    NASA Astrophysics Data System (ADS)

    Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Schneider, M.; Garcia, J.; Giruzzi, G.; Huynh, P.; Aniel, T.; Albajar, F.; Ané, J. M.; Bécoulet, A.; Bourdelle, C.; Casati, A.; Colas, L.; Decker, J.; Dumont, R.; Eriksson, L. G.; Garbet, X.; Guirlet, R.; Hertout, P.; Hoang, G. T.; Houlberg, W.; Huysmans, G.; Joffrin, E.; Kim, S. H.; Köchl, F.; Lister, J.; Litaudon, X.; Maget, P.; Masset, R.; Pégourié, B.; Peysson, Y.; Thomas, P.; Tsitrone, E.; Turco, F.

    2010-04-01

    CRONOS is a suite of numerical codes for the predictive/interpretative simulation of a full tokamak discharge. It integrates, in a modular structure, a 1D transport solver with general 2D magnetic equilibria, several heat, particle and impurities transport models, as well as heat, particle and momentum sources. This paper gives a first comprehensive description of the CRONOS suite: overall structure of the code, main available models, details on the simulation workflow and numerical implementation. Some examples of applications to the analysis of experimental discharges and the predictions of ITER scenarios are also given.

  20. Coupled Disturbance Modelling And Validation Of A Reaction Wheel Model

    NASA Astrophysics Data System (ADS)

    Zhang, Zhe; Aglietti, Gugliemo S.

    2012-07-01

    Microvibrations of a reaction wheel assembly (RWA) are usually studied in either hard-mounted or coupled conditions, although coupled wheel-structure disturbances are more representative than hard-mounted ones. The coupled analysis method for the wheel-structure is not as well developed as the hard-mounted one. A coupled disturbance analysis method is proposed in this paper. One of the most important factors in coupled disturbance analysis - the accelerance, or dynamic mass, of the wheel - is measured, and the results are validated with an equivalent FE model. The wheel hard-mounted disturbances are also measured on a vibration measurement platform designed particularly for this study. Wheel structural modes are solved from its analytical disturbance model and validated against the test results. A wheel-speed-dependent accelerance analysis method is proposed.

  1. Modified-Gravity-GADGET: a new code for cosmological hydrodynamical simulations of modified gravity models

    NASA Astrophysics Data System (ADS)

    Puchwein, Ewald; Baldi, Marco; Springel, Volker

    2013-11-01

    We present a new massively parallel code for N-body and cosmological hydrodynamical simulations of modified gravity models. The code employs a multigrid-accelerated Newton-Gauss-Seidel relaxation solver on an adaptive mesh to efficiently solve for perturbations in the scalar degree of freedom of the modified gravity model. As this new algorithm is implemented as a module for the P-GADGET3 code, it can at the same time follow the baryonic physics included in P-GADGET3, such as hydrodynamics, radiative cooling and star formation. We demonstrate that the code works reliably by applying it to simple test problems that can be solved analytically, as well as by comparing cosmological simulations to results from the literature. Using the new code, we perform the first non-radiative and radiative cosmological hydrodynamical simulations of an f(R)-gravity model. We also discuss the impact of active galactic nucleus feedback on the matter power spectrum, as well as degeneracies between the influence of baryonic processes and modifications of gravity.
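
    The relaxation idea behind the solver can be sketched on a toy 1D Poisson problem: sweep the grid and update each cell in place from its neighbours until the field equation is satisfied. The actual code solves the nonlinear scalar-field equation of the modified gravity model with multigrid acceleration on an adaptive mesh, none of which appears in this sketch:

```python
import numpy as np

def gauss_seidel(f, h, sweeps=5000):
    """Solve u'' = f with u = 0 at both ends by Gauss-Seidel sweeps."""
    n = len(f)
    u = np.zeros(n)
    for _ in range(sweeps):
        for i in range(1, n - 1):           # interior cells, updated in place
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
    return u

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = -np.pi**2 * np.sin(np.pi * x)           # exact solution is sin(pi x)
u = gauss_seidel(f, h)
```

    Plain Gauss-Seidel converges slowly on fine grids, which is why the production solver wraps it in a multigrid cycle.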

  2. SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results

    NASA Astrophysics Data System (ADS)

    Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas

    2017-01-01

    The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by 100x. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.

  3. Carbon fragmentation measurements and validation of the Geant4 nuclear reaction models for hadrontherapy.

    PubMed

    De Napoli, M; Agodi, C; Battistoni, G; Blancato, A A; Cirrone, G A P; Cuttone, G; Giacoppo, F; Morone, M C; Nicolosi, D; Pandola, L; Patera, V; Raciti, G; Rapisarda, E; Romano, F; Sardina, D; Sarti, A; Sciubba, A; Scuderi, V; Sfienti, C; Tropea, S

    2012-11-21

    Nuclear fragmentation measurements are necessary when using heavy-ion beams in hadrontherapy to predict the effects of the ion nuclear interactions within the human body. Moreover, they are also fundamental to validate and improve the Monte Carlo codes for their use in planning tumor treatments. To date, only a very limited set of carbon fragmentation cross sections has been measured, and in particular, to our knowledge, no double-differential fragmentation cross sections at intermediate energies are available in the literature. In this work, we have measured the double-differential cross sections and the angular distributions of the secondary fragments produced in the (12)C fragmentation at 62 A MeV on a thin carbon target. The experimental data have been used to benchmark the prediction capability of the Geant4 Monte Carlo code at intermediate energies, where it was never tested before. In particular, we have compared the experimental data with the predictions of two Geant4 nuclear reaction models: the Binary Light Ions Cascade and the Quantum Molecular Dynamic. From the comparison, it has been observed that the Binary Light Ions Cascade approximates the angular distributions of the fragment production cross sections better than the Quantum Molecular Dynamic model. However, the discrepancies observed between the experimental data and the Monte Carlo simulations lead to the conclusion that the prediction capability of both models needs to be improved at intermediate energies.

  4. Carbon fragmentation measurements and validation of the Geant4 nuclear reaction models for hadrontherapy

    NASA Astrophysics Data System (ADS)

    De Napoli, M.; Agodi, C.; Battistoni, G.; Blancato, A. A.; Cirrone, G. A. P.; Cuttone, G.; Giacoppo, F.; Morone, M. C.; Nicolosi, D.; Pandola, L.; Patera, V.; Raciti, G.; Rapisarda, E.; Romano, F.; Sardina, D.; Sarti, A.; Sciubba, A.; Scuderi, V.; Sfienti, C.; Tropea, S.

    2012-11-01

    Nuclear fragmentation measurements are necessary when using heavy-ion beams in hadrontherapy to predict the effects of the ion nuclear interactions within the human body. Moreover, they are also fundamental to validate and improve the Monte Carlo codes for their use in planning tumor treatments. To date, only a very limited set of carbon fragmentation cross sections has been measured, and in particular, to our knowledge, no double-differential fragmentation cross sections at intermediate energies are available in the literature. In this work, we have measured the double-differential cross sections and the angular distributions of the secondary fragments produced in the 12C fragmentation at 62 A MeV on a thin carbon target. The experimental data have been used to benchmark the prediction capability of the Geant4 Monte Carlo code at intermediate energies, where it was never tested before. In particular, we have compared the experimental data with the predictions of two Geant4 nuclear reaction models: the Binary Light Ions Cascade and the Quantum Molecular Dynamic. From the comparison, it has been observed that the Binary Light Ions Cascade approximates the angular distributions of the fragment production cross sections better than the Quantum Molecular Dynamic model. However, the discrepancies observed between the experimental data and the Monte Carlo simulations lead to the conclusion that the prediction capability of both models needs to be improved at intermediate energies.

  5. Modelling charge transfer reactions with the frozen density embedding formalism

    SciTech Connect

    Pavanello, Michele; Neugebauer, Johannes

    2011-12-21

    The frozen density embedding (FDE) subsystem formulation of density-functional theory is a useful tool for studying charge transfer reactions. In this work charge-localized, diabatic states are generated directly with FDE and used to calculate electronic couplings of hole transfer reactions in two π-stacked nucleobase dimers of B-DNA: 5'-GG-3' and 5'-GT-3'. The calculations rely on two assumptions: the two-state model, and a small differential overlap between donor and acceptor subsystem densities. The resulting electronic couplings agree well with benchmark values for those exchange-correlation functionals that contain a high percentage of exact exchange. Instead, when semilocal GGA functionals are used the electronic couplings are grossly overestimated.

  6. Homogeneous models for mechanisms of surface reactions: Propylene ammoxidation

    SciTech Connect

    Chan, D.M.T.; Nugent, W.A.; Fultz, W.C.; Rose, D.C.; Tulip, T.H.

    1987-04-01

    The proposed active sites on the catalyst surface in heterogeneous propylene ammoxidation have been successfully modelled by structurally characterized pinacolato W(VI) tert-butylimido complexes. These compounds exist as an equilibrating mixture of amine-bis(imido) and imido-bis(amido) complexes; the position of this equilibrium depends on the electronic nature of the glycolate ligand. Both of the C-N bond-forming reactions proposed in recent studies by Grasselli et al. (1) have been reproduced using discrete Group VI d⁰ organoimido complexes under mild conditions suitable for detailed mechanistic studies. These reactions are: (1) oxidative trapping of radicals at molybdenum imido sites, and (2) migration of the allyl group from oxygen to an imido nitrogen atom.

  7. Chemical reaction fouling model for single-phase heat transfer

    SciTech Connect

    Panchal, C.B.; Watkinson, A.P.

    1993-08-01

    A fouling model was developed on the premise that the chemical reaction for generation of precursor can take place in the bulk fluid, in the thermal boundary layer, or at the fluid/wall interface, depending upon the interactive effects of fluid dynamics, heat and mass transfer, and the controlling chemical reaction. The analysis was used to examine the experimental data for fouling deposition of polyperoxides produced by autoxidation of indene in kerosene. The effects of fluid and wall temperatures for two flow geometries were analyzed. The results showed that the relative effects of physical parameters on the fouling rate would differ for the three fouling mechanisms; therefore, it is important to identify the controlling mechanism in applying the closed-flow-loop data to industrial conditions.

  8. Modelling charge transfer reactions with the frozen density embedding formalism.

    PubMed

    Pavanello, Michele; Neugebauer, Johannes

    2011-12-21

    The frozen density embedding (FDE) subsystem formulation of density-functional theory is a useful tool for studying charge transfer reactions. In this work charge-localized, diabatic states are generated directly with FDE and used to calculate electronic couplings of hole transfer reactions in two π-stacked nucleobase dimers of B-DNA: 5'-GG-3' and 5'-GT-3'. The calculations rely on two assumptions: the two-state model, and a small differential overlap between donor and acceptor subsystem densities. The resulting electronic couplings agree well with benchmark values for those exchange-correlation functionals that contain a high percentage of exact exchange. Instead, when semilocal GGA functionals are used the electronic couplings are grossly overestimated.

  9. Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code

    NASA Astrophysics Data System (ADS)

    Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.

    2017-02-01

    Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport model and the multimode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, and a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than in the original designs, reaching fractions of 0.49 (China), 0.66 (Europe), and 0.58 (India) of the design values. Furthermore, a sensitivity study finds that increasing the auxiliary heating power and the electron line average density above their design values enhances fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.

  10. The 2.5-Dimensional Photoionization Code ``PAN'' for Modeling of Axially Symmetric Nebulae: The Distinctive Features

    NASA Astrophysics Data System (ADS)

    Rokach, Oleg V.

    2005-11-01

    A multi-purpose spectrum synthesis code ``PAN'' (``Photoionized Axisymmetric Nebula'') is presented. The code allows computing of self-consistent steady-state models of morphologically-realistic axisymmetric gaseous, dust or gas+dust envelopes. Only the main features of the code ``PAN'' are enumerated here.

  11. A reaction-diffusion model of cytosolic hydrogen peroxide.

    PubMed

    Lim, Joseph B; Langford, Troy F; Huang, Beijing K; Deen, William M; Sikes, Hadley D

    2016-01-01

    As a signaling molecule in mammalian cells, hydrogen peroxide (H2O2) determines the thiol/disulfide oxidation state of several key proteins in the cytosol. Localization is a key concept in redox signaling; the concentrations of signaling molecules within the cell are expected to vary in time and in space in a manner that is essential for function. However, as a simplification, all theoretical studies of intracellular hydrogen peroxide and many experimental studies to date have treated the cytosol as a well-mixed compartment. In this work, we incorporate our previously reported reduced kinetic model of the network of reactions that metabolize hydrogen peroxide in the cytosol into a model that explicitly treats diffusion along with reaction. We modeled a bolus addition experiment, solved the model analytically, and used the resulting equations to quantify the spatiotemporal variations in intracellular H2O2 that result from this kind of perturbation to the extracellular H2O2 concentration. We predict that micromolar bolus additions of H2O2 to suspensions of HeLa cells (0.8 × 10(9) cells/l) result in increases in the intracellular concentration that are localized near the membrane. These findings challenge the assumption that intracellular concentrations of H2O2 are increased uniformly throughout the cell during bolus addition experiments and provide a theoretical basis for differing phenotypic responses of cells to intracellular versus extracellular perturbations to H2O2 levels.
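
    The localization result is consistent with a simple scaling estimate: with diffusivity D and an effective first-order consumption rate k, an intracellular H2O2 gradient decays over a length sqrt(D/k). Both values below are assumed round numbers for illustration, not the paper's fitted parameters:

```python
import math

D = 1.4e-9   # m^2/s, assumed small-molecule diffusivity in cytosol
k = 1.0e2    # 1/s, assumed lumped first-order peroxidase consumption rate
lam = math.sqrt(D / k)   # characteristic penetration depth in metres
```

    With these assumed values the penetration depth comes out at a few micrometres, comparable to the cell size, which is why a bolus can raise H2O2 near the membrane without uniformly filling the cytosol.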

  12. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    NASA Astrophysics Data System (ADS)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.

  13. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    SciTech Connect

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960s, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method of Moments, the Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant-sized antennas. There are several reasons for this, including the systematic updating and extension of its capabilities, extensive user-oriented documentation, and the accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC worldwide. 23 refs., 10 figs.

  14. A reaction-based paradigm to model reactive chemical transport in groundwater with general kinetic and equilibrium reactions.

    PubMed

    Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B

    2007-06-16

    This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
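
    The decomposition into kinetic-variables can be illustrated on a toy network; the species, reactions, and variable choices below are hypothetical, and the paper derives the combinations by Gauss-Jordan column reduction of the full reaction network rather than by inspection:

```python
import numpy as np

# Species [A, B, C] (M = 3), one fast equilibrium reaction A <-> B (N_E = 1),
# one kinetic reaction B -> C. Linear combinations orthogonal to the
# equilibrium stoichiometry are "kinetic-variables": their transport
# equations carry no equilibrium rate term.
S_eq = np.array([[-1.0],   # A   stoichiometric column of the
                 [ 1.0],   # B   equilibrium reaction
                 [ 0.0]])  # C

# M - N_E = 2 kinetic-variables; here w1 = A + B and w2 = C both work,
# because the equilibrium reaction converts A to B without changing A + B.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# The equilibrium rate r_eq enters the species equations as S_eq * r_eq,
# so it vanishes identically from the kinetic-variable equations:
residual = W @ S_eq
```

    Solving transport for `w1` and `w2` and recovering A and B from the algebraic equilibrium condition is the "robust and efficient numerical integration" the abstract refers to.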

  15. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. . Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U. C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite-cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a nonideal switch, and which can be added to existing SPICE circuits without changing the SPICE code itself. The effect of fast rise time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and for incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.

  16. ICRCCM (InterComparison of Radiation Codes used in Climate Models) Phase 2: Verification and calibration of radiation codes in climate models

    SciTech Connect

    Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.

    1990-01-01

    Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). SPECTRE will establish an absolute standard against which to compare models, and will aim to remove the "hidden variables" (unknown humidities, aerosols, etc.) which radiation modelers have invoked to excuse disagreements with observation. The data to be collected during SPECTRE will form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. Overall, the project is proceeding much as had been anticipated in the original proposal. The most significant accomplishments to date include the completion of the analysis of the original ICRCCM calculations, the completion of the initial sensitivity analysis of the radiation calculations for the effects of uncertainties in the measurement of water vapor and temperature, and the acquisition and testing of the inexpensive spectrometers for use in the field experiment. The sensitivity analysis and the spectrometer tests have given us much more confidence that the field experiment will yield the quality of data necessary to make significant tests of, and improvements to, radiative transfer models used in climate studies.

  17. Measurement and modeling of the cross sections for the reaction 230Th(3He,3n)230U

    NASA Astrophysics Data System (ADS)

    Morgenstern, A.; Abbas, K.; Simonelli, F.; Capote, R.; Sin, M.; Zielinska, B.; Bruchertseifer, F.; Apostolidis, C.

    2013-06-01

    230U and its daughter nuclide 226Th are promising therapeutic nuclides for application in targeted α therapy of cancer. We investigated the feasibility of producing 230U/226Th via irradiation of 230Th with 3He particles according to the reaction 230Th(3He,3n)230U. The experimental excitation function for this reaction is reported here. Cross sections were measured by using thin targets of 230Th prepared by electrodeposition, and 230U yields were analyzed by using α spectrometry. Beam intensities were obtained via monitor reactions on aluminum foils by using high-resolution γ spectrometry and International Atomic Energy Agency recommended cross sections. Incident particle energies were calculated by using the srim-2003 code. The experimental cross sections for the reaction 230Th(3He,3n)230U are in good agreement with model calculations by the empire-3 code once breakup and transfer reactions are properly considered in the incident channel. The obtained cross sections are too low to allow for the production of 230U/226Th in clinically relevant levels.

  18. Generic reactive transport codes as flexible tools to integrate soil organic matter degradation models with water, transport and geochemistry in soils

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik; Gérard, Fréderic; Mayer, Uli; Simunek, Jirka; Leterme, Bertrand

    2016-04-01

    A large number of organic matter degradation, CO2 transport and dissolved organic matter models have been developed during the last decades. However, organic matter degradation models are in many cases strictly hard-coded in terms of organic pools, degradation kinetics and dependency on environmental variables. The scientific input of the model user is typically limited to the adjustment of input parameters. In addition, the coupling with geochemical soil processes, including aqueous speciation, pH-dependent sorption and colloid-facilitated transport, is not incorporated in many of these models, strongly limiting the scope of their application. Furthermore, the most comprehensive organic matter degradation models are combined with simplified representations of flow and transport processes in the soil system. We illustrate the capability of generic reactive transport codes to overcome these shortcomings. The formulations of reactive transport codes include a physics-based continuum representation of flow and transport processes, while biogeochemical reactions can be described as equilibrium processes constrained by thermodynamic principles and/or kinetic reaction networks. The flexibility of this type of code allows for straightforward extension of reaction networks, permits the inclusion of new model components (e.g., organic matter pools, rate equations, parameter dependency on environmental conditions) and thereby facilitates an application-tailored implementation of organic matter degradation models and related processes. A numerical benchmark involving two reactive transport codes (HPx and MIN3P) demonstrates how the process-based simulation of transient variably saturated water flow (Richards equation), solute transport (advection-dispersion equation), heat transfer and diffusion in the gas phase can be combined with a flexible implementation of a soil organic matter degradation model.
The benchmark includes the production of leachable organic matter
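As a concrete illustration of the kind of component such a flexible reaction network would host, the following is a minimal sketch (not the HPx or MIN3P implementation) of a two-pool, first-order soil organic matter degradation model with a Q10 temperature dependence; all pool names, rate constants and parameter values are illustrative assumptions:

```python
# Minimal sketch: two-pool first-order soil organic matter decay with a
# Q10 temperature response, integrated with forward Euler.
# Pool names and all numerical values are illustrative assumptions.

def q10_factor(temp_c, q10=2.0, ref_c=20.0):
    """Multiplicative rate modifier for temperature (Q10 formulation)."""
    return q10 ** ((temp_c - ref_c) / 10.0)

def simulate(pools, rates, temp_c=15.0, dt=0.1, steps=1000):
    """Integrate dC_i/dt = -k_i * f(T) * C_i; return pools and evolved CO2."""
    pools = dict(pools)
    co2 = 0.0
    f = q10_factor(temp_c)
    for _ in range(steps):
        for name, k in rates.items():
            flux = k * f * pools[name] * dt      # first-order decay this step
            pools[name] -= flux
            co2 += flux                          # all decayed C goes to CO2 here
    return pools, co2

pools0 = {"fast": 10.0, "slow": 50.0}            # g C / kg soil (illustrative)
rates = {"fast": 0.05, "slow": 0.001}            # 1/day (illustrative)
final, co2 = simulate(pools0, rates)
print(final, co2)
```

Extending such a scheme with extra pools or environment-dependent rate modifiers is a dictionary entry rather than a code rewrite, which is the flexibility argument made above.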

  19. Modeling the Reactions of Energetic Materials in the Condensed Phase

    SciTech Connect

    Fried, L E; Manaa, M R; Lewis, J P

    2003-12-03

    High explosive (HE) materials are unique for having a strong exothermic reactivity, which has made them desirable for both military and commercial applications. Although the history of HE materials is long, their condensed-phase properties are poorly understood. Understanding the condensed-phase properties of HE materials is important for determining stability and performance. Information regarding HE material properties (for example, the physical, chemical, and mechanical behaviors of the constituents in plastic-bonded explosive, or PBX, formulations) is necessary for efficiently building the next generation of explosives as the quest for more powerful energetic materials (in terms of energy per volume) moves forward. In addition, understanding the reaction mechanisms has important ramifications for disposing of such materials safely and cheaply, as there exist vast stockpiles of HE materials, with corresponding contamination of earth and groundwater at these sites as well as at military testing sites. The ability to model chemical reaction processes in condensed-phase energetic materials is rapidly progressing. Chemical equilibrium modeling is a mature technique with some limitations. Progress in this area continues, but is hampered by a lack of knowledge of condensed-phase reaction mechanisms and rates. Atomistic modeling is much more computationally intensive, and is currently limited to very short time scales. Nonetheless, this methodology promises to yield the first reliable insights into the condensed-phase processes responsible for high explosive detonation. Further work is necessary to extend the timescales involved in atomistic simulations. Recent work implementing thermostat methods appropriate to shocks promises to overcome some of these difficulties. Most current work on energetic material reactivity assumes that electronically adiabatic processes dominate. The role of excited states is becoming clearer, however. 
These states are not accessible in perfect

  20. Systematic development of reduced reaction mechanisms for dynamic modeling

    NASA Technical Reports Server (NTRS)

    Frenklach, M.; Kailasanath, K.; Oran, E. S.

    1986-01-01

    A method for systematically developing a reduced chemical reaction mechanism for dynamic modeling of chemically reactive flows is presented. The method is based on the postulate that if a reduced reaction mechanism faithfully describes the time evolution of both thermal and chain reaction processes characteristic of a more complete mechanism, then the reduced mechanism will describe the chemical processes in a chemically reacting flow with approximately the same degree of accuracy. Here this postulate is tested by producing a series of mechanisms of reduced accuracy, which are derived from a full detailed mechanism for methane-oxygen combustion. These mechanisms were then tested in a series of reactive flow calculations in which a large-amplitude sinusoidal perturbation is applied to a system that is initially quiescent and whose temperature is high enough to start ignition processes. Comparison of the results for systems with and without convective flow shows that this approach produces reduced mechanisms that are useful for calculations of explosions and detonations. Extensions and applicability to flames are discussed.

  1. Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear energy transfer (LET), range (R), and absorption in tissue-equivalent material for a given charge (Z), mass number (A), and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from the primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes, by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. 
The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their
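One of the biophysical quantities listed above, the Poisson distribution of ion hits for a specified cellular area, can be sketched directly; the fluence and area values below are illustrative assumptions, not NSRL beam parameters:

```python
# Sketch of the Poisson hit statistics mentioned in the abstract: for a beam
# of fluence F (ions per unit area) and a sensitive area A, the mean number
# of hits is m = F * A and P(k hits) = m^k exp(-m) / k!.
# The fluence and area values are illustrative assumptions.
import math

def hit_probability(fluence_per_um2, area_um2, k):
    """Probability that a cell of the given area receives exactly k ion hits."""
    m = fluence_per_um2 * area_um2
    return m ** k * math.exp(-m) / math.factorial(k)

fluence = 0.01       # ions per um^2 (illustrative)
area = 100.0         # nuclear area in um^2 (illustrative)
p0 = hit_probability(fluence, area, 0)   # fraction of cells receiving no hit
print(p0)
```

For the illustrative numbers the mean is m = 1 hit per cell, so the zero-hit fraction is e^-1, about 37 percent of cells.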

  2. Reaction-diffusion-branching models of stock price fluctuations

    NASA Astrophysics Data System (ADS)

    Tang, Lei-Han; Tian, Guang-Shan

    Several models of stock trading (Bak et al., Physica A 246 (1997) 430.) are analyzed in analogy with one-dimensional, two-species reaction-diffusion-branching processes. Using heuristic and scaling arguments, we show that the short-time market price variation is subdiffusive with a Hurst exponent H=1/4. Biased diffusion towards the market price and blind-eyed copying lead to crossovers to the empirically observed random-walk behavior ( H=1/2) at long times. The calculated crossover forms and diffusion constants are shown to agree well with simulation data.
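The long-time random-walk behavior (H = 1/2) mentioned above can be illustrated generically by estimating a Hurst exponent from the mean-square displacement scaling <x(t)^2> ~ t^(2H); this is a plain random-walk demo, not a simulation of the trading model itself:

```python
# Generic demo (not the reaction-diffusion-branching market model): estimate
# a Hurst exponent H from the scaling <x(t)^2> ~ t^(2H) of an unbiased
# random walk, for which H = 1/2. Walk counts and times are illustrative.
import random, math

random.seed(1)
walks = 500
t_short, t_long = 100, 4000
msd = {t_short: 0.0, t_long: 0.0}        # mean-square displacement at two times
for _ in range(walks):
    x = 0
    for t in range(1, t_long + 1):
        x += random.choice((-1, 1))
        if t in msd:
            msd[t] += x * x
for t in msd:
    msd[t] /= walks

# Slope of log(MSD) vs log(t) gives 2H; H should come out near 1/2.
H = 0.5 * math.log(msd[t_long] / msd[t_short]) / math.log(t_long / t_short)
print(H)
```

The subdiffusive H = 1/4 regime of the paper arises from the two-species reaction dynamics and would require simulating those rules; the point here is only how H is read off from the MSD scaling.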

  3. ZGB surface reaction model with high diffusion rates

    NASA Astrophysics Data System (ADS)

    Evans, J. W.

    1993-02-01

    The diffusionless ZGB (monomer-dimer) surface reaction model exhibits a discontinuous transition to a monomer-poisoned state when the fraction of monomer adsorption attempts exceeds 0.525. It has been claimed that this transition shifts to 2/3 with introduction of rapid diffusion of the monomer species, or of both species. We show this is not the case, 2/3 representing the spinodal rather than the transition point. For equal diffusion rates of both species, we find that the transition only shifts to 0.5951±0.0002.
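For readers unfamiliar with the model, the ZGB adsorption-reaction rules can be sketched as a small Monte Carlo simulation; this toy lattice is diffusionless and far too small to resolve the transition points quoted above, and serves only to illustrate the rules:

```python
# Toy Monte Carlo sketch of the ZGB monomer-dimer (CO + O2) surface reaction
# on a periodic square lattice. y is the monomer (CO) adsorption fraction.
# Lattice size and step count are illustrative; locating the poisoning
# transitions quoted in the abstract needs far larger simulations.
import random

L = 20
EMPTY, CO, O = 0, 1, 2

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def react(grid, i, j, partner):
    """If (i, j) has a neighbor holding `partner`, remove both (CO2 desorbs)."""
    nbrs = neighbors(i, j)
    random.shuffle(nbrs)
    for ni, nj in nbrs:
        if grid[ni][nj] == partner:
            grid[i][j] = EMPTY
            grid[ni][nj] = EMPTY
            return True
    return False

def step(grid, y):
    i, j = random.randrange(L), random.randrange(L)
    if random.random() < y:                       # monomer (CO) attempt
        if grid[i][j] == EMPTY:
            grid[i][j] = CO
            react(grid, i, j, O)                  # instant CO + O -> CO2
    else:                                         # dimer (O2) needs two empties
        ni, nj = random.choice(neighbors(i, j))
        if grid[i][j] == EMPTY and grid[ni][nj] == EMPTY:
            grid[i][j] = grid[ni][nj] = O
            react(grid, i, j, CO)                 # each new O may react
            react(grid, ni, nj, CO)

random.seed(0)
grid = [[EMPTY] * L for _ in range(L)]
for _ in range(20000):
    step(grid, 0.1)                               # low y: oxygen-rich regime
co_cov = sum(row.count(CO) for row in grid) / L ** 2
o_cov = sum(row.count(O) for row in grid) / L ** 2
print(co_cov, o_cov)
```

At this low monomer fraction the surface fills with oxygen; rapid diffusion, the subject of the paper, would require additional hop moves between the adsorption attempts.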

  4. ZGB surface reaction model with high diffusion rates

    SciTech Connect

    Evans, J.W.

    1993-02-01

    The diffusionless ZGB (monomer-dimer) surface reaction model exhibits a discontinuous transition to a monomer-poisoned state when the fraction of monomer adsorption attempts exceeds 0.525. It has been claimed that this transition shifts to 2/3 with introduction of rapid diffusion of the monomer species, or of both species. We show this is not the case, 2/3 representing the spinodal rather than the transition point. For equal diffusion rates of both species, we find that the transition only shifts to 0.5951±0.0002.

  5. Golden rule kinetics of transfer reactions in condensed phase: The microscopic model of electron transfer reactions in disordered solid matrices

    NASA Astrophysics Data System (ADS)

    Basilevsky, M. V.; Odinokov, A. V.; Titov, S. V.; Mitina, E. A.

    2013-12-01

    postulated in the existing theories of the ET. Our alternative dynamic ET model for local modes immersed in the continuum harmonic medium is formulated for both classical and quantum regimes, and accounts explicitly for the mode/medium interaction. The kinetics of the energy exchange between the local ET subsystem and the surrounding environment essentially determine the total ET rate. The efficient computer code for rate computations is elaborated on. The computations are available for a wide range of system parameters, such as the temperature, external field, local mode frequency, and characteristics of mode/medium interaction. The relation of the present approach to the Marcus ET theory and to the quantum-statistical reaction rate theory [V. G. Levich and R. R. Dogonadze, Dokl. Akad. Nauk SSSR, Ser. Fiz. Khim. 124, 213 (1959); J. Ulstrup, Charge Transfer in Condensed Media (Springer, Berlin, 1979); M. Bixon and J. Jortner, Adv. Chem. Phys. 106, 35 (1999)] underlying it is discussed and illustrated by the results of computations for practically important target systems.

  6. Golden rule kinetics of transfer reactions in condensed phase: the microscopic model of electron transfer reactions in disordered solid matrices.

    PubMed

    Basilevsky, M V; Odinokov, A V; Titov, S V; Mitina, E A

    2013-12-21

    postulated in the existing theories of the ET. Our alternative dynamic ET model for local modes immersed in the continuum harmonic medium is formulated for both classical and quantum regimes, and accounts explicitly for the mode/medium interaction. The kinetics of the energy exchange between the local ET subsystem and the surrounding environment essentially determine the total ET rate. The efficient computer code for rate computations is elaborated on. The computations are available for a wide range of system parameters, such as the temperature, external field, local mode frequency, and characteristics of mode/medium interaction. The relation of the present approach to the Marcus ET theory and to the quantum-statistical reaction rate theory [V. G. Levich and R. R. Dogonadze, Dokl. Akad. Nauk SSSR, Ser. Fiz. Khim. 124, 213 (1959); J. Ulstrup, Charge Transfer in Condensed Media (Springer, Berlin, 1979); M. Bixon and J. Jortner, Adv. Chem. Phys. 106, 35 (1999)] underlying it is discussed and illustrated by the results of computations for practically important target systems.

  7. Solar optical codes evaluation for modeling and analyzing complex solar receiver geometries

    NASA Astrophysics Data System (ADS)

    Yellowhair, Julius; Ortega, Jesus D.; Christian, Joshua M.; Ho, Clifford K.

    2014-09-01

    Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with a flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics code such as ANSYS Fluent. This approach eliminates the need for using discrete ordinates or discrete transfer radiation models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis of the receiver.

  8. Chemistry in disks. IV. Benchmarking gas-grain chemical models with surface reactions

    NASA Astrophysics Data System (ADS)

    Semenov, D.; Hersant, F.; Wakelam, V.; Dutrey, A.; Chapillon, E.; Guilloteau, St.; Henning, Th.; Launhardt, R.; Piétu, V.; Schreyer, K.

    2010-11-01

    Context. We describe and benchmark two sophisticated chemical models developed by the Heidelberg and Bordeaux astrochemistry groups. Aims: The main goal of this study is to elaborate on a few well-described tests for state-of-the-art astrochemical codes covering a range of physical conditions and chemical processes, in particular those aimed at constraining current and future interferometric observations of protoplanetary disks. Methods: We considered three physical models: a cold molecular cloud core, a hot core, and an outer region of a T Tauri disk. Our chemical network (for both models) is based on the original gas-phase osu_03_2008 ratefile and includes gas-grain interactions and a set of surface reactions for the H-, O-, C-, S-, and N-bearing molecules. The benchmarking was performed with the increasing complexity of the considered processes: (1) the pure gas-phase chemistry, (2) the gas-phase chemistry with accretion and desorption, and (3) the full gas-grain model with surface reactions. The chemical evolution is modeled within 10^9 years using atomic initial abundances with heavily depleted metals and hydrogen in its molecular form. Results: The time-dependent abundances calculated with the two chemical models are essentially the same for all considered physical cases and for all species, including the most complex polyatomic ions and organic molecules. This result, however, required a lot of effort to make all necessary details consistent through the model runs, e.g., definition of the gas particle density, density of grain surface sites, or the strength and shape of the UV radiation field. Conclusions: The reference models and the benchmark setup, along with the two chemical codes and resulting time-dependent abundances are made publicly available on the internet. 
This will facilitate and ease the development of other astrochemical models and provide nonspecialists with a detailed description of the model ingredients and requirements to analyze the cosmic

  9. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report focuses on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.

  10. An algebraic model of an associative noise-like coding memory.

    PubMed

    Bottini, S

    1980-01-01

    A mathematical model of an associative memory is presented, sharing with the optical holography memory systems the properties which establish an analogy with biological memory. This memory system--developed from Gabor's model of memory--is based on a noise-like coding of the information by which it realizes a distributed, damage-tolerant, "equipotential" storage through simultaneous state changes of discrete substratum elements. Each two associated items being stored are coded by each other by means of two noise-like patterns obtained from them through a randomizing preprocessing. The algebraic transformations operating the information storage and retrieval are matrix-vector products involving Toeplitz-type matrices. Several noise-like coded memory traces are superimposed on a common substratum without crosstalk interference; moreover, extraneous noise added to these memory traces does not injure the stored information. The main capabilities shown by this memory model are: i) the selective, complete recovery of stored information from incomplete keys, both mixed with extraneous information and translated from the position learnt; ii) a dynamic recollection where the information just recovered acts as a new key for a sequential retrieval process; iii) context-dependent responses. The hypothesis that the information is stored in the nervous system through a noise-like coding is suggested. The model has been simulated on a digital computer using bidimensional images.
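The superposition and matrix-vector retrieval ideas can be illustrated with a simplified correlation-matrix memory; note this is a generic sketch, not the paper's Toeplitz/noise-like coding scheme, and all dimensions are illustrative:

```python
# Generic sketch of distributed, superimposed associative storage: several
# (key, item) pairs are stored as summed outer products on one substrate
# matrix, and an item is retrieved by a matrix-vector product with its key.
# This is a simplification: the paper's scheme uses Toeplitz matrices built
# from noise-like codes. Dimensions and counts are illustrative.
import random

N = 256                                  # substrate dimension (illustrative)
P = 3                                    # number of stored associations
random.seed(3)

keys = [[random.choice((-1.0, 1.0)) for _ in range(N)] for _ in range(P)]
items = [[random.choice((-1.0, 1.0)) for _ in range(N)] for _ in range(P)]

# Storage: superimpose the outer products key_p * item_p^T on one matrix.
M = [[sum(keys[p][i] * items[p][j] for p in range(P)) for j in range(N)]
     for i in range(N)]

def recall(key):
    """Retrieve: matrix-vector product key^T M, then binarize."""
    raw = [sum(key[i] * M[i][j] for i in range(N)) for j in range(N)]
    return [1.0 if v > 0 else -1.0 for v in raw]

overlap = sum(r == x for r, x in zip(recall(keys[0]), items[0])) / N

# Recovery from an incomplete key: half of the key is missing (zeroed).
partial = keys[0][:N // 2] + [0.0] * (N // 2)
overlap_partial = sum(r == x for r, x in zip(recall(partial), items[0])) / N
print(overlap, overlap_partial)
```

Because the random keys are nearly orthogonal, the cross-talk between superimposed traces stays small and even a half-key recovers the stored item almost perfectly, which mirrors capability (i) in the abstract.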

  11. Stimulation at Desert Peak -modeling with the coupled THM code FEHM

    SciTech Connect

    Kelkar, Sharad

    2013-04-30

    Numerical modeling of the 2011 shear stimulation at the Desert Peak well 27-15. This submission contains the FEHM executable code for a 64-bit PC Windows 7 machine, and the input and output files for the results presented in the included paper from the ARMA 2013 meeting.

  12. Assessment of Programming Language Learning Based on Peer Code Review Model: Implementation and Experience Report

    ERIC Educational Resources Information Center

    Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying

    2012-01-01

    The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…

  13. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true Voluntary National Model Building Codes E Exhibit E to Subpart A of Part 1924 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING... OF AGRICULTURE PROGRAM REGULATIONS CONSTRUCTION AND REPAIR Planning and Performing Construction...

  14. A Model Code of Ethics for the Use of Computers in Education.

    ERIC Educational Resources Information Center

    Shere, Daniel T.; Cannings, Terence R.

    Two Delphi studies were conducted by the Ethics and Equity Committee of the International Council for Computers in Education (ICCE) to obtain the opinions of experts on areas that should be covered by ethical guides for the use of computers in education and for software development, and to develop a model code of ethics for each of these areas.…

  15. Dust in tokamaks: An overview of the physical model of the dust in tokamaks code

    NASA Astrophysics Data System (ADS)

    Bacharis, Minas; Coppins, Michael; Allen, John E.

    2010-04-01

    The dynamical behavior of dust produced in tokamaks is an important issue for fusion. In this work, the current status of the dust in tokamaks (DTOKS) [J. D. Martin et al., Europhys. Lett. 83, 65001 (2008)] dust transport code will be presented. A detailed description of the various elements of its underlying physical model will be given, together with representative simulation results for the Mega Amp Spherical Tokamak (MAST) [A. Sykes et al., Nucl. Fusion 41, 1423 (2001)]. Furthermore, a brief description of the various components of the dust transport (DUSTT) [R. D. Smirnov et al., Plasma Phys. Controlled Fusion 49, 347 (2007)] code will also be presented in comparison with DTOKS.

  16. Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1994-01-01

    An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.

  17. Modeling of ion orbit loss and intrinsic toroidal rotation with the COGENT code

    NASA Astrophysics Data System (ADS)

    Dorf, M.; Dorr, M.; Cohen, R.; Rognlien, T.; Hittinger, J.

    2014-10-01

    We discuss recent advances in cross-separatrix neoclassical transport simulations with COGENT, a continuum gyro-kinetic code being developed by the Edge Simulation Laboratory (ESL) collaboration. The COGENT code models the axisymmetric transport properties of edge plasmas including the effects of nonlinear (Fokker-Planck) collisions and a self-consistent electrostatic potential. Our recent work has focused on studies of ion orbit loss and the associated toroidal rotation driven by this mechanism. The results of the COGENT simulations are discussed and analyzed for the parameters of the DIII-D experiment. Work performed for USDOE at LLNL under Contract DE-AC52-07NA27344.

  18. Triple-{alpha} reaction rate constrained by stellar evolution models

    SciTech Connect

    Suda, Takuma; Hirschi, Raphael; Fujimoto, Masayuki Y.

    2012-11-12

    We investigate the quantitative constraint on the triple-α reaction rate based on stellar evolution theory, motivated by the recent significant revision of the rate proposed by nuclear physics calculations. Targeted stellar models were computed in order to investigate the impact of that rate in the mass range of 0.8 ≤ M/M_⊙ ≤ 25 and in the metallicity range between Z = 0 and Z = 0.02. The revised rate has a significant impact on the evolution of low- and intermediate-mass stars, while its influence on the evolution of massive stars (M > 10 M_⊙) is minimal. We find that employing the revised rate suppresses helium shell flashes on the AGB phase for stars in the initial mass range 0.8 ≤ M/M_⊙ ≤ 6, which is contradictory to what is observed. The absence of helium shell flashes is due to the weak temperature dependence of the revised triple-α reaction cross section at the temperature involved. In our models, it is suggested that the temperature dependence of the cross section should have at least ν > 10 at T = 1-1.2 × 10^8 K, where the cross section is proportional to T^ν. We also derive the helium ignition curve to estimate the maximum cross section to retain the low-mass first red giants. The semi-analytically derived ignition curves suggest that the reaction rate should be less than ~10^-29 cm^6 s^-1 mole^-2 at ≈10^7.8 K, which corresponds to about three orders of magnitude larger than that of the NACRE compilation.
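The ν criterion above can be checked against the standard textbook form of the classical (resonant) triple-α rate, r ∝ T9^(-3) exp(-4.4027/T9) with T9 = T/10^9 K, for which the effective exponent is ν = -3 + 4.4027/T9; the sketch below evaluates it numerically at T = 10^8 K (this uses the conventional rate form, not the revised rate discussed in the paper):

```python
# Check of the nu > 10 criterion for the conventional resonant triple-alpha
# temperature dependence r ~ T9^-3 * exp(-4.4027/T9), T9 = T / 1e9 K.
# This is the standard textbook form, not the revised rate of the paper.
import math

def rate(t9):
    """Classical resonant triple-alpha temperature dependence (unnormalized)."""
    return t9 ** -3 * math.exp(-4.4027 / t9)

def effective_nu(t9, eps=1e-4):
    """nu = d ln(rate) / d ln(T), by centered finite difference."""
    lo, hi = t9 * (1.0 - eps), t9 * (1.0 + eps)
    return (math.log(rate(hi)) - math.log(rate(lo))) / (math.log(hi) - math.log(lo))

nu = effective_nu(0.1)    # T = 1e8 K, i.e. T9 = 0.1
print(nu)                 # analytically nu = -3 + 4.4027/0.1 = 41.03
```

The conventional rate thus has ν ≈ 41 at 10^8 K, comfortably above the ν > 10 threshold the stellar models require; the paper's argument is that the revised rate falls below it.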

  19. A simple modelling of mass diffusion effects on condensation with noncondensable gases for the CATHARE Code

    SciTech Connect

    Coste, P.; Bestion, D.

    1995-09-01

    This paper presents a simple modelling of mass diffusion effects on condensation. In the presence of noncondensable gases, the mass diffusion near the interface is modelled using the heat and mass transfer analogy and normally requires an iterative procedure to calculate the interface temperature. Simplifications of the model and of the solution procedure are used without important degradation of the predictions. The model is assessed on experimental data for both film condensation in vertical tubes and direct contact condensation in horizontal tubes, including air-steam, nitrogen-steam and helium-steam data. It is implemented in the CATHARE code, a French system code for nuclear reactor thermal hydraulics developed by CEA, EDF, and FRAMATOME.
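The iterative interface-temperature calculation described above can be sketched as follows; this is a heavily simplified toy of the general approach (dilute mass-transfer driving force, fixed coefficients), and every property value and coefficient is an illustrative assumption rather than a CATHARE correlation:

```python
# Toy sketch of the interface-temperature problem for condensation with a
# noncondensable gas: the interface vapor fraction follows saturation at
# T_i, the condensation mass flux follows a mass-transfer coefficient
# (heat/mass transfer analogy), and T_i closes the interface energy balance.
# All coefficients and property values are illustrative assumptions.
import math

P = 1.0e5            # total pressure, Pa
T_bulk = 370.0       # bulk gas temperature, K
T_wall = 300.0       # wall temperature, K
x_bulk = 0.7         # bulk steam mole fraction (rest is noncondensable)
h_gas = 50.0         # gas-side heat transfer coefficient, W/m2/K
h_liq = 5000.0       # film/liquid-side coefficient, W/m2/K
h_m = 0.05           # mass transfer coefficient, m/s (from the analogy)
rho_v = 0.6          # vapor density, kg/m3
h_fg = 2.26e6        # latent heat, J/kg

def p_sat(T):
    """Clausius-Clapeyron estimate of water saturation pressure (Pa)."""
    return 101325.0 * math.exp(-4890.0 * (1.0 / T - 1.0 / 373.15))

def residual(T_i):
    """Interface energy balance: latent + gas-side heat in, liquid-side out."""
    x_i = min(p_sat(T_i) / P, x_bulk)            # interface vapor fraction
    m_flux = h_m * rho_v * max(x_bulk - x_i, 0.0)  # dilute-limit mass flux
    q_in = m_flux * h_fg + h_gas * (T_bulk - T_i)
    q_out = h_liq * (T_i - T_wall)
    return q_in - q_out

# The residual decreases monotonically with T_i here, so bisect on it.
lo, hi = T_wall, T_bulk
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0.0:
        lo = mid
    else:
        hi = mid
T_i = 0.5 * (lo + hi)
print(T_i)
```

The noncondensable effect is visible in the structure: a higher interface temperature raises the saturation pressure, shrinks the vapor-fraction driving force, and chokes the condensation flux, which is exactly why the balance must be solved iteratively.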

  20. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results prove the precision of our models.
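The high-rate distortion behavior that such models generalize can be checked numerically for the simplest case, a cubic lattice with step Δ, where the per-sample MSE tends to Δ²/12; the source distribution below is an illustrative choice:

```python
# Numerical check of the classic high-rate result underlying such distortion
# models: for cubic-lattice (uniform) quantization with step delta, the
# per-sample MSE approaches delta^2 / 12. The source distribution and the
# step size are illustrative choices.
import random

random.seed(7)
delta = 0.25
n = 200_000
samples = [random.uniform(-4.0, 4.0) for _ in range(n)]
quantized = [delta * round(x / delta) for x in samples]   # nearest lattice point
mse = sum((x - q) ** 2 for x, q in zip(samples, quantized)) / n
print(mse, delta ** 2 / 12.0)
```

At high rate the quantization error is close to uniform on [-Δ/2, Δ/2], whose variance is Δ²/12; the paper's contribution is precisely a rate model that does not rely on this high-rate assumption.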

  1. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
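The finite-temperature ingredients named above (occupations, a particle-number constraint via a chemical potential, one-body entropy) can be sketched generically; this is standard mean-field thermodynamics with illustrative level energies, not the HFgradT.py implementation:

```python
# Generic sketch of the finite-temperature mean-field ingredients the
# abstract names (not the HFgradT.py internals): Fermi-Dirac occupations
# f_i = 1 / (1 + exp((e_i - mu)/T)), with mu tuned so that sum(f) matches
# the nucleon number, and one-body entropy S = -sum[f ln f + (1-f) ln(1-f)].
# The single-particle energies and temperature are illustrative.
import math

def occupations(energies, mu, T):
    return [1.0 / (1.0 + math.exp((e - mu) / T)) for e in energies]

def find_mu(energies, n_part, T, lo=-50.0, hi=50.0):
    """Bisect on mu so the mean particle number matches n_part."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sum(occupations(energies, mid, T)) < n_part:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def entropy(f):
    s = 0.0
    for x in f:
        if 0.0 < x < 1.0:
            s -= x * math.log(x) + (1.0 - x) * math.log(1.0 - x)
    return s

energies = [-8.0, -6.5, -5.0, -3.0, -1.0, 0.5, 2.0]   # illustrative levels, MeV
T = 1.0                                               # temperature, MeV
mu = find_mu(energies, 4.0, T)
f = occupations(energies, mu, T)
print(mu, entropy(f))
```

At T = 0 the occupations collapse to 0/1 filling and the entropy vanishes, which is the zero-temperature limit the HFgradZ.py branch of the code addresses.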

  2. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    NASA Technical Reports Server (NTRS)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of the separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  3. A user's manual for the method of moments Aircraft Modeling Code (AMC)

    NASA Technical Reports Server (NTRS)

    Peters, M. E.; Newman, E. H.

    1989-01-01

    This report serves as a user's manual for the Aircraft Modeling Code or AMC. AMC is a user-oriented computer code, based on the method of moments (MM), for the analysis of the radiation and/or scattering from geometries consisting of a main body or fuselage shape with attached wings and fins. The shape of the main body is described by defining its cross section at several stations along its length. Wings, fins, rotor blades, and radiating monopoles can then be attached to the main body. Although AMC was specifically designed for aircraft or helicopter shapes, it can also be applied to missiles, ships, submarines, jet inlets, automobiles, spacecraft, etc. The problem geometry and run control parameters are specified via a two character command language input format. The input command language is described and several examples which illustrate typical code inputs and outputs are also included.

  4. A users manual for the method of moments Aircraft Modeling Code (AMC), version 2

    NASA Technical Reports Server (NTRS)

    Peters, M. E.; Newman, E. H.

    1994-01-01

    This report serves as a user's manual for Version 2 of the 'Aircraft Modeling Code' or AMC. AMC is a user-oriented computer code, based on the method of moments (MM), for the analysis of the radiation and/or scattering from geometries consisting of a main body or fuselage shape with attached wings and fins. The shape of the main body is described by defining its cross section at several stations along its length. Wings, fins, rotor blades, and radiating monopoles can then be attached to the main body. Although AMC was specifically designed for aircraft or helicopter shapes, it can also be applied to missiles, ships, submarines, jet inlets, automobiles, spacecraft, etc. The problem geometry and run control parameters are specified via a two character command language input format. This report describes the input command language and also includes several examples which illustrate typical code inputs and outputs.

  5. MARLEY: Model of Argon Reaction Low Energy Yields

    NASA Astrophysics Data System (ADS)

    Gardiner, Steven; Bilton, Kyle; Grant, Christopher; Pantic, Emilija; Svoboda, Robert

    2015-10-01

    Core-collapse supernovae are sources of tremendous numbers of neutrinos with energies of up to about 50 MeV. In recent years, there has been growing interest in building detectors that are sensitive to supernova neutrinos. Such detectors can provide information about the initial stages of stellar collapse, early warning signals for light emission from supernovae, and opportunities to study neutrino oscillation physics over astronomical distances. In an effort to enable supernova neutrino detection in next-generation experiments like DUNE, the CAPTAIN collaboration plans to make the first direct measurement of cross sections for neutrino interactions on argon in the supernova energy regime. To help predict neutrino event signatures in the CAPTAIN liquid argon time projection chamber (LArTPC), we have developed a first-of-its-kind Monte Carlo event generator called MARLEY (Model of Argon Reaction Low Energy Yields). This generator attempts to model the complicated nuclear structure dependence of low-energy neutrino-nucleus reactions in sufficient detail for use in LArTPC simulations. In this talk we present some preliminary results calculated using MARLEY and discuss how the current version of the generator may be improved and expanded.

  6. Analytical model for heterogeneous reactions in mixed porous media

    SciTech Connect

    Hatfield, K.; Burris, D.R.; Wolfe, N.L.

    1996-08-01

    The funnel/gate system is a developing technology for passive ground-water plume management and treatment. This technology uses sheet pilings as a funnel to force polluted ground water through a highly permeable zone of reactive porous media (the gate) where contaminants are degraded by biotic or abiotic heterogeneous reactions. This paper presents a new analytical nonequilibrium model for solute transport in saturated, nonhomogeneous or mixed porous media that could assist efforts to design funnel/gate systems and predict their performance. The model incorporates convective/dispersion transport, dissolved constituent decay, surface-mediated degradation, and time-dependent mass transfer between phases. Simulation studies of equilibrium and nonequilibrium transport conditions reveal manifestations of rate-limited degradation when mass-transfer times are longer than system hydraulic residence times, or when surface-mediated reaction rates are faster than solute mass-transfer processes (i.e., sorption, film diffusion, or intraparticle diffusion). For example, steady-state contaminant concentrations will be higher under a nonequilibrium transport scenario than would otherwise be expected when assuming equilibrium conditions. Thus, a funnel/gate system may fail to achieve desired ground-water treatment if the possibility of mass-transfer-limited degradation is not considered.
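    The rate-limitation argument above can be sketched with a lumped resistances-in-series model. This is an illustrative simplification, not the paper's analytical model: the series-resistance form and all parameter values below are assumptions.

```python
import math

def outlet_concentration(c_in, k_rxn, k_mt, tau):
    """Steady-state outlet concentration for plug flow through the gate.

    Assumption (not from the paper): the surface reaction and solute
    mass transfer act as first-order resistances in series, so the
    effective rate constant is 1/(1/k_rxn + 1/k_mt).
    """
    k_eff = 1.0 / (1.0 / k_rxn + 1.0 / k_mt)
    return c_in * math.exp(-k_eff * tau)

# Fast intrinsic reaction (k_rxn = 2 1/h) but slow mass transfer
# (k_mt = 0.5 1/h) over a 4 h hydraulic residence time:
c_eq = outlet_concentration(1.0, 2.0, 1e9, 4.0)   # equilibrium limit
c_ne = outlet_concentration(1.0, 2.0, 0.5, 4.0)   # rate-limited
```

    Consistent with the abstract, the rate-limited outlet concentration exceeds the equilibrium prediction whenever mass transfer is slower than the surface reaction.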

  7. In silico strain optimization by adding reactions to metabolic models.

    PubMed

    Correia, Sara; Rocha, Miguel

    2012-07-24

    Nowadays, concerns about the environment and the need to increase productivity at low cost demand the search for new ways to produce compounds of industrial interest. Based on the increasing knowledge of biological processes gained through genome sequencing projects and high-throughput experimental techniques, as well as the available computational tools, the use of microorganisms has been considered as an approach to produce desirable compounds. However, this usually requires manipulating these organisms by genetic engineering and/or changing the environmental conditions to make the production of these compounds possible. In many cases, it is necessary to enrich the genetic material of those microbes with heterologous pathways from other species, thereby adding the potential to produce novel compounds. This paper introduces a new plug-in for the OptFlux Metabolic Engineering platform, aimed at finding suitable sets of reactions to add to the genomes of selected microbes (wild-type strain), as well as finding complementary sets of deletions, so that the mutant becomes able to overproduce compounds of industrial interest while preserving its viability. The necessity of adding reactions to the metabolic model arises from gaps in the original model or is motivated by the production of new compounds by the organism. The optimization methods used are metaheuristics such as Evolutionary Algorithms and Simulated Annealing. The usefulness of this plug-in is demonstrated by a case study regarding the production of vanillin by the bacterium E. coli.


  8. Development of full wave code for modeling RF fields in hot non-uniform plasmas

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  9. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  10. Subgrid Scale Modeling in Solar Convection Simulations Using the ASH Code

    DTIC Science & Technology

    2003-12-01

    Defense Technical Information Center Compilation Part Notice ADP014789. TITLE: Subgrid Scale Modeling in Solar Convection Simulations Using the ASH Code (part of compilation ADP014788 thru ADP014827). Center for Turbulence Research, Annual Research Briefs 2003, by Y.-N. Young, M. Miesch and N. N. Mansour. 1. Motivation and objectives: The turbulent solar convection zone has remained one of

  11. A Transport Model for Nuclear Reactions Induced by Radioactive Beams

    SciTech Connect

    Li Baoan; Chen Liewen; Das, Champak B.; Das Gupta, Subal; Gale, Charles; Ko, C.M.; Yong, G.-C.; Zuo Wei

    2005-10-14

    Major ingredients of an isospin and momentum dependent transport model for nuclear reactions induced by radioactive beams are outlined. Within the IBUU04 version of this model we study several experimental probes of the equation of state of neutron-rich matter, especially the density dependence of the nuclear symmetry energy. Comparing with the recent experimental data from NSCL/MSU on isospin diffusion, we found a nuclear symmetry energy of Esym(ρ) ≈ 31.6(ρ/ρ0)^1.05 at subnormal densities. Predictions on several observables sensitive to the density dependence of the symmetry energy at supranormal densities accessible at GSI and the planned Rare Isotope Accelerator (RIA) are also made.
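    The quoted symmetry-energy constraint can be evaluated directly. Only the fitted form Esym(ρ) ≈ 31.6 (ρ/ρ0)^1.05 MeV is from the abstract; the helper name is ours:

```python
def esym(rho_ratio):
    """Nuclear symmetry energy in MeV from the isospin-diffusion
    constraint quoted in the abstract: Esym ≈ 31.6 (rho/rho0)^1.05,
    intended for subnormal densities (rho/rho0 <= 1)."""
    return 31.6 * rho_ratio ** 1.05

e_half = esym(0.5)   # symmetry energy at half saturation density
e_sat = esym(1.0)    # 31.6 MeV at saturation density by construction
```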

  12. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    SciTech Connect

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; March-Leuba, Jose A; Thurston, Carl; Hudson, Nathanael H.; Ireland, A.; Wysocki, A.

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
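    A drift-flux formulation like the one mentioned above closes the void fraction with the Zuber-Findlay relation. The sketch below is generic: the C0 and Vgj values are illustrative round-tube numbers, not PATHS's correlations.

```python
def void_fraction(j_g, j_f, c0=1.13, v_gj=0.24):
    """Zuber-Findlay drift-flux void fraction:
    alpha = j_g / (C0*(j_g + j_f) + Vgj), with superficial gas and
    liquid velocities in m/s. C0 (distribution parameter) and Vgj
    (drift velocity, m/s) are illustrative, not PATHS defaults."""
    j = j_g + j_f
    return j_g / (c0 * j + v_gj)

alpha_low = void_fraction(0.5, 1.0)
alpha_high = void_fraction(1.0, 1.0)  # more vapor, higher void fraction
```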

  13. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.

  14. CPAT: Coding-Potential Assessment Tool using an alignment-free logistic regression model

    PubMed Central

    Wang, Liguo; Park, Hyun Jung; Dasari, Surendra; Wang, Shengqin; Kocher, Jean-Pierre; Li, Wei

    2013-01-01

    Thousands of novel transcripts have been identified using deep transcriptome sequencing. The discovery of this large, ‘hidden’ transcriptome rejuvenates the demand for methods that can rapidly distinguish between coding and noncoding RNA. Here, we present a novel alignment-free method, the Coding Potential Assessment Tool (CPAT), which rapidly recognizes coding and noncoding transcripts from a large pool of candidates. To this end, CPAT uses a logistic regression model built with four sequence features: open reading frame size, open reading frame coverage, Fickett TESTCODE statistic and hexamer usage bias. CPAT (sensitivity: 0.96, specificity: 0.97) outperformed other state-of-the-art alignment-based software such as the Coding-Potential Calculator (sensitivity: 0.99, specificity: 0.74) and Phylo Codon Substitution Frequencies (sensitivity: 0.90, specificity: 0.63). In addition to high accuracy, CPAT is approximately four orders of magnitude faster than the Coding-Potential Calculator and Phylo Codon Substitution Frequencies, enabling its users to process thousands of transcripts within seconds. The software accepts input sequences in either FASTA- or BED-formatted data files. We also developed a web interface for CPAT that allows users to submit sequences and receive the prediction results almost instantly. PMID:23335781
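    A toy version of an alignment-free coding-potential classifier in the spirit of CPAT. The ORF scan is standard, but the logistic weights are invented placeholders and the Fickett/hexamer features are omitted, so this is not CPAT's fitted model:

```python
import math

def longest_orf(seq):
    """Length (nt) of the longest ATG..stop ORF on the forward strand."""
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in stops and start is not None:
                best = max(best, i + 3 - start)
                start = None
    return best

def coding_probability(seq, w=(-4.0, 0.01, 3.0)):
    """Toy logistic model: sigmoid(w0 + w1*orf_size + w2*orf_coverage).
    Weights are placeholders, and the Fickett TESTCODE and hexamer-bias
    terms of the real CPAT model are omitted."""
    orf = longest_orf(seq)
    cov = orf / max(len(seq), 1)
    z = w[0] + w[1] * orf + w[2] * cov
    return 1.0 / (1.0 + math.exp(-z))

p_coding = coding_probability("ATG" + "GCA" * 50 + "TGA")  # long clean ORF
p_noise = coding_probability("TTTT" * 40)                  # no ORF at all
```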

  15. Mass transfer model for two-layer TBP oxidation reactions

    SciTech Connect

    Laurinat, J.E.

    1994-09-28

    To prove that two-layer, TBP-nitric acid mixtures can be safely stored in the canyon evaporators, it must be demonstrated that a runaway reaction between TBP and nitric acid will not occur. Previous bench-scale experiments showed that, at typical evaporator temperatures, this reaction is endothermic and therefore cannot run away, due to the loss of heat from evaporation of water in the organic layer. However, the reaction would be exothermic and could run away if the small amount of water in the organic layer evaporates before the nitric acid in this layer is consumed by the reaction. Provided that there is enough water in the aqueous layer, this would occur if the organic layer is sufficiently thick so that the rate of loss of water by evaporation exceeds the rate of replenishment due to mixing with the aqueous layer. This report presents measurements of mass transfer rates for the mixing of water and butanol in two-layer, TBP-aqueous mixtures, where the top layer is primarily TBP and the bottom layer is comprised of water or aqueous salt solution. Mass transfer coefficients are derived for use in the modeling of two-layer TBP-nitric acid oxidation experiments. Three cases were investigated: (1) transfer of water into the TBP layer with sparging of both the aqueous and TBP layers, (2) transfer of water into the TBP layer with sparging of just the TBP layer, and (3) transfer of butanol into the aqueous layer with sparging of both layers. The TBP layer was comprised of 99% pure TBP (spiked with butanol for the butanol transfer experiments), and the aqueous layer was comprised of either water or an aluminum nitrate solution. The liquid layers were air sparged to simulate the mixing due to the evolution of gases generated by oxidation reactions. A plastic tube and a glass frit sparger were used to provide different size bubbles. Rates of mass transfer were measured using infrared spectrophotometers provided by SRTC/Analytical Development.
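    The water-replenishment balance described above is commonly lumped into a two-film mass transfer model, dC/dt = kLa(C_sat − C). This sketch and its coefficient values are illustrative, not the report's fitted, sparger-dependent coefficients:

```python
import math

def water_in_tbp(t, c_sat, kla, c0=0.0):
    """Water concentration in the sparged TBP layer at time t (h) from
    the lumped two-film model dC/dt = kLa*(C_sat - C), closed form
    C(t) = C_sat + (C0 - C_sat)*exp(-kLa*t). kLa depends on sparging
    (plastic tube vs. glass frit); the value used below is illustrative."""
    return c_sat + (c0 - c_sat) * math.exp(-kla * t)

# Approach of the organic layer toward saturation (c_sat = 0.05, kLa = 0.3/h):
c_early = water_in_tbp(1.0, 0.05, 0.3)
c_late = water_in_tbp(100.0, 0.05, 0.3)
```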

  16. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

    NASA Astrophysics Data System (ADS)

    Lee, Szu-Wei; Kuo, C.-C. Jay

    2007-09-01

    One way to save power consumption in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes have an impact on the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, on the complexity of entropy decoding, an encoder equipped with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best tradeoff among rate, distortion and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macro-block (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of the QTCs and associated MVs, which is verified experimentally. The proposed model provides good complexity estimates for a variety of bit streams. Practical applications of this complexity model are also discussed.
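    A sketch of how such a complexity model could enter mode decision. The linear form and every coefficient below are illustrative assumptions, not the fitted model from the paper:

```python
def mb_decoding_complexity(n_qtc, n_mv, a=1.0, b=4.0, c0=10.0):
    """Macroblock CABAC decoding complexity modeled as a linear function
    of the counts of quantized transform coefficients and motion vectors.
    The coefficients a, b, c0 are placeholders, not fitted values."""
    return a * n_qtc + b * n_mv + c0

def best_mode(candidates, lam_d=1.0, lam_c=0.1):
    """Pick the mode minimizing rate + lam_d*distortion + lam_c*complexity.
    Each candidate is (name, rate, distortion, n_qtc, n_mv)."""
    def cost(m):
        name, rate, dist, n_qtc, n_mv = m
        return rate + lam_d * dist + lam_c * mb_decoding_complexity(n_qtc, n_mv)
    return min(candidates, key=cost)[0]

# With these numbers, skip mode wins the rate-distortion-complexity tradeoff:
chosen = best_mode([("skip", 5.0, 120.0, 0, 0), ("inter", 100.0, 50.0, 20, 4)])
```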

  17. Helioseismic Constraints on New Solar Models from the MoSEC Code

    NASA Technical Reports Server (NTRS)

    Elliott, J. R.

    1998-01-01

    Evolutionary solar models are computed using a new stellar evolution code, MOSEC (Modular Stellar Evolution Code). This code has been designed with carefully controlled truncation errors in order to achieve a precision which reflects the increasingly accurate determination of solar interior structure by helioseismology. A series of models is constructed to investigate the effects of the choice of equation of state (OPAL or MHD-E, the latter being a version of the MHD equation of state recalculated by the author), the inclusion of helium and heavy-element settling and diffusion, and the inclusion of a simple model of mixing associated with the solar tachocline. The neutrino flux predictions are discussed, while the sound speed of the computed models is compared to that of the sun via the latest inversion of SOI-NMI p-mode frequency data. The comparison between models calculated with the OPAL and MHD-E equations of state is particularly interesting because the MHD-E equation of state includes relativistic effects for the electrons, whereas neither MHD nor OPAL do. This has a significant effect on the sound speed of the computed model, worsening the agreement with the solar sound speed. Using the OPAL equation of state and including the settling and diffusion of helium and heavy elements produces agreement in sound speed with the helioseismic results to within about ±0.2%; the inclusion of mixing slightly improves the agreement.

  18. Ions interacting with planar aromatic molecules: Modeling electron transfer reactions

    SciTech Connect

    Forsberg, B. O.; Alexander, J. D.; Chen, T.; Pettersson, A. T.; Gatchell, M.; Cederquist, H.; Zettergren, H.

    2013-02-07

    We present theoretical absolute charge exchange cross sections for multiply charged cations interacting with the Polycyclic Aromatic Hydrocarbon (PAH) molecules pyrene C₁₄H₁₀, coronene C₂₄H₁₂, or circumcoronene C₅₄H₁₈. These planar, nearly circular, PAHs are modelled as conducting, infinitely thin, and perfectly circular discs, which are randomly oriented with respect to straight line ion trajectories. We present the analytical solution for the potential energy surface experienced by an electron in the field of such a charged disc and a point-charge at an arbitrary position. The location and height of the corresponding potential energy barrier from this simple model are in close agreement with those from much more computationally demanding Density Functional Theory (DFT) calculations in a number of test cases. The model results compare favourably with available experimental data on single- and multiple-electron transfer reactions, and we demonstrate that it is important to include the orientation dependent polarizabilities of the molecules (model discs), in particular for the larger PAHs. PAH ionization energy sequences from DFT are tabulated and used as model inputs. Absolute cross sections for the ionization of PAH molecules, and PAH ionization energies such as the ones presented here, may be useful when considering the roles of PAHs and their ions in, e.g., interstellar chemistry, stellar atmospheres, and in related photoabsorption and photoemission spectroscopies.

  19. New high burnup fuel models for NRC's licensing audit code, FRAPCON

    SciTech Connect

    Lanning, D.D.; Beyer, C.E.; Painter, C.L.

    1996-03-01

    Fuel behavior models have recently been updated within the U.S. Nuclear Regulatory Commission steady-state FRAPCON code used for auditing of fuel vendor/utility codes and analyses. These modeling updates have concentrated on providing a best-estimate prediction of steady-state fuel behavior up to the maximum burnup levels of current data (60 to 65 GWd/MTU rod-average). A decade has passed since these models were last updated. Currently, some U.S. utilities and fuel vendors are requesting approval for rod-average burnups greater than 60 GWd/MTU; however, until these recent updates the NRC did not have valid fuel performance models at these higher burnup levels. Pacific Northwest Laboratory (PNL) has reviewed 15 separate effects models within the FRAPCON fuel performance code (References 1 and 2) and identified nine models that needed updating for improved prediction of fuel behavior at high burnup levels. The six separate effects models not updated were the cladding thermal properties, cladding thermal expansion, cladding creepdown, fuel specific heat, fuel thermal expansion and open gap conductance. Comparison of these models to the currently available data indicates that these models still adequately predict the data within data uncertainties. The nine models identified as needing improvement for predicting high-burnup behavior are fission gas release (FGR), fuel thermal conductivity (accounting for both high burnup effects and burnable poison additions), fuel swelling, fuel relocation, radial power distribution, fuel-cladding contact gap conductance, cladding corrosion, cladding mechanical properties and cladding axial growth. Each of the updated models will be described in the following sections and the model predictions will be compared to currently available high burnup data.

  20. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    SciTech Connect

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
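    The model-based least-squares idea can be illustrated on a toy 1-D coded-source system. The mask pattern and circulant forward model below are hypothetical stand-ins for the actual CSI system matrix, which would also fold in the measured source flux distribution and magnification:

```python
import numpy as np

def forward_matrix(mask, n):
    """Toy 1-D CSI forward model: each detector pixel sums the object
    through the open holes of a cyclically shifted coded mask."""
    m = len(mask)
    A = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            A[i, j] = mask[(i - j) % m]
    return A

# Hypothetical 7-element mask (quadratic-residue pattern mod 7, which
# makes the circulant system invertible):
mask = [1, 1, 1, 0, 1, 0, 0]
obj = np.array([0.0, 2.0, 0.0, 0.0, 5.0, 0.0, 0.0])
A = forward_matrix(mask, 7)
b = A @ obj                              # noiseless synthetic measurement
x, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares reconstruction
```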

  1. Using reactive transport codes to provide mechanistic biogeochemistry representations in global land surface models: CLM-PFLOTRAN 1.0

    NASA Astrophysics Data System (ADS)

    Tang, G.; Yuan, F.; Bisht, G.; Hammond, G. E.; Lichtner, P. C.; Kumar, J.; Mills, R. T.; Xu, X.; Andre, B.; Hoffman, F. M.; Painter, S. L.; Thornton, P. E.

    2015-12-01

    We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50 % more computing time than CLM in point mode site simulations using either the scaling or clipping methods. The log transformation method takes 60–100 % more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation for half saturation decrease from 10⁻³ to 10⁻⁹ mol m⁻³, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g. below nanomolar) can increase the computing time for clipping or scaling by about 20 %; computing time can be doubled for log transformation. 
Caution needs to be taken in choosing the appropriate scaling factor because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with very tight relative
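    The Monod rate-limiting function with a residual concentration mentioned above can be sketched as follows; the parameter values are illustrative, not those used in CLM-PFLOTRAN:

```python
def monod_rate(c, vmax, ks, c_res=1e-9):
    """Monod rate with a residual concentration c_res (mol m^-3):
    rate = vmax * (c - c_res) / (ks + c - c_res), clipped so the rate
    vanishes smoothly as c -> c_res and Newton updates cannot drive the
    concentration negative. vmax, ks and c_res values are illustrative."""
    s = max(c - c_res, 0.0)
    return vmax * s / (ks + s)

r_low = monod_rate(0.1, 2.0, 0.5)   # substrate-limited
r_high = monod_rate(10.0, 2.0, 0.5)  # approaching vmax
```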

  2. Using reactive transport codes to provide mechanistic biogeochemistry representations in global land surface models: CLM-PFLOTRAN 1.0

    DOE PAGES

    Tang, G.; Yuan, F.; Bisht, G.; ...

    2015-12-17

    We explore coupling to a configurable subsurface reactive transport code as a flexible and extensible approach to biogeochemistry in land surface models; our goal is to facilitate testing of alternative models and incorporation of new understanding. A reaction network with the CLM-CN decomposition, nitrification, denitrification, and plant uptake is used as an example. We implement the reactions in the open-source PFLOTRAN code, coupled with the Community Land Model (CLM), and test at Arctic, temperate, and tropical sites. To make the reaction network designed for use in explicit time stepping in CLM compatible with the implicit time stepping used in PFLOTRAN, the Monod substrate rate-limiting function with a residual concentration is used to represent the limitation of nitrogen availability on plant uptake and immobilization. To achieve accurate, efficient, and robust numerical solutions, care needs to be taken to use scaling, clipping, or log transformation to avoid negative concentrations during the Newton iterations. With a tight relative update tolerance to avoid false convergence, an accurate solution can be achieved with about 50 % more computing time than CLM in point mode site simulations using either the scaling or clipping methods. The log transformation method takes 60–100 % more computing time than CLM. The computing time increases slightly for clipping and scaling; it increases substantially for log transformation for half saturation decrease from 10⁻³ to 10⁻⁹ mol m⁻³, which normally results in decreasing nitrogen concentrations. The frequent occurrence of very low concentrations (e.g. below nanomolar) can increase the computing time for clipping or scaling by about 20 %; computing time can be doubled for log transformation. Caution needs to be taken in choosing the appropriate scaling factor because a small value caused by a negative update to a small concentration may diminish the update and result in false convergence even with very

  3. PACER -- A fast running computer code for the calculation of short-term containment/confinement loads following coolant boundary failure. Volume 1: Code models and correlations

    SciTech Connect

    Sienicki, J.J.

    1997-06-01

    A fast running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early time containment phenomena during and immediately following blowdown. Flashing from coolant release, condensation heat transfer, intercompartment transport, and engineered safety features are described using best estimate models and correlations often based upon experiment analyses. Two notable capabilities of PACER that differ from most other containment loads codes are the modeling of the rates of steam and water formation accompanying coolant release as well as the correlations for steam condensation upon structure.

  4. A Mathematical Model and MATLAB Code for Muscle-Fluid-Structure Simulations.

    PubMed

    Battista, Nicholas A; Baird, Austin J; Miller, Laura A

    2015-11-01

    This article provides models and code for numerically simulating muscle-fluid-structure interactions (FSIs). This work was presented as part of the symposium on Leading Students and Faculty to Quantitative Biology through Active Learning at the society-wide meeting of the Society for Integrative and Comparative Biology in 2015. Muscle mechanics and simple mathematical models to describe the forces generated by muscular contractions are introduced in most biomechanics and physiology courses. Often, however, the models are derived for simplifying cases such as isometric or isotonic contractions. In this article, we present a simple model of the force generated through active contraction of muscles. The muscles' forces are then used to drive the motion of flexible structures immersed in a viscous fluid. An example of an elastic band immersed in a fluid is first presented to illustrate a fully-coupled FSI in the absence of any external driving forces. In the second example, we present a valveless tube with model muscles that drive the contraction of the tube. We provide a brief overview of the numerical method used to generate these results. We also include as Supplementary Material a MATLAB code to generate these results. The code was written for flexibility so as to be easily modified to many other biological applications for educational purposes.
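    Since the article's Supplementary Material is MATLAB, here is a Python sketch of a simplified Hill-type active-force model of the kind such muscle-driven simulations use. The functional forms and constants are generic textbook choices, not the article's exact model:

```python
import math

def hill_force(f_max, lf, v, lf_opt=1.0, v_max=10.0):
    """Active muscle force = F_max * f(length) * g(velocity).
    Gaussian length-tension curve and a hyperbolic (Hill) force-velocity
    curve for shortening; all constants are generic illustrative values."""
    fl = math.exp(-((lf - lf_opt) / (0.45 * lf_opt)) ** 2)
    if v >= 0.0:  # shortening
        fv = max(0.0, (1.0 - v / v_max) / (1.0 + 4.0 * v / v_max))
    else:         # lengthening: crude plateau at the isometric level
        fv = 1.0
    return f_max * fl * fv

f_iso = hill_force(1.0, 1.0, 0.0)   # isometric at optimal length
f_fast = hill_force(1.0, 1.0, 5.0)  # force drops at high shortening speed
```

    In a fluid-structure simulation, forces like these would be applied along model muscle fibers to drive the contraction of the immersed structure.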

  5. Spike-based probabilistic inference in analog graphical models using interspike-interval coding.

    PubMed

    Steimer, Andreas; Douglas, Rodney

    2013-09-01

    Temporal spike codes play a crucial role in neural information processing. In particular, there is strong experimental evidence that interspike intervals (ISIs) are used for stimulus representation in neural systems. However, very few algorithmic principles exploit the benefits of such temporal codes for probabilistic inference of stimuli or decisions. Here, we describe and rigorously prove the functional properties of a spike-based processor that uses ISI distributions to perform probabilistic inference. The abstract processor architecture serves as a building block for more concrete, neural implementations of the belief-propagation (BP) algorithm in arbitrary graphical models (e.g., Bayesian networks and factor graphs). The distributed nature of graphical models matches well with the architectural and functional constraints imposed by biology. In our model, ISI distributions represent the BP messages exchanged between factor nodes, leading to the interpretation of a single spike as a random sample that follows such a distribution. We verify the abstract processor model by numerical simulation in full graphs, and demonstrate that it can be applied even in the presence of analog variables. As a particular example, we also show results of a concrete, neural implementation of the processor, although in principle our approach is more flexible and allows different neurobiological interpretations. Furthermore, electrophysiological data from area LIP during behavioral experiments are assessed in light of ISI coding, leading to concrete testable, quantitative predictions and a more accurate description of these data compared to hitherto existing models.

  6. Coding theory based models for protein translation initiation in prokaryotic organisms.

    PubMed

    May, Elebeoba E; Vouk, Mladen A; Bitzer, Donald L; Rosnick, David I

    2004-01-01

    Our research explores the feasibility of using communication theory, error control (EC) coding theory specifically, for quantitatively modeling the protein translation initiation mechanism. The messenger RNA (mRNA) of Escherichia coli K-12 is modeled as a noisy (errored), encoded signal and the ribosome as a minimum Hamming distance decoder, where the 16S ribosomal RNA (rRNA) serves as a template for generating a set of valid codewords (the codebook). We tested the E. coli based coding models on 5' untranslated leader sequences of prokaryotic organisms of varying taxonomical relation to E. coli including: Salmonella typhimurium LT2, Bacillus subtilis, and Staphylococcus aureus Mu50. The model identified regions on the 5' untranslated leader where the minimum Hamming distance values of translated mRNA sub-sequences and non-translated genomic sequences differ the most. These regions correspond to the Shine-Dalgarno domain and the non-random domain. Applying the EC coding-based models to B. subtilis, and S. aureus Mu50 yielded results similar to those for E. coli K-12. Contrary to our expectations, the behavior of S. typhimurium LT2, the more taxonomically related to E. coli, resembled that of the non-translated sequence group.
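    A minimal sketch of the minimum-Hamming-distance idea: slide a template along the 5' leader and record distances, so that low-distance troughs flag Shine-Dalgarno-like regions. The template and leader sequence below are illustrative, not the paper's actual 16S rRNA-derived codebook:

```python
def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def distance_profile(mrna, template):
    """Hamming distance of the template against every window of the
    mRNA leader; minima indicate candidate ribosome-binding regions."""
    w = len(template)
    return [hamming(mrna[i:i + w], template)
            for i in range(len(mrna) - w + 1)]

leader = "AUUCCUAGGAGGUUUGACCU"          # hypothetical 5' leader
profile = distance_profile(leader, "AGGAGG")  # SD-like template, illustrative
```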

  7. Modeling precipitation from concentrated solutions with the EQ3/6 chemical speciation codes

    SciTech Connect

    Brown, L.F.; Ebinger, M.H.

    1995-01-13

    One of the more important uncertainties in using chemical speciation codes to study dissolution and precipitation of compounds is that the modeling results depend on the particular thermodynamic database being used. The authors' goal is to investigate the effects of different thermodynamic databases on modeling precipitation from concentrated solutions; in this paper they used the EQ3/6 codes and the supplied databases. One aspect of this goal is to compare predictions of precipitation from ideal solutions with similar predictions from nonideal solutions. The largest thermodynamic databases available for use by EQ3/6 assume that solutions behave ideally; however, two databases exist that allow modeling of nonideal solutions. Because these two databases are much less extensive than the ideal-solution data, the authors investigated the comparability of modeling ideal and nonideal solutions. They defined four fundamental problems to test the EQ3/6 codes in concentrated solutions. Two problems precipitate Ca(OH){sub 2} from solutions concentrated in Ca{sup ++}. A third problem tests the precipitation of Ca(OH){sub 2} from high-ionic-strength (high-concentration) solutions that are low in the concentrations of precipitating species (Ca{sup ++} in this case). The fourth problem evaporates the supernatant of the problem with low concentrations of precipitating species. The specific problems are discussed.

  8. Wind turbine control systems: Dynamic model development using system identification and the fast structural dynamics code

    SciTech Connect

    Stuart, J.G.; Wright, A.D.; Butterfield, C.P.

    1996-10-01

    Mitigating the effects of damaging wind turbine loads and responses extends the lifetime of the turbine and, consequently, reduces the associated Cost of Energy (COE). Active control of aerodynamic devices is one option for achieving wind turbine load mitigation. Generally speaking, control system design and analysis requires a reasonable dynamic model of the "plant" (i.e., the system being controlled). This paper extends the wind turbine aileron control research, previously conducted at the National Wind Technology Center (NWTC), by presenting a more detailed development of the wind turbine dynamic model. In prior research, active aileron control designs were implemented in an existing wind turbine structural dynamics code, FAST (Fatigue, Aerodynamics, Structures, and Turbulence). In this paper, the FAST code is used, in conjunction with system identification, to generate a wind turbine dynamic model for use in active aileron control system design. The FAST code is described and an overview of the system identification technique is presented. An aileron control case study is used to demonstrate this modeling technique. The results of the case study are then used to propose ideas for generalizing this technique for creating dynamic models for other wind turbine control applications.
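The system-identification step can be illustrated, in very reduced form, by fitting a discrete-time ARX model to input/output data with least squares; the model orders, function name, and test data here are assumptions for illustration, not the NWTC procedure:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of a SISO ARX model
        y[t] = sum_i a_i * y[t-i] + sum_j b_j * u[t-j],
    a minimal stand-in for identifying a control-design model from
    simulated input/output time series (e.g., FAST runs)."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        # Regressor: most recent outputs and inputs, newest first.
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]   # (autoregressive, input coefficients)
```

On noise-free data generated by a known first-order system, this fit recovers the true coefficients; with simulation data, the resulting low-order model can then be handed to a control-design tool.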

  9. Coding theory based models for protein translation initiation in prokaryotic organisms.

    SciTech Connect

    May, Elebeoba Eni; Bitzer, Donald L. (North Carolina State University, Raleigh, NC); Rosnick, David I. (North Carolina State University, Raleigh, NC); Vouk, Mladen A.

    2003-03-01

    Our research explores the feasibility of using communication theory, error control (EC) coding theory specifically, for quantitatively modeling the protein translation initiation mechanism. The messenger RNA (mRNA) of Escherichia coli K-12 is modeled as a noisy (errored), encoded signal and the ribosome as a minimum Hamming distance decoder, where the 16S ribosomal RNA (rRNA) serves as a template for generating a set of valid codewords (the codebook). We tested the E. coli-based coding models on 5' untranslated leader sequences of prokaryotic organisms of varying taxonomical relation to E. coli, including Salmonella typhimurium LT2, Bacillus subtilis, and Staphylococcus aureus Mu50. The model identified regions on the 5' untranslated leader where the minimum Hamming distance values of translated mRNA sub-sequences and non-translated genomic sequences differ the most. These regions correspond to the Shine-Dalgarno domain and the non-random domain. Applying the EC coding-based models to B. subtilis and S. aureus Mu50 yielded results similar to those for E. coli K-12. Contrary to our expectations, the behavior of S. typhimurium LT2, the organism most closely related taxonomically to E. coli, resembled that of the non-translated sequence group.

  10. SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2

    SciTech Connect

    Davis, K.L.; Allison, C.M.; Berna, G.A.

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, and fission products released during a severe accident transient, as well as large- and small-break loss-of-coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.

  11. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    NASA Astrophysics Data System (ADS)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Instead, in this paper we investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  12. Relating Population-Code Representations between Man, Monkey, and Computational Models.

    PubMed

    Kriegeskorte, Nikolaus

    2009-01-01

    Perceptual and cognitive content is thought to be represented in the brain by patterns of activity across populations of neurons. In order to test whether a computational model can explain a given population code and whether corresponding codes in man and monkey convey the same information, we need to quantitatively relate population-code representations. Here I give a brief introduction to representational similarity analysis, a particular approach to this problem. A population code is characterized by a representational dissimilarity matrix (RDM), which contains a dissimilarity for each pair of activity patterns elicited by a given stimulus set. The RDM encapsulates which distinctions the representation emphasizes and which it deemphasizes. By analyzing correlations between RDMs we can test models and compare different species. Moreover, we can study how representations are transformed across stages of processing and how they relate to behavioral measures of object similarity. We use an example from object vision to illustrate the method's potential to bridge major divides that have hampered progress in systems neuroscience.
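The RDM construction and comparison described here can be sketched as follows; correlation distance and a Pearson comparison of upper triangles are common choices but are assumptions of this sketch, not necessarily the exact measures used in the cited work:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: one dissimilarity
    (correlation distance, 1 - r) per pair of activity patterns.
    `patterns` has shape (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs: the basic test of
    whether two population codes emphasize the same distinctions."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```

Because only pairwise dissimilarities are compared, the two codes need not share units, voxels, or even species, which is what lets the method bridge man, monkey, and model.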

  13. Linking impulse response functions to reaction time: Rod and cone reaction time data and a computational model

    PubMed Central

    Cao, Dingcai; Zele, Andrew J.; Pokorny, Joel

    2007-01-01

    Reaction times for incremental and decremental stimuli were measured at five suprathreshold contrasts for six retinal illuminance levels where rods alone (0.002–0.2 Trolands), rods and cones (2–20 Trolands) or cones alone (200 Trolands) mediated detection. A 4-primary photostimulator allowed independent control of rod or cone excitations. This is the first report of reaction times to isolated rod or cone stimuli at mesopic light levels under the same adaptation conditions. The main findings are: 1) For rods, responses to decrements were faster than increments, but cone reaction times were closely similar. 2) At light levels where both systems were functional, rod reaction times were ~20 ms longer. The data were fitted with a computational model that incorporates rod and cone impulse response functions and a stimulus-dependent neural sensory component that triggers a motor response. Rod and cone impulse response functions were derived from published psychophysical two-pulse threshold data and temporal modulation transfer functions. The model fits were accomplished with a limited number of free parameters: two global parameters to estimate the irreducible minimum reaction time for each receptor type, and one local parameter for each reaction time versus contrast function. This is the first model to provide a neural basis for the variation in reaction time with retinal illuminance, stimulus contrast, stimulus polarity, and receptor class modulated. PMID:17346763

  14. SOCIAL ADVERSITY, GENETIC VARIATION, STREET CODE, AND AGGRESSION: A GENETICALLY INFORMED MODEL OF VIOLENT BEHAVIOR

    PubMed Central

    Simons, Ronald L.; Lei, Man Kit; Stewart, Eric A.; Brody, Gene H.; Beach, Steven R. H.; Philibert, Robert A.; Gibbons, Frederick X.

    2011-01-01

    Elijah Anderson (1997, 1999) argues that exposure to extreme community disadvantage, residing in “street” families, and persistent discrimination encourage many African Americans to develop an oppositional culture that he labels the “code of the street.” Importantly, while the adverse conditions described by Anderson increase the probability of adopting the code of the street, most of those exposed to these adverse conditions do not do so. The present study examines the extent to which genetic variation accounts for these differences. Although the diathesis-stress model guides most genetically informed behavior science, the present study investigates hypotheses derived from the differential susceptibility perspective (Belsky & Pluess, 2009). This model posits that some people are genetically predisposed to be more susceptible to environmental influence than others. An important implication of the model is that those persons most vulnerable to adverse social environments are the same ones who reap the most benefit from environmental support. Using longitudinal data from a sample of several hundred African American males, we examined the manner in which variants in three genes (5-HTT, DRD4, and MAOA) modulate the effect of community and family adversity on adoption of the street code and aggression. We found strong support for the differential susceptibility perspective. When the social environment was adverse, individuals with these genetic variants manifested more commitment to the street code and aggression than those with other genotypes, whereas when adversity was low they demonstrated less commitment to the street code and aggression than those with other genotypes. PMID:23785260

  15. A Reaction-Diffusion Model of Cholinergic Retinal Waves

    PubMed Central

    Lansdell, Benjamin; Ford, Kevin; Kutz, J. Nathan

    2014-01-01

    Prior to receiving visual stimuli, spontaneous, correlated activity in the retina, called retinal waves, drives activity-dependent developmental programs. Early-stage waves mediated by acetylcholine (ACh) manifest as slow, spreading bursts of action potentials. They are believed to be initiated by the spontaneous firing of Starburst Amacrine Cells (SACs), whose dense, recurrent connectivity then propagates this activity laterally. Their inter-wave interval and shifting wave boundaries are the result of the slow after-hyperpolarization of the SACs creating an evolving mosaic of recruitable and refractory cells, which can and cannot participate in waves, respectively. Recent evidence suggests that cholinergic waves may be modulated by the extracellular concentration of ACh. Here, we construct a simplified, biophysically consistent, reaction-diffusion model of cholinergic retinal waves capable of recapitulating wave dynamics observed in mice retina recordings. The dense, recurrent connectivity of SACs is modeled through local, excitatory coupling occurring via the volume release and diffusion of ACh. In addition to simulation, we are thus able to use non-linear wave theory to connect wave features to underlying physiological parameters, making the model useful in determining appropriate pharmacological manipulations to experimentally produce waves of a prescribed spatiotemporal character. The model is used to determine how ACh mediated connectivity may modulate wave activity, and how parameters such as the spontaneous activation rate and sAHP refractory period contribute to critical wave size variability. PMID:25474327
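A minimal sketch of one time step of a reaction-diffusion system of this general kind (a diffusing, excitable activity field plus a slow refractory field) might look like the following; the kinetics and parameter values are generic FitzHugh-Nagumo-style placeholders, not the paper's equations:

```python
import numpy as np

def rd_step(u, v, dt=0.01, dx=1.0, D=1.0, k_act=5.0, k_rec=0.1):
    """One forward-Euler step of a generic 1-D reaction-diffusion system.
    u: excitable activity field (loosely, ACh-mediated SAC activity);
    v: slow recovery field (loosely, sAHP refractoriness).
    Periodic boundary conditions via np.roll."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    du = D * lap + k_act * u * (1.0 - u) * (u - 0.1) - v  # cubic excitation
    dv = k_rec * (u - v)                                   # slow recovery
    return u + dt * du, v + dt * dv
```

Iterating this step from a localized initial bump produces a spreading wave whose extent depends on the recovery parameters, which is the qualitative behavior the paper analyzes with non-linear wave theory.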

  16. Reaction Diffusion Modeling of Calcium Dynamics with Realistic ER Geometry

    PubMed Central

    Means, Shawn; Smith, Alexander J.; Shepherd, Jason; Shadid, John; Fowler, John; Wojcikiewicz, Richard J. H.; Mazel, Tomas; Smith, Gregory D.; Wilson, Bridget S.

    2006-01-01

    We describe a finite-element model of mast cell calcium dynamics that incorporates the endoplasmic reticulum's complex geometry. The model is built upon a three-dimensional reconstruction of the endoplasmic reticulum (ER) from an electron tomographic tilt series. Tetrahedral meshes provide volumetric representations of the ER lumen, ER membrane, cytoplasm, and plasma membrane. The reaction-diffusion model simultaneously tracks changes in cytoplasmic and ER intraluminal calcium concentrations and includes luminal and cytoplasmic protein buffers. Transport fluxes via PMCA, SERCA, ER leakage, and Type II IP3 receptors are also represented. Unique features of the model include stochastic behavior of IP3 receptor calcium channels and comparisons of channel open times when diffusely distributed or aggregated in clusters on the ER surface. Simulations show that IP3R channels in close proximity modulate activity of their neighbors through local Ca2+ feedback effects. Cytoplasmic calcium levels rise higher, and ER luminal calcium concentrations drop lower, after IP3-mediated release from receptors in the diffuse configuration. Simulation results also suggest that the buffering capacity of the ER, and not restricted diffusion, is the predominant factor influencing average luminal calcium concentrations. PMID:16617072

  17. An Equilibrium-Based Model of Gas Reaction and Detonation

    SciTech Connect

    Trowbridge, L.D.

    2000-04-01

    During gaseous diffusion plant operations, conditions leading to the formation of flammable gas mixtures may occasionally arise. Currently, these could consist of the evaporative coolant CFC-114 and fluorinating agents such as F2 and ClF3. Replacement of CFC-114 with a non-ozone-depleting substitute is planned. Consequently, in the future, the substitute coolant must also be considered as a potential fuel in flammable gas mixtures. Two questions of practical interest arise: (1) can a particular mixture sustain and propagate a flame if ignited, and (2) what is the maximum pressure that can be generated by the burning (and possibly exploding) gas mixture, should it ignite? Experimental data on these systems, particularly for the newer coolant candidates, are limited. To assist in answering these questions, a mathematical model was developed to serve as a tool for predicting the potential detonation pressures and for estimating the composition limits of flammability for these systems based on empirical correlations between gas mixture thermodynamics and flammability for known systems. The present model uses the thermodynamic equilibrium to determine the reaction endpoint of a reactive gas mixture and uses detonation theory to estimate an upper bound to the pressure that could be generated upon ignition. The model described and documented in this report is an extended version of related models developed in 1992 and 1999.

  18. A multi-pathway model for photosynthetic reaction center

    NASA Astrophysics Data System (ADS)

    Qin, M.; Shen, H. Z.; Yi, X. X.

    2016-03-01

    Charge separation occurs in a pair of tightly coupled chlorophylls at the heart of photosynthetic reaction centers of both plants and bacteria. Recently it has been shown that quantum coherence can, in principle, enhance the efficiency of a solar cell, working like a quantum heat engine. Here, we propose a biological quantum heat engine (BQHE) motivated by the Photosystem II reaction center (PSII RC) to describe the charge separation. Our model considers two charge-separation pathways, more than are typically considered in the published literature. We explore how these cross-couplings increase the current and power of the charge separation and discuss the effects of multiple pathways in terms of current and power. The robustness of the BQHE against the charge recombination in natural PSII RC and dephasing induced by environments is also explored, and an extension from two pathways to multiple pathways is made. These results suggest that noise-induced quantum coherence helps to suppress the influence of acceptor-to-donor charge recombination, and that nature-mimicking architectures with engineered multiple pathways for charge separation might be better for artificial solar energy devices, considering the influence of environments.

  19. Benchmark studies of the Bending Corrected Rotating Linear Model (BCRLM) reactive scattering code: Implications for accurate quantum calculations

    SciTech Connect

    Hayes, E.F.; Darakjian, Z. . Dept. of Chemistry); Walker, R.B. )

    1990-01-01

    The Bending Corrected Rotating Linear Model (BCRLM), developed by Hayes and Walker, is a simple approximation to the true multidimensional scattering problem for reactions of the type A + BC → AB + C. While the BCRLM method is simpler than methods designed to obtain accurate three-dimensional quantum scattering results, this turns out to be a major advantage in terms of our benchmarking studies. The computer code used to obtain BCRLM scattering results is written for the most part in standard FORTRAN and has been ported to several scalar, vector, and parallel architecture computers, including the IBM 3090-600J, the Cray XMP and YMP, the Ardent Titan, the IBM RISC System/6000, the Convex C-1, and the MIPS 2000. Benchmark results will be reported for each of these machines, with an emphasis on comparing the scalar, vector, and parallel performance for the standard code with minimum modifications. Detailed analysis of the mapping of the BCRLM approach onto both shared and distributed memory parallel architecture machines indicates the importance of introducing several key changes in the basic strategy and algorithms used to calculate scattering results. This analysis of the BCRLM approach provides some insights into optimal strategies for mapping three-dimensional quantum scattering methods, such as the Parker-Pack method, onto shared or distributed memory parallel computers.

  20. Numerical modeling of immiscible two-phase flow in micro-models using a commercial CFD code

    SciTech Connect

    Crandall, Dustin; Ahmadia, Goodarz; Smith, Duane H.

    2009-01-01

    Off-the-shelf CFD software is being used to analyze everything from flow over airplanes to lab-on-a-chip designs. So, how accurately can two-phase immiscible flow be modeled as it flows through small-scale models of porous media? We evaluate the capability of the CFD code FLUENT™ to model immiscible flow in micro-scale, bench-top stereolithography models. By comparing the flow results to experimental models we show that accurate 3D modeling is possible.

  1. EQ6, a computer program for reaction path modeling of aqueous geochemical systems: Theoretical manual, user's guide, and related documentation (Version 7.0); Part 4

    SciTech Connect

    Wolery, T.J.; Daveler, S.A.

    1992-10-09

    EQ6 is a FORTRAN computer program in the EQ3/6 software package (Wolery, 1979). It calculates reaction paths (chemical evolution) in reacting water-rock and water-rock-waste systems. Speciation in aqueous solution is an integral part of these calculations. EQ6 computes models of titration processes (including fluid mixing), irreversible reaction in closed systems, irreversible reaction in some simple kinds of open systems, and heating or cooling processes, as well as solving "single-point" thermodynamic equilibrium problems. A reaction path calculation normally involves a sequence of thermodynamic equilibrium calculations. Chemical evolution is driven by a set of irreversible reactions (i.e., reactions out of equilibrium) and/or changes in temperature and/or pressure. These irreversible reactions usually represent the dissolution or precipitation of minerals or other solids. The code computes the appearance and disappearance of phases in solubility equilibrium with the water. It finds the identities of these phases automatically. The user may specify which potential phases are allowed to form and which are not. There is an option to fix the fugacities of specified gas species, simulating contact with a large external reservoir. Rate laws for irreversible reactions may be either relative rates or actual rates. If any actual rates are used, the calculation has a time frame. Several forms for actual rate laws are programmed into the code. EQ6 is presently able to model both mineral dissolution and growth kinetics.

  2. Model-based coding of facial images based on facial muscle motion through isodensity maps

    NASA Astrophysics Data System (ADS)

    So, Ikken; Nakamura, Osamu; Minami, Toshi

    1991-11-01

    A model-based coding system has come under serious consideration for the next generation of image coding schemes, aimed at greater efficiency in TV telephone and TV conference systems. In this model-based coding system, the sender's model image is transmitted and stored at the receiving side before the start of the conversation. During the conversation, feature points are extracted from the facial image of the sender and are transmitted to the receiver. The facial expression of the sender is reconstructed from the feature points received and a wireframe model constructed at the receiving side. However, the conventional methods have the following problems: (1) Extreme changes of the gray level, such as in wrinkles caused by change of expression, cannot be reconstructed at the receiving side. (2) Extraction of stable feature points from facial images with irregular features such as spectacles or facial hair is very difficult. To cope with the first problem, a new algorithm based on isodensity lines which can represent detailed changes in expression by density correction has already been proposed and good results obtained. As for the second problem, we propose in this paper a new algorithm to reconstruct facial images by transmitting other feature points extracted from isodensity maps.

  3. Apparent Motion Suppresses Responses in Early Visual Cortex: A Population Code Model

    PubMed Central

    Van Humbeeck, Nathalie; Putzeys, Tom; Wagemans, Johan

    2016-01-01

    Two stimuli alternately presented at different locations can evoke a percept of a stimulus continuously moving between the two locations. The neural mechanism underlying this apparent motion (AM) is thought to be increased activation of primary visual cortex (V1) neurons tuned to locations along the AM path, although evidence remains inconclusive. AM masking, which refers to the reduced detectability of stimuli along the AM path, has been taken as evidence for AM-related V1 activation. AM-induced neural responses are thought to interfere with responses to physical stimuli along the path and as such impair the perception of these stimuli. However, AM masking can also be explained by predictive coding models, predicting that responses to stimuli presented on the AM path are suppressed when they match the spatio-temporal prediction of a stimulus moving along the path. In the present study, we find that AM has a distinct effect on the detection of target gratings, limiting the maximum performance at high contrast levels. This masking is strongest when the target orientation is identical to the orientation of the inducers. We developed a V1-like population code model of early visual processing, based on a standard contrast normalization model. We find that AM-related activation in early visual cortex is too small to either cause masking or to be perceived as motion. Our model instead predicts strong suppression of early sensory responses during AM, consistent with the theoretical framework of predictive coding. PMID:27783622

  4. Modelling the Maillard reaction during the cooking of a model cheese.

    PubMed

    Bertrand, Emmanuel; Meyer, Xuân-Mi; Machado-Maturana, Elizabeth; Berdagué, Jean-Louis; Kondjoyan, Alain

    2015-10-01

    During processing and storage of industrial processed cheese, odorous compounds are formed. Some of them are potentially unwanted for the flavour of the product. To reduce the appearance of these compounds, a methodological approach was employed. It consists of: (i) the identification of the key compounds or precursors responsible for the off-flavour observed, (ii) the monitoring of these markers during the heat treatments applied to the cheese medium, (iii) the establishment of an observable reaction scheme adapted from a literature survey to the compounds identified in the heated cheese medium, and (iv) the multi-response stoichiokinetic modelling of these reaction markers. Systematic two-dimensional gas chromatography time-of-flight mass spectrometry was used for the semi-quantitation of trace compounds. Precursors were quantitated by high-performance liquid chromatography. The experimental data obtained were fitted to the model with 14 elementary linked reactions forming a multi-response observable reaction scheme.

  5. A reaction-diffusion model for long bones growth.

    PubMed

    Garzón-Alvarado, D A; García-Aznar, J M; Doblaré, M

    2009-10-01

    Bone development is characterized by differentiation and growth of chondrocytes from the proliferation zone to the hypertrophying one. These two cellular processes are controlled by a complex signalling regulatory loop between different biochemical signals, whose production depends on the current cell density, constituting a coupled cell-chemical system. In this work, a mathematical model of the process of early bone growth is presented, extending and generalizing other earlier approaches on the same topic. A reaction-diffusion regulatory loop between two chemical factors, parathyroid hormone-related peptide (PTHrP) and Indian hedgehog (Ihh), is hypothesized, where PTHrP is activated by Ihh and inhibits Ihh production. Chondrocyte proliferation and hypertrophy are described by means of population equations, both being regulated by the PTHrP and Ihh concentrations. In the initial stage of bone growth, these two cellular processes are considered to be directionally dependent, modelling the well-known column cell formation characteristic of endochondral ossification. This coupled set of equations is solved within a finite element framework, providing an estimate of the chondrocyte spatial distribution, growth of the diaphysis, and formation of the epiphysis of a long bone. The results obtained are qualitatively similar to the actual physiological ones and quantitatively close to some available experimental data. Finally, this extended approach allows finding important relations between the model parameters required for stability of the physiological process, and provides additional insight into the spatial and directional distribution of cells and paracrine factors.

  6. Grid cells: the position code, neural network models of activity, and the problem of learning.

    PubMed

    Welinder, Peter E; Burak, Yoram; Fiete, Ila R

    2008-01-01

    We review progress on the modeling and theoretical fronts in the quest to unravel the computational properties of the grid cell code and to explain the mechanisms underlying grid cell dynamics. The goals of the review are to outline a coherent framework for understanding the dynamics of grid cells and their representation of space; to critically present and draw contrasts between recurrent network models of grid cells based on continuous attractor dynamics and independent-neuron models based on temporal interference; and to suggest open questions for experiment and theory.

  7. Development of a three-phase combustion code for modeling liquid rocket engines

    NASA Technical Reports Server (NTRS)

    Liang, P. Y.; Chang, Y. M.

    1984-01-01

    A two-dimensional/axisymmetric program for the simulation of three-phase reactive flows is presented. The three phases are: a multiple-species gaseous phase, a single-species liquid phase, and a particulate droplet phase. The liquid and gaseous phases are described with continuous Eulerian equations, while the droplets are described in a discrete Lagrangian fashion. The three phases are fully coupled together. An atomization model, a multireaction chemical kinetics model, and a subgrid scale turbulence model complete the description of a general liquid/gas bipropellant combustion process. The code development criteria and qualitative features are highlighted.

  8. Dysregulation of the long non-coding RNA transcriptome in a Rett syndrome mouse model.

    PubMed

    Petazzi, Paolo; Sandoval, Juan; Szczesna, Karolina; Jorge, Olga C; Roa, Laura; Sayols, Sergi; Gomez, Antonio; Huertas, Dori; Esteller, Manel

    2013-07-01

    Mecp2 is a transcriptional repressor protein that is mutated in Rett syndrome, a neurodevelopmental disorder that is the second most common cause of mental retardation in women. It has been shown that the loss of the Mecp2 protein in Rett syndrome cells alters the transcriptional silencing of coding genes and microRNAs. Herein, we have studied the impact of Mecp2 impairment in a Rett syndrome mouse model on the global transcriptional patterns of long non-coding RNAs (lncRNAs). Using a microarray platform that assesses 41,232 unique lncRNA transcripts, we have identified the aberrant lncRNA transcriptome that is present in the brain of Rett syndrome mice. The study of the most relevant lncRNAs altered in the assay highlighted the upregulation of the AK081227 and AK087060 transcripts in Mecp2-null mice brains. Chromatin immunoprecipitation demonstrated the Mecp2 occupancy in the 5'-end genomic loci of the described lncRNAs and its absence in Rett syndrome mice. Most importantly, we were able to show that the overexpression of AK081227 mediated by the Mecp2 loss was associated with the downregulation of its host coding protein gene, the gamma-aminobutyric acid receptor subunit Rho 2 (Gabrr2). Overall, our findings indicate that the transcriptional dysregulation of lncRNAs upon Mecp2 loss contributes to the neurological phenotype of Rett syndrome and highlights the complex interaction between ncRNAs and coding-RNAs.

  9. Modeling steady sea water intrusion with single-density groundwater codes.

    PubMed

    Bakker, Mark; Schaars, Frans

    2013-01-01

    Steady interface flow in heterogeneous aquifer systems is simulated with single-density groundwater codes by using transformed values for the hydraulic conductivity and thickness of the aquifers and aquitards. For example, unconfined interface flow may be simulated with a transformed model by setting the base of the aquifer to sea level and by multiplying the hydraulic conductivity by 41 (for a sea water density of 1025 kg/m³). Similar transformations are derived for unconfined interface flow with a finite aquifer base and for confined multi-aquifer interface flow. The head and flow distribution are identical in the transformed and original model domains. The location of the interface is obtained through application of the Ghyben-Herzberg formula. The transformed problem may be solved with a single-density code that is able to simulate unconfined flow where the saturated thickness is a linear function of the head and, depending on the boundary conditions, the code needs to be able to simulate dry cells where the saturated thickness is zero. For multi-aquifer interface flow, an additional requirement is that the code must be able to handle vertical leakage in situations where flow in an aquifer is unconfined while there is also flow in the aquifer directly above it. Specific examples and limitations are discussed for the application of the approach with MODFLOW. Comparisons between exact interface flow solutions and MODFLOW solutions of the transformed model domain show good agreement. The presented approach is an efficient alternative to running transient sea water intrusion models until steady state is reached.
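
    The two quantities the abstract relies on — the conductivity multiplier and the Ghyben-Herzberg interface depth — can be sketched directly. The formulas assumed here are the standard density-ratio relations (multiplier ρs/(ρs − ρf), interface depth ρf/(ρs − ρf)·h); the function names are illustrative, not from the paper.

```python
# Sketch of the single-density transformation described above.

def conductivity_multiplier(rho_f=1000.0, rho_s=1025.0):
    """Factor applied to hydraulic conductivity in the transformed model
    (41 for fresh water at 1000 kg/m^3 and sea water at 1025 kg/m^3)."""
    return rho_s / (rho_s - rho_f)

def interface_depth(head, rho_f=1000.0, rho_s=1025.0):
    """Ghyben-Herzberg depth of the fresh/salt interface below sea level
    for a freshwater head `head` (m) above sea level."""
    return rho_f / (rho_s - rho_f) * head

factor = conductivity_multiplier()   # 41.0, the multiplier cited in the text
depth = interface_depth(0.5)         # 0.5 m of head -> 20 m interface depth
```

    The 40:1 depth-to-head ratio is why small head errors in the transformed model translate into large interface-position errors, which is one reason the paper's comparisons against exact solutions matter.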

  10. Model Development and Verification of the CRIPTE Code for Electromagnetic Coupling

    DTIC Science & Technology

    2005-10-01

    Fig. IIIC-2 (a) TLM equivalent for a shunt node and (b) topological node representing the TLM mesh node. This shows the slow-wave property of the 2-D...inherent high-pass filtering properties. Increasing the aperture size also causes the cutoff frequency to shift lower, allowing more penetration...approached. In addition, the delay of the wave from the EMT code reflects the property of the slow wave.

  11. Results from baseline tests of the SPRE I and comparison with code model predictions

    SciTech Connect

    Cairelli, J.E.; Geng, S.M.; Skupinski, R.C.

    1994-09-01

    The Space Power Research Engine (SPRE), a free-piston Stirling engine with linear alternator, is being tested at the NASA Lewis Research Center as part of the Civil Space Technology Initiative (CSTI) as a candidate for high capacity space power. This paper presents results of baseline engine tests at design and off-design operating conditions. The test results are compared with code model predictions.

  12. User Manual for ATILA, a Finite-Element Code for Modeling Piezoelectric Transducers.

    DTIC Science & Technology

    1987-09-01

    bandwidth of a radiating Tonpilz transducer", communication L9, 112th ASA meeting, Anaheim (1986). B. HAMONIC, J.C. DEBUS, J.N. DECARPIGNY, "Analyse modale...generally well suited to the modelling of Tonpilz-type transducers and allows large savings of CPU time 15...User Manual for ATILA, a Finite-Element Code for Modeling Piezoelectric Transducers (U), Naval Postgraduate School, Monterey, CA, J.N. Decarpigny et al.

  13. Probing systematic model dependence of complete fusion for reactions with the weakly bound projectiles 6,7Li

    NASA Astrophysics Data System (ADS)

    Kundu, A.; Santra, S.; Pal, A.; Chattopadhyay, D.; Nayak, B. K.; Saxena, A.; Kailas, S.

    2016-07-01

    Background: Complete fusion cross section measurements involving weakly bound projectiles show suppression at above-barrier energies compared to coupled-channels (CC) calculations, but no definite conclusion could be drawn for sub-barrier energies. Different CC models often lead to contrasting results. Purpose: We aim to investigate the differences in the fusion cross sections predicted by commonly used CC calculations, using codes such as fresco and ccfull, when compared to experimental data. Methods: The fusion cross sections are normalized to a dimensionless form by isolating the effect of only dynamic channel couplings calculated by both fresco and ccfull, by the method of fusion functions, and compared to a universal fusion function. This acts as a probe for obtaining the model dependence of fusion. Results: A difference is observed between the predictions of fresco and ccfull for all the reactions involving 6,7Li as projectiles, and it is noticeably larger for systems involving 7Li. Conclusions: With the theoretical foundations of the two CC models being different, their calculation of fusion is different even for the same system. The conclusion about the enhancement or suppression of fusion cross sections is model dependent.
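
    The fusion-function reduction used in the Methods can be sketched as follows. This assumes Wong's barrier-penetration formula and the universal fusion function F0(x) = ln(1 + exp(2πx)) of Canto et al.; the barrier parameters below are illustrative numbers, not fitted values from the paper.

```python
import math

# Sketch of the fusion-function reduction: cross sections sigma(E) are mapped
# to a dimensionless F(x), x = (E - V_B)/(hbar*omega), and compared with the
# universal fusion function. Wong cross sections collapse onto it exactly.

def wong_cross_section(E, VB, hw, RB):
    """Wong approximation to the fusion cross section (units of RB**2)."""
    return (hw * RB**2) / (2.0 * E) * math.log1p(
        math.exp(2.0 * math.pi * (E - VB) / hw))

def fusion_function(E, sigma, hw, RB):
    """Reduce a cross section sigma(E) to the dimensionless F(x)."""
    return 2.0 * E / (hw * RB**2) * sigma

def universal_fusion_function(x):
    return math.log1p(math.exp(2.0 * math.pi * x))

# Illustrative barrier: V_B = 25 MeV, hbar*omega = 3 MeV, R_B = 9 fm
E, VB, hw, RB = 30.0, 25.0, 3.0, 9.0
x = (E - VB) / hw
sigma = wong_cross_section(E, VB, hw, RB)
assert abs(fusion_function(E, sigma, hw, RB) - universal_fusion_function(x)) < 1e-9
```

    In the paper's procedure, the reduced experimental (or fresco/ccfull) cross sections are plotted against this universal curve, so any offset between the two codes shows up directly as a model-dependent shift.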

  14. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
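
    The core idea — an analytically defined interface deciding which layer a detected photon came from — can be illustrated with a toy Monte Carlo. This is not the authors' MATLAB code: the sinusoidal interface, optical parameters, and straight-down propagation are all simplifying assumptions made here for illustration.

```python
import math
import random

# Toy sketch: photons take exponentially distributed steps through a
# two-layer medium whose boundary is an analytic surface z = interface(x, y),
# standing in for a mesh derived from second-harmonic-generation images.

def interface(x, y):
    """Analytic stand-in for an irregular layer boundary (depth in cm)."""
    return 0.05 + 0.01 * math.sin(20.0 * x)

def layer_of(x, y, z):
    """Layer 0 above the interface, layer 1 below it."""
    return 0 if z < interface(x, y) else 1

def propagate(mu_t=100.0, z_max=0.1, rng=random.Random(1)):
    """Trace one photon straight down; return the layer of its absorption
    site, or None if it is transmitted out the bottom of the slab."""
    z = 0.0
    while True:
        z += -math.log(rng.random()) / mu_t   # sample exponential free path
        if z >= z_max:
            return None
        if rng.random() < 0.1:                # crude absorption probability
            return layer_of(0.0, 0.0, z)

counts = {0: 0, 1: 0, None: 0}
for _ in range(2000):
    counts[propagate()] += 1
```

    Tagging each absorption (and, in the real code, fluorescence emission) event with its layer is what lets the simulation separate signal originating in the keratinocyte layer from signal originating in the scaffold.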

  15. Advanced Pellet Cladding Interaction Modeling Using the US DOE CASL Fuel Performance Code: Peregrine

    SciTech Connect

    Jason Hales; Various

    2014-06-01

    The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.

  16. Advanced Pellet-Cladding Interaction Modeling using the US DOE CASL Fuel Performance Code: Peregrine

    SciTech Connect

    Montgomery, Robert O.; Capps, Nathan A.; Sunderland, Dion J.; Liu, Wenfeng; Hales, Jason; Stanek, Chris; Wirth, Brian D.

    2014-06-15

    The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.

  17. Theoretical modeling of laser-induced plasmas using the ATOMIC code

    NASA Astrophysics Data System (ADS)

    Colgan, James; Johns, Heather; Kilcrease, David; Judge, Elizabeth; Barefield, James, II; Clegg, Samuel; Hartig, Kyle

    2014-10-01

    We report on efforts to model the emission spectra generated from laser-induced breakdown spectroscopy (LIBS). LIBS is a popular and powerful method of quickly and accurately characterizing unknown samples in a remote manner. In particular, LIBS is utilized by the ChemCam instrument on the Mars Science Laboratory. We model the LIBS plasma using the Los Alamos suite of atomic physics codes. Since LIBS plasmas generally have temperatures somewhere between 3000 K and 12000 K, the emission spectra typically result from the neutral and singly ionized stages of the target atoms. We use the Los Alamos atomic structure and collision codes to generate sets of atomic data and use the plasma kinetics code ATOMIC to perform LTE or non-LTE calculations that generate level populations and an emission spectrum for the element of interest. In this presentation we compare the emission spectrum from ATOMIC with an Fe LIBS laboratory-generated plasma as well as spectra from the ChemCam instrument. We also discuss various physics aspects of the modeling of LIBS plasmas that are necessary for accurate characterization of the plasma, such as multi-element target composition effects, radiation transport effects, and accurate line shape treatments. The Los Alamos National Laboratory is operated by Los Alamos National Security, LLC for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396.
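
    The LTE limit of the level-population calculation mentioned above reduces to Boltzmann statistics over the level scheme. The sketch below shows only that step; the level energies, statistical weights, and temperature are made-up illustrative numbers, not real Fe data or ATOMIC output.

```python
import math

# Sketch of an LTE level-population calculation: fractional populations
# follow Boltzmann factors g * exp(-E / kT) normalized by the partition sum.

K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

def lte_populations(levels, T):
    """Fractional LTE populations for levels given as (g, E_in_eV) pairs."""
    weights = [g * math.exp(-E / (K_B_EV * T)) for g, E in levels]
    Z = sum(weights)                      # partition function
    return [w / Z for w in weights]

# Hypothetical three-level atom: (statistical weight, energy in eV)
levels = [(1, 0.0), (3, 1.5), (5, 3.0)]
pops = lte_populations(levels, T=10000.0)   # kT ~ 0.86 eV at 10 kK
```

    At LIBS-like temperatures (kT well below 1 eV per the 3000–12000 K range quoted above) the ground and low-lying levels dominate, which is why the observed spectra come mainly from neutral and singly ionized stages; the non-LTE case replaces the Boltzmann factors with a collisional-radiative rate-equation solve.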

  18. Modeling of Transmittance Degradation Caused by Optical Surface Contamination by Atomic Oxygen Reaction with Adsorbed Silicones

    NASA Technical Reports Server (NTRS)

    Snyder, Aaron; Banks, Bruce; Miller, Sharon; Stueber, Thomas; Sechkar, Edward

    2001-01-01

    A numerical procedure is presented to calculate transmittance degradation caused by contaminant films on spacecraft surfaces produced through the interaction of orbital atomic oxygen (AO) with volatile silicones and hydrocarbons from spacecraft components. In the model, contaminant accretion is dependent on the adsorption of species, depletion reactions due to gas-surface collisions, desorption, and surface reactions between AO and silicone producing SiO(x), (where x is near 2). A detailed description of the procedure used to calculate the constituents of the contaminant layer is presented, including the equations that govern the evolution of fractional coverage by species type. As an illustrative example of film growth, calculation results using a prototype code that calculates the evolution of surface coverage by species type are presented and discussed. An example of the transmittance degradation caused by surface interaction of AO with deposited contaminant is presented for the case of exponentially decaying contaminant flux. These examples are performed using hypothetical values for the process parameters.
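
    A fractional-coverage evolution equation of the kind described can be sketched as a single Langmuir-type rate balance. The specific form and all rate constants below are assumptions for illustration, not the paper's governing equations.

```python
# Sketch of one coverage rate equation:
#   d(theta)/dt = a*(1 - theta) - d*theta - r*phi_ao*theta
# with adsorption rate a, desorption rate d, AO reaction rate r, and AO
# flux phi_ao (all values hypothetical, in consistent arbitrary units).

def step_coverage(theta, dt, a=0.5, d=0.1, r=0.05, phi_ao=2.0):
    """One explicit Euler step for the fractional coverage theta in [0, 1]."""
    return theta + dt * (a * (1.0 - theta) - d * theta - r * phi_ao * theta)

theta = 0.0
for _ in range(10000):                 # integrate to t = 10 time units
    theta = step_coverage(theta, dt=1e-3)
# Coverage relaxes toward the steady state a / (a + d + r*phi_ao) = 5/7
```

    The full model couples one such equation per species (silicones, hydrocarbons, SiO(x) product), and the exponentially decaying contaminant flux in the paper's example corresponds to making the adsorption term time-dependent.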

  19. A Fundamental Limitation of the Conjunctive Codes Learned in PDP Models of Cognition: Comment on Botvinick and Plaut (2006)

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Damian, Markus F.; Davis, Colin J.

    2009-01-01

    A central claim shared by most recent models of short-term memory (STM) is that item knowledge is coded independently from order in long-term memory (LTM; e.g., the letter A is coded by the same representational unit whether it occurs at the start or end of a sequence). Serial order is computed by dynamically binding these item codes to a separate…

  20. Model reactions and natural occurrence of furans from hypersaline environments

    NASA Astrophysics Data System (ADS)

    Krause, T.; Tubbesing, C.; Benzing, K.; Schöler, H. F.

    2014-05-01

    Volatile organic compounds like furan and its derivatives are important for atmospheric properties and reactions. In this work the known abiotic formation of furan from catechol under Fenton-like conditions with Fe3+ sulfate was revisited by the use of a bispidine Fe2+ complex as a model compound for iron with well-known characteristics. While total yields were comparable to those with the Fe3+ salt, the bispidine Fe2+ complex is a better catalyst as the turnover numbers of the active iron species were higher. Additionally, the role of iron and pH is discussed in relation to furan formation from model compounds and in natural sediment and water samples collected from the Dead Sea and several salt lakes in Western Australia. Various alkylated furans and even traces of halogenated furans (3-chlorofuran and 3-bromofuran) were found in some Australian samples. 3-Chlorofuran was found in three sediments and four water samples, whereas 3-bromofuran was detected in three water samples. Further, the emission of furans is compared to the abundance of several possible precursors such as isoprene and aromatic hydrocarbons as well as to the related thiophenes. It is deduced that the emissions of volatile organic compounds such as furans contribute to the formation of ultra-fine particles in the vicinity of salt lakes and are important for the local climate.