Sample records for RELAP5 input model

  1. IJS procedure for RELAP5 to TRACE input model conversion using SNAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prosek, A.; Berar, O. A.

    2012-07-01

    The TRAC/RELAP Advanced Computational Engine (TRACE), the advanced, best-estimate reactor systems code developed by the U.S. Nuclear Regulatory Commission, comes with a graphical user interface called the Symbolic Nuclear Analysis Package (SNAP). Much effort has been invested in the past in developing RELAP5 input decks. The purpose of this study is to demonstrate the Institut 'Jožef Stefan' (IJS) procedure for converting the RELAP5 input model of the BETHSY facility to TRACE. The IJS conversion procedure consists of eleven steps and is based on the use of SNAP. RELAP5/MOD3.3 Patch 4 and TRACE V5.0 Patch 1 were used for calculations of the selected BETHSY 6.2TC test: a 15.24 cm equivalent diameter horizontal cold leg break in the reference pressurized water reactor without high-pressure and low-pressure safety injection. The application of the IJS procedure to the BETHSY input model showed that it is important to perform the steps in the proper sequence. The overall results obtained with TRACE using the converted RELAP5 model were close to the experimental data and comparable to the RELAP5/MOD3.3 calculations. It can therefore be concluded that the proposed IJS conversion procedure was successfully demonstrated on the BETHSY integral test facility input model.

  2. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul David

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  3. Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putney, J.M.; Preece, R.J.

    1993-06-01

    An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR, and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed, including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given to whether the deficiencies will still be present in the successor code RELAP5/MOD3.

  4. Modeling moving systems with RELAP5-3D

    DOE PAGES

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; ...

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as land-based reactors during earthquakes or reactors onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating craft during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft; however, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
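
    The abstract does not reproduce the modified field equations. As orientation, the standard single-phase momentum balance in a frame translating with acceleration a_0 and rotating at angular velocity Omega takes the form below; this is the textbook non-inertial-frame result, not necessarily the exact RELAP5-3D formulation.

    ```latex
    \rho\,\frac{D\mathbf{u}}{Dt}
      = -\nabla p + \rho\,\mathbf{g}
        - \rho\left[\mathbf{a}_0
        + \dot{\boldsymbol{\Omega}}\times\mathbf{r}
        + 2\,\boldsymbol{\Omega}\times\mathbf{u}
        + \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r})\right]
    ```

    The bracketed terms are the translational, angular-acceleration (Euler), Coriolis, and centrifugal body forces, all of which vanish for a stationary, land-based reactor.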

  5. Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coryell, E.W.; Siefken, L.J.; Harvego, E.A.

    1997-07-01

    The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion describes the response of the core and associated vessel structures. The COUPLE portion describes the response of lower plenum structures and debris and the failure of the lower head. The code uses a modular approach, with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. A building-block approach allows the user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models and arranging them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. Information flows between the different portions of the code at each system-level time step advancement. The RELAP5 portion describes the fluid transport around the system; these fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.
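
    A minimal sketch of the per-time-step data exchange just described: the fluid solver advances first, its conditions become boundary conditions for the structure modules, and the structure heat sources feed back on the next advancement. All classes and coefficients here are invented stand-ins for the RELAP5/SCDAP/COUPLE roles, not the real code's API.

    ```python
    class Fluid:                       # RELAP5-like role: system fluid transport
        def __init__(self):
            self.temp, self.source = 560.0, 0.0
        def advance(self, dt):
            self.temp += dt * (self.source - 0.5)   # toy energy balance
            return self.temp

    class Structure:                   # SCDAP/COUPLE-like role: heated structures
        def __init__(self, temp):
            self.temp, self.fluid_temp = temp, 560.0
        def set_boundary(self, fluid_temp):
            self.fluid_temp = fluid_temp            # fluid state as boundary condition
        def advance(self, dt):
            self.temp += dt * 0.1 * (self.fluid_temp - self.temp)
            return 0.1 * (self.temp - self.fluid_temp)  # heat returned to fluid

    fluid, core, plenum = Fluid(), Structure(800.0), Structure(600.0)
    for step in range(10):             # explicit, time-step-level coupling
        t_fluid = fluid.advance(dt=1.0)
        core.set_boundary(t_fluid)
        plenum.set_boundary(t_fluid)
        fluid.source = core.advance(dt=1.0) + plenum.advance(dt=1.0)
    print(fluid.temp, core.temp, plenum.temp)
    ```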

  6. An Update on Improvements to NiCE Support for RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alex; Wojtowicz, Anna; Deyton, Jordan H.

    The Multiphysics Object-Oriented Simulation Environment (MOOSE) is a framework that facilitates the development of applications that rely on finite-element analysis to solve coupled, nonlinear systems of partial differential equations. RELAP-7, an update to the venerable RELAP5 simulator, is built upon this framework and models the balance-of-plant concerns of a full nuclear plant. This report details the continued support and integration of RELAP-7 and the NEAMS Integrated Computational Environment (NiCE). RELAP-7 is fully supported by NiCE owing to ongoing work to tightly integrate NiCE with the MOOSE framework and, subsequently, the applications built upon it. NiCE development throughout the first quarter of FY15 has focused on improvements, bug fixes, and feature additions to existing MOOSE-based application support. Specifically, this report focuses on improvements to the NiCE MOOSE Model Builder, the MOOSE application job launcher, and the 3D Nuclear Plant Viewer. It also includes a comprehensive tutorial that guides RELAP-7 users through the basic NiCE workflow: from input generation and 3D plant modeling to massively parallel job launch and post-simulation data visualization.

  7. IMPLEMENTATION AND VALIDATION OF A FULLY IMPLICIT ACCUMULATOR MODEL IN RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zou, Ling; Zhang, Hongbin

    2016-01-01

    This paper presents the implementation and validation of an accumulator model in RELAP-7 within the framework of the preconditioned Jacobian-free Newton-Krylov (JFNK) method, based on the similar model used in RELAP5. RELAP-7 is a new nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL); it is a fully implicit system code. The JFNK and preconditioning methods used in RELAP-7 are briefly discussed, and the slightly modified accumulator model is summarized for completeness. The implemented model was validated against the LOFT L3-1 test and benchmarked against RELAP5 results. RELAP-7 and RELAP5 produced almost identical results for the accumulator gas pressure and water level, although there were some minor differences in other parameters such as accumulator gas temperature and tank wall temperature. One advantage of the JFNK method is the ease of maintaining and modifying models, owing to the full separation of numerical methods from physical models. It would be straightforward to extend the current RELAP-7 accumulator model to simulate advanced accumulator designs.
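
    A minimal sketch of a fully implicit (backward Euler) step solved with a Jacobian-free Newton-Krylov method, the solution strategy the abstract attributes to RELAP-7. The two-equation "accumulator-like" model and its coefficients are invented for illustration; this is not the RELAP-7 or RELAP5 accumulator model itself.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    dt = 0.1
    u_old = np.array([5.0, 2.0])   # [gas pressure (MPa), water level (m)]

    def rhs(u):
        p, level = u
        outflow = 0.05 * np.sqrt(max(p - 1.0, 0.0))        # hypothetical discharge law
        return np.array([-p / max(level, 1e-6) * outflow,  # gas expands as water leaves
                         -outflow])                        # level drops with outflow

    def residual(u_new):
        # Backward Euler residual F(u) = u - u_old - dt*f(u). JFNK never forms
        # dF/du explicitly: the Krylov solver only needs F evaluations, with
        # Jacobian-vector products approximated by finite differences.
        return u_new - u_old - dt * rhs(u_new)

    u_new = newton_krylov(residual, u_old.copy(), f_tol=1e-10)
    print(u_new)   # state after one implicit step
    ```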

  8. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure, and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a time step advancement, continuing a run from a restart file, running multiple cases in a single code execution, and modes of coupled/uncoupled operation. Finally, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
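
    A small illustration of the method of manufactured solutions (MMS) named above, applied to a toy ODE rather than the RELAP5-3D field equations: pick an exact solution, derive the forcing term that makes it satisfy du/dt = -u + q(t), then check that the solver converges to it at the expected (first) order.

    ```python
    import numpy as np

    exact = np.sin                                  # manufactured solution u(t)

    def q(t):
        return np.cos(t) + np.sin(t)                # q = u' + u for this chosen u

    def error(dt, t_end=1.0):
        u, t = exact(0.0), 0.0
        for _ in range(int(round(t_end / dt))):
            u = (u + dt * q(t + dt)) / (1.0 + dt)   # backward Euler step
            t += dt
        return abs(u - exact(t_end))

    # The error should halve as dt halves (first-order convergence).
    for dt in (0.1, 0.05, 0.025):
        print(f"dt={dt:6.3f}  error={error(dt):.3e}")
    ```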

  9. Metal-water reaction and cladding deformation models for RELAP5/MOD3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caraher, D.L.; Shumway, R.W.

    1989-06-01

    A model for calculating the reaction of zirconium with steam according to the Cathcart-Pawel correlation has been incorporated into RELAP5/MOD3. A cladding deformation model, which computes swelling and rupture of the cladding according to the empirical correlations of Powers and Meyer, has also been incorporated into RELAP5/MOD3. This report gives the background of the models, documents their implementation in the RELAP5 subroutines, and reports the developmental assessment done on the models. 4 refs., 9 figs., 9 tabs.
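
    As background, the Cathcart-Pawel correlation expresses the zirconium-steam reaction as parabolic kinetics with an Arrhenius rate constant. The generic form is sketched below; the specific Cathcart-Pawel values of the constants A and Q are not reproduced here.

    ```latex
    \delta\,\frac{d\delta}{dt} = K(T) = A\,\exp\!\left(-\frac{Q}{R\,T}\right)
    \quad\Longrightarrow\quad
    \delta^2(t) = \delta_0^2 + 2\int_{0}^{t} K\!\big(T(t')\big)\,dt'
    ```

    Here delta is the reacted-layer thickness, T the cladding temperature, and R the gas constant; at constant temperature the layer grows as the square root of time.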

  10. RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H

    2010-06-01

    ITER inductive power operation is modeled and simulated using a system-level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are also summarized in this report. A major feature of ITER is pulsed operation: the plasma does not burn continuously, but the power is pulsed with long periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history in nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models the FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time-dependent power forcing functions, which are used as input in the RELAP5 calculations.
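
    A sketch of a time-dependent power forcing function for pulsed operation as described above: a burn phase at full blanket power followed by a long dwell at essentially zero power. The pulse timing and power level are illustrative assumptions, not ITER design values; in the actual analysis the history comes from the 3-D SC/Tetra blanket calculation.

    ```python
    def pulsed_power(t, burn=400.0, dwell=1400.0, p_burn=1.0e9):
        """Blanket power (W) at time t (s) for a repeating burn/dwell cycle."""
        return p_burn if (t % (burn + dwell)) < burn else 0.0

    # Emit (time, power) pairs, e.g. for a tabular power-vs-time code input.
    for t in range(0, 2000, 200):
        print(f"{t:8d}  {pulsed_power(float(t)):12.3e}")
    ```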

  11. RAVEN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-10-01

    RAVEN is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs), which allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible through input files or via Python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, grid, or Latin hypercube sampling schemes, but its strength lies in system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012, when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need for a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the software and algorithms necessary to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program, one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to determine the frequency of an event potentially leading to system failure, but the proximity (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g., for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and a likely future replacement for the RELAP5-3D code. Most of the capabilities implemented with RELAP-7 as the principal focus are easily deployable to other system codes. For this reason, several side activities are currently ongoing to couple RAVEN with software such as RELAP5-3D. The aim of this document is to explain the input requirements, focusing on the input structure.
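
    A minimal sketch of the sampling and limit-surface idea described above: sample the input space, classify each (here surrogate) code run as success or failure against a safety metric, and estimate where the boundary between the two regions lies. The response function, inputs, and failure threshold are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def surrogate_code(power, flow):
        # Stand-in for a full system-code run: peak temperature response.
        return 600.0 + 0.8 * power - 1.2 * flow

    samples = rng.uniform([0.0, 0.0], [100.0, 100.0], size=(5000, 2))  # Monte Carlo
    peak_T = surrogate_code(samples[:, 0], samples[:, 1])
    failed = peak_T > 650.0                       # safety metric exceeded

    # Crude limit-surface estimate: failure fraction on a coarse grid; the
    # limit surface is where this fraction crosses 0.5. RAVEN replaces this
    # brute-force scan with supervised learning on far fewer code runs.
    H_fail, _, _ = np.histogram2d(samples[failed, 0], samples[failed, 1],
                                  bins=10, range=[[0, 100], [0, 100]])
    H_all, _, _ = np.histogram2d(samples[:, 0], samples[:, 1],
                                 bins=10, range=[[0, 100], [0, 100]])
    print(np.round(H_fail / np.maximum(H_all, 1), 2))
    ```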

  12. RAVEN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph

    2016-02-01

    RAVEN is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs), which allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible through input files or via Python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, grid, or Latin hypercube sampling schemes, but its strength lies in system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012, when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need for a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the software and algorithms necessary to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program, one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to determine the frequency of an event potentially leading to system failure, but the proximity (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g., for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and a likely future replacement for the RELAP5-3D code. Most of the capabilities implemented with RELAP-7 as the principal focus are easily deployable to other system codes. For this reason, several side activities are currently ongoing to couple RAVEN with software such as RELAP5-3D. The aim of this document is to explain the input requirements, focusing on the input structure.

  13. RAVEN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph

    2017-03-01

    RAVEN is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs), which allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible through input files or via Python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, grid, or Latin hypercube sampling schemes, but its strength lies in system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012, when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need for a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the software and algorithms necessary to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program, one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to determine the frequency of an event potentially leading to system failure, but the proximity (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g., for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and a likely future replacement for the RELAP5-3D code. Most of the capabilities implemented with RELAP-7 as the principal focus are easily deployable to other system codes. For this reason, several side activities are currently ongoing to couple RAVEN with software such as RELAP5-3D. The aim of this document is to explain the input requirements, focusing on the input structure.

  14. Break modeling for RELAP5 analyses of ISP-27 Bethsy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petelin, S.; Gortnar, O.; Mavko, B.

    This paper presents pre- and posttest analyses of International Standard Problem (ISP) 27 on the Bethsy facility, together with separate RELAP5 break-model tests using the measured boundary condition at the break inlet. It also demonstrates modifications that significantly improved the model response in posttest simulations. Calculations were performed using the RELAP5/MOD2/36.05 and RELAP5/MOD3.5M5 codes on MicroVAX, SUN, and CONVEX computers. Bethsy is an integral test facility that simulates a typical 900-MW (electric) Framatome pressurized water reactor. The ISP-27 scenario involves a 2-in. cold-leg break without high-pressure safety injection and with delayed operator procedures for secondary system depressurization.

  15. Code Development in Coupled PARCS/RELAP5 for Supercritical Water Reactor

    DOE PAGES

    Hu, Po; Wilson, Paul

    2014-01-01

    A new capability has been added to the existing coupled code package PARCS/RELAP5 in order to analyze SCWR designs under supercritical pressure with separated water coolant and moderator channels. The expansion is carried out in both codes. In PARCS, modification focuses on extending the water property tables to supercritical pressure, modifying the variable-mapping input file and related code modules for processing thermal-hydraulic information from the separated coolant/moderator channels, and modifying the neutronics feedback module to deal with the separated channels. In RELAP5, modification focuses on incorporating more accurate water properties near SCWR operating and transient pressures and temperatures. Confirmatory tests of the modifications are presented, and the major results from the extended code package are summarized.
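
    A sketch of evaluating water properties at SCWR-relevant supercritical conditions, using the open-source CoolProp package as a stand-in for the extended property tables described above (the actual PARCS/RELAP5 modifications use their own internal tables, and the pressure and temperatures below are assumed, illustrative values).

    ```python
    from CoolProp.CoolProp import PropsSI

    P = 25.0e6                        # assumed SCWR operating pressure, Pa
    for T in (560.0, 650.0, 750.0):   # temperatures spanning the pseudo-critical region, K
        rho = PropsSI("D", "P", P, "T", T, "Water")  # density, kg/m^3
        cp = PropsSI("C", "P", P, "T", T, "Water")   # isobaric heat capacity, J/(kg*K)
        print(f"T={T:6.1f} K  rho={rho:8.2f} kg/m3  cp={cp:10.1f} J/(kg*K)")
    ```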

  16. Posttest analysis of LOFT LOCE L2-3 using the ESA RELAP4 blowdown model. [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perryman, J.L.; Samuels, T.K.; Cooper, C.H.

    A posttest analysis of the blowdown portion of Loss-of-Coolant Experiment (LOCE) L2-3, which was conducted in the Loss-of-Fluid Test (LOFT) facility, was performed using the experiment safety analysis (ESA) RELAP4/MOD5 computer model. Measured experimental parameters were compared with the calculations in order to assess the conservatisms in the ESA RELAP4/MOD5 model.

  17. RELAP5-3D Developer Guidelines and Programming Practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. George L Mesina

    Our ultimate goal is to create and maintain RELAP5-3D as the best software tool available for analyzing nuclear power plants. This begins with excellent programming and requires thorough testing. This document covers development of the RELAP5-3D software, the behavior of the RELAP5-3D program that must be maintained, and code testing. RELAP5-3D must perform in a manner consistent with previous code versions, with backward compatibility for the sake of the users. Thus file operations, code termination, input, and output must remain consistent in form and content, while appropriate new files, input, and output are added as new features are developed. As computer hardware, operating systems, and other software change, RELAP5-3D must adapt and maintain performance. The code must be thoroughly tested to ensure that it continues to perform robustly on the supported platforms. The coding must be written in a consistent manner that makes the program easy to read, to reduce the time and cost of development, maintenance, and error resolution. The programming guidelines presented here are intended to institutionalize a consistent way of writing FORTRAN code for the RELAP5-3D computer program that will minimize errors and rework. A common format and organization of program units creates a unifying look and feel to the code. This in turn increases readability and reduces the time required for maintenance, development, and debugging. It also aids new programmers in reading and understanding the program. Therefore, when undertaking development of the RELAP5-3D computer program, the programmer must write computer code that follows these guidelines. This set of programming guidelines creates a framework of good programming practices, such as initialization, structured programming, and vector-friendly coding. It sets out formatting rules for lines of code, such as indentation, capitalization, and spacing. It places limits on program units, such as subprograms, functions, and modules, and establishes documentation guidance on internal comments. The guidelines apply to both existing and new subprograms and are written for both FORTRAN 77 and FORTRAN 95. They are not so rigorous as to inhibit a programmer's unique style, but they do restrict the variations in acceptable coding to create sufficient commonality that new readers will find the coding in each new subroutine familiar. It is recognized that this is a "living" document that must be updated as languages, compilers, and computer hardware and software evolve.

  18. RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Andrs; Ray Berry; Derek Gaston

    This document contains the simulation results of a steady-state model PWR problem with the RELAP-7 code. RELAP-7 is the next-generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state, single-phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration is to show that the RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next-generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (partial differential equations) and ODEs (ordinary differential equations) as well as experiment-based closure models. RELAP-7 will eventually utilize well-posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher-order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written in the object-oriented programming language C++, and its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes. Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open-source software, such as PETSc (a nonlinear solver toolkit developed at Argonne National Laboratory) and libMesh (a finite element analysis package developed at the University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE, so RELAP-7 code developers need only focus on physics and user experience. By using the MOOSE development environment, RELAP-7 is developed by following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE-based applications, ranging from 3-D transient neutron transport and detailed 3-D transient fuel performance analysis to long-term material aging. Multi-physics and multi-dimensional analysis capabilities can be obtained by coupling RELAP-7 with other MOOSE-based applications and by leveraging capabilities developed by other DOE programs. This allows the focus of RELAP-7 to be restricted to systems-analysis-type simulations and gives priority to retaining and significantly extending RELAP5's capabilities.

  19. Analyses of 1/15 scale Creare bypass transient experiments. [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kmetyk, L.N.; Buxton, L.D.; Cole, R.K. Jr.

    1982-09-01

    RELAP4 analyses of several 1/15-scale Creare H-series bypass transient experiments have been performed to investigate the effect of using different downcomer nodalizations, physical scales, slip models, and vapor fraction donoring methods. Most of the analyses were thermal-equilibrium calculations performed with RELAP4/MOD5, but a few such calculations were done with RELAP4/MOD6 and RELAP4/MOD7, which contain improved slip models. To estimate the importance of nonequilibrium effects, additional analyses were performed with TRAC-PD2, RELAP5, and the nonequilibrium option of RELAP4/MOD7. The purpose of these studies was to determine whether results from Westinghouse's calculations of the Creare experiments, which were done with a UHI-modified version of SATAN, were sufficient to guarantee that SATAN would be conservative with respect to ECC bypass in full-scale plant analyses.

  20. System Simulation of Nuclear Power Plant by Coupling RELAP5 and Matlab/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng Lin; Dong Hou; Zhihong Xu

    2006-07-01

    Because the RELAP5 code has general and advanced thermal-hydraulic computation features, it has been widely used in transient and accident safety analysis, experiment planning analysis, and system simulation. We therefore wish to design, analyze, and verify a new Instrumentation and Control (I&C) system of a Nuclear Power Plant (NPP) based on this best-estimate code, and eventually develop our own engineering simulator. However, because RELAP5's ability to simulate control and protection systems is limited, it is necessary to expand this function for efficient, accurate, and flexible design and simulation of I&C systems. Matlab/Simulink, a scientific computation package and a powerful tool for research and simulation of plant process control, can compensate for this limitation. It was therefore selected as the I&C part to be coupled with the RELAP5 code to realize system simulation of NPPs. There are two key techniques to be solved. One is dynamic data exchange, by which Matlab/Simulink receives plant parameters and returns control results. A database is used to communicate between the two codes: a Dynamic Link Library (DLL) links the database to RELAP5, while a DLL and an S-Function are applied in Matlab/Simulink. The other problem is synchronization between the two codes to ensure consistency in global simulation time. Because Matlab/Simulink always computes faster than RELAP5, the simulation time is sent by RELAP5 and received by Matlab/Simulink, and a time-control subroutine added to the Matlab/Simulink simulation procedure controls its advancement. Through these means, Matlab/Simulink is dynamically coupled with RELAP5. Thus, in Matlab/Simulink, we can freely design the control and protection logic of NPPs and test it with best-estimate plant model feedback. A test is shown to demonstrate that the results of the coupled calculation are nearly the same as those of RELAP5 alone with built-in control logic. In practice, a real Pressurized Water Reactor (PWR) is modeled with the RELAP5 code, and its main control and protection system is duplicated in Matlab/Simulink. Some steady states and transients are calculated under the control of these I&C systems, and the results are compared with plant test curves. The application showed that exact system simulation of NPPs can be performed by coupling RELAP5 and Matlab/Simulink. This paper focuses on the coupling method, the plant thermal-hydraulic model, the main control logics, and test and application results.
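
    A minimal sketch of the synchronization scheme described above: the thermal-hydraulic side advances and publishes its simulation time with each data record; the (faster) control side blocks until the next record arrives, so it can never run ahead. The queue stands in for the database/DLL channel between RELAP5 and Matlab/Simulink; all names and the plant response are invented.

    ```python
    import queue
    import threading

    channel = queue.Queue()            # stand-in for the shared database

    def thermal_hydraulics(t_end=0.1, dt=0.02):
        t, level = 0.0, 5.0
        while t < t_end - 1e-12:
            t += dt
            level -= 0.01              # toy plant response
            channel.put((t, level))    # publish (simulation time, parameter)
        channel.put(None)              # end-of-run marker

    def controller(setpoint=5.0, gain=2.0):
        while (msg := channel.get()) is not None:   # blocking get = time sync
            t, level = msg
            demand = gain * (setpoint - level)      # proportional control action
            print(f"t={t:5.2f} s  level={level:5.2f} m  valve demand={demand:5.2f}")

    th = threading.Thread(target=thermal_hydraulics)
    th.start()
    controller()
    th.join()
    ```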

  21. Coupled calculation of the radiological release and the thermal-hydraulic behavior of a 3-loop PWR after a SGTR by means of the code RELAP5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Hove, W.; Van Laeken, K.; Bartsoen, L.

    1995-09-01

    To enable a more realistic and accurate calculation of the radiological consequences of a SGTR, a fission product transport model was developed. As the radiological releases strongly depend on the thermal-hydraulic transient, the model was included in the RELAP5 input decks of the Belgian NPPs, enabling the coupled calculation of the thermal-hydraulic transient and the radiological release. The fission product transport model tracks the concentration of the fission products in the primary circuit, in each of the SGs, and in the condenser. This leads to a system of six coupled, first-order ordinary differential equations with time-dependent coefficients. Flashing, scrubbing, atomisation, and dryout of the break flow are accounted for. Coupling with the thermal-hydraulic calculation and correct modelling of the break position enable an accurate calculation of the mixture level above the break. Pre- and post-accident spiking in the primary circuit are introduced. The transport times in the feedwater system and the SG blowdown system are also taken into account, as is the decontaminating effect of the primary make-up system and of the SG blowdown system. Physical input parameters such as the partition coefficients, half-lives, and spiking coefficients are explicitly introduced so that the same model can be used for iodine, caesium, and noble gases.
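
    A sketch of the compartment-model structure described above: activity concentrations in coupled volumes governed by first-order ODEs with time-dependent coefficients. The compartments, transfer rates, decay constant, and spiking source are invented placeholders; in the real model the coefficients come from the RELAP5 thermal-hydraulic solution (break flow, flashing fraction, etc.) at each time step.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    LAM = 6.6e-6    # decay constant (1/s), roughly iodine-131 scale

    def rhs(t, c):
        # c = [primary, SG-A (faulted), SG-B, SG-C, condenser, letdown]
        k_break = 2.0e-4 if t < 1800.0 else 0.0   # tube-rupture transfer until isolation
        k_spike = 1.0e-3 * np.exp(-t / 600.0)     # decaying accident-spike source
        dc = -LAM * c                             # radioactive decay everywhere
        dc[0] += k_spike - (k_break + 1.0e-5) * c[0]
        dc[1] += k_break * c[0] - 5.0e-5 * c[1]   # faulted SG fills from the break...
        dc[4] += 5.0e-5 * c[1]                    # ...and carries over to the condenser
        dc[5] += 1.0e-5 * c[0] - 2.0e-5 * c[5]    # letdown/cleanup path
        return dc

    sol = solve_ivp(rhs, (0.0, 7200.0), y0=[1.0, 0, 0, 0, 0, 0], max_step=60.0)
    print(np.round(sol.y[:, -1], 4))              # final concentration per compartment
    ```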

  22. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise the code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
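
    A sketch of the "verification file" idea just described: reduce key solution arrays to double-precision sums each advancement, then compare the files from two code versions record by record, so any unintended change, even in the last bit, is flagged. The file layout and variable choices are illustrative, not the actual RELAP5-3D format.

    ```python
    import numpy as np

    def verification_line(step, pressures, temperatures):
        """One record: step number plus full-precision sums of key arrays."""
        return f"{step:8d} {pressures.sum():.17e} {temperatures.sum():.17e}\n"

    def first_difference(file_a, file_b):
        """Return the first record pair that differs between two runs, or None."""
        with open(file_a) as fa, open(file_b) as fb:
            for la, lb in zip(fa, fb):
                if la != lb:
                    return la, lb
        return None

    # Demo record; each run would append one of these per advancement, then
    # first_difference("ref.vrf", "new.vrf") drives the regression check.
    p = np.linspace(1.0e7, 1.5e7, 100)
    T = np.linspace(550.0, 620.0, 100)
    print(verification_line(1, p, T), end="")
    ```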

  23. Steady state and LOCA analysis of Kartini reactor using RELAP5/SCDAP code: The role of passive system

    NASA Astrophysics Data System (ADS)

    Antariksawan, Anhar R.; Wahyono, Puradwi I.; Taxwim

    2018-02-01

    Safety is the priority for nuclear installations, including research reactors. At the same time, many studies have been performed to validate the applicability of best-estimate computer codes developed for nuclear power plants to research reactors. This study aims to assess the applicability of the RELAP5/SCDAP code to the Kartini research reactor. Model development and calculations of the steady state and of a LOCA transient were conducted using RELAP5/SCDAP, and the results were compared with available measurement data from the Kartini reactor. The results show that the steady-state calculation with the RELAP5/SCDAP model agrees quite well with the available measurement data. In the LOCA transient simulations, the model produced reasonable physical phenomena, showing the characteristics and performance of the reactor during the transient. The role of the siphon breaker hole and of natural circulation in the reactor tank as passive systems was important in keeping the reactor in a safe condition. It is concluded that RELAP5/SCDAP can be used as a tool to analyse the thermal-hydraulic safety of the Kartini reactor; however, further assessment to improve the model is still needed.

  24. An assessment of RELAP5-3D using the Edwards-O'Brien Blowdown problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; Aumiller, D.L.

    1999-07-01

    The RELAP5-3D (version bt) computer code was used to assess the United States Nuclear Regulatory Commission's Standard Problem 1 (the Edwards-O'Brien blowdown test). The RELAP5-3D standard installation problem based on this test was modified to model the appropriate initial conditions and to represent the proper locations of the instruments present in the experiment. The results obtained using the modified model are significantly different from the original calculation, indicating the need to model the experimental conditions accurately if an accurate assessment of the calculational model is to be obtained.

  25. Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6. [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, C.E.

    Comparisons between RELAP4/MOD6, Update 4 code calculations and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed, and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.

  26. RELAP-7 Closure Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Berry, R. A.; Martineau, R. C.

    The RELAP-7 code is the next-generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's and TRACE's capabilities and extends their analysis capability to all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed 7-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document provides a summary of the closure correlations currently implemented in RELAP-7, including sub-grid models that describe interactions between the fluids and the flow channel and interactions between the two phases.
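
    The abstract references the well-posed 7-equation model without reproducing it. As orientation, a two-pressure, two-velocity model of this class carries mass, momentum, and energy balances for each phase k in {l, g} plus a volume-fraction transport equation; a schematic 1-D form is shown below. This is the generic Baer-Nunziato-type structure, not necessarily the precise RELAP-7 formulation, and the interfacial terms (Gamma_k, M_k, S_k, p_int, u_int, mu) are exactly where closure correlations such as those summarized here enter.

    ```latex
    \begin{aligned}
    &\partial_t(\alpha_k\rho_k) + \partial_x(\alpha_k\rho_k u_k) = \Gamma_k \\
    &\partial_t(\alpha_k\rho_k u_k) + \partial_x\!\left(\alpha_k\rho_k u_k^2 + \alpha_k p_k\right)
       = p_{\mathrm{int}}\,\partial_x\alpha_k + M_k \\
    &\partial_t(\alpha_k\rho_k E_k) + \partial_x\!\left(\alpha_k u_k(\rho_k E_k + p_k)\right)
       = p_{\mathrm{int}}\,u_{\mathrm{int}}\,\partial_x\alpha_k + S_k \\
    &\partial_t\alpha_g + u_{\mathrm{int}}\,\partial_x\alpha_g = \mu\,(p_g - p_l)
    \end{aligned}
    ```

    Two phases times three balances, plus the volume-fraction equation, gives the seven equations.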

  27. PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Robin Ivey; Balestra, Paolo; Strydom, Gerhard

    A collaborative effort between the Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL), as part of the Civil Nuclear Energy Working Group, is underway to model the High Temperature Engineering Test Reactor (HTTR) loss of forced cooling (LOFC) transient performed in December 2010. The coupled version of RELAP5-3D, a thermal-fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS/RELAP5-3D code to implement an adaptive time step methodology for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available, based on flux or power convergence criteria, that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results, as well as the University of Rome subcontractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested, using 8 and 26 energy groups. Most of the new adaptive methods led to significant reductions in the LOFC simulation time without significant accuracy penalties in the prediction of the fission power and fuel temperature. In the best-performing 8-group model scenarios, a 20-hour LOFC simulation could be completed in real time, or even less, compared with the previous version of the code, which completed the same transient 3-8 times slower than real time. A few of the user-choice combinations of the available methodologies and tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study concludes with recommendations on which methods to use for this HTTR model. An important caveat is that these findings are very model-specific and cannot be generalized to other PHISICS/RELAP5-3D models.
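
    A sketch of an adaptive time-step rule of the kind described above: grow the neutronics step while a convergence indicator (e.g. relative change in fission power) stays under tolerance, shrink and retry when it does not. The tolerances and factors are illustrative, not the values used in PHISICS/RELAP5-3D.

    ```python
    def next_dt(dt, power_change, tol=1e-4, grow=2.0, shrink=0.5,
                dt_min=1e-3, dt_max=10.0):
        """Return (accepted, new_dt) for one neutronics advancement."""
        if power_change <= tol:
            return True, min(dt * grow, dt_max)   # accept step, enlarge next one
        return False, max(dt * shrink, dt_min)    # reject step, retry smaller

    # Usage: repeat each advancement until accepted.
    print(next_dt(1.0, power_change=3.0e-4))      # -> (False, 0.5)
    print(next_dt(0.5, power_change=5.0e-5))      # -> (True, 1.0)
    ```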

  28. Pump-stopping water hammer simulation based on RELAP5

    NASA Astrophysics Data System (ADS)

    Yi, W. S.; Jiang, J.; Li, D. D.; Lan, G.; Zhao, Z.

    2013-12-01

    RELAP5 was originally designed to analyze the complex thermal-hydraulic interactions that occur during postulated large or small loss-of-coolant accidents in PWRs. However, as development continued, the code was expanded to cover many of the transient scenarios that can occur in thermal-hydraulic systems. When flowing liquid decelerates rapidly, its kinetic energy is transformed into potential energy, producing high transient pressure surges; this phenomenon is called water hammer. Water hammer can occur in any thermal-hydraulic system and is extremely dangerous when the pressure surges become sufficiently high: if the pressure exceeds the critical pressure that the pipe or the fittings along the pipeline can bear, the integrity of the whole pipeline fails. The purpose of this article is to apply RELAP5 to the simulation and analysis of water hammer. Based on the RELAP5 code manuals and related documents, the authors use RELAP5 to set up an example of a water-supply system fed by an impeller pump and simulate pump-stopping water hammer. Through this sample case and the subsequent analysis of the results, a better understanding is gained of water hammer itself and of how RELAP5 performs in this field. The authors also compare the results of the RELAP5-based model with those of other fluid-transient analysis software (e.g., PIPENET), draw conclusions about the peculiarities of RELAP5 when applied to water-hammer research, and offer several modelling tips for simulating water-hammer cases with the code.
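
    For scale, the peak surge described here is commonly bounded by the Joukowsky relation for an instantaneous velocity change, a standard water-hammer result independent of the RELAP5 model in the paper:

    ```latex
    \Delta p = \rho\,c\,\Delta v
    ```

    where rho is the liquid density, c the pressure-wave (sound) speed in the pipe, and Delta v the change in flow velocity; slower valve closure or pump coast-down keeps the peak below this bound.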

  29. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process, a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability to all reactor system simulation scenarios.

  30. Comparison of MELCOR and SCDAP/RELAP5 results for a low-pressure, short-term station blackout at Browns Ferry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajo, J.J.

    1995-12-31

    This study compares results obtained with two U.S. Nuclear Regulatory Commission (NRC)-sponsored codes, MELCOR version 1.8.3 (1.8PQ) and SCDAP/RELAP5 MOD3.1 release C, for the same transient: a low-pressure, short-term station blackout accident at the Browns Ferry nuclear plant. This work is part of the MELCOR assessment activities comparing core damage progression calculations of MELCOR against SCDAP/RELAP5, since the two codes model core damage progression very differently.

  31. Comparison of the PHISICS/RELAP5-3D ring and block model results for phase I of the OECD/NEA MHTGR-350 benchmark

    DOE PAGES

    Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...

    2015-12-02

    The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation), and modules performing criticality searches, fuel shuffling, and generalized perturbation. Coupling of the PHISICS code suite to the thermal-hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal-fluids solution at End of Equilibrium Cycle for the Modular High Temperature Gas Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal-fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D "ring" model approach against a much more detailed model that includes kinetics feedback at the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity that can be obtained by this "block" model is illustrated with comparisons of the temperature, power density, and flux distributions. Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher-fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.

  32. Methodology, status, and plans for development and assessment of the RELAP5 code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, G.W.; Riemke, R.A.

    1997-07-01

    RELAP5/MOD3 is a computer code used for the simulation of transients and accidents in light-water nuclear power plants. The objective of the program to develop and maintain RELAP5 was, and is, to provide the U.S. Nuclear Regulatory Commission with an independent tool for assessing reactor safety. This paper describes the code's requirements, models, solution scheme, language and structure, user interface, validation, and documentation. The paper also describes the current and near-term development program and provides an assessment of the code's strengths and limitations.

  33. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The PHISICS modules currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU), and a cross-section interpolation module (MIXER). The INSTANT module is the most developed of these; its basic functionality is ready to use, but the code is still under continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal-hydraulics system code RELAP5-3D to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion-theory neutron kinetics module in RELAP5-3D (NESTLE) offers. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers; therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  34. Test prediction for the German PKL Test K5A using RELAP4/MOD6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y.S.; Haigh, W.S.; Sullivan, L.H.

    RELAP4/MOD6 is the most recent modification in the series of RELAP4 computer programs developed to describe the thermal-hydraulic conditions attendant to postulated transients in light water reactor systems. The major new features in RELAP4/MOD6 include best-estimate pressurized water reactor (PWR) reflood transient analytical models for core heat transfer, local entrainment, and core vapor superheat, and a new set of heat transfer correlations for PWR blowdown and reflood. These new features were used for a test prediction of the Kraftwerk Union three-loop Primärkreislauf (PKL) Reflood Test K5A. The results of the prediction were in good agreement with the experimental thermal and hydraulic system data. Comparisons include heater rod surface temperature, system pressure, mass flow rates, and core mixture level. It is concluded that RELAP4/MOD6 is capable of accurately predicting transient reflood phenomena in the 200% cold-leg break test configuration of the PKL reflood facility.

  35. RELAP-7 Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Ray Alden; Zou, Ling; Zhao, Haihua

    This document summarizes the physical models and mathematical formulations used in the RELAP-7 code. The MOOSE-based RELAP-7 code development is an ongoing effort; the MOOSE framework enables rapid development of the code, and the developmental efforts and results to date demonstrate that the RELAP-7 project is on a path to success. This theory manual documents the main features implemented in the RELAP-7 code. Because the code is under ongoing development, this Theory Manual will evolve, with periodic updates to keep it current with the state of the development, implementation, and model additions/revisions.

  36. Comparison of the PHISICS/RELAP5-3D Ring and Block Model Results for Phase I of the OECD MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2014-04-01

    The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation), and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal-hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR-350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises (1-3) defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Gas Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal-fluids solution; the RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal-fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D "ring" model approach against a much more detailed model that includes kinetics feedback at the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparisons of the temperature, power density, and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.

  17. RELAP-7 Code Assessment Plan and Requirement Traceability Matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.

    2016-10-01

    RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods, and physical models over the last decades. Recently, INL has also been working to establish the code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international/domestic reports and research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirements Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate the code development process and its present capability.
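
    As a purely illustrative sketch of what an RTM boils down to in software terms, the snippet below maps requirements to the tests that verify them; the requirement IDs, fields, and coverage metric are hypothetical, not taken from the actual RELAP-7 RTM.

    # Minimal RTM sketch: hypothetical requirement IDs and test names,
    # not the actual RELAP-7 requirements.
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        req_id: str                                  # e.g. "FR-01" (hypothetical)
        description: str
        tests: list = field(default_factory=list)    # names of V&V cases
        status: str = "open"                         # open / verified / failed

    def coverage(rtm):
        """Fraction of requirements traced to at least one test."""
        traced = [r for r in rtm if r.tests]
        return len(traced) / len(rtm) if rtm else 0.0

    rtm = [
        Requirement("FR-01", "Single-phase mass conservation", ["edwards_pipe"]),
        Requirement("FR-02", "Two-phase closure relations", []),
    ]
    print(f"traced: {coverage(rtm):.0%}")            # -> traced: 50%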

  18. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the QUENCH-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the QUENCH-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
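
    The practical appeal of the adjoint route is that a single adjoint solve yields the sensitivity of one response to all initial conditions at once, whereas forward differencing needs one perturbed run per parameter. A toy sketch on a linear ODE, with an invented matrix and numbers (this is not the ASM-REL/TFH formulation itself):

    # Adjoint sensitivity of a response R = c.T(tf) to initial conditions T0
    # for dT/dt = A T. A, c, T0 are made-up illustrative numbers.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-2.0, 1.0], [0.5, -1.0]])   # hypothetical system matrix
    T0 = np.array([600.0, 550.0])              # initial temperatures (K)
    c = np.array([1.0, 0.0])                   # response = first temperature at tf
    tf = 2.0

    R = c @ expm(A * tf) @ T0                  # forward solution and response

    # Adjoint route: dR/dT0 = exp(A^T tf) c, one solve for all parameters
    dR_dT0 = expm(A.T * tf) @ c

    # Check against forward finite differences (one perturbed run per parameter)
    eps = 1e-6
    fd = [(c @ expm(A * tf) @ (T0 + eps * np.eye(2)[i]) - R) / eps for i in range(2)]
    print(dR_dT0, fd)                          # the two estimates agree closely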

  19. RELAP5-3D Modeling of Heat Transfer Components (Intermediate Heat Exchanger and Helical-Coil Steam Generator) for NGNP Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. A. Anderson; P. Sabharwall

    2014-01-01

    The Next Generation Nuclear Plant project is aimed at the research and development of a helium-cooled high-temperature gas reactor that could generate both electricity and process heat for the production of hydrogen. The heat from the high-temperature primary loop must be transferred via an intermediate heat exchanger to a secondary loop. Using RELAP5-3D, a model was developed for two of the heat exchanger options: a printed-circuit heat exchanger and a helical-coil steam generator. The RELAP5-3D models were used to simulate an exponential decrease in pressure over a 20-second period. The results of this loss-of-coolant analysis indicate that heat is initially transferred from the primary loop to the secondary loop, but after the decrease in pressure in the primary loop the heat is transferred from the secondary loop to the primary loop. A high-temperature gas reactor model should be developed and connected to the heat transfer component to simulate other transients.
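
    For orientation, a boundary condition of the kind described, an exponential pressure decrease over a 20-second period, can be tabulated as below; the initial and final pressures and the time constant are assumed values, not the NGNP design figures.

    # Exponential depressurization table for a 20-second transient.
    # P0 and P_end are assumed, not design values.
    import math

    P0 = 7.0e6          # initial primary pressure (Pa), assumed
    P_end = 0.1e6       # final pressure (Pa), assumed
    duration = 20.0     # s
    tau = duration / math.log(P0 / P_end)   # chosen so that P(20 s) = P_end

    for t in (0.0, 5.0, 10.0, 15.0, 20.0):
        P = P0 * math.exp(-t / tau)
        print(f"t = {t:5.1f} s  P = {P/1e6:6.3f} MPa")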

  20. Peer review of RELAP5/MOD3 documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craddick, W.G.

    1993-12-31

    A peer review was performed on a portion of the documentation of the RELAP5/MOD3 computer code. The review was performed in two phases. The first phase was a review of Volume 3, Developmental Assessment Problems, and Volume 4, Models and Correlations. The reviewers for this phase were Dr. Peter Griffith, Dr. Yassin Hassan, Dr. Gerald S. Lellouche, Dr. Marino di Marzo and Mr. Mark Wendel. The reviewers recommended a number of improvements, including using a frozen version of the code for assessment guided by a validation plan, better justification for flow regime maps, and extension of models beyond their data base. The second phase was a review of Volume 6, Quality Assurance of Numerical Techniques in RELAP5/MOD3. The reviewers for the second phase were Mr. Mark Wendel and Dr. Paul T. Williams. Recommendations included correction of numerous grammatical and typographical errors and better justification for the use of Lax's Equivalence Theorem.

  1. THERMAL DESIGN OF THE ITER VACUUM VESSEL COOLING SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajo, Juan J; Yoder Jr, Graydon L; Kim, Seokho H

    RELAP5-3D models of the ITER Vacuum Vessel (VV) Primary Heat Transfer System (PHTS) have been developed. The design of the cooling system is described in detail, and RELAP5 results are presented. Two parallel pump/heat exchanger trains comprise the design: one train is for full-power operation and the other is for emergency operation or operation at decay heat levels. All the components are located inside the Tokamak building (a significant change from the original configurations). The results presented include operation at full power, decay heat operation, and baking operation. The RELAP5-3D results confirm that the design can operate satisfactorily during both normal pulsed power operation and decay heat operation. All the temperatures in the coolant and in the different system components are maintained within acceptable operating limits.

  2. Thermal-hydraulic simulation of natural convection decay heat removal in the High Flux Isotope Reactor (HFIR) using RELAP5 and TEMPEST: Part 2, Interpretation and validation of results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggles, A.E.; Morris, D.G.

    The RELAP5/MOD2 code was used to predict the thermal-hydraulic behavior of the HFIR core during decay heat removal through boiling natural circulation. The low system pressure and low mass flux values associated with boiling natural circulation are far from conditions for which RELAP5 is well exercised. Therefore, some simple hand calculations are used herein to establish the physics of the results. The interpretation and validation effort is divided between the time-average flow conditions and the time-varying flow conditions. The time-average flow conditions are evaluated using a lumped parameter model and heat balance. The Martinelli-Nelson correlations are used to model the two-phase pressure drop and the void fraction vs. flow quality relationship within the core region. Systems of parallel channels are susceptible to both density wave oscillations and pressure drop oscillations. Periodic variations in the mass flux and exit flow quality of individual core channels are predicted by RELAP5. These oscillations are consistent with those observed experimentally and are of the density wave type. The impact of the time-varying flow properties on local wall superheat is bounded herein. The conditions necessary for Ledinegg flow excursions are identified. These conditions do not fall within the envelope of decay heat levels relevant to HFIR in boiling natural circulation. 14 refs., 5 figs., 1 tab.
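
    A flavor of the two-phase pressure-drop closure involved is sketched below. For simplicity it uses the algebraic Lockhart-Martinelli multiplier rather than the tabulated Martinelli-Nelson correlation cited above, with rough low-pressure water/steam properties, not HFIR conditions.

    # Two-phase pressure-drop multiplier sketch (Lockhart-Martinelli form,
    # phi_l^2 = 1 + C/X + 1/X^2); properties are illustrative values only.
    def martinelli_parameter(x, rho_l, rho_g, mu_l, mu_g):
        """Turbulent-turbulent Martinelli parameter X_tt."""
        return ((1 - x) / x) ** 0.9 * (rho_g / rho_l) ** 0.5 * (mu_l / mu_g) ** 0.1

    def two_phase_multiplier(x, rho_l=958.0, rho_g=0.6, mu_l=2.8e-4, mu_g=1.2e-5, C=20.0):
        X = martinelli_parameter(x, rho_l, rho_g, mu_l, mu_g)
        # Multiplies the liquid-alone frictional pressure gradient
        return 1.0 + C / X + 1.0 / X ** 2

    for quality in (0.01, 0.05, 0.10):
        print(quality, round(two_phase_multiplier(quality), 1))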

  3. Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stosic, Z.; Preusche, G.

    1996-08-01

    When extending standard thermal-hydraulic codes beyond their original capabilities, e.g., by coupling them with one- and/or three-dimensional core kinetics models, the void fraction transferred from the thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, because the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs to be coupled are compatible enough to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to provide the basis for concluding whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. To this end, the void fractions calculated with RELAP5 are compared with those calculated by the BWR licensing safety code FRANCESCA and by HECHAN, a best-estimate model for pre- and post-dryout calculations in a BWR heated channel. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
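
    A minimal example of a constitutive void-quality relation of the kind being compared is the Zuber-Findlay drift-flux form sketched below; the distribution parameter and drift velocity are typical assumed values, not the constants actually used in RELAP5, FRANCESCA or HECHAN.

    # Drift-flux void-quality relation: alpha = j_g / (C0*j + Vgj).
    # Densities are rough values near BWR pressure (~7 MPa), assumed.
    def void_fraction(x, G, rho_l=740.0, rho_g=36.0, C0=1.13, Vgj=0.2):
        """Void fraction from flow quality x and mass flux G (kg/m^2 s)."""
        j_g = x * G / rho_g                      # superficial vapor velocity
        j_l = (1 - x) * G / rho_l                # superficial liquid velocity
        return j_g / (C0 * (j_g + j_l) + Vgj)

    for x in (0.02, 0.05, 0.10, 0.20):
        print(f"x = {x:.2f}  alpha = {void_fraction(x, G=1000.0):.3f}")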

  4. Posttest RELAP4 analysis of LOFT experiment L1-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grush, W.H.; Holmstrom, H.L.O.

    Results of posttest analysis of LOFT loss-of-coolant experiment L1-4 with the RELAP4 code are presented. The results are compared with the pretest prediction and the test data. Differences between the RELAP4 model used for this analysis and that used for the pretest prediction are in the areas of initial conditions, nodalization, emergency core cooling system, broken loop hot leg, and steam generator secondary. In general, these changes made only minor improvement in the comparison of the analytical results to the data. Also presented are the results of a limited study of LOFT downcomer modeling which compared the performance of the conventional single downcomer model with that of the new split downcomer model. A RELAP4 sensitivity calculation with artificially elevated emergency core coolant temperature was performed to highlight the need for an ECC mixing model in RELAP4.

  5. RAVEN Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego

    2016-06-01

    RAVEN is a software framework able to perform parametric and stochastic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermohydraulic code RELAP-7, currently under development at Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose stochastic and uncertainty quantification platform, capable of communicating with any system code. In fact, the provided Application Programming Interfaces (APIs) allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible through input files or via Python interfaces. RAVEN is capable of investigating system response and exploring the input space using various sampling schemes such as Monte Carlo, grid, or Latin hypercube. However, RAVEN's strength lies in its system feature discovery capabilities, such as constructing limit surfaces, separating regions of the input space leading to system failure, and using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework arose. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to identify the frequency of an event potentially leading to a system failure, but the proximity (or lack thereof) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. peak pressure in a pipe) is exceeded under certain conditions. Most of the capabilities, implemented having RELAP-7 as a principal focus, are easily deployable to other system codes. For this reason, several side activities have been carried out (e.g., RELAP5-3D, any MOOSE-based application) or are currently ongoing for coupling RAVEN with several different software packages. The aim of this document is to provide a set of commented examples that can help the user become familiar with the RAVEN code usage.
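
    Conceptually, the sampling workflow reduces to perturbing inputs, running the system model, and classifying outcomes against a safety metric, from which a limit surface can be traced. The sketch below illustrates this with a made-up stand-in model and limit, not RAVEN's actual API.

    # Monte Carlo sampling of an input space with failure classification.
    # The "system model", parameter ranges, and limit are all invented.
    import random

    def system_model(power, flow):
        """Stand-in for a system code run; returns a peak 'pressure' metric."""
        return 10.0 + 0.8 * power / flow        # entirely made-up response

    LIMIT = 15.0                                # hypothetical safety limit
    n, failures = 10000, []
    for _ in range(n):
        p = random.uniform(50.0, 120.0)         # perturbed power
        f = random.uniform(5.0, 20.0)           # perturbed flow
        if system_model(p, f) > LIMIT:
            failures.append((p, f))             # points beyond the limit surface

    print(f"estimated failure probability: {len(failures) / n:.3f}")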

  6. Identification of limiting case between DBA and SBDBA (CL break area sensitivity): A new model for the boron injection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez Gonzalez, R.; Petruzzi, A.; D'Auria, F.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities (e.g., oblique control rods, positive void coefficient) required a developed and validated complex three-dimensional (3D) neutron kinetics (NK) coupled thermal-hydraulic (TH) model. Reactor shutdown is obtained by the oblique CRs and, during accidental conditions, by an emergency shutdown system (JDJ) injecting a highly concentrated boron solution (boron clouds) into the moderator tank; the boron cloud reconstruction is obtained using a CFD (CFX) code calculation. A complete LBLOCA calculation implies the application of the RELAP5-3D{sup C} system code. Within the framework of the third Agreement 'NA-SA - Univ. of Pisa' a new RELAP5-3D control system for the boron injection system was developed and implemented in the validated coupled RELAP5-3D/NESTLE model of the Atucha 2 NPP. The aim of this activity is to find the limiting case (maximum break area size) for the Peak Cladding Temperature for LOCAs under fixed boundary conditions. (authors)

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lv, Q.; Kraus, A.; Hu, R.

    CFD analysis has been focused on important component-level phenomena using STARCCM+ to supplement the system analysis of integral system behavior. A notable area of interest was the cavity region. This area is of particular interest for CFD analysis due to the multi-dimensional flow and complex heat transfer (thermal radiation heat transfer and natural convection), which are not simulated directly by RELAP5. CFD simulations allow for the estimation of the boundary heat flux distribution along the riser tubes, which is needed in the RELAP5 simulations. The CFD results can also provide additional data to help establish what level of modeling detail is necessary in RELAP5. It was found that the flow profiles in the cavity region are simpler for the water-based concept than for the air-cooled concept. The local heat flux noticeably increases axially, and is higher in the fins than in the riser tubes. These results were utilized in RELAP5 simulations as boundary conditions, to provide better temperature predictions in the system level analyses. It was also determined that temperatures were higher in the fins than the riser tubes, but within design limits for thermal stresses. Higher temperature predictions were identified in the edge fins, in part due to additional thermal radiation from the side cavity walls.
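
    As a back-of-envelope sketch of the heat transfer split mentioned above, the boundary heat flux on a riser tube can be estimated as the sum of a radiation term and a natural convection term; the temperatures, emissivity, and convection coefficient below are assumed values, not results from the STARCCM+ analysis.

    # Combined radiation + natural-convection boundary heat flux estimate.
    # All inputs are illustrative assumptions.
    SIGMA = 5.670e-8     # Stefan-Boltzmann constant (W/m^2 K^4)

    def boundary_heat_flux(T_wall, T_surr, T_air, emissivity=0.8, h_conv=5.0):
        q_rad = emissivity * SIGMA * (T_wall ** 4 - T_surr ** 4)
        q_conv = h_conv * (T_wall - T_air)
        return q_rad + q_conv                # W/m^2, imposed as a boundary condition

    print(boundary_heat_flux(T_wall=550.0, T_surr=450.0, T_air=400.0))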

  8. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pecchia, M.; D'Auria, F.; Mazzantini, O.

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  9. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use a Terry turbine, which is composed of the wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts on the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories' original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code, and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided according to a reduced-order model, which was obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand RCIC behavior. Cases with two-phase flow at the turbine inlet will be pursued in future work.
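
    The two expansion steps can be illustrated with textbook ideal-gas relations: choked flow through the throat followed by free expansion to the ambient back pressure. The sketch below uses an effective specific-heat ratio for steam and assumed stagnation conditions; it is an approximation for orientation, not the RELAP-7 implementation.

    # Isentropic nozzle sketch: choked mass flow plus expansion to ambient.
    # gamma, R, P0, T0, A_throat, P_amb are assumed illustrative values.
    import math

    gamma, R = 1.3, 461.5          # effective heat-capacity ratio, gas constant for steam
    P0, T0 = 7.0e6, 560.0          # assumed stagnation conditions (Pa, K)
    A_throat = 1.0e-4              # assumed throat area (m^2)

    # Choked (sonic) mass flow through the nozzle throat
    mdot = A_throat * P0 * math.sqrt(gamma / (R * T0)) * \
           (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

    # Free expansion to ambient back pressure gives the supersonic jet velocity
    P_amb = 0.4e6
    v_jet = math.sqrt(2.0 * gamma * R * T0 / (gamma - 1.0) *
                      (1.0 - (P_amb / P0) ** ((gamma - 1.0) / gamma)))

    print(f"mdot = {mdot:.3f} kg/s, jet velocity = {v_jet:.0f} m/s")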

  10. RELAP-7 Progress Report. FY-2015 Optimization Activities Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Ray Alden; Zou, Ling; Andrs, David

    2015-09-01

    This report summarily documents the optimization activities on RELAP-7 for FY-2015. It includes the migration from the analytical stiffened-gas equation of state for both the vapor and liquid phases to accurate and efficient property evaluations for both equilibrium and metastable (nonequilibrium) states using the Spline-Based Table Look-up (SBTL) method with the IAPWS-95 properties for steam and water. It also includes the initiation of realistic closure models based, where appropriate, on the U.S. Nuclear Regulatory Commission's TRACE code, and describes an improved entropy viscosity numerical stabilization method for the nonequilibrium two-phase flow model of RELAP-7. For ease of presentation to the reader, the nonequilibrium two-phase flow model used in RELAP-7 is briefly presented; for a detailed explanation the reader is referred to the RELAP-7 Theory Manual [R.A. Berry, J.W. Peterson, H. Zhang, R.C. Martineau, H. Zhao, L. Zou, D. Andrs, "RELAP-7 Theory Manual," Idaho National Laboratory INL/EXT-14-31366 (rev. 1), February 2014].
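
    The idea behind a spline-based table look-up is to pay the cost of the reference property evaluation once, on a grid, and then evaluate a cheap spline fit at run time. A conceptual sketch with a toy property function standing in for IAPWS-95 (the grid ranges and the function are invented):

    # SBTL-like property look-up: tabulate once, evaluate a spline thereafter.
    # expensive_property is a toy stand-in for an IAPWS-95 evaluation.
    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    def expensive_property(p, T):
        return 1.0e-3 * p / T + np.log(T)    # invented "property" of (p, T)

    p_grid = np.linspace(1e5, 2e7, 50)
    T_grid = np.linspace(300.0, 600.0, 50)
    table = expensive_property(p_grid[:, None], T_grid[None, :])

    spline = RectBivariateSpline(p_grid, T_grid, table)    # built once
    # Fast spline evaluation vs. the direct (slow) reference calculation
    print(spline(5e6, 450.0)[0, 0], expensive_property(5e6, 450.0))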

  11. RELAP5 Application to Accident Analysis of the NIST Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baek, J.; Cuadra Gascon, A.; Cheng, L.Y.

    Detailed safety analyses have been performed for the 20 MW D{sub 2}O-moderated research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The time-dependent analysis of the primary system is performed with a RELAP5 transient analysis model that includes the reactor vessel, the pump, the heat exchanger, the fuel element geometry, and flow channels for both the six inner and twenty-four outer fuel elements. A post-processing of the simulation results has been conducted to evaluate the minimum critical heat flux ratio (CHFR) using the Sudo-Kaminaga correlation. Evaluations are performed for the following accidents: (1) the control rod withdrawal startup accident and (2) the maximum reactivity insertion accident. In both cases the RELAP5 results indicate that there is adequate margin to CHF and no damage to the fuel will occur because of sufficient coolant flow through the fuel channels and the negative scram reactivity insertion.
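
    The CHFR post-processing step amounts to dividing a correlation-predicted critical heat flux by the local heat flux and taking the minimum along the channel. In the sketch below the Sudo-Kaminaga correlation itself is deliberately not reproduced; chf_sudo_kaminaga is a placeholder with made-up coefficients, and the channel data are invented.

    # Minimum CHFR post-processing sketch. The CHF function below is a
    # PLACEHOLDER, not the real Sudo-Kaminaga correlation.
    def chf_sudo_kaminaga(G, dT_sub):
        """Placeholder for a CHF correlation (W/m^2); coefficients invented."""
        return 1.0e6 + 800.0 * G + 2.0e3 * dT_sub

    def min_chfr(axial_heat_flux, G, dT_sub):
        """Minimum critical heat flux ratio along a fuel channel."""
        return min(chf_sudo_kaminaga(G, dT_sub) / q for q in axial_heat_flux)

    q_profile = [0.4e6, 0.9e6, 1.2e6, 0.8e6]     # local heat fluxes (W/m^2), invented
    print(min_chfr(q_profile, G=3000.0, dT_sub=40.0))   # > 1 means margin to CHF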

  12. Posttest RELAP5 simulations of the Semiscale S-UT series experiments. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leonard, M.T.

    The RELAP5/MOD1 computer code was used to perform posttest calculations simulating six experiments run in the Semiscale Mod-2A facility that investigated the effects of upper head injection on small-break transient behavior. The results of these calculations and the corresponding test data are presented in this report. An evaluation is made of the capability of RELAP5 to calculate the thermal-hydraulic response of the Mod-2A system over a spectrum of break sizes, with and without the use of upper head injection.

  13. RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul D.

    2014-06-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2.1i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  14. RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul David

    2015-10-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3.4i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  15. RELAP-7 Development Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Zhao, Haihua; Gleicher, Frederick Nathan

    RELAP-7 is a nuclear systems safety analysis code being developed at the Idaho National Laboratory, and is the next-generation tool in the RELAP reactor safety/systems analysis application series. RELAP-7 development began in 2011 to support the Risk Informed Safety Margins Characterization (RISMC) Pathway of the Light Water Reactor Sustainability (LWRS) program. The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical methods, and physical models in order to provide capabilities needed for the RISMC methodology and to support nuclear power safety analysis. The code is being developed based on Idaho National Laboratory's modern scientific software development framework, MOOSE (the Multi-Physics Object-Oriented Simulation Environment). The initial development goal of the RELAP-7 approach focused primarily on the development of an implicit algorithm capable of strong (nonlinear) coupling of the dependent hydrodynamic variables contained in the 1-D/2-D flow models with the various 0-D system reactor components that compose boiling water reactor (BWR) and pressurized water reactor (PWR) nuclear power plants (NPPs). During Fiscal Year (FY) 2015, the RELAP-7 code was further improved with expanded capability to support BWR and PWR NPP analysis. An accumulator model has been developed. The code has also been coupled with other MOOSE-based applications, such as the neutronics code RattleSnake and the fuel performance code BISON, to perform multiphysics analysis. A major design requirement for the implicit algorithm in RELAP-7 is that it be capable of second-order discretization accuracy in both space and time, which eliminates the traditional first-order approximation errors. Second-order temporal accuracy is achieved by a second-order backward temporal difference, and the one-dimensional second-order accurate spatial discretization is achieved with the Galerkin approximation of Lagrange finite elements. During FY-2015, numerical verification work confirmed that the RELAP-7 code indeed achieves second-order accuracy in both time and space for single-phase models at the system level.
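
    The kind of order-of-accuracy verification described can be demonstrated on a scalar test problem: a second-order backward difference (BDF2) integrator applied to dy/dt = -y should show the error falling by roughly a factor of four each time the step is halved. A self-contained sketch (a model problem, not the RELAP-7 verification suite):

    # BDF2 order-of-accuracy check on dy/dt = -y, y(0) = 1.
    import math

    def bdf2_solve(dt, t_end=1.0):
        # Start the two-step method with the exact value at t = dt
        y_prev, y = 1.0, math.exp(-dt)
        t = dt
        while t < t_end - 1e-12:
            # BDF2 for y' = -y: (3*y_new - 4*y + y_prev) / (2*dt) = -y_new
            y_new = (4.0 * y - y_prev) / (3.0 + 2.0 * dt)
            y_prev, y = y, y_new
            t += dt
        return y

    for dt in (0.1, 0.05, 0.025):
        err = abs(bdf2_solve(dt) - math.exp(-1.0))
        print(f"dt = {dt:.3f}  error = {err:.2e}")   # error ratio ~4 => 2nd order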

  16. RELAP-7 Software Verification and Validation Plan: Requirements Traceability Matrix (RTM) Part 1 – Physics and numerical methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee

    2015-09-01

    This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process: a set of specially designed software models used to test RELAP-7.

  17. Simulation of a small cold-leg-break experiment at the PMK-2 test facility using the RELAP5 and ATHLET codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ezsoel, G.; Guba, A.; Perneczky, L.

    Results of a small-break loss-of-coolant accident experiment, conducted on the PMK-2 integral-type test facility are presented. The experiment simulated a 1% break in the cold leg of a VVER-440-type reactor. The main phenomena of the experiment are discussed, and in the case of selected events, a more detailed interpretation with the help of measured void fraction, obtained by a special measurement device, is given. Two thermohydraulic computer codes, RELAP5 and ATHLET, are used for posttest calculations. The aim of these calculations is to investigate the code capability for modeling natural circulation phenomena in VVER-440-type reactors. Therefore, the results of the experiment and both calculations are compared. Both codes predict most of the transient events well, with the exception that RELAP5 fails to predict the dryout period in the core. In the experiment, the hot- and cold-leg loop-seal clearing is accompanied by natural circulation instabilities, which can be explained by means of the ATHLET calculation.

  18. [Mechanisms of myeloid cell RelA/p65 in cigarette smoking-induced lung cancer growth in mice].

    PubMed

    Yao, Yiwen; Wu, Junlu; Quan, Wenqiang; Zhou, Hong; Zhang, Yu; Wan, Haiying; Li, Dong

    2014-06-01

    The aim of this study was to investigate the mechanism of cigarette smoking (CS)-induced lung cancer growth in mice. RelA/p65⁻/⁻ mice and WT mice were used to establish mouse models of lung cancer. Both mice were divided into two groups: air group and CS group, respectively. Tumor number on the lung surface was counted and maximal tumor size was evaluated using HE staining. Kaplan Meier (K-M) survival curve was used to analyze the survival rate of the mice. Expression of Ki-67, TNF-α and CD68 in the tumor tissue was determined by immunohistochemical analysis, and cyclin D1 and c-myc proteins were examined by Western blot. Apoptosis of tumor cells was analyzed using TUNEL staining. The concentrations of inflammatory cytokines TNF-α, IL-6 and KC in the mouse lung tissues were evaluated by ELISA. Compared with the WT air group, the lung weight, lung tumor multiplicity, as well as maximum tumor size in the WT mice exposed to CS were (1.5 ± 0.1)g, (64.8 ± 4.1) and (7.6 ± 0.2) mm, respectively, significantly increased than those in the WT mice not exposed to CS (P < 0.05 for all). However, there were no statistically significant differences between RelA/p65⁻/⁻ mice before and after CS exposure (P > 0.05 for all). Kaplan-Meier survival analysis showed that CS exposure significantly shortened the life time of WT mice (P < 0.05), and deletion of RelA/p65 in myeloid cells resulted in an increased survival compared with that of the WT mice (P < 0.05 for all). The ratios of Ki-67 positive tumor cells were (43.4 ± 2.9)%, (60.6 ± 5.4)%, (12.8 ± 3.6)% and (15.0 ± 4.2)% in the WT air group, WT CS groups, RelA/p65⁻/⁻ air groups and RelA/p65⁻/⁻ CS groups, respectively. After smoking, the number of Ki-67-positive cells was significantly increased in the WT mice (P < 0.05). However, there was no significant difference between the RelA/p65⁻/⁻ groups before and after smoking (P > 0.05). The apoptosis rate of WT air, WT CS, RelA/p65⁻/⁻ air and RelA/p65⁻/⁻ CS groups were (11.6 ± 1.7)%, (13.0 ± 2.0)%, (13.2 ± 2.0)% and (11.0 ± 1.4)%, respectively, with no significant difference among them (P > 0.05). Expression of cyclin D1 and c-myc was induced in response to CS exposure in lung tumor cells of WT mice. In contrast, their expressions were not significantly changed in the RelA/p65⁻/⁻ mice after smoke exposure. CS exposure was associated with an increased number of macrophages infiltrating in the tumor tissue, in both WT and RelA/p65⁻/⁻ mice (P < 0.05). The concentrations of IL-6, KC and TNF-α were significantly increased after CS exposure in the lungs of WT mice (P < 0.05). Cigarette smoking promotes the lung cancer growth in mice. Myeloid cell RelA/p65 mediates CS-induced tumor growth. TNFα regulated by RelA/p65 may be involved in the lung cancer development.

  19. RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2012-06-01

    The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady-state results have been obtained for Exercise 2 of Phase I of the newly defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady-state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dionne, B.; Tzanos, C. P.

    To support the safety analyses required for the conversion of the Belgian Reactor 2 (BR2) from highly-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, the simulation of a number of loss-of-flow tests, with or without loss of pressure, has been undertaken. These tests were performed at BR2 in 1963 and used instrumented fuel assemblies (FAs) with thermocouples (TCs) embedded in the cladding as well as probes to measure the FA power on the basis of the coolant temperature rise. The availability of experimental data for these tests offers an opportunity to better establish the credibility of the RELAP5-3D model and methodology used in the conversion analysis. Preliminary analyses showed that the conservative power distributions used historically in the BR2 RELAP model resulted in a significant overestimation of the peak cladding temperature during the transient. Therefore, it was concluded that better estimates of the steady-state and decay power distributions were needed to accurately predict the cladding temperatures measured during the tests and to establish the credibility of the RELAP model and methodology. The new approach ('best estimate' methodology) uses the MCNP5, ORIGEN-2 and BERYL codes to obtain steady-state and decay power distributions for the BR2 core during the tests A/400/1, C/600/3 and F/400/1. This methodology can be easily extended to simulate any BR2 core configuration. Comparisons with measured peak cladding temperatures showed a much better agreement when power distributions obtained with the new methodology are used.
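
    For orientation, a crude decay-power estimate of the kind these tests require can be obtained from the classic Way-Wigner approximation, shown below in place of the ORIGEN-2 calculation cited above; the power level and times are assumed values, not BR2 data.

    # Way-Wigner decay-power approximation; P0 and operating time are assumed.
    def decay_power(t, T_op, P0):
        """Decay power t seconds after shutdown, following T_op seconds of
        operation at power P0 (Way-Wigner approximation)."""
        return 0.066 * P0 * (t ** -0.2 - (t + T_op) ** -0.2)

    P0 = 60.0e6                    # assumed steady-state power (W)
    T_op = 30 * 24 * 3600          # assumed 30 days of operation (s)
    for t in (1.0, 10.0, 100.0, 1000.0):
        print(f"t = {t:7.0f} s  P_decay = {decay_power(t, T_op, P0)/1e6:.2f} MW")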

  2. RELAP-7 Progress Report: A Mathematical Model for 1-D Compressible, Single-Phase Flow Through a Branching Junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, R. A.

    In the literature, the abundance of pipe network junction models, as well as the inclusion of dissipative losses between connected pipes with loss coefficients, has been treated using the incompressible flow assumption of constant density. This approach is fundamentally, physically wrong for compressible flow with density change. This report introduces a mathematical modeling approach for general junctions in piping network systems for which the transient flows are compressible and single-phase. The junction could be as simple as a 1-pipe input and 1-pipe output with differing pipe cross-sectional areas, for which a dissipative loss is necessary, or it could include an active component, such as a pump or turbine, between an inlet pipe and an outlet pipe. In this report, discussion is limited to the former. A more general branching junction connecting an arbitrary number of pipes with transient, 1-D compressible single-phase flows is also presented. These models are developed in a manner consistent with the use of a general equation of state, for example, the recent Spline-Based Table Look-up method [1] for incorporating the IAPWS-95 formulation [2] to give accurate and efficient property calculations for water and steam with RELAP-7 [3].

  3. Development and Implementation of Mechanistic Terry Turbine Models in RELAP-7 to Simulate RCIC Normal Operation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zou, Ling; Zhang, Hongbin

    As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia's original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model which was obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input information for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand RCIC behavior. The newly developed nozzle models and the turbine rotor model, modified from Sandia's original work, have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model. An input model was developed to test the Terry turbine RCIC system, which generates reasonable results. Both the INL RCIC model and the Sandia RCIC model produce results matching major rated parameters, such as the rotational speed, pump torque, and turbine shaft work, for the normal operation condition. The Sandia model is more sensitive to the turbine outlet pressure than the INL model. The next step will be to further refine the Terry turbine models by including two-phase flow cases so that off-design conditions can be simulated. The pump model could also be enhanced with the use of homologous curves.

  4. The probability of containment failure by direct containment heating in Zion. Supplement 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, M.M.; Allen, M.D.; Stamps, D.W.

    1994-12-01

    Supplement 1 of NUREG/CR-6075 brings to closure the DCH issue for the Zion plant. It includes the documentation of the peer review process for NUREG/CR-6075, the assessments of four new splinter scenarios defined in working group meetings, and modeling enhancements recommended by the working groups. In the four new scenarios, consistency of the initial conditions has been implemented by using insights from systems-level codes. SCDAP/RELAP5 was used to analyze three short-term station blackout cases with different leak rates. In all three cases, the hot leg or surge line failed well before the lower head, and thus the primary system depressurized to a point where DCH was no longer considered a threat. However, these calculations were continued to lower head failure in order to gain insights that were useful in establishing the initial and boundary conditions. The most useful insights are that the RCS pressure is low at vessel breach, that metallic blockages in the core region do not melt and relocate into the lower plenum, and that melting of upper plenum steel is correlated with hot leg failure. The SCDAP/RELAP5 output was used as input to CONTAIN to assess the containment conditions at vessel breach. The containment-side conditions predicted by CONTAIN are similar to those originally specified in NUREG/CR-6075.

  5. Modeling of the Edwards pipe experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiselj, I.; Petelin, S.

    1995-12-31

    The Edwards pipe experiment is used as one of the basic benchmarks for two-phase flow codes due to its simple geometry and the wide range of phenomena that it covers. Edwards and O'Brien filled a 4-m-long pipe with liquid water at 7 MPa and 502 K and ruptured one end of the tube. They measured pressure and void fraction during the blowdown. Important phenomena observed were the pressure rarefaction wave, flashing onset, critical two-phase flow, and a void fraction wave. The experimental data were used to analyze the capabilities of the RELAP5/MOD3.1 six-equation two-phase flow model and to examine two different numerical schemes: one from the RELAP5/MOD3.1 code and one from our own code, which was based on characteristic upwind discretization.
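
    The essence of a characteristic upwind scheme is to difference in the direction information travels. The sketch below applies first-order upwinding to the scalar advection equation as a one-wave analogue; the actual codes apply the idea to the full two-phase system along its characteristics.

    # First-order upwind scheme for u_t + a u_x = 0 with a step ("rupture")
    # initial condition; a, dx, dt are illustrative values (CFL = 0.5).
    import numpy as np

    a, dx, dt = 1.0, 0.01, 0.005
    x = np.arange(0.0, 1.0, dx)
    u = np.where(x < 0.5, 1.0, 0.0)       # step initial condition at x = 0.5

    for _ in range(50):
        # Information travels along dx/dt = a > 0, so difference to the left
        u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])

    print(u[70:80].round(3))              # smeared front, typical of first-order upwind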

  6. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua

    2014-11-01

    Passive systems, structures and components (SSCs) degrade over their operating life, and this degradation may reduce the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] does consider physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as to RELAP5 [4]. The overall methodology aims to:
    • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7.
    • Identify the risk-significant passive components, their failure modes, and anticipated rates of degradation.
    • Incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress.
    • Assess aging effects in a dynamic simulation environment.
    References: 1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, “Incorporating Ageing Effects into Probabilistic Risk Assessment - A Feasibility Study Utilizing Reliability Physics Models,” NUREG/CR-5632, USNRC (2001). 2. T. ALDEMIR, “A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants,” Annals of Nuclear Energy, 52, 113-124 (2013). 3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI, R. KINOSHITA, “Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report,” INL/EXT-12-27351 (2012). 4. D. ANDERS et al., “RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7,” INL/EXT-12-25924 (2012).
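
    In a dynamic-PRA setting, an aging passive component is naturally represented by a failure-time distribution with an increasing hazard rate. A minimal sketch, assuming a Weibull distribution with illustrative parameters (not plant data and not the RAVEN/RELAP-7 implementation):

    # Weibull failure-time sampling: shape > 1 gives an increasing (aging)
    # hazard rate. Scale, shape, and mission time are assumed values.
    import math
    import random

    def sample_failure_time(scale_years=40.0, shape=2.5):
        """Inverse-CDF sample of a Weibull failure time."""
        u = random.random()
        return scale_years * (-math.log(1.0 - u)) ** (1.0 / shape)

    mission_years = 60.0
    n = 100_000
    fails = sum(sample_failure_time() < mission_years for _ in range(n))
    print(f"P(fail before {mission_years:.0f} y) ~ {fails / n:.3f}")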

  7. Condensation model for the ESBWR passive condensers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Revankar, S. T.; Zhou, W.; Wolf, B.

    2012-07-01

    In the General Electric Economic Simplified Boiling Water Reactor (GE-ESBWR), the passive containment cooling system (PCCS) plays a major role in containment pressure control in case of a loss-of-coolant accident. The PCCS condenser must be able to remove sufficient energy from the reactor containment to prevent the containment from exceeding its design pressure following a design basis accident. There are three PCCS condensation modes depending on the containment pressurization due to coolant discharge: complete condensation, cyclic venting, and flow-through mode. The present work reviews the models and presents their predictive capability, along with comparisons with existing data from separate effects tests. The condensation models in the thermal-hydraulics code RELAP5 are also assessed to examine their application to the various flow modes of condensation. The default model in the code, which is basically the Nusselt solution, predicts complete condensation well. The UCB model predicts through flow well. No condensation model in RELAP5 predicts complete condensation, cyclic venting, and through-flow condensation consistently. New condensation correlations are given that accurately predict all three modes of PCCS condensation. (authors)
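
    The default RELAP5 model referred to above is essentially the Nusselt laminar film-condensation solution, which for a vertical surface reads as sketched below; the property values are rough saturated-water numbers near atmospheric pressure, assumed for illustration.

    # Nusselt laminar film condensation on a vertical wall of height L.
    # Properties are rough saturated-water values near 1 bar, assumed.
    g = 9.81
    rho_l, rho_g = 958.0, 0.6       # kg/m^3
    k_l, mu_l = 0.68, 2.8e-4        # W/m K, Pa s
    h_fg = 2.257e6                  # J/kg

    def nusselt_h(dT, L):
        """Average condensation heat transfer coefficient (W/m^2 K)."""
        return 0.943 * (rho_l * (rho_l - rho_g) * g * h_fg * k_l ** 3
                        / (mu_l * dT * L)) ** 0.25

    print(nusselt_h(dT=10.0, L=1.0))   # a few thousand W/m^2 K is typical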

  8. BISON Modeling of Reactivity-Initiated Accident Experiments in a Static Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folsom, Charles P.; Jensen, Colby B.; Williamson, Richard L.

    2016-09-01

    In conjunction with the restart of the TREAT reactor and the design of test vehicles, modeling and simulation efforts are being used to model the response of Accident Tolerant Fuel (ATF) concepts under reactivity-initiated accident (RIA) conditions. The purpose of this work is to model a baseline case of a 10 cm long UO2-Zircaloy fuel rodlet using BISON and RELAP5 over a range of energy depositions and with varying reactor power pulse widths. The results show the effect of varying the pulse width and energy deposition on both thermal and mechanical parameters that are important for predicting failure of the fuel rodlet. The combined BISON/RELAP5 model captures coupled thermal and mechanical effects on the fuel-to-cladding gap conductance, the cladding-to-coolant heat transfer coefficient, and the water temperature and pressure that would not be captured by each code individually. These combined effects allow for more accurate modeling of the thermal and mechanical response of the fuel rodlet and of the thermal-hydraulics of the test vehicle.
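
    The driving input for such RIA calculations is the reactor power pulse, commonly idealized as a Gaussian parameterized by total energy deposition and full width at half maximum. A sketch with placeholder numbers (not TREAT test values):

    # Gaussian power pulse with prescribed total energy deposition and FWHM.
    # E_dep, fwhm, and t_peak are placeholders, not TREAT data.
    import math

    def pulse_power(t, E_dep, fwhm, t_peak):
        """Power (W) of a Gaussian pulse depositing E_dep joules overall."""
        sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
        return E_dep / (sigma * math.sqrt(2.0 * math.pi)) * \
               math.exp(-0.5 * ((t - t_peak) / sigma) ** 2)

    E_dep, fwhm = 500.0, 0.08       # J, s; assumed values for illustration
    for t in (0.9, 1.0, 1.1):
        print(f"t = {t:.2f} s  P = {pulse_power(t, E_dep, fwhm, t_peak=1.0):,.0f} W")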

  9. Supplemental Thermal-Hydraulic Transient Analyses of BR2 in Support of Conversion to LEU Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Licht, J.; Dionne, B.; Sikik, E.

    2016-01-01

    Belgian Reactor 2 (BR2) is a research and test reactor located in Mol, Belgium, and is primarily used for radioisotope production and materials testing. The Materials Management and Minimization (M3) Reactor Conversion Program of the National Nuclear Security Administration (NNSA) is supporting the conversion of the BR2 reactor from highly enriched uranium (HEU) fuel to low enriched uranium (LEU) fuel. The RELAP5/Mod 3.3 code has been used to perform transient thermal-hydraulic safety analyses of the BR2 reactor to support reactor conversion. A RELAP5 model of BR2 has been validated against select transient BR2 reactor experiments performed in 1963 by showing agreement with measured cladding temperatures. Following the validation, the RELAP5 model was updated to represent the current use of the reactor, taking into account the core configuration, neutronic parameters, trip settings, component changes, etc. Simulations of the 1963 experiments were repeated with this updated model to re-evaluate the boiling risks associated with the currently allowed maximum heat flux limit of 470 W/cm2 and the temporary heat flux limit of 600 W/cm2. This document provides analysis of additional transient simulations that are required as part of a modern BR2 safety analysis report (SAR). The additional simulations included in this report are the effect of pool temperature, reduced steady-state flow rate, in-pool loss-of-coolant accidents, and loss of external cooling. The simulations described in this document have been performed for both an HEU- and an LEU-fueled core.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jun Soo; Choi, Yong Joon

    The RELAP-7 code verification and validation activities are ongoing under the code assessment plan proposed in the previous document (INL/EXT-16-40015). Among the list of V&V test problems in the ‘RELAP-7 code V&V RTM (Requirements Traceability Matrix)’, the RELAP-7 7-equation model has been tested with additional demonstration problems, and the results of these tests are reported in this document. In this report, we describe the testing process, the test cases that were conducted, and the results of the evaluation.

  11. Comparison of a RELAP5/MOD2 posttest calculation to the data during the recovery portion of a semiscale single-tube steam generator tube rupture experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, J.C.

    This report discusses the comparisons of a RELAP5 posttest calculation of the recovery portion of the Semiscale Mod-2B test S-SG-1 to the test data. The posttest calculation was performed with the RELAP5/MOD2 cycle 36.02 code without updates. The recovery procedure that was calculated mainly consisted of secondary feed and steam using auxiliary feedwater injection and the atmospheric dump valve of the unaffected steam generator (the steam generator without the tube rupture). A second procedure was initiated after the trends of the secondary feed and steam procedure had been established; this was to stop the safety injection that had been provided by two trains of both the charging and high pressure injection systems. The Semiscale Mod-2B configuration is a small-scale (1/1705), nonnuclear, instrumented model of a Westinghouse four-loop pressurized water reactor power plant. S-SG-1 was a single-tube, cold-side, steam generator tube rupture experiment. The comparison of the posttest calculation and data included comparing the general trends and the driving mechanisms of the responses, the phenomena, and the individual responses of the main parameters.

  12. Post-test analysis of PIPER-ONE PO-IC-2 experiment by RELAP5/MOD3 codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovalini, R.; D`Auria, F.; Galassi, G.M.

    1996-11-01

    RELAP5/MOD3.1 was applied to the PO-IC-2 experiment performed in the PIPER-ONE facility, which has been modified to reproduce typical isolation condenser thermal-hydraulic conditions. RELAP5 is a well-known code widely used at the University of Pisa during the past seven years. RELAP5/MOD3.1 was the latest version of the code made available by the Idaho National Engineering Laboratory at the time of the reported study. PIPER-ONE is an experimental facility simulating a General Electric BWR-6 with volume and height scaling ratios of 1/2,200 and 1/1, respectively. In the frame of the present activity, a once-through heat exchanger immersed in a pool of ambient-temperature water, installed approximately 10 m above the core, was utilized to reproduce qualitatively the phenomenologies expected for the Isolation Condenser in the simplified BWR (SBWR). The PO-IC-2 experiment is the follow-up of PO-SD-8 and has been designed to solve some of the problems encountered in the analysis of the PO-SD-8 experiment. A very wide analysis is presented hereafter, including the use of different code versions.

  13. RELAP5-3D Developmental Assessment: Comparison of Versions 4.3.4i and 4.2.1i

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul David

    2015-10-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code using versions 4.3.4i and 4.2.1i. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions changed between these two code versions and can be used to identify cases in which the assessment judgment may need to be changed in Volume III of the code manual. Changes to the assessment judgments made after reviewing all of the assessment cases are also provided.

  14. RELAP5-3D Developmental Assessment: Comparison of Versions 4.2.1i and 4.1.3i

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Paul D.

    2014-06-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code using versions 4.2.1i and 4.1.3i. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions changed between these two code versions and can be used to identify cases in which the assessment judgment may need to be changed in Volume III of the code manual. Changes to the assessment judgments made after reviewing all of the assessment cases are also provided.

  15. Main steam line break accident simulation of APR1400 using the model of ATLAS facility

    NASA Astrophysics Data System (ADS)

    Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.

    2018-02-01

    A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted with a model of the thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and the analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of the selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.

  16. Analysis of steam generator loss-of-feedwater experiments with APROS and RELAP5/MOD3.1 computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Virtanen, E.; Haapalehto, T.; Kouhia, J.

    1995-09-01

    Three experiments were conducted to study the behavior of the new horizontal steam generator construction of the PACTEL test facility. In the experiments the secondary side coolant level was reduced stepwise. The experiments were calculated with two computer codes, RELAP5/MOD3.1 and APROS version 2.11. A similar nodalization scheme was used for both codes so that the results may be compared. Only the steam generator was modelled and the rest of the facility was given as a boundary condition. The results show that both codes calculate well the behaviour of the primary side of the steam generator. On the secondary side, both codes calculate lower steam temperatures in the upper part of the heat exchange tube bundle than was measured in the experiments.

  17. Analysis of the Space Propulsion System Problem Using RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diego Mandelli; Curtis Smith; Cristian Rabiti

    This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte-Carlo sampling of random distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN also interfaces with several numerical codes, such as RELAP5 and RELAP-7, and with ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator was developed in Python and then interfaced to RAVEN. This simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte-Carlo). This analysis determines both the reliability of the space propulsion system and the propagation of the uncertainties associated with a specific set of parameters. As indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to derive risk-informed insights, such as the conditions under which different strategies can be followed.
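
    As a rough illustration of the random-sampling side of such an analysis, the following self-contained sketch estimates the reliability of a hypothetical propulsion system by Monte-Carlo sampling of component failure times. The system layout (two distribution lines, each feeding two thrusters, with success defined as at least two surviving thrusters) and the failure rates are invented for illustration and are not the benchmark's.

    ```python
    import random

    LAMBDA_LINE = 1.0e-5      # line failure rate per hour (assumed)
    LAMBDA_THRUSTER = 5.0e-5  # thruster failure rate per hour (assumed)
    MISSION_HOURS = 8760.0    # one-year mission (assumed)

    def survives(rate, hours):
        """Sample an exponential failure time; test survival to end of mission."""
        return random.expovariate(rate) > hours

    def mission_success():
        working = 0
        for _ in range(2):                         # two distribution lines
            if survives(LAMBDA_LINE, MISSION_HOURS):
                working += sum(survives(LAMBDA_THRUSTER, MISSION_HOURS)
                               for _ in range(2))  # two thrusters per line
        return working >= 2                        # success criterion (assumed)

    n = 100_000
    successes = sum(mission_success() for _ in range(n))
    print(f"Estimated mission reliability: {successes / n:.4f}")
    ```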

  18. Modeling and Analysis of Alternative Concept of ITER Vacuum Vessel Primary Heat Transfer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajo, Juan J; Yoder Jr, Graydon L; Dell'Orco, Giovanni

    2010-01-01

    A RELAP5-3D model of the ITER (Latin for 'the way') vacuum vessel (VV) primary heat transfer system has been developed to evaluate a proposed design change that relocates the heat exchangers (HXs) from the exterior of the tokamak building to the interior. This alternative design protects the HXs from external hazards such as wind, tornado, and aircraft crash. The proposed design integrates the VV HXs into a VV pressure suppression system (VVPSS) tank that contains water to condense vapour in case of a leak into the plasma chamber. The proposal is to also use this water as the ultimate sink when removing decay heat from the VV system. The RELAP5-3D model has been run under normal operating and abnormal (decay heat) conditions. Results indicate that this alternative design is feasible, with no effects on the VVPSS tank under normal operation; under decay heat conditions the tank temperature and pressure increase, requiring removal of the steam generated if the low pressure of the VVPSS tank must be maintained.

  19. Development of fission-products transport model in severe-accident scenarios for Scdap/Relap5

    NASA Astrophysics Data System (ADS)

    Honaiser, Eduardo Henrique Rangel

    The understanding and estimation of the release of fission products during a severe accident became one of the priorities of the nuclear community after the Three Mile Island Unit 2 (TMI-2) accident in 1979 and the Chernobyl accident in 1986. Since then, theoretical developments and experiments have shown that the primary circuit systems of light water reactors (LWR) have the potential to attenuate the release of fission products, a fact that had been neglected before. An advanced tool, compatible with nuclear thermal-hydraulics integral codes, is developed to predict the retention and physical evolution of the fission products in the primary circuit of LWRs, without considering chemistry effects. The tool embodies state-of-the-art models for the phenomena involved as well as newly developed models. The capabilities acquired after the implementation of this tool in the Scdap/Relap5 code can be used to increase the accuracy of probabilistic safety assessment (PSA) level 2, enhance reactor accident management procedures, and support the design of new emergency safety features.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki

    A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharging of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through analyses of the experiment on water vapor discharging into liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict tube failure occurrence. The numerical models integrated into the TACT code were verified through related experiments. The RELAP5 code evaluates the thermal-hydraulic behavior of water inside the tube. The original heat transfer correlations were corrected for a tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of failure propagation.

  1. Applications of the RELAP5 code to the station blackout transients at the Browns Ferry Unit One Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, R.R.; Wagoner, S.R.

    1983-01-01

    As a part of the charter of the Severe Accident Sequence Analysis (SASA) Program, station blackout transients have been analyzed using a RELAP5 model of the Browns Ferry Unit 1 Plant. The task was conducted in partial fulfillment of the needs of the US Nuclear Regulatory Commission in examining Unresolved Safety Issue A-44: Station Blackout. The station blackout transients were examined (a) to define the equipment needed to maintain a well cooled core, (b) to determine when core uncovery would occur given equipment failure, and (c) to characterize the behavior of the vessel thermal-hydraulics during the station blackout transients (in part as the plant operator would see it). These items are discussed in the paper. Conclusions and observations specific to the station blackout are presented.

  2. Analysis of flow reversal test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, L.Y.; Tichler, P.R.

    A series of tests has been conducted to measure the dryout power associated with a flow transient whereby the coolant in a heated channel undergoes a change in flow direction. An analysis of the test was made with the aid of a system code, RELAP5. A dryout criterion was developed in terms of a time-averaged void fraction calculated by RELAP5 for the heated channel. The dryout criterion was also compared with several CHF correlations developed for the channel geometry.
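
    The mechanics of such a screen are easy to illustrate, although the report's actual criterion is not reproduced here: average the channel void fraction over a trailing time window and compare it against a limit. In the sketch below the void-fraction history is synthetic (standing in for a RELAP5 result), and the window length and the 0.95 limit are assumptions.

    ```python
    import numpy as np

    dt = 0.01                             # time step (s), assumed
    t = np.arange(0.0, 10.0, dt)
    # Synthetic channel void fraction standing in for a RELAP5 history.
    alpha = np.clip(0.5 + 0.06 * t + 0.05 * np.sin(8.0 * t), 0.0, 1.0)

    window = 0.5                          # trailing averaging window (s), assumed
    n = int(window / dt)
    # Running mean over the trailing window via convolution.
    alpha_avg = np.convolve(alpha, np.ones(n) / n, mode="valid")

    limit = 0.95                          # illustrative dryout threshold
    hits = np.nonzero(alpha_avg > limit)[0]
    if hits.size:
        print(f"Dryout criterion exceeded at t = {t[n - 1 + hits[0]]:.2f} s")
    else:
        print("Dryout criterion not exceeded during the transient")
    ```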

  3. RELAP-7 Software Verification and Validation Plan - Requirements Traceability Matrix (RTM) Part 2: Code Assessment Strategy, Procedure, and RTM Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jun Soo; Choi, Yong Joon; Smith, Curtis Lee

    2016-09-01

    This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of RELAP-7 assessment through the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. The Requirements Traceability Matrices (RTMs) proposed in a previous document (INL-EXT-15-36684) are then updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of RELAP-7 assessment through the development process.

  4. Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.

    2016-09-01

    In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte-Carlo and Latin Hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of the timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, "extracting information" means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
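
    A minimal sketch of the kind of post-processing described, under assumed data shapes (an ensemble of runs reduced to a scalar figure of merit, plus the sampled input parameters): correlation coefficients screen the influential inputs, and a z-score flags outlier runs. The data below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_runs, n_inputs = 500, 4
    inputs = rng.uniform(0.0, 1.0, size=(n_runs, n_inputs))   # sampled parameters
    # Synthetic response: input 2 dominates, plus noise and two anomalies.
    output = 3.0 * inputs[:, 2] + 0.2 * rng.standard_normal(n_runs)
    output[[17, 250]] += 5.0

    # Input-output correlation: which sampled parameters drive the response?
    for j in range(n_inputs):
        r = np.corrcoef(inputs[:, j], output)[0, 1]
        print(f"input {j}: correlation with output = {r:+.2f}")

    # Outlier identification by z-score on the figure of merit.
    z = (output - output.mean()) / output.std()
    print("outlier runs:", np.nonzero(np.abs(z) > 4.0)[0])
    ```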

  5. Simulation of Targets Feeding Pipe Rupture in Wendelstein 7-X Facility Using RELAP5 and COCOSYS Codes

    NASA Astrophysics Data System (ADS)

    Kaliatka, T.; Povilaitis, M.; Kaliatka, A.; Urbonavicius, E.

    2012-10-01

    The Wendelstein 7-X (W7-X) nuclear fusion device is a stellarator-type experimental device developed by the Max Planck Institute of Plasma Physics. Rupture of one of the 40 mm inner diameter coolant pipes providing water for the divertor targets during the "baking" regime of the facility operation is considered to be the most severe accident in terms of the plasma vessel pressurization. The "baking" regime is the operating regime during which the plasma vessel structures are heated to a temperature acceptable for plasma ignition in the vessel. This paper presents the model of the W7-X cooling system (pumps, valves, pipes, hydro-accumulators, and heat exchangers), developed using the state-of-the-art thermal-hydraulic RELAP5 Mod3.3 code, and the model of the plasma vessel, developed by employing the lumped-parameter code COCOSYS. Using both models, the numerical simulation of processes in the W7-X cooling system and plasma vessel has been performed. The simulation results showed that an automatic valve closure time of 1 s is the most acceptable (no water hammer effect occurs) and that the selected burst disk area is sufficient to limit the pressure in the plasma vessel.

  6. Numerical implementation, verification and validation of two-phase flow four-equation drift flux model with Jacobian-free Newton–Krylov method

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-08-24

    This study presents a numerical investigation of using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations ('closure models'). The drift flux model is based on Ishii and his collaborators' work. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling, and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered-grid finite volume method and the fully implicit backward Euler method were used as the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
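
    The essence of the Jacobian-free approach can be shown in a few lines: the solver needs only a residual function, and the Krylov iteration probes the Jacobian through finite-difference matrix-vector products. The sketch below is not the drift-flux implementation; it applies SciPy's newton_krylov to a 1-D nonlinear diffusion problem purely to illustrate the mechanism, with an invented conductivity law.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    N = 100
    h = 1.0 / (N + 1)

    def residual(u):
        """Discrete residual of -(k(u) u')' = 1, u(0)=u(1)=0, k(u)=1+u^2/2."""
        up = np.concatenate(([0.0], u, [0.0]))       # Dirichlet boundaries
        k = 1.0 + 0.5 * (up[1:] + up[:-1]) ** 2      # face conductivities
        flux = k * (up[1:] - up[:-1]) / h            # face fluxes
        return (flux[1:] - flux[:-1]) / h + 1.0      # interior cell balance

    # No analytical Jacobian is ever coded: the Krylov solver only calls
    # residual(), approximating Jacobian-vector products by differencing.
    u = newton_krylov(residual, np.zeros(N), method="lgmres", f_tol=1e-10)
    print("max u =", u.max())
    ```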

  7. Thermal-hydraulic analysis of N Reactor graphite and shield cooling system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, J.O.; Schmitt, B.E.

    1988-02-01

    A series of bounding (worst-case) calculations was performed using a detailed hydrodynamic RELAP5 model of the N Reactor graphite and shield cooling system (GSCS). These calculations were specifically aimed at answering issues raised by the Westinghouse Independent Safety Review (WISR) committee. These questions address the operability of the GSCS during a worst-case degraded-core accident that requires the GSCS to mitigate the consequences of the accident. An accident scenario previously developed was designated as the hydrogen-mitigation design-basis accident (HMDBA). Previous HMDBA heat transfer analysis, using the TRUMP-BD code, was used to define the thermal boundary conditions that the GSCS may be exposed to. These TRUMP/HMDBA analysis results were used to define the bounding operating conditions of the GSCS during the course of an HMDBA transient. Nominal and degraded GSCS scenarios were investigated using RELAP5 within or at the bounds of the HMDBA transient. 10 refs., 42 figs., 10 tabs.

  8. Initial Coupling of the RELAP-7 and PRONGHORN Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Ortensi; D. Andrs; A.A. Bingham

    2012-10-01

    Modern nuclear reactor safety codes require the ability to solve detailed coupled neutronic-thermal fluids problems. For larger cores, this implies fully coupled higher dimensionality spatial dynamics with appropriate feedback models that can provide enough resolution to accurately compute core heat generation and removal during steady and unsteady conditions. The reactor analysis code PRONGHORN is being coupled to RELAP-7 as a first step to extend RELAP's current capabilities. This report details the mathematical models, the type of coupling, and the testing results from the integrated system. RELAP-7 is a MOOSE-based application that solves the continuity, momentum, and energy equations in 1-D for a compressible fluid. The pipe and joint capabilities enable it to model parts of the power conversion unit. The PRONGHORN application, also developed on the MOOSE infrastructure, solves the coupled equations that define the neutron diffusion, fluid flow, and heat transfer in a full core model. The two systems are loosely coupled to simplify the transition towards a more complex infrastructure. The integration is tested on a simplified version of the OECD/NEA MHTGR-350 Coupled Neutronics-Thermal Fluids benchmark model.
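
    The loose-coupling pattern itself is compact enough to sketch. Below, two zero-dimensional stand-ins for the physics (a power with temperature feedback for the "neutronics" and a lumped temperature for the "thermal fluids") are iterated Picard-style until the exchanged fields stop changing; all coefficients are invented, and nothing here comes from PRONGHORN or RELAP-7.

    ```python
    ALPHA = -2.0e-5   # reactivity-style feedback per K (assumed)
    P0 = 350.0e6      # reference power, W (assumed)
    T_REF = 800.0     # reference fuel temperature, K (assumed)
    R_TH = 1.2e-6     # lumped thermal resistance, K/W (assumed)
    T_COOL = 600.0    # coolant temperature, K (assumed)

    def neutronics(T_fuel):
        """Power with a simple Doppler-style temperature feedback."""
        return P0 * (1.0 + ALPHA * (T_fuel - T_REF))

    def thermal_fluids(power):
        """Lumped fuel temperature from power and a fixed coolant sink."""
        return T_COOL + R_TH * power

    T = T_REF
    for it in range(50):
        P = neutronics(T)          # pass temperature to the neutronics solve
        T_new = thermal_fluids(P)  # pass power back to the fluids solve
        if abs(T_new - T) < 1e-6:  # converged when exchanged field is stable
            break
        T = T_new
    print(f"converged in {it} iterations: P = {P/1e6:.1f} MW, T = {T:.1f} K")
    ```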

  9. Loss-of-Flow and Loss-of-Pressure Simulations of the BR2 Research Reactor with HEU and LEU Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Licht, J.; Bergeron, A.; Dionne, B.

    2016-01-01

    Belgian Reactor 2 (BR2) is a research and test reactor located in Mol, Belgium and is primarily used for radioisotope production and materials testing. The Materials Management and Minimization (M3) Reactor Conversion Program of the National Nuclear Security Administration (NNSA) is supporting the conversion of the BR2 reactor from Highly Enriched Uranium (HEU) fuel to Low Enriched Uranium (LEU) fuel. The reactor core of BR2 is located inside a pressure vessel that contains 79 channels in a hyperboloid configuration. The core configuration is highly variable as each channel can contain a fuel assembly, a control or regulating rod, an experimental device, or a beryllium or aluminum plug. Because of this variability, a representative core configuration, based on current reactor use, has been defined for the fuel conversion analyses. The code RELAP5/Mod 3.3 was used to perform the transient thermal-hydraulic safety analyses of the BR2 reactor to support reactor conversion. The input model has been modernized relative to that historically used at BR2 taking into account the best modeling practices developed by Argonne National Laboratory (ANL) and BR2 engineers.

  10. Import Manipulate Plot RELAP5/MOD3 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, K. R.

    1999-10-05

    XMGR5 was derived from an XY plotting tool called ACE/gr, which is copyrighted by Paul J. Turner and placed in the public domain. The interactive version of ACE/gr is xmgr, which includes a graphical interface to the X-windows system. Enhancements to xmgr have been developed which import, manipulate, and plot data from the RELAP5/MOD3, MELCOR, FRAPCON, and SINDA codes and from NRC databank files. Capabilities include two-phase property table lookup functions, an equation interpreter, arithmetic library functions, and units conversion. Plot titles, labels, legends, and narrative can be displayed using Latin or Cyrillic alphabets.

  11. New Molecular Bridge between RelA/p65 and NF-κB Target Genes via Histone Acetyltransferase TIP60 Cofactor*

    PubMed Central

    Kim, Jung-Woong; Jang, Sang-Min; Kim, Chul-Hong; An, Joo-Hee; Kang, Eun-Jin; Choi, Kyung-Hee

    2012-01-01

    The nuclear factor-κB (NF-κB) family is involved in the expression of numerous genes in development, apoptosis, inflammatory responses, and oncogenesis. In this study we identified four NF-κB target genes that are modulated by TIP60. We also found that TIP60 interacts with the NF-κB RelA/p65 subunit and increases its transcriptional activity through protein-protein interaction. Although TIP60 binds RelA/p65 through its histone acetyltransferase domain, TIP60 does not directly acetylate RelA/p65. However, TIP60 maintained acetylated Lys-310 RelA/p65 levels in the TNF-α-dependent NF-κB signaling pathway. In chromatin immunoprecipitation assays, TIP60 was primarily recruited to the IL-6, IL-8, C-IAP1, and XIAP promoters upon TNF-α stimulation, followed by acetylation of histones H3 and H4. Chromatin remodeling by TIP60 involved the sequential recruitment of acetyl-Lys-310 RelA/p65 to its target gene promoters. Furthermore, we showed that up-regulated TIP60 expression was correlated with acetyl-Lys-310 RelA/p65 expression in hepatocarcinoma tissues. Taken together, these results suggest that TIP60 is involved in the NF-κB pathway through protein interaction with RelA/p65 and that it modulates the transcriptional activity of RelA/p65 in NF-κB-dependent gene expression. PMID:22249179

  12. Molecular Tagging Velocimetry Development for In-situ Measurement in High-Temperature Test Facility

    NASA Technical Reports Server (NTRS)

    Andre, Matthieu A.; Bardet, Philippe M.; Burns, Ross A.; Danehy, Paul M.

    2015-01-01

    The High Temperature Test Facility (HTTF) at Oregon State University (OSU) is an integral-effect test facility designed to model the behavior of a Very High Temperature Gas Reactor (VHTR) during a Depressurized Conduction Cooldown (DCC) event. It also has the ability to conduct limited investigations into the progression of a Pressurized Conduction Cooldown (PCC) event, in addition to phenomena occurring during normal operations. Both of these phenomena will be studied with in-situ velocity field measurements. Experimental measurements of velocity are critical to provide proper boundary conditions to validate CFD codes, as well as to develop correlations for system-level codes such as RELAP5 (http://www4vip.inl.gov/relap5/). Such data will be the first acquired in the HTTF and will introduce a diagnostic with numerous other applications in the field of nuclear thermal hydraulics. A laser-based optical diagnostic under development at The George Washington University (GWU) is presented; the technique is demonstrated with velocity data obtained in ambient-temperature air, and adaptation to high-pressure, high-temperature flow is discussed.

  13. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Page, R.; Jones, J.R.

    1997-07-01

    Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation, where the integration of plant dynamic, core follow, and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code, TALINK (Transient Analysis code LINKage program), used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition, the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell B loss-of-offsite-power fault transient.
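
    The synchronized exchange pattern such a linkage program provides can be illustrated with two processes trading interface data once per time step. The "codes" below are trivial stand-ins with invented dynamics, not RELAP5 or PANTHER; only the handshake is the point.

    ```python
    from multiprocessing import Process, Pipe

    def thermal_code(conn, steps):
        T = 550.0                           # coolant temperature, K (assumed)
        for _ in range(steps):
            power = conn.recv()             # receive power from kinetics side
            T += 0.01 * (power - 1000.0)    # toy thermal response
            conn.send(T)                    # return temperature feedback
        conn.close()

    def kinetics_code(conn, steps):
        power = 1010.0                      # start slightly off-equilibrium
        for _ in range(steps):
            conn.send(power)                # send power to thermal side
            T = conn.recv()                 # receive temperature feedback
            power -= 0.5 * (T - 550.0)      # toy reactivity feedback
        print(f"final power {power:.1f}, final temperature {T:.2f}")
        conn.close()

    if __name__ == "__main__":
        a, b = Pipe()
        p = Process(target=thermal_code, args=(a, 100))
        p.start()
        kinetics_code(b, 100)
        p.join()
    ```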

  14. Comparisons of RELAP5-3D Analyses to Experimental Data from the Natural Convection Shutdown Heat Removal Test Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Hu, Rui; Lisowski, Darius

    2016-04-17

    The Reactor Cavity Cooling System (RCCS) is an important passive safety system being incorporated into the overall safety strategy for high temperature advanced reactor concepts such as the High Temperature Gas-Cooled Reactor (HTGR). The Natural Convection Shutdown Heat Removal Test Facility (NSTF) at Argonne National Laboratory (Argonne) reflects a 1/2-scale model of the primary features of one conceptual air-cooled RCCS design. The project conducts ex-vessel, passive heat removal experiments in support of the Department of Energy Office of Nuclear Energy's Advanced Reactor Technology (ART) program, while also generating data for code validation purposes. While experiments are being conducted at the NSTF to evaluate the feasibility of the passive RCCS, parallel modeling and simulation efforts are ongoing to support the design, fabrication, and operation of these natural convection systems. Both system-level and high-fidelity computational fluid dynamics (CFD) analyses were performed to gain a complete understanding of the complex flow and heat transfer phenomena in natural convection systems. This paper provides a summary of the RELAP5-3D NSTF model development efforts and provides comparisons between simulation results and experimental data from the NSTF. Overall, the simulation results compared favorably to the experimental data; however, further analyses are needed to investigate the identified differences.

  15. Problems with numerical techniques: Application to mid-loop operation transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryce, W.M.; Lillington, J.N.

    1997-07-01

    There has been an increasing need to consider accidents at shutdown, which have been shown in some PSAs to provide a significant contribution to overall risk. In the UK, experience has been gained at three levels: (1) assessment of codes against experiments; (2) plant studies specifically for Sizewell B; and (3) detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. The authors believe that these kinds of problems are probably generic to most present-generation system thermal-hydraulic codes for the conditions present in mid-loop transients; thus, as far as possible, the problems and solutions are presented in generic terms. The areas addressed include: condensables at low pressure, poor time step calculation detection, water packing, inadequate physical modelling, numerical heat transfer, and mass errors. In general, single code modifications have been proposed to solve the problems. These have been very much concerned with means of improving existing models rather than with formulating a completely new approach. They have been produced after a particular problem has arisen. Thus, and this has been borne out in practice, the danger is that when new transients are attempted, new problems arise which then also require patching.
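
    One of the generic remedies implied above, detecting a per-step mass-error violation and repeating the step with a smaller time step rather than letting the error accumulate undetected, can be sketched as follows. The error model, tolerance, and step-growth factor are all invented for illustration.

    ```python
    def advance(state, dt):
        """Stand-in for one hydrodynamic step; returns (new_state, mass_error)."""
        new_mass = state["mass"] + state["inflow"] * dt
        # Pretend the discretization loses mass at large dt (synthetic model).
        err = 1.0e-3 * dt * dt * abs(state["inflow"])
        return {"mass": new_mass - err, "inflow": state["inflow"]}, err

    state = {"mass": 1000.0, "inflow": 5.0}   # kg, kg/s (assumed)
    t, t_end, dt = 0.0, 10.0, 1.0
    TOL = 1.0e-4                              # per-step mass-error tolerance (assumed)

    while t < t_end:
        trial, err = advance(state, dt)
        if err > TOL and dt > 1.0e-6:
            dt *= 0.5                         # reject the step, retry smaller
            continue
        state, t = trial, t + dt
        dt = min(dt * 1.2, 1.0)               # cautiously grow the step back
    print(f"final mass {state['mass']:.4f} kg at t = {t:.2f} s")
    ```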

  16. A novel form of the RelA nuclear factor kappaB subunit is induced by and forms a complex with the proto-oncogene c-Myc.

    PubMed Central

    Chapman, Neil R; Webster, Gill A; Gillespie, Peter J; Wilson, Brian J; Crouch, Dorothy H; Perkins, Neil D

    2002-01-01

    Members of both Myc and nuclear factor kappaB (NF-kappaB) families of transcription factors are found overexpressed or inappropriately activated in many forms of human cancer. Furthermore, NF-kappaB can induce c-Myc gene expression, suggesting that the activities of these factors are functionally linked. We have discovered that both c-Myc and v-Myc can induce a previously undescribed, truncated form of the RelA(p65) NF-kappaB subunit, RelA(p37). RelA(p37) encodes the N-terminal DNA binding and dimerization domain of RelA(p65) and would be expected to function as a trans-dominant negative inhibitor of NF-kappaB. Surprisingly, we found that RelA(p37) no longer binds to kappaB elements. This result is explained, however, by the observation that RelA(p37), but not RelA(p65), forms a high-molecular-mass complex with c-Myc. These results demonstrate a previously unknown functional and physical interaction between RelA and c-Myc with many significant implications for our understanding of the role that both proteins play in the molecular events underlying tumourigenesis. PMID:12027803

  17. Development of process control capability through the Browns Ferry Integrated Computer System using Reactor Water Clanup System as an example. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, J.; Mowrey, J.

    1995-12-01

    This report describes the design, development, and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of the soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 thermal/hydraulic simulation model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report. The testing and acceptance program and results are also detailed in this report. A discussion of the potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with installation of process control systems in nuclear power plants.

  18. SMR Re-Scaling and Modeling for Load Following Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoover, K.; Wu, Q.; Bragg-Sitton, S.

    2016-11-01

    This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal-hydraulic test facility. As part of a study being undertaken by Idaho National Laboratory involving nuclear reactor load-following characteristics, full-power operations need to be simulated, and therefore properly scaled. Presented here are the scaling analysis and plans for RELAP5-3D simulation.
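
    The flavor of the re-scaling arithmetic can be illustrated with round numbers. The prototype power, volume ratio, and height ratio below are assumptions for illustration, not MASLWR design data.

    ```python
    import math

    PROTO_POWER = 160.0e6        # prototype core power, W (assumed)
    VOLUME_RATIO = 1.0 / 250.0   # model-to-prototype volume ratio (assumed)
    LENGTH_RATIO = 1.0 / 3.0     # model-to-prototype height ratio (assumed)

    # Preserving the power-to-volume ratio fixes the model heater power.
    model_power = PROTO_POWER * VOLUME_RATIO
    # In reduced-height natural-circulation scaling, time is often taken to
    # scale as the square root of the length ratio, so transients run faster.
    time_ratio = math.sqrt(LENGTH_RATIO)

    print(f"model heater power : {model_power / 1e6:.2f} MW")
    print(f"model time scale   : {time_ratio:.3f} x prototype")
    ```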

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. George L. Mesina; Steven P. Miller

    The XMGR5 graphing package [1] for drawing RELAP5 [2] plots is being re-written in Java [3]. Java is a robust programming language that is available at no cost for most computer platforms from Sun Microsystems, Inc. XMGR5 extends an XY plotting tool called ACE/gr to plot data from several US Nuclear Regulatory Commission (NRC) applications. It is also the most popular graphing package worldwide for making RELAP5 plots. In Section 1, a short review of XMGR5 is given, followed by a brief overview of Java. In Section 2, shortcomings of both tkXMGR [4] and XMGR5 are discussed and the value of converting to Java is given. Details of the conversion to Java are given in Section 3. The progress to date, some conclusions, and future work are given in Section 4. Some screen shots of the Java version are shown.

  20. Modeling of a Flooding Induced Station Blackout for a Pressurized Water Reactor Using the RISMC Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Prescott, Steven R; Smith, Curtis L

    2011-07-01

    In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling both probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where the sequencing/timing of events has been changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how the power uprate affects the system recovery measures needed to avoid core damage after the PWR has lost all available AC power due to tsunami-induced flooding. The simulation of the actual flooding is performed using a smoothed particle hydrodynamics code: NEUTRINO.
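
    The stochastic-timing idea can be miniaturized as follows: sample the timing of a recovery action from a distribution, compare it against the time a (here purely analytic) heat-up model takes to reach damage, and count failures across many histories. The distribution, the toy heat-up law, and the uprate factors are assumptions for illustration only.

    ```python
    import random

    def time_to_damage(power_fraction):
        """Toy heat-up model: a hotter core (uprate) reaches the limit sooner."""
        return 5.0 / power_fraction                 # hours to damage (assumed)

    def one_history(power_fraction):
        t_recover = random.lognormvariate(1.0, 0.5) # AC recovery time, h (assumed)
        return t_recover > time_to_damage(power_fraction)  # True = core damage

    n = 200_000
    for uprate in (1.0, 1.2):
        failures = sum(one_history(uprate) for _ in range(n))
        print(f"power x{uprate}: P(core damage) ~= {failures / n:.4f}")
    ```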

  1. Demonstration of fully coupled simplified extended station black-out accident simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    2014-10-01

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The RELAP-7 code development effort started in October of 2011, and by the end of the second development year a number of physical components with simplified two-phase flow capability had been developed to support the simplified boiling water reactor (BWR) extended station blackout (SBO) analyses. The demonstration case includes the major components for the primary system of a BWR, as well as the safety system components for the safety relief valve (SRV), the reactor core isolation cooling (RCIC) system, and the wet well. Three scenarios for the SBO simulations have been considered. Since RELAP-7 is not a severe accident analysis code, the simulation stops when the fuel clad temperature reaches the damage point. Scenario I represents an extreme station blackout accident without any external cooling and cooling water injection. The system pressure is controlled by automatically releasing steam through SRVs. Scenario II includes the RCIC system but without SRV. The RCIC system is fully coupled with the reactor primary system and all the major components are dynamically simulated. The third scenario includes both the RCIC system and the SRV to provide a more realistic simulation. This paper describes the major models and discusses the results for the three scenarios. The RELAP-7 simulations for the three simplified SBO scenarios show the importance of dynamically simulating the SRVs, the RCIC system, and the wet well system to the reactor safety during extended SBO accidents.

  2. Simulation of German PKL refill/reflood experiment K9A using RELAP4/MOD7. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, M.T.; Davis, C.B.; Behling, S.R.

    This paper describes a RELAP4/MOD7 simulation of West Germany's Kraftwerk Union (KWU) Primary Coolant Loop (PKL) refill/reflood experiment K9A. RELAP4/MOD7, a best-estimate computer program for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This study was the first major simulation using RELAP4/MOD7 since its release by the Idaho National Engineering Laboratory (INEL). The PKL facility is a reduced-scale (1:134) representation of a typical West German four-loop 1300 MW pressurized water reactor (PWR). A prototypical total volume to power ratio was maintained. The test facility was designed specifically for an experiment simulating the refill/reflood phase of a Loss-of-Coolant Accident (LOCA).

  3. Activation of aryl hydrocarbon receptor regulates the LPS/IFNγ-induced inflammatory response by inducing ubiquitin-proteosomal and lysosomal degradation of RelA/p65.

    PubMed

    Domínguez-Acosta, O; Vega, L; Estrada-Muñiz, E; Rodríguez, M S; Gonzalez, F J; Elizondo, G

    2018-06-21

    Several studies have identified the aryl hydrocarbon receptor (AhR) as a negative regulator of the innate and adaptive immune responses. However, the molecular mechanisms by which this transcription factor exerts such modulatory effects are not well understood. Interaction between AhR and RelA/p65 has previously been reported. RelA/p65 is the major NFκB subunit that plays a critical role in immune responses to infection. The aim of the present study was to determine whether the activation of AhR disrupted RelA/p65 signaling in mouse peritoneal macrophages by decreasing its half-life. The data demonstrate that the activation of AhR by TCDD and β-naphthoflavone (β-NF) decreased protein levels of the pro-inflammatory cytokines TNF-α, IL-6, and IL-12 after macrophage activation with LPS/IFNγ. In an AhR-dependent manner, TCDD treatment induces RelA/p65 ubiquitination and proteasomal degradation, an effect dependent on AhR transcriptional activity. Activation of AhR also induced lysosome-like membrane structure formation in mouse peritoneal macrophages and RelA/p65 lysosome-dependent degradation. In conclusion, these results demonstrate that AhR activation promotes RelA/p65 protein degradation through the ubiquitin proteasome system, as well as through the lysosomes, resulting in decreased pro-inflammatory cytokine levels in mouse peritoneal macrophages. Copyright © 2018. Published by Elsevier Inc.

  4. Posttest RELAP4 analysis of LOFT experiment L1-3A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, J.R.; Holmstrom, H.L.O.

    This report presents selected results of posttest RELAP4 modeling of LOFT loss-of-coolant experiment L1-3A, a double-ended isothermal cold leg break with lower plenum emergency core coolant injection. Comparisons are presented between the pretest prediction, the posttest analysis, and the experimental data. It is concluded that pressurizer modeling is important for accurately predicting system behavior during the initial portion of saturated blowdown. Using measured initial conditions rather than nominal specified initial conditions did not influence the system model results significantly. Using finer nodalization in the reactor vessel improved the prediction of the system pressure history by minimizing steam condensation effects. Unequal steam condensation between the downcomer and core volumes appears to cause the manometer oscillations observed in both the pretest and posttest RELAP4 analyses.

  5. MIST final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gloudemans, J.R.

    1991-08-01

    The multiloop integral system test (MIST) was part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST was sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral system facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the once-through integral system (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in eleven volumes; Volumes 2 through 8 pertain to groups of Phase 3 tests by type, and Volume 9 presents inter-group comparisons. Volume 10 provides comparisons between the RELAP5/MOD2 calculations and MIST observations, and Volume 11 (with addendum) presents the later, Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include: Test Advisory Group (TAG) issues; facility scaling and design; test matrix; observations; comparisons of RELAP5 calculations to MIST observations; and MIST versus the TAG issues. 11 refs., 29 figs., 9 tabs.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Llopis, C.; Mendizabal, R.; Perez, J.

    An assessment of RELAP5/MOD2 cycle 36.04 against a load rejection from 100% to 50% power in the Vandellos II NPP (Spain) is presented. The work forms part of the Spanish contribution to the ICAP Project. The model used in the simulation consists of a single loop, a steam generator, and a steam line up to the steam header, all of them enlarged on a scale of 3:1, together with a full-scale reactor vessel and pressurizer. The results of the calculations are in reasonable agreement with plant measurements.

  7. Verification and Validation Strategy for LWRS Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl M. Stoots; Richard R. Schultz; Hans D. Gougar

    2012-09-01

    One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models / codes such as RELAP-7 can be used for reactor licensing / relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, the US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203, the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants / designs should be studied first, coupling modern supporting experiments to the stringent needs of new high-fidelity models / codes, and scaling of aging effects.

  8. Posttest analysis of international standard problem 10 using RELAP4/MOD7. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, M.; Davis, C.B.; Peterson, A.C. Jr.

    RELAP4/MOD7, a best-estimate computer code for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This paper evaluates the capability of RELAP4/MOD7 to calculate refill/reflood phenomena. This evaluation uses the data of International Standard Problem 10, which is based on West Germany's KWU PKL refill/reflood experiment K9A. The PKL test facility represents a typical West German four-loop, 1300 MW pressurized water reactor (PWR) at reduced scale while maintaining a prototypical volume-to-power ratio. The PKL facility was designed specifically to simulate the refill/reflood phase of a hypothetical loss-of-coolant accident (LOCA).

  9. Analysis of the SL-1 Accident Using RELAP5-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francisco, A. D.; Tomlinson, E. T.

    2007-11-08

    On January 3, 1961, at the National Reactor Testing Station in Idaho Falls, Idaho, the Stationary Low Power Reactor No. 1 (SL-1) experienced a major nuclear excursion, killing three people and destroying the reactor core. The SL-1 reactor, a 3 MW(t) boiling water reactor, was shut down and undergoing routine maintenance work at the time. This paper presents an analysis of the SL-1 reactor excursion using the RELAP5-3D thermal-hydraulic and nuclear analysis code, with the intent of simulating the accident from the point of reactivity insertion to destruction and vaporization of the fuel. Results are presented, along with a discussion of sensitivity to some reactor and transient parameters (many of the details are only known with a high level of uncertainty).

  10. VICTORIA: A mechanistic model for radionuclide behavior in the reactor coolant system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaperow, J.H.; Bixler, N.E.

    1996-12-31

    VICTORIA is the U.S. Nuclear Regulatory Commission's (NRC's) mechanistic, best-estimate code for analysis of fission product release from the core and subsequent transport in the reactor vessel and reactor coolant system. VICTORIA requires thermal-hydraulic data (i.e., temperatures, pressures, and velocities) as input. In the past, these data have been taken from the results of calculations from thermal-hydraulic codes such as SCDAP/RELAP5, MELCOR, and MAAP. Validation and assessment of VICTORIA 1.0 have been completed. An independent peer review of VICTORIA, directed by Brookhaven National Laboratory and supported by experts in the areas of fuel release, fission product chemistry, and aerosol physics, has been undertaken. This peer review, which will independently assess the code's capabilities, is nearing completion, with the peer review committee's final report expected in December 1996. A limited amount of additional development is expected as a result of the peer review. Following this additional development, the NRC plans to release VICTORIA 1.1 and an updated and improved code manual. Future plans mainly involve use of the code for plant calculations to investigate specific safety issues as they arise. Also, the code will continue to be used in support of the Phebus experiments.

  11. Evidence for actin cytoskeleton-dependent and -independent pathways for RelA/p65 nuclear translocation in endothelial cells.

    PubMed

    Fazal, Fabeha; Minhajuddin, Mohd; Bijli, Kaiser M; McGrath, James L; Rahman, Arshad

    2007-02-09

    Activation of the transcription factor NF-kappaB involves its release from the inhibitory protein IkappaBalpha in the cytoplasm and subsequently, its translocation to the nucleus. Whereas the events responsible for its release have been elucidated, mechanisms regulating the nuclear transport of NF-kappaB remain elusive. We now provide evidence for actin cytoskeleton-dependent and -independent mechanisms of RelA/p65 nuclear transport using the proinflammatory mediators, thrombin and tumor necrosis factor alpha, respectively. We demonstrate that thrombin alters the actin cytoskeleton in endothelial cells and interfering with these alterations, whether by stabilizing or destabilizing the actin filaments, prevents thrombin-induced NF-kappaB activation and consequently, expression of its target gene, ICAM-1. The blockade of NF-kappaB activation occurs downstream of IkappaBalpha degradation and is associated with impaired RelA/p65 nuclear translocation. Importantly, thrombin induces association of RelA/p65 with actin and this interaction is sensitive to stabilization/destabilization of the actin filaments. In parallel studies, stabilizing or destabilizing the actin filaments fails to inhibit RelA/p65 nuclear accumulation and ICAM-1 expression by tumor necrosis factor alpha, consistent with its inability to induce actin filament formation comparable with thrombin. Thus, these studies reveal the existence of actin cytoskeleton-dependent and -independent pathways that may be engaged in a stimulus-specific manner to facilitate RelA/p65 nuclear import and thereby ICAM-1 expression in endothelial cells.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Alfonsi; C. Rabiti; D. Mandelli

    The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities: deriving and actuating the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; performing both Monte-Carlo sampling of random distributed events and Dynamic Event Tree based analysis; and facilitating the input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.

  13. RELAP5 posttest calculation of IAEA-SPE-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petelin, S.; Mavko, B.; Parzer, I.

    The International Atomic Energy Agency's Fourth Standard Problem Exercise (IAEA-SPE-4) was performed at the PMK-2 facility. The PMK-2 facility is designed to study processes following small- and medium-size breaks in the primary system and natural circulation in VVER-440 plants. The IAEA-SPE-4 experiment represents a cold-leg side small break, similar to the IAEA-SPE-2, with the exceptions that the high-pressure safety injection is unavailable and the secondary side bleed and feed is initiated. The break valve was located at the dead end of a vertical downcomer, which in fact simulates a break in the reactor vessel itself and would be unlikely to happen in a real nuclear power plant (NPP). Three different RELAP5 code versions were used for the transient simulation in order to assess the calculations against test results.

  14. Analysis on the Role of RSG-GAS Pool Cooling System during Partial Loss of Heat Sink Accident

    NASA Astrophysics Data System (ADS)

    Susyadi; Endiah, P. H.; Sukmanto, D.; Andi, S. E.; Syaiful, B.; Hendro, T.; Geni, R. S.

    2018-02-01

    RSG-GAS is a 30 MW reactor that is mostly used for radioisotope production and experimental activities. Recently, it has been regularly operated at half of its capacity for efficiency reasons. During an accident, especially a loss of heat sink, the role of its pool cooling system in dumping decay heat is very important. An analysis using a single-failure approach and partial RELAP5 modeling performed by S. Dibyo (2010) shows that there is no significant increase in the coolant temperature if this system functions properly. However, lessons learned from the Fukushima accident revealed that an accident can happen due to multiple failures. Considering the ageing of the reactor, this research investigates the role of the pool cooling system during a partial loss of heat sink accident in which, at the same time, the protection system fails to scram the reactor while it is operated at 15 MW. The purpose is to clarify the transient characteristics and the final state of the coolant temperature. The method used is simulation of the system with the RELAP5 code. Calculation results show that the pool cooling system reduces the coolant temperature by about 1 K compared with not activating it. The results also reveal that when the reactor is operated at half of its rated power, it remains in a safe condition during a partial loss of heat sink accident without scram.

  15. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, which control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally considered in almost all existing station blackout (SBO) accident analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety system components such as the safety relief valve (SRV), the RCIC system, the wet well, and the dry well. The results show reasonable system behaviors while exhibiting rich dynamics such as variable flow rates through the RCIC turbine and pump during the SBO transient. The model has the potential to resolve the Fukushima RCIC mystery after adding the off-design two-phase turbine operation model and other additional improvements.
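
    The "flow from operation curves" idea can be sketched compactly: at each step the delivered flow is found where an assumed quadratic pump curve balances the system resistance plus the static head equivalent of the vessel back-pressure, rather than being prescribed as a time-dependent boundary condition. All coefficients below are illustrative, not RCIC hardware data.

    ```python
    import math

    H0, A = 900.0, 4.0e-4     # pump shutoff head (m) and curve coefficient (assumed)
    K_SYS = 2.0e-3            # system resistance, m per (kg/s)^2 (assumed)

    def pump_flow(p_rpv_mpa):
        """Operating point where the pump curve balances the system:
        H0 - A*m^2 = K_SYS*m^2 + h_static."""
        h_static = p_rpv_mpa * 101.9      # ~101.9 m water column per MPa
        h_avail = H0 - h_static
        return math.sqrt(h_avail / (A + K_SYS)) if h_avail > 0.0 else 0.0

    # RPV pressure drifts down as SRVs cycle; the delivered flow follows.
    p_rpv, t, dt = 7.0, 0.0, 600.0        # MPa, s, s (assumed)
    while t <= 3600.0:
        print(f"t={t:5.0f} s  p={p_rpv:4.2f} MPa  flow={pump_flow(p_rpv):6.1f} kg/s")
        p_rpv = max(0.8, p_rpv - 0.1 * dt / 600.0)   # toy depressurization
        t += dt
    ```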

  16. Targeting NF-κB RelA/p65 phosphorylation overcomes RITA resistance.

    PubMed

    Bu, Yiwen; Cai, Guoshuai; Shen, Yi; Huang, Chenfei; Zeng, Xi; Cao, Yu; Cai, Chuan; Wang, Yuhong; Huang, Dan; Liao, Duan-Fang; Cao, Deliang

    2016-12-28

    Inactivation of p53 occurs frequently in various cancers. RITA is a promising anticancer small molecule that dissociates p53-MDM2 interaction, reactivates p53 and induces exclusive apoptosis in cancer cells, but acquired RITA resistance remains a major drawback. This study found that the site-differential phosphorylation of nuclear factor-κB (NF-κB) RelA/p65 creates a barcode for RITA chemosensitivity in cancer cells. In naïve MCF7 and HCT116 cells where RITA triggered vast apoptosis, phosphorylation of RelA/p65 increased at Ser536, but decreased at Ser276 and Ser468; oppositely, in RITA-resistant cells, RelA/p65 phosphorylation decreased at Ser536, but increased at Ser276 and Ser468. A phosphomimetic mutation at Ser536 (p65/S536D) or silencing of endogenous RelA/p65 resensitized the RITA-resistant cells to RITA while the phosphomimetic mutant at Ser276 (p65/S276D) led to RITA resistance of naïve cells. In mouse xenografts, intratumoral delivery of the phosphomimetic p65/S536D mutant increased the antitumor activity of RITA. Furthermore, in the RITA-resistant cells ATP-binding cassette transporter ABCC6 was upregulated, and silencing of ABCC6 expression in these cells restored RITA sensitivity. In the naïve cells, ABCC6 delivery led to RITA resistance and blockage of p65/S536D mutant-induced RITA sensitivity. Taken together, these data suggest that the site-differential phosphorylation of RelA/p65 modulates RITA sensitivity in cancer cells, which may provide an avenue to manipulate RITA resistance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Models for the Configuration and Integrity of Partially Oxidized Fuel Rod Cladding at High Temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siefken, L.J.

    1999-01-01

    Models were designed to resolve deficiencies in the SCDAP/RELAP5/MOD3.2 calculations of the configuration and integrity of hot, partially oxidized cladding. These models are expected to improve the calculations of several important aspects of fuel rod behavior. First, an improved mapping was established from a compilation of PIE results from severe fuel damage tests of the configuration of melted metallic cladding that is retained by an oxide layer. The improved mapping accounts for the relocation of melted cladding in the circumferential direction. Then, rules based on PIE results were established for calculating the effect of cladding that has relocated from above on the oxidation and integrity of the lower intact cladding upon which it solidifies. Next, three different methods were identified for calculating the extent of dissolution of the oxidic part of the cladding due to its contact with the metallic part. The extent of dissolution affects the stress and thus the integrity of the oxidic part of the cladding. Then, an empirical equation was presented for calculating the stress in the oxidic part of the cladding and evaluating its integrity based on this calculated stress. This empirical equation replaces the current criterion for loss of integrity, which is based on temperature and extent of oxidation. Finally, a new rule based on theoretical and experimental results was established for identifying the regions of a fuel rod with oxidation of both the inside and outside surfaces of the cladding. The implementation of these models is expected to eliminate the tendency of the SCDAP/RELAP5 code to overpredict the extent of oxidation of the upper part of fuel rods and to underpredict the extent of oxidation of the lower part of fuel rods and the part with a high concentration of relocated material. This report is a revision and reissue of the report entitled Improvements in Modeling of Cladding Oxidation and Meltdown.

  18. Multiloop integral system test (MIST): Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gloudemans, J.R.

    1991-04-01

    The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in 11 volumes. Volumes 2 through 8 pertain to groups of Phase 3 tests by type; Volume 9 presents inter-group comparisons; Volume 10 provides comparisons between the RELAP5/MOD2 calculations and MIST observations; and Volume 11 (with addendum) presents the later Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include Test Advisory Group (TAG) issues, facility scaling and design, the test matrix, observations, comparison of RELAP5 calculations to MIST observations, and MIST versus the TAG issues. MIST generated consistent integral-system data covering a wide range of transient interactions. MIST provided insight into integral system behavior and assisted the code effort. The MIST observations addressed each of the TAG issues. 11 refs., 29 figs., 9 tabs.

  19. RAVEN: a GUI and an Artificial Intelligence Engine in a Dynamic PRA Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. Rabiti; D. Mandelli; A. Alfonsi

    Increases in computational power and pressure for more accurate simulations and estimations of accident scenario consequences are driving the need for Dynamic Probabilistic Risk Assessment (PRA) [1] of very complex models. While more sophisticated algorithms and computational power address the back end of this challenge, the front end is still handled by engineers who need to extract meaningful information from large amounts of data and build these complex models. Compounding this problem are the difficulty of knowledge transfer and retention and the increasing speed of software development. The above-described issues would have negatively impacted deployment of the new high-fidelity plant simulator RELAP-7 (Reactor Excursion and Leak Analysis Program) at Idaho National Laboratory. Therefore RAVEN, which was initially focused on being the plant controller for RELAP-7, will help mitigate future RELAP-7 software engineering risks. To accomplish this task, the Reactor Analysis and Virtual Control Environment (RAVEN) has been designed to provide an easy-to-use Graphical User Interface (GUI) for building plant models and to leverage artificial intelligence algorithms in order to reduce computational time, improve results, and help the user identify the behavioral patterns of Nuclear Power Plants (NPPs). In this paper we present the GUI implementation and its current capability status. We also introduce the support vector machine algorithms and our evaluation of their potential for increasing the accuracy and reducing the computational cost of PRA analysis. In this evaluation we refer to preliminary studies performed under the Risk Informed Safety Margins Characterization (RISMC) project of the Light Water Reactor Sustainability (LWRS) campaign [3]. RISMC simulation needs and algorithm testing are currently used as guidance to prioritize RAVEN developments relevant to PRA.
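
    As a rough illustration of the surrogate idea mentioned above, the sketch below trains a support vector machine to classify sampled scenarios as success or core damage, so that further points in the uncertainty space can be screened without new full simulations. This is a minimal Python sketch using scikit-learn; the two input parameters, the labeling rule standing in for simulator outcomes, and all numeric values are hypothetical, not taken from the paper.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Hypothetical training set: each row is (scram delay [s], decay heat fraction),
      # drawn from the sampled uncertainty space of a postulated transient.
      X = rng.uniform([0.0, 0.005], [100.0, 0.02], size=(200, 2))
      # Stand-in for simulator outcomes: 1 = core damage, 0 = success.
      y = (X[:, 0] * X[:, 1] > 0.7).astype(int)

      clf = SVC(kernel="rbf", gamma="scale")   # RBF support vector classifier
      clf.fit(X, y)

      # The cheap surrogate now screens new candidate scenarios near the limit surface.
      candidates = rng.uniform([0.0, 0.005], [100.0, 0.02], size=(5, 2))
      print(clf.predict(candidates))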

  20. Design of an Experimental Facility for Passive Heat Removal in Advanced Nuclear Reactors

    NASA Astrophysics Data System (ADS)

    Bersano, Andrea

    With reference to innovative heat exchangers to be used in the passive safety systems of Generation IV nuclear reactors and Small Modular Reactors, it is necessary to study natural circulation and the efficiency of heat removal systems. Especially in safety systems, such as the decay heat removal systems of many reactors, the use of passive components is increasing in order to improve availability and reliability during possible accident scenarios, reducing the need for human intervention. Many of these systems are based on natural circulation, so they require intensive analysis due to the possible instability of the related phenomena. The aim of this thesis work is to build a scaled facility which can reproduce, in a simplified way, the decay heat removal system (DHR2) of the lead-cooled fast reactor ALFRED and, in particular, the bayonet heat exchanger, which transfers heat from lead to water. Given the thermal power to be removed, the natural circulation flow rate and the pressure drops will be studied both experimentally and numerically using the code RELAP5-3D. The first phase of preliminary analysis and design includes the calculations to size the heat source and heat sink, the choice of materials and components, and CAD drawings of the facility. After that, a numerical study is performed using the thermal-hydraulic code RELAP5-3D in order to simulate the behavior of the system. The purpose is to run pretest simulations of the facility to optimize the dimensioning, to set the operating parameters (temperature, pressure, etc.), and to choose the most adequate measurement devices. The model of the system is continually developed to better simulate the system studied. Particular attention is dedicated to the control logic of the system to obtain acceptable results. The initial experimental test phase consists of cold zero-power tests of the facility in order to characterize and calibrate the pressure drops. In future work the experimental results will be compared to the values predicted by the system code and differences will be discussed, with the ultimate goal of qualifying RELAP5-3D for the analysis of decay heat removal systems in natural circulation. The numerical data will also be used to understand the key parameters related to heat transfer in natural circulation and to optimize the operation of the system.
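
    The kind of back-of-envelope estimate such pretest analyses typically start from balances the buoyancy head of the loop against its lumped friction loss. The sketch below illustrates this single-phase estimate in Python; all dimensions, loss coefficients, and fluid properties are hypothetical, not PROPHET or ALFRED data.

      import math

      def natural_circulation_flow(dT, height_m, loss_coeff, area_m2,
                                   rho=750.0, beta=1.0e-3, g=9.81):
          # Buoyancy head rho*g*beta*dT*H balances the lumped loop friction
          # loss K * m_dot**2 / (2 * rho * A**2); solve for the mass flow rate.
          dp_buoyancy = rho * g * beta * dT * height_m
          return area_m2 * math.sqrt(2.0 * rho * dp_buoyancy / loss_coeff)

      m_dot = natural_circulation_flow(dT=40.0, height_m=3.0,
                                       loss_coeff=15.0, area_m2=2.0e-3)
      power_w = 50e3                              # assumed removed power [W]
      dT_implied = power_w / (m_dot * 4186.0)     # water specific heat [J/kg/K]
      # In practice dT and m_dot are iterated to consistency; a system code such
      # as RELAP5-3D resolves this balance locally along the whole loop.
      print(f"m_dot = {m_dot:.3f} kg/s, implied loop dT = {dT_implied:.1f} K")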

  1. Pretest prediction of Semiscale Test S-07-10B [PWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobbe, C A

    A best-estimate prediction of Semiscale Test S-07-10B was performed at INEL by EG&G Idaho as part of the RELAP4/MOD6 code assessment effort and as the Nuclear Regulatory Commission pretest calculation for the Small Break Experiment. The RELAP4/MOD6 Update 4 and RELAP4/MOD7 computer codes were used to analyze Semiscale Test S-07-10B, a 10% communicative cold leg break experiment. The Semiscale Mod-3 system utilized an electrically heated simulated core operating at a power level of 1.94 MW. The initial system pressure and temperature in the upper plenum were 2276 psia and 604°F, respectively.

  2. Thermal-hydraulic modeling needs for passive reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, J.M.

    1997-07-01

    The U.S. Nuclear Regulatory Commission has received an application for design certification from the Westinghouse Electric Corporation for an Advanced Light Water Reactor design known as the AP600. As part of the design certification process, the USNRC uses its thermal-hydraulic system analysis codes to independently audit the vendor calculations. The focus of this effort has been the small break LOCA transients that rely upon the passive safety features of the design to depressurize the primary system sufficiently so that gravity-driven injection can provide a stable source for long-term cooling. Of course, large break LOCAs have also been considered, but as the involved phenomena do not appear to be appreciably different from those of current plants, they were not discussed in this paper. Although the SBLOCA scenario does not appear to threaten core coolability - indeed, heatup is not even expected to occur - there have been concerns as to the performance of the passive safety systems. For example, the passive systems drive flows with small heads, consequently requiring more precision in the analysis of passive plants than is needed for current plants with active systems. For the analysis of SBLOCAs and operating transients, the USNRC uses the RELAP5 thermal-hydraulic system analysis code. To assure the applicability of RELAP5 to the analysis of these transients for the AP600 design, a four-year program of code development and assessment has been undertaken.

  3. Uncertainty quantification for accident management using ACE surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varuttamaseni, A.; Lee, J. C.; Youngblood, R. W.

    The alternating conditional expectation (ACE) regression method is used to generate RELAP5 surrogates which are then used to determine the distribution of the peak clad temperature (PCT) during the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed (F and B) operation in the Zion-1 nuclear power plant. The construction of the surrogates assumes conditional independence relations among key reactor parameters. The choice of parameters to model is based on the macroscopic balance statements governing the behavior of the reactor. The peak clad temperature is calculated based on the independent variables that are known to be important in determining the success of the F and B operation. The relationship between these independent variables and the plant parameters such as coolant pressure and temperature is represented by surrogates that are constructed based on 45 RELAP5 cases. The time-dependent PCT for different values of F and B parameters is calculated by sampling the independent variables from their probability distributions and propagating the information through two layers of surrogates. The results of our analysis show that the ACE surrogates are able to satisfactorily reproduce the behavior of the plant parameters even though a quasi-static assumption is primarily used in their construction. The PCT is found to be lower in cases where the F and B operation is initiated, compared to the case without F and B, regardless of the F and B parameters used. (authors)
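
    The two-layer propagation described above can be illustrated with a minimal Python sketch: independent variables are sampled from their distributions and pushed through a first layer of surrogates for plant parameters and a second layer for the PCT. The functional forms, coefficients, and distributions below are hypothetical stand-ins for the ACE-fitted surrogates, not results from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 10_000

      # Layer 1: hypothetical surrogates mapping the independent variables to
      # plant parameters (primary pressure [MPa], coolant temperature [K]).
      def pressure_surrogate(delay_s):
          return 15.0 - 0.02 * delay_s

      def coolant_temp_surrogate(delay_s, bleed_kg_s):
          return 560.0 + 0.4 * delay_s - 2.0 * bleed_kg_s

      # Layer 2: hypothetical surrogate mapping plant parameters to the PCT [K].
      def pct_surrogate(p_mpa, t_k):
          return t_k + 30.0 * (16.0 - p_mpa)

      delay = rng.normal(300.0, 60.0, n)    # F&B initiation delay [s]
      bleed = rng.uniform(5.0, 15.0, n)     # bleed flow [kg/s]
      pct = pct_surrogate(pressure_surrogate(delay),
                          coolant_temp_surrogate(delay, bleed))
      print(f"mean PCT {pct.mean():.0f} K, 95th percentile {np.percentile(pct, 95):.0f} K")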

  4. PHISICS/RELAP5-3D Results for Exercises II-1 and II-2 of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard

    2016-03-01

    The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. This paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedback on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of the natural convection conditions that dominate the DCC and PCC events. Differences up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring model’s, and it therefore seems that the additional development and calculation time required for the block model could be worth the gain in spatial resolution.

  5. Status of thermalhydraulic modelling and assessment: Open issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.

    1997-07-01

    This paper presents the status of the physical modelling in present codes used for Nuclear Reactor Thermalhydraulics (TRAC, RELAP 5, CATHARE, ATHLET, ...) and attempts to list the unresolved or partially resolved issues. First, the capabilities and limitations of present codes are presented. They are mainly known from a synthesis of the assessment calculations performed for both separate effect tests and integral effect tests. It is also interesting to list all the assumptions and simplifications which were made in the establishment of the system of equations and of the constitutive relations. Many of the present limitations are associated with physical situations where these assumptions are not valid. Then, recommendations are proposed to extend the capabilities of these codes.

  6. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  7. Current and anticipated uses of thermal-hydraulic codes in Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-07-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.

  8. Intestinal alkaline phosphatase and sodium butyrate may be beneficial in attenuating LPS-induced intestinal inflammation.

    PubMed

    Melo, A D B; Silveira, H; Bortoluzzi, C; Lara, L J; Garbossa, C A P; Preis, G; Costa, L B; Rostagno, M H

    2016-10-17

    In this study, we evaluated the effect of intestinal alkaline phosphatase (IAP) and sodium butyrate (NaBu) on lipopolysaccharide (LPS)-induced intestinal inflammation. Intestinal alkaline phosphatase and RelA/p65 (NF-κB) gene expression in porcine jejunum explants was evaluated following exposure to NaBu and essential oil from Brazilian red pepper (EO), alone or in combination with NaBu, as well as exogenous IAP, with or without LPS challenge. Five piglets weighing approximately 20 kg each were sacrificed, and their jejuna were extracted. The tissues were segmented into 10 parts, which were exposed to 10 treatments. Gene expression of IAP and RelA/p65 (NF-κB) in jejunal explants was evaluated via RT-PCR. We found that EO, NaBu, and exogenous IAP were able to up-regulate endogenous IAP and enhance RelA/p65 (NF-κB) gene expression. However, only NaBu and exogenous IAP down-regulated the LPS-induced inflammatory response via RelA/p65 (NF-κB). In conclusion, we demonstrated that exogenous IAP and NaBu may be beneficial in attenuating LPS-induced intestinal inflammation.

  9. Assessment of RELAP5/MOD2 against a pressurizer spray valve inadvertent full-opening transient and recovery by natural circulation in the Jose Cabrera Nuclear Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arroyo, R.; Rebollo, L.

    1993-06-01

    This document presents the comparison between the simulation results and the plant measurements of a real event that took place in the JOSE CABRERA nuclear power plant on August 30, 1984. The event was originated by the total, continuous and inadvertent opening of the pressurizer spray valve PCV-400A. The JOSE CABRERA power plant is a single-loop Westinghouse PWR belonging to UNION ELECTRICA FENOSA, S.A. (UNION FENOSA), a Spanish utility which participates in the International Code Assessment and Applications Program (ICAP) as a member of UNIDAD ELECTRICA, S.A. (UNESA). This is the second of its two contributions to the Program: the first one was an application case and this is an assessment one. The simulation has been performed using the RELAP5/MOD2 cycle 36.04 code, running on a CDC CYBER 180/830 computer under the NOS 2.5 operating system. The main phenomena have been calculated correctly, and some conclusions have been drawn about the 3D characteristics of the condensation due to the spray and its simulation with a 1D tool.

  10. BWR station blackout: A RISMC analysis using RAVEN and RELAP5-3D

    DOE PAGES

    Mandelli, D.; Smith, C.; Riley, T.; ...

    2016-01-01

    The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated via power uprates and improved operations. In order to evaluate the impact of these factors on the safety of the plant, the Risk-Informed Safety Margin Characterization (RISMC) project aims to provide insights to decision makers through a series of simulations of the plant dynamics for different initial conditions and accident scenarios. This paper presents a case study that shows the capabilities of the RISMC methodology to assess the impact of a power uprate on a Boiling Water Reactor system during a Station Black-Out accident scenario. We employ a system simulator code, RELAP5-3D, coupled with RAVEN, which performs the stochastic analysis. Our analysis is performed by: 1) sampling values for a set of parameters from the uncertainty space of interest, 2) simulating the system behavior for that specific set of parameter values, and 3) analyzing the outcomes from the set of simulation runs.
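
    The three-step workflow above can be summarized in a short schematic driver. In the study RAVEN dispatches actual RELAP5-3D station blackout runs; in this minimal Python sketch a placeholder function stands in for one simulation, and the parameter names, distributions, and response shape are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(2)

      def run_sbo_case(battery_life_h, diesel_recovery_h):
          # Placeholder for dispatching one RELAP5-3D station blackout run;
          # returns a stand-in peak clad temperature [K].
          return 800.0 + 90.0 * max(0.0, diesel_recovery_h - battery_life_h)

      # 1) sample values from the uncertainty space of interest
      samples = [(rng.uniform(4.0, 8.0), rng.exponential(6.0)) for _ in range(1000)]
      # 2) simulate the system behavior for each sampled parameter set
      pct = np.array([run_sbo_case(b, d) for b, d in samples])
      # 3) analyze the outcomes, e.g. the probability of exceeding the
      #    1477 K (2200 °F) clad temperature limit
      print("P(PCT > 1477 K) =", float(np.mean(pct > 1477.0)))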

  11. RELAP5 Analyses of OECD/NEA ROSA-2 Project Experiments on Intermediate-Break LOCAs at Hot Leg or Cold Leg

    NASA Astrophysics Data System (ADS)

    Takeda, Takeshi; Maruyama, Yu; Watanabe, Tadashi; Nakamura, Hideo

    Experiments simulating PWR intermediate-break loss-of-coolant accidents (IBLOCAs) with a 17% break at the hot leg or cold leg were conducted in the OECD/NEA ROSA-2 Project using the Large Scale Test Facility (LSTF). In the hot leg IBLOCA test, core uncovery started simultaneously with the liquid level drop in the crossover leg downflow-side before loop seal clearing (LSC), which was induced by steam condensation on accumulator coolant injected into the cold leg. Water remained on the upper core plate in the upper plenum due to counter-current flow limiting (CCFL) caused by significant upward steam flow from the core. In the cold leg IBLOCA test, core dryout took place due to a rapid liquid level drop in the core before LSC. Liquid accumulated in the upper plenum, steam generator (SG) U-tube upflow-side, and SG inlet plenum before the LSC due to CCFL by high-velocity vapor flow, causing an enhanced decrease in the core liquid level. RELAP5/MOD3.2.1.2 post-test analyses of the two LSTF experiments were performed employing the critical flow model in the code with a discharge coefficient of 1.0. In the hot leg IBLOCA case, the cladding surface temperature of the simulated fuel rods was underpredicted due to overprediction of the core liquid level after core uncovery. In the cold leg IBLOCA case, the cladding surface temperature was also underpredicted, due to later core uncovery than in the experiment. These results suggest that the code has remaining problems in properly predicting the primary coolant distribution.

  12. Importins α and β signaling mediates endothelial cell inflammation and barrier disruption.

    PubMed

    Leonard, Antony; Rahman, Arshad; Fazal, Fabeha

    2018-04-01

    Nucleocytoplasmic shuttling via importins is central to the function of eukaryotic cells and an integral part of the processes that lead to many human diseases. In this study, we addressed the role of α and β importins in the mechanism of endothelial cell (EC) inflammation and permeability, important pathogenic features of many inflammatory diseases such as acute lung injury and atherosclerosis. RNAi-mediated knockdown of importin α4 or α3 each inhibited NF-κB activation, proinflammatory gene (ICAM-1, VCAM-1, and IL-6) expression, and thereby endothelial adhesivity towards HL-60 cells, upon thrombin challenge. The inhibitory effect of α4 and α3 knockdown was associated with impaired nuclear import and consequently, DNA binding of the RelA/p65 subunit of NF-κB and occurred independently of IκBα degradation. Intriguingly, knockdown of importins α4 and α3 also inhibited thrombin-induced RelA/p65 phosphorylation at Ser536, showing a novel role of α importins in regulating transcriptional activity of RelA/p65. Similarly, knockdown of importin β1, but not β2, blocked thrombin-induced activation of RelA/p65 and its target genes. In parallel studies, TNFα-mediated inflammatory responses in EC were refractory to knockdown of importins α4, α3 or β1, indicating a stimulus-specific regulation of RelA/p65 and EC inflammation by these importins. Importantly, α4, α3, or β1 knockdown also protected against thrombin-induced EC barrier disruption by inhibiting the loss of VE-cadherin at adherens junctions and by regulating actin cytoskeletal rearrangement. These results identify α4, α3 and β1 as critical mediators of EC inflammation and permeability associated with intravascular coagulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Regulation of Endothelial Cell Inflammation and Lung PMN Infiltration by Transglutaminase 2

    PubMed Central

    Bijli, Kaiser M.; Kanter, Bryce G.; Minhajuddin, Mohammad; Leonard, Antony; Xu, Lei; Fazal, Fabeha; Rahman, Arshad

    2014-01-01

    We addressed the role of transglutaminase 2 (TG2), a calcium-dependent enzyme that catalyzes crosslinking of proteins, in the mechanism of endothelial cell (EC) inflammation and lung PMN infiltration. Exposure of EC to thrombin, a procoagulant and proinflammatory mediator, resulted in activation of the transcription factor NF-κB and its target genes, VCAM-1, MCP-1, and IL-6. RNAi knockdown of TG2 inhibited these responses. Analysis of the NF-κB activation pathway showed that TG2 knockdown was associated with inhibition of thrombin-induced DNA binding as well as serine phosphorylation of RelA/p65, a crucial event that controls transcriptional capacity of the DNA-bound RelA/p65. These results implicate an important role for TG2 in mediating EC inflammation by promoting DNA binding and transcriptional activity of RelA/p65. Because thrombin is released in high amounts during sepsis and its concentration is elevated in plasma and lavage fluids of patients with Acute Respiratory Distress Syndrome (ARDS), we determined the in vivo relevance of TG2 in a mouse model of sepsis-induced lung PMN recruitment. A marked reduction in NF-κB activation, adhesion molecule expression, and lung PMN sequestration was observed in TG2 knockout mice compared to wild type mice exposed to endotoxemia. Together, these results identify TG2 as an important mediator of EC inflammation and lung PMN sequestration associated with intravascular coagulation and sepsis. PMID:25057925

  14. RelAp43, a member of the NF-κB family involved in innate immune response against Lyssavirus infection.

    PubMed

    Luco, Sophie; Delmas, Olivier; Vidalain, Pierre-Olivier; Tangy, Frédéric; Weil, Robert; Bourhy, Hervé

    2012-01-01

    NF-κB transcription factors are crucial for many cellular processes. NF-κB is activated by viral infections to induce expression of antiviral cytokines. Here, we identified a novel member of the human NF-κB family, denoted RelAp43, the nucleotide sequence of which contains several exons as well as an intron of the RelA gene. RelAp43 is expressed in all cell lines and tissues tested and exhibits all the properties of a NF-κB protein. Although its sequence does not include a transactivation domain, identifying it as a class I member of the NF-κB family, it is able to potentiate RelA-mediated transactivation and stabilize dimers comprising p50. Furthermore, RelAp43 stimulates the expression of HIAP1, IRF1, and IFN-β - three genes involved in cell immunity against viral infection. It is also targeted by the matrix protein of lyssaviruses, the agents of rabies, resulting in an inhibition of the NF-κB pathway. Taken together, our data provide the description of a novel functional member of the NF-κB family, which plays a key role in the induction of anti-viral innate immune response.

  16. Final Report on ITER Task Agreement 81-08

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard L. Moore

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  17. Bayesian network representing system dynamics in risk analysis of nuclear systems

    NASA Astrophysics Data System (ADS)

    Varuttamaseni, Athi

    2011-12-01

    A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to obtain the clad temperature. With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we have calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using standard techniques.
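
    The final two steps of the chain described above, a surrogate for the clad temperature and a simple linear map from temperature to core damage probability, can be sketched as follows. This is a minimal Python illustration; the surrogate form, the temperature thresholds, and the sampled distribution are hypothetical, not the fitted relations of the thesis.

      import numpy as np

      rng = np.random.default_rng(3)

      def clad_temp_surrogate(t_s, scram_delay_s):
          # Hypothetical stand-in for an ACE-fitted surrogate of clad temperature [K].
          return 600.0 + 0.5 * t_s - 0.002 * t_s**2 + 2.0 * scram_delay_s

      def core_damage_prob(temp_k, t_low=1000.0, t_high=1500.0):
          # Simple linear clad-temperature-to-damage mapping, clamped to [0, 1].
          return np.clip((temp_k - t_low) / (t_high - t_low), 0.0, 1.0)

      delays = rng.normal(30.0, 10.0, 5000)     # sampled independent variable [s]
      for t in np.linspace(0.0, 400.0, 5):
          cdp = core_damage_prob(clad_temp_surrogate(t, delays)).mean()
          print(f"t = {t:5.0f} s   mean core damage probability = {cdp:.3f}")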

  18. A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit

    DOE PAGES

    Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...

    2015-05-17

    In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes, responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smooth particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of the timing and sequencing of events on system safety. The impact of the power uprate is determined in terms of both core damage probability and safety margins.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Colby B.; Folsom, Charles P.; Davis, Cliff B.

    Experimental testing in the Multi-Static Environment Rodlet Transient Test Apparatus (Multi-SERTTA) will lead the rebirth of transient fuel testing in the United States as part of the Accident Tolerant Fuels (ATF) program. The Multi-SERTTA comprises four isolated pressurized environments capable of a wide variety of working fluids and thermal conditions. Ultimately, the TREAT reactor as well as the Multi-SERTTA test vehicle serve the purpose of providing the desired thermal-hydraulic boundary conditions to the test specimen. The initial ATF testing in TREAT will focus on reactivity insertion accident (RIA) events using both gas and water environments, including typical PWR operating pressures and temperatures. For the water test environment, a test configuration is envisioned using the expansion tank as part of the gas-filled expansion volume seen by the test to provide additional pressure relief. The heat transfer conditions during the high-energy power pulses of RIA events remain a subject of large uncertainty and great importance for fuel performance predictions. To support transient experiments, the Multi-SERTTA vehicle has been modeled using RELAP5 with a baseline test specimen composed of UO2 fuel in zircaloy cladding. The modeling results show the influence of the designs of the specimen, vehicle, and transient power pulses. The primary purpose of this work is to provide input and boundary conditions to the fuel performance code BISON. Therefore, studies of parameters having influence on specimen performance during RIA transients are presented, including cladding oxidation, power pulse magnitude and width, cladding-to-coolant heat fluxes, fuel-to-cladding gap, transient boiling effects (modified CHF values), etc. The results show the great flexibility and capacity of the TREAT Multi-SERTTA test vehicle to provide testing under a wide range of prototypic thermal-hydraulic conditions not possible before.
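
    A pulse magnitude and width study of the kind mentioned above reduces, at its simplest, to integrating an idealized power pulse to the deposited energy. The Python sketch below uses a Gaussian pulse shape and made-up peak powers and widths purely for illustration; it is not the TREAT pulse model.

      import numpy as np

      def gaussian_pulse(t_s, peak_mw, fwhm_s, t0_s=0.5):
          # Idealized transient power pulse; peak in MW, width as FWHM in seconds.
          sigma = fwhm_s / (2.0 * np.sqrt(2.0 * np.log(2.0)))
          return peak_mw * np.exp(-0.5 * ((t_s - t0_s) / sigma) ** 2)

      t = np.linspace(0.0, 1.0, 2001)
      for peak, fwhm in [(1000.0, 0.05), (1000.0, 0.10), (2000.0, 0.05)]:
          energy_mj = np.trapz(gaussian_pulse(t, peak, fwhm), t)   # MW*s = MJ
          print(f"peak {peak:6.0f} MW, FWHM {fwhm:.2f} s -> {energy_mj:6.1f} MJ deposited")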

  20. An approach to model reactor core nodalization for deterministic safety analysis

    NASA Astrophysics Data System (ADS)

    Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd

    2016-01-01

    Adopting a good nodalization strategy is essential to produce an accurate and high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance against regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies for research reactors from small (e.g., 250 kW) up to larger units (e.g., 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulics channel to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel cladding, and graphite reflector), have been collected, analyzed, and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulics channel for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.
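
    One of the geometrical inputs named above, the hydraulic diameter of a core channel, follows directly from the flow area and wetted perimeter. The small Python helper below shows the standard definition; the lattice pitch and rod diameter are hypothetical TRIGA-like numbers, not RTP reference data.

      import math

      def hydraulic_diameter(flow_area_m2, wetted_perimeter_m):
          # Standard definition used by SYS-TH codes: D_h = 4 * A / P_wetted.
          return 4.0 * flow_area_m2 / wetted_perimeter_m

      # Hypothetical TRIGA-like square-lattice unit cell (metres).
      pitch, rod_od = 0.043, 0.0375
      area = pitch**2 - math.pi * rod_od**2 / 4.0   # coolant flow area per cell
      perimeter = math.pi * rod_od                  # wetted by the fuel rod only
      print(f"D_h = {hydraulic_diameter(area, perimeter) * 1000.0:.1f} mm")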

  2. RELAP5/MOD2 analysis of a postulated "cold leg SBLOCA" simultaneous to a "total black-out" event in the Jose Cabrera Nuclear Station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rebollo, L.

    1992-04-01

    Several beyond-design-basis cold leg small-break LOCA postulated scenarios, based on the "lessons learned" in the OECD-LOFT LP-SB-3 experiment, have been analyzed for the Westinghouse single-loop Jose Cabrera Nuclear Power Plant belonging to the Spanish utility UNION ELECTRICA FENOSA, S.A. The analysis has been done by the utility in the Thermal-Hydraulic & Accident Analysis Section of the Engineering Department of the Nuclear Division. The RELAP5/MOD2/36.04 code has been used on a CYBER 180/830 computer, and the simulation includes the 6 in. RHRS charging line, the 2 in. pressurizer spray, and the 1.5 in. CVCS make-up line piping breaks. The assumption of a "total black-out condition" coincident with the occurrence of the event has been made in order to consider a plant degraded condition with total active failure of the ECCS. As a result of the analysis, estimates of the "time to core overheating startup" as well as an evaluation of alternate operator measures to mitigate the consequences of the event have been obtained. Finally, a proposal for improving the LOCA emergency operating procedure (E-1) has been suggested.

  4. Verification of the RELAP5-3D code in a natural circulation loop as a function of the initial water inventory

    NASA Astrophysics Data System (ADS)

    Bertani, C.; Falcone, N.; Bersano, A.; Caramello, M.; Matsushita, T.; De Salve, M.; Panella, B.

    2017-11-01

    High safety and reliability of advanced nuclear reactors, Generation IV and Small Modular Reactors (SMRs), play a crucial role in the acceptance of these new plant designs. Among all the possible safety systems, particular effort is dedicated to the study of passive systems because they rely on simple physical principles like natural circulation, without the need for an external energy source to operate. Taking inspiration from the second Decay Heat Removal system (DHR2) of ALFRED, the European Generation IV demonstrator of the fast lead-cooled reactor, an experimental facility has been built at the Energy Department of Politecnico di Torino (PROPHET facility) to study single- and two-phase natural circulation. The facility behavior is simulated using the thermal-hydraulic system code RELAP5-3D, which is widely used in nuclear applications. In this paper, the effect of the initial water inventory on natural circulation is analyzed, together with the experimental time behaviors of temperatures and pressures. The experimental matrix covers initial water inventories between 69% and 93%; the influence of the opposing effects related to the increase of the volume available for expansion and the pressure rise due to phase change is discussed. Simulations of the experimental tests are carried out using a 1D model at constant heating power and fixed liquid and air mass; the code predictions are compared with the experimental results. Two typical responses are observed: subcooled or two-phase saturated circulation. The steady-state pressure is a strong function of the liquid and air mass inventory. The numerical results show that, at low initial liquid mass inventory, the natural circulation is not stable but pulsating.

  5. Establishment and assessment of code scaling capability

    NASA Astrophysics Data System (ADS)

    Lim, Jaehyok

    In this thesis, a method using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that was used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". The PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small-break loss-of-coolant accident (SBLOCA) scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS, as an emergency core cooling system, provided an adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system, composed of the Drywell (DW) and Wetwell (WW), below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactors (LWRs). The major components of the CSAU methodology that were highlighted particularly focused on the scaling issues of experiments and models and their applicability to nuclear power plant transients and accidents. The major thermal-hydraulic phenomena to be analyzed were identified, and the predictive models adopted in the RELAP5/MOD3.3 (Patch03) code were briefly reviewed.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, William R.; Lee, John C.; Baxter, Alan

    Information and measured data from the initial Fort St. Vrain (FSV) high temperature gas reactor core are used to develop a benchmark configuration to validate computational methods for analysis of a full-core, commercial HTR configuration. Large uncertainties in the geometry and composition data for the FSV fuel and core are identified, including: (1) the relative numbers of fuel particles for the four particle types, (2) the distribution of fuel kernel diameters for the four particle types, (3) the Th:U ratio in the initial FSV core, and (4) the buffer thickness for the fissile and fertile particles. Sensitivity studies were performed to assess each of these uncertainties. A number of methods were developed to assist in these studies, including: (1) the automation of MCNP5 input files for FSV using Python scripts, (2) a simple method to verify isotopic loadings in MCNP5 input files, (3) an automated procedure to conduct a coupled MCNP5-RELAP5 analysis for a full-core FSV configuration with thermal-hydraulic feedback, and (4) a methodology for sampling kernel diameters from arbitrary power-law and Gaussian PDFs that preserves fuel loading and packing factor constraints. A reference FSV fuel configuration was developed based on having a single kernel diameter for each of the four particle types, preserving the known uranium and thorium loadings and packing factor (58%). Three fuel models were developed, based on representing the fuel as a mixture of kernels with two diameters, four diameters, or a continuous range of diameters. The fuel particles were put into a fuel compact using either a lattice-based approach or a stochastic packing methodology from RPI, and simulated with MCNP5. The results of the sensitivity studies indicated that the uncertainties in the relative numbers and sizes of fissile and fertile kernels were not important, nor were the distributions of kernel diameters within their diameter ranges. The uncertainty in the Th:U ratio in the initial FSV core was found, in a crude study, to be important. The uncertainty in the TRISO buffer thickness was estimated to be unimportant, but the study was not conclusive. FSV fuel compacts and a regular FSV fuel element were analyzed with MCNP5 and compared with predictions using a modified version of HELIOS that is capable of analyzing TRISO fuel configurations. The HELIOS analyses were performed by SSP. The eigenvalue discrepancies between HELIOS and MCNP5 are currently on the order of 1%, but these are still being evaluated. Full-core FSV configurations were developed for two initial critical configurations -- a cold, clean critical loading and a critical configuration at 70% power. MCNP5 predictions are compared to experimental data, and the results are mixed. Analyses were also done for the pulsed neutron experiments that were conducted by GA for the initial FSV core. MCNP5 was used to model these experiments, and reasonable agreement with measured results has been observed.
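
    Method (4) above, sampling kernel diameters from a prescribed PDF while honoring a total fuel loading constraint, can be illustrated with a short Python sketch: diameters are drawn from a truncated Gaussian by rejection, and kernels are retained until the particle volume implied by the packing factor is reached. The mean diameter, truncation range, and compact volume below are hypothetical, not FSV specification values.

      import numpy as np

      rng = np.random.default_rng(4)

      def sample_kernel_diameters(mean_um, sd_um, lo_um, hi_um, n):
          # Rejection sampling from a Gaussian truncated to the allowed range.
          out = np.empty(0)
          while out.size < n:
              d = rng.normal(mean_um, sd_um, n)
              out = np.concatenate([out, d[(d >= lo_um) & (d <= hi_um)]])
          return out[:n]

      # Keep just enough sampled kernels to reach the particle volume implied by
      # the 58% compact packing fraction; the compact volume here is hypothetical.
      target_volume_um3 = 0.58 * 3.0e11
      diam = sample_kernel_diameters(350.0, 25.0, 300.0, 400.0, 20_000)
      vol = np.pi * diam**3 / 6.0
      n_keep = int(np.searchsorted(np.cumsum(vol), target_volume_um3))
      print(f"{n_keep} kernels retained, mean diameter {diam[:n_keep].mean():.1f} um")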

  7. Melatonin reverses H2O2-induced senescence in SH-SY5Y cells by enhancing autophagy via sirtuin 1 deacetylation of the RelA/p65 subunit of NF-κB.

    PubMed

    Nopparat, Chutikorn; Sinjanakhom, Puritat; Govitrapong, Piyarat

    2017-08-01

    Autophagy, a degradation mechanism that plays a major role in maintaining cellular homeostasis and diminishes in aging, is considered an aging characteristic. Melatonin is an important hormone that plays a wide range of physiological functions, including an anti-aging effect, potentially via regulation of the Sirtuin1 (SIRT1) pathway. The deacetylation ability of SIRT1 is important for controlling the function of several transcription factors, including nuclear factor kappa B (NF-ĸB). Apart from inflammation, NF-ĸB can regulate autophagy by inhibiting Beclin1, an initiator of autophagy. Although numerous studies have revealed the role of melatonin in regulating autophagy, very limited experiments have shown that melatonin can increase autophagic activity via SIRT1 in a senescent model. This study focuses on the effect of melatonin on autophagy via the deacetylation activity of SIRT1 on RelA/p65, a subunit of NF-ĸB, to determine whether melatonin can attenuate the aging condition. SH-SY5Y cells were treated with H2O2 to induce the senescent state. These results demonstrated that melatonin reduced the number of beta-galactosidase (SA-βgal)-positive cells, a senescence marker. In addition, melatonin increased the protein levels of SIRT1, Beclin1, and LC3-II, a hallmark protein of autophagy, and reduced the levels of acetylated Lys310 in the p65 subunit of NF-ĸB in SH-SY5Y cells treated with H2O2. Furthermore, in the presence of a SIRT1 inhibitor, melatonin failed to increase autophagic markers. The present data indicate that melatonin enhances autophagic activity via the SIRT1 signaling pathway. Taken together, we propose that in modulating autophagy, melatonin may provide a therapeutically beneficial role in the anti-aging process. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gleicher, Frederick; Ortensi, Javier; DeHart, Mark

    Accurate calculation of desired quantities to predict fuel behavior requires the solution of interlinked equations representing different physics. Traditional fuels performance codes often rely on internal empirical models for the pin power density and a simplified boundary condition at the cladding edge. These simplifications are made because of the difficulty of coupling applications or codes on differing domains and mapping the required data. To demonstrate an approach closer to first principles, the neutronics application Rattlesnake and the thermal hydraulics application RELAP-7 were coupled to the fuels performance application BISON under the master application MAMMOTH. A single fuel pin was modeled based on the dimensions of a Westinghouse 17x17 fuel rod. The simulation consisted of a depletion period of 1343 days, roughly equal to three full operating cycles, followed by a station blackout (SBO) event. The fuel rod was depleted for 1343 days at a near-constant total power of 65.81 kW. After 1343 days the fission power was reduced to zero (simulating a reactor shutdown), and decay heat calculations provided the time-varying energy source after this time. For this problem, Rattlesnake, BISON, and RELAP-7 are coupled under MAMMOTH in a split operator approach. Each system solves its physics on a separate mesh and, for RELAP-7 and BISON, on only a subset of the full problem domain. Rattlesnake solves the neutronics over the whole domain, which includes the fuel, cladding, gaps, water, and top and bottom rod holders. Here BISON is applied to the fuel and cladding with a 2D axisymmetric domain, and RELAP-7 is applied to the flow of the circular outer water channel with a set of 1D flow equations. The mesh on the Rattlesnake side can either be 3D (for low order transport) or 2D (for diffusion). BISON has a matching ring structure mesh for the fuel, so both the power density and local burnup are copied accurately from Rattlesnake. At each depletion time step, Rattlesnake calculates a power density, fission density rate, burnup distribution, and fast flux based on the current water density and fuel temperature. These are then mapped to the BISON mesh for a fuels performance solve. BISON calculates the fuel temperature and cladding surface temperature based upon the current power density and bulk fluid temperature. RELAP-7 then calculates the fluid temperature, water density fraction, and water phase velocity based upon the cladding surface temperature. The fuel temperature and the fluid density are then passed back to Rattlesnake for another neutronics calculation. Six Picard (fixed-point) iterations are performed in this manner to obtain consistent, tightly coupled, and stable results. In this paper a set of results from the detailed calculation is provided for both the depletion and the SBO event. We demonstrate that a detailed calculation closer to first principles can be done under MAMMOTH between different applications on differing domains.
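
    The six-pass split-operator iteration described above has the following schematic shape. In this minimal Python sketch the three solver functions are crude placeholders for the Rattlesnake, BISON, and RELAP-7 solves exchanged through MAMMOTH, and all feedback coefficients are made up for illustration.

      def neutronics_solve(fuel_temp_k, water_density):
          # Placeholder Rattlesnake solve: pin power [W] with simple Doppler
          # and moderator density feedback (coefficients are made up).
          return 65.81e3 * (1.0 - 2.0e-5 * (fuel_temp_k - 900.0)) * water_density / 0.70

      def fuels_solve(power_w, fluid_temp_k):
          # Placeholder BISON solve: average fuel temperature from power and coolant.
          return fluid_temp_k + power_w / 250.0

      def thermal_hydraulics_solve(fuel_temp_k):
          # Placeholder RELAP-7 solve: coolant temperature [K] and density fraction.
          fluid_temp_k = 560.0 + 0.05 * (fuel_temp_k - 560.0)
          return fluid_temp_k, 0.70 - 1.0e-4 * (fluid_temp_k - 560.0)

      fuel_temp, fluid_temp, water_density = 900.0, 560.0, 0.70
      for it in range(6):                   # six fixed-point passes, as in the paper
          power = neutronics_solve(fuel_temp, water_density)
          fuel_temp = fuels_solve(power, fluid_temp)
          fluid_temp, water_density = thermal_hydraulics_solve(fuel_temp)
          print(f"iteration {it + 1}: power {power:8.0f} W, fuel {fuel_temp:6.1f} K")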

  9. Chamomile, an anti-inflammatory agent inhibits inducible nitric oxide synthase expression by blocking RelA/p65 activity

    PubMed Central

    Bhaskaran, Natarajan; Shukla, Sanjeev; Srivastava, Janmejai K; Gupta, Sanjay

    2010-01-01

    Chamomile has long been used in traditional medicine for the treatment of inflammation-related disorders. In this study we aimed to investigate the inhibitory effects of chamomile on nitric oxide (NO) production and inducible nitric oxide synthase (iNOS) expression, and to explore its potential anti-inflammatory mechanisms using RAW 264.7 macrophages. Chamomile treatment inhibited LPS-induced NO production and significantly blocked IL-1β-, IL-6- and TNFα-induced NO levels in RAW 264.7 macrophages. Chamomile caused a reduction in LPS-induced iNOS mRNA and protein expression. In RAW 264.7 macrophages, LPS-induced DNA binding activity of RelA/p65 was significantly inhibited by chamomile, an effect that was mediated through the inhibition of IKKβ, the upstream kinase regulating NF-κB/Rel activity, and degradation of inhibitory factor-κB. These results demonstrate that chamomile inhibits NO production and iNOS gene expression by inhibiting RelA/p65 activation and support the utilization of chamomile as an effective anti-inflammatory agent. PMID:21042790

  10. Chamomile: an anti-inflammatory agent inhibits inducible nitric oxide synthase expression by blocking RelA/p65 activity.

    PubMed

    Bhaskaran, Natarajan; Shukla, Sanjeev; Srivastava, Janmejai K; Gupta, Sanjay

    2010-12-01

    Chamomile has long been used in traditional medicine for the treatment of inflammation-related disorders. In this study we investigated the inhibitory effects of chamomile on nitric oxide (NO) production and inducible nitric oxide synthase (iNOS) expression, and explored its potential anti-inflammatory mechanisms using RAW 264.7 macrophages. Chamomile treatment inhibited LPS-induced NO production and significantly blocked IL-1β, IL-6 and TNFα-induced NO levels in RAW 264.7 macrophages. Chamomile caused reduction in LPS-induced iNOS mRNA and protein expression. In RAW 264.7 macrophages, LPS-induced DNA binding activity of RelA/p65 was significantly inhibited by chamomile, an effect that was mediated through the inhibition of IKKβ, the upstream kinase regulating NF-κB/Rel activity, and degradation of inhibitory factor-κB. These results demonstrate that chamomile inhibits NO production and iNOS gene expression by inhibiting RelA/p65 activation and supports the utilization of chamomile as an effective anti-inflammatory agent.

  11. Thrombin selectively engages LIM kinase 1 and slingshot-1L phosphatase to regulate NF-κB activation and endothelial cell inflammation

    PubMed Central

    Leonard, Antony; Marando, Catherine; Rahman, Arshad

    2013-01-01

    Endothelial cell (EC) inflammation is a central event in the pathogenesis of many pulmonary diseases such as acute lung injury and its more severe form acute respiratory distress syndrome. Alterations in actin cytoskeleton are shown to be crucial for NF-κB regulation and EC inflammation. Previously, we have described a role of actin binding protein cofilin in mediating cytoskeletal alterations essential for NF-κB activation and EC inflammation. The present study describes a dynamic mechanism in which LIM kinase 1 (LIMK1), a cofilin kinase, and slingshot-1Long (SSH-1L), a cofilin phosphatase, are engaged by procoagulant and proinflammatory mediator thrombin to regulate these responses. Our data show that knockdown of LIMK1 destabilizes whereas knockdown of SSH-1L stabilizes the actin filaments through modulation of cofilin phosphorylation; however, in either case thrombin-induced NF-κB activity and expression of its target genes (ICAM-1 and VCAM-1) is inhibited. Further mechanistic analyses reveal that knockdown of LIMK1 or SSH-1L each attenuates nuclear translocation and thereby DNA binding of RelA/p65. In addition, LIMK1 or SSH-1L depletion inhibited RelA/p65 phosphorylation at Ser536, a critical event conferring transcriptional competency to the bound NF-κB. However, unlike SSH-1L, LIMK1 knockdown also impairs the release of RelA/p65 by blocking IKKβ-dependent phosphorylation/degradation of IκBα. Interestingly, LIMK1 or SSH-1L depletion failed to inhibit TNF-α-induced RelA/p65 nuclear translocation and proinflammatory gene expression. Thus this study provides evidence for a novel role of LIMK1 and SSH-1L in selectively regulating EC inflammation associated with intravascular coagulation. PMID:24039253

  12. Thrombin selectively engages LIM kinase 1 and slingshot-1L phosphatase to regulate NF-κB activation and endothelial cell inflammation.

    PubMed

    Leonard, Antony; Marando, Catherine; Rahman, Arshad; Fazal, Fabeha

    2013-11-01

    Endothelial cell (EC) inflammation is a central event in the pathogenesis of many pulmonary diseases such as acute lung injury and its more severe form acute respiratory distress syndrome. Alterations in actin cytoskeleton are shown to be crucial for NF-κB regulation and EC inflammation. Previously, we have described a role of actin binding protein cofilin in mediating cytoskeletal alterations essential for NF-κB activation and EC inflammation. The present study describes a dynamic mechanism in which LIM kinase 1 (LIMK1), a cofilin kinase, and slingshot-1Long (SSH-1L), a cofilin phosphatase, are engaged by procoagulant and proinflammatory mediator thrombin to regulate these responses. Our data show that knockdown of LIMK1 destabilizes whereas knockdown of SSH-1L stabilizes the actin filaments through modulation of cofilin phosphorylation; however, in either case thrombin-induced NF-κB activity and expression of its target genes (ICAM-1 and VCAM-1) is inhibited. Further mechanistic analyses reveal that knockdown of LIMK1 or SSH-1L each attenuates nuclear translocation and thereby DNA binding of RelA/p65. In addition, LIMK1 or SSH-1L depletion inhibited RelA/p65 phosphorylation at Ser(536), a critical event conferring transcriptional competency to the bound NF-κB. However, unlike SSH-1L, LIMK1 knockdown also impairs the release of RelA/p65 by blocking IKKβ-dependent phosphorylation/degradation of IκBα. Interestingly, LIMK1 or SSH-1L depletion failed to inhibit TNF-α-induced RelA/p65 nuclear translocation and proinflammatory gene expression. Thus this study provides evidence for a novel role of LIMK1 and SSH-1L in selectively regulating EC inflammation associated with intravascular coagulation.

  13. RELAP5 Analysis of the Hybrid Loop-Pool Design for Sodium Cooled Fast Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hongbin Zhang; Haihua Zhao; Cliff Davis

    2008-06-01

    An innovative hybrid loop-pool design for sodium cooled fast reactors (SFR-Hybrid) has recently been proposed. This design takes advantage of the inherent safety of a pool design and the compactness of a loop design to improve the economics and safety of SFRs. In the hybrid loop-pool design, primary loops are formed by connecting the reactor outlet plenum (hot pool), intermediate heat exchangers (IHX), primary pumps, and the reactor inlet plenum with pipes. The primary loops are immersed in the cold pool (buffer pool). Passive safety systems, modular Pool Reactor Auxiliary Cooling Systems (PRACS), are added to transfer decay heat from the primary system to the buffer pool during loss of forced circulation (LOFC) transients. The primary systems and the buffer pool are thermally coupled by the PRACS, which is composed of PRACS heat exchangers (PHX), fluidic diodes, and connecting pipes. Fluidic diodes are simple, passive devices that provide large flow resistance in one direction and small flow resistance in the reverse direction. Direct reactor auxiliary cooling system (DRACS) heat exchangers (DHX) are immersed in the cold pool to transfer decay heat to the environment by natural circulation. To prove the design concepts, especially how the passive safety systems behave during transients such as LOFC with scram, a RELAP5-3D model for the hybrid loop-pool design was developed. The simulations were done for both steady-state and transient conditions. This paper presents the details of the RELAP5-3D analysis as well as the calculated thermal response during LOFC with scram. The 250 MWt conventional pool-type design of GNEP's Advanced Burner Test Reactor (ABTR), developed by Argonne National Laboratory, was used as the reference reactor core and primary loop design. The reactor inlet temperature is 355 °C and the outlet temperature is 510 °C. The core design is the same as that for the ABTR, and the steady-state buffer pool temperature is the same as the reactor inlet temperature. The peak cladding, hot pool, cold pool, and reactor inlet temperatures were calculated during LOFC. The results indicate that there are two phases during the LOFC transient: an initial thermal equilibration phase and a long-term decay heat removal phase. The initial thermal equilibration phase occurs over a few hundred seconds, as the system adjusts from forced circulation to natural circulation flow. Subsequently, during the long-term heat removal phase, all temperatures evolve very slowly due to the large thermal inertia of the primary and buffer pool systems. The results clearly show that the passive PRACS can effectively transfer decay heat from the primary system to the buffer pool by natural circulation. The DRACS system in turn can effectively transfer the decay heat to the environment.
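
    The slow long-term phase reflects the magnitude and decay rate of the heat source. As a rough, hedged illustration (a Way-Wigner style approximation, not the decay heat correlation actually used in the RELAP5-3D model), the fraction of full power remaining after shutdown can be estimated as below; the operating time is an assumed value.

        # Way-Wigner style decay-heat estimate: fraction of full power at
        # t seconds after shutdown, following T_op seconds at constant power.
        # Illustrative only; not the RELAP5-3D decay heat model.
        def decay_heat_fraction(t, T_op):
            return 0.066 * (t ** -0.2 - (t + T_op) ** -0.2)

        P0 = 250.0e6                        # ABTR thermal power, W
        T_op = 3 * 365.25 * 24 * 3600.0     # assume ~3 years at power
        for t in (10.0, 100.0, 1000.0, 3600.0, 86400.0):
            P = P0 * decay_heat_fraction(t, T_op)
            print(f"t = {t:8.0f} s after scram: decay power = {P / 1e6:5.2f} MW")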

  14. Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blankenship, Doug; Sonnenthal, Eric

    Archive contains thermal-mechanical simulation input/output files. Included are files that fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    This document outlines the development of a high fidelity, best estimate nuclear power plant severe transient simulation capability that will complement or enhance the integral system codes historically used for licensing and analysis of severe accidents. As with other tools in the Risk Informed Safety Margin Characterization (RISMC) Toolkit, the ultimate user of the Enhanced Severe Transient Analysis and Prevention (ESTAP) capability is the plant decision-maker; the deliverable to that customer is a modern, simulation-based safety analysis capability, applicable to a much broader class of safety issues than is traditional Light Water Reactor (LWR) licensing analysis. Currently, the RISMC pathway’s major emphasis is placed on developing RELAP-7, a next-generation safety analysis code, and on showing how to use RELAP-7 to analyze margin from a modern point of view: that is, by characterizing margin in terms of the probabilistic spectra of the “loads” applied to systems, structures, and components (SSCs), and the “capacity” of those SSCs to resist those loads without failing. The first objective of the ESTAP task, and the focus of one task of this effort, is to augment RELAP-7 analyses with user-selected multi-dimensional, multi-phase models of specific plant components to simulate complex phenomena that may lead to, or exacerbate, severe transients and core damage. Such phenomena include: coolant crossflow between PWR assemblies during a severe reactivity transient, stratified single- or two-phase coolant flow in primary coolant piping, inhomogeneous mixing of emergency coolant water or boric acid with hot primary coolant, and water hammer. These are well-documented phenomena associated with plant transients but are generally not captured in system codes. They are, however, generally limited to specific components, structures, and operating conditions. The second ESTAP task is to similarly augment a severe (post-core damage) accident integral analysis code with high fidelity simulations that would allow investigation of multi-dimensional, multi-phase containment phenomena that are only treated approximately in established codes.

  16. Thermal-hydraulic simulation of natural convection decay heat removal in the High Flux Isotope Reactor using RELAP5 and TEMPEST: Part 1, Models and simulation results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, D.G.; Wendel, M.W.; Chen, N.C.J.

    A study was conducted to examine decay heat removal requirements in the High Flux Isotope Reactor (HFIR) following shutdown from 85 MW. The objective of the study was to determine when forced flow through the core could be terminated without causing the fuel to melt. This question is particularly relevant when a station blackout caused by an external event is considered. Analysis of natural circulation in the core, vessel upper plenum, and reactor pool indicates that 12 h of forced flow will permit a safe shutdown with some margin. However, uncertainties in the analysis preclude conclusive proof that 12 h is sufficient. As a result of the study, two seismically qualified diesel generators were installed in HFIR. 9 refs., 4 figs.

  17. Essential Role of Cofilin-1 in Regulating Thrombin-induced RelA/p65 Nuclear Translocation and Intercellular Adhesion Molecule 1 (ICAM-1) Expression in Endothelial Cells*

    PubMed Central

    Fazal, Fabeha; Bijli, Kaiser M.; Minhajuddin, Mohd; Rein, Theo; Finkelstein, Jacob N.; Rahman, Arshad

    2009-01-01

    Activation of RhoA/Rho-associated kinase (ROCK) pathway and the associated changes in actin cytoskeleton induced by thrombin are crucial for activation of NF-κB and expression of its target gene ICAM-1 in endothelial cells. However, the events acting downstream of RhoA/ROCK to mediate these responses remain unclear. Here, we show a central role of cofilin-1, an actin-binding protein that promotes actin depolymerization, in linking RhoA/ROCK pathway to dynamic alterations in actin cytoskeleton that are necessary for activation of NF-κB and thereby expression of ICAM-1 in these cells. Stimulation of human umbilical vein endothelial cells with thrombin resulted in Ser3 phosphorylation/inactivation of cofilin and formation of actin stress fibers in a ROCK-dependent manner. RNA interference knockdown of cofilin-1 stabilized the actin filaments and inhibited thrombin- and RhoA-induced NF-κB activity. Similarly, constitutively inactive mutant of cofilin-1 (Cof1-S3D), known to stabilize the actin cytoskeleton, inhibited NF-κB activity by thrombin. Overexpression of wild type cofilin-1 or constitutively active cofilin-1 mutant (Cof1-S3A), known to destabilize the actin cytoskeleton, also impaired thrombin-induced NF-κB activity. Additionally, depletion of cofilin-1 was associated with a marked reduction in ICAM-1 expression induced by thrombin. The effect of cofilin-1 depletion on NF-κB activity and ICAM-1 expression occurred downstream of IκBα degradation and was a result of impaired RelA/p65 nuclear translocation and consequently, RelA/p65 binding to DNA. Together, these data show that cofilin-1 occupies a central position in RhoA-actin pathway mediating nuclear translocation of RelA/p65 and expression of ICAM-1 in endothelial cells. PMID:19483084

  18. Essential role of cofilin-1 in regulating thrombin-induced RelA/p65 nuclear translocation and intercellular adhesion molecule 1 (ICAM-1) expression in endothelial cells.

    PubMed

    Fazal, Fabeha; Bijli, Kaiser M; Minhajuddin, Mohd; Rein, Theo; Finkelstein, Jacob N; Rahman, Arshad

    2009-07-31

    Activation of RhoA/Rho-associated kinase (ROCK) pathway and the associated changes in actin cytoskeleton induced by thrombin are crucial for activation of NF-kappaB and expression of its target gene ICAM-1 in endothelial cells. However, the events acting downstream of RhoA/ROCK to mediate these responses remain unclear. Here, we show a central role of cofilin-1, an actin-binding protein that promotes actin depolymerization, in linking RhoA/ROCK pathway to dynamic alterations in actin cytoskeleton that are necessary for activation of NF-kappaB and thereby expression of ICAM-1 in these cells. Stimulation of human umbilical vein endothelial cells with thrombin resulted in Ser(3) phosphorylation/inactivation of cofilin and formation of actin stress fibers in a ROCK-dependent manner. RNA interference knockdown of cofilin-1 stabilized the actin filaments and inhibited thrombin- and RhoA-induced NF-kappaB activity. Similarly, constitutively inactive mutant of cofilin-1 (Cof1-S3D), known to stabilize the actin cytoskeleton, inhibited NF-kappaB activity by thrombin. Overexpression of wild type cofilin-1 or constitutively active cofilin-1 mutant (Cof1-S3A), known to destabilize the actin cytoskeleton, also impaired thrombin-induced NF-kappaB activity. Additionally, depletion of cofilin-1 was associated with a marked reduction in ICAM-1 expression induced by thrombin. The effect of cofilin-1 depletion on NF-kappaB activity and ICAM-1 expression occurred downstream of IkappaBalpha degradation and was a result of impaired RelA/p65 nuclear translocation and consequently, RelA/p65 binding to DNA. Together, these data show that cofilin-1 occupies a central position in RhoA-actin pathway mediating nuclear translocation of RelA/p65 and expression of ICAM-1 in endothelial cells.

  19. ISP33 standard problem on the PACTEL facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purhonen, H.; Kouhia, J.; Kalli, H.

    ISP33 is the first OECD/NEA/CSNI standard problem related to VVER-type pressurized water reactors. The reference reactor of the PACTEL test facility, which was used to carry out the ISP33 experiment, is the VVER-440 reactor, two of which are located near the Finnish city of Loviisa. The objective of the ISP33 test was to study the natural circulation behaviour of VVER-440 reactors at different coolant inventories. Natural circulation was considered a suitable phenomenon for the first VVER-related ISP to focus on because of its importance in most accidents and transients. The natural circulation behaviour was expected to differ from that of Western-type PWRs as a result of the horizontal steam generators and the hot leg loop seals. This ISP was conducted as a blind problem. The experiment was started at full coolant inventory. Single-phase natural circulation transported the energy from the core to the steam generators. The inventory was then reduced stepwise at about 900 s intervals, draining 60 kg each time from the bottom of the downcomer. The core power was about 3.7% of the nominal value. The test was terminated after the cladding temperatures began to rise. The ATHLET, CATHARE, RELAP5 (MOD3, MOD2.5, and MOD2), RELAP4/MOD6, DINAMIKA, and TECH-M4 codes were used in 21 pretest and 20 posttest calculations submitted for ISP33.

  20. Investigation of two-phase phenomena occurring within moisture separator reheater high-level reactor trips at the Maanshan nuclear power plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferng, Y.M.; Liao, L.Y.

    1996-01-01

    During the operating history of the Maanshan nuclear power plant (MNPP), five reactor trips have occurred as a result of the moisture separator reheater (MSR) high-level signal. These MSR high-level reactor trips have been a serious concern, especially during the startup period of MNPP. Consequently, it is worthwhile to study the physical phenomena of this particular event, and analytical work was performed using the RELAP5/MOD3 code to investigate the two-phase thermal-hydraulic behaviors occurring within the MSR high-level reactor trips. The analytical model is first assessed against experimental data obtained from several test loops; the same model can then be applied with confidence to the study of this topic. According to the present calculated results, the phenomena of liquid droplet accumulation and residual liquid blowing in the horizontal section of the cross-under lines can be modeled. In addition, the present model can predict how different rates of increase in the inlet steam flow affect the liquid accumulation within the cross-under lines. The calculated conclusion is confirmed by the revised startup procedure of MNPP.

  1. Enterovirus 71 2C Protein Inhibits NF-κB Activation by Binding to RelA(p65)

    PubMed Central

    Du, Haiwei; Yin, Peiqi; Yang, Xiaojie; Zhang, Leiliang; Jin, Qi; Zhu, Guofeng

    2015-01-01

    Viruses evolve multiple ways to interfere with NF-κB signaling, a key regulator of innate and adaptive immunity. Enterovirus 71 (EV71) is one of the primary pathogens that cause hand-foot-mouth disease. Here, we identify RelA(p65) as a novel binding partner for the EV71 2C protein in a yeast two-hybrid screen. By interacting with the IPT domain of p65, 2C reduces the formation of the heterodimer p65/p50, the predominant form of NF-κB. We also show that picornavirus 2C family proteins inhibit NF-κB activation and associate with p65 and IKKβ. Our findings provide a novel mechanism by which EV71 antagonizes innate immunity. PMID:26394554

  2. Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Parmar, Kulwinder Singh

    2016-03-01

    This study investigates the accuracy of least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5Tree) methods in modeling river water pollution. Various combinations of water quality parameters, Free Ammonia (AMM), Total Kjeldahl Nitrogen (TKN), Water Temperature (WT), Total Coliform (TC), Fecal Coliform (FC) and Potential of Hydrogen (pH), monitored at Nizamuddin, Delhi Yamuna River in India, were used as inputs to the applied models. Results indicated that the LSSVM and MARS models had almost the same accuracy and both performed better than the M5Tree model in modeling monthly chemical oxygen demand (COD). The average root mean square error (RMSE) of the LSSVM and M5Tree models was decreased by 1.47% and 19.1%, respectively, using the MARS model. Adding the TC input did not increase model accuracy in modeling COD, while adding the FC and pH inputs generally decreased the accuracy. The overall results indicated that the MARS and LSSVM models could be successfully used in estimating monthly river water pollution level using the AMM, TKN and WT parameters as inputs.
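
    The comparison workflow, training several regressors on shared input combinations and ranking them by RMSE, is easy to sketch. scikit-learn has no LSSVM or MARS implementation, so the sketch below substitutes KernelRidge (closely related to LSSVM) and a regression tree (an M5Tree-like stand-in) on synthetic data; it illustrates the pattern only, not the study's dataset or results.

        # Benchmarking sketch with stand-in models and synthetic data.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 3))     # stand-ins for AMM, TKN, WT
        y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.2 * X[:, 2]
             + rng.normal(0.0, 0.1, size=300))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        models = [("LSSVM-like (kernel ridge)", KernelRidge(kernel="rbf", alpha=0.1)),
                  ("M5Tree-like (CART)", DecisionTreeRegressor(max_depth=5))]
        for name, model in models:
            pred = model.fit(X_tr, y_tr).predict(X_te)
            rmse = mean_squared_error(y_te, pred) ** 0.5
            print(f"{name:>26s}: RMSE = {rmse:.3f}")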

  3. Improvement of COBRA-TF for modeling of PWR cold- and hot-legs during reactor transients

    NASA Astrophysics Data System (ADS)

    Salko, Robert K.

    COBRA-TF is a two-phase, three-field (liquid, vapor, droplets) thermal-hydraulic modeling tool developed by the Pacific Northwest Laboratory under sponsorship of the NRC. The code was developed for Light Water Reactor analysis starting in the 1980s; however, its development has continued to the present. COBRA-TF still finds widespread use throughout the nuclear engineering field, including nuclear-power vendors, academia, and research institutions. It has been proposed that extending the COBRA-TF code-modeling region from vessel-only components to Pressurized Water Reactor (PWR) coolant-line regions can lead to improved Loss-of-Coolant Accident (LOCA) analysis. Improved modeling is anticipated because of COBRA-TF's capability to independently model the entrained-droplet flow-field behavior, which has been observed to impact delivery to the core region[1]. Because COBRA-TF was originally developed for vertically-dominated, in-vessel, sub-channel flow, extension of the COBRA-TF modeling region to the horizontal-pipe geometries of the coolant-lines required several code modifications: inclusion of the stratified flow regime in the COBRA-TF flow regime map, along with associated interfacial drag, wall drag, and interfacial heat transfer correlations; inclusion of a horizontal-stratification force between adjacent mesh cells having unequal levels of stratified flow; and generation of a new code-input interface for the modeling of coolant-lines. The sheer number of COBRA-TF modifications required to complete this work made this project as much a code-development effort as a study of thermal-hydraulics in reactor coolant-lines. The means for achieving these tasks shifted along the way, ultimately leading to the development of a separate, nearly independent one-dimensional, two-phase-flow modeling code geared toward reactor coolant-line analysis. This code has been named CLAP, for Coolant-Line-Analysis Package. Versions were created both coupled to COBRA-TF and standalone, with the most recent version being a standalone code. CLAP performs a separate, simplified, 1-D solution of the conservation equations while making special considerations for coolant-line geometry and flow phenomena. The end of the project saw a functional code package that demonstrates a stable numerical solution and that has gone through a series of Validation and Verification tests using the Two-Phase Testing Facility (TPTF) experimental data[2]. The results indicate that CLAP under-performs RELAP5/MOD3 in predicting the experimental void of the TPTF facility in some cases. There is no apparent pattern, however, pointing to a consistent type of case that the code fails to predict properly (e.g., low-flow, high-flow, discharging to a full vessel, or discharging to an empty vessel). Pressure-profile predictions are sometimes unrealistic, which indicates that there may be a problem with test-case boundary conditions or with the coupling of the continuity and momentum equations in the solution algorithm. The code does predict the flow regime correctly for all cases with the stratification-force model off. Turning the stratification model on can cause the low-flow case void profiles to over-react to the force and the flow regime to transition out of stratified flow. The code would benefit from an increased amount of Validation & Verification testing.
The development of CLAP was significant, as it is a cleanly written, logical representation of the reactor coolant-line geometry. It is stable and capable of modeling basic flow physics in the reactor coolant-line. Code development and debugging required the temporary removal of the energy equation and the mass-transfer terms in the governing equations; reintroducing these terms will allow future coupling to RELAP and re-coupling with COBRA-TF. Adding more applicable entrainment and de-entrainment models would allow the capture of more advanced coolant-line physics that can be expected during a Loss-of-Coolant Accident. One of the package's benefits is its ability to serve as a platform for future coolant-line model development and implementation, including capture of the important de-entrainment behavior in reactor hot-legs (the steam-binding effect) and flow convection in the upper-plenum region of the vessel.
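
    One small, concrete piece of what a stratified-flow model must track is the relation between liquid level and void fraction in a horizontal pipe, which follows from circular-segment geometry. The sketch below shows that relation in Python; it is standard geometry offered for illustration, not code taken from CLAP or COBRA-TF.

        # Void fraction for stratified flow in a horizontal circular pipe:
        # the liquid occupies a circular segment of height h (0 <= h <= 2R).
        import math

        def void_fraction(h, R):
            theta = math.acos(max(-1.0, min(1.0, (R - h) / R)))  # segment half-angle
            liquid_area = R * R * (theta - math.sin(theta) * math.cos(theta))
            return 1.0 - liquid_area / (math.pi * R * R)

        R = 0.05  # 10-cm-diameter pipe, m
        for frac in (0.25, 0.50, 0.75):
            h = frac * 2.0 * R
            print(f"liquid level {frac:.0%} of diameter -> "
                  f"void fraction {void_fraction(h, R):.3f}")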

  4. Validation and quantification of [18F]altanserin binding in the rat brain using blood input and reference tissue modeling

    PubMed Central

    Riss, Patrick J; Hong, Young T; Williamson, David; Caprioli, Daniele; Sitnikov, Sergey; Ferrari, Valentina; Sawiak, Steve J; Baron, Jean-Claude; Dalley, Jeffrey W; Fryer, Tim D; Aigbirhio, Franklin I

    2011-01-01

    The 5-hydroxytryptamine type 2a (5-HT2A) selective radiotracer [18F]altanserin has been subjected to a quantitative micro-positron emission tomography study in Lister Hooded rats. Metabolite-corrected plasma input modeling was compared with reference tissue modeling using the cerebellum as the reference tissue. [18F]altanserin showed sufficient brain uptake in a distribution pattern consistent with the known distribution of 5-HT2A receptors. Full binding saturation and displacement were documented, and no significant uptake of radioactive metabolites was detected in the brain. Blood input as well as reference tissue models were equally appropriate to describe the radiotracer kinetics. [18F]altanserin is suitable for quantification of 5-HT2A receptor availability in rats. PMID:21750562
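
    As a hedged illustration of what blood input modeling involves, the sketch below fits a one-tissue compartment model, C_t(t) = K1·exp(-k2·t) convolved with the plasma input C_p(t), to synthetic data with scipy. The input function, parameters, and noise level are invented for the example; this is the generic technique, not the analysis pipeline of the study.

        # One-tissue compartment fit against a (synthetic) plasma input curve.
        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0.0, 60.0, 241)          # minutes
        dt = t[1] - t[0]
        Cp = 100.0 * t * np.exp(-t / 2.0)        # synthetic plasma input

        def model(t, K1, k2):
            irf = K1 * np.exp(-k2 * t)           # tissue impulse response
            return np.convolve(irf, Cp)[: t.size] * dt

        true_K1, true_k2 = 0.3, 0.15
        Ct = (model(t, true_K1, true_k2)
              + np.random.default_rng(1).normal(0.0, 0.5, t.size))
        (K1, k2), _ = curve_fit(model, t, Ct, p0=(0.1, 0.1))
        print(f"K1 = {K1:.3f} (true 0.3), k2 = {k2:.3f} (true 0.15), "
              f"VT = K1/k2 = {K1 / k2:.2f}")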

  5. LOFT L2-3 blowdown experiment safety analyses D, E, and G; LOCA analyses H, K, K1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perryman, J.L.; Keeler, C.D.; Saukkoriipi, L.O.

    1978-12-01

    Three calculations using conservative off-nominal conditions and evaluation model options were made using RELAP4/MOD5 for blowdown-refill and RELAP4/MOD6 for reflood for Loss-of-Fluid Test Experiment L2-3 to support the experiment safety analysis effort. The three analyses are as follows: Analysis D: Loss of commercial power during Experiment L2-3; Analysis E: Hot leg quick-opening blowdown valve (QOBV) does not open during Experiment L2-3; and Analysis G: Cold leg QOBV does not open during Experiment L2-3. In addition, the results of three LOFT loss-of-coolant accident (LOCA) analyses using a power of 56.1 MW and a primary coolant system flow rate of 3.6 million lbm/hr are presented: Analysis H: Intact loop 200% hot leg break; emergency core cooling (ECC) system B unavailable; Analysis K: Pressurizer relief valve stuck in the open position; ECC system B unavailable; and Analysis K1: Same as analysis K, but using a primary coolant system flow rate of 1.92 million lbm/hr (the L2-4 pre-LOCE flow rate). For analysis D, the maximum cladding temperature reached was 1762°F, 22 sec into reflood. In analyses E and G, the blowdowns were slower due to one of the QOBVs not functioning. The maximum cladding temperature reached in analysis E was 1700°F, 64.7 sec into reflood; for analysis G, it was 1300°F at the start of reflood. For analysis H, the maximum cladding temperature reached was 1825°F, 0.01 sec into reflood. Analysis K was a very slow blowdown, and the cladding temperatures followed the saturation temperature of the system. The results of analysis K1 were nearly identical to those of analysis K; system depressurization was not affected by the primary coolant system flow rate.

  6. Mitogen- and Stress-Activated Kinase 1 (MSK1) Regulates Cigarette Smoke-Induced Histone Modifications on NF-κB-dependent Genes

    PubMed Central

    Sundar, Isaac K.; Chung, Sangwoon; Hwang, Jae-woong; Lapek, John D.; Bulger, Michael; Friedman, Alan E.; Yao, Hongwei; Davie, James R.; Rahman, Irfan

    2012-01-01

    Cigarette smoke (CS) causes sustained lung inflammation, which is an important event in the pathogenesis of chronic obstructive pulmonary disease (COPD). We have previously reported that IKKα (I kappaB kinase alpha) plays a key role in CS-induced pro-inflammatory gene transcription by chromatin modifications; however, the underlying role of the downstream signaling kinase is not known. Mitogen- and stress-activated kinase 1 (MSK1) serves as a specific downstream NF-κB RelA/p65 kinase, mediating transcriptional activation of NF-κB-dependent pro-inflammatory genes. The role of MSK1 in nuclear signaling and chromatin modifications is not known, particularly in response to environmental stimuli. We hypothesized that MSK1 regulates chromatin modifications of pro-inflammatory gene promoters in response to CS. Here, we report that CS extract activates MSK1 in human lung epithelial (H292 and BEAS-2B) cell lines, human primary small airway epithelial cells (SAEC), and mouse lung, resulting in phosphorylation of nuclear MSK1 (Thr581) and phospho-acetylation of RelA/p65 at Ser276 and Lys310, respectively. This event was associated with phospho-acetylation of histone H3 (Ser10/Lys9) and acetylation of histone H4 (Lys12). MSK1 N- and C-terminal kinase-dead mutants, MSK1 siRNA-mediated knockdown in transiently transfected H292 cells, and MSK1 stable knockdown mouse embryonic fibroblasts significantly reduced CS extract-induced MSK1 and NF-κB RelA/p65 activation and posttranslational modifications of histones. CS extract/CS promotes the direct interaction of MSK1 with RelA/p65 and p300 in epithelial cells and in mouse lung. Furthermore, CS-mediated recruitment of MSK1 and its substrates to the promoters of NF-κB-dependent pro-inflammatory genes leads to transcriptional activation, as determined by chromatin immunoprecipitation. Thus, MSK1 is an important downstream kinase involved in CS-induced NF-κB activation and chromatin modifications, which have implications in the pathogenesis of COPD. PMID:22312446

  7. Neonatal High Bone Mass With First Mutation of the NF-κB Complex: Heterozygous De Novo Missense (p.Asp512Ser) RELA (Rela/p65).

    PubMed

    Frederiksen, Anja L; Larsen, Martin J; Brusgaard, Klaus; Novack, Deborah V; Knudsen, Peter Juel Thiis; Schrøder, Henrik Daa; Qiu, Weimin; Eckhardt, Christina; McAlister, William H; Kassem, Moustapha; Mumm, Steven; Frost, Morten; Whyte, Michael P

    2016-01-01

    Heritable disorders that feature high bone mass (HBM) are rare. The etiology is typically a mutation(s) within a gene that regulates the differentiation and function of osteoblasts (OBs) or osteoclasts (OCs). Nevertheless, the molecular basis is unknown for approximately one-fifth of such entities. NF-κB signaling is a key regulator of bone remodeling and acts by enhancing OC survival while impairing OB maturation and function. The NF-κB transcription complex comprises five subunits. In mice, deletion of the p50 and p52 subunits together causes osteopetrosis (OPT). In humans, however, mutations within the genes that encode the NF-κB complex, including the Rela/p65 subunit, have not been reported. We describe a neonate who died suddenly and unexpectedly and was found at postmortem to have HBM documented radiographically and by skeletal histopathology. Serum was not available for study. Radiographic changes resembled malignant OPT, but histopathological investigation showed morphologically normal OCs and evidence of intact bone resorption excluding OPT. Furthermore, mutation analysis was negative for eight genes associated with OPT or HBM. Instead, accelerated bone formation appeared to account for the HBM. Subsequently, trio-based whole exome sequencing revealed a heterozygous de novo missense mutation (c.1534_1535delinsAG, p.Asp512Ser) in exon 11 of RELA encoding Rela/p65. The mutation was then verified using bidirectional Sanger sequencing. Lipopolysaccharide stimulation of patient fibroblasts elicited impaired NF-κB responses compared with healthy control fibroblasts. Five unrelated patients with unexplained HBM did not show a RELA defect. Ours is apparently the first report of a mutation within the NF-κB complex in humans. The missense change is associated with neonatal osteosclerosis from in utero increased OB function rather than failed OC action. These findings demonstrate the importance of the Rela/p65 subunit within the NF-κB pathway for human skeletal homeostasis and represent a new genetic cause of HBM. © 2015 American Society for Bone and Mineral Research.

  8. A Comprehensive Validation Approach Using The RAVEN Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor the comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these gaps. In the following sections, the methodology employed, and its application to the newly developed thermal-hydraulic code RELAP-7, is reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
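
    The core idea, sampling the input space and scoring code responses against experimental data with more than one metric, can be sketched compactly. In the sketch below, a toy exponential cooldown stands in for a system-code run, and the sampling and metrics are deliberately simple; none of this reflects RAVEN's actual interfaces.

        # Input-space sampling plus multi-metric comparison, in miniature.
        import numpy as np

        rng = np.random.default_rng(42)
        t = np.linspace(0.0, 100.0, 51)
        # Synthetic "experimental" temperature trace (arbitrary units).
        exp_data = 600.0 * np.exp(-t / 40.0) + rng.normal(0.0, 5.0, t.size)

        def code_response(tau):              # toy stand-in for a system-code run
            return 600.0 * np.exp(-t / tau)

        taus = rng.uniform(30.0, 50.0, 200)  # sample an uncertain input parameter
        rmse = [np.sqrt(np.mean((code_response(tau) - exp_data) ** 2))
                for tau in taus]
        maxe = [np.max(np.abs(code_response(tau) - exp_data)) for tau in taus]
        for name, vals in (("RMSE", rmse), ("max abs error", maxe)):
            print(f"{name}: mean = {np.mean(vals):6.1f}, "
                  f"95th percentile = {np.percentile(vals, 95):6.1f}")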

  9. Resveratrol reduces senescence-associated secretory phenotype by SIRT1/NF-κB pathway in gut of the annual fish Nothobranchius guentheri.

    PubMed

    Liu, Shan; Zheng, Zhaodi; Ji, Shuhua; Liu, Tingting; Hou, Yanhan; Li, Shasha; Li, Guorong

    2018-06-13

    Senescent cells display a senescence-associated secretory phenotype (SASP), which contributes to aging. Resveratrol, an activator of SIRT1, has anti-aging, anti-inflammatory, anti-oxidant, anti-free-radical and other pharmacological effects. The annual fish genus Nothobranchius has become an emerging animal model for studying aging. However, the underlying mechanism by which resveratrol delays aging through SASP regulation has not been elucidated in vertebrates. In this study, the annual fish N. guentheri were fed resveratrol as a long-term treatment. The results showed that resveratrol reversed the intensification of senescence-associated β-galactosidase activity with aging, down-regulated levels of the SASP-associated proinflammatory cytokines IL-8 and TNFα, and up-regulated expression of the anti-inflammatory cytokine IL-10 in the gut of the fish. Resveratrol increased SIRT1 expression and inhibited NF-κB by decreasing RelA/p65, Ac-RelA/p65 and p-IκBα levels and by increasing the interaction between SIRT1 and RelA/p65. Moreover, resveratrol reversed the decline of intestinal epithelial cells (IECs) and intestinal stem cells (ISCs) caused by aging in the gut of the fish. Together, our results imply that resveratrol inhibited the SASP through the SIRT1/NF-κB signaling pathway and delayed aging of the annual fish N. guentheri. Copyright © 2018. Published by Elsevier Ltd.

  10. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple inputs such as MAPKs and CREB regulate multiple outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and the growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min, and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
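
    PLS regression with backward elimination is straightforward to prototype. The sketch below uses scikit-learn's PLSRegression on synthetic multiple-input/multiple-output data and greedily drops inputs while the held-out R² does not degrade; the data, tolerance, and component count are assumptions for illustration, not the study's configuration.

        # PLS regression with greedy backward elimination of input variables.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 10))                 # 10 candidate inputs
        Y = np.column_stack([X[:, 0] + 0.5 * X[:, 3],  # outputs driven by a few inputs
                             X[:, 3] - X[:, 7]]) + rng.normal(0.0, 0.2, (120, 2))
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

        def r2(cols):                                  # held-out R^2 for a column subset
            pls = PLSRegression(n_components=min(3, len(cols)))
            return pls.fit(X_tr[:, cols], Y_tr).score(X_te[:, cols], Y_te)

        cols = list(range(X.shape[1]))
        best, improved = r2(cols), True
        while improved and len(cols) > 1:
            improved = False
            for c in list(cols):
                trial = [k for k in cols if k != c]
                s = r2(trial)
                if s >= best - 1e-3:                   # drop input if R^2 holds up
                    cols, best, improved = trial, max(best, s), True
                    break
        print(f"retained inputs: {cols}, test R^2 = {best:.3f}")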

  11. MOOSE IPL Extensions (Control Logic)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Permann, Cody

    In FY-2015, the development of MOOSE was driven by the needs of the NEAMS MOOSE-based applications BISON, MARMOT, and RELAP-7. An emphasis was placed on the continued upkeep and improvement of MOOSE in support of the product line integration goals. New unified documentation tools have been developed, several improvements to regression testing have been made, and overall better software quality practices have been implemented. In addition, the MultiApps and Transfers systems have seen significant refactoring and robustness improvements, as has the “Restart and Recover” system in support of MultiApp simulations. Finally, a completely new “Control Logic” system has been engineered to replace the prototype system currently in use in the RELAP-7 code. The development of this system continues and is expected to handle existing needs as well as support future enhancements.

  12. I-NERI Quarterly Technical Report (April 1 to June 30, 2005)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang Oh; Prof. Hee Cheon NO; Prof. John Lee

    2005-06-01

    The objective of this Korean/United States laboratory/university collaboration is to develop new advanced computational methods for safety analysis codes for very-high-temperature gas-cooled reactors (VHTGRs) and to provide numerical and experimental validation of these computer codes. This study consists of five tasks for FY-03: (1) development of computational methods for the VHTGR, (2) theoretical modification of the aforementioned computer codes for molecular diffusion (RELAP5/ATHENA) and modeling of CO and CO2 equilibrium (MELCOR), (3) development of a state-of-the-art methodology for VHTGR neutronic analysis and calculation of accurate power distributions and decay heat deposition rates, (4) a reactor cavity cooling system experiment, and (5) a graphite oxidation experiment. Second quarter of Year 3: (A) Prof. NO and Kim continued Task 1. As a further plant application of the GAMMA code, we conducted two analyses: an IAEA GT-MHR benchmark calculation for LPCC and an air ingress analysis for a 600 MWt PMR. The GAMMA code shows a peak fuel temperature trend comparable to those of other countries' codes. The analysis results for air ingress show a much different trend from that of the previous PBR analysis: later onset of natural circulation and a less significant rise in graphite temperature. (B) Prof. Park continued Task 2. We have designed a new separate-effects test device having the same heat transfer area but a different diameter and total number of U-bends of the air cooling pipe. The new design has a smaller pressure drop in the air cooling pipe than the previous one, as it was designed with a larger diameter and fewer U-bends. With the device, additional experiments have been performed to obtain temperature distributions of the water tank and of the surface and center of the cooling pipe along the axis. The results will be used to optimize the design of the SNU-RCCS. (C) Prof. NO continued Task 3. The experimental work on air ingress is proceeding without any concerns: with nuclear graphite IG-110, various kinetic parameters and reaction rates for the C/CO2 reaction were measured. The rates of the C/CO2 reaction were then compared to those of the C/O2 reaction, and the rate equation for C/CO2 has been developed. (D) INL added models to RELAP5/ATHENA to calculate the chemical reactions in a VHTR during an air ingress accident. Limited testing of the models indicates that they are calculating a correct spatial distribution of gas compositions. (E) INL benchmarked NACOK natural circulation data. (F) Professor Lee et al. at the University of Michigan (UM) continued Task 5. The funding was received from the DOE Richland Office at the end of May, and the subcontract paperwork was delivered to the UM on the sixth of June. The objective of this task is to develop a state-of-the-art neutronics model for determining power distributions and decay heat deposition rates in a VHTGR core. Our effort during the reporting period covered reactor physics analysis of coated particles and coupled nuclear-thermal-hydraulic (TH) calculations, together with initial calculations for decay heat deposition rates in the core.

  13. Peak expiratory flow profiles delivered by pump systems. Limitations due to wave action.

    PubMed

    Miller, M R; Jones, B; Xu, Y; Pedersen, O F; Quanjer, P H

    2000-06-01

    Pump systems are currently used to test the performance of both spirometers and peak expiratory flow (PEF) meters, but for certain flow profiles the input signal (i.e., the requested profile) and the output profile can differ. We developed a mathematical model of wave action within a pump and compared the recorded flow profiles with both the input profiles and the output predicted by the model. Three American Thoracic Society (ATS) flow profiles and four artificial flow-versus-time profiles were delivered by a pump, first to a pneumotachograph (PT) on its own, then to the PT with a 32-cm upstream extension tube (which would favor wave action), and lastly with the PT in series with and immediately downstream of a mini-Wright peak flow meter. With the PT on its own, recorded flow for the seven profiles was 2.4 +/- 1.9% (mean +/- SD) higher than the pump's input flow, and similarly was 2.3 +/- 2.3% higher than the pump's output flow as predicted by the model. With the extension tube in place, the recorded flow was 6.6 +/- 6.4% higher than the input flow (range: 0.1 to 18.4%), but was only 1.2 +/- 2.5% higher than the output flow predicted by the model (range: -0.8 to 5.2%). With the mini-Wright meter in series, the flow recorded by the PT was on average 6.1 +/- 9.1% below the input flow (range: -23.8 to 2.5%), but was only 0.6 +/- 3.3% above the pump's output flow predicted by the model (range: -5.5 to 3.9%). The mini-Wright meter's reading (corrected for its nonlinearity) was on average 1.3 +/- 3.6% below the model's predicted output flow (range: -9.0 to 1.5%). The mini-Wright meter would be deemed outside ATS limits for accuracy for three of the seven profiles when compared with the pump's input PEF, but this would be true for only one profile when compared with the pump's output PEF as predicted by the model. Our study shows that the output flow from pump systems can differ from the input waveform depending on the operating configuration. This effect can be predicted with reasonable accuracy using a model based on nonsteady flow analysis that takes account of pressure wave reflections within pump systems.

  14. User’s Manual for SEEK TALK Full Scale Engineering Development Life Cycle Cost (LCC) Model. Volume II. Model Equations and Model Operations.

    DTIC Science & Technology

    1981-04-01

    Keywords: life cycle cost (LCC); LCC sensitivity analysis; LCC model; repair level analysis (RLA). Abstract (fragment): "... level analysis capability. Next it provides values for Air Force input parameters and instructions for contractor inputs, general operating ..." Contents include: Maintenance Manhour Requirements; Calculation of Repair Level Fractions; Cost Element Equations; Production Cost Element.

  15. Pretest analysis document for Test S-FS-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.

    This report documents the pretest calculations completed for Semiscale Test S-FS-7. This test will simulate a transient initiated by a 14.3% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear power plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The results of a RELAP5/MOD2/CY21 code calculation indicate that the test objectives for Test S-FS-7 can be achieved. The primary system overpressurization will occur but pose no threat to personnel or to plant integrity. 3 refs., 15 figs., 5 tabs.

  16. Pretest analysis document for Test S-FS-11

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.; Shaw, R.A.

    This report documents the pretest calculations completed for Semiscale Test S-FS-11. This test will simulate a transient initiated by a 50% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The results of a RELAP5/MOD2/CY21 code calculation indicate that the test objectives for Test S-FS-11 can be achieved. The primary system overpressurization will occur but pose no threat to personnel or plant integrity. 3 refs., 15 figs., 5 tabs.

  17. MODELING OF HUMAN EXPOSURE TO IN-VEHICLE PM2.5 FROM ENVIRONMENTAL TOBACCO SMOKE

    PubMed Central

    Cao, Ye; Frey, H. Christopher

    2012-01-01

    Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating the PM2.5 concentration with smoking in a motor vehicle. Recommendations for the ranges of inputs to the mass balance model are given based on a literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. The air exchange rate (ACH) and the deposition rate have wider relative ranges of variation than the other inputs, representing inter-individual variability in operations and inter-vehicle variability in performance, respectively. The cigarette smoking and emission rates, and the vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure also takes into account the near-road incremental PM2.5 concentration from on-road emissions. Results of the probabilistic study indicate that ETS is a key contributor to in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa. PMID:23060732
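
    The single-zone mass balance behind such a model has a simple closed form: dC/dt = E/V - (ACH + k)C, where E is the emission rate, V the cabin volume, ACH the air exchange rate, and k the deposition rate. The sketch below evaluates the analytic solution in Python; all parameter values are illustrative assumptions, not SHEDS-PM inputs.

        # Analytic solution of the single-zone mass balance
        #   dC/dt = E/V - (ACH + k) * C
        import math

        def concentration(t_h, E_mg_h, V_m3, ach_h, k_h, C0=0.0):
            """PM2.5 concentration (mg/m^3) after t_h hours of emission."""
            loss = ach_h + k_h
            c_ss = E_mg_h / (V_m3 * loss)          # steady-state concentration
            return c_ss + (C0 - c_ss) * math.exp(-loss * t_h)

        # One cigarette emitting ~10 mg PM2.5 over 10 min in a 3 m^3 cabin
        # (assumed values): E = 60 mg/h while smoking.
        E, V, ach, k = 60.0, 3.0, 10.0, 1.0        # mg/h, m^3, 1/h, 1/h
        for minutes in (2, 5, 10):
            c = concentration(minutes / 60.0, E, V, ach, k)
            print(f"after {minutes:2d} min of smoking: {1000.0 * c:6.0f} ug/m^3")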

  18. ANALYSIS OF BORON DILUTION TRANSIENTS IN PWRS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diamond, D.J.; Bromley, B.P.; Aronson, A.L.

    2004-02-04

    A study has been carried out with PARCS/RELAP5 to understand the consequences of hypothetical boron dilution events in pressurized water reactors. The scenarios of concern start with a small-break loss-of-coolant accident. If the event leads to boiling in the core and then the loss of natural circulation, a boron-free condensate can accumulate in the cold leg. The dilution event happens when natural circulation is re-established or a reactor coolant pump (RCP) is restarted in violation of operating procedures. This event is of particular concern in B&W reactors with a lowered-loop design and is a Generic Safety Issue for the U.S. Nuclear Regulatory Commission. The results of calculations with the reestablishment of natural circulation show that there is no unacceptable fuel damage. This is determined by calculating the maximum fuel pellet enthalpy, based on the three-dimensional model, and comparing it with the criterion for damage. The calculation is based on a model of a B&W reactor at the beginning of the fuel cycle. If an RCP is restarted, unacceptable fuel damage may be possible in plants with sufficiently large volumes of boron-free condensate in the cold leg.

  19. Glutamate-Bound NMDARs Arising from In Vivo-like Network Activity Extend Spatio-temporal Integration in a L5 Cortical Pyramidal Cell Model

    PubMed Central

    Farinella, Matteo; Ruedt, Daniel T.; Gleeson, Padraig; Lanore, Frederic; Silver, R. Angus

    2014-01-01

    In vivo, cortical pyramidal cells are bombarded by asynchronous synaptic input arising from ongoing network activity. However, little is known about how such ‘background’ synaptic input interacts with nonlinear dendritic mechanisms. We have modified an existing model of a layer 5 (L5) pyramidal cell to explore how dendritic integration in the apical dendritic tuft could be altered by the levels of network activity observed in vivo. Here we show that asynchronous background excitatory input increases neuronal gain and extends both temporal and spatial integration of stimulus-evoked synaptic input onto the dendritic tuft. The addition of fast and slow inhibitory synaptic conductances, with properties similar to those of dendrite-targeting interneurons, providing a ‘balanced’ background configuration, partially counteracted these effects, suggesting that inhibition can tune spatio-temporal integration in the tuft. Excitatory background input lowered the threshold for NMDA receptor-mediated dendritic spikes, extended their duration and increased the probability of additional regenerative events occurring in neighbouring branches. These effects were also observed in a passive model in which all the non-synaptic voltage-gated conductances were removed. Our results show that glutamate-bound NMDA receptors arising from ongoing network activity can provide a powerful spatially distributed nonlinear dendritic conductance. This may enable L5 pyramidal cells to change their integrative properties as a function of local network activity, potentially allowing both clustered and spatially distributed synaptic inputs to be integrated over extended timescales. PMID:24763087

  20. Risk-Informed External Hazards Analysis for Seismic and Flooding Phenomena for a Generic PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parisi, Carlo; Prescott, Steve; Ma, Zhegang

    This report describes the activities performed during FY2017 for the US-DOE Light Water Reactor Sustainability Risk-Informed Safety Margin Characterization (LWRS-RISMC) Industry Application #2. The scope of Industry Application #2 is to deliver a risk-informed external hazards safety analysis for a representative nuclear power plant. Following the advancements made during previous FYs (toolkit identification, model development), FY2017 focused on increasing the level of realism of the analysis and improving the tools and the coupling methodologies. In particular, the following objectives were achieved: calculation of building pounding and its effects on component seismic fragility; development of SAPHIRE PRA models for a 3-loop Westinghouse PWR; setup of a methodology for performing static-dynamic PRA coupling between the SAPHIRE and EMRALD codes; coupling of RELAP5-3D/RAVEN for performing Best-Estimate Plus Uncertainty analysis and automatic limit surface search; and execution of sample calculations demonstrating the capabilities of the toolkit in performing risk-informed external hazards safety analyses.

  1. Input selection and performance optimization of ANN-based streamflow forecasts in the drought-prone Murray Darling Basin region using IIS and MODWT algorithm

    NASA Astrophysics Data System (ADS)

    Prasad, Ramendra; Deo, Ravinesh C.; Li, Yan; Maraseni, Tek

    2017-11-01

    Forecasting streamflow is vital for strategically planning, utilizing and redistributing water resources. In this paper, a wavelet-hybrid artificial neural network (ANN) model integrated with an iterative input selection (IIS) algorithm (IIS-W-ANN) is evaluated for its statistical preciseness in forecasting monthly streamflow and is benchmarked against the M5 Tree model. To develop the hybrid IIS-W-ANN model, a global predictor matrix is constructed for three local hydrological sites (Richmond, Gwydir, and Darling River) in Australia's agricultural (Murray-Darling) Basin. Model inputs, comprising statistically significant lagged combinations of streamflow water level, are supplemented by meteorological data (i.e., precipitation, maximum and minimum temperature, mean solar radiation, vapor pressure and evaporation) as potential model inputs. To establish robust forecasting models, the iterative input selection (IIS) algorithm is applied to screen the best data from the predictor matrix and is integrated with the non-decimated maximum overlap discrete wavelet transform (MODWT) applied to the IIS-selected variables. This resolved the frequencies contained in the predictor data while constructing the wavelet-hybrid (i.e., IIS-W-ANN and IIS-W-M5 Tree) models. The forecasting ability of IIS-W-ANN is evaluated via the correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe Efficiency (ENS), root-mean-square error (RMSE), and mean absolute error (MAE), including the percentage RMSE and MAE. While the ANN models outperformed the M5 Tree models at all hydrological sites, the IIS variable selector was efficient in determining the appropriate predictors, as stipulated by the better performance of the IIS-coupled (ANN and M5 Tree) models relative to the models without IIS. When the IIS-coupled models are integrated with MODWT, the wavelet-hybrid IIS-W-ANN and IIS-W-M5 Tree attain significantly more accurate performance relative to their standalone counterparts. Importantly, IIS-W-ANN model accuracy outweighs that of IIS-ANN, as evidenced by a larger r and WI (by 7.5% and 3.8%, respectively) and a lower RMSE (by 21.3%). In comparison to the IIS-W-M5 Tree model, the IIS-W-ANN model yielded larger values of WI = 0.936-0.979 and ENS = 0.770-0.920. Correspondingly, the errors (RMSE and MAE) ranged from 0.162-0.487 m and 0.139-0.390 m, respectively, with relative errors RRMSE = (15.65-21.00)% and MAPE = (14.79-20.78)%. A distinct geographic signature is evident, where the most and least accurately forecasted streamflow data are attained for the Gwydir and Darling River, respectively. Conclusively, this study advocates the efficacy of iterative input selection, allowing the proper screening of model predictors, and subsequently its integration with MODWT, resulting in enhanced performance of the models applied in streamflow forecasting.
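
    A minimal version of the wavelet-hybrid idea can be sketched with PyWavelets, whose stationary wavelet transform (pywt.swt) is an undecimated transform in the same family as MODWT. The sketch below decomposes a synthetic monthly series and feeds the subband coefficients to a small neural network; the wavelet, level, network size, and data are assumptions, and a real forecasting study would also need careful handling of wavelet boundary effects to avoid using future information.

        # Wavelet-hybrid forecasting sketch: undecimated wavelet subbands as
        # inputs to a small ANN. Synthetic data; illustrative settings only.
        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n = 256                                  # months (power of 2 for swt)
        flow = np.sin(2 * np.pi * np.arange(n) / 12.0) + 0.3 * rng.normal(size=n)

        coeffs = pywt.swt(flow, "db4", level=2)  # [(cA, cD), ...], each length n
        X = np.column_stack([c for pair in coeffs for c in pair])[:-1]
        y = flow[1:]                             # predict the next month

        split = 200
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        mlp.fit(X[:split], y[:split])
        pred, obs = mlp.predict(X[split:]), y[split:]
        rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
        ens = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
        print(f"RMSE = {rmse:.3f}, Nash-Sutcliffe ENS = {ens:.3f}")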

  2. Model-free adaptive control of supercritical circulating fluidized-bed boilers

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L

    2014-12-16

    A novel 3-Input-3-Output (3×3) Fuel-Air Ratio Model-Free Adaptive (MFA) controller is introduced, which can effectively control key process variables including Bed Temperature, Excess O2, and Furnace Negative Pressure of combustion processes of advanced boilers. A novel 7-Input-7-Output (7×7) MFA control system is also described for controlling a combined 3-Input-3-Output (3×3) process of Boiler-Turbine-Generator (BTG) units and a 5×5 CFB combustion process of advanced boilers. Those boilers include Circulating Fluidized-Bed (CFB) Boilers and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.
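    The patent describes multivariable MFA controllers; as a hedged, single-loop illustration of the general model-free adaptive idea, the sketch below uses a textbook compact-form dynamic-linearization scheme from the MFAC literature, not the patented 3×3/7×7 design. The plant and all gains are illustrative assumptions.

    ```python
    # Minimal sketch of a SISO model-free adaptive controller: the pseudo-partial-
    # derivative (PPD) of a hypothetical plant is estimated online and used to
    # step the control input toward a setpoint.
    import numpy as np

    eta, mu = 0.8, 1.0     # PPD-estimator step size and weight
    rho, lam = 0.6, 2.0    # controller step size and weight
    phi = 1.0              # initial PPD estimate

    def plant(y, u):
        """Hypothetical nonlinear plant: next output from current output/input."""
        return 0.7 * y + 0.4 * np.tanh(u)

    y, u_prev, u = 0.0, 0.0, 0.0
    setpoint = 1.0
    for k in range(50):
        y_new = plant(y, u)
        du = u - u_prev
        if abs(du) > 1e-6:  # update the PPD from the last input/output increments
            phi += eta * du / (mu + du**2) * (y_new - y - phi * du)
        # MFAC control law: move u toward the setpoint along the estimated PPD
        u_prev, u = u, u + rho * phi / (lam + phi**2) * (setpoint - y_new)
        y = y_new
        if k % 10 == 0:
            print(f"k={k:2d}  y={y: .3f}  u={u: .3f}")
    ```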

  3. Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program

    NASA Technical Reports Server (NTRS)

    Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.

    1981-01-01

    The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5 generated modal input file for TETRA is also described with a worked sample.

  4. Solid rocket booster thermal radiation model. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Lee, A. L.

    1976-01-01

    A user's manual was prepared for the computer program of a solid rocket booster (SRB) thermal radiation model. The following information was included: (1) structure of the program, (2) input information required, (3) examples of input cards and output printout, (4) program characteristics, and (5) program listing.

  5. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    PubMed

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
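    A minimal sketch of the core mechanism described above, an integrate-and-fire neuron whose firing threshold adapts nonlinearly to the membrane voltage and jumps at each spike, is given below. All time constants, couplings, and input statistics are illustrative assumptions, not the fitted GIF parameters.

    ```python
    # Minimal sketch of an integrate-and-fire neuron with a nonlinear,
    # adaptive firing threshold (GIF-like; parameter values are assumptions).
    import numpy as np

    dt, T = 0.1, 1000.0                 # time step and duration [ms]
    n = int(T / dt)
    tau_m, tau_theta = 20.0, 30.0       # membrane / threshold time constants [ms]
    v_rest, v_reset = -70.0, -65.0      # resting and reset potentials [mV]
    theta0, beta = -50.0, 0.2           # baseline threshold and voltage coupling

    rng = np.random.default_rng(2)
    I = 1.3 + 5.0 * rng.normal(size=n)  # fluctuating input current (arbitrary units)

    v, theta = v_rest, theta0
    spikes = []
    for k in range(n):
        # Leaky integration of the fluctuating input
        v += dt / tau_m * (v_rest - v + 15.0 * I[k])
        # Threshold relaxes toward a level that rises with depolarization
        theta += dt / tau_theta * (theta0 + beta * max(v - v_rest - 10.0, 0.0) - theta)
        if v >= theta:
            spikes.append(k * dt)
            v = v_reset
            theta += 2.0  # spike-triggered threshold jump (adaptation)

    print(f"{len(spikes)} spikes, mean rate {1e3 * len(spikes) / T:.1f} Hz")
    ```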

  6. Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number

    NASA Technical Reports Server (NTRS)

    Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-01-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  7. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    DOE PAGES

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; ...

    2016-05-16

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Finally, coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  8. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    NASA Astrophysics Data System (ADS)

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-05-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  9. Technical support to the Nuclear Regulatory Commission for the boiling water reactor blowdown heat transfer program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rice, R.E.

    Results are presented of studies conducted by Aerojet Nuclear Company (ANC) in FY 1975 to support the Nuclear Regulatory Commission (NRC) on the boiling water reactor blowdown heat transfer (BWR-BDHT) program. The support provided by ANC is that of an independent assessor of the program to ensure that the data obtained are adequate for verification of analytical models used for predicting reactor response to a postulated loss-of-coolant accident. The support included reviews of program plans, objectives, measurements, and actual data. Additional activity included analysis of experimental system performance and evaluation of the RELAP4 computer code as applied to the experiments.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, A.; Garner, P.; Hanan, N.

    Thermal-hydraulic simulations have been performed using computational fluid dynamics (CFD) for the highly-enriched uranium (HEU) design of the IVG.1M reactor at the Institute of Atomic Energy (IAE) at the National Nuclear Center (NNC) in the Republic of Kazakhstan. Steady-state simulations were performed for both types of fuel assembly (FA), i.e. the FA in rows 1 & 2 and the FA in row 3, as well as for single pins in those FA (600 mm and 800 mm pins). Both single-pin calculations and bundle sectors have been simulated for the most conservative operating conditions, corresponding to the 10 MW output power, which corresponds to a pin unit cell Reynolds number of only about 7500. Simulations were performed using the commercial code STAR-CCM+ for the actual twisted pin geometry as well as a straight-pin approximation. Various Reynolds-Averaged Navier-Stokes (RANS) turbulence models gave different results, so some validation runs with a higher-fidelity Large Eddy Simulation (LES) code were performed given the lack of experimental data. These singled out the Realizable Two-Layer k-ε as the most accurate turbulence model for estimating surface temperature. Single-pin results for the twisted case, based on the average flow rate per pin and peak pin power, were conservative for peak clad surface temperature compared to the bundle results. The straight-pin calculations were also conservative compared to the twisted-pin simulations, as expected, but the single-pin straight case was not always conservative with regard to the straight-pin bundle. This was due to the straight-pin temperature distribution being strongly influenced by the pin orientation, particularly near the outer boundary. The straight-pin case also predicted the peak temperature to be in a different location than the twisted-pin case; this is a limitation of the straight-pin approach. The peak temperature pin was in a different location from the peak power pin in every case simulated, and occurred at an inner pin just before the enrichment change. The 600 mm case demonstrated a peak clad surface temperature of 370.4 K, while the 800 mm case had a temperature of 391.6 K. These temperatures are well below those necessary for boiling to occur at the rated pressure. Fuel temperatures are also well below the melting point. Future bundle work will include simulations of the proposed low-enriched uranium (LEU) design. Two transient scenarios were also investigated for the single-pin geometries. Both were “model” problems focused on pure thermal-hydraulic behavior, and as such were simple power changes that did not incorporate neutron kinetics modeling. The first scenario was a high-power ramp increase, while the second was a low-power step increase. A cylindrical RELAP model was also constructed to investigate its accuracy as compared to the higher-fidelity CFD. Comparisons between the two codes showed good agreement for peak temperatures in the fuel and at the cladding surface for both cases. In the step transient, temperatures at four axial levels were also computed; these showed greater but reasonable discrepancy, with RELAP outputting higher temperatures. These results provide some evidence that RELAP can be used with confidence in modeling transients for IVG.

  11. Life and reliability models for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Knorr, R. J.; Coy, J. J.

    1982-01-01

    Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load versus life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
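    The composition rule described above, a system Weibull reliability built from component Weibull reliabilities with life read off at a target reliability, can be sketched as follows. The shape and scale values are illustrative assumptions, not the paper's transmission data.

    ```python
    # Minimal sketch of Weibull system-reliability composition for a series
    # system: component survival probabilities multiply, and the system L10
    # life is the time at which system reliability falls to 90%.
    import numpy as np

    # (Weibull shape b, characteristic life theta [hours]) per component (assumed)
    components = {
        "sun gear":    (2.5, 9000.0),
        "ring gear":   (2.5, 30000.0),
        "planet gear": (2.5, 12000.0),
        "bearing":     (1.5, 6000.0),
    }

    def component_reliability(t, b, theta):
        """Two-parameter Weibull survival function R(t) = exp(-(t/theta)^b)."""
        return np.exp(-(t / theta) ** b)

    def system_reliability(t):
        # Series system: all components must survive
        return np.prod([component_reliability(t, b, th) for b, th in components.values()])

    ts = np.linspace(1.0, 20000.0, 20000)
    rs = np.array([system_reliability(t) for t in ts])
    l10 = ts[np.searchsorted(-rs, -0.90)]   # first time with R_sys <= 0.90
    print(f"system L10 life ~ {l10:.0f} hours")
    ```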

  12. Predicting the synaptic information efficacy in cortical layer 5 pyramidal neurons using a minimal integrate-and-fire model.

    PubMed

    London, Michael; Larkum, Matthew E; Häusser, Michael

    2008-11-01

    Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly, we demonstrate another aspect where using mutual information could be beneficial in evaluating the quality of a model, by measuring the mutual information between the model's output and the neuron's output. The SIE could thus be a useful tool for assessing the quality of single-neuron models in preserving the input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex, realistic neuronal networks.

  13. [Discrimination of donkey meat by NIR and chemometrics].

    PubMed

    Niu, Xiao-Ying; Shao, Li-Min; Dong, Fang; Zhao, Zhi-Lei; Zhu, Yan

    2014-10-01

    Donkey meat samples (n = 167) from different parts of the donkey body (neck, costalia, rump, and tendon), beef (n = 47), pork (n = 51) and mutton (n = 32) samples were used to establish near-infrared reflectance spectroscopy (NIR) classification models in the spectral range of 4,000~12,500 cm(-1). The accuracies of classification models constructed by Mahalanobis distance analysis, soft independent modeling of class analogy (SIMCA) and least squares-support vector machine (LS-SVM), each combined with pretreatments of Savitzky-Golay smoothing (5, 15 and 25 points) and derivatives (first and second), multiplicative scatter correction and standard normal variate, were compared. The optimal models for intact samples were obtained by Mahalanobis distance analysis with the first 11 principal components (PCs) from the original spectra as inputs and by LS-SVM with the first 6 PCs as inputs, which correctly classified 100% of the calibration set and 98.96% of the prediction set. For minced samples of 7 mm diameter, the optimal result was attained by LS-SVM with the first 5 PCs from the original spectra as inputs, which gained an accuracy of 100% for calibration and 97.53% for prediction. For a minced diameter of 5 mm, the SIMCA model with the first 8 PCs from the original spectra as inputs correctly classified 100% of the calibration and prediction sets. For a minced diameter of 3 mm, Mahalanobis distance analysis and SIMCA models both achieved 100% accuracy for calibration and prediction, with the first 7 and 9 PCs from the original spectra as inputs, respectively. In these models, donkey meat samples were all correctly classified (100%) in both calibration and prediction. The results show that NIR combined with chemometric methods is feasible for discriminating donkey meat from other meats.
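    A minimal sketch of the PCA-plus-Mahalanobis-distance classification used above is given below, with synthetic stand-ins for the NIR spectra; the 11-PC choice follows the abstract's intact-sample model, and everything else is an assumption.

    ```python
    # Minimal sketch: project "spectra" onto principal components, then assign
    # each sample to the class with the smaller Mahalanobis distance in PC space.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    # Synthetic "spectra": two classes with slightly different band shapes
    X_donkey = rng.normal(0.0, 1.0, (80, 200)) + np.linspace(0, 1, 200)
    X_other = rng.normal(0.0, 1.0, (80, 200)) + np.linspace(1, 0, 200)
    X = np.vstack([X_donkey, X_other])
    y = np.array([0] * 80 + [1] * 80)

    pca = PCA(n_components=11)          # first 11 PCs, as in the intact-sample model
    scores = pca.fit_transform(X)

    # Per-class mean and inverse covariance in PC space
    stats = {}
    for c in (0, 1):
        s = scores[y == c]
        stats[c] = (s.mean(axis=0), np.linalg.inv(np.cov(s, rowvar=False)))

    def classify(score):
        """Assign the class with the smaller squared Mahalanobis distance."""
        d = {c: (score - m) @ vi @ (score - m) for c, (m, vi) in stats.items()}
        return min(d, key=d.get)

    pred = np.array([classify(s) for s in scores])
    print(f"training accuracy: {(pred == y).mean():.2%}")
    ```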

  14. Double-input compartmental modeling and spectral analysis for the quantification of positron emission tomography data in oncology

    NASA Astrophysics Data System (ADS)

    Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.

    2012-04-01

    In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R2 = 0.91) and metabolite [18F]FBAL (R2 = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R2 = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R2 = 0.33 for Ki). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.
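    The AIC-based selection between single-input and double-input fits described above reduces to a small computation; a sketch with placeholder residuals and parameter counts (not the paper's fitted values):

    ```python
    # Minimal sketch of AIC model selection between SI and DI fits.
    import numpy as np

    def aic(rss, n, k):
        """AIC for least-squares fits: n*ln(RSS/n) + 2k."""
        return n * np.log(rss / n) + 2 * k

    n = 30                             # number of TAC time frames (placeholder)
    aic_si = aic(rss=4.2, n=n, k=4)    # SI compartmental model, 4 parameters
    aic_di = aic(rss=1.8, n=n, k=6)    # DI model: the extra input adds parameters
    print("prefer DI" if aic_di < aic_si else "prefer SI")
    ```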

  15. Dynamic Modeling Strategy for Flow Regime Transition in Gas-Liquid Two-Phase Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia Wang; Xiaodong Sun; Benjamin Doup

    In modeling gas-liquid two-phase flows, the concept of flow regimes has been widely used to characterize the global interfacial structure of the flows. Nearly all constitutive relations that provide closures to the interfacial transfers in two-phase flow models, such as the two-fluid model, are flow regime dependent. Current nuclear reactor safety analysis codes, such as RELAP5, classify flow regimes using flow regime maps or transition criteria that were developed for steady-state, fully-developed flows. As two-phase flows are dynamic in nature, it is important to model the flow regime transitions dynamically to more accurately predict the two-phase flows. The present work aims to develop a dynamic modeling strategy to determine flow regimes in gas-liquid two-phase flows through the introduction of interfacial area transport equations (IATEs) within the framework of a two-fluid model. The IATE is a transport equation that models the interfacial area concentration by considering the creation of interfacial area, by fluid particle (bubble or liquid droplet) disintegration, boiling and evaporation, and its destruction, by fluid particle coalescence and condensation. For flow regimes beyond bubbly flows, a two-group IATE has been proposed, in which bubbles are divided into two groups based on their size and shape, namely group-1 and group-2 bubbles. A preliminary approach to dynamically identify the flow regimes is discussed, in which discriminators are based on the predicted information, such as the void fraction and interfacial area concentration. The flow regime predicted with this method shows good agreement with the experimental observations.
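    For reference, a commonly quoted one-group form of the IATE from the literature is sketched below (notation varies across sources; the source/sink terms model bubble coalescence, breakup, and phase change, and the two-group formulation adds inter-group transfer terms):

    ```latex
    \frac{\partial a_i}{\partial t} + \nabla \cdot \left( a_i \, \vec{v}_i \right)
      = \frac{2}{3}\,\frac{a_i}{\alpha}
        \left( \frac{\partial \alpha}{\partial t}
             + \nabla \cdot \left( \alpha \, \vec{v}_g \right) \right)
      + \sum_j \phi_j
    ```

    where \(a_i\) is the interfacial area concentration, \(\alpha\) the void fraction, \(\vec{v}_i\) the interfacial velocity, \(\vec{v}_g\) the gas velocity, and \(\phi_j\) the source and sink terms.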

  16. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    NASA Astrophysics Data System (ADS)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Discrete wavelet transform was applied to decompose ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead, using multivariate inputs. Forecasting accuracy of peak values and at longer lead times was significantly improved.

  17. Investigation of Effects of Varying Model Inputs on Mercury Deposition Estimates in the Southwest US

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the continental United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA)...

  18. Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeHart, Mark; Baker, Benjamin; Ortensi, Javier

    Although analysis methods have advanced significantly in the last two decades, high fidelity multi-physics methods for reactor systems have been under development for only a few years and are not presently mature nor deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics is sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continues to be collected to attempt to simulate the behavior of experiments and calibration transients, but it will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the current INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application for TREAT operations and experiment design. This document describes the collaboration between modeling and simulation staff and the restart, operations, instrumentation and experiment development teams to be able to effectively interact and achieve successful validation work during restart testing.

  19. Improvement of Meteorological Inputs for TexAQS-II Air Quality Simulations

    NASA Astrophysics Data System (ADS)

    Ngan, F.; Byun, D.; Kim, H.; Cheng, F.; Kim, S.; Lee, D.

    2008-12-01

    An air quality forecasting system (UH-AQF) for Eastern Texas, operated by the Institute for Multidimensional Air Quality Studies (IMAQS) at the University of Houston, uses the Fifth-Generation PSU/NCAR Mesoscale Model (MM5) as the meteorological driver for modeling air quality with the Community Multiscale Air Quality (CMAQ) model. While the forecasting system was successfully used for the planning and implementation of various measurement activities, evaluations of the forecasting results revealed a few systematic problems in the numerical simulations. From comparison with observations, we observe at some times over-prediction of northerly winds caused by inaccurate synoptic inputs, and at other times too-strong southerly winds caused by local sea breeze development. Discrepancies in maximum and minimum temperature are also seen on certain days. Precipitation events, as well as clouds, are occasionally simulated at incorrect locations and times. Unrealistic thunderstorms are sometimes simulated, causing unrealistically strong outflows. To understand the physical and chemical processes influencing air quality measures, a proper description of real-world meteorological conditions is essential. The objective of this study is to generate better meteorological inputs than the AQF results to support the chemistry modeling. We utilized existing objective analysis and nudging tools in the MM5 system to develop the MUltiscale Nest-down Data Assimilation System (MUNDAS), which incorporates extensive meteorological observations available in the simulated domain for the retrospective simulation of the TexAQS-II period. With the re-simulated meteorological input, we are able to better predict ozone events during the TexAQS-II period. In addition, base datasets in MM5 such as land use/land cover, vegetation fraction, soil type and sea surface temperature are updated with satellite data to represent surface features more accurately. These are key physical parameter inputs affecting the transfer of heat, momentum and soil moisture in land-surface processes in MM5. Using the accurate base input datasets, we see improved predictions of ground temperatures, winds and even thunderstorm activity within the boundary layer.

  20. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons

    PubMed Central

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-01-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter—describing somatic integration—and the spike-history filter—accounting for spike-frequency adaptation—dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations. PMID:26907675

  1. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    NASA Astrophysics Data System (ADS)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

    In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data, consisting of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), were used as inputs to the LSSVM, MARS and M5T models. The three models were applied for each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model differs from one station to another.

  2. DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James; Bayless, Paul; Strydom, Gerhard

    2016-11-01

    Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas Cooled Reactor (HTGR) initiated in 2012 aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to the neutron cross-section covariances. Phase II is oriented towards the investigation of propagated uncertainties from the lattice to the coupled neutronics/thermal-hydraulics core calculations. Nominal results for the prismatic single block (Ex.I-2a) and super cell models (Ex.I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross-section generation. In this work, the TRITON/NEWT-flux-weighted cross sections obtained for Ex.I-2a and various models of Ex.I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of these variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.

  3. Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.

    Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, tritium is desirable to capture for fusion fuel. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT, but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this system, the augmented TRANSFORM library includes dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.
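    The verification pattern described above, comparing a simulated diffusion result against an analytical solution, can be illustrated with a minimal 1-D sketch: explicit finite differences checked against the semi-infinite-slab erfc solution. The diffusivity, geometry, and times are illustrative assumptions, not TRANSFORM's models.

    ```python
    # Minimal sketch: explicit 1-D diffusion with a fixed surface concentration,
    # verified against c(x,t) = c0 * erfc(x / (2*sqrt(D*t))).
    import numpy as np
    from scipy.special import erfc

    D = 1.0e-9                       # diffusivity [m^2/s] (illustrative)
    c0 = 1.0                         # fixed surface concentration (normalized)
    L, nx = 8e-3, 200                # slab deep enough to mimic a semi-infinite body
    dx = L / nx
    dt = 0.4 * dx**2 / D             # below the explicit stability limit of 0.5
    x = np.arange(nx) * dx

    c = np.zeros(nx)
    c[0], t = c0, 0.0
    while t < 3600.0:                # march one hour
        c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        t += dt

    analytic = c0 * erfc(x / (2.0 * np.sqrt(D * t)))
    print(f"max abs error vs analytical solution: {np.max(np.abs(c - analytic)):.2e}")
    ```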

  4. Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion

    DOE PAGES

    Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.

    2018-03-20

    Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, tritium is desirable to capture for fusion fuel. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT, but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this system, the augmented TRANSFORM library includes dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.

  5. NEAMS update quarterly report for January - March 2012.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, K.S.; Hayes, S.; Pointer, D.

    Quarterly highlights are: (1) The integration of Denovo and AMP was demonstrated in an AMP simulation of the thermo-mechanics of a complete fuel assembly; (2) Bison was enhanced with a mechanistic fuel cracking model; (3) Mechanistic algorithms were incorporated into various lower-length-scale models to represent fission gases and dislocations in UO2 fuels; (4) Marmot was improved to allow faster testing of mesoscale models using larger problem domains; (5) Component models of reactor piping were developed for use in Relap-7; (6) The mesh generator of Proteus was updated to accept a mesh specification from Moose and equations were formulated for the intermediate-fidelity Proteus-2D1D module; (7) A new pressure solver was implemented in Nek5000 and demonstrated to work 2.5 times faster than the previous solver; (8) Work continued on volume-holdup models for two fuel reprocessing operations: voloxidation and dissolution; (9) Progress was made on a pyroprocessing model and the characterization of pyroprocessing emission signatures; (10) A new 1D groundwater waste transport code was delivered to the used fuel disposition (UFD) campaign; (11) Efforts on waste form modeling included empirical simulation of sodium-borosilicate glass compositions; (12) The Waste team developed three prototypes for modeling hydride reorientation in fuel cladding during very long-term fuel storage; (13) A benchmark demonstration problem (fission gas bubble growth) was modeled to evaluate the capabilities of different meso-scale numerical methods; (14) Work continued on a hierarchical up-scaling framework to model structural materials by directly coupling dislocation dynamics and crystal plasticity; (15) New 'importance sampling' methods were developed and demonstrated to reduce the computational cost of rare-event inference; (16) The survey and evaluation of existing data and knowledge bases was updated for NE-KAMS; (17) The NEAMS Early User Program was launched; (18) The Nuclear Regulatory Commission (NRC) Office of Regulatory Research was introduced to the NEAMS program; (19) The NEAMS overall software quality assurance plan (SQAP) was revised to version 1.5; and (20) Work continued on NiCE and its plug-ins and other utilities, such as Cubit and VisIt.

  6. A comparison between conventional and LANDSAT based hydrologic modeling: The Four Mile Run case study

    NASA Technical Reports Server (NTRS)

    Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.

    1976-01-01

    Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT based methods for estimating the land cover based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT based approach is highly cost effective for planning model studies. The conventional approach to define inputs was based on 1:3600 aerial photos, required 110 man-days and a total cost of $14,000. The LANDSAT based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT based models gave similar results relative to discharges and estimated annual damages expected from no flood control, channelization, and detention storage alternatives.

  7. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope.

    PubMed

    Chang, Cheng-Yang; Chen, Tsung-Lin

    2017-10-31

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in the future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the "open loop sensitivity" of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  8. A novel cost-effectiveness model of prescription eicosapentaenoic acid extrapolated to secondary prevention of cardiovascular diseases in the United States.

    PubMed

    Philip, Sephy; Chowdhury, Sumita; Nelson, John R; Benjamin Everett, P; Hulme-Lowe, Carolyn K; Schmier, Jordana K

    2016-10-01

    Given the substantial economic and health burden of cardiovascular disease and the residual cardiovascular risk that remains despite statin therapy, adjunctive therapies are needed. The purpose of this model was to estimate the cost-effectiveness of high-purity prescription eicosapentaenoic acid (EPA) omega-3 fatty acid intervention in secondary prevention of cardiovascular diseases in statin-treated patient populations extrapolated to the US. The deterministic model utilized inputs for cardiovascular events, costs, and utilities from published sources. Expert opinion was used when assumptions were required. The model takes the perspective of a US commercial, third-party payer with costs presented in 2014 US dollars. The model extends to 5 years and applies a 3% discount rate to costs and benefits. Sensitivity analyses were conducted to explore the influence of various input parameters on costs and outcomes. Using base case parameters, EPA-plus-statin therapy compared with statin monotherapy resulted in cost savings (total 5-year costs $29,393 vs $30,587 per person, respectively) and improved utilities (average 3.627 vs 3.575, respectively). The results were not sensitive to multiple variations in model inputs and consistently identified EPA-plus-statin therapy to be the economically dominant strategy, with both lower costs and better patient utilities over the modeled 5-year period. The model is only an approximation of reality and does not capture all complexities of a real-world scenario without further inputs from ongoing trials. The model may under-estimate the cost-effectiveness of EPA-plus-statin therapy because it allows only a single event per patient. This novel model suggests that combining EPA with statin therapy for secondary prevention of cardiovascular disease in the US may be a cost-saving and more compelling intervention than statin monotherapy.
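    The structure of the comparison, discounting five years of per-strategy costs and utilities and then checking for dominance or computing an incremental ratio, can be sketched as follows. All cost and utility numbers are placeholders, not the model's published inputs.

    ```python
    # Minimal sketch of a deterministic cost-utility comparison with a 3%
    # discount rate over a 5-year horizon.
    years, disc = 5, 0.03

    def discounted_totals(annual_cost, annual_utility):
        cost = sum(annual_cost / (1 + disc) ** t for t in range(years))
        qaly = sum(annual_utility / (1 + disc) ** t for t in range(years))
        return cost, qaly

    cost_statin, qaly_statin = discounted_totals(6500.0, 0.760)   # placeholders
    cost_combo, qaly_combo = discounted_totals(6200.0, 0.770)     # EPA-plus-statin

    d_cost, d_qaly = cost_combo - cost_statin, qaly_combo - qaly_statin
    if d_cost <= 0 and d_qaly >= 0:
        print("EPA-plus-statin dominates (lower cost, higher utility)")
    else:
        print(f"ICER = {d_cost / d_qaly:,.0f} $/QALY")
    ```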

  9. A model of TMS-induced I-waves in motor cortex.

    PubMed

    Rusu, Cătălin V; Murakami, Max; Ziemann, Ulf; Triesch, Jochen

    2014-01-01

    Transcranial magnetic stimulation (TMS) allows to manipulate neural activity non-invasively, and much research is trying to exploit this ability in clinical and basic research settings. In a standard TMS paradigm, single-pulse stimulation over motor cortex produces repetitive responses in descending motor pathways called I-waves. However, the details of how TMS induces neural activity patterns in cortical circuits to produce these responses remain poorly understood. According to a traditional view, I-waves are due to repetitive synaptic inputs to pyramidal neurons in layer 5 (L5) of motor cortex, but the potential origin of such repetitive inputs is unclear. Here we aim to test the plausibility of an alternative mechanism behind D- and I-wave generation through computational modeling. This mechanism relies on the broad distribution of conduction delays of synaptic inputs arriving at different parts of L5 cells' dendritic trees and their spike generation mechanism. Our model consists of a detailed L5 pyramidal cell and a population of layer 2 and 3 (L2/3) neurons projecting onto it with synapses exhibiting short-term depression. I-waves are simulated as superpositions of spike trains from a large population of L5 cells. Our model successfully reproduces all basic characteristics of I-waves observed in epidural responses during in vivo recordings of conscious humans. In addition, it shows how the complex morphology of L5 neurons might play an important role in the generation of I-waves. In the model, later I-waves are formed due to inputs to distal synapses, while earlier ones are driven by synapses closer to the soma. Finally, the model offers an explanation for the inhibition and facilitation effects in paired-pulse stimulation protocols. In contrast to previous models, which required either neural oscillators or chains of inhibitory interneurons acting upon L5 cells, our model is fully feed-forward without lateral connections or loops. It parsimoniously explains findings from a range of experiments and should be considered as a viable alternative explanation of the generating mechanism of I-waves. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Double-input compartmental modeling and spectral analysis for the quantification of positron emission tomography data in oncology.

    PubMed

    Tomasi, G; Kimberley, S; Rosso, L; Aboagye, E; Turkheimer, F

    2012-04-07

    In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [¹¹C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[¹⁸F]fluorouracil (5-[¹⁸F]FU) and [¹⁸F]fluorothymidine ([¹⁸F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[¹⁸F]FU and to tumor, vertebra and liver for [¹⁸F]FLT were analyzed. For 5-[¹⁸F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[¹⁸F]FU (R(2) = 0.91) and metabolite [¹⁸F]FBAL (R(2) = 0.99). For [¹⁸F]FLT, the DI methods provided notable improvements but less substantial than for 5-[¹⁸F]FU due to the lower rate of metabolism of [¹⁸F]FLT. On the basis of the AIC values, agreement between [¹⁸F]FLT K(i) estimated with the SI and DI models was good (R² = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [¹⁸F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R² = 0.33 for K(i)). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.

  11. Pretest analysis document for Test S-NH-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owca, W.A.

    This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY3601 code for Semiscale MOD-2C Test S-NH-1. The test will simulate the shear of a small diameter penetration of a cold leg, equivalent to 0.5% of the cold leg flow area. The high pressure injection system is assumed to be inoperative throughout the transient. The recovery procedure consists of latching open both steam generator ADVs while feeding with auxiliary feedwater, and accumulator operation. Recovery will be initiated upon a peak cladding temperature of 811 K (1000°F). The test will be terminated when primary pressure has been reduced to the low pressure injection system setpoint of 1.38 MPa (200 psia). The calculated results indicate that the test objectives can be achieved and the proposed test scenario poses no threat to personnel or to plant integrity. 12 figs.

  12. Factors Affecting Energy Absorption of a Plate during Shock Wave Impact Using a Damage Material Model

    DTIC Science & Technology

    2010-08-07

    [Garbled table-of-contents and body excerpt; recoverable items: sections on the Abaqus VDLOAD subroutine, a Python script to convert an Abaqus input file to an LS-DYNA input file, and the pressures applied to the model through the Abaqus/Explicit VDLOAD subroutine.]

  13. Late-phase synthesis of IκBα insulates the TLR4-activated canonical NF-κB pathway from noncanonical NF-κB signaling in macrophages

    PubMed Central

    Mukherjee, Tapas; Taye, Nandaraj; Vijayaragavan, Bharath; Chattopadhyay, Samit; Gomes, James; Basak, Soumen

    2017-01-01

    The nuclear factor κB (NF-κB) transcription factors coordinate the inflammatory immune response during microbial infection. Pathogenic substances engage canonical NF-κB signaling through the heterodimer RelA:p50, which is subjected to rapid negative feedback by inhibitor of κBα (IκBα). The noncanonical NF-κB pathway is required for the differentiation of immune cells; however, crosstalk between both pathways can occur. Concomitantly activated noncanonical signaling generates p52 from the p100 precursor. The synthesis of p100 is induced by canonical signaling, leading to formation of the late-acting RelA:p52 heterodimer. This crosstalk prolongs inflammatory RelA activity in epithelial cells to ensure pathogen clearance. We found that the Toll-like receptor 4 (TLR4)–activated canonical NF-κB signaling pathway is insulated from lymphotoxin β receptor (LTβR)–induced noncanonical signaling in mouse macrophage cell lines. Combined computational and biochemical studies indicated that the extent of NF-κB–responsive expression of Nfkbia, which encodes IκBα, inversely correlated with crosstalk. The Nfkbia promoter showed enhanced responsiveness to NF-κB activation in macrophages compared to that in fibroblasts. We found that this hyperresponsive promoter engaged the RelA:p52 dimer generated during costimulation of macrophages through TLR4 and LTβR to trigger synthesis of IκBα at late time points, which prevented the late-acting RelA crosstalk response. Together, these data suggest that despite the presence of identical signaling networks in cells of diverse lineages, emergent crosstalk between signaling pathways is subject to cell type–specific regulation. We propose that the insulation of canonical and noncanonical NF-κB pathways limits the deleterious effects of macrophage-mediated inflammation. PMID:27923915

  14. The Temporal Tuning of the Drosophila Motion Detectors Is Determined by the Dynamics of Their Input Elements.

    PubMed

    Arenz, Alexander; Drews, Michael S; Richter, Florian G; Ammer, Georg; Borst, Alexander

    2017-04-03

    Detecting the direction of motion contained in the visual scene is crucial for many behaviors. However, because single photoreceptors only signal local luminance changes, motion detection requires a comparison of signals from neighboring photoreceptors across time in downstream neuronal circuits. For signals to coincide on readout neurons that thus become motion and direction selective, different input lines need to be delayed with respect to each other. Classical models of motion detection rely on non-linear interactions between two inputs after different temporal filtering. However, recent studies have suggested the requirement for at least three, not only two, input signals. Here, we comprehensively characterize the spatiotemporal response properties of all columnar input elements to the elementary motion detectors in the fruit fly, T4 and T5 cells, via two-photon calcium imaging. Between these input neurons, we find large differences in temporal dynamics. Based on this, computer simulations show that only a small subset of possible arrangements of these input elements maps onto a recently proposed algorithmic three-input model in a way that generates a highly direction-selective motion detector, suggesting plausible network architectures. Moreover, modulating the motion detection system by octopamine-receptor activation, we find the temporal tuning of T4 and T5 cells to be shifted toward higher frequencies, and this shift can be fully explained by the concomitant speeding of the input elements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of the unknown time-invariant parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and unknown input excitations.
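    A heavily reduced sketch of the estimation machinery is given below: an unscented Kalman filter jointly tracking the response states and one unknown stiffness parameter of a single-DOF oscillator with known input and a noisy displacement measurement. A full implementation would wrap a nonlinear FE model and also estimate the unknown input excitation; all numbers here are illustrative assumptions.

    ```python
    # Minimal UKF sketch: augment the state with a stiffness parameter modeled
    # as a random walk, and estimate it from noisy displacement measurements.
    import numpy as np

    rng = np.random.default_rng(4)
    m, c, k_true = 1.0, 0.6, 40.0
    dt, nsteps = 0.01, 2000
    ag = 2.0 * np.sin(2 * np.pi * np.arange(nsteps) * dt)  # known excitation

    def f(s, a):
        """One Euler step of [displacement, velocity, stiffness]."""
        x, v, k = s
        return np.array([x + dt * v, v + dt * (-(c * v + k * x) / m - a), k])

    # Generate noisy displacement "measurements" from the true system
    s_true = np.array([0.0, 0.0, k_true])
    zs = []
    for a in ag:
        s_true = f(s_true, a)
        zs.append(s_true[0] + rng.normal(scale=1e-4))

    # UKF setup (Julier symmetric sigma points, kappa = 1)
    n, kappa = 3, 1.0
    wts = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    wts[0] = kappa / (n + kappa)
    mean = np.array([0.0, 0.0, 20.0])        # deliberately poor initial stiffness
    P = np.diag([1e-6, 1e-6, 200.0])
    Q = np.diag([1e-10, 1e-10, 1e-3])
    R = 1e-8

    for a, z in zip(ag, zs):
        S = np.linalg.cholesky((n + kappa) * P)
        chi = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
        chi = np.array([f(p, a) for p in chi])           # propagate through model
        mean = wts @ chi
        d = chi - mean
        P = d.T @ (wts[:, None] * d) + Q
        # Update with the scalar displacement measurement h(s) = s[0]
        zp = wts @ chi[:, 0]
        dz = chi[:, 0] - zp
        Pzz = wts @ (dz * dz) + R
        Pxz = d.T @ (wts * dz)
        K = Pxz / Pzz
        mean = mean + K * (z - zp)
        P = P - np.outer(K, K) * Pzz

    print(f"estimated stiffness: {mean[2]:.2f} (true {k_true})")
    ```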

  16. Sensitivity of the Community Multiscale Air Quality (CMAQ) Model v4.7 Results for the Eastern United States to MM5 and WRF Meteorological Drivers

    EPA Science Inventory

    This paper presents a comparison of the operational performance of two Community Multiscale Air Quality (CMAQ) model v4.7 simulations that utilize input data from the 5th generation Mesoscale Model MM5 and the Weather Research and Forecasting (WRF) meteorological models.

  17. A Comparison of Crop Yields Using El Nino and Non-El Nino Climatological Data in a Crop Model

    DTIC Science & Technology

    1990-01-01

    [Garbled excerpt of the crop model's input menus; recoverable items: irrigation inputs and water balance switch, fertilizer inputs, selection of a new variety, soil profile inputs, genetic specific constants, and a runoff curve number of 79.0.]

  18. The potential of different artificial neural network (ANN) techniques in daily global solar radiation modeling based on meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.

    2010-08-15

    The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered: (I) day of the year, daily mean air temperature and relative humidity as inputs and daily GSR as output; (II) day of the year, daily mean air temperature and sunshine hours as inputs and daily GSR as output; (III) day of the year, daily mean air temperature, relative humidity and sunshine hours as inputs and daily GSR as output; (IV) day of the year, daily mean air temperature, relative humidity, sunshine hours and evaporation as inputs and daily GSR as output; (V) day of the year, daily mean air temperature, relative humidity, sunshine hours and wind speed as inputs and daily GSR as output; (VI) day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (the predicted values of the best ANN model, MLP-V, have a mean absolute percentage error (MAPE) of about 5.21%, versus 10.02% for the best CGSRP model, CGSRP 5).
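
    As a sketch of how one such configuration could be set up, the snippet below trains an MLP on combination (V) inputs and scores it with MAPE. The arrays are synthetic stand-ins for the Dezful records; the layer size and iteration count are illustrative assumptions, not values from the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Columns: day of year, mean air temperature, relative humidity,
      # sunshine hours, wind speed (combination V); target: daily GSR.
      rng = np.random.default_rng(42)
      X_train, y_train = rng.random((1200, 5)), rng.random(1200) + 0.1
      X_test, y_test = rng.random((214, 5)), rng.random(214) + 0.1

      mlp = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,),
                                       max_iter=3000, random_state=0))
      mlp.fit(X_train, y_train)
      mape = 100.0 * np.mean(np.abs((y_test - mlp.predict(X_test)) / y_test))
      print(f"MAPE: {mape:.2f}%")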

  19. Piloted Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  20. Modeling and Simulation of the ITER First Wall/Blanket Primary Heat Transfer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ying, Alice; Popov, Emilian L

    2011-01-01

    ITER inductive power operation is modeled and simulated using a thermal-hydraulics system code (RELAP5) integrated with a 3-D CFD (SC-Tetra) code. The Primary Heat Transfer System (PHTS) functions are predicted together with the operational ranges of the main parameters. The control algorithm strategy and derivation are summarized as well. The First Wall and Blanket modules are the primary components of the PHTS, used to remove the major part of the thermal heat from the plasma. The modules represent a set of flow channels in a solid metal structure that serve to absorb the radiation heat and nuclear heating from the fusion reactions and to provide shielding for the vacuum vessel. The blanket modules are water cooled. The cooling is forced convective with constant blanket inlet temperature and mass flow rate. Three independent water loops supply coolant to the three blanket sectors. The main equipment of each loop consists of a pump, a steam pressurizer and a heat exchanger. A major feature of ITER is the pulsed operation. The plasma does not burn continuously, but in pulses separated by long periods of no power. This specific feature causes design challenges to accommodate the thermal expansion of the coolant during the pulse period and requires active temperature control to maintain a constant blanket inlet temperature.

  1. Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs

    NASA Astrophysics Data System (ADS)

    Harvey, David Benjamin Paul

    A one-dimensional, multi-scale, coupled, transient, mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the five layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model, and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; it represents the first stochastic-input-driven unit cell performance model. The stochastic-input-driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potentially low-performing MEA materials, provide an explanation for the performance of low-Pt-loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.

  2. Noise Exposure Model MOD-5: Volume 1

    DOT National Transportation Integrated Search

    1971-06-01

    The report contains three sections, the first two of which are in Volume 1. Volume 1 contains an airport analysis which describes the noise exposure model MOD-5 from the perspective of analysing an airport in order to develop the program input mo...

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis; Mandelli, Diego; Prescott, Steven

    The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated from these plants via power uprates. In order to evaluate the impact of these factors on the safety of the plant, the Risk Informed Safety Margin Characterization (RISMC) project aims to provide insight to decision makers through a series of simulations of the plant dynamics for different initial conditions (e.g., probabilistic analysis and uncertainty quantification). This report focuses, in particular, on the application of a RISMC detailed demonstration case study for an emergent issue using the RAVEN and RELAP-7 tools. This case study looks at the impact of several challenges to a hypothetical pressurized water reactor, including: (1) a power uprate, (2) a potential loss of off-site power followed by the possible loss of all diesel generators (i.e., a station black-out event), (3) an earthquake-induced station blackout, and (4) a potential earthquake-induced tsunami flood. The analysis is performed by using a set of codes: a thermal-hydraulic code (RELAP-7), a flooding simulation tool (NEUTRINO) and a stochastic analysis tool (RAVEN) – these are currently under development at the Idaho National Laboratory.

  4. Impact of input data uncertainty on environmental exposure assessment models: A case study for electromagnetic field modelling from mobile phone base stations.

    PubMed

    Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel

    2014-11-01

    With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large, with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping and antenna and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output.
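
    The general recipe the study follows, propagating assumed input-error distributions through the exposure model with Monte Carlo draws and checking rank stability, can be sketched as below. The model, error magnitudes, and variable names are hypothetical placeholders, not the study's actual propagation model or uncertainty settings.

      import numpy as np
      from scipy.stats import spearmanr

      def exposure_model(power, height):
          # Hypothetical stand-in for the RF-EMF propagation model.
          return power / (1.0 + height) ** 2

      rng = np.random.default_rng(1)
      n_sites, n_draws = 252, 1000
      power = rng.uniform(10.0, 40.0, n_sites)
      height = rng.uniform(2.0, 30.0, n_sites)

      preds = np.empty((n_draws, n_sites))
      for k in range(n_draws):
          p = power * rng.normal(1.0, 0.10, n_sites)    # assumed 10% power error
          h = height + rng.normal(0.0, 2.0, n_sites)    # assumed 2 m height error
          preds[k] = exposure_model(p, h)

      cv = np.median(preds.std(axis=0) / preds.mean(axis=0))
      rho, _ = spearmanr(preds.mean(axis=0), exposure_model(power, height))
      print(f"median CV: {cv:.2f}, rank stability (Spearman): {rho:.2f}")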

  5. Modeling Nearshore Waves for Hurricane Katrina

    DTIC Science & Technology

    2007-08-01

    sensitivity of the STWAVE results to critical input, three sets of sensitivity runs were made: wind input, degradation of the Chandeleurs Islands, and...is approximately 0.2 to 0.3 m. There are larger differences outside the Chandeleurs (increase of 0.6 – 0.9 m for the plus 5 percent winds and 0.5...possible exception to this is wave attenuation across the barrier islands, which protect the areas in their shadow. The Chandeleur Islands

  6. Flight Model Discharge System.

    DTIC Science & Technology

    1987-04-01

    will immediately remove the charge from the front surface of the dielectric and return it to ground. The 2-hour time constant network will then reset the...ATDP programs. NEWT5 permits the digitized input of board and component position data, while ATDP automates certain phases of input and output table...format. 8.5 RESULTS The system-level results are presented as curves of AR (normalized radiator area) versus THOT and as curves of Q (heater

  7. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, namely how a model's response to input data errors changes as the catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
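
    A minimal sketch of the two corruption schemes described above, assuming a placeholder monthly precipitation series; each corrupted series would then be fed to NOPEX-6 and the model recalibrated.

      import numpy as np

      rng = np.random.default_rng(7)
      precip = rng.gamma(shape=2.0, scale=30.0, size=240)   # stand-in monthly totals, mm

      # Systematic scenario: add a fixed fraction (here 10%) of the mean
      # monthly precipitation to every month.
      systematic = precip + 0.10 * precip.mean()

      # Random scenario: independent zero-mean Gaussian noise with a standard
      # deviation taken as, e.g., 15% of the monthly standard deviation.
      random_case = precip + rng.normal(0.0, 0.15 * precip.std(), precip.size)
      random_case = np.clip(random_case, 0.0, None)         # keep precipitation non-negative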

  8. Application of image-derived and venous input functions in major depression using [carbonyl-¹¹C]WAY-100635.

    PubMed

    Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Wadsak, Wolfgang; Savli, Markus; Kraus, Christoph; Birkfellner, Wolfgang; Ungersboeck, Johanna; Haeusler, Daniela; Mitterhauser, Markus; Karanikas, Georgios; Kasper, Siegfried; Frey, Richard; Lanzenberger, Rupert

    2013-04-01

    Image-derived input functions (IDIFs) represent a promising non-invasive alternative to arterial blood sampling for quantification in positron emission tomography (PET) studies. However, routine applications in patients and longitudinal designs are largely missing despite widespread attempts in healthy subjects. The aim of this study was to apply a previously validated approach to a clinical sample of patients with major depressive disorder (MDD) before and after electroconvulsive therapy (ECT). Eleven scans from 5 patients with venous blood sampling were obtained with the radioligand [carbonyl-¹¹C]WAY-100635 at baseline, before and after 11.0±1.2 ECT sessions. IDIFs were defined by two different image reconstruction algorithms: 1) OSEM with subsequent partial volume correction (OSEM+PVC) and 2) reconstruction-based modelling of the point spread function (TrueX). Serotonin-1A receptor (5-HT1A) binding potentials (BPP, BPND) were quantified with a two-tissue compartment model (2TCM) and a reference region model (MRTM2). Compared to MRTM2, good agreement in 5-HT1A BPND was found when using input functions from OSEM+PVC (R² = 0.82) but not TrueX (R² = 0.57, p < 0.001), which is further reflected by lower IDIF peaks for TrueX (p < 0.001). Following ECT, decreased 5-HT1A BPND and BPP were found with the 2TCM using OSEM+PVC (23%-35%), except for one patient showing only subtle changes. In contrast, MRTM2 and IDIFs from TrueX gave unstable results for this patient, most probably due to a 2.4-fold underestimation of non-specific binding. Using image-derived and venous input functions defined by OSEM with subsequent PVC, we confirm previously reported decreases in 5-HT1A binding in MDD patients after ECT. In contrast to reference region modeling, quantification with image-derived input functions showed consistent results in a clinical setting due to accurate modeling of non-specific binding with OSEM+PVC.

  9. Using nonlinear forecasting to learn the magnitude and phasing of time-varying sediment suspension in the surf zone

    USGS Publications Warehouse

    Jaffe, B.E.; Rubin, D.M.

    1996-01-01

    The time-dependent response of sediment suspension to flow velocity was explored by modeling field measurements collected in the surf zone during a large storm. Linear and nonlinear models were created and tested using flow velocity as input and suspended-sediment concentration as output. A sequence of past velocities (velocity history), as well as velocity from the same instant as the suspended-sediment concentration, was used as input; this velocity history length was allowed to vary. The models also allowed for a lag between input (instantaneous velocity or end of velocity sequence) and output (suspended-sediment concentration). Predictions of concentration from instantaneous velocity or instantaneous velocity raised to a power (up to 8) using linear models were poor (correlation coefficients between predicted and observed concentrations were less than 0.10). Allowing a lag between velocity and concentration improved linear models (correlation coefficient of 0.30), with optimum lag time increasing with elevation above the seabed (from 1.5 s at 13 cm to 8.5 s at 60 cm). These lags are largely due to the time for an observed flow event to affect the bed and mix sediment upward. Using a velocity history further improved linear models (correlation coefficient of 0.43). The best linear model used 12.5 s of velocity history (approximately one wave period) to predict concentration. Nonlinear models gave better predictions than linear models, and, as with linear models, nonlinear models using a velocity history performed better than models using only instantaneous velocity as input. Including a lag time between the velocity and concentration also improved the predictions. The best model (correlation coefficient of 0.58) used 3 s (approximately a quarter wave period) of the cross-shore velocity squared, starting at 4.5 s before the observed concentration, to predict concentration. Using a velocity history increases the performance of the models by specifying a more complete description of the dynamical forcing of the flow (including accelerations and wave phase and shape) responsible for sediment suspension. Incorporating such a velocity history and a lag time into the formulation of the forcing for time-dependent models for sediment suspension in the surf zone will greatly increase our ability to predict suspended-sediment transport.
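
    The best-performing linear variant, squared cross-shore velocity over a short history window applied with a lag, can be sketched as an ordinary least-squares fit. The series, the assumed 2 Hz sampling rate, and the window lengths below are illustrative stand-ins for the field data.

      import numpy as np

      def lagged_design(u, history_len, lag):
          # Rows of past-velocity windows ending `lag` samples before the output.
          rows = []
          for t in range(history_len + lag, len(u)):
              rows.append(u[t - lag - history_len:t - lag])
          return np.array(rows)

      # Synthetic stand-ins for cross-shore velocity and suspended sediment.
      rng = np.random.default_rng(3)
      u = rng.normal(size=2000)
      c = np.convolve(u**2, np.ones(6) / 6.0, mode="same") + 0.1 * rng.normal(size=2000)

      # ~3 s window (6 samples) lagged 4.5 s (9 samples) at the assumed 2 Hz rate.
      X = lagged_design(u**2, history_len=6, lag=9)
      y = c[6 + 9:]
      A = np.c_[np.ones(len(X)), X]
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      r = np.corrcoef(A @ coef, y)[0, 1]    # analogue of the reported correlation
      print(f"correlation: {r:.2f}")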

  10. ORCHIMIC (v1.0), a microbe-mediated model for soil organic matter decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Guenet, Bertrand; Ciais, Philippe; Janssens, Ivan A.; Soong, Jennifer L.; Wang, Yilong; Goll, Daniel; Blagodatskaya, Evgenia; Huang, Yuanyuan

    2018-06-01

    The role of soil microorganisms in regulating soil organic matter (SOM) decomposition is of primary importance in the carbon cycle, in particular in the context of global change. Modeling soil microbial community dynamics to simulate its impact on soil gaseous carbon (C) emissions and nitrogen (N) mineralization at large spatial scales is a recent research field with the potential to improve predictions of SOM responses to global climate change. In this study we present a SOM model called ORCHIMIC, which utilizes input data that are consistent with those of global vegetation models. ORCHIMIC simulates the decomposition of SOM by explicitly accounting for enzyme production and distinguishing three different microbial functional groups: fresh organic matter (FOM) specialists, SOM specialists, and generalists, while also implicitly accounting for microbes that do not produce extracellular enzymes, i.e., cheaters. ORCHIMIC and two other organic matter decomposition models, CENTURY (based on first-order kinetics and representative of the structure of most current global soil carbon models) and PRIM (with FOM accelerating the decomposition rate of SOM), were calibrated to reproduce the observed respiration fluxes of FOM and SOM from the incubation experiments of Blagodatskaya et al. (2014). Among the three models, ORCHIMIC was the only one that effectively captured both the temporal dynamics of the respiratory fluxes and the magnitude of the priming effect observed during the incubation experiment. ORCHIMIC also effectively reproduced the temporal dynamics of microbial biomass. We then applied different idealized changes to the model input data, i.e., a 5 K stepwise increase of temperature and/or a doubling of plant litter inputs. Under 5 K warming conditions, ORCHIMIC predicted a 0.002 K⁻¹ decrease in the C use efficiency (defined as the ratio of C allocated to microbial growth to the sum of C allocated to growth and respiration) and a 3 % loss of SOC. Under the double litter input scenario, ORCHIMIC predicted a doubling of microbial biomass, while SOC stock increased by less than 1 % due to the priming effect. This limited increase in SOC stock contrasted with the proportional increase in SOC stock as modeled by the conventional SOC decomposition model (CENTURY), which cannot reproduce the priming effect. If temperature increased by 5 K and litter input was doubled, ORCHIMIC predicted almost the same loss of SOC as when only temperature was increased. These tests suggest that the responses of SOC stock to warming and increasing input may differ considerably from those simulated by conventional SOC decomposition models when microbial dynamics are included. The next step is to incorporate the ORCHIMIC model into a global vegetation model to perform simulations for representative sites and future scenarios.

  11. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    PubMed

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements, the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models when using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected with a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
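
    The comparison can be mimicked with a toy time-area model: convolve rain with a unit hydrograph, then contrast a time-displaced input against a constant-bias input. All series and magnitudes below are synthetic assumptions, not the paper's catchment data.

      import numpy as np

      def time_area_runoff(rain, unit_hydrograph):
          # Simple time-area model: runoff is rain convolved with a unit hydrograph.
          return np.convolve(rain, unit_hydrograph)[: len(rain)]

      rng = np.random.default_rng(5)
      rain = np.maximum(rng.normal(0.0, 1.0, 600), 0.0)   # stand-in 1-min intensities
      uh = np.ones(20) / 20.0                             # 20-min time of concentration

      truth = time_area_runoff(rain, uh)
      displaced = time_area_runoff(np.roll(rain, 5), uh)  # rain arrives 5 min late
      biased = time_area_runoff(1.6 * rain, uh)           # constant +60% bias
      # (np.roll wraps around; negligible for a long series in this sketch.)

      err_disp = np.sqrt(np.mean((displaced[10:] - truth[10:]) ** 2))
      err_bias = np.sqrt(np.mean((biased[10:] - truth[10:]) ** 2))
      print(f"RMSE, 5-min displacement: {err_disp:.3f}; +60% bias: {err_bias:.3f}")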

  12. Macro Level Simulation Model Of Space Shuttle Processing

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.

  13. Wildcat5 for Windows, a rainfall-runoff hydrograph model: user manual and documentation

    Treesearch

    R. H. Hawkins; A. Barreto-Munoz

    2016-01-01

    Wildcat5 for Windows (Wildcat5) is an interactive Windows Excel-based software package designed to assist watershed specialists in analyzing rainfall runoff events to predict peak flow and runoff volumes generated by single-event rainstorms for a variety of watershed soil and vegetation conditions. Model inputs are: (1) rainstorm characteristics, (2) parameters related...

  14. Contribution of Cage-Shaped Structure of Physalins to Their Mode of Action in Inhibition of NF-κB Activation.

    PubMed

    Ozawa, Masaaki; Morita, Masaki; Hirai, Go; Tamura, Satoru; Kawai, Masao; Tsuchiya, Ayako; Oonuma, Kana; Maruoka, Keiji; Sodeoka, Mikiko

    2013-08-08

    A library of oxygenated natural steroids, including physalins, withanolides, and perulactones, coupled with the synthetic cage-shaped right-side structure of type B physalins, was constructed. SAR studies for inhibition of NF-κB activation showed the importance of both the B-ring and the oxygenated right-side partial structure. The 5β,6β-epoxy derivatives of both physalins and withanolides showed similar profiles of inhibition of NF-κB activation and appeared to act on NF-κB signaling via inhibition of phosphorylation and degradation of IκBα. In contrast, type B physalins with C5-C6 olefin functionality inhibited nuclear translocation and DNA binding of RelA/p50 protein dimer, which lie downstream of IκBα degradation, although withanolides having the same AB-ring functionality did not. These results indicated that the right-side partial structure of these steroids influences their mode of action.

  15. An investigation on generalization ability of artificial neural networks and M5 model tree in modeling reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Kilic, Yasin

    2016-11-01

    The generalization ability of artificial neural networks (ANNs) and the M5 model tree (M5Tree) in modeling reference evapotranspiration (ET0) is investigated in this study. Daily climatic data (average temperature, solar radiation, wind speed, and relative humidity) from six different stations operated by the California Irrigation Management Information System (CIMIS), located in two different regions of the USA, were used in the applications. The King-City Oasis Rd., Arroyo Seco, and Salinas North stations are located in the San Joaquin region, and the San Luis Obispo, Santa Monica, and Santa Barbara stations are located in the Southern region. In the first part of the study, the ANN and M5Tree models were used for estimating ET0 at the six stations and the results were compared with the empirical methods. The ANN and M5Tree models were found to be better than the empirical models. In the second part of the study, the ANN and M5Tree models obtained from one station were tested using the data from the other two stations for each region. ANN models performed better than the CIMIS Penman, Hargreaves, Ritchie, and Turc models in two stations, while the M5Tree models generally showed better accuracy than the corresponding empirical models in all stations. In the third part of the study, the ANN and M5Tree models were calibrated using three stations located in the San Joaquin region and tested using the data from the other three stations located in the Southern region. Four-input ANN and M5Tree models performed better than the CIMIS Penman in only one station, while the two-input ANN models were found to be better than the Hargreaves, Ritchie, and Turc models in two stations.

  16. Pretest analysis document for Test S-FS-6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, R.A.; Hall, D.G.

    This report documents the pretest analyses completed for Semiscale Test S-FS-6. This test will simulate a transient initiated by a 100% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear power plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The enclosed analyses include a RELAP5/MOD2/CY21 code calculation and preliminary results from a facility hot, integrated test which was conducted to near S-FS-6 specifications. The results of these analyses indicate that the test objectives for Test S-FS-6 can be achieved. The primary system overpressurization will pose no threat to personnel or plant integrity.

  17. Pretest analysis document for Semiscale Test S-FS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.H.

    This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY21 code for Semiscale Test S-FS-1. The test will simulate the double-ended offset shear of the main steam line at the exit of the broken loop steam generator (downstream of the flow restrictor) and the subsequent plant recovery. The recovery portion of the test consists of a plant stabilization phase and a plant cooldown phase. The recovery procedures involve normal charging/letdown operation, pressurizer heater operation, secondary steam and feed of the unaffected steam generator, and pressurizer auxiliary spray. The test will be terminated after the unaffected steam generator and pressurizer pressures and liquid levels are stable, and the average primary fluid temperature is stable at about 480 K (405°F) for at least 10 minutes.

  18. Results from a scaled reactor cavity cooling system with water at steady state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lisowski, D. D.; Albiston, S. M.; Tokuhiro, A.

    We present a summary of steady-state experiments performed with a scaled, water-cooled Reactor Cavity Cooling System (RCCS) at the Univ. of Wisconsin - Madison. The RCCS concept is used for passive decay heat removal in the Next Generation Nuclear Plant (NGNP) design and was based on open literature of the GA-MHTGR, HTR-10 and AVR reactors. The RCCS is a 1/4 scale model of the full scale prototype system, with a 7.6 m housing structure, a 5 m tall test section, and a 1,200 liter water storage tank. Radiant heaters impose a heat flux onto a three-riser-tube test section, representing a 5 deg. radial sector of the actual 360 deg. RCCS design. The maximum heat flux and power levels are 25 kW/m² and 42.5 kW, and can be configured for variable, axial, or radial power profiles to simulate prototypic conditions. Experimental results yielded measurements of local surface temperatures, internal water temperatures, volumetric flow rates, and pressure drop along the test section and into the water storage tank. The majority of the tests achieved a steady state condition while remaining single-phase. A selected number of experiments were allowed to reach saturation and subsequently two-phase flow. RELAP5 simulations with the experimental data have been refined during test facility development and separate effects validation of the experimental facility. This test series represents the completion of our steady-state testing, with future experiments investigating normal and off-normal accident scenarios with two-phase flow effects. The ultimate goal of the project is to combine experimental data from UW - Madison, UI, ANL, and Texas A&M with system model simulations to ascertain the feasibility of the RCCS as a successful long-term heat removal system during accident scenarios for the NGNP.

  19. The COSIMA experiments and their verification, a data base for the validation of two phase flow computer codes

    NASA Astrophysics Data System (ADS)

    Class, G.; Meyder, R.; Stratmanns, E.

    1985-12-01

    The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.

  20. Transient Simulation of the Multi-SERTTA Experiment with MAMMOTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, Javier; Baker, Benjamin; Wang, Yaqi

    This work details the MAMMOTH reactor physics simulations of the Static Environment Rodlet Transient Test Apparatus (SERTTA) conducted at Idaho National Laboratory in FY-2017. TREAT static-environment experiment vehicles are being developed to enable transient testing of Pressurized Water Reactor (PWR) type fuel specimens, including fuel concepts with enhanced accident tolerance (Accident Tolerant Fuels, ATF). The MAMMOTH simulations include point reactor kinetics as well as spatial dynamics for a temperature-limited transient. The strongly coupled multi-physics solutions of the neutron flux and temperature fields are second order accurate in both the spatial and temporal domains. MAMMOTH produces pellet stack powers that are within 1.5% of the Monte Carlo reference solutions. Some discrepancies between the MCNP model used in the design of the flux collars and the Serpent/MAMMOTH models lead to higher power and energy deposition values in Multi-SERTTA unit 1. The TREAT core results compare well with the safety case computed with point reactor kinetics in RELAP5-3D. The reactor period is 44 msec, which corresponds to a reactivity insertion of 2.685% Δk/k. The peak core power in the spatial dynamics simulation is 431 MW, which the point kinetics model over-predicts by 12%. The pulse width at half the maximum power is 0.177 sec. Subtle transient effects are apparent at the beginning of the insertion in the experimental samples due to the control rod removal. Additional differences due to transient effects are observed in the sample powers and enthalpy. The time dependence of the power coupling factor (PCF) is calculated for the various fuel stacks of the Multi-SERTTA vehicle. Sample temperatures in excess of 3100 K, the melting point of UO₂, are computed with the adiabatic heat transfer model. The planned shaped transient might introduce additional effects that cannot be predicted with PRK models. Future modeling will focus on the shaped transient by improving the control rod models in MAMMOTH and adding the BISON thermo-elastic models and thermal-fluids heat transfer.

  1. Preform Characterization in VARTM Process Model Development

    NASA Technical Reports Server (NTRS)

    Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal; Loos, Alfred C.; Kellen, Charles B.; Jensen, Brian J.

    2004-01-01

    Vacuum-Assisted Resin Transfer Molding (VARTM) is a Liquid Composite Molding (LCM) process where both resin injection and fiber compaction are achieved under pressures of 101.3 kPa or less. Originally developed over a decade ago for marine composite fabrication, VARTM is now considered a viable process for the fabrication of aerospace composites (1,2). In order to optimize and further improve the process, a finite element analysis (FEA) process model is being developed to include the coupled phenomena of resin flow, preform compaction and resin cure. The model input parameters are obtained from resin and fiber-preform characterization tests. In this study, the compaction behavior and the Darcy permeability of a commercially available carbon fabric are characterized. The resulting empirical model equations are input to the 3-Dimensional Infiltration, version 5 (3DINFILv.5) process model to simulate infiltration of a composite panel.
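
    For context, infiltration models of this kind are built around Darcy flow through the compacting preform; a schematic statement (not the specific constitutive forms fitted in this study) is:

      % Darcy's law for resin flow through the fiber preform (schematic)
      \mathbf{v} = -\frac{\mathbf{K}}{\mu}\,\nabla p

    where v is the volume-averaged resin velocity, K the preform permeability tensor (characterized empirically here), mu the resin viscosity, and p the resin pressure; the compaction sub-model supplies the fiber volume fraction, and hence K, as a function of the net compaction pressure.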

  2. Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Batterson, James G. (Technical Monitor); Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  3. Documentation of model input and output values for simulation of pumping effects in Paradise Valley, a basin tributary to the Humboldt River, Humboldt County, Nevada

    USGS Publications Warehouse

    Carey, A.E.; Prudic, David E.

    1996-01-01

    Documentation is provided of model input and sample output used in a previous report for analysis of ground-water flow and simulated pumping scenarios in Paradise Valley, Humboldt County, Nevada. Documentation includes files containing input values and listings of sample output. The files, in American Standard Code for Information Interchange (ASCII) or binary format, are compressed and put on a 3-1/2-inch diskette. The decompressed files require approximately 8.4 megabytes of disk space on an International Business Machine (IBM)-compatible microcomputer using the Microsoft Disk Operating System (MS-DOS) operating system version 5.0 or greater.

  4. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-03-09

    This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme with staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn provides the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
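
    The essence of JFNK is that the Krylov solver only ever needs Jacobian-vector products, which can be approximated by finite differences of the residual; a minimal sketch on a toy nonlinear system (not the six-equation residual itself) might look like this.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def jfnk_solve(F, u0, tol=1e-8, max_newton=30, eps=1e-7):
          # Jacobian-free Newton-Krylov: GMRES sees J only through J @ v products.
          u = u0.copy()
          for _ in range(max_newton):
              r = F(u)
              if np.linalg.norm(r) < tol:
                  break
              def jv(v, u=u, r=r):
                  # Finite-difference directional derivative: J v ~ (F(u+e v) - F(u)) / e
                  return (F(u + eps * v) - r) / eps
              J = LinearOperator((u.size, u.size), matvec=jv)
              du, _ = gmres(J, -r, atol=1e-10)   # inexact Newton step
              u = u + du
          return u

      # Toy nonlinear system standing in for a discretized residual:
      F = lambda u: u**3 + u - 1.0
      print(jfnk_solve(F, np.zeros(4)))          # converges to ~0.6823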

  5. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features, such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  6. GEOS-5 Chemistry Transport Model User's Guide

    NASA Technical Reports Server (NTRS)

    Kouatchou, J.; Molod, A.; Nielsen, J. E.; Auer, B.; Putman, W.; Clune, T.

    2015-01-01

    The Goddard Earth Observing System version 5 (GEOS-5) General Circulation Model (GCM) makes use of the Earth System Modeling Framework (ESMF) to enable model configurations with many functions. One of the options of the GEOS-5 GCM is the GEOS-5 Chemistry Transport Model (GEOS-5 CTM), which is an offline simulation of chemistry and constituent transport driven by a specified meteorology and other model output fields. This document describes the basic components of the GEOS-5 CTM, and is a user's guide on how to obtain and run simulations on the NCCS Discover platform. In addition, we provide information on how to change the model configuration input files to meet users' needs.

  7. Cost-effectiveness models for chronic obstructive pulmonary disease: cross-model comparison of hypothetical treatment scenarios.

    PubMed

    Hoogendoorn, Martine; Feenstra, Talitha L; Asukai, Yumi; Borg, Sixten; Hansen, Ryan N; Jansson, Sven-Arne; Samyshkin, Yevgeniy; Wacker, Margarethe; Briggs, Andrew H; Lloyd, Adam; Sullivan, Sean D; Rutten-van Mölken, Maureen P M H

    2014-07-01

    To compare different chronic obstructive pulmonary disease (COPD) cost-effectiveness models with respect to structure and input parameters and to cross-validate the models by running the same hypothetical treatment scenarios. COPD modeling groups simulated four hypothetical interventions with their model and compared the results with a reference scenario of no intervention. The four interventions modeled assumed 1) 20% reduction in decline in lung function, 2) 25% reduction in exacerbation frequency, 3) 10% reduction in all-cause mortality, and 4) all these effects combined. The interventions were simulated for a 5-year and lifetime horizon with standardization, if possible, for sex, age, COPD severity, smoking status, exacerbation frequencies, mortality due to other causes, utilities, costs, and discount rates. Furthermore, uncertainty around the outcomes of intervention four was compared. Seven out of nine contacted COPD modeling groups agreed to participate. The 5-year incremental cost-effectiveness ratio (ICER) for the most comprehensive intervention, intervention four, was €17,000/quality-adjusted life-year (QALY) for two models, €25,000 to €28,000/QALY for three models, and €47,000/QALY for the remaining two models. Differences in the ICERs could mainly be explained by differences in input values for disease progression, exacerbation-related mortality, and all-cause mortality, with high input values resulting in low ICERs and vice versa. Lifetime results were mainly affected by the input values for mortality. The probability of intervention four being cost-effective at a willingness-to-pay value of €50,000/QALY was 90% to 100% for five models and about 70% and 50% for the other two models, respectively. Mortality was the most important factor determining the differences in cost-effectiveness outcomes between models.
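
    For reference, the ICERs quoted above follow the standard definition, incremental cost over incremental effectiveness relative to the no-intervention reference scenario:

      % Incremental cost-effectiveness ratio (standard definition)
      \mathrm{ICER} = \frac{C_{\text{intervention}} - C_{\text{reference}}}
                           {\mathrm{QALY}_{\text{intervention}} - \mathrm{QALY}_{\text{reference}}}

    An intervention is then deemed cost-effective when its ICER falls below the willingness-to-pay threshold, here €50,000/QALY.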

  8. Evaluation of a Eulerian and Lagrangian air quality model using perfluorocarbon tracers released in Texas for the BRAVO haze study

    NASA Astrophysics Data System (ADS)

    Schichtel, Bret A.; Barna, Michael G.; Gebhart, Kristi A.; Malm, William C.

    The Big Bend Regional Aerosol and Visibility Observational (BRAVO) study was designed to determine the sources of haze at Big Bend National Park, Texas, using a combination of source and receptor models. BRAVO included an intensive monitoring campaign from July to October 1999 that included the release of perfluorocarbon tracers from four locations at distances of 230-750 km from Big Bend, with measurements at 24 sites. The tracer measurements near Big Bend were used to evaluate the dispersion mechanisms in the REMSAD Eulerian model and the CAPITA Monte Carlo (CMC) Lagrangian model used in BRAVO. Both models used 36 km MM5 wind fields as input. The CMC model also used a combination of routinely available 80 and 190 km wind fields from the National Weather Service's National Centers for Environmental Prediction (NCEP) as input. A model's performance is limited by inherent uncertainties due to errors in the tracer concentrations and a model's inability to simulate sub-resolution variability. A range for this inherent uncertainty was estimated by comparing tracer data at nearby monitoring sites. It was found that the REMSAD and CMC models, using the MM5 wind field, produced performance statistics generally within this inherent uncertainty. The CMC simulation using the NCEP wind fields could reproduce the timing of tracer impacts at Big Bend, but not the concentration values, due to a systematic underestimation. It appears that the underestimation was partly due to excessive vertical dilution from high mixing depths. The model simulations were more sensitive to the input wind fields than to the models' different dispersion mechanisms. Comparisons of REMSAD to CMC tracer simulations using the MM5 wind fields had correlations between 0.75 and 0.82, depending on the tracer, but the tracer simulations using the two wind fields in the CMC model had correlations between 0.37 and 0.5.

  9. Electrooculography-based continuous eye-writing recognition system for efficient assistive communication systems

    PubMed Central

    Shinozaki, Takahiro

    2018-01-01

    Human-computer interface systems whose input is based on eye movements can serve as a means of communication for patients with locked-in syndrome. Eye-writing is one such system; users can input characters by moving their eyes to follow the lines of the strokes corresponding to characters. Although this input method makes it easy for patients to get started because of their familiarity with handwriting, existing eye-writing systems suffer from slow input rates because they require a pause between input characters to simplify the automatic recognition process. In this paper, we propose a continuous eye-writing recognition system that achieves a rapid input rate because it accepts characters eye-written continuously, with no pauses. For recognition purposes, the proposed system first detects eye movements using electrooculography (EOG), and then a hidden Markov model (HMM) is applied to model the EOG signals and recognize the eye-written characters. Additionally, this paper investigates an EOG adaptation that uses a deep neural network (DNN)-based HMM. Experiments with six participants showed an average input speed of 27.9 characters/min using Japanese Katakana as the input target characters. A Katakana character-recognition error rate of only 5.0% was achieved using 13.8 minutes of adaptation data. PMID:29425248
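
    The decoding step of such a recognizer can be illustrated with a minimal Viterbi pass over Gaussian-emission states. The two-state demo below is a hypothetical simplification; the actual system models stroke sequences per character and uses a DNN-based variant for adaptation.

      import numpy as np

      def viterbi_gaussian(obs, log_pi, log_A, means, variances):
          # Most likely state path for a diagonal-Gaussian-emission HMM.
          # obs: (T, D) feature frames; means, variances: (S, D) per state.
          T, S = len(obs), len(log_pi)
          def log_b(x):  # per-state Gaussian log-likelihood of one EOG feature frame
              return -0.5 * np.sum((x - means) ** 2 / variances
                                   + np.log(2 * np.pi * variances), axis=1)
          delta = log_pi + log_b(obs[0])
          back = np.zeros((T, S), dtype=int)
          for t in range(1, T):
              scores = delta[:, None] + log_A      # scores[i, j]: best path ending i -> j
              back[t] = scores.argmax(axis=0)
              delta = scores.max(axis=0) + log_b(obs[t])
          path = [int(delta.argmax())]
          for t in range(T - 1, 0, -1):
              path.append(back[t, path[-1]])
          return path[::-1]

      # Tiny demo: two hypothetical stroke states on 1-D EOG features.
      obs = np.array([[0.1], [0.2], [1.1], [0.9]])
      print(viterbi_gaussian(obs, np.log([0.9, 0.1]),
                             np.log([[0.8, 0.2], [0.2, 0.8]]),
                             means=np.array([[0.0], [1.0]]),
                             variances=np.array([[0.5], [0.5]])))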

  10. Assessment of model behavior and acceptable forcing data uncertainty in the context of land surface soil moisture estimation

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.

    2017-03-01

    The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed of how models respond to input forcing data and what changes in these forcing variables can be accommodated without degrading the model's optimal estimation. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for the 5 cm depth of soil moisture and 15% in RMSE for the 15 cm depth. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.

  11. Design Models for the Development of Helium-Carbon Sorption Cryocoolers

    NASA Technical Reports Server (NTRS)

    Lindensmith, C. A.; Ahart, M.; Bhandari, P.; Wade, L. A.; Paine, C. G.

    2000-01-01

    We have developed models for predicting the performance of helium-based Joule-Thomson continuous-flow cryocoolers using charcoal-pumped sorption compressors. The models take as inputs the number of compressors, desired heat-lift, cold tip temperature, and available precooling temperature and provide design parameters as outputs. Future laboratory development will be used to verify and improve the models. We will present a preliminary design for a two-stage vibration-free cryocooler that is being proposed as part of a mid-infrared camera on NASA's Next Generation Space Telescope. Model predictions show that a 10 mW helium-carbon cryocooler with a base temperature of 5.5 K will reject less than 650 mW at 18 K. The total input power to the helium-carbon stage is 650 mW. These models, which run in MathCad and Microsoft Excel, can be coupled to similar models for hydrogen sorption coolers to give designs for 2-stage vibration-free cryocoolers that provide cooling from approx. 50 K to 4 K.

  13. Effects of Soil Data and Simulation Unit Resolution on Quantifying Changes of Soil Organic Carbon at Regional Scale with a Biogeochemical Process Model

    PubMed Central

    Zhang, Liming; Yu, Dongsheng; Shi, Xuezheng; Xu, Shengxiang; Xing, Shihe; Zhao, Yongcong

    2014-01-01

    Soil organic carbon (SOC) models were often applied to regions with high heterogeneity, but limited spatially differentiated soil information and simulation unit resolution. This study, carried out in the Tai-Lake region of China, defined the uncertainty derived from application of the DeNitrification-DeComposition (DNDC) biogeochemical model in an area with heterogeneous soil properties and different simulation units. Three soil attribute databases of different resolution, a polygonal capture of mapping units at 1:50,000 (P5), a county-based database at 1:50,000 (C5), and a county-based database at 1:14,000,000 (C14), were used as inputs for regional DNDC simulation. The P5 and C5 databases were combined with the 1:50,000 digital soil map, which is the most detailed soil database for the Tai-Lake region. The C14 database was combined with the 1:14,000,000 digital soil map, which is a coarse database often used for modeling at a national or regional scale in China. The soil polygons of the P5 database and the county boundaries of the C5 and C14 databases were used as basic simulation units. Results project that from 1982 to 2000, the total SOC change in the top layer (0–30 cm) of the 2.3 M ha of paddy soil in the Tai-Lake region was +1.48 Tg C, −3.99 Tg C and −15.38 Tg C based on the P5, C5 and C14 databases, respectively. With the total SOC change as modeled with P5 inputs as the baseline, which has the advantage of using a detailed, polygon-based soil dataset, the relative deviations of C5 and C14 were 368% and 1126%, respectively. The comparison illustrates that DNDC simulation is strongly influenced by the choice of fundamental geographic resolution as well as input soil attribute detail. The results also indicate that improving the framework of DNDC is essential in creating accurate models of the soil carbon cycle. PMID:24523922

  14. Quantification of ¹¹C-Laniquidar Kinetics in the Brain.

    PubMed

    Froklage, Femke E; Boellaard, Ronald; Bakker, Esther; Hendrikse, N Harry; Reijneveld, Jaap C; Schuit, Robert C; Windhorst, Albert D; Schober, Patrick; van Berckel, Bart N M; Lammertsma, Adriaan A; Postnov, Andrey

    2015-11-01

    Overexpression of the multidrug efflux transporter P-glycoprotein may play an important role in pharmacoresistance. ¹¹C-laniquidar is a newly developed tracer of P-glycoprotein expression. The aim of this study was to develop a pharmacokinetic model for quantification of ¹¹C-laniquidar uptake and to assess its test-retest variability. Two (test-retest) dynamic ¹¹C-laniquidar PET scans were obtained in 8 healthy subjects. Plasma input functions were obtained using online arterial blood sampling with metabolite corrections derived from manual samples. Coregistered T1 MR images were used for region-of-interest definition. Time-activity curves were analyzed using various plasma input compartmental models. ¹¹C-laniquidar was metabolized rapidly, with a parent plasma fraction of 50% at 10 min after tracer injection. In addition, the first-pass extraction of ¹¹C-laniquidar was low. ¹¹C-laniquidar time-activity curves were best fitted to an irreversible single-tissue compartment (1T1K) model using conventional models. Nevertheless, significantly better fits were obtained using two parallel single-tissue compartments, one for the parent tracer and the other for labeled metabolites (dual-input model). Robust K1 results were also obtained by fitting the first 5 min of PET data to the 1T1K model, at least when 60-min plasma input data were used. For both models, the test-retest variability of the ¹¹C-laniquidar rate constant for transfer from arterial plasma to tissue (K1) was approximately 19%. The accurate quantification of ¹¹C-laniquidar kinetics in the brain is hampered by its fast metabolism and the likelihood that labeled metabolites enter the brain. Best fits for the entire 60 min of data were obtained using a dual-input model, accounting for uptake of ¹¹C-laniquidar and its labeled metabolites. Alternatively, K1 could be obtained from a 5-min scan using a standard 1T1K model. In both cases, the test-retest variability of K1 was approximately 19%.
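
    The 5-min 1T1K alternative is simple enough to sketch: with an irreversible single-tissue compartment, dC/dt = K1 Cp(t), so tissue activity is K1 times the running integral of the input function. The input curve and noise level below are synthetic stand-ins, not the study's data.

      import numpy as np
      from scipy.optimize import curve_fit

      # Irreversible one-tissue model: dC/dt = K1 * Cp(t)  =>  C(t) = K1 * int_0^t Cp.
      t = np.linspace(0.0, 5.0, 61)                  # minutes (first 5 min of data)
      Cp = t * np.exp(-1.5 * t)                      # stand-in metabolite-corrected input
      int_Cp = np.concatenate(
          ([0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))  # trapezoid integral

      def model(_, K1):
          return K1 * int_Cp

      C_meas = model(None, 0.12) + np.random.default_rng(9).normal(0, 0.002, t.size)
      K1_hat, _ = curve_fit(model, t, C_meas, p0=[0.1])
      print(f"fitted K1: {K1_hat[0]:.3f} (true 0.120)")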

  15. The Effect of Tongue Exercise on Serotonergic Input to the Hypoglossal Nucleus in Young and Old Rats

    ERIC Educational Resources Information Center

    Behan, Mary; Moeser, Adam E.; Thomas, Cathy F.; Russell, John A.; Wang, Hao; Leverson, Glen E.; Connor, Nadine P.

    2012-01-01

    Purpose: Breathing and swallowing problems affect elderly people and may be related to age-associated tongue dysfunction. Hypoglossal motoneurons that innervate the tongue receive a robust, excitatory serotonergic (5HT) input and may be affected by aging. We used a rat model of aging and progressive resistance tongue exercise to determine whether…

  16. Estimation of the longitudinal and lateral-directional aerodynamic parameters from flight data for the NASA F/A-18 HARV

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1996-01-01

    This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.

  17. DYNAMIC MODELING STRATEGY FOR FLOW REGIME TRANSITION IN GAS-LIQUID TWO-PHASE FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    X. Wang; X. Sun; H. Zhao

    In modeling gas-liquid two-phase flows, the concept of flow regime has been used to characterize the global interfacial structure of the flows. Nearly all constitutive relations that provide closures to the interfacial transfers in two-phase flow models, such as the two-fluid model, are often flow regime dependent. Currently, the determination of the flow regimes is primarily based on flow regime maps or transition criteria, which are developed for steady-state, fully-developed flows and widely applied in nuclear reactor system safety analysis codes, such as RELAP5. As two-phase flows are observed to be dynamic in nature (fully-developed two-phase flows generally do not exist in real applications), it is of importance to model the flow regime transition dynamically for more accurate predictions of two-phase flows. The present work aims to develop a dynamic modeling strategy for determining flow regimes in gas-liquid two-phase flows through the introduction of interfacial area transport equations (IATEs) within the framework of a two-fluid model. The IATE is a transport equation that models the interfacial area concentration by considering the creation and destruction of the interfacial area, such as the fluid particle (bubble or liquid droplet) disintegration, boiling and evaporation; and fluid particle coalescence and condensation, respectively. For the flow regimes beyond bubbly flows, a two-group IATE has been proposed, in which bubbles are divided into two groups based on their size and shape (which are correlated), namely small bubbles and large bubbles. A preliminary approach to dynamically identifying the flow regimes is provided, in which discriminators are based on the predicted information, such as the void fraction and interfacial area concentration of small bubble and large bubble groups. This method is expected to be applied to computer codes to improve their predictive capabilities of gas-liquid two-phase flows, in particular for the applications in which flow regime transition occurs.
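
    For orientation, a schematic one-group interfacial area transport equation is shown below in LaTeX; the study itself uses a two-group formulation, and the source and sink terms φ_j here stand in generically for the coalescence, breakup, and phase-change closures rather than reproducing the authors' exact models:

        \[
        \frac{\partial a_i}{\partial t} + \nabla \cdot (a_i \mathbf{v}_i)
          = \frac{2}{3}\,\frac{a_i}{\alpha}
            \left( \frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{v}_g) \right)
          + \sum_j \phi_j
        \]

    Here a_i is the interfacial area concentration, α the void fraction, v_i the interfacial velocity, and v_g the gas velocity; the first right-hand-side term accounts for interfacial area change due to bubble expansion, and the φ_j are the source/sink rates.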

  18. Derivation of ecological criteria for copper in land-applied biosolids and biosolid-amended agricultural soils.

    PubMed

    Lu, Tao; Li, Jumei; Wang, Xiaoqing; Ma, Yibing; Smolders, Erik; Zhu, Nanwen

    2016-12-01

    The difference in availability between soil metals added via biosolids and soluble salts was not taken into account in deriving the current land-applied biosolids standards. In the present study, a biosolids availability factor (BAF) approach was adopted to investigate the ecological thresholds for copper (Cu) in land-applied biosolids and biosolid-amended agricultural soils. First, the soil property-specific values of HC5,add (the added hazardous concentration for 5% of species) for Cu2+ salt-amended soils were collected with due attention to data for organisms and soils relevant to China. Second, a BAF representing the difference in availability between soil Cu added via biosolids and soluble salts was estimated based on long-term biosolid-amended soils, including soils from China. Third, biosolids Cu HC5,input values (the input hazardous concentration for 5% of species of Cu from biosolids to soil) as a function of soil properties were derived using the BAF approach. The average potential availability of Cu in agricultural soils amended with biosolids accounted for 53% of that for the same soils spiked with the same amount of soluble Cu salts and with a similar aging time. The cation exchange capacity was the main factor affecting the biosolids Cu HC5,input values, while soil pH and organic carbon only explained 24.2 and 1.5% of the variation, respectively. The biosolids Cu HC5,input values can be accurately predicted by regression models developed based on 2-3 soil properties with coefficients of determination (R²) of 0.889 and 0.945. Compared with model-predicted biosolids Cu HC5,input values, current standards (GB4284-84) are most likely to be less protective in acidic and neutral soils, but conservative in alkaline non-calcareous soils. Recommendations on ecological criteria for Cu in land-applied biosolids and biosolid-amended agricultural soils may help to fill the gaps existing between science and regulations, and can be useful for Cu risk assessments in soils amended with biosolids. Copyright © 2016 Elsevier Ltd. All rights reserved.
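
    As a minimal numeric illustration of the BAF idea (hypothetical numbers; the abstract's property-specific regression models are more involved than this single division), assuming the availability factor enters as a simple scaling of the salt-based threshold:

        # Hypothetical sketch of the BAF scaling: if biosolids Cu is ~53% as
        # available as salt-added Cu, a salt-derived HC5,add translates into a
        # proportionally higher biosolids HC5,input.
        baf = 0.53                   # biosolids availability relative to Cu salts
        hc5_add_salt = 40.0          # mg Cu/kg soil; hypothetical salt-based HC5,add
        hc5_input_biosolids = hc5_add_salt / baf
        print(f"biosolids HC5,input ≈ {hc5_input_biosolids:.0f} mg/kg")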

  19. Impact of input data (in)accuracy on overestimation of visible area in digital viewshed models

    PubMed Central

    Lagner, Ondřej; Klouček, Tomáš; Šímová, Petra

    2018-01-01

    Viewshed analysis is a GIS tool in standard use for more than two decades to perform numerous scientific and practical tasks. The reliability of the resulting viewshed model depends on the computational algorithm and the quality of the input digital surface model (DSM). Although many studies have dealt with improving viewshed algorithms, only a few studies have focused on the effect of the spatial accuracy of input data. Here, we compare simple binary viewshed models based on DSMs having varying levels of detail with viewshed models created using LiDAR DSM. The compared DSMs were calculated as the sums of digital terrain models (DTMs) and layers of forests and buildings with expertly assigned heights. Both elevation data and the visibility obstacle layers were prepared using digital vector maps differing in scale (1:5,000, 1:25,000, and 1:500,000) as well as using a combination of a LiDAR DTM with objects vectorized on an orthophotomap. All analyses were performed for 104 sample locations of 5 km2, covering areas from lowlands to mountains and including farmlands as well as afforested landscapes. We worked with two observer point heights, the first (1.8 m) simulating observation by a person standing on the ground and the second (80 m) as observation from high structures such as wind turbines, and with five estimates of forest heights (15, 20, 25, 30, and 35 m). At all height estimations, all of the vector-based DSMs used resulted in overestimations of visible areas considerably greater than those from the LiDAR DSM. In comparison to the effect from input data scale, the effect from object height estimation was shown to be secondary. PMID:29844982

  20. Impact of input data (in)accuracy on overestimation of visible area in digital viewshed models.

    PubMed

    Lagner, Ondřej; Klouček, Tomáš; Šímová, Petra

    2018-01-01

    Viewshed analysis is a GIS tool in standard use for more than two decades to perform numerous scientific and practical tasks. The reliability of the resulting viewshed model depends on the computational algorithm and the quality of the input digital surface model (DSM). Although many studies have dealt with improving viewshed algorithms, only a few studies have focused on the effect of the spatial accuracy of input data. Here, we compare simple binary viewshed models based on DSMs having varying levels of detail with viewshed models created using LiDAR DSM. The compared DSMs were calculated as the sums of digital terrain models (DTMs) and layers of forests and buildings with expertly assigned heights. Both elevation data and the visibility obstacle layers were prepared using digital vector maps differing in scale (1:5,000, 1:25,000, and 1:500,000) as well as using a combination of a LiDAR DTM with objects vectorized on an orthophotomap. All analyses were performed for 104 sample locations of 5 km2, covering areas from lowlands to mountains and including farmlands as well as afforested landscapes. We worked with two observer point heights, the first (1.8 m) simulating observation by a person standing on the ground and the second (80 m) as observation from high structures such as wind turbines, and with five estimates of forest heights (15, 20, 25, 30, and 35 m). At all height estimations, all of the vector-based DSMs used resulted in overestimations of visible areas considerably greater than those from the LiDAR DSM. In comparison to the effect from input data scale, the effect from object height estimation was shown to be secondary.
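
    The DSM construction compared in both versions of this study can be sketched in a few lines; the sketch below (Python, with random stand-in rasters; the grid size, masks, and building height are hypothetical) builds DSM = DTM + expert-height obstacle layers, which would then feed a standard GIS viewshed routine:

        import numpy as np

        rng = np.random.default_rng(0)
        dtm = rng.uniform(200.0, 400.0, (500, 500))   # terrain elevation (m)
        forest = rng.random((500, 500)) < 0.30        # vectorized forest mask
        buildings = rng.random((500, 500)) < 0.02     # vectorized building mask

        h_forest = 25.0     # one of the tested forest heights (15-35 m)
        h_building = 8.0    # hypothetical expert-assigned building height

        # Digital surface model: terrain plus obstacle layers of fixed height.
        dsm = dtm + forest * h_forest + buildings * h_building
        # 'dsm' plus an observer height (1.8 m or 80 m above ground) would be
        # passed to a binary viewshed computation.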

  1. Observation and Modeling of Tsunami-Generated Gravity Waves in the Earth’s Upper Atmosphere

    DTIC Science & Technology

    2015-10-08

    Build a compatible set of models which (1) calculate the spectrum of atmospheric gravity waves (GWs) excited by a tsunami, using ocean model data as input … Approved for public release; distribution is unlimited.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew

    'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.

  3. Development of Fuel Shuffling Module for PHISICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan Mabe; Andrea Alfonsi; Cristian Rabiti

    2013-06-01

    PHISICS (Parallel and Highly Innovative Simulation for the INL Code System) [4] is a code toolkit under development at the Idaho National Laboratory. This package is intended to provide a modern analysis tool for reactor physics investigation. It is designed to maximize accuracy for a given availability of computational resources and to give state-of-the-art tools to the modern nuclear engineer. This is obtained by implementing several different algorithms and meshing approaches among which users can choose in order to balance their computational resources and accuracy needs. The software is completely modular in order to simplify the independent development of modules by different teams and future maintenance. The package is coupled with the thermal-hydraulic code RELAP5-3D [3]. In the following, the structure of the different PHISICS modules is briefly recalled, focusing on the new shuffling module (SHUFFLE), the subject of this paper.

  4. Fast Inference with Min-Sum Matrix Product.

    PubMed

    Felzenszwalb, Pedro F; McAuley, Julian J

    2011-12-01

    The MAP inference problem in many graphical models can be solved efficiently using a fast algorithm for computing min-sum products of n × n matrices. The class of models in question includes cyclic and skip-chain models that arise in many applications. Although the worst-case complexity of the min-sum product operation is not known to be much better than O(n^3), an O(n^2.5) expected-time algorithm was recently given, subject to some constraints on the input matrices. In this paper, we give an algorithm that runs in O(n^2 log n) expected time, assuming that the entries in the input matrices are independent samples from a uniform distribution. We also show that two variants of our algorithm are quite fast for inputs that arise in several applications. This leads to significant performance gains over previous methods in applications within computer vision and natural language processing.
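
    The operation at the heart of this result is easy to state; a naive O(n^3) baseline (Python sketch for definition only, not the paper's O(n^2 log n) expected-time algorithm) is:

        import numpy as np

        def min_sum_product(A, B):
            # Min-sum ("tropical") matrix product: C[i, j] = min_k A[i, k] + B[k, j].
            n = A.shape[0]
            C = np.empty((n, n))
            for i in range(n):
                # Broadcasting forms A[i, k] + B[k, j] for all k, j at once.
                C[i] = (A[i][:, None] + B).min(axis=0)
            return C

        rng = np.random.default_rng(0)
        A, B = rng.random((4, 4)), rng.random((4, 4))
        print(min_sum_product(A, B))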

  5. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study

    PubMed Central

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-01-01

    Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598

  6. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    PubMed

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
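
    The rank-order sensitivity analysis used here can be sketched as follows (Python, with a toy stand-in for the life-table output; the input distributions and the output formula are hypothetical, chosen only to show the Spearman-correlation mechanics):

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n = 10_000
        inputs = {
            "exposure_response": rng.lognormal(np.log(1e-3), 0.3, n),
            "discount_rate": rng.uniform(0.00, 0.05, n),
            "pm25_exposure": rng.normal(9.0, 1.5, n),   # ug/m3, hypothetical
        }
        # Toy stand-in for the life-table output (years of life lost):
        yll = (inputs["exposure_response"] * inputs["pm25_exposure"]
               / (1.0 + inputs["discount_rate"]) ** 10)

        for name, x in inputs.items():
            rho, _ = spearmanr(x, yll)   # rank-order correlation with the output
            print(f"{name:18s} rho = {rho:+.2f}")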

  7. WE-FG-206-06: Dual-Input Tracer Kinetic Modeling and Its Analog Implementation for Dynamic Contrast-Enhanced (DCE-) MRI of Malignant Mesothelioma (MPM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Rimner, A; Hayes, S

    Purpose: To use dual-input tracer kinetic modeling of the lung for mapping spatial heterogeneity of various kinetic parameters in malignant MPM. Methods: Six MPM patients received DCE-MRI as part of their radiation therapy simulation scan. 5 patients had the epitheloid subtype of MPM, while one was biphasic. A 3D fast-field echo sequence with TR/TE/flip angle of 3.62ms/1.69ms/15° was used for DCE-MRI acquisition. The scan was collected for 5 minutes with a temporal resolution of 5-9 seconds depending on the spatial extent of the tumor. A principal component analysis-based groupwise deformable registration was used to co-register all the DCE-MRI series for motion compensation. All the images were analyzed using five different dual-input tracer kinetic models implemented in analog continuous-time formalism: the Tofts-Kety (TK), extended TK (ETK), two-compartment exchange (2CX), adiabatic approximation to the tissue homogeneity (AATH), and distributed parameter (DP) models. The following parameters were computed for each model: total blood flow (BF), pulmonary flow fraction (γ), pulmonary blood flow (BF-pa), systemic blood flow (BF-a), blood volume (BV), mean transit time (MTT), permeability-surface area product (PS), fractional interstitial volume (vi), extraction fraction (E), volume transfer constant (Ktrans) and efflux rate constant (kep). Results: Although the majority of patients had epitheloid histologies, kinetic parameter values varied across different models. One patient showed a higher total BF value in all models among the epitheloid histologies, although the γ value varied among these different models. In one tumor with a large area of necrosis, the TK and ETK models showed higher E, Ktrans, and kep values and lower interstitial volume as compared to the AATH, DP and 2CX models. Kinetic parameters such as BF-pa, BF-a, PS, and Ktrans values were higher in the surviving group compared to the non-surviving group across most models. Conclusion: Dual-input tracer kinetic modeling is feasible in determining micro-vascular characteristics of MPM. This project was supported by Cycle for Survival and MSK Imaging and Radiation Science (IMRAS) grants.

  8. Suitability of [18F]altanserin and PET to determine 5-HT2A receptor availability in the rat brain: in vivo and in vitro validation of invasive and non-invasive kinetic models.

    PubMed

    Kroll, Tina; Elmenhorst, David; Matusch, Andreas; Wedekind, Franziska; Weisshaupt, Angela; Beer, Simone; Bauer, Andreas

    2013-08-01

    While the selective 5-hydroxytryptamine type 2a receptor (5-HT2AR) radiotracer [18F]altanserin is well established in humans, the present study evaluated its suitability for quantifying cerebral 5-HT2ARs with positron emission tomography (PET) in albino rats. Ten Sprague Dawley rats underwent 180 min PET scans with arterial blood sampling. Reference tissue methods were evaluated on the basis of invasive kinetic models with metabolite-corrected arterial input functions. In vivo 5-HT2AR quantification with PET was validated by in vitro autoradiographic saturation experiments in the same animals. Overall brain uptake of [18F]altanserin was reliably quantified by invasive and non-invasive models with the cerebellum as reference region shown by linear correlation of outcome parameters. Unlike in humans, no lipophilic metabolites occurred so that brain activity derived solely from parent compound. PET data correlated very well with in vitro autoradiographic data of the same animals. [18F]Altanserin PET is a reliable tool for in vivo quantification of 5-HT2AR availability in albino rats. Models based on both blood input and reference tissue describe radiotracer kinetics adequately. Low cerebral tracer uptake might, however, cause restrictions in experimental usage.

  9. Exploring the Impact of Different Input Data Types on Soil Variable Estimation Using the ICRAF-ISRIC Global Soil Spectral Database.

    PubMed

    Aitkenhead, Matt J; Black, Helaina I J

    2018-02-01

    Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators for agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.

  10. Web interface for Brownian dynamics simulation of ion transport and its applications to beta-barrel pores.

    PubMed

    Lee, Kyu Il; Jo, Sunhwan; Rui, Huan; Egwolf, Bernhard; Roux, Benoît; Pastor, Richard W; Im, Wonpil

    2012-01-30

    Brownian dynamics (BD) based on accurate potential of mean force is an efficient and accurate method for simulating ion transport through wide ion channels. Here, a web-based graphical user interface (GUI) is presented for carrying out grand canonical Monte Carlo (GCMC) BD simulations of channel proteins: http://www.charmm-gui.org/input/gcmcbd. The webserver is designed to help users avoid most of the technical difficulties and issues encountered in setting up and simulating complex pore systems. GCMC/BD simulation results for three proteins, the voltage dependent anion channel (VDAC), α-Hemolysin (α-HL), and the protective antigen pore of the anthrax toxin (PA), are presented to illustrate the system setup, input preparation, and typical output (conductance, ion density profile, ion selectivity, and ion asymmetry). Two models for the input diffusion constants for potassium and chloride ions in the pore are compared: scaling of the bulk diffusion constants by 0.5, as deduced from previous all-atom molecular dynamics simulations of VDAC, and a hydrodynamics based model (HD) of diffusion through a tube. The HD model yields excellent agreement with experimental conductances for VDAC and α-HL, while scaling bulk diffusion constants by 0.5 leads to underestimates of 10-20%. For PA, simulated ion conduction values overestimate experimental values by a factor of 1.5-7 (depending on His protonation state and the transmembrane potential), implying that the currently available computational model of this protein requires further structural refinement. Copyright © 2011 Wiley Periodicals, Inc.

  11. Web Interface for Brownian Dynamics Simulation of Ion Transport and Its Applications to Beta-Barrel Pores

    PubMed Central

    Lee, Kyu Il; Jo, Sunhwan; Rui, Huan; Egwolf, Bernhard; Roux, Benoît; Pastor, Richard W.; Im, Wonpil

    2011-01-01

    Brownian dynamics (BD) in a suitably constructed potential of mean force is an efficient and accurate method for simulating ion transport through wide ion channels. Here, a web-based graphical user interface (GUI) is presented for grand canonical Monte Carlo (GCMC) BD simulations of channel proteins: http://www.charmm-gui.org/input/gcmcbd. The webserver is designed to help users avoid most of the technical difficulties and issues encountered in setting up and simulating complex pore systems. GCMC/BD simulation results for three proteins, the voltage dependent anion channel (VDAC), α-Hemolysin, and the protective antigen pore of the anthrax toxin (PA), are presented to illustrate system setup, input preparation, and typical output (conductance, ion density profile, ion selectivity, and ion asymmetry). Two models for the input diffusion constants for potassium and chloride ions in the pore are compared: scaling of the bulk diffusion constants by 0.5, as deduced from previous all-atom molecular dynamics simulations of VDAC; and a hydrodynamics based model (HD) of diffusion through a tube. The HD model yields excellent agreement with experimental conductances for VDAC and α-Hemolysin, while scaling bulk diffusion constants by 0.5 leads to underestimates of 10–20%. For PA, simulated ion conduction values overestimate experimental values by a factor of 1.5 to 7 (depending on His protonation state and the transmembrane potential), implying that the currently available computational model of this protein requires further structural refinement. PMID:22102176

  12. The economic effect of a physician assistant or nurse practitioner in rural America.

    PubMed

    Eilrich, Fred C

    2016-10-01

    Revenues generated by physician assistants (PAs) and NPs in clinics and hospitals create employment opportunities and wages, salaries, and benefits for staff, which in turn are circulated throughout the local economy. An input-output model was used to estimate the direct and secondary effects of a rural primary care PA or NP on the community and surrounding area. This type of model explains how input/output from one sector of industry can be the output/input for another sector. Given two example scenarios, a rural PA or NP can have an employment effect of 4.4 local jobs and labor income of $280,476 from the clinic. The total effect to a community with a hospital increases to 18.5 local jobs and $940,892 of labor income.
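
    The multiplier logic behind such estimates is the Leontief input-output relation x = (I − A)⁻¹ d; a minimal sketch (Python, with a hypothetical three-sector coefficient matrix, not the actual model used in the study) is:

        import numpy as np

        # Technical coefficients: input required from sector i per dollar of
        # sector j's output (hypothetical values).
        A = np.array([[0.10, 0.05, 0.02],
                      [0.20, 0.10, 0.10],
                      [0.05, 0.15, 0.05]])
        d = np.array([1.0, 0.0, 0.0])   # $1 of new final demand at the clinic

        # Total output needed across all sectors: direct plus secondary effects.
        x = np.linalg.solve(np.eye(3) - A, d)
        print(f"output multiplier = {x.sum():.2f}")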

  13. Finite element analysis of the femur during stance phase of gait based on musculoskeletal model simulation.

    PubMed

    Seo, Jeong-Woo; Kang, Dong-Won; Kim, Ju-Young; Yang, Seung-Tae; Kim, Dae-Hyeok; Choi, Jin-Seung; Tack, Gye-Rae

    2014-01-01

    In this study, the accuracy of the inputs required for finite element analysis, which is mainly used for the biomechanical analysis of bones, was improved. To ensure a muscle force and joint contact force similar to the actual values, a musculoskeletal model based on an actual gait experiment was used. Gait data were obtained from a healthy male adult aged 29 who had no history of musculoskeletal disease and walked normally (171 cm height and 72 kg weight), and were used as inputs for the musculoskeletal model simulation to determine the muscle force and joint contact force. Gait is the most common activity in daily life, and among its phases the stance phase is the most affected by the load. The results were extracted at five events in the stance phase: heel contact (ST1), loading response (ST2), early mid-stance (ST3), late mid-stance (ST4), and terminal stance (ST5). The results were used as the inputs for the finite element model that was formed using computed tomography (CT) images at 1.5 mm intervals, and the maximum Von Mises stress and the maximum Von Mises strain of the right femur were examined. The maximum stress and strain were lowest at ST4. The maximum values for the femur occurred in the medial part and then in the lateral part after mid-stance. In this study, the results of the musculoskeletal model simulation using inverse-dynamic analysis were utilized to improve the accuracy of the inputs, which affect the finite element analysis results, and the possibility of bone-specific analysis over the course of the stance phase was examined.

  14. Ising formulation of associative memory models and quantum annealing recall

    NASA Astrophysics Data System (ADS)

    Santra, Siddhartha; Shehab, Omar; Balu, Radhakrishnan

    2017-12-01

    Associative memory models, in theoretical neuro- and computer sciences, can generally store at most a linear number of memories. Recalling memories in these models can be understood as retrieval of the energy-minimizing configuration of classical Ising spins, closest in Hamming distance to an imperfect input memory, where the energy landscape is determined by the set of stored memories. We present an Ising formulation for associative memory models and consider the problem of memory recall using quantum annealing. We show that allowing for input-dependent energy landscapes allows storage of up to an exponential number of memories (in terms of the number of neurons). Further, we show how quantum annealing may naturally be used for recall tasks in such input-dependent energy landscapes, although the recall time may increase with the number of stored memories. Theoretically, we obtain the radius of attractor basins R(N) and the capacity C(N) of such a scheme and their tradeoffs. Our calculations establish that for randomly chosen memories the capacity of our model using the Hebbian learning rule as a function of problem size can be expressed as C(N) = O(e^(C1·N)), C1 ≥ 0, and succeeds on randomly chosen memory sets with a probability of 1 − e^(−C2·N), C2 ≥ 0, with C1 + C2 = (0.5 − f)²/(1 − f), where f = R(N)/N, 0 ≤ f ≤ 0.5, is the radius of attraction in terms of the Hamming distance of an input probe from a stored memory as a fraction of the problem size. We demonstrate the application of this scheme on a programmable quantum annealing device, the D-Wave processor.
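
    For contrast with the input-dependent landscapes studied here, the classical Hebbian scheme with its linear capacity can be sketched directly (Python; the network size, number of memories, and corruption level are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 64, 4
        memories = rng.choice([-1, 1], size=(P, N))

        W = (memories.T @ memories) / N      # Hebbian learning rule
        np.fill_diagonal(W, 0.0)

        probe = memories[0].copy()
        probe[:10] *= -1                     # corrupt 10 of 64 bits
        for _ in range(20):                  # iterate down the energy landscape
            probe = np.sign(W @ probe).astype(int)
            probe[probe == 0] = 1

        print("overlap with stored memory:", (probe == memories[0]).mean())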

  15. African crop yield reductions due to increasingly unbalanced Nitrogen and Phosphorus consumption

    NASA Astrophysics Data System (ADS)

    van der Velde, Marijn; Folberth, Christian; Balkovič, Juraj; Ciais, Philippe; Fritz, Steffen; Janssens, Ivan A.; Obersteiner, Michael; See, Linda; Skalský, Rastislav; Xiong, Wei; Peñuealas, Josep

    2014-05-01

    The impact of soil nutrient depletion on crop production has been known for decades, but robust assessments of the impact of increasingly unbalanced nitrogen (N) and phosphorus (P) application rates on crop production are lacking. Here, we use crop response functions based on 741 FAO maize crop trials and EPIC crop modeling across Africa to examine maize yield deficits resulting from unbalanced N:P applications under low, medium, and high input scenarios, for past (1975), current, and future N:P mass ratios of, respectively, 1:0.29, 1:0.15, and 1:0.05. At low N inputs (10 kg/ha), current yield deficits amount to 10% but will increase up to 27% under the assumed future N:P ratio, while at medium N inputs (50 kg N/ha), future yield losses could amount to over 40%. The EPIC crop model was then used to simulate maize yields across Africa. The model results showed relative median future yield reductions of 40% at low N inputs, and 50% at medium and high inputs, albeit with large spatial variability. Dominant low-quality soils such as Ferralsols, which strongly adsorb P, and Arenosols, which have a low nutrient retention capacity, are associated with a strong yield decline, although Arenosols show very variable crop yield losses at low inputs. Optimal N:P ratios, i.e. those where the lowest amount of applied P produces the highest yield (given the N input), were calculated with EPIC to be as low as 1:0.5. Finally, we estimated the additional P required given current N inputs, and given N inputs that would allow Africa to close yield gaps (ca. 70%). At current N inputs, P consumption would have to increase 2.3-fold to be optimal, and 11.7-fold to close yield gaps. The P demand to overcome these yield deficits would put significant additional pressure on current global extraction of P resources.

  16. Benchmark Simulation of Natural Circulation Cooling System with Salt Working Fluid Using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, K. K.; Scarlat, R. O.; Hu, R.

    Liquid salt-cooled reactors, such as the Fluoride Salt-Cooled High-Temperature Reactor (FHR), offer passive decay heat removal through natural circulation using Direct Reactor Auxiliary Cooling System (DRACS) loops. The behavior of such systems should be well-understood through performance analysis. The advanced system thermal-hydraulics tool System Analysis Module (SAM) from Argonne National Laboratory has been selected for this purpose. The work presented here is part of a larger study in which SAM modeling capabilities are being enhanced for the system analyses of FHR or Molten Salt Reactors (MSR). Liquid salt thermophysical properties have been implemented in SAM, as well as properties of Dowtherm A, which is used as a simulant fluid for scaled experiments, for future code validation studies. Additional physics modules to represent phenomena specific to salt-cooled reactors, such as freezing of coolant, are being implemented in SAM. This study presents a useful first benchmark for the applicability of SAM to liquid salt-cooled reactors: it provides steady-state and transient comparisons for a salt reactor system. A RELAP5-3D model of the Mark-1 Pebble-Bed FHR (Mk1 PB-FHR), and in particular its DRACS loop for emergency heat removal, provides steady state and transient results for flow rates and temperatures in the system that are used here for code-to-code comparison with SAM. The transient studied is a loss of forced circulation with SCRAM event. To the knowledge of the authors, this is the first application of SAM to FHR or any other molten salt reactors. While building these models in SAM, any gaps in the code’s capability to simulate such systems are identified and addressed immediately, or listed as future improvements to the code.

  17. Modeling Transients and Designing a Passive Safety System for a Nuclear Thermal Rocket Using Relap5

    NASA Astrophysics Data System (ADS)

    Khatry, Jivan

    Long-term, high-payload missions necessitate nuclear space propulsion. Several nuclear reactor types were investigated by the Nuclear Engine for Rocket Vehicle Application (NERVA) program of the National Aeronautics and Space Administration (NASA). Study of planned and unplanned transients on nuclear thermal rockets is important due to the need for long-term missions. A NERVA design known as the Pewee I was selected for this purpose. The following transients were run: (i) modeling of corrosion-induced blockages on the peripheral fuel element coolant channels and their impact on radiation heat transfer in the core, and (ii) modeling of loss-of-flow accidents (LOFAs) and their impact on radiation heat transfer in the core. For part (i), the radiation heat transfer rate of blocked channels increases while that of their neighbors decreases. For part (ii), the core radiation heat transfer rate increases as the flow rate through the rocket system is decreased, but decreases during a complete LOFA. In that situation, the peripheral fuel element coolant channels handle the majority of the radiation heat transfer. Recognizing the LOFA as the most severe design basis accident, a passive safety system was designed to respond to such a transient. This design utilizes the already existing tie rod tubes and connects them to a radiator in a closed loop; it is essentially a secondary loop. The size of the core is unchanged. During normal steady-state operation, this secondary loop keeps the moderator cool. Results show that the safety system is able to remove the decay heat and prevent the fuel elements from melting in response to a LOFA and subsequent SCRAM.

  18. Ability of crime, demographic and business data to forecast areas of increased violence.

    PubMed

    Bowen, Daniel A; Mercer Kollar, Laura M; Wu, Daniel T; Fraser, David A; Flood, Charles E; Moore, Jasmine C; Mays, Elizabeth W; Sumner, Steven A

    2018-05-24

    Identifying geographic areas and time periods of increased violence is of considerable importance in prevention planning. This study compared the performance of multiple data sources to prospectively forecast areas of increased interpersonal violence. We used 2011-2014 data from a large metropolitan county on interpersonal violence (homicide, assault, rape and robbery) and forecasted violence at the level of census block-groups and over a one-month moving time window. Inputs to a Random Forest model included historical crime records from the police department, demographic data from the US Census Bureau, and administrative data on licensed businesses. Among 279 block groups, a model utilizing all data sources was found to prospectively improve the identification of the top 5% most violent block-group months (positive predictive value = 52.1%; negative predictive value = 97.5%; sensitivity = 43.4%; specificity = 98.2%). Predictive modelling with simple inputs can help communities more efficiently focus violence prevention resources geographically.
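
    A minimal version of the forecasting setup can be sketched as follows (Python with scikit-learn; the synthetic features stand in for the crime-history, census, and business inputs, and the simple 2/3-1/3 split is illustrative rather than the study's prospective moving-window design):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import precision_score

        rng = np.random.default_rng(0)
        n = 279 * 36                          # block groups x months (hypothetical)
        X = rng.normal(size=(n, 12))          # e.g., lagged crime counts, poverty
                                              # rate, density of licensed businesses
        risk = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, n)
        y = (risk > np.quantile(risk, 0.95)).astype(int)   # top-5% violent label

        split = int(n * 2 / 3)
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        clf.fit(X[:split], y[:split])
        print("PPV:", precision_score(y[split:], clf.predict(X[split:])))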

  19. RNA signal amplifier circuit with integrated fluorescence output.

    PubMed

    Akter, Farhima; Yokobayashi, Yohei

    2015-05-15

    We designed an in vitro signal amplification circuit that takes a short RNA input that catalytically activates the Spinach RNA aptamer to produce a fluorescent output. The circuit consists of three RNA strands: an internally blocked Spinach aptamer, a fuel strand, and an input strand (catalyst), as well as the Spinach aptamer ligand 3,5-difluoro-4-hydroxybenzylidene imidazolinone (DFHBI). The input strand initially displaces the internal inhibitory strand to activate the fluorescent aptamer while exposing a toehold to which the fuel strand can bind to further displace and recycle the input strand. Under favorable conditions, one input strand was able to activate up to five molecules of the internally blocked Spinach aptamer in 185 min at 30 °C. The simple RNA circuit reported here serves as a model for catalytic activation of arbitrary RNA effectors by chemical triggers.

  20. Modeling the transport of nitrogen in an NPP-2006 reactor circuit

    NASA Astrophysics Data System (ADS)

    Stepanov, O. E.; Galkin, I. Yu.; Sledkov, R. M.; Melekh, S. S.; Strebnev, N. A.

    2016-07-01

    Efficient radiation protection of the public and personnel requires detecting an accident-initiating event quickly. Specifically, if a heat-exchange tube in a steam generator is ruptured, the 16N radioactive nitrogen isotope, which contributes to a sharp increase in the steam activity before the turbine, may serve as the signaling component. This isotope is produced in the core coolant and is transported along the circulation circuit. The aim of the present study was to model the transport of 16N in the primary and the secondary circuits of a VVER-1000 reactor facility (RF) under nominal operation conditions. KORSAR/GP and RELAP5/Mod.3.2 codes were used to perform the calculations. Computational models incorporating the major components of the primary and the secondary circuits of an NPP-2006 RF were constructed. These computational models were subjected to cross-verification, and the calculation results were compared to the experimental data on the distribution of the void fraction over the steam generator height. The models were proven to be valid. It was found that the time of nitrogen transport from the core to the heat-exchange tube leak was no longer than 1 s under RF operation at a power level of 100% Nnom with all primary circuit pumps activated. The time of nitrogen transport from the leak to the γ-radiation detection unit under the same operating conditions was no longer than 9 s, and the nitrogen concentration in steam was no less than 1.4% (by mass) of its concentration at the reactor outlet. These values were obtained using conservative approaches to estimating the leak flow and the transport time, but the radioactive decay of nitrogen was not taken into account. Further research concerned with the calculation of thermohydraulic processes should be focused on modeling the transport of nitrogen under RF operation with some primary circuit pumps deactivated.

  1. Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.

    2006-07-01

    The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime-dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test predictions of the P-series of experiments (i.e., tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and the relevant underlying thermal-hydraulic and heat transfer models. (authors)

  2. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness to constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
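
    The PSO step can be illustrated with a generic constrained minimization (Python; the objective below is a made-up stand-in for the robust design criterion, with an amplitude bound |u| ≤ 2 enforced as a penalty, and is not the paper's formulation):

        import numpy as np

        def objective(u):
            info = np.sum(np.sin(u) ** 2)                              # stand-in "information"
            penalty = 100.0 * np.maximum(np.abs(u) - 2.0, 0.0).sum()  # |u| <= 2
            return -info + penalty

        rng = np.random.default_rng(0)
        n_particles, dim, iters = 30, 5, 200
        x = rng.uniform(-3.0, 3.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # Standard velocity update: inertia + cognitive + social terms.
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()

        print("best penalized objective:", pbest_f.min())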

  3. Energy Input Flux in the Global Quiet-Sun Corona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mac Cormack, Cecilia; Vásquez, Alberto M.; López Fuentes, Marcelo

    We present first results of a novel technique that provides, for the first time, constraints on the energy input flux at the coronal base (r ∼ 1.025 R⊙) of the quiet Sun at a global scale. By combining differential emission measure tomography of EUV images with global models of the coronal magnetic field, we estimate the energy input flux at the coronal base that is required to maintain thermodynamically stable structures. The technique is described in detail and first applied to data provided by the Extreme Ultraviolet Imager instrument, on board the Solar TErrestrial RElations Observatory mission, and the Atmospheric Imaging Assembly instrument, on board the Solar Dynamics Observatory mission, for two solar rotations with different levels of activity. Our analysis indicates that the typical energy input flux at the coronal base of magnetic loops in the quiet Sun is in the range ∼0.5–2.0 × 10⁵ erg s⁻¹ cm⁻², depending on the structure size and level of activity. A large fraction of this energy input, or even its totality, could be accounted for by Alfvén waves, as shown by recent independent observational estimates derived from determinations of the non-thermal broadening of spectral lines at the coronal base of quiet-Sun regions. This new tomography product will be useful for the validation of coronal heating models in magnetohydrodynamic simulations of the global corona.

  4. Limitations of JEDI Models | Jobs and Economic Development Impact Models |

    Science.gov Websites

    The Jobs and Economic Development Impact (JEDI) models are input-output based models for assessing economic impacts and jobs; they are not intended as a precise forecast and do not reflect many other economic impacts that could affect real-world impacts on jobs from a project (see Chapter 5, pp. 136-142).

  5. Model input and output files for the simulation of time of arrival of landfill leachate at the water table, Municipal Solid Waste Landfill Facility, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso County, Texas

    USGS Publications Warehouse

    Abeyta, Cynthia G.; Frenzel, Peter F.

    1999-01-01

    This report contains listings of model input and output files for the simulation of the time of arrival of landfill leachate at the water table from the Municipal Solid Waste Landfill Facility (MSWLF), about 10 miles northeast of downtown El Paso, Texas. This simulation was done by the U.S. Geological Survey in cooperation with the U.S. Department of the Army, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso, Texas. The U.S. Environmental Protection Agency-developed Hydrologic Evaluation of Landfill Performance (HELP) and Multimedia Exposure Assessment (MULTIMED) computer models were used to simulate the production of leachate by a landfill and transport of landfill leachate to the water table. Model input data files used with and output files generated by the HELP and MULTIMED models are provided in ASCII format on a 3.5-inch 1.44-megabyte IBM-PC compatible floppy disk.

  6. Computational models of O-LM cells are recruited by low or high theta frequency inputs depending on h-channel distributions

    PubMed Central

    Sekulić, Vladislav; Skinner, Frances K

    2017-01-01

    Although biophysical details of inhibitory neurons are becoming known, it is challenging to map these details onto function. Oriens-lacunosum/moleculare (O-LM) cells are inhibitory cells in the hippocampus that gate information flow, firing while phase-locked to theta rhythms. We build on our existing computational model database of O-LM cells to link model with function. We place our models in high-conductance states and modulate inhibitory inputs at a wide range of frequencies. We find preferred spiking recruitment of models at high (4–9 Hz) or low (2–5 Hz) theta depending on, respectively, the presence or absence of h-channels on their dendrites. This also depends on slow delayed-rectifier potassium channels, and preferred theta ranges shift when h-channels are potentiated by cyclic AMP. Our results suggest that O-LM cells can be differentially recruited by frequency-modulated inputs depending on specific channel types and distributions. This work exposes a strategy for understanding how biophysical characteristics contribute to function. DOI: http://dx.doi.org/10.7554/eLife.22962.001 PMID:28318488

  7. Baroclinic stabilization effect of the Atlantic-Arctic water exchange simulated by the eddy-permitting ocean model and global atmosphere-ocean model

    NASA Astrophysics Data System (ADS)

    Moshonkin, Sergey; Bagno, Alexey; Gritsun, Andrey; Gusev, Anatoly

    2017-04-01

    Numerical experiments were performed with the global atmosphere-ocean model INMCM5 (the version for the international project CMIP6; atmospheric resolution 2°x1.5°, 21 levels) and with the three-dimensional, free-surface, sigma-coordinate, eddy-permitting ocean circulation model for the Atlantic (from 30°S)-Arctic and Bering Sea domain (0.25° resolution, Institute of Numerical Mathematics Ocean Model, INMOM). The spatial resolution of the INMCM5 oceanic component is 0.5°x0.25°. Both models have 40 s-levels in the ocean. The INMCM5 simulations were first spun up to a stable state of the climate system; the model was then run for 180 years. In the INMOM experiment, CORE-II data for 1948-2009 were used. To compare the results of the two models, we selected the evolution of density and velocity anomalies in the 0-300 m active ocean layer near Fram Strait in the Greenland Sea, where oceanic cyclonic circulation influences the Atlantic-Arctic water exchange. Anomalies were computed with the climatic seasonal cycle removed, for time scales shorter than 30 years. We use Singular Value Decomposition (SVD) analysis for density-velocity anomalies with time lags from minus one to six months. Both models give the same physically stable result. They reveal that changes of heat and salt transports by the West Spitsbergen and East Greenland currents, caused by atmospheric forcing, produce baroclinic modes of velocity anomalies in the 0-300 m layer, thereby stabilizing the ocean response to the atmospheric forcing and keeping the water exchange between the North Atlantic and the Arctic Ocean near its climatological level. The first SVD mode of density-velocity anomalies is responsible for the cyclonic circulation variability. The second and third SVD modes stabilize the existing ocean circulation through anticyclonic vorticity generation. The second and third SVD modes contribute 35% of the total variance of density anomalies and 16-18% of the total variance of velocity anomalies in both the INMCM5 and INMOM results. The contribution of the first SVD mode to the total variance of velocity anomalies is 50% for INMCM5 but only 19% for INMOM. The research was done in the INM RAS. The INMOM model was supported by the Russian Foundation for Basic Research (grant №16-05-00534), and the INMCM model was supported by the Russian Science Foundation (grant №14-27-00126).

  8. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    EPA Science Inventory

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations…

  9. Evaluation of approaches focused on modelling of organic carbon stocks using the RothC model

    NASA Astrophysics Data System (ADS)

    Koco, Štefan; Skalský, Rastislav; Makovníková, Jarmila; Tarasovičová, Zuzana; Barančíková, Gabriela

    2014-05-01

    The aim of current efforts in the European area is the protection of soil organic matter, which is included in all relevant documents related to the protection of soil. The use of modelling of organic carbon stocks for anticipated climate change, respectively for land management can significantly help in short and long-term forecasting of the state of soil organic matter. RothC model can be applied in the time period of several years to centuries and has been tested in long-term experiments within a large range of soil types and climatic conditions in Europe. For the initialization of the RothC model, knowledge about the carbon pool sizes is essential. Pool size characterization can be obtained from equilibrium model runs, but this approach is time consuming and tedious, especially for larger scale simulations. Due to this complexity we search for new possibilities how to simplify and accelerate this process. The paper presents a comparison of two approaches for SOC stocks modelling in the same area. The modelling has been carried out on the basis of unique input of land use, management and soil data for each simulation unit separately. We modeled 1617 simulation units of 1x1 km grid on the territory of agroclimatic region Žitný ostrov in the southwest of Slovakia. The first approach represents the creation of groups of simulation units based on the evaluation of results for simulation unit with similar input values. The groups were created after the testing and validation of modelling results for individual simulation units with results of modelling the average values of inputs for the whole group. Tests of equilibrium model for interval in the range 5 t.ha-1 from initial SOC stock showed minimal differences in results comparing with result for average value of whole interval. Management inputs data from plant residues and farmyard manure for modelling of carbon turnover were also the same for more simulation units. Combining these groups (intervals of initial SOC stock, groups of plant residues inputs, groups of farmyard manure inputs), we created 661 simulation groups. Within the group, for all simulation units we used average values of inputs. Export of input data and modelling has been carried out manually in the graphic environment of RothC 26.3 v2.0 application for each group separately. SOC stocks were modeled for 661 groups of simulation units. For the second possibility we used RothC 26.3 version for DOS. The inputs for modelling were exported using VBA scripts in the environment of MS Access program. Equilibrium modelling for more variations of plant residues inputs was performed. Subsequently we selected the nearest value of total pool size to the real initial SOC stock value. All simulation units (1617) were automatically modeled by means of the predefined Batch File. The comparison of two methods of modelling showed spatial differentiation of results mainly with the increasing time of modelling period. In the time sequence, from initial period we mark the increasing the number of simulation units with differences in SOC stocks according to selected approaches. Observed differences suggest that the results of modelling obtained by inputs generalization should be taken into account with a certain degree of reserve. At large scales simulations it is more appropriate to use the DOS version of RothC 26.3 model which allows automated modelling. 
This reduces the time needed to operate the model, without the need to reduce the number of simulated units. Key words: soil organic carbon stock, modelling, RothC 26.3, agricultural soils, Slovakia. Acknowledgements: This work was supported by the Slovak Research and Development Agency under contracts No. APVV-0580-10 and APVV-0131-11.
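
    The automated DOS-mode workflow described above is straightforward to reproduce with a small driver script. The sketch below shows the general pattern, assuming a hypothetical executable name and a simplified per-unit input layout; the real RothC 26.3 input format and the authors' VBA export are not reproduced here.

```python
# Hypothetical sketch of the batch-automation pattern described above: one
# input file per simulation unit, then a single loop that invokes the RothC
# 26.3 DOS executable for each. The executable path, file names, and input
# layout are illustrative assumptions, not the actual RothC format.
import subprocess
from pathlib import Path

ROTHC_EXE = Path("rothc263.exe")   # assumed location of the DOS executable
RUN_DIR = Path("runs")

def write_unit_input(unit_id: int, soc_t0: float, plant_c: float, fym_c: float) -> Path:
    """Write a minimal per-unit input file (illustrative layout only)."""
    path = RUN_DIR / f"unit_{unit_id:04d}.dat"
    path.write_text(f"{soc_t0:.2f} {plant_c:.2f} {fym_c:.2f}\n")
    return path

def run_all(units):
    RUN_DIR.mkdir(exist_ok=True)
    for unit_id, soc_t0, plant_c, fym_c in units:
        in_file = write_unit_input(unit_id, soc_t0, plant_c, fym_c)
        if ROTHC_EXE.exists():
            # Equivalent to one line of the predefined batch file.
            subprocess.run([str(ROTHC_EXE), str(in_file)], check=True)

run_all([(1, 52.3, 3.1, 0.8), (2, 47.9, 2.7, 1.2)])  # two example units
```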

  10. Finding the Needles in the Haystacks: High-Fidelity Models of the Modern and Archean Solar System for Simulating Exoplanet Observations

    NASA Technical Reports Server (NTRS)

    Roberge, Aki; Rizzo, Maxime J.; Lincowski, Andrew P.; Arney, Giada N.; Stark, Christopher C.; Robinson, Tyler D.; Snyder, Gregory F.; Pueyo, Laurent; Zimmerman, Neil T.; Jansen, Tiffany; et al.

    2017-01-01

    We present two state-of-the-art models of the solar system, one corresponding to the present day and one to the Archean Eon 3.5 billion years ago. Each model contains spatial and spectral information for the star, the planets, and the interplanetary dust, extending to 50 au from the Sun and covering the wavelength range 0.3-2.5 micron. In addition, we created a spectral image cube representative of the astronomical backgrounds that will be seen behind deep observations of extrasolar planetary systems, including galaxies and Milky Way stars. These models are intended as inputs to high-fidelity simulations of direct observations of exoplanetary systems using telescopes equipped with high-contrast capability. They will help improve the realism of observation and instrument parameters that are required inputs to statistical observatory yield calculations, as well as guide development of post-processing algorithms for telescopes capable of directly imaging Earth-like planets.

  11. Study of steam condensation at sub-atmospheric pressure: setting a basic research using MELCOR code

    NASA Astrophysics Data System (ADS)

    Manfredini, A.; Mazzini, M.

    2017-11-01

    One of the most serious accidents that can occur in the experimental nuclear fusion reactor ITER is the break of one of the headers of the refrigeration system of the first wall of the Tokamak. This results in a water-steam mixture discharging into the vacuum vessel (VV), with consequent pressurization of this container. To prevent the pressure in the VV from exceeding 150 kPa absolute, a system discharges the steam into a suppression pool at an absolute pressure of 4.2 kPa. The computer codes used to analyze such accidents (e.g., RELAP5 or MELCOR) are not validated experimentally for these conditions. We therefore planned basic research aimed at obtaining experimental data for validating the heat transfer correlations used in these codes. After a thorough literature search on this topic, ACTA, in collaboration with the staff of ITER, defined the experimental matrix and performed the design of the experimental apparatus. For the thermal-hydraulic design of the experiments, we executed a series of calculations with MELCOR. This code, however, was used in an unconventional mode, with the development of models suited respectively to low and high steam flow-rate tests. The article concludes with a discussion of the placement of the experimental data within the map characterizing the phenomenon, showing the importance of the new knowledge acquired, particularly in the case of chugging.

  12. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

    Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricular signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also analyzed single bolus data obtained from five patients with coronary artery disease using both models, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference in distributed parameter estimates of myocardial blood flow was observed in these volunteers between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
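
    For readers unfamiliar with the Fermi approach mentioned above, the core idea is to model the tissue enhancement curve as the arterial input function convolved with a Fermi-shaped impulse response, reading the flow estimate off the fitted response amplitude. The sketch below illustrates that idea on synthetic curves; the three-parameter form and all numbers are illustrative assumptions, not the study's implementation.

```python
# Minimal sketch of Fermi-constrained deconvolution for myocardial perfusion,
# assuming a common three-parameter Fermi impulse response; the exact
# parameterization used in practice may differ.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 60, 1.0)                 # seconds, one frame per second
aif = np.exp(-0.5 * ((t - 10) / 3) ** 2)  # synthetic arterial input function
dt = t[1] - t[0]

def fermi_response(t, amp, tau0, k):
    """Fermi-shaped impulse residue function; amp approximates the flow."""
    return amp / (1.0 + np.exp(k * (t - tau0)))

def tissue_model(t, amp, tau0, k):
    """Tissue curve as convolution of the AIF with the Fermi response."""
    return dt * np.convolve(aif, fermi_response(t, amp, tau0, k))[: t.size]

# Synthetic "measured" tissue curve with noise, then fit.
true = tissue_model(t, 0.9, 8.0, 0.4)
meas = true + np.random.default_rng(0).normal(0, 0.005, t.size)
popt, _ = curve_fit(tissue_model, t, meas, p0=[0.5, 5.0, 0.2])
print("estimated amplitude (flow surrogate):", popt[0])
```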

  13. Maternal obesity downregulates myogenesis and beta-catenin signaling in fetal skeletal muscle.

    PubMed

    Tong, Jun F; Yan, Xu; Zhu, Mei J; Ford, Stephen P; Nathanielsz, Peter W; Du, Min

    2009-04-01

    Skeletal muscle is one of the primary tissues responsible for insulin resistance and type 2 diabetes (T2D). The fetal stage is crucial for skeletal muscle development. Obesity induces inflammatory responses, which might regulate myogenesis through Wnt/beta-catenin signaling. This study evaluated the effects of maternal obesity (>30% increase in body mass index) during pregnancy on myogenesis and the Wnt/beta-catenin and IKK/NF-kappaB pathways in fetal skeletal muscle using an obese pregnant sheep model. Nonpregnant ewes were assigned to a control group (C; fed 100% of National Research Council recommendations; n=5) or an obesogenic (OB; fed 150% of National Research Council recommendations; n=5) diet from 60 days before to 75 days after conception (term approximately 148 days), when fetal semitendinosus skeletal muscle was sampled for analyses. Myogenic markers including MyoD, myogenin, and desmin contents were reduced in OB compared with C fetal semitendinosus, indicating the downregulation of myogenesis. The diameter of primary muscle fibers was smaller in OB fetal muscle. Phosphorylation of GSK3beta was reduced in OB compared with C fetal semitendinosus. Although the beta-catenin level was lower in OB than C fetal muscle, more beta-catenin was associated with FOXO3a in the OB fetuses. Moreover, we found phosphorylation levels of IKKbeta and RelA/p65 were both increased in OB fetal muscle. In conclusion, our data showed that myogenesis and the Wnt/beta-catenin signaling pathway were downregulated, which might be due to the upregulation of inflammatory IKK/NF-kappaB signaling pathways in fetal muscle of obese mothers.

  14. Accident Analysis for the NIST Research Reactor Before and After Fuel Conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baek, J.; Diamond, D.; Cuadra, A.

    Postulated accidents have been analyzed for the 20 MW D2O-moderated research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The analysis has been carried out for the present core, which contains high enriched uranium (HEU) fuel, and for a proposed equilibrium core with low enriched uranium (LEU) fuel. The analyses employ state-of-the-art calculational methods. Three-dimensional Monte Carlo neutron transport calculations were performed with the MCNPX code to determine homogenized fuel compositions in the lower and upper halves of each fuel element and to determine the resulting neutronic properties of the core. The accident analysis employed a model of the primary loop with the RELAP5 code. The model includes the primary pumps, shutdown pumps outlet valves, heat exchanger, fuel elements, and flow channels for both the six inner and twenty-four outer fuel elements. Evaluations were performed for the following accidents: (1) control rod withdrawal startup accident, (2) maximum reactivity insertion accident, (3) loss-of-flow accident resulting from loss of electrical power with an assumption of failure of shutdown cooling pumps, (4) loss-of-flow accident resulting from a primary pump seizure, and (5) loss-of-flow accident resulting from inadvertent throttling of a flow control valve. In addition, natural circulation cooling at low power operation was analyzed. The analysis shows that the conversion will not lead to significant changes in the safety analysis and the calculated minimum critical heat flux ratio and maximum clad temperature assure that there is adequate margin to fuel failure.

  15. Code Development and Assessment for Reactor Outage Thermal-Hydraulic and Safety Analysis - Midloop Operation with Loss of Residual Heat Removal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Thomas K.S.; Ko, F.-K.

    Although only a few percent of residual power remains during plant outages, the associated risk of core uncovery and corresponding fuel overheating has been identified to be relatively high, particularly under midloop operation (MLO) in pressurized water reactors. However, to analyze the system behavior during outages, the tools currently available, such as RELAP5, RETRAN, etc., cannot easily perform the task. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as MLO with the loss of residual heat removal (RHR), was developed. All important thermal-hydraulic processes involved during MLO with the loss of RHR will be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. Important processes during MLO with loss of RHR involve a pressurizer insurge caused by the hot-leg flooding, reflux condensation, liquid holdup inside the steam generator, loop-seal clearance, core-level depression, etc. Since the accuracy of the pressure distribution from the classical nodal momentum approach will be degraded when the system is stratified and under atmospheric pressure, the two-region approach with a modified two-fluid model will be the theoretical basis of the new program to analyze the nuclear steam supply system during plant outages. To verify the analytical model in the first step, posttest calculations against the closed integral midloop experiments with loss of RHR were performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility (IIST) test data is demonstrated.

  16. The evaluation of reproductive health PhD program in Iran: The input indicators analysis.

    PubMed

    AbdiShahshahani, Mahshid; Ehsanpour, Soheila; Yamani, Nikoo; Kohan, Shahnaz

    2014-11-01

    Appropriate quality achievement of a PhD program requires frequent assessment and discovery of the shortcomings in the program. Inputs, which are important elements of the curriculum, are frequently missed in evaluations. The purpose of this study was to evaluate the input indicators of the reproductive health PhD program in Iran based on the Context, Input, Process, and Product (CIPP) evaluation model. This is a descriptive and evaluative study based on the CIPP evaluation model. It was conducted in 2013 in four Iranian schools of nursing and midwifery of medical sciences universities. The statistical population consisted of four groups: heads of departments (n = 5), faculty members (n = 18), graduates (n = 12), and PhD students of reproductive health (n = 54). Data collection tools were five separate questionnaires including 37 indicators that were developed by the researcher. Content and face validity were evaluated based on the experts' judgments. The Cronbach's alpha coefficient was calculated in order to obtain the reliability of the questionnaires. Collected data were analyzed by SPSS software, using descriptive statistics (mean, frequency, percentage, and standard deviation), one-way analysis of variance (ANOVA), and least significant difference (LSD) post hoc tests to compare means between groups. The results of the study indicated that the highest percentage of the heads of departments (80%), graduates (66.7%), and students (68.5%) evaluated the status of input indicators of the reproductive health PhD program as relatively appropriate, while most of the faculty members (66.7%) evaluated it as appropriate. It is suggested that the reasons for this relatively appropriate evaluation of input indicators be explored through further academic research and that the reproductive health PhD program be improved accordingly.

  17. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    PubMed

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. The well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
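
    As background to the parameters discussed above, a dual-input model drives the tissue compartment with a mixture of arterial and portal-venous concentrations weighted by the HPI. A minimal sketch of a dual-input extended Tofts forward model follows, with synthetic input curves and placeholder parameter values; delays, hematocrit correction, and the fitting procedure are omitted.

```python
# Sketch of a dual-input extended Tofts tissue model: the vascular input is a
# weighted mix of arterial and portal-venous concentrations, with weight HPI.
# Curve shapes and parameter values are synthetic illustrations only.
import numpy as np

t = np.arange(0, 300, 2.0)                      # seconds
Ca = np.exp(-0.5 * ((t - 30) / 8) ** 2)         # synthetic arterial input
Cpv = 0.7 * np.exp(-0.5 * ((t - 55) / 15) ** 2) # synthetic portal-venous input
dt = t[1] - t[0]

def dual_input_tofts(Ktrans, kep, vp, hpi):
    Cin = hpi * Ca + (1.0 - hpi) * Cpv
    # ve is implied by Ktrans / kep in this parameterization.
    irf = np.exp(-kep * t)                      # exponential leakage kernel
    Ce = Ktrans * dt * np.convolve(Cin, irf)[: t.size]
    return vp * Cin + Ce

Ct = dual_input_tofts(Ktrans=0.25 / 60, kep=0.9 / 60, vp=0.05, hpi=0.66)
print("peak tissue concentration:", Ct.max())
```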

  18. The Dynamic General Vegetation Model MC1 over the United States and Canada at a 5-arcminute resolution: model inputs and outputs

    Treesearch

    Ray Drapek; John B. Kim; Ronald P. Neilson

    2015-01-01

    Land managers need to include climate change in their decisionmaking, but the climate models that project future climates operate at spatial scales that are too coarse to be of direct use. To create a dataset more useful to managers, soil and historical climate were assembled for the United States and Canada at a 5-arcminute grid resolution. Nine CMIP3 future climate...

  19. Effects of input uncertainty on cross-scale crop modeling

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas of Burkina Faso in West Africa. We test the models' response to different levels of input data, from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity in maize cropping systems.
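
    The Taylor-diagram comparison used above reduces each simulation to three summary statistics against observations: the correlation coefficient, the ratio of standard deviations, and the centered RMS difference. A minimal sketch, with illustrative yield values:

```python
# Compute the three statistics a Taylor diagram displays for one simulation
# against observations: correlation, standard-deviation ratio, and centered
# RMS difference. Values below are illustrative, not the study's data.
import numpy as np

def taylor_stats(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    sd_ratio = sim.std() / obs.std()
    crmsd = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2))
    return r, sd_ratio, crmsd

obs = np.array([1.2, 0.8, 1.5, 2.1, 0.9])   # observed yields (t/ha), illustrative
sim = np.array([1.0, 0.9, 1.7, 1.8, 1.1])   # one model/input configuration
print(taylor_stats(sim, obs))
```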

  20. Storm Water Management Model User’s Manual Version 5.1 - manual

    EPA Science Inventory

    SWMM 5 provides an integrated environment for editing study area input data, running hydrologic, hydraulic and water quality simulations, and viewing the results in a variety of formats. These include color-coded drainage area and conveyance system maps, time series graphs and ta...

  1. Summary of papers on current and anticipated uses of thermal-hydraulic codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caruso, R.

    1997-07-01

    The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; and build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).

  2. Reconstruction of audio waveforms from spike trains of artificial cochlea models

    PubMed Central

    Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii

    2015-01-01

    Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging particularly for spikes from the mixed signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility; and further tested in a word recognition task. The reconstructed audio under low signal-to-noise (SNR) conditions (SNR < –5 dB) gives a better classification performance than the original SNR input in this word recognition task. PMID:26528113
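
    The offline reconstruction idea referenced above can be illustrated with a generic linear decoder: regress the audio waveform on a lagged design matrix of binned spike counts. The sketch below uses ridge regression on synthetic data and is only a schematic of the approach, not the paper's filter estimation.

```python
# Hedged sketch of linear stimulus reconstruction from binned spike trains:
# build a lagged design matrix of per-channel spike counts and fit a ridge
# regression to the audio waveform. Data here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_channels, n_lags = 2000, 16, 10
spikes = rng.poisson(1.0, size=(n_bins, n_channels)).astype(float)
audio = rng.standard_normal(n_bins)  # placeholder target waveform

# Lagged design matrix: each row holds the last n_lags bins of every channel.
X = np.hstack([np.roll(spikes, lag, axis=0) for lag in range(n_lags)])
X[:n_lags] = 0.0  # zero out wrapped-around rows

lam = 10.0  # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ audio)
recon = X @ w
print("reconstruction correlation:", np.corrcoef(recon, audio)[0, 1])
```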

  3. Yield model development project implementation plan

    NASA Technical Reports Server (NTRS)

    Ambroziak, R. A.

    1982-01-01

    Tasks remaining to be completed are summarized for the following major project elements: (1) evaluation of crop yield models; (2) crop yield model research and development; (3) data acquisition, processing, and storage; (4) related yield research (defining spectral and/or remote sensing data requirements; developing input for driving and testing crop growth/yield models; real-time testing of wheat plant process models); and (5) project management and support.

  4. Modeling Enclosure Design in Above-Grade Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lstiburek, J.; Ueno, K.; Musunuru, S.

    2016-03-01

    Building Science Corporation modeled typically well-performing wall assemblies using Wärme und Feuchte instationär (WUFI) Version 5.3 software and demonstrated that these models agree with historic experience when calibrated and modeled correctly. This technical report provides a library of WUFI modeling input data and results. Within the limits of existing experience, this information can be generalized for applications to a broad population of houses.

  5. Case studies in Bayesian microbial risk assessments.

    PubMed

    Kennedy, Marc C; Clough, Helen E; Turner, Joanne

    2009-12-21

    The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
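
    The second case study's emulator idea can be sketched as follows: fit a Gaussian-process approximation to the simulator from a modest number of training runs, then probe each input's main effect on the cheap emulator instead of the expensive code. The toy "simulator", input ranges, and variance summary below are illustrative assumptions, not the VTEC herd model.

```python
# Sketch of emulator-based sensitivity screening: train a Gaussian process on
# known input/output pairs, then sweep one input at a time with the others
# held at their means to rank main-effect variances.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
def code(x):  # stand-in for the expensive simulator
    return 3 * x[:, 0] + np.sin(5 * x[:, 1]) + 0.1 * x[:, 2]

X_train = rng.uniform(0, 1, size=(100, 3))   # 100 training runs, 3 inputs
y_train = code(X_train)
gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

for i in range(3):
    sweep = np.tile(X_train.mean(axis=0), (50, 1))
    sweep[:, i] = np.linspace(0, 1, 50)
    effect = gp.predict(sweep)
    print(f"input {i}: main-effect variance {effect.var():.3f}")
```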

  6. EPA ALPHA Modeling of a Conventional Mid-Size Car with CVT and Comparable Powertrain Technologies (SAE 2016-01-1141)

    EPA Science Inventory

    This paper presents the testing and ALPHA modeling of a CVT-equipped 2013 Nissan Altima 2.5S using comparable powertrain technology inputs in the effort to model the current and future U.S. light-duty vehicle fleet approximated using components with comparable levels of performan...

  7. Applications of Mars Global Reference Atmospheric Model (Mars-GRAM 2005) Supporting Mission Site Selection for Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering-level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. One new feature of Mars-GRAM 2005 is the 'auxiliary profile' option. In this option, an input file of temperature and density versus altitude is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5)) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree L(s) bins, for each of three Mars years of TES nadir data. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate Mars Science Laboratory (MSL) landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  8. Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models

    NASA Astrophysics Data System (ADS)

    Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria

    2017-08-01

    In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin-spool turboshaft engine driving a variable pitch propeller includes various operating points. Variations in fuel flow and propeller pitch inputs produce different operating conditions which force the controller to adapt rapidly. The important operating points are the idle, cruise, and full thrust cases across the entire flight envelope. A multi-input multi-output (MIMO) version of second level adaptation using multiple models is developed. Also, a stability analysis using the Lyapunov method is presented. The proposed method is compared with two conventional techniques: first level adaptation and model reference adaptive control. Simulation results for the JetCat SPT5 turboshaft engine demonstrate the performance and fidelity of the proposed method.
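
    The generic multiple-model logic behind MMAC can be sketched briefly: maintain one identification model per operating point, score each by its current prediction error, and blend (or switch between) the corresponding controllers. The first-order models, gains, and softmax weighting below are illustrative stand-ins, not the paper's second-level adaptation law.

```python
# Illustrative MMAC skeleton: one identification model per operating point
# (idle, cruise, full thrust), credibility weights from prediction error,
# and a blended control action. All numbers are invented for illustration.
import numpy as np

class LocalModel:
    def __init__(self, a, b):
        self.a, self.b = a, b  # simple first-order plant approximation
    def predict(self, y_prev, u_prev):
        return self.a * y_prev + self.b * u_prev

models = [LocalModel(0.90, 0.10), LocalModel(0.80, 0.25), LocalModel(0.70, 0.40)]
gains = [2.0, 1.2, 0.8]  # one controller gain per operating-point model

def mmac_control(y_prev, u_prev, y_now, ref, beta=20.0):
    errors = np.array([abs(m.predict(y_prev, u_prev) - y_now) for m in models])
    weights = np.exp(-beta * errors)
    weights /= weights.sum()                     # softmax-style credibility
    u = sum(w * k * (ref - y_now) for w, k in zip(weights, gains))
    return u, weights

u, w = mmac_control(y_prev=0.5, u_prev=0.2, y_now=0.55, ref=1.0)
print("control input:", u, "model weights:", w)
```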

  9. FLUXCOM - Overview and First Synthesis

    NASA Astrophysics Data System (ADS)

    Jung, M.; Ichii, K.; Tramontana, G.; Camps-Valls, G.; Schwalm, C. R.; Papale, D.; Reichstein, M.; Gans, F.; Weber, U.

    2015-12-01

    We present a community effort aiming at generating an ensemble of global gridded flux products by upscaling FLUXNET data using an array of different machine learning methods including regression/model tree ensembles, neural networks, and kernel machines. We produced products for gross primary production, terrestrial ecosystem respiration, net ecosystem exchange, latent heat, sensible heat, and net radiation for two experimental protocols: 1) at a high spatial and 8-daily temporal resolution (5 arc-minute) using only remote sensing based inputs for the MODIS era; 2) 30 year records of daily, 0.5 degree spatial resolution by incorporating meteorological driver data. Within each set-up, all machine learning methods were trained with the same input data for carbon and energy fluxes respectively. Sets of input driver variables were derived using an extensive formal variable selection exercise. The performance of the extrapolation capacities of the approaches is assessed with a fully internally consistent cross-validation. We perform cross-consistency checks of the gridded flux products with independent data streams from atmospheric inversions (NEE), sun-induced fluorescence (GPP), catchment water balances (LE, H), satellite products (Rn), and process-models. We analyze the uncertainties of the gridded flux products and for example provide a breakdown of the uncertainty of mean annual GPP originating from different machine learning methods, different climate input data sets, and different flux partitioning methods. The FLUXCOM archive will provide an unprecedented source of information for water, energy, and carbon cycle studies.

  10. Description and availability of the SMARTS spectral model for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Myers, Daryl R.; Gueymard, Christian A.

    2004-11-01

    Limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.
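
    A typical downstream PV use of such spectra is to weight the modeled spectral irradiance by a device's spectral response and integrate over wavelength. The sketch below does this with synthetic stand-ins for a SMARTS output spectrum and an idealized silicon response band.

```python
# Weight a (synthetic) modeled spectrum by a device spectral response and
# integrate over wavelength to get a photocurrent density. Both curves are
# placeholders, not actual SMARTS output or a measured device response.
import numpy as np

wavelength = np.arange(300, 1700, 5.0)                    # nm
irradiance = np.interp(wavelength, [300, 500, 1000, 1700],
                       [0.2, 1.4, 0.9, 0.2])              # W m^-2 nm^-1 (synthetic)
response = np.where((wavelength > 350) & (wavelength < 1100),
                    0.5, 0.0)                             # A/W, idealized band

# Integrate response-weighted irradiance over wavelength -> A m^-2.
j_ph = np.trapz(irradiance * response, wavelength)
print(f"photocurrent density: {j_ph:.1f} A/m^2")
```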

  11. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements, and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a python parser processes the XML files generated by the mind maps, (3) Django (python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed xml and the METAFOR CIM, (4) python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant xml, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who we would never ordinarily get to interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development

  12. Implementation of ERDC HEP Geo-Material Model in CTH and Application

    DTIC Science & Technology

    2011-11-02

    - Used TARDEC JWL inputs for C4 and Johnson-Cook strength inputs
    - TARDEC JC fracture model inputs for 5083 plate changed due to problems seen in ... fracture inputs from IMD tests
    - LS-DYNA C4 JWL and Johnson-Cook strength inputs used in CTH runs
    - Results indicate that TARDEC JC fracture model ...

  13. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
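
    The overall selection-then-model pattern described above can be sketched with a simple single-variable mutual-information ranking followed by an SVM regressor; the paper's PMI, GA-ANN and IIS algorithms are considerably more sophisticated, so the code below illustrates the workflow only, on synthetic data.

```python
# Rank candidate inputs by mutual information with the target, keep the most
# informative few, and train an SVM regressor on the selected subset.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))            # 8 candidate flowmeter variables
y = 2 * X[:, 0] + np.sin(X[:, 3]) + 0.1 * rng.standard_normal(300)

mi = mutual_info_regression(X, y, random_state=0)
top = np.argsort(mi)[::-1][:3]               # keep the 3 most informative
print("selected variables:", top, "MI scores:", mi[top].round(3))

model = SVR(C=10.0).fit(X[:, top], y)
print("training R^2 on selected subset:", model.score(X[:, top], y))
```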

  14. Derated ion thruster design issues

    NASA Technical Reports Server (NTRS)

    Patterson, Michael J.; Rawlin, Vincent K.

    1991-01-01

    Preliminary activities to develop and refine a lightweight 30 cm engineering model ion thruster are discussed. The approach is to develop a 'derated' ion thruster capable of performing both auxiliary and primary propulsion roles over an input power range of at least 0.5 to 5.0 kW. Design modifications to a baseline thruster to reduce mass and volume are discussed. Performance data over an order of magnitude input power range are presented, with emphasis on the performance impact of engine throttling. Thruster design modifications to optimize performance over specific power envelopes are discussed. Additionally, lifetime estimates based on wear test measurements are made for the operation envelope of the engine.

  15. Social stress alters inhibitory synaptic input to distinct subpopulations of raphe serotonin neurons.

    PubMed

    Crawford, LaTasha K; Rahman, Shumaia F; Beck, Sheryl G

    2013-01-16

    Anxiety disorders are among the most prevalent psychiatric disorders, yet much is unknown about the underlying mechanisms. The dorsal raphe (DR) is at the crux of the anxiety-inducing effects of uncontrollable stress, a key component of models of anxiety. Though DR serotonin (5-HT) neurons play a prominent role, anxiety-associated changes in the physiology of 5-HT neurons remain poorly understood. A 5-day social defeat model of anxiety produced a multifaceted, anxious phenotype in intruder mice that included increased avoidance behavior in the open field test, increased stress-evoked grooming, and increased bladder and heart weights when compared to control mice. Intruders were further compared to controls using electrophysiology recordings conducted in midbrain slices wherein recordings targeted 5-HT neurons of the ventromedial (vmDR) and lateral wing (lwDR) subfields of the DR. Though defining membrane characteristics of 5-HT neurons were unchanged, γ-aminobutyric-acid-mediated (GABAergic) synaptic regulation of 5-HT neurons was altered in a topographically specific way. In the vmDR of intruders, there was a decrease in the frequency and amplitude of GABAergic spontaneous inhibitory postsynaptic currents (sIPSCs). However, in the lwDR, there was an increase in the strength of inhibitory signals due to slower sIPSC kinetics. Synaptic changes were selective for GABAergic input, as glutamatergic synaptic input was unchanged in intruders. The distinct inhibitory regulation of DR subfields provides a mechanism for increased 5-HT output in vmDR target regions and decreased 5-HT output in lwDR target regions, divergent responses to uncontrollable stress that have been reported in the literature but were previously poorly understood.

  16. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g., [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved in the two-model approach introduced in the first of the above references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  17. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to the model developers, analysts, and end users for assessing the MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS application. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
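
    The contrast between the standard rule and the proposed alternative can be made concrete with a toy calculation: score input pedigree by the single worst input (the NASA-STD-7009 rule) versus weighting each input's pedigree score by its output sensitivity. All numbers below are invented for illustration, not part of the standard or the presentation.

```python
# Toy comparison: worst-case input pedigree scoring vs. a hypothetical
# sensitivity-weighted score, so a low-quality but low-influence input does
# not dominate the credibility assessment. Values are illustrative.
import numpy as np

pedigree = np.array([4.0, 1.0, 3.0])      # per-input pedigree scores (0-4 scale)
sensitivity = np.array([0.7, 0.05, 0.25]) # normalized output sensitivity

worst_case = pedigree.min()                                  # standard rule
weighted = float(np.sum(pedigree * sensitivity) / sensitivity.sum())
print(f"min-based score: {worst_case}, sensitivity-weighted: {weighted:.2f}")
```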

  18. Effect of Increased Intensity of Physiotherapy on Patient Outcomes After Stroke: An Economic Literature Review and Cost-Effectiveness Analysis

    PubMed Central

    Chan, B

    2015-01-01

    Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
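
    A two-state Markov cohort model of the kind described reduces to a short loop: each annual cycle, the surviving fraction accrues discounted QALYs and costs, then transitions to death with some probability. The sketch below uses placeholder inputs, not the study's values.

```python
# Minimal two-state (alive/dead) Markov cohort sketch with annual cycles,
# QALY accumulation, and 5% discounting. All numbers are placeholders.
annual_mortality = 0.06   # probability of death per cycle (assumed)
utility = 0.70            # QALY weight while alive (assumed)
annual_cost = 1200.0      # cost per year alive (assumed)
discount = 0.05
horizon = 40              # years, approximating a lifetime timeframe

alive, qalys, costs = 1.0, 0.0, 0.0
for year in range(horizon):
    df = 1.0 / (1.0 + discount) ** year
    qalys += alive * utility * df
    costs += alive * annual_cost * df
    alive *= 1.0 - annual_mortality   # transition to the absorbing dead state

print(f"discounted QALYs: {qalys:.2f}, discounted cost: {costs:.0f}")
```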

  19. MODELING THE AMBIENT CONDITION EFFECTS OF AN AIR-COOLED NATURAL CIRCULATION SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Lisowski, Darius D.; Bucknor, Matthew

    The Reactor Cavity Cooling System (RCCS) is a passive safety concept under consideration for the overall safety strategy of advanced reactors such as the High Temperature Gas-Cooled Reactor (HTGR). One such variant, the air-cooled RCCS, uses natural convection to drive the flow of air from outside the reactor building to remove decay heat during normal operation and accident scenarios. The Natural convection Shutdown heat removal Test Facility (NSTF) at Argonne National Laboratory (“Argonne”) is a half-scale model of the primary features of one conceptual air-cooled RCCS design. The facility was constructed to carry out highly instrumented experiments to study the performance of the RCCS concept for reactor decay heat removal that relies on natural convection cooling. Parallel modeling and simulation efforts were performed to support the design, operation, and analysis of the natural convection system. Throughout the testing program, strong influences of ambient conditions were observed in the experimental data when baseline tests were repeated under the same test procedures. Thus, significant analysis efforts were devoted to gaining a better understanding of these influences and the subsequent response of the NSTF to ambient conditions. It was determined that air humidity had negligible impacts on NSTF system performance and therefore did not warrant consideration in the models. However, temperature differences between the building exterior and interior air, along with the outside wind speed, were shown to be dominant factors. Combining the stack and wind effects together, an empirical model was developed based on theoretical considerations and using experimental data to correlate zero-power system flow rates with ambient meteorological conditions. Some coefficients in the model were obtained by best fitting the experimental data. The predictive capability of the empirical model was demonstrated by applying it to a new set of experimental data. The empirical model was also implemented in the computational models of the NSTF using both RELAP5-3D and STAR-CCM+ codes. Accounting for the effects of ambient conditions, simulations from both codes predicted the natural circulation flow rates very well.
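
    A fit of the kind described, combining a stack term and a wind term, can be sketched as an ordinary least-squares problem. The functional form (square-root dependence on the indoor-outdoor temperature difference plus a linear wind term) and the data below are illustrative assumptions, not the NSTF correlation.

```python
# Fit an assumed empirical zero-power flow model: a buoyancy (stack) term
# proportional to sqrt(dT) plus a linear wind term. Data are synthetic.
import numpy as np

# Columns: indoor-outdoor temperature difference (K), wind speed (m/s),
# measured zero-power system flow rate (kg/s) -- example values only.
dT   = np.array([2.0, 5.0, 8.0, 12.0, 15.0])
wind = np.array([0.5, 2.0, 1.0, 3.0, 4.0])
flow = np.array([0.08, 0.16, 0.17, 0.27, 0.33])

A = np.column_stack([np.sqrt(dT), wind])     # stack ~ sqrt(dT), wind ~ linear
coeff, *_ = np.linalg.lstsq(A, flow, rcond=None)
print("fitted coefficients (stack, wind):", coeff.round(4))
print("predicted flow at dT=10 K, wind=2 m/s:",
      coeff[0] * np.sqrt(10) + coeff[1] * 2)
```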

  20. Modeling Streamflow and Water Temperature in the North Santiam and Santiam Rivers, Oregon, 2001-02

    USGS Publications Warehouse

    Sullivan, Annett B.; Rounds, Stewart A.

    2004-01-01

    To support the development of a total maximum daily load (TMDL) for water temperature in the Willamette Basin, the laterally averaged, two-dimensional model CE-QUAL-W2 was used to construct a water temperature and streamflow model of the Santiam and North Santiam Rivers. The rivers were simulated from downstream of Detroit and Big Cliff dams to the confluence with the Willamette River. Inputs to the model included bathymetric data, flow and temperature from dam releases, tributary flow and temperature, and meteorologic data. The model was calibrated for the period July 1 through November 21, 2001, and confirmed with data from April 1 through October 31, 2002. Flow calibration made use of data from two streamflow gages and travel-time and river-width data. Temperature calibration used data from 16 temperature monitoring locations in 2001 and 5 locations in 2002. A sensitivity analysis was completed by independently varying input parameters, including point-source flow, air temperature, flow and water temperature from dam releases, and riparian shading. Scenario analyses considered hypothetical river conditions without anthropogenic heat inputs, with restored riparian vegetation, with minimum streamflow from the dams, and with a more-natural seasonal water temperature regime from dam releases.

  1. Modeling Enclosure Design in Above-Grade Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lstiburek, J.; Ueno, K.; Musunuru, S.

    2016-03-01

    This report describes the modeling of typical wall assemblies that have performed well historically in various climate zones. The WUFI (Wärme und Feuchte instationär) software (Version 5.3) model was used. A library of input data and results is provided. The provided information can be generalized for application to a broad population of houses, within the limits of existing experience. The WUFI software model was calibrated or tuned using wall assemblies with historically successful performance. The primary performance criterion, or failure criterion, establishing historic performance was the moisture content of the exterior sheathing. The primary tuning parameters (simulation inputs) were airflow and the specification of appropriate material properties. Rational hygric loads were established based on experience - specifically rain wetting and interior moisture (RH levels). The tuning parameters were limited or bounded by published data or experience. The WUFI templates provided with this report supply useful information resources to new or less-experienced users. The files present various custom settings that will help avoid results that would require overly conservative enclosure assemblies. Overall, better material data, consistent initial assumptions, and consistent inputs among practitioners will improve the quality of WUFI modeling, and improve the level of sophistication in the field.

  2. Effects of Meteorological Data Quality on Snowpack Modeling

    NASA Astrophysics Data System (ADS)

    Havens, S.; Marks, D. G.; Robertson, M.; Hedrick, A. R.; Johnson, M.

    2017-12-01

    Detailed quality control of meteorological inputs is the most time-intensive component of running the distributed, physically-based iSnobal snow model, and the effect of data quality of the inputs on the model is unknown. The iSnobal model has been run operationally since WY2013, and is currently run in several basins in Idaho and California. The largest amount of user input during modeling is for the quality control of precipitation, temperature, relative humidity, solar radiation, wind speed and wind direction inputs. Precipitation inputs require detailed user input and are crucial to correctly model the snowpack mass. This research applies a range of quality control methods to meteorological input, from raw input with minimal cleaning, to complete user-applied quality control. The meteorological input cleaning generally falls into two categories. The first is global minimum/maximum and missing value correction that could be corrected and/or interpolated with automated processing. The second category is quality control for inputs that are not globally erroneous, yet are still unreasonable and generally indicate malfunctioning measurement equipment, such as temperature or relative humidity that remains constant, or does not correlate with daily trends observed at nearby stations. This research will determine how sensitive model outputs are to different levels of quality control and guide future operational applications.
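
    The first, automatable QC category described above (global range checks, short-gap interpolation) and a flat-line check from the second category can be sketched in a few lines; thresholds, window lengths, and the sentinel value are assumptions, not iSnobal's operational settings.

```python
# Sketch of automated QC for station temperature data: global min/max
# masking, short-gap interpolation, and a stuck-sensor (flat-line) flag.
import pandas as pd

def qc_temperature(series: pd.Series, tmin=-40.0, tmax=45.0, flat_window=12):
    s = series.where((series >= tmin) & (series <= tmax))  # range check
    s = s.interpolate(limit=4)                  # fill short gaps only
    flat = s.rolling(flat_window).std() == 0.0  # constant-value indicator
    return s, flat

raw = pd.Series([1.2, 1.4, -9999.0, 1.6, 2.0, 2.0, 2.0, 2.1],
                index=pd.date_range("2017-01-01", periods=8, freq="h"))
clean, flat_flag = qc_temperature(raw, flat_window=3)
print(clean)
```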

  3. Model design for predicting extreme precipitation event impacts on water quality in a water supply reservoir

    NASA Astrophysics Data System (ADS)

    Hagemann, M.; Jeznach, L. C.; Park, M. H.; Tobiason, J. E.

    2016-12-01

    Extreme precipitation events such as tropical storms and hurricanes are by their nature rare, yet have disproportionate and adverse effects on surface water quality. In the context of drinking water reservoirs, common concerns of such events include increased erosion and sediment transport and influx of natural organic matter and nutrients. As part of an effort to model the effects of an extreme precipitation event on water quality at the reservoir intake of a major municipal water system, this study sought to estimate extreme-event watershed responses including streamflow and exports of nutrients and organic matter for use as inputs to a 2-D hydrodynamic and water quality reservoir model. Since extreme-event watershed exports are highly uncertain, we characterized and propagated predictive uncertainty using a quasi-Monte Carlo approach to generate reservoir model inputs. Three storm precipitation depths, corresponding to recurrence intervals of 5, 50, and 100 years, were converted to streamflow in each of 9 tributaries by volumetrically scaling 2 storm hydrographs from the historical record. Rating-curve models for concentration, calibrated using 10 years of data for each of 5 constituents, were then used to estimate the parameters of a multivariate lognormal probability model of constituent concentrations, conditional on each scenario's storm date and streamflow. A quasi-random Halton sequence (n = 100) was drawn from the conditional distribution for each event scenario, and used to generate input files to a calibrated CE-QUAL-W2 reservoir model. The resulting simulated concentrations at the reservoir's drinking water intake constitute a low-discrepancy sample from the estimated uncertainty space of extreme-event source water quality. Limiting factors to the suitability of this approach include poorly constrained relationships between hydrology and constituent concentrations, a high-dimensional space from which to generate inputs, and a relatively long run time for the reservoir model. This approach proved useful in probing a water supply's resilience to extreme events, and in informing management responses, particularly in a region such as the American Northeast where climate change is expected to bring such events with higher frequency and intensity than have occurred in the past.
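
    The quasi-Monte Carlo step can be sketched directly: draw a scrambled Halton sequence, map it to standard normals, and push it through a multivariate lognormal model of constituent concentrations. The means and covariance below are placeholders, not the calibrated rating-curve parameters.

```python
# Low-discrepancy sampling from a multivariate lognormal: Halton uniforms ->
# standard normals -> correlated lognormal concentration sets (n = 100).
import numpy as np
from scipy.stats import qmc, norm

n, d = 100, 5                                   # 100 scenarios, 5 constituents
mu = np.log([3.0, 0.5, 1.2, 8.0, 2.0])          # log-space means (assumed)
L = np.linalg.cholesky(0.2 * np.eye(d) + 0.05)  # log-space covariance factor

u = qmc.Halton(d, scramble=True, seed=42).random(n)  # low-discrepancy uniforms
z = norm.ppf(u)                                 # to standard normal space
conc = np.exp(mu + z @ L.T)                     # correlated lognormal draws
print(conc.shape, conc.mean(axis=0).round(2))
```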

  4. Tolerance and UQ4SIM: Nimble Uncertainty Documentation and Analysis Software

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2008-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and variabilities is a necessary first step toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. The basic premise of uncertainty markup is to craft a tolerance and tagging mini-language that offers a natural, unobtrusive presentation and does not depend on parsing each type of input file format. Each file is marked up with tolerances and, optionally, associated tags that serve to label the parameters and their uncertainties. The evolution of such a language, often called a Domain Specific Language or DSL, is given in [1], but in final form it parallels tolerances specified on an engineering drawing, e.g., 1 +/- 0.5, 5 +/- 10%, 2 +/- 1o, where % signifies percent and o signifies order of magnitude. Tags, necessary for error propagation, can be added by placing a quotation-mark-delimited tag after the tolerance, e.g., 0.7 +/- 20% 'T_effective'. In addition, tolerances might have different underlying distributions, e.g., Uniform, Normal, or Triangular, or the tolerances may merely be intervals due to lack of knowledge (uncertainty). Finally, to address pragmatic considerations such as older models that require specific number-field formats, C-style format specifiers can be appended to the tolerance like so, 1.35 +/- 10U_3.2f. As an example of use, consider figure 1, where a chemical reaction input file has been marked up to include tolerances and tags per table 1. Not only does the technique provide a natural method of specifying tolerances, but it also serves as in situ documentation of model uncertainties. This tolerance language comes with a utility to strip the tolerances (and tags), to provide a path to the nominal model parameter file. And, as shown in [1], having the ability to quickly mark and identify model parameter uncertainties facilitates error propagation, which in turn yields output uncertainties.
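
    The examples above suggest a small grammar that is easy to recognize mechanically. Below is an illustrative parser sketch; the regular expression and the order-of-magnitude interpretation are inferred from the examples in this abstract, not taken from the actual UQ4SIM implementation.

      import re

      # value +/- tolerance, optional % or o suffix, optional quoted tag
      TOL = re.compile(r"(?P<val>[-+]?[0-9.]+)\s*\+/-\s*(?P<tol>[0-9.]+)"
                       r"(?P<kind>[%o]?)(?:\s*'(?P<tag>[^']*)')?")

      def tolerances(text):
          for m in TOL.finditer(text):
              val, tol = float(m["val"]), float(m["tol"])
              if m["kind"] == "%":          # percent of the nominal value
                  tol = abs(val) * tol / 100.0
              elif m["kind"] == "o":        # one possible reading of "order of magnitude"
                  tol = abs(val) * (10.0 ** tol - 1.0)
              yield val, tol, m["tag"]

      print(list(tolerances("0.7 +/- 20% 'T_effective'")))
      # -> [(0.7, 0.14, 'T_effective')]  (up to float rounding)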

  5. A 25-Gbps high-sensitivity optical receiver with 10-Gbps photodiode using inductive input coupling for optical interconnects

    NASA Astrophysics Data System (ADS)

    Oku, Hideki; Narita, Kiyomi; Shiraishi, Takashi; Ide, Satoshi; Tanaka, Kazuhiro

    2012-01-01

    A 25-Gbps high-sensitivity optical receiver with a 10-Gbps photodiode (PD) using inductive input coupling has been demonstrated for optical interconnects. We introduced the inductive input coupling technique to achieve a 25-Gbps optical receiver using a 10-Gbps PD. We implemented an input inductor (Lin) between the PD and the trans-impedance amplifier (TIA), and optimized the inductance to enhance the bandwidth and reduce the input-referred noise current through simulation with an RF model of the PD. Near the resonance frequency of the tank circuit formed by the PD capacitance, Lin, and the TIA input capacitance, the PD photocurrent flowing through Lin into the TIA is enhanced. This resonance both enhances the bandwidth at the TIA input and reduces the input-equivalent value of the TIA noise current. We fabricated the 25-Gbps optical receiver with the 10-Gbps PD using the inductive input coupling technique. With the inductor applied, the receiver bandwidth is enhanced from 10 GHz to 14.2 GHz. Thanks to this wide-band, low-noise performance, we were able to improve the sensitivity at an error rate of 1E-12 from non-error-free to -6.5 dBm. These results indicate that our technique is promising for cost-effective optical interconnects.
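
    As a first-order illustration of the resonance described above (with assumed symbols and a common series-resonance approximation, not the authors' measured circuit values), the inductor and the two capacitances peak near

      f_r \approx \frac{1}{2\pi\sqrt{L_{in}\,C_{eq}}}, \qquad
      C_{eq} = \frac{C_{PD}\,C_{TIA}}{C_{PD} + C_{TIA}}

    so the inductance sets where the photocurrent is boosted, and the reported 10 GHz to 14.2 GHz improvement reflects an inductance optimized against this trade-off.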

  6. Local Sensitivity of Predicted CO2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

    Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose this local sensitivity coefficient method and apply it to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
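
    A minimal sketch of the one-at-a-time LSC idea described above; the 1% perturbation step and the normalization are assumptions, and `model` stands for a full reservoir simulation in the actual workflow.

      def lsc(model, x0, rel_step=0.01):
          """Local sensitivity coefficients: output response in percent when
          each input is perturbed by `rel_step` relative, one at a time."""
          y0 = model(x0)                      # baseline output (assumed nonzero)
          coeffs = []
          for i in range(len(x0)):
              x = list(x0)
              x[i] *= 1.0 + rel_step          # perturb input i only
              coeffs.append(100.0 * (model(x) - y0) / y0)
          return coeffs

      # Composite sensitivity of a subset S of inputs: sum(coeffs[i] for i in S)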

  7. An automated algorithm for determining photometric redshifts of quasars

    NASA Astrophysics Data System (ADS)

    Wang, Dan; Zhang, Yanxia; Zhao, Yongheng

    2010-07-01

    We employ the k-nearest neighbor (KNN) algorithm for photometric redshift measurement of quasars with the Fifth Data Release (DR5) of the Sloan Digital Sky Survey (SDSS). KNN is an instance-based learning algorithm in which the result for a new query is predicted from the closest training samples; the regressor fits no explicit model and relies only on memory. Given a query quasar, we find the known quasars (training points) closest to the query point, and its redshift is assigned as the average of the values of its k nearest neighbors. Three different kinds of colors (PSF, Model, or Fiber) and spectral redshifts are used as input parameters, separately. The combination of the three kinds of colors is also taken as input. The experimental results indicate that the best input pattern is PSF + Model + Fiber colors in all experiments. With this pattern, 59.24%, 77.34%, and 84.68% of photometric redshifts are obtained within Δz < 0.1, 0.2, and 0.3, respectively. If only one kind of colors is used as input, the Model colors achieve the best performance. However, when using two kinds of colors, the best result is achieved by PSF + Fiber colors. In addition, the nearest neighbor method (k = 1) shows its superiority compared to KNN (k ≠ 1) for the given sample.
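
    The scheme reduces to a few lines with scikit-learn; the arrays below are random placeholders for the SDSS color vectors and spectroscopic redshifts.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(0)
      colors = rng.normal(size=(1000, 12))        # placeholder PSF+Model+Fiber colors
      z_spec = rng.uniform(0.1, 5.0, size=1000)   # placeholder spectroscopic redshifts

      knn = KNeighborsRegressor(n_neighbors=1)    # k = 1 performed best for this sample
      knn.fit(colors[:900], z_spec[:900])
      z_photo = knn.predict(colors[900:])
      # fraction of queries within the |dz| < 0.3 success criterion
      print(np.mean(np.abs(z_photo - z_spec[900:]) < 0.3))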

  8. A soft-computing methodology for noninvasive time-spatial temperature estimation.

    PubMed

    Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A

    2008-02-01

    The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo shifts of backscattered ultrasound signals collected from a gel-based phantom were tracked and, together with past temperature values, used as input to radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were used to estimate the temperature at different intensities and at points arranged along the therapeutic transducer radial line (60 mm from the transducer face). Model inputs, as well as the number of neurons, were selected using the multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not previously assessed, and some of them presented a maximum absolute error below 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models have low implementation complexity, as desired for real-time applications.

  9. A methodology for accident analysis of fusion breeder blankets and its application to helium-cooled lead–lithium blanket

    DOE PAGES

    Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew; ...

    2016-09-23

    'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.

  10. Decay Heat Removal in GEN IV Gas-Cooled Fast Reactors

    DOE PAGES

    Cheng, Lap-Yan; Wei, Thomas Y. C.

    2009-01-01

    The safety goal of the current designs of advanced high-temperature thermal gas-cooled reactors (HTRs) is that no core meltdown would occur in a depressurization event with a combination of concurrent safety system failures. This study focused on the analysis of passive decay heat removal (DHR) in a GEN IV direct-cycle gas-cooled fast reactor (GFR) which is based on the technology developments of the HTRs. Given the different criteria and design characteristics of the GFR, an approach different from that taken for the HTRs for passive DHR would have to be explored. Different design options based on maintaining core flow were evaluated by performing transient analysis of a depressurization accident using the system code RELAP5-3D. The study also reviewed the conceptual design of autonomous systems for shutdown decay heat removal and recommends that future work in this area should be focused on the potential for Brayton cycle DHRs.

  11. Modeling, Simulation and Performance Analysis of Multiple-Input Multiple-Output (MIMO) Systems with Multicarrier Time Delay Diversity Modulation

    DTIC Science & Technology

    2005-09-01

    Shahid, Muhammad (Naval Postgraduate School, Monterey, CA 93943-5000)

  12. On-chip remote charger model using plasmonic island circuit

    NASA Astrophysics Data System (ADS)

    Ali, J.; Youplao, P.; Pornsuwancharoen, N.; Aziz, M. S.; Chiangga, S.; Amiri, I. S.; Punthawanunt, S.; Singh, G.; Yupapin, P.

    2018-06-01

    We propose a remote charger model using light fidelity (LiFi) transmission and an integrated microring resonator circuit. It consists of stacked layers of silicon-graphene-gold materials, known as a plasmonic island, placed at the center of a modified add-drop filter. The input light power from the remote LiFi source can enter the island via a silicon waveguide. The input power is optimized by a micro-lens coupled to the silicon surface. Electron mobility is induced in the gold layer via the silicon-graphene interface. This is the reverse interaction of the whispering gallery mode light power of the microring system, in which the generated power is fed back into the microring circuit. The electron mobility is the required output, obtained at the device ports and characterized for remote current source applications. The calculated results show an output current of ∼2.5 × 10-11 A W-1 at the output port for a gold height of 1.0 μm and an input power of 5.0 W, which shows the potential application as a short-range free-space remote charger.

  13. Validation of Metrics as Error Predictors

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
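
    A sketch of the Section 5.3 step under stated assumptions: the metric names and synthetic data are placeholders, with scikit-learn standing in for whatever statistics package the chapter actually uses.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      metrics = rng.normal(size=(200, 3))   # e.g., size, density, connectivity
      has_error = (metrics @ np.array([1.2, 0.8, -0.5])
                   + rng.normal(size=200)) > 0     # synthetic error labels

      clf = LogisticRegression().fit(metrics, has_error)
      p_error = clf.predict_proba(metrics[:3])[:, 1]  # predicted error probability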

  14. Modeling Soil Carbon Dynamics in Northern Forests: Effects of Spatial and Temporal Aggregation of Climatic Input Data.

    PubMed

    Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari

    2016-01-01

    Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly increasing temperatures.

  16. Documentation for the 2014 update of the United States national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter M.; Mueller, Charles S.; Haller, Kathleen M.; Frankel, Arthur D.; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen C.; Boyd, Oliver S.; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nico; Wheeler, Russell L.; Williams, Robert A.; Olsen, Anna H.

    2014-01-01

    The national seismic hazard maps for the conterminous United States have been updated to account for new methods, models, and data that have been obtained since the 2008 maps were released (Petersen and others, 2008). The input models are improved from those implemented in 2008 by using new ground motion models that have incorporated about twice as many earthquake strong ground shaking data and by incorporating many additional scientific studies that indicate broader ranges of earthquake source and ground motion models. These time-independent maps are shown for 2-percent and 10-percent probability of exceedance in 50 years for peak horizontal ground acceleration as well as 5-hertz and 1-hertz spectral accelerations with 5-percent damping on a uniform firm rock site condition (760 meters per second shear wave velocity in the upper 30 m, VS30). In this report, the 2014 updated maps are compared with the 2008 version of the maps and indicate changes of plus or minus 20 percent over wide areas, with larger changes locally, caused by the modifications to the seismic source and ground motion inputs.

  17. Dynamic Event Tree advancements and control logic improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego

    The RAVEN code has been under development at the Idaho National Laboratory since 2012. Its main goal is to create a multi-purpose platform for deploying all the capabilities needed for probabilistic risk assessment, uncertainty quantification, data mining analysis, and optimization studies. RAVEN is currently equipped with three different sampling categories: Forward samplers (Monte Carlo, Latin Hypercube, Stratified, Grid Sampler, Factorials, etc.), Adaptive samplers (Limit Surface search, Adaptive Polynomial Chaos, etc.), and Dynamic Event Tree (DET) samplers (Deterministic and Adaptive Dynamic Event Trees). The main subject of this document is to report the activities performed in order to: start the migration of the RAVEN/RELAP-7 control logic system into MOOSE, and develop advanced dynamic sampling capabilities based on the Dynamic Event Tree approach. In order to provide all MOOSE-based applications with a control logic capability, an initial migration activity was begun this fiscal year, moving the control logic system designed for RELAP-7 by the RAVEN team into the MOOSE framework; this document briefly reports that work. The second and most important subject of this report is the development of a Dynamic Event Tree sampler named "Hybrid Dynamic Event Tree" (HDET) and its adaptive variant, the "Adaptive Hybrid Dynamic Event Tree" (AHDET). As other authors have already reported, among the different types of uncertainties it is possible to discern two principal types: aleatory and epistemic. The classical Dynamic Event Tree treats the first class (aleatory); the dependence of the probabilistic risk assessment and analysis on the epistemic uncertainties is treated by an initial Monte Carlo sampling (MCDET). From each Monte Carlo sample, a DET analysis is run (in total, N trees). The Monte Carlo step pre-samples the input space characterized by epistemic uncertainties; the consequent Dynamic Event Tree performs the exploration of the aleatory space. In the RAVEN code, a more general approach has been developed that does not limit the exploration of the epistemic space to a Monte Carlo method but uses all the forward sampling strategies RAVEN currently employs. The user can combine Latin Hypercube, Grid, Stratified, and Monte Carlo sampling to explore the epistemic space, without any limitation. From this pre-sampling, the Dynamic Event Tree sampler starts its aleatory space exploration. As reported by the authors, the Dynamic Event Tree is a good fit for a goal-oriented sampling strategy: the DET is used to drive a Limit Surface search. The methodology developed by the authors last year performs a Limit Surface search in the aleatory space only. This report documents how that approach has been extended to consider the epistemic space interacting with the Hybrid Dynamic Event Tree methodology.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouxelin, Pascal Nicolas; Strydom, Gerhard

    Best-estimate plus uncertainty analysis of reactors is replacing the traditional conservative (stacked uncertainty) method for safety and licensing analysis. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High temperature gas cooled reactors (HTGRs) have several features that require techniques not used in light-water reactor analysis (e.g., coated-particle design and large graphite quantities at high temperatures). The International Atomic Energy Agency has therefore launched the Coordinated Research Project on HTGR Uncertainty Analysis in Modeling to study uncertainty propagation in the HTGR analysis chain. The benchmark problem defined for the prismatic design is represented by the General Atomics Modular HTGR 350. The main focus of this report is the compilation and discussion of the results obtained for various permutations of Exercise I-2c and the use of the cross section data in Exercise II-1a of the prismatic benchmark, which are defined as the last and first steps of the lattice and core simulation phases, respectively. The report summarizes the Idaho National Laboratory (INL) best-estimate results obtained for Exercise I-2a (fresh single-fuel block), Exercise I-2b (depleted single-fuel block), and Exercise I-2c (super cell), in addition to the first results of an investigation into the cross section generation effects for the super-cell problem. The two-dimensional deterministic code known as the New ESC-based Weighting Transport (NEWT), included in the Standardized Computer Analyses for Licensing Evaluation (SCALE) 6.1.2 package, was used for the cross section evaluation, and the results obtained were compared to the three-dimensional stochastic SCALE module KENO-VI. The NEWT cross section libraries were generated for several permutations of the current benchmark super-cell geometry and were then provided as input to the Phase II core calculation of the stand-alone neutronics Exercise II-1a. The steady-state core calculations were simulated with the INL coupled-code system known as the Parallel and Highly Innovative Simulation for INL Code System (PHISICS) and the system thermal-hydraulics code known as the Reactor Excursion and Leak Analysis Program (RELAP5-3D), using the nuclear data libraries previously generated with NEWT. It was observed that significant differences in terms of multiplication factor and neutron flux exist between the various permutations of the Phase I super-cell lattice calculations. The use of these cross section libraries leads to only minor changes in the Phase II core simulation results for fresh fuel but shows significantly larger discrepancies for spent fuel cores. Furthermore, large incongruities were found between the SCALE NEWT and KENO-VI results for the super cells, and while some trends could be identified, a final conclusion on this issue could not yet be reached. This report will be revised in mid-2016 with more detailed analyses of the super-cell problems and their effects on the core models, using the latest version of SCALE (6.2). The super-cell models seem to show substantial improvements in terms of neutron flux as compared to single-block models, particularly at thermal energies.

  19. Dynamic responses of railroad car models to vertical and lateral rail inputs

    NASA Technical Reports Server (NTRS)

    Sewall, J. L.; Parrish, R. V.; Durling, B. J.

    1971-01-01

    Simplified dynamic models were applied in a study of vibration in a high-speed railroad car. The mathematical models used were a four-degree-of-freedom model for vertical responses to vertical rail inputs and a ten-degree-of-freedom model for lateral response to lateral or rolling (cross-level) inputs from the rails. Elastic properties of the passenger car body were represented by bending and torsion of a uniform beam. Rail-to-car (truck) suspensions were modeled as spring-mass-dashpot oscillators. Lateral spring nonlinearities approximating certain complicated truck mechanisms were introduced. The models were excited by displacement and, in some cases, velocity inputs from the rails by both deterministic (including sinusoidal) and random input functions. Results were obtained both in the frequency and time domains. Solutions in the time domain for the lateral model were obtained for a wide variety of transient and random inputs generated on-line by an analog computer. Variations in one of the damping properties of the lateral car suspension gave large fluctuations in response over a range of car speeds for a given input. This damping coefficient was significant in reducing lateral car responses that were higher for nonlinear springs for three different inputs.

  20. Study report on combining diagnostic and therapeutic considerations with subsystem and whole-body simulation

    NASA Technical Reports Server (NTRS)

    Furukawa, S.

    1975-01-01

    Current applications of simulation models for clinical research are described, including tilt-model simulation of orthostatic intolerance with hemorrhage and modeling of long-term circulatory regulation. Current capabilities include: (1) simulation of analogous pathological states and effects of abnormal environmental stressors by the manipulation of system variables and changing inputs in various sequences; (2) simulation of time courses of responses of controlled variables to the altered inputs and their relationships; (3) simulation of physiological responses to treatment such as isotonic saline transfusion; (4) simulation of the effectiveness of a treatment as well as the effects of complications superimposed on an existing pathological state; and (5) comparison of the effectiveness of various treatments/countermeasures for a given pathological state. The feasibility of applying simulation models to diagnostic and therapeutic research problems is assessed.

  1. Ensemble forecasting of short-term system scale irrigation demands using real-time flow data and numerical weather predictions

    NASA Astrophysics Data System (ADS)

    Perera, Kushan C.; Western, Andrew W.; Robertson, David E.; George, Biju; Nawarathna, Bandara

    2016-06-01

    Irrigation demands fluctuate in response to weather variations and a range of irrigation management decisions, which creates challenges for water supply system operators. This paper develops a method for real-time ensemble forecasting of irrigation demand and applies it to irrigation command areas of various sizes for lead times of 1 to 5 days. The ensemble forecasts are based on a deterministic time series model coupled with ensemble representations of the various inputs to that model. Forecast inputs include past flow, precipitation, and potential evapotranspiration; these are derived from flow observations from a modernized irrigation delivery system, short-term weather forecasts from numerical weather prediction models, and observed weather data from automatic weather stations. The predictive performance of the ensemble spread of irrigation demand was quantified using rank histograms, the mean continuous rank probability score (CRPS), the mean CRPS reliability, and the temporal mean of the ensemble root mean squared error (MRMSE). The mean forecast was evaluated using root mean squared error (RMSE), Nash-Sutcliffe model efficiency (NSE), and bias. The NSE values for evaluation periods ranged between 0.96 (1-day lead time, whole study area) and 0.42 (5-day lead time, smallest command area). Rank histograms and comparison of MRMSE, mean CRPS, mean CRPS reliability, and RMSE indicated that the ensemble spread is generally a reliable representation of the forecast uncertainty for short lead times but underestimates the uncertainty for long lead times.
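
    For reference, two of the deterministic scores quoted above are one-liners; the definitions below are the standard ones, which the paper is assumed to follow.

      import numpy as np

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the obs mean."""
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def rmse(obs, sim):
          return float(np.sqrt(np.mean((obs - sim) ** 2)))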

  2. Modeling a Full Coronal Loop Observed with Hinode EIS and SDO AIA

    NASA Technical Reports Server (NTRS)

    Alexander, Caroline; Winebarger, Amy R.

    2015-01-01

    Physical parameters measured from an observation of a coronal loop from Gupta et al. (2015) using Hinode/EIS and SDO/AIA were used as input for the hydrodynamic, impulsive-heating NRLSOFM 1-d loop model. The model was run at eight different energy inputs and used the measured quantities of temperature (0.73 MK), density (10^8.5 cm^-3), and minimum loop lifetime to evaluate the success of the model at recreating the observations. We measured the loop to have an unprojected length of 236 Mm and assumed it to be almost perpendicular to the solar surface (tilt of 3.5 degrees) with a dipolar geometry. Our results show that two of our simulation runs (with input energies of 0.01 and 0.02 erg cm^-3 s^-1) closely match the temperature/density combination exhibited by the loop observation. However, our simulated loops only remain in the temperature-sensitive region of the Mg 278.4 Angstrom filter for 500 and 800 seconds, respectively, which is less than the 1200 seconds for which the loop was observed with EIS in order to make the temperature/density measurements over the loop's entire length. This leads us to conclude that impulsive heating of a single loop is not complex enough to explain this observation. Additional steady heating or a collection of additional strands along the line-of-sight would help to align the simulation with the observation.

  3. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
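
    A one-line illustration of why the correlated case differs (a linear special case, not the paper's general result): for Y = a X_1 + b X_2 with standard deviations sigma_1, sigma_2 and correlation rho,

      \operatorname{Var}(Y) = a^2\sigma_1^2 + b^2\sigma_2^2 + 2ab\,\rho\,\sigma_1\sigma_2

    The cross term vanishes for independent inputs (rho = 0); it is precisely this kind of correlated contribution that an analysis with input correlations must apportion between the inputs.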

  4. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual conditions. Erosion models are complex because their data are uncertain and come from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the values of the network parameters, i.e., the number of hidden layers, the learning rate, the momentum, and the RMS. This study tested the capability of artificial neural networks to predict erosion risk with various parameter settings through multiple simulations to get good classification results. The model was implemented in the Serang watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, in the ANN 14 simulation with a parameter combination of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors) or the data dimensions; rather, it was determined by changes in the network parameters.

  5. Effect of pore water velocities and solute input methods on chloride transport in the undisturbed soil columns of Loess Plateau

    NASA Astrophysics Data System (ADS)

    Zhou, BeiBei; Wang, QuanJiu

    2017-09-01

    Studies of solute transport under different pore water velocities and solute input methods in undisturbed soil can play an instructive role for crop production. In laboratory experiments, the effects of two solute input methods (small pulse input and large pulse input) and four pore water velocities on chloride transport were studied in undisturbed soil columns obtained from the Loess Plateau under controlled conditions. Chloride breakthrough curves (BTCs) were generated using the miscible displacement method under water-saturated, steady flow conditions. Using a 0.15 mol L-1 CaCl2 solution as a tracer, a small pulse (0.1 pore volumes) was first applied, and then, after all the solution was washed out, a large pulse (0.5 pore volumes) was applied. The convection-dispersion equation (CDE) and the two-region model (T-R) were used to describe the BTCs, and their prediction accuracies and fitted parameters were compared as well. The BTCs obtained for the different input methods and the four pore water velocities were all smooth. However, the shapes of the BTCs varied greatly; small pulse inputs resulted in more rapid attainment of peak values that appeared earlier with increases in pore water velocity, whereas large pulse inputs resulted in the opposite trend. Both models could fit the experimental data well, but the prediction accuracy of the T-R model was better. The values of the dispersivity, λ, calculated from the dispersion coefficient obtained from the CDE were about one order of magnitude larger than those calculated from the dispersion coefficient given by the T-R model, but the calculated Peclet number, Pe, was lower. The mobile-immobile partition coefficient, β, decreased, while the mass exchange coefficient increased with increases in pore water velocity.
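
    For reference, the one-dimensional CDE fitted above is conventionally written as

      \frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} - v\,\frac{\partial C}{\partial x}

    with dispersion coefficient D and pore water velocity v; the dispersivity is then lambda = D/v and the column Peclet number Pe = vL/D for column length L. These are the standard definitions and are assumed to match the paper's notation.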

  6. Integration of cortical and pallidal inputs in the basal ganglia-recipient thalamus of singing birds

    PubMed Central

    Goldberg, Jesse H.; Farries, Michael A.

    2012-01-01

    The basal ganglia-recipient thalamus receives inhibitory inputs from the pallidum and excitatory inputs from cortex, but it is unclear how these inputs interact during behavior. We recorded simultaneously from thalamic neurons and their putative synaptically connected pallidal inputs in singing zebra finches. We find, first, that each pallidal spike produces an extremely brief (∼5 ms) pulse of inhibition that completely suppresses thalamic spiking. As a result, thalamic spikes are entrained to pallidal spikes with submillisecond precision. Second, we find that the number of thalamic spikes that discharge within a single pallidal interspike interval (ISI) depends linearly on the duration of that interval but does not depend on pallidal activity prior to the interval. In a detailed biophysical model, our results were not easily explained by the postinhibitory “rebound” mechanism previously observed in anesthetized birds and in brain slices, nor could most of our data be characterized as “gating” of excitatory transmission by inhibitory pallidal input. Instead, we propose a novel “entrainment” mechanism of pallidothalamic transmission that highlights the importance of an excitatory conductance that drives spiking, interacting with brief pulses of pallidal inhibition. Building on our recent finding that cortical inputs can drive syllable-locked rate modulations in thalamic neurons during singing, we report here that excitatory inputs affect thalamic spiking in two ways: by shortening the latency of a thalamic spike after a pallidal spike and by increasing thalamic firing rates within individual pallidal ISIs. We present a unifying biophysical model that can reproduce all known modes of pallidothalamic transmission—rebound, gating, and entrainment—depending on the amount of excitation the thalamic neuron receives. PMID:22673333

  7. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  8. Utilizing Mars Global Reference Atmospheric Model (Mars-GRAM 2005) to Evaluate Entry Probe Mission Sites

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering-level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. The "auxiliary profile" option is one new feature of Mars-GRAM 2005. This option uses an input file of temperature and density versus altitude to replace the mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. Any source of data or alternate model output can be used to generate an auxiliary profile. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree Ls bins, for each of three Mars years of TES nadir data. The Mars Science Laboratory (MSL) sites are used as a sample of how Mars-GRAM could be a valuable tool for planning of future Mars entry probe missions. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate MSL landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  9. Mars-GRAM Applications for Mars Science Laboratory Mission Site Selection Processes

    NASA Technical Reports Server (NTRS)

    Justh, Hilary; Justus, C. G.

    2007-01-01

    An overview is presented of the Mars Global Reference Atmospheric Model (Mars-GRAM 2005) and its new features. One important new feature is the "auxiliary profile" option, whereby a simple input file is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Results are presented using auxiliary profiles produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) for three candidate Mars Science Laboratory (MSL) landing sites (Terby Crater, Melas Chasma, and Gale Crater). A global Thermal Emission Spectrometer (TES) database has also been generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude bins and 15 degree Ls bins, for each of three Mars years of TES nadir data. Comparisons show reasonably good consistency between Mars-GRAM with low dust optical depth and both TES observed and mesoscale model simulated density at the three study sites. Mean winds differ by a more significant degree. Comparisons of mesoscale and TES standard deviations with conventional Mars-GRAM values show that Mars-GRAM density perturbations are somewhat conservative (larger than observed variability), while mesoscale-modeled wind variations are larger than Mars-GRAM model estimates. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  10. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated inputs to the variance of the output, and they can be viewed as the complement and correction of the interpretation of the contributions by correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of a quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and origins of both contributions of a correlated input can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by an input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part from interaction between the input and others and the independent part from the input itself. Numerical examples are employed, and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct. Moreover, the clarification of the correlated input contribution to model output by analytical derivation is important for extending the theory and solutions for uncorrelated inputs to correlated ones.

  11. Random unitary evolution model of quantum Darwinism with pure decoherence

    NASA Astrophysics Data System (ADS)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E-interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of environment E that allow to store information about an open system S of interest with maximal efficiency.

  12. Net ecosystem production and organic carbon balance of U.S. East Coast estuaries: A synthesis approach

    USGS Publications Warehouse

    Herrmann, Maria; Najjar, Raymond G.; Kemp, W. Michael; Alexander, Richard B.; Boyer, Elizabeth W.; Cai, Wei-Jun; Griffith, Peter C.; Kroeger, Kevin D.; McCallister, S. Leigh; Smith, Richard A.

    2015-01-01

    Net ecosystem production (NEP) and the overall organic carbon budget for the estuaries along the East Coast of the United States are estimated. We focus on the open estuarine waters, excluding the fringing wetlands. We developed empirical models relating NEP to loading ratios of dissolved inorganic nitrogen to total organic carbon, and carbon burial in the sediment to estuarine water residence time and total nitrogen input across the landward boundary. Output from a data-constrained water quality model was used to estimate inputs of total nitrogen and organic carbon to the estuaries across the landward boundary, including fluvial and tidal-wetland sources. Organic carbon export from the estuaries to the continental shelf was computed by difference, assuming steady state. Uncertainties in the budget were estimated by allowing uncertainties in the supporting model relations. Collectively, U.S. East Coast estuaries are net heterotrophic, with the area-integrated NEP of −1.5 (−2.8, −1.0) Tg C yr−1 (best estimate and 95% confidence interval) and area-normalized NEP of −3.2 (−6.1, −2.3) mol C m−2 yr−1. East Coast estuaries serve as a source of organic carbon to the shelf, exporting 3.4 (2.0, 4.3) Tg C yr−1 or 7.6 (4.4, 9.5) mol C m−2 yr−1. Organic carbon inputs from fluvial and tidal-wetland sources for the region are estimated at 5.4 (4.6, 6.5) Tg C yr−1 or 12 (10, 14) mol C m−2 yr−1 and carbon burial in the open estuarine waters at 0.50 (0.33, 0.78) Tg C yr−1 or 1.1 (0.73, 1.7) mol C m−2 yr−1. Our results highlight the importance of estuarine systems in the overall coastal budget of organic carbon, suggesting that in the aggregate, U.S. East Coast estuaries assimilate (via respiration and burial) ~40% of organic carbon inputs from fluvial and tidal-wetland sources and allow ~60% to be exported to the shelf.
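
    The steady-state closure behind the export-by-difference estimate can be checked directly from the best-estimate numbers above:

      \text{export} = \text{inputs} + \mathrm{NEP} - \text{burial}
                    = 5.4 + (-1.5) - 0.5 = 3.4\ \mathrm{Tg\ C\ yr^{-1}}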

  13. Operational trends in the temperature of a high-pressure microwave powered sulfur lamp

    NASA Astrophysics Data System (ADS)

    Johnston, C. W.; Jonkers, J.; van der Mullen, J. J. A. M.

    2002-10-01

    Temperatures have been measured in a high-pressure microwave sulfur lamp using sulfur atomic lines found in the spectrum at 867, 921 and 1045 nm. The absolute intensities were determined for 3, 5 and 7 bar lamps at several input powers, ranging from 400 to 600 W. On average, temperatures are found to be 4.1+/-0.15 kK and increase slightly with increasing pressure and input power. These values and trends agree well with our simulations. However, the power trend is the reverse of that demonstrated by the model, which might be an indication that the skin-depth model for the electric field is incomplete.

  14. Pre-test analysis of protected loss of primary pump transients in CIRCE-HERO facility

    NASA Astrophysics Data System (ADS)

    Narcisi, V.; Giannetti, F.; Del Nevo, A.; Tarantino, M.; Caruso, G.

    2017-11-01

    In the framework of the LEADER project (Lead-cooled European Advanced Demonstration Reactor), a new configuration of the steam generator for ALFRED (Advanced Lead Fast Reactor European Demonstrator) was proposed. The new concept is a super-heated steam generator of the double-wall bayonet tube type with leakage monitoring [1]. In order to support the new steam generator concept, in the framework of the Horizon 2020 SESAME project (thermal hydraulics Simulations and Experiments for the Safety Assessment of MEtal cooled reactors), the ENEA CIRCE pool facility will be refurbished to host the HERO (Heavy liquid mEtal pRessurized water cOoled tubes) test section to investigate a bundle of seven full-scale bayonet tubes in ALFRED-like thermal hydraulic conditions. The aim of this work is to verify the thermo-fluid dynamic performance of HERO during the transition from nominal to natural circulation conditions. The simulations have been performed with RELAP5-3D© using the validated geometrical model of the previous CIRCE-ICE test section [2], in which the earlier heat exchanger has been replaced by the new bayonet bundle model. Several calculations have been carried out to identify the thermal hydraulic performance in different steady-state conditions; these calculations provide the starting points for transient tests aimed at investigating operation in natural circulation. The transient tests consist of a protected loss of primary pump, obtained by reducing the feed-water mass flow to simulate the activation of the DHR (Decay Heat Removal) system, and a loss of DHR function in hot conditions, in which the feed-water mass flow is absent. According to the simulations, in nominal conditions the HERO bayonet bundle offers excellent thermal hydraulic behavior and, moreover, allows operation in natural circulation.

  15. Using recorded sound spectra profile as input data for real-time short-term urban road-traffic-flow estimation.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2012-10-01

    Road traffic has a heavy impact on the urban sound environment, constituting the main source of noise and widely dominating its spectral composition. In this context, our research investigates the use of recorded sound spectra as input data for the development of real-time short-term road traffic flow estimation models. For this, a series of models based on the use of Multilayer Perceptron Neural Networks, multiple linear regression, and the Fisher linear discriminant were implemented to estimate road traffic flow as well as to classify it according to the composition of heavy vehicles and motorcycles/mopeds. In view of the results, the use of the 50-400 Hz and 1-2.5 kHz frequency ranges as input variables in multilayer perceptron-based models successfully estimated urban road traffic flow with an average percentage of explained variance equal to 86%, while the classification of the urban road traffic flow gave an average success rate of 96.1%.
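
    A hedged sketch of the multilayer-perceptron estimator described above; the band levels, network size, and synthetic relationship are placeholders rather than the paper's configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(2)
      bands = rng.normal(60.0, 5.0, size=(500, 14))  # spectral band levels, dB
      flow = 200.0 + 3.0 * bands[:, 0] + rng.normal(0.0, 10.0, size=500)  # veh/h

      mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      mlp.fit(bands, flow)
      print(mlp.score(bands, flow))  # R^2, cf. the ~86% explained variance reported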

  16. IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.

    NASA Astrophysics Data System (ADS)

    Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier

    2017-04-01

    The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g., CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical of Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Based on IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was also to overcome the cold bias in global surface air temperature (t2m) present in IPSL-CM5A. We present the tuning strategy used to overcome this bias as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2. Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was handled by a new design of the ocean tripolar grid.

  17. Inevitable end-of-21st-century trends toward earlier surface runoff timing in California's Sierra Nevada Mountains

    NASA Astrophysics Data System (ADS)

    Schwartz, M. A.; Hall, A. D.; Sun, F.; Walton, D.; Berg, N.

    2015-12-01

    Hybrid dynamical-statistical downscaling is used to produce surface runoff timing projections for California's Sierra Nevada, a high-elevation mountain range with significant seasonal snow cover. First, future climate change projections (RCP8.5 forcing scenario, 2081-2100 period) from five CMIP5 global climate models (GCMs) are dynamically downscaled. These projections reveal that future warming leads to a shift toward earlier snowmelt and surface runoff timing throughout the Sierra Nevada region. Relationships between warming and surface runoff timing from the dynamical simulations are used to build a simple statistical model that mimics the dynamical model's projected surface runoff timing changes given GCM input or other statistically downscaled input. This statistical model can be used to produce surface runoff timing projections for other GCMs, periods, and forcing scenarios to quantify ensemble-mean changes, uncertainty due to intermodel variability, and consequences stemming from the choice of forcing scenario. For all CMIP5 GCMs and forcing scenarios, significant trends toward earlier surface runoff timing occur at elevations below 2500 m. Thus, we conclude that trends toward earlier surface runoff timing by the end of the 21st century are inevitable. The changes to surface runoff timing diagnosed in this study have implications for many dimensions of climate change, including impacts on surface hydrology, water resources, and ecosystems.

  18. CHARACTERISTIC LENGTH SCALE OF INPUT DATA IN DISTRIBUTED MODELS: IMPLICATIONS FOR MODELING GRID SIZE. (R824784)

    EPA Science Inventory

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...

  19. Assessment and Application of the ROSE Code for Reactor Outage Thermal-Hydraulic and Safety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Thomas K.S.; Ko, F.-K.; Dai, L.-C.

    The currently available tools, such as RELAP5, RETRAN, and others, cannot easily and correctly perform the task of analyzing the system behavior during plant outages. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as midloop operation (MLO) with loss of residual heat removal (RHR), has been developed. Important thermal-hydraulic processes involved during MLO with loss of RHR can be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. The two-region approach with a modified two-fluid model has been adopted to be the theoretical basis of the ROSE code. To verify the analytical model in the first step, posttest calculations against the integral midloop experiments with loss of RHR have been performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility test data is demonstrated. To further mature the ROSE code in simulating a full-sized pressurized water reactor, assessment against the WGOTHIC code and the Maanshan momentary-loss-of-RHR event has been undertaken. The successfully assessed ROSE code is then applied to evaluate the abnormal operation procedure (AOP) with loss of RHR during MLO (AOP 537.4) for the Maanshan plant. The ROSE code also has been successfully transplanted into the Maanshan training simulator to support operator training. How the simulator was upgraded by the ROSE code for MLO will be presented in the future.

  20. One-way coupling of an atmospheric and a hydrologic model in Colorado

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.

    2006-01-01

    This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. © 2006 American Meteorological Society.

  1. Modeling transport phenomena and uncertainty quantification in solidification processes

    NASA Astrophysics Data System (ADS)

    Fezi, Kyle S.

    Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water-cooled mold, followed by secondary cooling with a water jet spray and free-falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys under various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied, and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed-outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been quantified in solidification modeling. As a step toward understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5 wt.% Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.

  2. Prediction of PM2.5 along urban highway corridor under mixed traffic conditions using CALINE4 model.

    PubMed

    Dhyani, Rajni; Sharma, Niraj; Maity, Animesh Kumar

    2017-08-01

    The present study deals with the spatial-temporal distribution of PM2.5 along a highly trafficked national highway corridor (NH-2) in Delhi, India. Populations residing in areas near roads and highways with high vehicular activity are exposed to high levels of PM2.5, resulting in various health issues. The spatial extent of PM2.5 was assessed with the help of the CALINE4 model. Various input parameters of the model were estimated and used to predict PM2.5 concentrations along the selected highway corridor. The results indicated that many factors are involved which affect the prediction of PM2.5 concentrations by the CALINE4 model; these factors are either not considered by the model or have little influence on its prediction capabilities. Therefore, in the present study, CALINE4 model performance was observed to be unsatisfactory for the prediction of PM2.5 concentrations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
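
    The Pareto-frontier test itself is simple to state in code. The sketch below is illustrative, not the authors' implementation: it keeps an input set only if no other set fits every calibration target at least as well and at least one target strictly better:

```python
# Minimal sketch of Pareto-frontier identification over calibration fits.
# Each row of `gof` holds one input set's distance-to-target for each
# calibration target (lower is better); no weights are needed.
import numpy as np

def pareto_frontier(gof: np.ndarray) -> np.ndarray:
    """Return indices of input sets not dominated by any other set."""
    keep = []
    for i in range(gof.shape[0]):
        others = np.delete(gof, i, axis=0)
        dominated = np.any(np.all(others <= gof[i], axis=1) &
                           np.any(others < gof[i], axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

gof = np.random.default_rng(1).random((200, 3))  # 200 input sets, 3 targets
print("frontier size:", pareto_frontier(gof).size)
```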

  4. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single “goodness-of-fit” (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously-published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456

  5. Linking nutrient inputs, phytoplankton composition, zooplankton dynamics and the recruitment of pink snapper, Chrysophrys auratus, in a temperate bay

    NASA Astrophysics Data System (ADS)

    Black, Kerry P.; Longmore, Andrew R.; Hamer, Paul A.; Lee, Randall; Swearer, Stephen E.; Jenkins, Gregory P.

    2016-12-01

    Survival of larval fish is often linked to production of preferred prey such as copepods, both inter- and intra-annually. In turn, copepod production depends not only on the quantity of food, but also on the nutritional quality, edibility and/or toxicity of their micro-algal food. Hence, larval fish survival can become de-coupled from levels of nutrient input depending on the resulting composition of the plankton. Here we use a plankton dynamics model to study nutrient input, phytoplankton composition and copepod, Paracalanus, production in relation to interannual variation in recruitment of snapper, Chrysophrys auratus, in Port Phillip Bay, Australia. The model was able to simulate the ratio of diatoms to flagellates in the plume of the main river entering Port Phillip Bay. Interannual variability in the copepod, Paracalanus, abundance during the C. auratus spawning period over 5 years was accurately predicted. The seasonal peak in Paracalanus production depended on the timing and magnitude (match-mismatch) of nutrient inputs and how these were reflected in temporal change in the diatom:flagellate ratio. In turn, the model-predicted Paracalanus abundance was strongly related to inter-annual variability in abundance of snapper, C. auratus, larvae over 7 years. Years of highest larval C. auratus abundance coincided with a matching of the spawning period with the peak in Paracalanus abundance. High freshwater flows and nutrient inputs led to an early seasonal dominance of diatoms, and consequently reduced abundances of copepods over the C. auratus spawning period with correspondingly low abundances of larvae. Conversely, years of very low rainfall and nutrient input also led to low phytoplankton and copepod concentrations and larval C. auratus abundances. Highest abundances of larval C. auratus occurred in years of low to intermediate rainfall and nutrient inputs, particularly when pulses of nutrients occurred in the spring period, the latter supporting the match-mismatch hypothesis.

  6. Black-box modeling to estimate tissue temperature during radiofrequency catheter cardiac ablation: Feasibility study on an agar phantom model.

    PubMed

    Blasco-Gimenez, Ramón; Lequerica, Juan L; Herrero, Maria; Hornero, Fernando; Berjano, Enrique J

    2010-04-01

    The aim of this work was to study linear deterministic models to predict tissue temperature during radiofrequency cardiac ablation (RFCA) by measuring magnitudes such as electrode temperature, power and impedance between active and dispersive electrodes. The concept involves autoregressive models with exogenous input (ARX), which is a particular case of the autoregressive moving average model with exogenous input (ARMAX). The values of the model parameters were determined from a least-squares fit of experimental data. The data were obtained from radiofrequency ablations conducted on agar models with different contact pressure conditions between electrode and agar (0 and 20 g) and different flow rates around the electrode (1, 1.5 and 2 L min(-1)). Half of all the ablations were chosen randomly to be used for identification (i.e. determination of model parameters) and the other half were used for model validation. The results suggest that (1) a linear model can be developed to predict tissue temperature at a depth of 4.5 mm during RF cardiac ablation by using the variables applied power, impedance and electrode temperature; (2) the best model provides a reasonably accurate estimate of tissue temperature with a 60% probability of achieving average errors better than 5 degrees C; (3) substantial errors (larger than 15 degrees C) were found only in 6.6% of cases and were associated with abnormal experiments (e.g. those involving the displacement of the ablation electrode) and (4) the impact of measuring impedance on the overall estimate is negligible (around 1 degree C).
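
    A minimal sketch of a first-order ARX fit of this kind is shown below; the variable roles follow the abstract, but the specific model order, form, and data handling are assumptions, not the authors' code:

```python
# Minimal sketch (assumed form): first-order ARX model
#   T[k] = a*T[k-1] + b1*P[k] + b2*Z[k] + b3*Te[k]
# fitted by least squares, where T is tissue temperature, P applied power,
# Z impedance, and Te electrode temperature.
import numpy as np

def fit_arx(T, P, Z, Te):
    # Regressor matrix built from the lagged output and exogenous inputs
    Phi = np.column_stack([T[:-1], P[1:], Z[1:], Te[1:]])
    theta, *_ = np.linalg.lstsq(Phi, T[1:], rcond=None)
    return theta  # [a, b1, b2, b3]

def predict_arx(theta, T0, P, Z, Te):
    # Free-run prediction from an initial temperature and the input series
    T = [T0]
    for k in range(1, len(P)):
        T.append(theta @ np.array([T[-1], P[k], Z[k], Te[k]]))
    return np.array(T)
```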

  7. Planning Study to Establish DoD Manufacturing Technology Information Analysis Center.

    DTIC Science & Technology

    1981-01-01

    [OCR fragments from the report's figure list and body text: a model for an MTIAC (p. 5-3); types of information inputs from potential MTIAC sources (p. 5-5); processing functions required to produce MTIAC outputs (p. 5-8); "...short supply"; "Energy conservation and concerns of energy intensiveness of various manufacturing processes and systems required for production of DOD..."; "...not play a major role in the process of MT invention, innovation, or diffusion. MT productivity efforts for private industry are carried out by..."]

  8. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and integration), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters are obtained which have the greatest effect on reducing the loss function significantly. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.

  9. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.

  10. Neuromorphic Modeling of Moving Target Detection in Insects

    DTIC Science & Technology

    2007-12-31

    [OCR residue of a figure and equation: Figure 4 shows the response of an SEMD to moving stimuli (response amplitude versus time in seconds). The recoverable governing relation given in equation (3) is: x = input; z = output; y = (1/2)(tanh[g(x - a)] + 1). Attribution fragment: Tanner Research and the University of ...]

  11. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

    Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13-by-13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
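
    The core mixed-unit calculation can be illustrated with a toy two-sector example (illustrative numbers only, not the paper's 13-by-13 BEA matrix): total sector outputs come from the Leontief inverse, and an appended material-intensity row converts them into physical flows:

```python
# Minimal sketch of a mixed-unit input-output calculation. Monetary sectors
# are in $; the appended material row is kg of lead per $ of sector output
# (all coefficients are assumed for illustration).
import numpy as np

A = np.array([[0.10, 0.20],                   # $ inputs per $ of output
              [0.05, 0.15]])
material_per_output = np.array([0.4, 0.1])    # kg lead per $ of output

final_demand = np.array([100.0, 50.0])        # $ of final demand
# Total sector outputs from the Leontief relation: x = (I - A)^-1 y
x = np.linalg.solve(np.eye(2) - A, final_demand)
total_material = material_per_output @ x      # kg of lead mobilized economy-wide
print("sector outputs ($):", x, " total lead (kg):", total_material)
```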

  12. Effective moisture penetration depth model for residential buildings: Sensitivity analysis and guidance on model inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Jason; Winkler, Jon

    Moisture buffering of building materials has a significant impact on the building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand the sensitivity of this model to uncertain inputs. In this paper, we use the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, and a set of simple inputs, can give reasonable prediction of the indoor humidity.

  13. Effective moisture penetration depth model for residential buildings: Sensitivity analysis and guidance on model inputs

    DOE PAGES

    Woods, Jason; Winkler, Jon

    2018-01-31

    Moisture buffering of building materials has a significant impact on the building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand the sensitivity of this model to uncertain inputs. In this paper, we use the moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, and a set of simple inputs, can give reasonable prediction of the indoor humidity.
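
    A minimal sketch of such a global sensitivity screen is shown below, using standardized regression coefficients over Monte Carlo samples; the input names, ranges, and stand-in model are hypothetical, not the paper's EMPD implementation:

```python
# Minimal sketch of a global sensitivity screen via standardized regression
# coefficients (SRC). The input ranges and the stand-in "model" are assumed
# placeholders for the EMPD humidity model described above.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
inputs = {                                    # assumed uniform ranges
    "empd_depth_mm":  rng.uniform(2.0, 12.0, n),
    "vapor_resist":   rng.uniform(0.5, 5.0, n),
    "buffer_area_m2": rng.uniform(50.0, 300.0, n),
}
X = np.column_stack(list(inputs.values()))

def model(x):  # stand-in for the humidity model's output of interest
    return 0.8 * x[:, 0] + 0.1 * x[:, 1] + 0.5 * x[:, 2] / 100.0

y = model(X)
Xs = (X - X.mean(0)) / X.std(0)               # standardize inputs and output
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in zip(inputs, src):
    print(f"{name}: SRC = {s:+.2f}")          # larger |SRC| = more influential
```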

  14. Carbon and water flux responses to physiology by environment interactions: a sensitivity analysis of variation in climate on photosynthetic and stomatal parameters

    NASA Astrophysics Data System (ADS)

    Bauerle, William L.; Daniels, Alex B.; Barnard, David M.

    2014-05-01

    Sensitivity of carbon uptake and water use estimates to changes in physiology was determined with a coupled photosynthesis and stomatal conductance (gs) model, linked to canopy microclimate with a spatially explicit scheme (MAESTRA). The sensitivity analyses were conducted over the range of intraspecific physiology parameter variation observed for Acer rubrum L. and temperate hardwood C3 (C3) vegetation across the following climate conditions: carbon dioxide concentration 200-700 ppm, photosynthetically active radiation 50-2,000 μmol m-2 s-1, air temperature 5-40 °C, relative humidity 5-95 %, and wind speed at the top of the canopy 1-10 m s-1. Five key physiological inputs [quantum yield of electron transport (α), minimum stomatal conductance (g0), stomatal sensitivity to the marginal water cost of carbon gain (g1), maximum rate of electron transport (Jmax), and maximum carboxylation rate of Rubisco (Vcmax)] changed carbon and water flux estimates ≥15 % in response to climate gradients; variation in α, Jmax, and Vcmax input resulted in up to ~50 and 82 % intraspecific and C3 photosynthesis estimate output differences, respectively. Transpiration estimates were affected up to ~46 and 147 % by differences in intraspecific and C3 g1 and g0 values, two parameters previously overlooked in modeling land-atmosphere carbon and water exchange. We show that a variable environment, within a canopy or along a climate gradient, changes the spatial parameter effects of g0, g1, α, Jmax, and Vcmax in photosynthesis-gs models. Since variation in physiology parameter input effects are dependent on climate, this approach can be used to assess the geographical importance of key physiology model inputs when estimating large-scale carbon and water exchange.

  15. Analytical modeling of intumescent coating thermal protection system in a JP-5 fuel fire environment

    NASA Technical Reports Server (NTRS)

    Clark, K. J.; Shimizu, A. B.; Suchsland, K. E.; Moyer, C. B.

    1974-01-01

    The thermochemical response of Coating 313 when exposed to a fuel fire environment was studied to provide a tool for predicting the reaction time. The existing Aerotherm Charring Material Thermal Response and Ablation (CMA) computer program was modified to treat swelling materials. The modified code is now designated Aerotherm Transient Response of Intumescing Materials (TRIM) code. In addition, thermophysical property data for Coating 313 were analyzed and reduced for use in the TRIM code. An input data sensitivity study was performed, and performance tests of Coating 313/steel substrate models were carried out. The end product is a reliable computational model, the TRIM code, which was thoroughly validated for Coating 313. The tasks reported include: generation of input data, development of swell model and implementation in TRIM code, sensitivity study, acquisition of experimental data, comparisons of predictions with data, and predictions with intermediate insulation.

  16. The modeling and simulation of visuospatial working memory

    PubMed Central

    Liang, Lina; Zhang, Zhikang

    2010-01-01

    Camperi and Wang (Comput Neurosci 5:383–405, 1998) presented a network model for working memory that combines intrinsic cellular bistability with the recurrent network architecture of the neocortex, while Fall and Rinzel (Comput Neurosci 20:97–107, 2006) replaced this intrinsic bistability with a biological mechanism, a Ca2+ release subsystem. In this study, we aim to further expand the above work. We integrate the traditional firing-rate network with Ca2+ subsystem-induced bistability, amend the synaptic weights, and suggest that the Ca2+ concentration only increases the efficacy of synaptic input but has nothing to do with the external input for the transient cue. We found that our network model maintained persistent activity in response to a brief transient stimulus, like the previous two models, and that working memory performance was resistant to noise and distraction stimuli if the Ca2+ subsystem was tuned to be bistable. PMID:22132045

  17. Competing pathways in drug metabolism. I. Effect of input concentration on the conjugation of gentisamide in the once-through in situ perfused rat liver preparation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, M.E.; Yuen, V.; Tang, B.K.

    1988-05-01

    Sulfation and glucuronidation are two parallel pathways for the metabolism of phenolic substrates. Gentisamide (GAM) was used as a model compound to examine the effects of parallel competing pathways on drug disappearance and metabolite formation in the once-through perfused rat liver preparation. GAM was found to form one glucuronide (GAM-5G) and two sulfate (GAM-2S and GAM-5S) conjugates. These GAM conjugates were biosynthesized in recirculating rat liver preparations and were isolated by preparative high-performance liquid chromatography. Specific incorporation of 35S-sodium sulfate and (14C)glucose into GAM sulfate and glucuronide conjugates revealed corresponding elution patterns as labeled GAM metabolites. Their identities were characterized by enzymatic and acid hydrolyses and by NMR spectroscopy. Gentisamide-5-sulfate (GAM-5S) and gentisamide-5-glucuronide (GAM-5G) are major metabolites, and gentisamide-2-sulfate (GAM-2S) is a minor metabolite. Single-pass rat liver perfusions were used to examine the effect of stepwise increases/decreases of the input GAM concentration (CIn) on the extraction ratio (E) of GAM and the formation of metabolites. The E of GAM remained constant (about 0.89) at input concentrations from 0.9 to 120 microM and decreased at CIn greater than 120 microM. Metabolite patterns, however, changed with GAM CIn, even when E was constant at CIn up to 120 microM. GAM-5S was present as the major metabolite of GAM at all GAM CIn values in most liver preparations, but the proportions of GAM-5S and GAM-2S decreased with increasing CIn; the proportion of GAM-5G, a minor metabolite at low CIn, increased with increasing CIn. Biliary excretion rates at steady state accounted for 5.3 +/- 2.7% (mean +/- S.D.) of the input rate; GAM-5G was the predominant metabolite found.

  18. Impact of input field characteristics on vibrational femtosecond coherent anti-Stokes Raman scattering thermometry.

    PubMed

    Yang, Chao-Bo; He, Ping; Escofet-Martin, David; Peng, Jiang-Bo; Fan, Rong-Wei; Yu, Xin; Dunn-Rankin, Derek

    2018-01-10

    In this paper, three ultrashort-pulse coherent anti-Stokes Raman scattering (CARS) thermometry approaches are summarized with a theoretical time-domain model. The differences between the approaches can be attributed to variations in the input field characteristics of the time-domain model. That is, all three approaches to ultrashort-pulse CARS thermometry can be simulated with the unified model by changing only the input field features. As a specific example, hybrid femtosecond/picosecond CARS is assessed for its use in combustion flow diagnostics; thus, the examination of the input field's impact on thermometry focuses on vibrational hybrid femtosecond/picosecond CARS. Beginning with the general model of ultrashort-pulse CARS, spectra with different input field parameters are simulated. To analyze the temperature measurement error introduced by the input field effects, the spectra are fitted and compared to fits with a model that neglects the influence of the input fields. The results demonstrate that, however the input pulses are characterized, temperature errors would still be introduced during an experiment. With proper field characterization, however, the significance of the error can be reduced.

  19. Shuttle cryogenic supply system optimization study. Volume 5A-1: Users manual for math models

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Integrated Math Model for Cryogenic Systems is a flexible, broadly applicable systems parametric analysis tool. The program will effectively accommodate systems of considerable complexity involving large numbers of performance dependent variables such as are found in the individual and integrated cryogen systems. Basically, the program logic structure pursues an orderly progression path through any given system in much the same fashion as is employed for manual systems analysis. The system configuration schematic is converted to an alpha-numeric formatted configuration data table input starting with the cryogen consumer and identifying all components, such as lines, fittings, and valves, each in its proper order and ending with the cryogen supply source assembly. Then, for each of the constituent component assemblies, such as gas generators, turbo machinery, heat exchangers, and accumulators, the performance requirements are assembled in input data tabulations. Systems operating constraints and duty cycle definitions are further added as input data coded to the configuration operating sequence.

  20. Land-total and Ocean-total Precipitation and Evaporation from a Community Atmosphere Model version 5 Perturbed Parameter Ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Covey, Curt; Lucas, Donald D.; Trenberth, Kevin E.

    2016-03-02

    This document presents the large-scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the "C-Ensemble" described by Qian et al., "Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5" (Journal of Advances in Modeling Earth Systems, 2015). As noted by Qian et al., the simulations are "AMIP type" with temperature and sea ice boundary conditions chosen to match surface observations for the five-year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.

  1. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    NASA Astrophysics Data System (ADS)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach for the joint inference of parameters and inputs.
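
    The compensation effect in finding (2) can be illustrated with a toy joint-inference problem. The sketch below is a greatly simplified stand-in for BAIPU: Gaussian errors instead of the AR(1)/SEP error model, a single rainfall multiplier for input error, and plain Metropolis sampling rather than DREAM(ZS):

```python
# Minimal sketch of joint input/parameter inference (toy stand-in for BAIPU).
# k is a model parameter (export coefficient); m is an input-error multiplier
# on the observed forcing. Priors are flat and the error sigma is known.
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, 100)                 # observed forcing (toy)
obs = 0.3 * (1.1 * rain) + rng.normal(0, 0.5, rain.size)  # truth uses biased input

def log_post(theta):
    k, m = theta
    if k <= 0 or m <= 0:
        return -np.inf
    resid = obs - k * (m * rain)
    return -0.5 * np.sum(resid**2) / 0.5**2

theta, chain = np.array([0.5, 1.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.02, 2)       # random-walk Metropolis step
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print("posterior mean (k, m):", np.mean(chain[5000:], axis=0))
```

    Note that the likelihood here depends only on the product k*m, so the two are not separately identifiable without prior information; this is a toy version of the input/parameter compensation effect the abstract warns about.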

  2. Reconstruction of spatially detailed global map of NH4+ and NO3- application in synthetic nitrogen fertilizer

    NASA Astrophysics Data System (ADS)

    Nishina, Kazuya; Ito, Akihiko; Hanasaki, Naota; Hayashi, Seiji

    2017-02-01

    Currently, the historical global N fertilizer maps available as input data to global biogeochemical models are still limited, and existing maps do not distinguish NH4+ from NO3- in the fertilizer application rates. This paper provides a method for constructing a new historical global nitrogen fertilizer application map (0.5° × 0.5° resolution) for the period 1961-2010 based on country-specific information from Food and Agriculture Organization statistics (FAOSTAT) and various global datasets. This new map incorporates the fractions of NH4+ (and NO3-) in N fertilizer inputs by utilizing fertilizer species information in FAOSTAT, in which species can be categorized as NH4+- and/or NO3--forming N fertilizers. During data processing, we applied a statistical data imputation method for the missing data (19 % of national N fertilizer consumption) in FAOSTAT. The multiple imputation method enabled us to fill gaps in the time-series data with plausible values using covariate information (year, population, GDP, and crop area). After the imputation, we downscaled the national consumption data to a gridded cropland map. We also applied the multiple imputation method to the available chemical fertilizer species consumption, allowing for the estimation of the NH4+ / NO3- ratio in national fertilizer consumption. In this study, the synthetic N fertilizer inputs in 2000 showed general consistency with the existing N fertilizer map (Potter et al., 2010) in relation to the ranges of N fertilizer inputs. Globally, the estimated N fertilizer inputs based on the sum of the filled data increased from 15 to 110 Tg-N during 1961-2010. On the other hand, the global NO3- input started to decline after the late 1980s, and the fraction of NO3- in global N fertilizer decreased consistently from 35 to 13 % over the 50-year period. NH4+-forming fertilizers are dominant in most countries; however, the NH4+ / NO3- ratio in N fertilizer inputs shows clear differences temporally and geographically. This new map can be utilized as input data in global model studies and can bring new insights for the assessment of historical terrestrial N cycling changes. Datasets available at doi:10.1594/PANGAEA.861203.
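
    The downscaling step (national totals distributed over a gridded cropland map) reduces to an area-weighted allocation; a minimal sketch with placeholder arrays, not the actual FAOSTAT or cropland datasets:

```python
# Minimal sketch of downscaling one country's fertilizer consumption to a
# grid using cropland-area weights (illustrative numbers only).
import numpy as np

national_total = 1.2e6                                  # t N/yr (placeholder)
cropland_area = np.array([[0.0, 10.0], [30.0, 60.0]])   # km2 per 0.5-deg cell
weights = cropland_area / cropland_area.sum()
grid_n_input = national_total * weights                 # t N/yr per cell
print(grid_n_input)
```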

  3. A digital model for planning water management at Benton Lake National Wildlife Refuge, west-central Montana

    USGS Publications Warehouse

    Nimick, David A.; McCarthy, Peter M.; Fields, Vanessa

    2011-01-01

    Benton Lake National Wildlife Refuge is an important area for waterfowl production and migratory stopover in west-central Montana. Eight wetland units covering about 5,600 acres are the essential features of the refuge. Water availability for the wetland units can be uncertain owing to the large natural variations in precipitation and runoff and the high cost of pumping supplemental water. The U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, has developed a digital model for planning water management. The model can simulate strategies for water transfers among the eight wetland units and account for variability in runoff and pumped water. This report describes this digital model, which uses a water-accounting spreadsheet to track inputs and outputs to each of the wetland units of Benton Lake National Wildlife Refuge. Inputs to the model include (1) monthly values for precipitation, pumped water, runoff, and evaporation; (2) water-level/capacity data for each wetland unit; and (3) the pan-evaporation coefficient. Outputs include monthly water volume and flooded surface area for each unit for as many as 5 consecutive years. The digital model was calibrated by comparing simulated and historical measured water volumes for specific test years.
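
    A minimal sketch of the monthly accounting for one wetland unit is shown below; the function and field names and the toy stage-area relation are hypothetical (the actual model is a spreadsheet), but the balance of inputs and outputs follows the description above:

```python
# Minimal sketch of one month of water accounting for a single wetland unit.
def step_month(volume, precip, pumped, runoff, pan_evap, pan_coeff, area_of):
    """Advance a wetland unit's stored volume by one month.

    volume: acre-ft; precip/pan_evap: ft; pumped/runoff: acre-ft;
    area_of: function mapping volume to flooded surface area (acres),
    standing in for the unit's water-level/capacity table.
    """
    area = area_of(volume)
    evap = pan_coeff * pan_evap * area                  # open-water evaporation
    new_volume = volume + pumped + runoff + precip * area - evap
    return max(new_volume, 0.0), area

# Toy stage-area relation and one month of inputs (all values illustrative)
vol, area = step_month(1200.0, 0.15, 300.0, 80.0, 0.55, 0.7,
                       lambda v: 0.9 * v**0.8)
print(vol, area)
```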

  4. Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex, and this complexity is not adequately captured in the air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground-level ozone. This inadequacy of air quality models in responding to the heterogeneous nature of the urban landscape can affect how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. Use of these data has been found to better characterize low-density/suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios out to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the state environmental protection agency to evaluate how these transportation plans will affect future air quality.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobromir Panayotov; Andrew Grief; Brad J. Merrill

    'Fusion for Energy' (F4E) develops the designs and implements the European Test Blanket Systems (TBS) in ITER - Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses had been acquired under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials, and phenomena and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, we use phenomena identification and ranking tables to identify the requirements to be met by the code(s) and TBS models. In this way the limitations of the codes are identified and possible solutions to be built into the models are proposed. These include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. The code selection and the issue of the accident analysis specifications conclude this second step. The breeding blanket and ancillary system models are then built. In this work, challenges met and solutions used in the development of both MELCOR and RELAP5 code models of the HCLL and HCPB TBSs will be shared. Subsequently, the developed models are qualified by comparison with finite element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as the methodology applications to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER as well as to DEMO breeding blankets.

  6. Noise reduction in a Mach 5 wind tunnel with a rectangular rod-wall sound shield

    NASA Technical Reports Server (NTRS)

    Creel, T. R., Jr.; Keyes, J. W.; Beckwith, I. E.

    1980-01-01

    A rod-wall sound shield was tested over a range of unit Reynolds numbers from 0.5 x 10 to the 7th power to 8.0 x 10 to the 7th power per meter. The model consisted of a rectangular array of longitudinal rods with boundary-layer suction through gaps between the rods. Suitable measurement techniques were used to determine properties of the flow and acoustic disturbances in the shield and of transition in the rod boundary layers. Measurements indicated that for a Reynolds number of 1.5 x 10 to the 7th power the noise in the shielded region was significantly reduced, but only when the flow was mostly laminar on the rods. Actual nozzle input noise measured on the nozzle centerline before reflection at the shield walls was attenuated only slightly even when the rod boundary layers were laminar. At lower Reynolds numbers, noise levels in the shield were still too high for application to a quiet tunnel. At Reynolds numbers above 2.0 x 10 to the 7th power per meter, measured noise levels were generally higher than nozzle input levels, probably due to transition in the rod boundary layers. The small attenuation of nozzle input noise at intermediate Reynolds numbers for laminar rod boundary layers at the acoustic origins is apparently due to the high frequencies of the noise.

  7. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  8. Agricultural management affects below ground carbon input estimations

    NASA Astrophysics Data System (ADS)

    Hirte, Juliane; Leifeld, Jens; Abiven, Samuel; Oberholzer, Hans-Rudolf; Mayer, Jochen

    2017-04-01

    Root biomass and rhizodeposition carbon (C release by living roots) are among the most relevant root parameters for studies of plant response to environmental change, soil C modelling or estimations of soil C sequestration. Below ground C inputs of agricultural crops are typically estimated from above ground biomass or yield, thereby implying constant below to above ground C ratios. Agricultural management practices affect above ground biomass considerably; however, their effects on below ground C inputs are only poorly understood. Our aims were therefore to (i) quantify root biomass C and rhizodeposition C of maize and wheat grown in agricultural management systems with different fertilization intensities and (ii) determine management effects on below/above ground C ratios and the vertical distribution of below ground C inputs into soil. We conducted a comprehensive field study on two Swiss long-term field trials, DOK (Basel) and ZOFE (Zurich), with silage (DOK) and grain (ZOFE) maize in 2013 and winter wheat in 2014 (ZOFE) and 2015 (DOK). Three treatments in DOK (2 bio-organic, 1 mixed conventional) and 4 treatments in ZOFE (1 without, 1 manure, 2 mineral fertilization) reflected increasing fertilization intensities. In each of 4 replicated field plots per treatment, one microplot (steel tube of 0.5 m depth) was inserted into the soil, covering an area of 0.1 m2. The microplot plants were pulse-labelled with 13C-CO2 in weekly intervals throughout the respective growing season. After harvest, the microplot soil was sampled at three soil depths (0-0.25, 0.25-0.5, 0.5-0.75 m), roots were separated from soil by picking and wet sieving, and root and soil samples were analysed for their δ13C values by IRMS. Carbon rhizodeposition was calculated from 13C-excess values in bulk soil and roots. (i) Average root biomasses of maize and wheat were 1.9 and 1.4 t ha-1, respectively, in DOK and 0.9 and 1.1 t ha-1, respectively, in ZOFE. Average amounts of C rhizodeposition of maize and wheat were 1.4 and 0.7 t ha-1, respectively, in DOK and 0.5 and 0.6 t ha-1, respectively, in ZOFE. Both root biomass and C rhizodeposition were similar among treatments on both sites but were significantly higher for silage maize (DOK) than for grain maize (ZOFE) and winter wheat (DOK and ZOFE). (ii) With increasing fertilization intensities, below/above ground C ratios of both maize and wheat significantly decreased, from 0.43 to 0.16 for maize and from 0.57 to 0.15 for wheat. The vertical distribution of below ground C inputs into soil was not affected by agricultural management but differed significantly between crops: in the subsoil (0.5-0.75 m), below ground C inputs of wheat were twice as high as those of maize on both sites. Increasing fertilization intensity leads to a considerable increase in above ground biomass but does not affect below ground C inputs of maize and wheat at two Swiss agricultural sites. This finding shows that below ground C inputs cannot be estimated from above ground biomass in order to provide soil C models with input data. A differentiation according to the management system is strongly needed.

  9. When causality does not imply correlation: more spadework at the foundations of scientific psychology.

    PubMed

    Marken, Richard S; Horth, Brittany

    2011-06-01

    Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
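
    The closed-loop explanation is easy to reproduce in simulation. In the sketch below (illustrative, not the authors' experiment), a high-gain controller cancels the disturbance, so the correlation between sensory input (cursor) and motor output stays weak even though input causally drives output, while output correlates strongly with the unobserved disturbance:

```python
# Minimal sketch of a compensatory tracking loop: input causes output, yet
# good control keeps the input-output correlation small.
import numpy as np

rng = np.random.default_rng(0)
n, dt, gain = 20000, 0.01, 50.0
d = np.cumsum(rng.normal(0, 0.05, n))    # slowly varying disturbance
o = np.zeros(n)                          # motor output (mouse position)
c = np.zeros(n)                          # sensory input (cursor position)
for t in range(1, n):
    c[t] = d[t] + o[t - 1]               # cursor = disturbance + output
    o[t] = o[t - 1] - gain * c[t] * dt   # output acts to cancel cursor error

print("corr(input, output):      ", np.corrcoef(c[100:], o[100:])[0, 1])
print("corr(disturbance, output):", np.corrcoef(d[100:], o[100:])[0, 1])
```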

  10. Reduced-Order Model for the Geochemical Impacts of Carbon Dioxide, Brine and Trace Metal Leakage into an Unconfined, Oxidizing Carbonate Aquifer, Version 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, Diana H.

    2013-03-31

    The National Risk Assessment Partnership (NRAP) consists of 5 U.S. DOE national laboratories collaborating to develop a framework for predicting the risks associated with carbon sequestration. The approach taken by NRAP is to divide the system into components, including injection target reservoirs, wellbores, natural pathways including faults and fractures, groundwater, and the atmosphere; next, to develop a detailed, physics- and chemistry-based model of each component; then, using the results of the detailed models, to develop efficient, simplified models, termed reduced order models (ROM), for each component; and finally, to integrate the component ROMs into a system model that calculates risk profiles for the site. This report details the development of the Groundwater Geochemistry ROM for the Edwards Aquifer at PNNL. The Groundwater Geochemistry ROM for the Edwards Aquifer uses a Wellbore Leakage ROM developed at LANL as input. The detailed model, using the STOMP simulator, covers a 5x8 km area of the Edwards Aquifer near San Antonio, Texas. The model includes heterogeneous hydraulic properties and equilibrium, kinetic, and sorption reactions between groundwater, leaked CO2 gas, brine, and the aquifer carbonate and clay minerals. Latin Hypercube sampling was used to generate 1024 samples of input parameters. For each of these input samples, the STOMP simulator was used to predict the flux of CO2 to the atmosphere and the volume, length, and width of the aquifer where pH was less than the MCL standard and TDS, arsenic, cadmium, and lead exceeded MCL standards. In order to decouple the Wellbore Leakage ROM from the Groundwater Geochemistry ROM, the response surface was transformed to replace Wellbore Leakage ROM input parameters with instantaneous and cumulative CO2 and brine leakage rates. The most sensitive parameters proved to be the CO2 and brine leakage rates from the well, with equilibrium coefficients for calcite and dolomite, as well as the number of illite and kaolinite sorption sites, proving to be of secondary importance. The Groundwater Geochemistry ROM was developed using nonlinear regression to fit the response surface with a quadratic polynomial. The goodness of fit was excellent for the CO2 flux to the atmosphere and very good for predicting the volumes of groundwater exceeding the pH, TDS, As, Cd, and Pb threshold values.
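
    The sampling-plus-response-surface workflow can be sketched as follows; a stand-in function replaces the STOMP simulation, and the input names, ranges, and coefficients are illustrative assumptions:

```python
# Minimal sketch: Latin hypercube sampling of inputs, a detailed-model
# stand-in, and a quadratic response surface fitted by least squares.
import numpy as np
from scipy.stats import qmc            # scipy >= 1.7

sampler = qmc.LatinHypercube(d=2, seed=7)
u = sampler.random(n=1024)
lows, highs = np.array([0.0, 0.0]), np.array([1.0, 2.0])
X = qmc.scale(u, lows, highs)          # e.g. CO2 and brine leakage rates (toy)

def detailed_model(x):                 # stand-in for the STOMP simulation
    return 3.0 + 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 1]

y = detailed_model(X)
# Quadratic design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
Phi = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                       X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("fitted ROM coefficients:", np.round(coef, 3))
```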

  11. Predicting neuroblastoma using developmental signals and a logic-based model.

    PubMed

    Kasemeier-Kulesa, Jennifer C; Schnell, Santiago; Woolley, Thomas; Spengler, Jennifer A; Morrison, Jason A; McKinney, Mary C; Pushel, Irina; Wolfe, Lauren A; Kulesa, Paul M

    2018-07-01

    Genomic information from human patient samples of pediatric neuroblastoma cancers and known outcomes has led to specific gene lists put forward as high risk for disease progression. However, the reliance on gene expression correlations rather than mechanistic insight has shown limited potential and suggests a critical need for molecular network models that better predict neuroblastoma progression. In this study, we construct and simulate a molecular network of developmental genes and downstream signals in a 6-gene-input logic model that predicts a favorable/unfavorable outcome based on the outcomes of four cell states: cell differentiation, proliferation, apoptosis, and angiogenesis. We simulate the mis-expression of the tyrosine receptor kinases trkA and trkB, two prognostic indicators of neuroblastoma, and find differences in the number and probability distribution of steady-state outcomes. We validate the mechanistic model assumptions using RNAseq of the SH-SY5Y human neuroblastoma cell line to define the input states and confirm the predicted outcome with antibody staining. Lastly, we apply input gene signatures from 77 published human patient samples and show that our model makes more accurate disease outcome predictions for early-stage disease than any current neuroblastoma gene list. These findings highlight the predictive strength of a logic-based model based on developmental genes and offer a better understanding of the molecular network interactions during neuroblastoma disease progression. Copyright © 2018. Published by Elsevier B.V.
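
    A minimal sketch of a logic model of this general shape is shown below. The rules and the gene mycn are hypothetical illustrations; only trkA and trkB are named in the abstract, and this is not the authors' 6-gene model:

```python
# Minimal sketch of a logic-based outcome model: boolean gene inputs map to
# four cell states, which are aggregated into a favorable/unfavorable call.
from itertools import product

def cell_states(g):
    # g: dict of boolean input-gene states (rules are illustrative only)
    differentiation = g["trkA"] and not g["mycn"]
    proliferation = g["mycn"] or g["trkB"]
    apoptosis = g["trkA"] and not g["trkB"]
    angiogenesis = g["trkB"]
    return differentiation, proliferation, apoptosis, angiogenesis

def outcome(g):
    d, p, ap, an = cell_states(g)
    # Favorable if differentiation/apoptosis outweigh proliferation/angiogenesis
    return "favorable" if (d + ap) > (p + an) else "unfavorable"

# Enumerate all input combinations, mimicking a steady-state outcome survey
for trkA, trkB, mycn in product([False, True], repeat=3):
    g = {"trkA": trkA, "trkB": trkB, "mycn": mycn}
    print(g, "->", outcome(g))
```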

  12. User's manual for MASTER: Modeling of Aerodynamic Surfaces by Three-dimensional Explicit Representation [input to three-dimensional computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Gibson, S. G.

    1983-01-01

    A system of computer programs was developed to model general three-dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.
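
    As a small illustration of evaluating a parametric bicubic patch (here in Bezier form; MASTER's own patch basis and file formats are not assumed):

      import numpy as np

      def bernstein3(t):
          """Cubic Bernstein basis B0..B3 at parameter t."""
          return np.array([(1 - t) ** 3,
                           3 * t * (1 - t) ** 2,
                           3 * t ** 2 * (1 - t),
                           t ** 3])

      def bicubic_patch(P, u, v):
          """Evaluate a bicubic patch from a 4x4 grid P of 3-D control points."""
          return np.einsum("i,j,ijk->k", bernstein3(u), bernstein3(v), P)

      # flat 4x4 control net with one raised interior point, as a quick check
      P = np.zeros((4, 4, 3))
      P[..., 0], P[..., 1] = np.meshgrid(np.linspace(0, 1, 4),
                                         np.linspace(0, 1, 4), indexing="ij")
      P[1, 2, 2] = 0.5
      print(bicubic_patch(P, 0.5, 0.5))  # surface point at (u, v) = (0.5, 0.5)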

  13. Analysis of Strain Dependent Damping in Materials via Modeling of Material Point Hysteresis

    DTIC Science & Technology

    1991-07-01

    (No abstract available: the source record for this scanned report contains only OCR-garbled fragments of the report's Fortran input-reading routines.)

  14. Altered Gastrointestinal Function in the Neuroligin-3 Mouse Model of Autism

    DTIC Science & Technology

    2013-10-01

    (No abstract available: the source record contains only fragmented OCR text and citation fragments concerning 5-HT3/5-HT4 receptor antagonists, tropisetron, and serotonergic inputs to enteric neurons.)

  15. Evaluation of AISI 4140 Steel Repair Without Post-Weld Heat Treatment

    NASA Astrophysics Data System (ADS)

    Silva, Cleiton C.; de Albuquerque, Victor H. C.; Moura, Cícero R. O.; Aguiar, Willys M.; Farias, Jesualdo P.

    2009-04-01

    The present work evaluates the two-layer technique on the heat affected zone (HAZ) of AISI 4140 steel welded with different heat input levels between the first and second layers. The weld heat input levels selected by the Higuchi test were 5/5, 5/10, and 15/5 kJ/cm. The evaluation of the refining and/or tempering of the coarsened grain HAZ of the first layer was carried out using metallographic tests, microhardness measurements, and the Charpy-V impact test. Tempering of the first layer was only reached when the weld heat input ratio was 5/5 kJ/cm. The results of the Charpy-V impact test showed that the two-layer technique was efficient from the point of view of toughness, since the toughness values reached were greater than those of the base metal for all weld heat input ratios applied. The results obtained indicate that the best performance of the two-layer deposition technique was for the weld heat input ratio of 5/5 kJ/cm, employing low heat input.

  16. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
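
    A minimal sketch of the surrogate idea, assuming a scikit-learn Gaussian process as the surrogate and a cheap analytic stand-in for the expensive code runs (RAVEN's own ROM machinery and the RELAP-7 interface are not reproduced):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(1)

      # hypothetical stand-in for an expensive simulation: a figure of merit
      # (e.g. peak clad temperature) as a function of two uncertain inputs
      def expensive_run(x):
          return 800.0 + 300.0 * x[0] ** 2 + 100.0 * np.sin(3.0 * x[1])

      X_train = rng.random((40, 2))  # 40 real "code runs"
      y_train = np.apply_along_axis(expensive_run, 1, X_train)

      surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                           normalize_y=True).fit(X_train, y_train)

      # thousands of surrogate "runs" now cost microseconds instead of hours
      X_query = rng.random((10000, 2))
      y_pred, y_std = surrogate.predict(X_query, return_std=True)
      print(y_pred.mean(), y_std.max())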

  17. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
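
    A compact sketch of the general idea, ranking stochastic inputs by rank correlation with the Monte Carlo output (the paper's classification method goes further and separates influence on central tendency from influence on the shape of the outcome distribution):

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(7)
      n = 5000

      # hypothetical multimedia-model inputs with assumed distributions
      inputs = {
          "emission_rate": rng.lognormal(0.0, 1.0, n),
          "partition_coef": rng.lognormal(2.0, 0.5, n),
          "degradation_k": rng.uniform(0.01, 0.1, n),
          "soil_depth": rng.normal(0.2, 0.02, n),
      }

      # toy stand-in for the fate model's outcome (e.g. exposure concentration)
      y = (inputs["emission_rate"] * inputs["partition_coef"]
           / (1.0 + 50.0 * inputs["degradation_k"]))

      # rank inputs by the absolute Spearman correlation with the outcome
      rho = {k: abs(spearmanr(v, y)[0]) for k, v in inputs.items()}
      for name, r in sorted(rho.items(), key=lambda kv: -kv[1]):
          print(f"{name:15s} {r:.3f}")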

  18. Synaptic plasticity in a cerebellum-like structure depends on temporal order

    NASA Astrophysics Data System (ADS)

    Bell, Curtis C.; Han, Victor Z.; Sugawara, Yoshiko; Grant, Kirsty

    1997-05-01

    Cerebellum-like structures in fish appear to act as adaptive sensory processors, in which learned predictions about sensory input are generated and subtracted from actual sensory input, allowing unpredicted inputs to stand out [1-3]. Pairing sensory input with centrally originating predictive signals, such as corollary discharge signals linked to motor commands, results in neural responses to the predictive signals alone that are 'negative images' of the previously paired sensory responses. Adding these 'negative images' to actual sensory inputs minimizes the neural response to predictable sensory features. At the cellular level, sensory input is relayed to the basal region of Purkinje-like cells, whereas predictive signals are relayed by parallel fibres to the apical dendrites of the same cells [4]. The generation of negative images could be explained by plasticity at parallel fibre synapses [5-7]. We show here that such plasticity exists in the electrosensory lobe of mormyrid electric fish and that it has the necessary properties for such a model: it is reversible, anti-Hebbian (excitatory postsynaptic potentials (EPSPs) are depressed after pairing with a postsynaptic spike) and tightly dependent on the sequence of pre- and postsynaptic events, with depression occurring only if the postsynaptic spike follows EPSP onset within 60 ms.
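
    A schematic of the timing rule reported here, with invented magnitudes (only the sign structure and the 60 ms depression window follow the text):

      def weight_update(dt_ms, w, a_dep=0.05, a_pot=0.005):
          """Anti-Hebbian update; dt_ms = t_post_spike - t_epsp_onset.

          Depression only when the postsynaptic spike follows EPSP onset
          within 60 ms; otherwise a small nonassociative recovery toward
          baseline, which makes the rule reversible."""
          if 0.0 < dt_ms <= 60.0:
              return w - a_dep * w
          return w + a_pot * (1.0 - w)

      w = 1.0
      for dt in (10, 30, 120, -20, 45):
          w = weight_update(dt, w)
          print(f"dt = {dt:4d} ms -> w = {w:.3f}")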

  19. Associative memory model with spontaneous neural activity

    NASA Astrophysics Data System (ADS)

    Kurikawa, Tomoki; Kaneko, Kunihiko

    2012-05-01

    We propose a novel associative memory model wherein the neural activity without an input (i.e., spontaneous activity) is modified by an input to generate a target response that is memorized for recall upon the same input. Suitable design of synaptic connections enables the model to memorize input/output (I/O) mappings equaling 70% of the total number of neurons, where the evoked activity distinguishes a target pattern from others. Spontaneous neural activity without an input shows chaotic dynamics but keeps some similarity with evoked activities, as reported in recent experimental studies.

  20. Field measurement of moisture-buffering model inputs for residential buildings

    DOE PAGES

    Woods, Jason; Winkler, Jon

    2016-02-05

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term—the moisture sorption into the materials. We validated this method with laboratory measurements and then used it to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
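
    The extraction step reduces to solving a per-timestep moisture balance for its one unmeasured term; a sketch with hypothetical names and numbers (the actual tests measured each transfer term in the instrumented houses):

      # balance over one timestep, all terms in kg of water:
      # air storage change = generation + infiltration gain
      #                      - condensate removed - sorption into materials
      def sorption_residual(generation, infiltration_gain,
                            condensate_removed, air_storage_change):
          """Solve the balance for the only unmeasured term: material sorption."""
          return (generation + infiltration_gain
                  - condensate_removed - air_storage_change)

      # a hypothetical hour of data during the square-wave RH test
      print(sorption_residual(generation=0.50, infiltration_gain=0.12,
                              condensate_removed=0.20, air_storage_change=0.05))
      # -> 0.37 kg attributed to sorption into the building materials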

  1. Inputs and spatial distribution patterns of Cr in Jiaozhou Bay

    NASA Astrophysics Data System (ADS)

    Yang, Dongfang; Miao, Zhenqing; Huang, Xinmin; Wei, Linzhen; Feng, Ming

    2018-03-01

    Cr pollution in marine bays has been one of the critical environmental issues, and understanding the input and spatial distribution patterns is essential to pollution control. According to the source strengths of the major pollution sources, the input patterns of pollutants to a marine bay can be classified as slight, moderate and heavy, and the spatial distributions correspond to three block models, respectively. This paper analyzed the input patterns and distributions of Cr in Jiaozhou Bay, eastern China, based on investigations of Cr in surface waters during 1979-1983. Results showed that the input strengths of Cr in Jiaozhou Bay could be classified as moderate input and slight input, with input strengths of 32.32-112.30 μg L-1 and 4.17-19.76 μg L-1, respectively. The input patterns of Cr comprised moderate input and slight input, and the corresponding horizontal distributions could be described by Block Model 2 and Block Model 3, respectively. In the case of moderate input via overland runoff, Cr contents decreased from the estuaries to the bay mouth, and the distribution pattern was parallel. In the case of moderate input via marine currents, Cr contents decreased from the bay mouth into the bay, and the distribution pattern was parallel to circular. The block models were able to reveal the transfer processes of various pollutants and are helpful for understanding the distributions of pollutants in marine bays.

  2. Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.

    PubMed

    Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark

    2016-04-01

    Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-(18)F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-(18)F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models (single-tissue and 2-tissue [2TCM] compartment models and plasma-input Logan) and reference tissue models (simplified reference tissue model [SRTM], reference Logan, and SUV ratio [SUVr]). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to 90 min, initial 40- and 60-min data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R(2) = 0.99, slope = 0.96). SRTM binding potential (BP(ND)) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R(2) = 1.00, slope ≈ 1.00), whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between parametric methods and SRTM was best for reference Logan (R(2) = 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP(ND) values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods. SRTM BP(ND) and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP(ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values. VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-(18)F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than seen with SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
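
    A compact sketch of the reference Logan estimate of DVR used above, neglecting the k2' correction term at late times (the frame times and time-activity curves below are synthetic):

      import numpy as np

      def reference_logan_dvr(t, C_roi, C_ref, t_star=30.0):
          """Slope of the reference Logan plot over frames with t >= t_star."""
          def cumtrapz(c):
              return np.concatenate(([0.0],
                  np.cumsum(np.diff(t) * 0.5 * (c[1:] + c[:-1]))))
          late = t >= t_star
          x = cumtrapz(C_ref)[late] / C_roi[late]
          y = cumtrapz(C_roi)[late] / C_roi[late]
          slope, _ = np.polyfit(x, y, 1)
          return slope  # DVR; binding is reported as DVR - 1

      t = np.linspace(0.5, 90.0, 60)  # minutes
      C_ref = t * np.exp(-t / 40.0)   # synthetic cerebellar reference curve
      C_roi = 1.6 * C_ref             # synthetic target region with DVR = 1.6
      print(reference_logan_dvr(t, C_roi, C_ref))  # ~1.6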

  3. Disaggregated seismic hazard and the elastic input energy spectrum: An approach to design earthquake selection

    NASA Astrophysics Data System (ADS)

    Chapman, Martin Colby

    1998-12-01

    The design earthquake selection problem is fundamentally probabilistic. Disaggregation of a probabilistic model of the seismic hazard offers a rational and objective approach that can identify the most likely earthquake scenario(s) contributing to hazard. An ensemble of time series can be selected on the basis of the modal earthquakes derived from the disaggregation. This gives a useful time-domain realization of the seismic hazard, to the extent that a single motion parameter captures the important time-domain characteristics. A possible limitation to this approach arises because most currently available motion prediction models for peak ground motion or oscillator response are essentially independent of duration, and modal events derived using the peak motions for the analysis may not represent the optimal characterization of the hazard. The elastic input energy spectrum is an alternative to the elastic response spectrum for these types of analyses. The input energy combines the elements of amplitude and duration into a single parameter description of the ground motion that can be readily incorporated into standard probabilistic seismic hazard analysis methodology. This use of the elastic input energy spectrum is examined. Regression analysis is performed using strong motion data from Western North America and consistent data processing procedures for both the absolute input energy equivalent velocity (V_ea) and the elastic pseudo-relative velocity response (PSV) in the frequency range 0.5 to 10 Hz. The results show that the two parameters can be successfully fit with identical functional forms. The dependence of V_ea and PSV upon NEHRP site classification is virtually identical. The variance of V_ea is uniformly less than that of PSV, indicating that V_ea can be predicted with slightly less uncertainty as a function of magnitude, distance and site classification. The effects of site class are important at frequencies less than a few Hertz. The regression modeling does not resolve significant effects due to site class at frequencies greater than approximately 5 Hz. Disaggregation of general seismic hazard models using V_ea indicates that the modal magnitudes for the higher frequency oscillators tend to be larger, and vary less with oscillator frequency, than those derived using PSV. Insofar as the elastic input energy may be a better parameter for quantifying the damage potential of ground motion, its use in probabilistic seismic hazard analysis could provide an improved means for selecting earthquake scenarios and establishing design earthquakes for many types of engineering analyses.
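
    For reference, the equivalent velocity used to report absolute input energy follows the standard definition in the input-energy literature (not transcribed from the dissertation): for an oscillator of mass m absorbing absolute input energy E_a,

      V_{ea} = \sqrt{2 E_a / m}

    which has units of velocity and can therefore be regressed with the same functional forms as PSV.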

  4. Assessing the required additional organic inputs to soils to reach the 4 per 1000 objective at the global scale: a RothC project

    NASA Astrophysics Data System (ADS)

    Lutfalla, Suzanne; Skalsky, Rastislav; Martin, Manuel; Balkovic, Juraj; Havlik, Petr; Soussana, Jean-François

    2017-04-01

    The 4 per 1000 Initiative underlines the role of soil organic matter in addressing the three-fold challenge of food security, adaptation of the land sector to climate change, and mitigation of human-induced GHG emissions. It sets an ambitious global target of a 0.4% (4/1000) annual increase in top soil organic carbon (SOC) stock. The present collaborative project between the 4 per 1000 research program, INRA and IIASA aims at providing a first global assessment of the translation of this soil organic carbon sequestration target into an equivalent organic matter inputs target. Indeed, soil organic carbon builds up in the soil through different processes leading to an increased input of carbon to the system (by increasing returns to the soil, for instance) or a decreased output of carbon from the system (mainly by biodegradation and mineralization processes). Here we answer the question of how much extra organic matter must be added to agricultural soils every year (in otherwise unchanged climatic conditions) in order to guarantee a 0.4% yearly increase of total soil organic carbon stocks (a 40 cm soil depth is considered). We use the RothC model of soil organic matter turnover on a spatial grid over 10 years to model two situations for croplands: a first situation where soil organic carbon remains constant (system at equilibrium) and a second situation where soil organic carbon increases by 0.4% every year. The model accounts for the effects of soil type, temperature, moisture content and plant cover on the turnover process; it is run on a monthly time step, and it can simulate the input needed to sustain a certain SOC stock (or evolution of SOC stock). These two SOC conditions lead to two average yearly plant inputs over 10 years. The difference between the two simulated inputs represents the additional yearly input needed to reach the 4 per 1000 objective (input_eq for inputs needed for SOC to remain constant; input_4/1000 for inputs needed for SOC to reach the 4 per 1000 target). A spatial representation of this difference shows the distribution of the required returns to the soil. This first tool will provide the basis for the next steps: choosing and implementing practices to obtain the required additional input. Results will be presented from simulations at the regional scale (country: Slovakia) and at the global scale (0.5° grid resolution). Soil input data come from the HWSD; climatic input data come from the AgMERRA climate dataset averaged over a 30-year period (1980-2010). They show that, at the global scale, given some data corrections which will be presented and discussed, the 4 per 1000 increase in top soil organic carbon can be reached with a median additional input of +0.89 tC/ha/year for cropland soils.
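
    The paired-simulation logic can be caricatured with a one-pool, first-order carbon model (RothC itself uses several pools with monthly rate modifiers, so the numbers below are purely illustrative):

      # one-pool soil carbon model: dC/dt = I - k*C   (units: tC/ha and years)
      k, C0 = 0.02, 45.0  # hypothetical decay rate and initial SOC stock

      I_eq = k * C0       # input that keeps SOC constant (equilibrium run)

      def required_input(target_rate=0.004, years=10, steps_per_year=12):
          """Smallest constant input meeting the 0.4 %/yr growth target."""
          I, dt = I_eq, 1.0 / steps_per_year
          while True:
              C = C0
              for _ in range(years * steps_per_year):
                  C += (I - k * C) * dt
              if C >= C0 * (1.0 + target_rate) ** years:
                  return I
              I += 0.001  # crude linear search

      print(required_input() - I_eq)  # extra tC/ha/yr in this toy setting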

  5. Uncertainty analysis of the simulations of effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota

    USGS Publications Warehouse

    Wesolowski, Edwin A.

    1996-01-01

    Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are the reaeration rate, the sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
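
    The first-order method referenced above propagates input variances through local sensitivities; a generic sketch (the study itself used the uncertainty-analysis capability of the Enhanced Stream Water Quality Model on the calibrated river models):

      import numpy as np

      def first_order_variance(f, x0, sigmas, h=1e-6):
          """Var(y) ~ sum_i (df/dx_i)^2 sigma_i^2, via central differences."""
          x0 = np.asarray(x0, dtype=float)
          grads = np.empty_like(x0)
          for i in range(x0.size):
              xp, xm = x0.copy(), x0.copy()
              xp[i] += h
              xm[i] -= h
              grads[i] = (f(xp) - f(xm)) / (2.0 * h)
          return float(np.sum((grads * np.asarray(sigmas)) ** 2))

      # toy stand-in for simulated dissolved oxygen as a function of
      # (headwater DO, reaeration rate, SOD rate); not the river model itself
      def do_model(x):
          hw_do, k_rea, sod = x
          return hw_do * np.exp(-0.3) + 2.0 * k_rea - 1.5 * sod

      var = first_order_variance(do_model, x0=[8.0, 0.6, 0.4],
                                 sigmas=[0.5, 0.1, 0.1])
      print(var ** 0.5)  # first-order standard deviation of simulated DO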

  6. CalSimHydro Tool - A Web-based interactive tool for the CalSim 3.0 Hydrology Preprocessor

    NASA Astrophysics Data System (ADS)

    Li, P.; Stough, T.; Vu, Q.; Granger, S. L.; Jones, D. J.; Ferreira, I.; Chen, Z.

    2011-12-01

    CalSimHydro, the CalSim 3.0 Hydrology Preprocessor, is an application designed to automate the various steps in the computation of hydrologic inputs for CalSim 3.0, a water resources planning model developed jointly by California State Department of Water Resources and United States Bureau of Reclamation, Mid-Pacific Region. CalSimHydro consists of a five-step FORTRAN based program that runs the individual models in succession passing information from one model to the next and aggregating data as required by each model. The final product of CalSimHydro is an updated CalSim 3.0 state variable (SV) DSS input file. CalSimHydro consists of (1) a Rainfall-Runoff Model to compute monthly infiltration, (2) a Soil moisture and demand calculator (IDC) that estimates surface runoff, deep percolation, and water demands for natural vegetation cover and various crops other than rice, (3) a Rice Water Use Model to compute the water demands, deep percolation, irrigation return flow, and runoff from precipitation for the rice fields, (4) a Refuge Water Use Model that simulates the ponding operations for managed wetlands, and (5) a Data Aggregation and Transfer Module to aggregate the outputs from the above modules and transfer them to the CalSim SV input file. In this presentation, we describe a web-based user interface for CalSimHydro using Google Earth Plug-In. The CalSimHydro tool allows users to - interact with geo-referenced layers of the Water Budget Areas (WBA) and Demand Units (DU) displayed over the Sacramento Valley, - view the input parameters of the hydrology preprocessor for a selected WBA or DU in a time series plot or a tabular form, - edit the values of the input parameters in the table or by downloading a spreadsheet of the selected parameter in a selected time range, - run the CalSimHydro modules in the backend server and notify the user when the job is done, - visualize the model output and compare it with a base run result, - download the output SV file to be used to run CalSim 3.0. The CalSimHydro tool streamlines the complicated steps to configure and run the hydrology preprocessor by providing a user-friendly visual interface and back-end services to validate user inputs and manage the model execution. It is a powerful addition to the new CalSim 3.0 system.

  7. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
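
    A highly simplified sketch of joint inference over a physical parameter and a structural-error term (the paper uses DREAM-ZS with a nonparametric kernel error model; here a plain random-walk Metropolis sampler and an additive bias suffice to show the idea):

      import numpy as np

      rng = np.random.default_rng(3)

      # synthetic truth: drawdown = 1.3 * pumping plus a structural bias of 0.4
      pumping = np.linspace(0.5, 2.0, 20)
      obs = 1.3 * pumping + 0.4 + rng.normal(0.0, 0.05, pumping.size)

      def log_post(theta, bias, sigma=0.05):
          resid = obs - (theta * pumping + bias)  # physical model + error model
          return (-0.5 * np.sum((resid / sigma) ** 2)
                  - theta ** 2 / 200.0 - bias ** 2 / 200.0)  # weak priors

      x = np.array([1.0, 0.0])
      lp, chain = log_post(*x), []
      for _ in range(20000):
          prop = x + rng.normal(0.0, 0.02, 2)
          lp_prop = log_post(*prop)
          if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
              x, lp = prop, lp_prop
          chain.append(x.copy())
      print(np.mean(chain[5000:], axis=0))  # the bias term absorbs the model error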

  8. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.

  9. Economy-wide material input/output and dematerialization analysis of Jilin Province (China).

    PubMed

    Li, MingSheng; Zhang, HuiMin; Li, Zhi; Tong, LianJun

    2010-06-01

    In this paper, both direct material input (DMI) and domestic processed output (DPO) of Jilin Province in 1990-2006 were calculated, and a dematerialization model was then established based on these two indexes. The main results are summarized as follows: (1) Both direct material input and domestic processed output increased at a steady rate during 1990-2006, with average annual growth rates of 4.19% and 2.77%, respectively. (2) The average contribution rate of material input to economic growth is 44%, indicating that the economic growth is visibly extensive. (3) During the studied period, the accumulative quantity of material input dematerialization is 11,543 × 10^4 t and the quantity of waste dematerialization is 5,987 × 10^4 t. Moreover, the dematerialization gaps are positive, suggesting that the potential of dematerialization has been well fulfilled. (4) In most years of the analyzed period, especially 2003-2006, the economic system of Jilin Province represents an unsustainable state. The accelerated economic growth relies mostly on excessive resource consumption after the Revitalization Strategy of Northeast China was launched.

  10. The human motor neuron pools receive a dominant slow‐varying common synaptic input

    PubMed Central

    Negro, Francesco; Yavuz, Utku Şükrü

    2016-01-01

    Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non‐linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459
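
    A sketch of the underlying coherence measurement, using composite signals that stand in for cumulative spike trains sharing a low-frequency common drive (the paper's estimator additionally fits a model to coherences across permutations of motor-unit groups):

      import numpy as np
      from scipy.signal import butter, coherence, filtfilt

      rng = np.random.default_rng(5)
      fs, T = 1000.0, 60.0  # Hz, seconds
      n = int(fs * T)

      b, a = butter(4, 5.0 / (fs / 2.0))  # low-pass below 5 Hz
      common = filtfilt(b, a, rng.standard_normal(n))  # shared drive

      def group_signal(frac_common):
          independent = filtfilt(b, a, rng.standard_normal(n))
          return frac_common * common + (1.0 - frac_common) * independent

      g1, g2 = group_signal(0.8), group_signal(0.8)  # two motor-unit groups
      f, Cxy = coherence(g1, g2, fs=fs, nperseg=4096)
      print(f"mean coherence below 5 Hz: {Cxy[f < 5.0].mean():.2f}")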

  11. Method and apparatus for loss of control inhibitor systems

    NASA Technical Reports Server (NTRS)

    A'Harrah, Ralph C. (Inventor)

    2007-01-01

    Active and adaptive systems and methods to prevent loss of control incidents by providing tactile feedback to a vehicle operator are disclosed. According to the present invention, an operator gives a control input to an inceptor. An inceptor sensor measures an inceptor input value of the control input. The inceptor input is used as an input to a Steady-State Inceptor Input/Effector Output Model that models the vehicle control system design. A desired effector output from the inceptor input is generated from the model. The desired effector output is compared to an actual effector output to get a distortion metric. A feedback force is generated as a function of the distortion metric. The feedback force is used as an input to a feedback force generator which generates a loss of control inhibitor system (LOCIS) force back to the inceptor. The LOCIS force is felt by the operator through the inceptor.
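
    The patented loop reduces to a few lines; the gearing model and gain below are placeholders, not values from the patent:

      def desired_effector_output(inceptor_input):
          """Steady-state inceptor-input/effector-output model (placeholder:
          simple linear stick-to-surface gearing)."""
          return 2.5 * inceptor_input

      def locis_force(inceptor_input, actual_effector_output, gain=4.0):
          """Tactile feedback force returned to the inceptor."""
          distortion = (desired_effector_output(inceptor_input)
                        - actual_effector_output)
          return gain * distortion

      # operator commands 3 deg of stick; the surface achieves 5 deg of the
      # 7.5 deg the design model expects, so the operator feels resistance
      print(locis_force(inceptor_input=3.0, actual_effector_output=5.0))  # 10.0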

  12. Full Capability Formation Flight Control

    DTIC Science & Technology

    2005-02-01

    (Abstract fragmentary in the source record; the recoverable content indicates a formation-flight controller designed to hold station within ≤ 5 feet during thunderstorm-level turbulence, evaluated against models of random wind turbulence and the four-vortex wake of the lead aircraft.)

  13. How sensitive are estimates of carbon fixation in agricultural models to input data?

    PubMed Central

    2012-01-01

    Background Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess the effect of input datasets of differing quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. Besides, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison. PMID:22296931

  14. Impacts of Stratospheric Black Carbon on Agriculture

    NASA Astrophysics Data System (ADS)

    Xia, L.; Robock, A.; Elliott, J. W.

    2017-12-01

    A regional nuclear war between India and Pakistan could inject 5 Tg of soot into the stratosphere, which would absorb sunlight, decrease global surface temperature by about 1°C for 5-10 years and have major impacts on precipitation and the amount of solar radiation reaching Earth's surface. Using two global gridded crop models forced by one global climate model simulation, we investigate the impacts on agricultural productivity in various nations. The crop model in the Community Land Model 4.5 (CLM-crop4.5) and the parallel Decision Support System for Agricultural Technology (pDSSAT) in the parallel System for Integrating Impact Models and Sectors are participating in the Global Gridded Crop Model Intercomparison. We force these two crop models with output from the Whole Atmospheric Community Climate Model to characterize the global agricultural impact from climate changes due to a regional nuclear war. Crops in CLM-crop4.5 include maize, rice, soybean, cotton and sugarcane, and crops in pDSSAT include maize, rice, soybean and wheat. Although the two crop models require a different time frequency of weather input, we downscale the climate model output to provide consistent temperature, precipitation and solar radiation inputs. In general, CLM-crop4.5 simulates a larger global average reduction of maize and soybean production relative to pDSSAT. Global rice production shows negligible change with climate anomalies from a regional nuclear war. Cotton and sugarcane benefit from a regional nuclear war from CLM-crop4.5 simulation, and global wheat production would decrease significantly in the pDSSAT simulation. The regional crop yield responses to a regional nuclear conflict are different for each crop, and we present the changes in production on a national basis. These models do not include the crop responses to changes in ozone, ultraviolet radiation, or diffuse radiation, and we would like to encourage more modelers to improve crop models to account for those impacts. We present these results as a demonstration of using different crop models to study this problem, and we invite more global crop modeling groups to use the same climate forcing, which we would be happy to provide, to gain a better understanding of global agricultural responses under different future climate scenarios with stratospheric aerosols.

  15. Incorporating induced seismicity in the 2014 United States National Seismic Hazard Model: results of the 2014 workshop and sensitivity studies

    USGS Publications Warehouse

    Petersen, Mark D.; Mueller, Charles S.; Moschetti, Morgan P.; Hoover, Susan M.; Rubinstein, Justin L.; Llenos, Andrea L.; Michael, Andrew J.; Ellsworth, William L.; McGarr, Arthur F.; Holland, Austin A.; Anderson, John G.

    2015-01-01

    The U.S. Geological Survey National Seismic Hazard Model for the conterminous United States was updated in 2014 to account for new methods, input models, and data necessary for assessing the seismic ground shaking hazard from natural (tectonic) earthquakes. The U.S. Geological Survey National Seismic Hazard Model project uses probabilistic seismic hazard analysis to quantify the rate of exceedance for earthquake ground shaking (ground motion). For the 2014 National Seismic Hazard Model assessment, the seismic hazard from potentially induced earthquakes was intentionally not considered because we had not determined how to properly treat these earthquakes for the seismic hazard analysis. The phrases “potentially induced” and “induced” are used interchangeably in this report, however it is acknowledged that this classification is based on circumstantial evidence and scientific judgment. For the 2014 National Seismic Hazard Model update, the potentially induced earthquakes were removed from the NSHM’s earthquake catalog, and the documentation states that we would consider alternative models for including induced seismicity in a future version of the National Seismic Hazard Model. As part of the process of incorporating induced seismicity into the seismic hazard model, we evaluate the sensitivity of the seismic hazard from induced seismicity to five parts of the hazard model: (1) the earthquake catalog, (2) earthquake rates, (3) earthquake locations, (4) earthquake Mmax (maximum magnitude), and (5) earthquake ground motions. We describe alternative input models for each of the five parts that represent differences in scientific opinions on induced seismicity characteristics. In this report, however, we do not weight these input models to come up with a preferred final model. Instead, we present a sensitivity study showing uniform seismic hazard maps obtained by applying the alternative input models for induced seismicity. The final model will be released after further consideration of the reliability and scientific acceptability of each alternative input model. Forecasting the seismic hazard from induced earthquakes is fundamentally different from forecasting the seismic hazard for natural, tectonic earthquakes. This is because the spatio-temporal patterns of induced earthquakes are reliant on economic forces and public policy decisions regarding extraction and injection of fluids. As such, the rates of induced earthquakes are inherently variable and nonstationary. Therefore, we only make maps based on an annual rate of exceedance rather than the 50-year rates calculated for previous U.S. Geological Survey hazard maps.

  16. Spiking and Excitatory/Inhibitory Input Dynamics of Barrel Cells in Response to Whisker Deflections of Varying Velocity and Angular Direction.

    PubMed

    Patel, Mainak

    2018-01-15

    The spiking of barrel regular-spiking (RS) cells is tuned for both whisker deflection direction and velocity. Velocity tuning arises due to thalamocortical (TC) synchrony (but not spike quantity) varying with deflection velocity, coupled with feedforward inhibition, while direction selectivity is not fully understood, though may be due partly to direction tuning of TC spiking. Data show that as deflection direction deviates from the preferred direction of an RS cell, excitatory input to the RS cell diminishes minimally, but temporally shifts to coincide with the time-lagged inhibitory input. This work constructs a realistic large-scale model of a barrel; model RS cells exhibit velocity and direction selectivity due to TC input dynamics, with the experimentally observed sharpening of direction tuning with decreasing velocity. The model puts forth the novel proposal that RS→RS synapses can naturally and simply account for the unexplained direction dependence of RS cell inputs - as deflection direction deviates from the preferred direction of an RS cell, and TC input declines, RS→RS synaptic transmission buffers the decline in total excitatory input and causes a shift in timing of the excitatory input peak from the peak in TC input to the delayed peak in RS input. The model also provides several experimentally testable predictions on the velocity dependence of RS cell inputs. This model is the first, to my knowledge, to study the interaction of direction and velocity and propose physiological mechanisms for the stimulus dependence in the timing and amplitude of RS cell inputs. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Replacing Fortran Namelists with JSON

    NASA Astrophysics Data System (ADS)

    Robinson, T. E., Jr.

    2017-12-01

    Maintaining a log of input parameters for a climate model is very important to understanding potential causes for answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure to include code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about location of syntax errors or typos. The output JSON provides a ground truth for values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
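
    A small illustration of this input quality-control pattern (file names, parameter names, and the merge policy are invented; the GFDL parser itself is written in C with Fortran wrappers):

      import json

      DEFAULTS = {"dt_atmos": 1800, "do_ocean": True, "radiation_scheme": "rrtmg"}

      def load_run_input(path):
          """Merge user JSON over model defaults and record the ground truth."""
          with open(path) as f:
              user = json.load(f)  # syntax errors are reported with positions
          unknown = set(user) - set(DEFAULTS)
          if unknown:
              raise KeyError(f"unrecognized input parameters: {sorted(unknown)}")
          merged = {**DEFAULTS, **user}
          # one parsable record of every value the model will actually use,
          # including defaults the user never set
          with open("run_input_actual.json", "w") as f:
              json.dump(merged, f, indent=2, sort_keys=True)
          return merged

      with open("run_input.json", "w") as f:
          json.dump({"dt_atmos": 900}, f)  # a run overriding only the timestep
      print(load_run_input("run_input.json"))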

  18. Development of a 3D coupled physical-biogeochemical model for the Marseille coastal area (NW Mediterranean Sea): what complexity is required in the coastal zone?

    PubMed

    Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane

    2013-01-01

    Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007-2011). The model accuracy assessment showed that both versions of the model were able to capture the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced well the upwelling events and the intrusion of Rhone River water into the Bay of Marseille. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model.

  19. Pandemic recovery analysis using the dynamic inoperability input-output model.

    PubMed

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model (1,2) and the dynamic inoperability input-output model (DIIM) (3). These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
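
    The DIIM dynamics referenced above are commonly written as q(t+1) = q(t) + K[A* q(t) + c*(t) - q(t)], where q is sector inoperability, A* the normalized interdependency matrix, c* the demand-side perturbation, and K a diagonal resilience matrix; a sketch with three invented sectors:

      import numpy as np

      A_star = np.array([[0.0, 0.2, 0.1],   # normalized interdependency matrix
                         [0.1, 0.0, 0.3],
                         [0.2, 0.1, 0.0]])
      K = np.diag([0.5, 0.3, 0.4])          # sector resilience coefficients

      def c_star(t):
          # pandemic-style workforce perturbation fading out over 30 periods
          return np.array([0.10, 0.25, 0.05]) * max(0.0, 1.0 - t / 30.0)

      q = np.zeros(3)  # initial inoperability
      for t in range(60):
          q = q + K @ (A_star @ q + c_star(t) - q)
      print(q)  # inoperability decays back toward zero as the workforce recovers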

  1. A study of remote sensing as applied to regional and small watersheds. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.

    1974-01-01

    The accuracy of remotely sensed measurements to provide inputs to hydrologic models of watersheds is studied. A series of sensitivity analyses on continuous simulation models of three watersheds determined: (1) optimal values and permissible tolerances of inputs to achieve accurate simulation of streamflow from the watersheds; (2) which model inputs can be quantified from remote sensing, directly, indirectly or by inference; and (3) how accurate remotely sensed measurements (from spacecraft or aircraft) must be to provide a basis for quantifying model inputs within permissible tolerances.

  2. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems are mutually interrelated, and a procedure is developed to address them in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. An approach using the cluster method is applied in order to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in calibration data is important in addition to the size of calibration data.
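
    A compact version of the input-selection step, assuming ordinary least squares as the regression model and scikit-learn's cross-validation utilities (the companion problem of selecting representative calibration events, e.g. by clustering, is not shown):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(11)
      n = 120
      X = rng.random((n, 6))  # candidate explanatory variables (invented)
      y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + 0.1 * rng.standard_normal(n)

      selected, remaining, best = [], list(range(X.shape[1])), -np.inf
      while remaining:
          # forward selection: add the variable with the best CV score
          scores = {j: cross_val_score(LinearRegression(),
                                       X[:, selected + [j]], y, cv=5).mean()
                    for j in remaining}
          j_best = max(scores, key=scores.get)
          if scores[j_best] <= best:
              break  # stop when cross-validation no longer improves
          best = scores[j_best]
          selected.append(j_best)
          remaining.remove(j_best)
      print(selected, round(best, 3))  # expect variables 0 and 2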

  3. Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs

    NASA Astrophysics Data System (ADS)

    Jayathilake, D. I.; Smith, T. J.

    2017-12-01

    Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types and the impact of different qualities on model performance, parameterization, and fidelity is critically important to improving hydrologic models. In this paper, changes in model parameterization and performance are explored with respect to four different potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used, conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.
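
    The Budyko-style classification mentioned above reduces to a dryness-index threshold; a one-function illustration (the values are made up):

      # Classify a catchment as energy- or water-limited from its long-term
      # dryness (aridity) index PET/P, as in the Budyko framework.
      def budyko_class(pet_mm, precip_mm):
          aridity = pet_mm / precip_mm
          return "water-limited" if aridity > 1.0 else "energy-limited"

      print(budyko_class(pet_mm=900.0, precip_mm=1200.0))   # energy-limited (PET/P = 0.75)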

  4. Branch Input Resistance and Steady Attenuation for Input to One Branch of a Dendritic Neuron Model

    PubMed Central

    Rall, Wilfrid; Rinzel, John

    1973-01-01

    Mathematical solutions and numerical illustrations are presented for the steady-state distribution of membrane potential in an extensively branched neuron model, when steady electric current is injected into only one dendritic branch. Explicit expressions are obtained for input resistance at the branch input site and for voltage attenuation from the input site to the soma; expressions for AC steady-state input impedance and attenuation are also presented. The theoretical model assumes passive membrane properties and the equivalent cylinder constraint on branch diameters. Numerical examples illustrate how branch input resistance and steady attenuation depend upon the following: the number of dendritic trees, the orders of dendritic branching, the electrotonic length of the dendritic trees, the location of the dendritic input site, and the input resistance at the soma. The application to cat spinal motoneurons, and to other neuron types, is discussed. The effect of a large dendritic input resistance upon the amount of local membrane depolarization at the synaptic site, and upon the amount of depolarization reaching the soma, is illustrated and discussed; simple proportionality with input resistance does not hold, in general. Also, branch input resistance is shown to exceed the input resistance at the soma by an amount that is always less than the sum of core resistances along the path from the input site to the soma. PMID:4715583
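
    The single-cylinder results that such branched models build on are compact enough to state directly. The sketch below uses the standard passive-cable formulas for a sealed-end cylinder of electrotonic length L; it is a building block only, not the paper's full branched solution:

      # For a uniform passive cable with a sealed far end:
      #   input resistance   R_in = R_inf * coth(L)
      #   steady attenuation V(L)/V(0) = 1 / cosh(L)
      # where R_inf is the input resistance of a semi-infinite cable.
      import math

      def sealed_end_input_resistance(R_inf, L):
          return R_inf / math.tanh(L)

      def sealed_end_attenuation(L):
          """V(far end) / V(input site) for steady current injected at x = 0."""
          return 1.0 / math.cosh(L)

      print(sealed_end_input_resistance(R_inf=10e6, L=1.0))  # ~1.31e7 ohms
      print(sealed_end_attenuation(L=1.0))                   # ~0.648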

  5. A Lagrangian Subgridscale Model for Particle Transport Improvement and Application in the Adriatic Sea Using the Navy Coastal Ocean Model

    DTIC Science & Technology

    2006-12-01

    Only fragments of the scanned abstract are recoverable: the Lagrangian subgridscale model generates particle motion based on input statistical parameters, such as the turbulent velocity fluctuation and correlation time scale; an error measure is defined that is zero if the model and real parameters coincide; and a correlation coefficient compares parameters estimated from the real, model, and corrected velocities. The remainder of the scanned text (equation and figure fragments) is unrecoverable.

  6. The EMEP MSC-W chemical transport model - Part 1: Model description

    NASA Astrophysics Data System (ADS)

    Simpson, D.; Benedictow, A.; Berge, H.; Bergström, R.; Emberson, L. D.; Fagerli, H.; Hayman, G. D.; Gauss, M.; Jonson, J. E.; Jenkin, M. E.; Nyíri, A.; Richter, C.; Semeena, V. S.; Tsyro, S.; Tuovinen, J.-P.; Valdebenito, Á.; Wind, P.

    2012-02-01

    The Meteorological Synthesizing Centre-West (MSC-W) of the European Monitoring and Evaluation Programme (EMEP) has been performing model calculations in support of the Convention on Long Range Transboundary Air Pollution (CLRTAP) for more than 30 years. The EMEP MSC-W chemical transport model is still one of the key tools within European air pollution policy assessments. Traditionally, the EMEP model has covered all of Europe with a resolution of about 50 km × 50 km, extending vertically from ground level to the tropopause (100 hPa). The model has undergone substantial development in recent years, and is now applied on scales ranging from local (ca. 5 km grid size) to global (with 1 degree resolution). The model is used to simulate photo-oxidants and both inorganic and organic aerosols. In 2008 the EMEP model was released for the first time as public domain code, along with all required input data for model runs for one year. Since then, many changes have been made to the model physics and input data. The second release of the EMEP MSC-W model became available in mid 2011, and a new release is targeted for early 2012. This publication is intended to document this third release of the EMEP MSC-W model. The model formulations are given, along with details of the input data sets which are used, and brief background on some of the choices made in the formulation is presented. The model code itself is available at www.emep.int, along with the data required to run for a full year over Europe.

  7. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
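
    The multiplier idea can be made concrete with a toy forward model and a basic Metropolis random walk standing in for DREAM (the study itself couples DREAM with MODFLOW; the field, observations, and likelihood below are synthetic):

      # Estimate a single recharge multiplier and its posterior spread.
      import numpy as np

      rng = np.random.default_rng(1)
      base_recharge = rng.uniform(0.5, 1.5, size=25)    # prior recharge field (mm/day)
      true_mult = 0.8
      observed_heads = 3.0 * true_mult * base_recharge + rng.normal(0.0, 0.05, 25)

      def forward(mult):
          return 3.0 * mult * base_recharge             # toy linear "groundwater model"

      def log_like(mult, sigma=0.05):
          resid = observed_heads - forward(mult)
          return -0.5 * np.sum((resid / sigma) ** 2)

      mult, samples = 1.0, []
      for _ in range(5000):
          prop = mult + rng.normal(0.0, 0.02)           # random-walk proposal
          if np.log(rng.uniform()) < log_like(prop) - log_like(mult):
              mult = prop
          samples.append(mult)
      print(np.mean(samples[1000:]), np.std(samples[1000:]))  # posterior centered near 0.8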

  8. Reduced risk estimations after remediation of lead (Pb) in drinking water at two US school districts.

    PubMed

    Triantafyllidou, Simoni; Le, Trung; Gallagher, Daniel; Edwards, Marc

    2014-01-01

    The risk that students develop elevated blood lead from drinking water consumption at schools was assessed, which is a different approach from predictions of geometric mean blood lead levels. Measured water lead levels (WLLs) from 63 elementary schools in Seattle and 601 elementary schools in Los Angeles were acquired before and after voluntary remediation of water lead contamination problems. Combined exposures to measured school WLLs (first-draw and flushed, 50% of water consumption) and home WLLs (50% of water consumption) were used as inputs to the Integrated Exposure Uptake Biokinetic (IEUBK) model for each school. In Seattle an average 11.2% of students were predicted to exceed a blood lead threshold of 5 μg/dL across 63 schools pre-remediation, but predicted risks at individual schools varied (7% risk of exceedance at a "low exposure school", 11% risk at a "typical exposure school", and 31% risk at a "high exposure school"). Addition of water filters and removal of lead plumbing lowered school WLL inputs to the model, and reduced the predicted risk output to 4.8% on average for Seattle elementary students across all 63 schools. The residual post-remediation risk was attributable to other assumed background lead sources in the model (air, soil, dust, diet and home WLLs), with school WLLs practically eliminated as a health threat. Los Angeles schools instead instituted a flushing program, which was assumed to eliminate first-draw WLLs as inputs to the model. With the assumed benefits of remedial flushing, the predicted average risk of students to exceed a BLL threshold of 5 μg/dL dropped from 8.6% to 6.0% across 601 schools. In an era with increasingly stringent public health goals (e.g., reduction of the blood lead safety threshold from 10 to 5 μg/dL), quantifiable health benefits to students were predicted after water lead remediation at two large US school systems.

  9. The Environmental Interference Effects Model of the Electromagnetic Environmental Test Facility. Volume 3. Computer Program Descriptions. Part 2. Preliminary Data Processing

    DTIC Science & Technology

    1980-09-01

    Only table-of-contents fragments survive in the scanned abstract: MISEQIP input, program control card; MISEQIP input, use code data card. The remaining scanned text is unrecoverable.

  10. A Simulation Model Depicting Fleet Expansion Effects on the Fire Control Technicians Training Pipeline.

    DTIC Science & Technology

    1982-06-01


  11. Sparse distributed memory and related models

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1992-01-01

    Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
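
    The A/C structure summarized above is small enough to sketch end to end; the sizes, radius, and autoassociative write below are toy choices:

      # Minimal sparse distributed memory: a fixed random address matrix A
      # selects hard locations within a Hamming radius of the input address,
      # and a modifiable counter matrix C accumulates bipolar data.
      import numpy as np

      rng = np.random.default_rng(2)
      n_bits, n_locations, radius = 64, 500, 26

      A = rng.integers(0, 2, size=(n_locations, n_bits))   # fixed, possibly random
      C = np.zeros((n_locations, n_bits), dtype=int)       # modifiable counters

      def activated(address):
          return np.count_nonzero(A != address, axis=1) <= radius  # Hamming test

      def write(address, data):
          C[activated(address)] += 2 * data - 1            # store data as +/-1

      def read(address):
          return (C[activated(address)].sum(axis=0) >= 0).astype(int)

      pattern = rng.integers(0, 2, size=n_bits)
      write(pattern, pattern)                              # autoassociative store
      noisy = pattern.copy()
      noisy[:5] ^= 1                                       # corrupt 5 bits
      print(np.count_nonzero(read(noisy) != pattern))      # usually 0: pattern recovered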

  12. Prediction and optimization of the laccase-mediated synthesis of the antimicrobial compound iodine (I2).

    PubMed

    Schubert, M; Fey, A; Ihssen, J; Civardi, C; Schwarze, F W M R; Mourad, S

    2015-01-10

    An artificial neural network (ANN) and genetic algorithm (GA) were applied to improve the laccase-mediated oxidation of iodide (I−) to elemental iodine (I2). Biosynthesis of iodine (I2) was studied with a 5-level-4-factor central composite design (CCD). The generated ANN was evaluated by several statistical indices and gave better results than a classical quadratic response surface (RS) model. Sensitivity analysis determined the relative significance of the model input parameters, ranking them in order of importance (pH > laccase > mediator > iodide). The ANN-GA methodology was used to optimize the input space of the neural network model to find optimal settings for the laccase-mediated synthesis of iodine. ANN-GA optimized parameters resulted in a 9.9% increase in the conversion rate.
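
    The GA half of the ANN-GA pipeline can be sketched with a simple stand-in surrogate in place of the trained network (the quadratic below is not the paper's ANN; the four factors are scaled to [0, 1]):

      # Genetic algorithm searching the input space of a fitted surrogate.
      import numpy as np

      rng = np.random.default_rng(3)

      def surrogate(x):                   # stand-in for the trained ANN; peak at x = 0.6
          return -np.sum((x - 0.6) ** 2, axis=-1)

      pop = rng.uniform(0.0, 1.0, size=(40, 4))   # population over 4 scaled factors
      for _ in range(100):
          fitness = surrogate(pop)
          parents = pop[np.argsort(fitness)[-20:]]               # keep the fitter half
          children = parents[rng.integers(0, 20, 40)] + rng.normal(0.0, 0.05, (40, 4))
          pop = np.clip(children, 0.0, 1.0)                      # mutate, stay in bounds
      print(pop[np.argmax(surrogate(pop))])  # converges near the optimum [0.6, 0.6, 0.6, 0.6]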

  13. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
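
    A schematic Boolean-Volterra predictor of the kind described above is easy to write down; the particular lags and terms are arbitrary examples, not the paper's estimated model:

      # Output spike = OR of AND terms over lagged input spikes, gated by an
      # AND_NOT term for inhibition (all signals are 0/1 point processes).
      import numpy as np

      rng = np.random.default_rng(4)
      x = rng.integers(0, 2, size=200)    # excitatory input point process
      z = rng.integers(0, 2, size=200)    # inhibitory input point process

      def predict(x, z):
          y = np.zeros_like(x)
          for n in range(3, len(x)):
              first_order = x[n - 1]                 # first-order term (lag 1)
              second_order = x[n - 2] & x[n - 3]     # interaction of lags 2 and 3
              drive = first_order | second_order     # OR combination of terms
              y[n] = drive & (1 - z[n - 1])          # AND_NOT inhibition at lag 1
          return y

      print(predict(x, z)[:20])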

  14. Multiloop Integral System Test (MIST): MIST Facility Functional Specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, T F; Koksal, C G; Moskal, T E

    1991-04-01

    The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST Functional Specification documents as-built design features, dimensions, instrumentation, and test approach. It also presents the scaling basis for the facility and serves to define the scope of work for the facility design and construction. 13 refs., 112 figs., 38 tabs.

  15. Pretest analysis document for Test S-NH-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Streit, J.E.; Owca, W.A.

    This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY3601 code for Semiscale MOD-2C Test S-NH-2. The test will simulate the transient that results from a shear in a small diameter penetration of a cold leg, equivalent to 2.1% of the cold leg flow area. The high pressure injection system is assumed to be inoperative throughout the transient. The recovery procedure consists of latching open both steam generator atmospheric dump valves and supplying both steam generators with auxiliary feedwater; the auxiliary feedwater system is assumed to be partially inoperative, so the auxiliary feedwater flow is degraded. Recovery will be initiated upon a peak cladding temperature of 811 K (1000°F). The test will be terminated when primary pressure has been reduced to the low pressure injection system setpoint of 1.38 MPa (200 psia). The calculated results indicate that the test objectives can be achieved and the proposed test scenario poses no threat to personnel or to plant integrity. 7 refs., 16 figs., 2 tabs.

  16. Current and anticipated uses of thermal hydraulic codes in Korea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyung-Doo; Chang, Won-Pyo

    1997-07-01

    In Korea, the current uses of thermal hydraulic codes fall into three areas. The first application is in designing both nuclear fuel and the NSSS. The codes have usually been introduced through technology transfer programs agreed between KAERI and the foreign vendors. Another area is supporting plant operations and licensing by the utility. The third category is research: here, assessments and applications to safety issue resolutions are the major activities, using best-estimate thermal hydraulic codes such as RELAP5/MOD3 and CATHARE2. Recently, KEPCO has planned to couple thermal hydraulic codes with a neutronics code for the design of the evolutionary type reactor by 2004. KAERI also plans to develop its own best-estimate thermal hydraulic code, although its application range differs from that of the code KEPCO is developing. Considering these activities, it is anticipated that use of a best-estimate hydraulic analysis code developed in Korea may be possible in the area of safety evaluation within 10 years.

  17. Effect of Huperzine A on Aβ-induced p65 of astrocyte in vitro.

    PubMed

    Xie, Lushuang; Jiang, Cen; Wang, Zhang; Yi, Xiaohong; Gong, Yuanyuan; Chen, Yunhui; Fu, Yan

    2016-12-01

    Alzheimer's disease (AD) is the most common cause of dementia. Its pathology is often accompanied by inflammatory action, and astrocytes play important roles in this process. RelA (p65) is a key messenger factor in the NF-κB pathway and has been reported to be highly expressed in astrocytes treated with Aβ. HupA, an alkaloid isolated from the Chinese herb Huperzia serrata, has been widely used to treat AD, and observations indicate that it improves the memory and cognitive capacity of AD patients. To reveal its molecular mechanisms of action on p65, we cultured astrocytes, built an Aβ-induced AD model, treated the astrocytes with HupA at different concentrations, assayed cell viability with MTT, and detected p65 expression by immunohistochemistry and PCR. Our results revealed that treatment with 10 μM Aβ1-42 for 24 h induced a significant increase of NF-κB in astrocytes, and that HupA significantly down-regulated the Aβ-induced p65 expression. This study suggests that HupA can regulate the NF-κB pathway to treat AD.

  18. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    ERIC Educational Resources Information Center

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  19. Automated forward mechanical modeling of wrinkle ridges on Mars

    NASA Astrophysics Data System (ADS)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling of the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing for faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files was created (Suite 2) after the best-fit model from Suite 1 was determined, in which fault parameters were varied with a smaller range and incremental changes, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter reduces the RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best-fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with the more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.

  20. Global nitrogen and phosphorus fertilizer use for agriculture production in the past half century: shifted hot spots and nutrient imbalance

    NASA Astrophysics Data System (ADS)

    Lu, Chaoqun; Tian, Hanqin

    2017-03-01

    In addition to enhancing agricultural productivity, synthetic nitrogen (N) and phosphorous (P) fertilizer application in croplands dramatically alters the global nutrient budget, water quality, greenhouse gas balance, and their feedback to the climate system. However, due to the lack of geospatial fertilizer input data, current Earth system and land surface modeling studies have to ignore or use oversimplified data (e.g., static, spatially uniform fertilizer use) to characterize agricultural N and P input over decadal or century-long periods. In this study, we therefore develop global time series gridded data of annual synthetic N and P fertilizer use rate in agricultural lands, matched with HYDE 3.2 historical land use maps, at a resolution of 0.5° × 0.5° latitude-longitude during 1961-2013. Our data indicate N and P fertilizer use rates on per unit cropland area increased by approximately 8 times and 3 times, respectively, since the year 1961 when IFA (International Fertilizer Industry Association) and FAO (Food and Agricultural Organization) surveys of country-level fertilizer input became available. Considering cropland expansion, the increase in total fertilizer consumption is even larger. Hotspots of agricultural N fertilizer application shifted from the US and western Europe in the 1960s to eastern Asia in the early 21st century. P fertilizer input shows a similar pattern with an additional current hotspot in Brazil. We found a global increase in fertilizer N / P ratio by 0.8 g N g-1 P per decade (p < 0.05) during 1961-2013, which may have an important global implication for human impacts on agroecosystem functions in the long run. Our data can serve as one of the critical input drivers for regional and global models to assess the impacts of nutrient enrichment on climate system, water resources, food security, etc. Datasets available at doi:10.1594/PANGAEA.863323.

  1. The Processing and Interpretation of Verb Phrase Ellipsis Constructions by Children at Normal and Slowed Speech Rates

    PubMed Central

    Callahan, Sarah M.; Walenski, Matthew; Love, Tracy

    2013-01-01

    Purpose To examine children’s comprehension of verb phrase (VP) ellipsis constructions in light of their automatic, online structural processing abilities and conscious, metalinguistic reflective skill. Method Forty-two children ages 5 through 12 years listened to VP ellipsis constructions involving the strict/sloppy ambiguity (e.g., “The janitor untangled himself from the rope and the fireman in the elementary school did too after the accident.”) in which the ellipsis phrase (“did too”) had 2 interpretations: (a) strict (“untangled the janitor”) and (b) sloppy (“untangled the fireman”). We examined these sentences at a normal speech rate with an online cross-modal picture priming task (n = 14) and an offline sentence–picture matching task (n = 11). Both tasks were also given with slowed speech input (n = 17). Results Children showed priming for both the strict and sloppy interpretations at a normal speech rate but only for the strict interpretation with slowed input. Offline, children displayed an adultlike preference for the sloppy interpretation with normal-rate input but a divergent pattern with slowed speech. Conclusions Our results suggest that children and adults rely on a hybrid syntax-discourse model for the online comprehension and offline interpretation of VP ellipsis constructions. This model incorporates a temporally sensitive syntactic process of VP reconstruction (disrupted with slow input) and a temporally protracted discourse effect attributed to parallelism (preserved with slow input). PMID:22223886

  2. Application of a Two-Dimensional Reservoir Water-Quality Model of Beaver Lake, Arkansas, for the Evaluation of Simulated Changes in Input Water Quality, 2001-2003

    USGS Publications Warehouse

    Galloway, Joel M.; Green, W. Reed

    2007-01-01

    Beaver Lake is considered a primary watershed of concern in the State of Arkansas. As such, information is needed to assess water quality, especially nutrient enrichment, nutrient-algal relations, turbidity, and sediment issues within the system. A previously calibrated two-dimensional, laterally averaged model of hydrodynamics and water quality was used to evaluate the effects of changes in input nutrient and sediment concentrations on the water quality of the reservoir for the period of April 2001 to April 2003. Nitrogen and phosphorus concentrations were increased and decreased and tested independently and simultaneously to examine the nutrient concentrations and algal response in the reservoir. Suspended-solids concentrations were increased and decreased to identify how solids are distributed in the reservoir, which can contribute to decreased water clarity. The Beaver Lake model also was evaluated using a conservative tracer. A conservative tracer was applied at various locations in the reservoir model to observe the fate and transport and how the reservoir might react to the introduction of a conservative substance, or a worst-case spill scenario. In particular, tracer concentrations were evaluated at the locations of the four public water-supply intakes in Beaver Lake. Nutrient concentrations in Beaver Lake increased proportionally with increases in loads from the three main tributaries. An increase of 10 times the calibrated daily input nitrogen and phosphorus in the three main tributaries resulted in daily mean total nitrogen concentrations in the epilimnion that were nearly 4 times greater than the calibration concentrations at site L2 and more than 2 times greater than the calibrated concentrations at site L5. Increases in daily input nitrogen in the three main tributaries independently did not correspond to substantial increases in concentrations of nitrogen in Beaver Lake. The greatest proportional increase in phosphorus occurred in the epilimnion at sites L3 and L4 and the least increase occurred at sites L2 and L5 when calibrated daily input phosphorus concentrations were increased. When orthophosphorus was increased in all three tributaries simultaneously by a factor of 10, daily mean orthophosphorus concentrations in the epilimnion of the reservoir were almost 11 times greater than the calibrated concentrations at sites L2 and L5, and 15 times greater in the epilimnion of the reservoir at sites L3 and L4. Phosphorus concentrations in Beaver Lake increased less when nitrogen and phosphorus were increased simultaneously than when phosphorus was increased independently. The greatest simulated increase in algal biomass (represented as chlorophyll a) occurred when nitrogen and phosphorus were increased simultaneously in the three main tributaries. On average, the chlorophyll a values only increased less than 1 microgram per liter when concentrations of nitrogen or phosphorous were increased independently by a factor of 10 at all three tributaries. In comparison, when nitrogen and phosphorus were increased simultaneously by a factor of 10 for all three tributaries, the chlorophyll a concentration increased by about 10 micrograms per liter on average, with a maximum increase of about 57 micrograms per liter in the epilimnion at site L3 in Beaver Lake. Changes in algal biomass with changes in input nitrogen and phosphorus were variable through time in the Beaver Lake model from April 2001 to April 2003.
When calibrated daily input nitrogen and phosphorus concentrations were increased simultaneously for the three main tributaries, the increase in chlorophyll a concentration was the greatest in late spring and summer of 2002. Changes in calibrated daily input inorganic suspended solids concentrations were examined because of the effect they may have on water clarity in Beaver Lake. The increase in total suspended solids was greatest in the hypolimnion at the upstream end of Beaver Lake, and negligible changes

  3. Altered Excitability and Local Connectivity of mPFC-PAG Neurons in a Mouse Model of Neuropathic Pain.

    PubMed

    Cheriyan, John; Sheets, Patrick L

    2018-05-16

    The medial prefrontal cortex (mPFC) plays a major role in both sensory and affective aspects of pain. There is extensive evidence that chronic pain produces functional changes within the mPFC. However, our understanding of local circuit changes to defined subpopulations of mPFC neurons in chronic pain models remains unclear. A major subpopulation of mPFC neurons project to the periaqueductal gray (PAG), which is a key midbrain structure involved in endogenous pain suppression and facilitation. Here, we used laser scanning photostimulation of caged glutamate to map cortical circuits of retrogradely labeled cortico-PAG (CP) neurons in layer 5 (L5) of mPFC in brain slices prepared from male mice having undergone chronic constriction injury (CCI) of the sciatic nerve. Whole-cell recordings revealed a significant reduction in excitability for L5 CP neurons contralateral to CCI in the prelimbic (PL), but not infralimbic (IL), region of mPFC. Circuit mapping showed that excitatory inputs to L5 CP neurons in both PL and IL arose primarily from layer 2/3 (L2/3) and were significantly reduced in CCI mice. Glutamate stimulation of L2/3 and L5 elicited inhibitory inputs to CP neurons in both PL and IL, but only L2/3 input was significantly reduced in CP neurons of CCI mice. We also observed significant reduction in excitability and L2/3 inhibitory input to CP neurons ipsilateral to CCI. These results demonstrating region and laminar specific changes to mPFC-PAG neurons suggest that a unilateral CCI bilaterally alters cortical circuits upstream of the endogenous analgesic network, which may contribute to persistence of chronic pain. SIGNIFICANCE STATEMENT Chronic pain is a significant unresolved medical problem that is refractory to traditional analgesics and can negatively affect emotional health. The role of central circuits in mediating the persistent nature of chronic pain remains unclear. Local circuits within the medial prefrontal cortex (mPFC) process ascending pain inputs and can modulate endogenous analgesia via direct projections to the periaqueductal gray (PAG). However, the mechanisms by which chronic pain alters intracortical circuitry of mPFC-PAG neurons are unknown. Here, we report specific changes to local circuits of mPFC-PAG neurons in mice displaying chronic pain behavior after nerve injury. These findings provide evidence for a neural mechanism by which chronic pain disrupts the descending analgesic system via functional changes to cortical circuits.

  4. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.

  5. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    NASA Astrophysics Data System (ADS)

    Rinker, Jennifer M.

    2016-09-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and the convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and nonstationarity parameter is negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
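
    The two-stage workflow (calibrate a response surface, then extract Sobol indices cheaply from it) can be sketched with a synthetic stand-in for the aeroelastic simulations:

      # Quadratic response surface + pick-freeze first-order Sobol estimator.
      import numpy as np

      rng = np.random.default_rng(5)

      def load_model(u):                  # synthetic stand-in for the turbine load output
          return 3.0 * u[:, 0] + u[:, 1] ** 2 + 0.1 * u[:, 2]

      X = rng.uniform(size=(300, 4))      # training inputs (4 turbulence parameters)
      y = load_model(X) + rng.normal(0.0, 0.01, 300)

      def basis(u):
          return np.hstack([np.ones((len(u), 1)), u, u ** 2])

      coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

      def surface(u):                     # calibrated response surface
          return basis(u) @ coef

      N = 20000
      A, B = rng.uniform(size=(N, 4)), rng.uniform(size=(N, 4))
      fA, fB = surface(A), surface(B)
      var_y = fA.var()
      for i in range(4):
          ABi = A.copy()
          ABi[:, i] = B[:, i]             # "freeze" all inputs except input i
          S_i = np.mean(fB * (surface(ABi) - fA)) / var_y
          print(f"S_{i} = {S_i:.3f}")     # input 0 dominates; inputs 2 and 3 are negligible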

  6. Control of Turing patterns and their usage as sensors, memory arrays, and logic gates

    NASA Astrophysics Data System (ADS)

    Muzika, František; Schreiber, Igor

    2013-10-01

    We study a model system of three diffusively coupled reaction cells arranged in a linear array that display Turing patterns with special focus on the case of equal coupling strength for all components. As a suitable model reaction we consider a two-variable core model of glycolysis. Using numerical continuation and bifurcation techniques we analyze the dependence of the system's steady states on varying rate coefficient of the recycling step while the coupling coefficients of the inhibitor and activator are fixed and set at the ratios 100:1, 1:1, and 4:5. We show that stable Turing patterns occur at all three ratios but, as expected, spontaneous transition from the spatially uniform steady state to the spatially nonuniform Turing patterns occurs only in the first case. The other two cases possess multiple Turing patterns, which are stabilized by secondary bifurcations and coexist with stable uniform periodic oscillations. For the 1:1 ratio we examine modular spatiotemporal perturbations, which allow for controllable switching between the uniform oscillations and various Turing patterns. Such modular perturbations are then used to construct chemical computing devices utilizing the multiple Turing patterns. By classifying various responses we propose: (a) a single-input resettable sensor capable of reading certain value of concentration, (b) two-input and three-input memory arrays capable of storing logic information, (c) three-input, three-output logic gates performing combinations of logical functions OR, XOR, AND, and NAND.
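
    The three-cell setup is straightforward to reproduce schematically. The kinetics below are a generic Selkov-type glycolytic oscillator standing in for the paper's core model, with an inhibitor-to-activator coupling ratio of 100:1 and purely illustrative parameter values:

      # Three diffusively coupled cells in a line, no-flux ends.
      import numpy as np

      a, b = 0.08, 0.6
      d_act, d_inh = 0.001, 0.1           # activator:inhibitor coupling ratio 1:100

      def kinetics(u, v):
          du = -u + a * v + u**2 * v      # activator
          dv = b - a * v - u**2 * v       # substrate (fast-coupled inhibitor role)
          return du, dv

      rng = np.random.default_rng(6)
      u = b + rng.normal(0.0, 0.01, 3)    # uniform steady state u* = b, perturbed
      v = np.full(3, b / (a + b**2))      # v* = b / (a + b^2)

      dt = 0.01
      for _ in range(200_000):
          du, dv = kinetics(u, v)
          lap_u = np.array([u[1] - u[0], u[0] - 2*u[1] + u[2], u[1] - u[2]])
          lap_v = np.array([v[1] - v[0], v[0] - 2*v[1] + v[2], v[1] - v[2]])
          u = u + dt * (du + d_act * lap_u)
          v = v + dt * (dv + d_inh * lap_v)

      # A spatially nonuniform steady profile indicates a Turing pattern; for
      # some parameters the array settles on uniform oscillations instead,
      # matching the coexistence the paper reports.
      print(u, v)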

  7. Microbial Communities Model Parameter Calculation for TSPA/SR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c) with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs Free Energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second order regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported on Table 32 in section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  8. Wrapping Python around MODFLOW/MT3DMS based groundwater models

    NASA Astrophysics Data System (ADS)

    Post, V.

    2008-12-01

    Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially-distributed model parameters. The model output consists of a variety data such as heads, fluxes and concentrations. Typically all files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially-distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data, which is not constrained by limitations of third-party products.
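
    As a flavor of what such routines look like, here is a hedged numpy-only sketch of reading a MODFLOW binary head file. It assumes the common single-precision layout without Fortran record markers; actual builds can differ (double precision, record markers), so treat it as illustrative rather than a drop-in reader:

      import numpy as np

      header_dtype = np.dtype([
          ("kstp", "<i4"), ("kper", "<i4"),      # time step and stress period
          ("pertim", "<f4"), ("totim", "<f4"),   # elapsed times
          ("text", "S16"),                       # label, e.g. b"            HEAD"
          ("ncol", "<i4"), ("nrow", "<i4"), ("ilay", "<i4"),
      ])

      def read_heads(path):
          """Yield (totim, layer, 2-D head array) for every record in the file."""
          with open(path, "rb") as f:
              while True:
                  raw = f.read(header_dtype.itemsize)
                  if len(raw) < header_dtype.itemsize:
                      break
                  hdr = np.frombuffer(raw, dtype=header_dtype)[0]
                  n = int(hdr["ncol"]) * int(hdr["nrow"])
                  data = np.fromfile(f, dtype="<f4", count=n)
                  yield (float(hdr["totim"]), int(hdr["ilay"]),
                         data.reshape(int(hdr["nrow"]), int(hdr["ncol"])))

      for totim, layer, heads in read_heads("model.hds"):   # hypothetical file name
          print(totim, layer, heads.mean())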

  9. The EMEP MSC-W chemical transport model - technical description

    NASA Astrophysics Data System (ADS)

    Simpson, D.; Benedictow, A.; Berge, H.; Bergström, R.; Emberson, L. D.; Fagerli, H.; Flechard, C. R.; Hayman, G. D.; Gauss, M.; Jonson, J. E.; Jenkin, M. E.; Nyíri, A.; Richter, C.; Semeena, V. S.; Tsyro, S.; Tuovinen, J.-P.; Valdebenito, Á.; Wind, P.

    2012-08-01

    The Meteorological Synthesizing Centre-West (MSC-W) of the European Monitoring and Evaluation Programme (EMEP) has been performing model calculations in support of the Convention on Long Range Transboundary Air Pollution (CLRTAP) for more than 30 years. The EMEP MSC-W chemical transport model is still one of the key tools within European air pollution policy assessments. Traditionally, the model has covered all of Europe with a resolution of about 50 km × 50 km, and extending vertically from ground level to the tropopause (100 hPa). The model has changed extensively over the last ten years, however, with flexible processing of chemical schemes, meteorological inputs, and with nesting capability: the code is now applied on scales ranging from local (ca. 5 km grid size) to global (with 1 degree resolution). The model is used to simulate photo-oxidants and both inorganic and organic aerosols. In 2008 the EMEP model was released for the first time as public domain code, along with all required input data for model runs for one year. The second release of the EMEP MSC-W model became available in mid 2011, and a new release is targeted for summer 2012. This publication is intended to document this third release of the EMEP MSC-W model. The model formulations are given, along with details of input data-sets which are used, and a brief background on some of the choices made in the formulation is presented. The model code itself is available at www.emep.int, along with the data required to run for a full year over Europe.

  10. The role of size of input box, location of input box, input method and display size in Chinese handwriting performance and preference on mobile devices.

    PubMed

    Chen, Zhe; Rau, Pei-Luen Patrick

    2017-03-01

    This study presented two experiments on Chinese handwriting performance (time, accuracy, number of protruding strokes, and number of rewritings) and subjective ratings (mental workload, satisfaction, and preference) on mobile devices. Experiment 1 evaluated the effects of size of the input box, input method, and display size on Chinese handwriting performance and preference. It indicated that the optimal input sizes were 30.8 × 30.8 mm, 46.6 × 46.6 mm, 58.9 × 58.9 mm and 84.6 × 84.6 mm for devices with 3.5-inch, 5.5-inch, 7.0-inch and 9.7-inch display sizes, respectively. Experiment 2 demonstrated significant effects of location of the input box, input method, and display size on Chinese handwriting performance and subjective ratings. It was suggested that the optimal location was central regardless of display size and input method.

  11. Finite difference time domain grid generation from AMC helicopter models

    NASA Technical Reports Server (NTRS)

    Cravey, Robin L.

    1992-01-01

    A simple technique is presented which forms a cubic grid model of a helicopter from an Aircraft Modeling Code (AMC) input file. The AMC input file defines the helicopter fuselage as a series of polygonal cross sections. The cubic grid model is used as an input to a Finite Difference Time Domain (FDTD) code to obtain predictions of antenna performance on a generic helicopter model. The predictions compare reasonably well with measured data.
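
    The core gridding step can be illustrated with a ray-casting point-in-polygon test applied to one cross section; the geometry below is a made-up example, and the AMC file format itself is not modeled:

      import numpy as np

      def point_in_polygon(px, py, verts):
          """Standard even-odd ray-casting test against a polygon's vertices."""
          inside = False
          n = len(verts)
          for i in range(n):
              (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % n]
              if (y1 > py) != (y2 > py):
                  x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                  if px < x_cross:
                      inside = not inside
          return inside

      def rasterize_section(verts, grid_n=20, extent=2.0):
          """Mark grid cells whose centers fall inside one polygonal cross section."""
          cell = 2.0 * extent / grid_n
          centers = -extent + cell * (np.arange(grid_n) + 0.5)
          return np.array([[point_in_polygon(x, y, verts) for x in centers]
                           for y in centers])

      section = [(-1.0, -0.5), (1.0, -0.5), (1.2, 0.5), (-1.2, 0.5)]  # one cross section
      filled = rasterize_section(section)
      print(filled.sum(), "cells filled in this slice")   # stacking slices gives the 3-D grid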

  12. Evaluating coastal landscape response to sea-level rise in the northeastern United States: approach and methods

    USGS Publications Warehouse

    Lentz, Erika E.; Stippa, Sawyer R.; Thieler, E. Robert; Plant, Nathaniel G.; Gesch, Dean B.; Horton, Radley M.

    2014-02-13

    The U.S. Geological Survey is examining effects of future sea-level rise on the coastal landscape from Maine to Virginia by producing spatially explicit, probabilistic predictions using sea-level projections, vertical land movement rates (due to isostacy), elevation data, and land-cover data. Sea-level-rise scenarios used as model inputs are generated by using multiple sources of information, including Coupled Model Intercomparison Project Phase 5 models following representative concentration pathways 4.5 and 8.5 in the Intergovernmental Panel on Climate Change Fifth Assessment Report. A Bayesian network is used to develop a predictive coastal response model that integrates the sea-level, elevation, and land-cover data with assigned probabilities that account for interactions with coastal geomorphology as well as the corresponding ecological and societal systems it supports. The effects of sea-level rise are presented as (1) level of landscape submergence and (2) coastal response type characterized as either static (that is, inundation) or dynamic (that is, landform or landscape change). Results are produced at a spatial scale of 30 meters for four decades (the 2020s, 2030s, 2050s, and 2080s). The probabilistic predictions can be applied to landscape management decisions based on sea-level-rise effects as well as on assessments of the prediction uncertainty and need for improved data or fundamental understanding. This report describes the methods used to produce predictions, including information on input datasets; the modeling approach; model outputs; data-quality-control procedures; and information on how to access the data and metadata online.
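
    A toy version of the Bayesian-network combination step may clarify the idea: discrete conditional probabilities tie a sea-level scenario and a land-cover class to a coastal response, then marginalize over scenarios. All probabilities below are invented for illustration, not USGS values:

      # P(dynamic response) = sum over scenarios of P(scenario) * P(dynamic | scenario, cover)
      p_slr = {"RCP4.5": 0.5, "RCP8.5": 0.5}              # assumed scenario weights
      p_dynamic = {("RCP4.5", "marsh"): 0.7, ("RCP8.5", "marsh"): 0.8,
                   ("RCP4.5", "developed"): 0.2, ("RCP8.5", "developed"): 0.3}

      def prob_dynamic(cover):
          return sum(p_slr[s] * p_dynamic[(s, cover)] for s in p_slr)

      print(prob_dynamic("marsh"))      # 0.75: a dynamic (landform change) response is likely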

  13. Evaluating Coastal Landscape Response to Sea-Level Rise in the Northeastern United States - Approach and Methods

    NASA Technical Reports Server (NTRS)

    Lentz, Erika E.; Stippa, Sawyer R.; Thieler, E. Robert; Plant, Nathaniel G.; Gesch, Dean B.; Horton, Radley M.

    2015-01-01

    The U.S. Geological Survey is examining effects of future sea-level rise on the coastal landscape from Maine to Virginia by producing spatially explicit, probabilistic predictions using sea-level projections, vertical land movement rates (due to isostacy), elevation data, and land-cover data. Sea-level-rise scenarios used as model inputs are generated by using multiple sources of information, including Coupled Model Intercomparison Project Phase 5 models following representative concentration pathways 4.5 and 8.5 in the Intergovernmental Panel on Climate Change Fifth Assessment Report. A Bayesian network is used to develop a predictive coastal response model that integrates the sea-level, elevation, and land-cover data with assigned probabilities that account for interactions with coastal geomorphology as well as the corresponding ecological and societal systems it supports. The effects of sea-level rise are presented as (1) level of landscape submergence and (2) coastal response type characterized as either static (that is, inundation) or dynamic (that is, landform or landscape change). Results are produced at a spatial scale of 30 meters for four decades (the 2020s, 2030s, 2050s, and 2080s). The probabilistic predictions can be applied to landscape management decisions based on sea-level-rise effects as well as on assessments of the prediction uncertainty and need for improved data or fundamental understanding. This report describes the methods used to produce predictions, including information on input datasets; the modeling approach; model outputs; data-quality-control procedures; and information on how to access the data and metadata online.

  14. Critical role of non-muscle myosin light chain kinase in thrombin-induced endothelial cell inflammation and lung PMN infiltration.

    PubMed

    Fazal, Fabeha; Bijli, Kaiser M; Murrill, Matthew; Leonard, Antony; Minhajuddin, Mohammad; Anwar, Khandaker N; Finkelstein, Jacob N; Watterson, D Martin; Rahman, Arshad

    2013-01-01

    The pathogenesis of acute lung injury (ALI) involves bidirectional cooperation and close interaction between inflammatory and coagulation pathways. A key molecule linking coagulation and inflammation is the procoagulant thrombin, a serine protease whose concentration is elevated in plasma and lavage fluids of patients with ALI and acute respiratory distress syndrome (ARDS). However, little is known about the mechanism by which thrombin contributes to lung inflammatory response. In this study, we developed a new mouse model that permits investigation of lung inflammation associated with intravascular coagulation. Using this mouse model and in vitro approaches, we addressed the role of non-muscle myosin light chain kinase (nmMLCK) in thrombin-induced endothelial cell (EC) inflammation and lung neutrophil (PMN) infiltration. Our in vitro experiments revealed a key role of nmMLCK in ICAM-1 expression by its ability to control nuclear translocation and transcriptional capacity of RelA/p65 in EC. When subjected to intraperitoneal thrombin challenge, wild type mice showed a marked increase in lung PMN infiltration via expression of ICAM-1. However, these responses were markedly attenuated in mice deficient in nmMLCK. These results provide mechanistic insight into lung inflammatory response associated with intravascular coagulation and identify nmMLCK as a critical target for modulation of lung inflammation.

  15. Critical Role of Non-Muscle Myosin Light Chain Kinase in Thrombin-Induced Endothelial Cell Inflammation and Lung PMN Infiltration

    PubMed Central

    Fazal, Fabeha; Bijli, Kaiser M.; Murrill, Matthew; Leonard, Antony; Minhajuddin, Mohammad; Anwar, Khandaker N.; Finkelstein, Jacob N.; Watterson, D. Martin; Rahman, Arshad

    2013-01-01

    The pathogenesis of acute lung injury (ALI) involves bidirectional cooperation and close interaction between inflammatory and coagulation pathways. A key molecule linking coagulation and inflammation is the procoagulant thrombin, a serine protease whose concentration is elevated in plasma and lavage fluids of patients with ALI and acute respiratory distress syndrome (ARDS). However, little is known about the mechanism by which thrombin contributes to lung inflammatory response. In this study, we developed a new mouse model that permits investigation of lung inflammation associated with intravascular coagulation. Using this mouse model and in vitro approaches, we addressed the role of non-muscle myosin light chain kinase (nmMLCK) in thrombin-induced endothelial cell (EC) inflammation and lung neutrophil (PMN) infiltration. Our in vitro experiments revealed a key role of nmMLCK in ICAM-1 expression by its ability to control nuclear translocation and transcriptional capacity of RelA/p65 in EC. When subjected to intraperitoneal thrombin challenge, wild type mice showed a marked increase in lung PMN infiltration via expression of ICAM-1. However, these responses were markedly attenuated in mice deficient in nmMLCK. These results provide mechanistic insight into lung inflammatory response associated with intravascular coagulation and identify nmMLCK as a critical target for modulation of lung inflammation. PMID:23555849

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. A. Wasiolek

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  17. NEAMS Update. Quarterly Report for October - December 2011.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, K.

    2012-02-16

    The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011; a detailed discussion of this new simulation tool is given; (2) A coolant sub-channel model and a preliminary UO2 smeared-cracking model were implemented in BISON, the single-pin fuel code; more information on how these models were developed and benchmarked is given; (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations, and a discussion of the significance of this advance is given; (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given; (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP. This is a new initiative; (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability; (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed. This important bridge between subcontinuum and continuum phenomena is discussed; (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm; (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed. An explanation of the difficulty of this simulation is given; (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron; (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report; (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE). More details on the planned NEAMS computing environment are given; and (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.

  18. The MSFC Solar Activity Future Estimation (MSAFE) Model

    NASA Technical Reports Server (NTRS)

    Suggs, Ron

    2017-01-01

    The Natural Environments Branch of the Engineering Directorate at Marshall Space Flight Center (MSFC) provides solar cycle forecasts for NASA space flight programs and the aerospace community. These forecasts provide future statistical estimates of sunspot number, solar radio 10.7 cm flux (F10.7), and the geomagnetic planetary index, Ap, for input to various space environment models. For example, many thermosphere density computer models used in spacecraft operations, orbital lifetime analysis, and the planning of future spacecraft missions require the F10.7 and Ap as inputs. The solar forecast is updated each month by executing MSAFE using historical data and the latest month's observed solar indices to provide estimates for the balance of the current solar cycle. The forecasted solar indices represent 13-month smoothed values, consisting of a best-estimate value stated as the 50th percentile along with approximate +/- 2 sigma bounds stated as the 95th and 5th percentile values. This presentation will give an overview of the MSAFE model and the forecast for the current solar cycle.
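
    For readers unfamiliar with the convention, the short Python sketch below illustrates the standard 13-month smoothing of monthly solar indices referenced above (a centered 13-month window with half weight on the two end months, total weight 12). The sine-shaped F10.7 series is hypothetical, and MSAFE's statistical machinery is of course more involved than this illustration.

      # Minimal sketch of 13-month smoothing of a monthly solar index;
      # the input series here is hypothetical, not real F10.7 data.
      import numpy as np

      def smooth13(monthly):
          # Half weight on the end months, full weight on the middle 11.
          w = np.array([0.5] + [1.0] * 11 + [0.5]) / 12.0
          return np.convolve(monthly, w, mode="valid")  # defined 6 months in from each end

      f107 = 70 + 80 * np.abs(np.sin(np.arange(132) * np.pi / 132))  # hypothetical cycle
      print("smoothed F10.7, first 3 values:", smooth13(f107)[:3].round(1))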

  19. Solubility of organic compounds in octanol: Improved predictions based on the geometrical fragment approach.

    PubMed

    Mathieu, Didier

    2017-09-01

    Two new models are introduced to predict the solubility of chemicals in octanol (S_oct), taking advantage of the extensive character of log(S_oct) through a decomposition of molecules into so-called geometrical fragments (GF). They are extensively validated and their compliance with regulatory requirements is demonstrated. The first model requires just a molecular formula as input. Despite its extreme simplicity, it performs as well as an advanced random forest model involving 86 descriptors, with a root mean square error (RMSE) of 0.64 log units for an external test set of 100 molecules. For the second one, which requires the melting point T_m as input, introducing GF descriptors reduces the RMSE from about 0.7 to <0.5 log units, a performance that could previously be obtained only through the use of Abraham descriptors. A script is provided for easy application of the models, taking into account the limits of their applicability domains. Copyright © 2017 Elsevier Ltd. All rights reserved.
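
    As an illustration of the additive form such fragment models take, the sketch below fits per-fragment contributions by least squares so that log(S_oct) is a sum of fragment counts times contributions. The fragment definitions and all numbers are hypothetical stand-ins, not the paper's GF scheme.

      # Minimal sketch of an additive fragment model for log(S_oct);
      # fragment counts and solubilities below are hypothetical.
      import numpy as np

      # Rows: molecules; columns: counts of each (hypothetical) fragment type.
      X = np.array([
          [2, 1, 0],   # molecule A
          [1, 0, 2],   # molecule B
          [3, 2, 1],   # molecule C
          [0, 1, 1],   # molecule D
      ])
      log_s = np.array([-1.2, -2.5, -0.8, -3.1])  # hypothetical log(S_oct) values

      # Extensive property: log(S_oct) = sum_i n_i * c_i over fragment types i.
      contribs, *_ = np.linalg.lstsq(X, log_s, rcond=None)
      print("fragment contributions:", contribs.round(3))
      print("predicted log(S_oct):", (X @ contribs).round(3))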

  20. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise Paul

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated. AGR-2 input data added.

  1. Atmospheric Nitrogen Inputs to the Ocean and their Impact

    NASA Astrophysics Data System (ADS)

    Jickells, Tim D.

    2016-04-01

    Atmospheric Nitrogen Inputs to the Ocean and their Impact T Jickells (1), K. Altieri (2), D. Capone (3), E. Buitenhuis (1), R. Duce (4), F. Dentener (5), K. Fennel (6), J. Galloway (7), M. Kanakidou (8), J. LaRoche (9), K. Lee (10), P. Liss (1), J. Middelburg (11), K. Moore (12), S. Nickovic (13), G. Okin (14), A. Oschlies (15), J. Prospero (16), M. Sarin (17), S. Seitzinger (18), J. Sharples (19), P. Suntharalingam (1), M. Uematsu (20), L. Zamora (21) Atmospheric nitrogen inputs have been identified as an important source of nitrogen to the oceans, one that has increased greatly as a result of human activity. The significance of atmospheric inputs for ocean biogeochemistry was evaluated in a seminal paper by Duce et al., 2008 (Science 320, 893-7). In this presentation we will update the Duce et al. 2008 study estimating the impact of atmospheric deposition on the oceans. We will summarise the latest model estimates of total atmospheric nitrogen deposition to the ocean, their chemical form (nitrate, ammonium and organic nitrogen) and spatial distribution from the TM4 model. The model estimates are somewhat smaller than the Duce et al. estimate, but with similar spatial distributions. We will compare these flux estimates with a new estimate of the impact of fluvial nitrogen inputs on the open ocean (Sharples, submitted), which estimates some transfer of fluvial nitrogen to the open ocean, particularly at low latitudes, compared to the complete trapping of fluvial inputs on the continental shelf assumed by Duce et al. We will then estimate the impact of atmospheric deposition on ocean primary productivity and N2O emissions from the oceans using the PlankTOM10 model. The impacts of atmospheric deposition on ocean productivity that we estimate here are smaller than those predicted by Duce et al., consistent with the smaller atmospheric deposition estimates. However, the atmospheric input is still larger than the estimated fluvial inputs to the open ocean, even with the increased transport across the shelf to the open ocean from low-latitude fluvial systems identified here. 1. School of Environmental Science University of East Anglia UK 2. Energy Research Centre University of Cape Town SA 3. Department of Biological Sciences University of S California USA 4. Departments of Oceanography and Atmospheric Sciences Texas A&M University USA 5. JRC Ispra Italy 6. Department of Oceanography Dalhousie University Canada 7. Department of Environmental Sciences U. Virginia USA 8. Department of Chemistry, University of Crete, Greece 9. Department of Biology Dalhousie University, Canada 10. School of Environmental Science and Engineering Pohang University S Korea. 11. Faculty of Geosciences University of Utrecht Netherlands 12. Department of Earth System Science University of California at Irvine USA 13. WMO Geneva 14. Department of Geography University of California USA 15. GEOMAR Kiel Germany 16. Department of Atmospheric Sciences, University of Miami, USA 17. Geosciences Division at Physical Research Laboratory, Ahmedabad, India 18. Department of Environmental Studies, University of Victoria, Canada 19. School of Environmental Sciences, U Liverpool UK 20. Center for International Collaboration, Atmosphere and Ocean Research Institute, The University of Tokyo Japan 21. Oak Ridge Associated Universities USA

  2. The Integrated Medical Model: Statistical Forecasting of Risks to Crew Health and Mission Success

    NASA Technical Reports Server (NTRS)

    Fitts, M. A.; Kerstman, E.; Butler, D. J.; Walton, M. E.; Minard, C. G.; Saile, L. G.; Toy, S.; Myers, J.

    2008-01-01

    The Integrated Medical Model (IMM) helps capture and use organizational knowledge across the space medicine, training, operations, engineering, and research domains. The IMM uses this domain knowledge in the context of a mission and crew profile to forecast crew health and mission success risks. The IMM is most helpful in comparing the risk of two or more mission profiles, not as a tool for predicting absolute risk. The process of building the IMM adheres to Probabilistic Risk Assessment (PRA) techniques described in NASA Procedural Requirement (NPR) 8705.5, and uses current evidence-based information to establish a defensible position for making decisions that help ensure crew health and mission success. The IMM quantitatively describes the following input parameters: 1) medical conditions and likelihood, 2) mission duration, 3) vehicle environment, 4) crew attributes (e.g., age, sex), 5) crew activities (e.g., EVAs, lunar excursions), 6) diagnosis and treatment protocols (e.g., medical equipment, consumables, pharmaceuticals), and 7) Crew Medical Officer (CMO) training effectiveness. It is worth reiterating that the IMM uses the data sets above as inputs. Many other risk management efforts stop at determining only likelihood. The IMM is unique in that it models not only likelihood, but risk mitigations, as well as subsequent clinical outcomes based on those mitigations. Once the mathematical relationships among the above parameters are established, the IMM uses a Monte Carlo simulation technique (a random sampling of the inputs as described by their statistical distribution) to determine the probable outcomes. Because the IMM is a stochastic model (i.e., the input parameters are represented by various statistical distributions depending on the data type), when the mission is simulated 10-50,000 times with a given set of medical capabilities (risk mitigations), a prediction of the most probable outcomes can be generated. For each mission, the IMM tracks which conditions occurred and decrements the pharmaceuticals and supplies required to diagnose and treat these medical conditions. If supplies are depleted, then the medical condition goes untreated, and crew and mission risk increase. The IMM currently models approximately 30 medical conditions. By the end of FY2008, the IMM will be modeling over 100 medical conditions, approximately 60 of which have been recorded to have occurred during short and long space missions.
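
    The simulate-and-decrement loop described above is easy to illustrate. The Python sketch below is a minimal toy version under hypothetical conditions, probabilities, and supply levels; it is not the IMM's actual condition set or evidence base.

      # Minimal sketch of Monte Carlo supply depletion; all numbers hypothetical.
      import random

      CONDITIONS = {                    # per-mission occurrence probability, supply units used
          "condition A": (0.60, 1),
          "condition B": (0.30, 2),
          "condition C": (0.20, 1),
      }
      SUPPLY = 3                        # hypothetical units of a shared consumable
      N_MISSIONS = 50_000

      missions_with_untreated = 0
      for _ in range(N_MISSIONS):
          stock = SUPPLY
          untreated = False
          for prob, units in CONDITIONS.values():
              if random.random() < prob:        # the condition occurs on this mission
                  if stock >= units:
                      stock -= units            # treat it and decrement supplies
                  else:
                      untreated = True          # supplies depleted: goes untreated
          missions_with_untreated += untreated

      print("P(some condition untreated) ~", missions_with_untreated / N_MISSIONS)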

  3. Real-Time Identification of Smoldering and Flaming Combustion Phases in Forest Using a Wireless Sensor Network-Based Multi-Sensor System and Artificial Neural Network

    PubMed Central

    Yan, Xiaofei; Cheng, Hong; Zhao, Yandong; Yu, Wenhua; Huang, Huan; Zheng, Xiaoliang

    2016-01-01

    Diverse sensing techniques have been developed and combined with machine learning methods for forest fire detection, but none of them has addressed identifying smoldering and flaming combustion phases. This study attempts to identify different combustion phases in real time using a developed wireless sensor network (WSN)-based multi-sensor system and an artificial neural network (ANN). Sensors (CO, CO2, smoke, air temperature and relative humidity) were integrated into one node of the WSN. An experiment was conducted using burning residual forest materials to test the responses of each node under no-combustion, smoldering-dominated and flaming-dominated conditions. The results showed that the five sensors have reasonable responses to artificial forest fire. To reduce the cost of the nodes, the smoke, CO2 and temperature sensors were selected through correlation analysis. To achieve a higher identification rate, an ANN model was built and trained with inputs of four sensor groups: smoke; smoke and CO2; smoke and temperature; and smoke, CO2 and temperature. The model test results showed that multi-sensor input yielded higher predicting accuracy (≥82.5%) than single-sensor input (50.9%–92.5%). Based on these results, it is possible to reduce cost while keeping a relatively high fire identification rate, and potential applications of the system can be tested under real forest conditions in the future. PMID:27527175

  4. Analysis of hybrid electric/thermofluidic inputs for wet shape memory alloy actuators

    NASA Astrophysics Data System (ADS)

    Flemming, Leslie; Mascaro, Stephen

    2013-01-01

    A wet shape memory alloy (SMA) actuator is characterized by an SMA wire embedded within a compliant fluid-filled tube. Heating and cooling of the SMA wire produces a linear contraction and extension of the wire. Thermal energy can be transferred to and from the wire using combinations of resistive heating and free/forced convection. This paper analyzes the speed and efficiency of a simulated wet SMA actuator using a variety of control strategies involving different combinations of electrical and thermofluidic inputs. A computational fluid dynamics (CFD) model is used in conjunction with a temperature-strain model of the SMA wire to simulate the thermal response of the wire and compute strains, contraction/extension times and efficiency. The simulations produce cycle rates of up to 5 Hz for electrical heating and fluidic cooling, and up to 2 Hz for fluidic heating and cooling. The simulated results demonstrate efficiencies up to 0.5% for electric heating and up to 0.2% for fluidic heating. Using both electric and fluidic inputs concurrently improves the speed and efficiency of the actuator and allows for the actuator to remain contracted without continually delivering energy to the actuator, because of the thermal capacitance of the hot fluid. The characterized speeds and efficiencies are key requirements for implementing broader research efforts involving the intelligent control of electric and thermofluidic networks to optimize the speed and efficiency of wet actuator arrays.

  5. Real-Time Identification of Smoldering and Flaming Combustion Phases in Forest Using a Wireless Sensor Network-Based Multi-Sensor System and Artificial Neural Network.

    PubMed

    Yan, Xiaofei; Cheng, Hong; Zhao, Yandong; Yu, Wenhua; Huang, Huan; Zheng, Xiaoliang

    2016-08-04

    Diverse sensing techniques have been developed and combined with machine learning methods for forest fire detection, but none of them has addressed identifying smoldering and flaming combustion phases. This study attempts to identify different combustion phases in real time using a developed wireless sensor network (WSN)-based multi-sensor system and an artificial neural network (ANN). Sensors (CO, CO₂, smoke, air temperature and relative humidity) were integrated into one node of the WSN. An experiment was conducted using burning residual forest materials to test the responses of each node under no-combustion, smoldering-dominated and flaming-dominated conditions. The results showed that the five sensors have reasonable responses to artificial forest fire. To reduce the cost of the nodes, the smoke, CO₂ and temperature sensors were selected through correlation analysis. To achieve a higher identification rate, an ANN model was built and trained with inputs of four sensor groups: smoke; smoke and CO₂; smoke and temperature; and smoke, CO₂ and temperature. The model test results showed that multi-sensor input yielded higher predicting accuracy (≥82.5%) than single-sensor input (50.9%-92.5%). Based on these results, it is possible to reduce cost while keeping a relatively high fire identification rate, and potential applications of the system can be tested under real forest conditions in the future.
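
    The input-group comparison in this record is straightforward to reproduce in outline. The sketch below trains a small feedforward ANN (scikit-learn's MLPClassifier) on synthetic sensor data and compares single- versus multi-sensor inputs; the data-generation rule and labels are hypothetical stand-ins for the WSN measurements.

      # Minimal sketch of comparing sensor input groups with a small ANN;
      # synthetic data, not the paper's measurements.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 600
      smoke = rng.normal(0, 1, n); co2 = rng.normal(0, 1, n); temp = rng.normal(0, 1, n)
      # Hypothetical labels: 0 = no fire, 1 = smoldering, 2 = flaming.
      y = (smoke + 0.8 * co2 + 0.5 * temp + rng.normal(0, 0.5, n) > 0).astype(int) \
          + (smoke + co2 > 1.5).astype(int)

      for name, X in {"smoke only": np.c_[smoke],
                      "smoke+CO2+temp": np.c_[smoke, co2, temp]}.items():
          Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
          clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
          clf.fit(Xtr, ytr)
          print(name, "accuracy:", round(clf.score(Xte, yte), 3))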

  6. Calibration-induced uncertainty of the EPIC model to estimate climate change impact on global maize yield

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di

    2016-09-01

    Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded, high-resolution crop simulation has emerged as a useful tool for building this understanding, but large uncertainties exist in its use, limiting its capacity as a tool to devise adaptation strategies. Increasing attention has been given to uncertainties arising from climate scenarios, input data, and model choice, but uncertainties due to model parameters or calibration are still poorly known. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) for simulating global-gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five strategies, each shifting one specific parameter per simulation, to calibrate the model and understand the effects of calibration. Regionalizing crop phenology or harvest index appears effective for calibrating the model globally, but using different phenology values generates pronounced differences in the estimated climate impact. However, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different model parameter values result in a modest uncertainty at the global level, with differences in the global yield change of less than 30% by the 2080s. This uncertainty decreases if model calibration or input-data quality control is applied. Calibration has a larger effect at local scales, with implications for the possible types and locations of adaptation.

  7. Gravitational field models for study of Earth mantle dynamics

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The tectonic forces or stresses due to small-scale mantle flow under the South American plate are detected and determined by utilizing the harmonics of the geopotential field model. The high-degree harmonics are assumed to describe the small-scale mantle convection patterns. The input data used in deriving this model consist of 840,000 optical, electronic, and laser observations and 1,656 5 deg x 5 deg mean free-air anomalies. Although some statistically questionable aspects of the high-degree harmonics remain, it seems appropriate now to explore their implications for the tectonic forces or stress field under the crust.

  8. Bioaccessibility of bioactive compounds after non-thermal processing of an exotic fruit juice blend sweetened with Stevia rebaudiana.

    PubMed

    Buniowska, Magdalena; Carbonell-Capella, Juana M; Frigola, Ana; Esteve, Maria J

    2017-04-15

    A comparative study of the bioaccessibility of bioactive compounds and antioxidant capacity in a fruit juice-Stevia rebaudiana mixture processed by pulsed electric fields (PEF), high voltage electrical discharges (HVED) and ultrasound (USN) technology at two equivalent energy inputs (32-256 kJ/kg) was made using an in vitro model. Ascorbic acid was not detected following intestinal digestion, while HVED, PEF and USN treatments increased total carotenoid bioaccessibility. HVED at an energy input of 32 kJ/kg improved the bioaccessibility of phenolic compounds (34.2%), anthocyanins (31.0%) and antioxidant capacity (35.8%, 29.1%, 31.9% for the TEAC, ORAC and DPPH assays, respectively) compared to the untreated sample. This was also observed for PEF-treated samples at an energy input of 256 kJ/kg (37.0%, 15.6%, 29.4%, 26.5%, 23.5% for phenolics, anthocyanins, and antioxidant capacity using the TEAC, ORAC and DPPH methods, respectively). Consequently, pulsed electric technologies (HVED and PEF) show good prospects for enhancing the bioaccessibility of compounds with putative health benefits. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Influence evaluation of loading conditions during pressurized thermal shock transients based on thermal-hydraulics and structural analyses

    NASA Astrophysics Data System (ADS)

    Katsuyama, Jinya; Uno, Shumpei; Watanabe, Tadashi; Li, Yinsheng

    2018-03-01

    The thermal-hydraulic (TH) behavior of coolant water is a key factor in structural integrity assessments of reactor pressure vessels (RPVs) of pressurized water reactors (PWRs) under pressurized thermal shock (PTS) events, because the TH behavior may affect the loading conditions used in the assessment. From the viewpoint of TH behavior, the configuration of plant equipment and its dimensions, together with the operator action time, considerably influence parameters such as the temperature and flow rate of the coolant water and the inner pressure. In this study, to investigate the influence of the operator action time on TH behavior during a PTS event, we developed an analysis model for a typical Japanese PWR plant, including the RPV and the main components of both the primary and secondary systems, and performed TH analyses using the system analysis code RELAP5. We applied two different operator action times based on the Japanese and United States (US) rules: operators act 10 min after the occurrence of a PTS event under Japanese rules and 30 min after under US rules. Based on the TH analysis results for the different operator action times, we also performed structural analyses to evaluate thermal-stress distributions in the RPV during PTS events as loading conditions for the structural integrity assessment. The analysis results clarified that differences in operator action times significantly affect TH behavior and loading conditions: the Japanese rule may lead to lower stresses than the US rule because the earlier operator action results in lower pressure in the RPV.

  10. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
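
    The antithetic-variates idea used in this record pairs each random draw with its mirror image, which typically cuts estimator variance for monotone outputs. The Python sketch below demonstrates the mechanics with a stand-in payoff function, not the UKPDS 68 outcome equations the study actually simulated.

      # Minimal sketch of antithetic variates for a Monte Carlo mean estimate;
      # the payoff function is a hypothetical stand-in.
      import numpy as np

      rng = np.random.default_rng(1)
      f = lambda z: np.exp(0.1 * z)        # hypothetical model output per draw

      n = 50_000
      plain = f(rng.standard_normal(n))    # plain Monte Carlo: n independent draws

      z = rng.standard_normal(n // 2)
      anti = 0.5 * (f(z) + f(-z))          # pair each draw with its negation

      # Compare estimator variances at equal numbers of function evaluations.
      print("plain MC:      mean %.5f  est. var %.2e" % (plain.mean(), plain.var() / n))
      print("antithetic MC: mean %.5f  est. var %.2e" % (anti.mean(), anti.var() / (n // 2)))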

  11. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread, or uncertainty, in CME arrival time predictions arising from uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits; using the actual arrival times, an average absolute error of 8.20 hours was found across the twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly their mean and spread. When the observed arrival is not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.
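
    The ensemble summary statistics quoted above (spread, mean absolute error, whether the observation falls in the predicted range) are simple to compute. The sketch below shows one way, using hypothetical arrival times rather than any of the 15 events studied.

      # Minimal sketch of summarizing an ensemble of predicted CME arrival times
      # against an observed arrival; all times are hypothetical.
      from datetime import datetime, timedelta
      from statistics import mean

      t0 = datetime(2013, 6, 1)
      predicted = [t0 + timedelta(hours=h) for h in (40.2, 43.5, 38.9, 46.1, 41.7)]
      observed = t0 + timedelta(hours=44.0)

      errors_h = [(p - observed).total_seconds() / 3600 for p in predicted]
      print("ensemble mean arrival error: %.2f h" % mean(errors_h))
      print("mean absolute error:         %.2f h" % mean(abs(e) for e in errors_h))
      print("observed within ensemble spread:",
            min(predicted) <= observed <= max(predicted))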

  12. User's Manual for LINER: FORTRAN Code for the Numerical Simulation of Plane Wave Propagation in a Lined Two-Dimensional Channel

    NASA Technical Reports Server (NTRS)

    Reichert, R. S.; Biringen, S.; Howard, J. E.

    1999-01-01

    LINER is a system of Fortran 77 codes which performs a 2D analysis of acoustic wave propagation and noise suppression in a rectangular channel with a continuous liner at the top wall. This new implementation is designed to streamline the usage of the several codes making up LINER, resulting in a useful design tool. Major input parameters are placed in two main data files, input.inc and num.prm. Output data appear in the form of ASCII files as well as a choice of GNUPLOT graphs. Section 2 briefly describes the physical model. Section 3 discusses the numerical methods; Section 4 gives a detailed account of program usage, including input formats and graphical options. A sample run is also provided. Finally, Section 5 briefly describes the individual program files.

  13. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  14. Evaluating the sensitivity of agricultural model performance to different climate inputs

    PubMed Central

    Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.

    2017-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985
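
    Neither record specifies the bias-correction scheme used for the reanalysis precipitation, but empirical quantile mapping is one common approach and illustrates the idea: each reanalysis value is mapped through its own empirical CDF into the observed distribution. The sketch below uses synthetic gamma-distributed "precipitation" as a stand-in.

      # Minimal sketch of empirical quantile mapping for bias correction;
      # illustrative only, with synthetic data in place of real precipitation.
      import numpy as np

      rng = np.random.default_rng(2)
      obs = rng.gamma(2.0, 2.0, 5000)          # hypothetical observed precipitation
      reanalysis = rng.gamma(2.0, 3.0, 5000)   # hypothetical biased reanalysis

      def quantile_map(x, biased_ref, obs_ref):
          """Map each value through the biased CDF into the observed distribution."""
          q = np.searchsorted(np.sort(biased_ref), x) / len(biased_ref)
          return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

      corrected = quantile_map(reanalysis, reanalysis, obs)
      print("means: obs %.2f  raw %.2f  corrected %.2f"
            % (obs.mean(), reanalysis.mean(), corrected.mean()))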

  15. Financial effect of instituting Deficit Reduction Act documentation requirements in family planning clinics in Oregon.

    PubMed

    Rodriguez, Maria Isabel; Angus, Lisa; Elman, Emily; Darney, Philip D; Caughey, Aaron B

    2011-06-01

    The study was conducted to estimate the long-term costs for implementing citizenship documentation requirements in a Medicaid expansion program for family planning services in Oregon. A decision-analytic model was developed using two perspectives: the state and society. Our primary outcome was future reproductive health care costs due to pregnancy in the next 5 years. A Markov structure was utilized to capture multiple future pregnancies. Model inputs were retrieved from the existing literature and local hospital and Medicaid data related to reimbursements. One-way and multi-way sensitivity analyses were conducted. A Monte Carlo simulation was performed to simultaneously incorporate uncertainty from all of the model inputs. Screening for citizenship results in a loss of $3119 over 5 years ($39,382 vs. $42,501) for the state and $4209 for society ($63,391 compared to $59,182) for adult women. Among adolescents, requiring proof of identity and citizenship results in a loss of $3123 for the state ($39,378 versus $42,501) and $4214 for society ($63,391 instead of $59,177). Screening for citizenship status in publicly funded family planning clinics leads to financial losses for the state and society. Copyright © 2011 Elsevier Inc. All rights reserved.
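
    The Markov cohort structure described here (annual cycles, state-dependent costs, expected-cost accumulation) can be sketched compactly. The states, transition probabilities, and costs below are hypothetical illustrations, not the paper's Oregon model inputs.

      # Minimal sketch of a Markov cohort model accumulating expected costs
      # over 5 annual cycles; all numbers hypothetical.
      import numpy as np

      states = ["not pregnant", "pregnant", "postpartum"]
      P = np.array([[0.85, 0.15, 0.00],        # annual transition probabilities
                    [0.00, 0.00, 1.00],
                    [0.90, 0.10, 0.00]])
      cost = np.array([120.0, 9500.0, 1800.0]) # hypothetical annual cost per state

      x = np.array([1.0, 0.0, 0.0])            # cohort starts 'not pregnant'
      total = 0.0
      for year in range(5):
          total += x @ cost                    # expected cost incurred this cycle
          x = x @ P                            # advance the cohort one cycle
      print(f"expected 5-year cost per woman: ${total:,.0f}")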

  16. Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network

    NASA Technical Reports Server (NTRS)

    Kuhn, D. Richard; Kacker, Raghu; Lei, Yu

    2010-01-01

    This study compared random and t-way combinatorial inputs of a network simulator to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or to determine modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
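
    The coverage metric behind this comparison is easy to make concrete: enumerate every required 2-way parameter-value pair and measure what fraction a given test set hits. The Python sketch below does this for hypothetical simulator parameters; the same machinery extends to t = 3 or 4.

      # Minimal sketch of measuring 2-way (pairwise) coverage of a test set;
      # parameters and values are hypothetical.
      from itertools import combinations, product
      import random

      params = {"topology": ["ring", "star", "mesh"],
                "queue":    ["fifo", "priority"],
                "timeout":  ["short", "long"]}
      names = list(params)

      # Every value pair a 2-way covering test set must hit at least once.
      required = {((a, va), (b, vb))
                  for a, b in combinations(names, 2)
                  for va, vb in product(params[a], params[b])}

      def covered(tests):
          hit = {((a, t[a]), (b, t[b]))
                 for t in tests for a, b in combinations(names, 2)}
          return len(hit & required) / len(required)

      random_tests = [{n: random.choice(v) for n, v in params.items()} for _ in range(6)]
      print("2-way coverage of 6 random tests: %.0f%%" % (100 * covered(random_tests)))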

  17. The Development of a Model Design to Assess Instruction in Farm Management in Terms of Economic Returns and the Understanding of Economic Principles.

    ERIC Educational Resources Information Center

    Rolloff, John August

    The records of 27 farm operators participating in farm business analysis programs in 5 Ohio schools were studied to develop and test a model for determining the influence of the farm business analysis phase of vocational agriculture instruction in farm management. Economic returns were measured as ratios between 1965 program inputs and outputs…

  18. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in severe channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals received by coherent demodulation. According to SR theory, a nonlinear receiver model is established and used to receive 2DPSK signals at small signal-to-noise ratios (SNR, between -15 dB and 5 dB), and it is compared with the conventional demodulation method. The experimental results demonstrate that when the input SNR is in the range of -15 dB to 5 dB, the output bit error rate of the SR-based nonlinear system model declines significantly compared to that of the conventional model; it is reduced by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
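
    For orientation, the classic SR receiver element is an overdamped bistable system in which noise helps a weak input drive hopping between potential wells. The sketch below integrates such a system with Euler-Maruyama under a weak binary input; the parameters are illustrative, and the paper's actual receiver model may differ in detail.

      # Minimal sketch of the overdamped bistable system used in SR receivers,
      # dx/dt = a*x - b*x^3 + s(t) + noise; all parameters hypothetical.
      import numpy as np

      a, b = 1.0, 1.0                 # bistable potential U(x) = -a x^2/2 + b x^4/4
      dt, n = 1e-3, 200_000
      rng = np.random.default_rng(3)

      t = np.arange(n) * dt
      s = 0.3 * np.sign(np.sin(2 * np.pi * 0.5 * t))          # weak binary input
      noise = np.sqrt(2 * 0.4 * dt) * rng.standard_normal(n)  # noise intensity D = 0.4

      x = np.zeros(n)
      for k in range(n - 1):
          x[k + 1] = x[k] + (a * x[k] - b * x[k] ** 3 + s[k]) * dt + noise[k]

      # With suitable noise the output hops between wells in step with the input.
      print("correlation(input, output): %.3f" % np.corrcoef(s, x)[0, 1])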

  19. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-02-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) allows inference of the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts, in comparison with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
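
    Extra-Trees and their variable-importance output are available off the shelf, which makes the workflow in this record (and the companion record below) easy to sketch. The example uses scikit-learn's ExtraTreesRegressor on synthetic data; the predictors and response are hypothetical stand-ins for catchment observations.

      # Minimal sketch of an Extra-Trees regressor with variable importances;
      # synthetic data, not the Marina or Canning River observations.
      import numpy as np
      from sklearn.ensemble import ExtraTreesRegressor

      rng = np.random.default_rng(4)
      n = 1000
      rain = rng.gamma(2, 2, n)
      temp = rng.normal(20, 5, n)
      level = rng.normal(1, 0.2, n)
      flow = 3 * rain + 0.5 * level + rng.normal(0, 1, n)   # hypothetical response

      X = np.column_stack([rain, temp, level])
      model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, flow)

      # The importance ranking is what lends itself to physical interpretation.
      for name, imp in zip(["rainfall", "temperature", "upstream level"],
                           model.feature_importances_):
          print(f"{name:15s} importance {imp:.3f}")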

  20. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows inference of the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.

  1. CubeSat mission design software tool for risk estimating relationships

    NASA Astrophysics Data System (ADS)

    Gamble, Katharine Brumbaugh; Lightsey, E. Glenn

    2014-09-01

    In an effort to make the CubeSat risk estimation and management process more scientific, a software tool has been created that enables mission designers to estimate mission risks. CubeSat mission designers are able to input mission characteristics, such as form factor, mass, development cycle, and launch information, in order to determine the mission risk root causes which historically present the highest risk for their mission. Historical data was collected from the CubeSat community and analyzed to provide a statistical background to characterize these Risk Estimating Relationships (RERs). This paper develops and validates the mathematical model based on the same cost estimating relationship methodology used by the Unmanned Spacecraft Cost Model (USCM) and the Small Satellite Cost Model (SSCM). The RER development uses general error regression models to determine the best fit relationship between root cause consequence and likelihood values and the input factors of interest. These root causes are combined into seven overall CubeSat mission risks which are then graphed on the industry-standard 5×5 Likelihood-Consequence (L-C) chart to help mission designers quickly identify areas of concern within their mission. This paper is the first to document not only the creation of a historical database of CubeSat mission risks, but, more importantly, the scientific representation of Risk Estimating Relationships.

  2. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    PubMed

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activities of motor neurons are modeled as inhomogeneous Poisson processes, and we propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
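
    The generative side of this model is simple to illustrate: one latent trajectory modulates the firing rates of a pool of inhomogeneous Poisson neurons. The sketch below is a toy forward simulation only (the paper's contribution is the inference, which is not reproduced here); the latent random walk, gains, and rates are hypothetical.

      # Minimal sketch: a latent common input drives a pool of inhomogeneous
      # Poisson neurons; all rates and gains hypothetical.
      import numpy as np

      rng = np.random.default_rng(5)
      dt, n_bins, n_neurons = 1e-3, 5000, 10

      # Latent common input: a slow random walk shared across the pool.
      z = np.cumsum(rng.normal(0, 0.02, n_bins))

      # Each neuron's rate is a positive function of z with a private gain.
      gains = rng.uniform(0.8, 1.2, n_neurons)
      rates = 10.0 * np.exp(np.outer(gains, z))   # spikes/s, shape (neurons, bins)

      # Bernoulli approximation to Poisson spiking in small bins.
      spikes = rng.random((n_neurons, n_bins)) < rates * dt
      print("mean firing rates (Hz):",
            (spikes.sum(axis=1) / (n_bins * dt)).round(1))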

  3. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of the Volterra-type model with a nonlinear autoregressive (NAR) component, a suitable framework for describing the process of action potential (AP) generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous AP is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.
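
    To make the structure concrete, the sketch below implements a toy discrete-time filter with first- and second-order Volterra branches plus an autoregressive branch gated on past suprathreshold output, which is the structural idea described above. All kernel values are arbitrary illustrative choices, not the fitted H-H kernels, and cross-kernels are omitted for brevity.

      # Toy discrete-time Volterra filter with a thresholded NAR branch;
      # kernels are illustrative, not fitted to H-H data.
      import numpy as np

      def nar_volterra(u, k1, k2, g1, thresh=1.0):
          """y[n] = k1·u_past + u_past·k2·u_past + g1·(suprathreshold y_past)."""
          M, y = len(k1), np.zeros_like(u)
          for n in range(len(u)):
              up = u[max(0, n - M + 1):n + 1][::-1]          # recent input, newest first
              y[n] = k1[:len(up)] @ up + up @ k2[:len(up), :len(up)] @ up
              yp = y[max(0, n - M):n][::-1]                  # recent output
              y[n] += g1[:len(yp)] @ np.where(yp > thresh, yp, 0.0)  # NAR branch
          return y

      M = 8
      k1 = 0.4 * np.exp(-np.arange(M) / 3.0)                 # first-order kernel
      k2 = 0.01 * np.outer(k1, k1)                           # second-order kernel
      g1 = -0.3 * np.exp(-np.arange(M) / 2.0)                # autoregressive kernel
      u = np.random.default_rng(6).normal(0, 1, 200)
      print("output sample:", nar_volterra(u, k1, k2, g1)[:5].round(3))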

  4. Next-Generation Lightweight Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil

    2013-01-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique: single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.

  5. Next Generation Lightweight Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William; Fitzgerald, Matthew; Stahl, Philip

    2013-01-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique: single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.

  6. Next Generation Lightweight Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, H. Philip

    2013-01-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique: single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models easier.

  7. A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling

    NASA Astrophysics Data System (ADS)

    Chen, L.; Gong, Y.; Shen, Z.

    2015-11-01

    Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have been conducted on the error transitivity from different input data sets to NPS modeling. In this paper, the effects of four input data sets, including rainfall, digital elevation models (DEMs), land use maps, and the amount of fertilizer, on NPS simulation were quantified and compared. Systematic input-induced uncertainty was investigated using a watershed model for phosphorus load prediction. Based on the results, rain gauge density resulted in the largest model uncertainty, followed by DEMs, whereas land use and fertilizer amount exhibited limited impacts. The mean coefficients of variation for errors in single rain gauge, multiple rain gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information were 0.390, 0.274, 0.186, 0.073, 0.033, and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted as a way to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.
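    As a rough illustration of how a coefficient of variation (CV) summarizes input-induced uncertainty, the sketch below computes the CV of model outputs across runs that differ only in the rainfall input. The load values are hypothetical, and the paper's exact CV-of-errors definition may differ.

```python
import numpy as np

# Minimal sketch (not the paper's exact procedure): quantify input-induced
# uncertainty as the coefficient of variation (CV) of model outputs across
# runs that differ only in one input data set.
# Hypothetical phosphorus-load predictions (t/yr) from runs driven by
# rainfall fields built from different single rain gauges:
loads = np.array([10.9, 14.2, 11.7, 13.8, 12.0, 15.1])

cv = loads.std(ddof=1) / loads.mean()
print(f"CV across single-gauge runs: {cv:.3f}")
# Repeating this for DEM, land-use, and fertilizer variants and averaging
# the CVs gives a comparable per-input uncertainty ranking.
```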

  8. Intracellular calcium dynamics permit a Purkinje neuron model to perform toggle and gain computations upon its inputs

    PubMed Central

    Forrest, Michael D.

    2014-01-01

    Without synaptic input, Purkinje neurons can spontaneously fire in a repeating trimodal pattern that consists of tonic spiking, bursting and quiescence. Climbing fiber (CF) input switches Purkinje neurons out of the trimodal firing pattern and toggles them between a tonic firing and a quiescent state, while setting the gain of their response to parallel fiber (PF) input. The basis of this transition is unclear. We investigate it using a biophysical Purkinje cell model under conditions of CF and PF input. The model can replicate these toggle and gain functions, dependent upon a novel account of intracellular calcium dynamics that we hypothesize to be applicable in real Purkinje cells. PMID:25191262

  9. A component prediction method for flue gas of natural gas combustion based on nonlinear partial least squares method.

    PubMed

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares (PLS) method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN are the extension term of the original independent input matrix. Partial least squares regression is then performed on the extended input matrix and the output matrix to establish the component prediction model of flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
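    A minimal sketch of the extended-input construction, using scikit-learn and hypothetical data in place of the near-infrared spectra: RBF centers from k-means and a median-distance kernel width are common heuristics, assumed here because the abstract does not specify how the RBF hidden layer is configured.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

def rbf_hidden_layer(X, centers, width):
    """Gaussian RBF activations of X for the given centers and width."""
    d2 = cdist(X, centers, "sqeuclidean")
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical spectra (n samples x p wavelengths) and gas concentrations
# (CH4, CO, CO2), standing in for the paper's near-infrared data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 50))
Y = rng.normal(size=(80, 3))

# 1) RBF hidden layer: centers from k-means (a common heuristic choice).
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
width = np.median(cdist(centers, centers))  # heuristic kernel width
H = rbf_hidden_layer(X, centers, width)

# 2) Extended input: original X augmented with the hidden-layer outputs.
X_ext = np.hstack([X, H])

# 3) Ordinary PLS regression on the extended input matrix.
model = PLSRegression(n_components=8).fit(X_ext, Y)
rmse = np.sqrt(((Y - model.predict(X_ext)) ** 2).mean(axis=0))
print("per-component RMSE:", rmse)
```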

  10. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. Validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted value of net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  11. Spatial modeling of wild bird risk factors to investigate highly pathogenic A(H5N1) avian influenza virus transmission

    USGS Publications Warehouse

    Prosser, Diann J.; Hungerford, Laura L.; Erwin, R. Michael; Ottinger, Mary Ann; Takekawa, John Y.; Newman, Scott H.; Xiao, Xianming; Ellis, Erie C.

    2016-01-01

    One of the longest-persisting avian influenza viruses in history, highly pathogenic avian influenza virus (HPAIV) A(H5N1), continues to evolve after 18 years, advancing the threat of a global pandemic. Wild waterfowl (family Anatidae) are reported as secondary transmitters of HPAIV and primary reservoirs for low-pathogenic avian influenza viruses, yet spatial inputs for disease risk modeling for this group have been lacking. Using GIS and Monte Carlo simulations, we developed geospatial indices of waterfowl abundance at 1 and 30 km resolutions and for the breeding and wintering seasons for China, the epicenter of H5N1. Two spatial layers were developed: cumulative waterfowl abundance (WAB), a measure of predicted abundance across species, and cumulative abundance weighted by H5N1 prevalence (WPR), whereby abundance for each species was adjusted based on prevalence values and then totaled across species. Spatial patterns of the model output differed between seasons, with higher WAB and WPR in the northern and western regions of China for the breeding season and in the southeast for the wintering season. Uncertainty measures indicated highest error in southeastern China for both WAB and WPR. We also explored the effect of resampling waterfowl layers from 1 km to 30 km resolution for multi-scale risk modeling. Results indicated low average difference (less than 0.16 and 0.01 standard deviations for WAB and WPR, respectively), with greatest differences in the north for the breeding season and southeast for the wintering season. This work provides the first geospatial models of waterfowl abundance available for China. The indices provide important inputs for modeling disease transmission risk at the interface of poultry and wild birds. These models are easily adaptable, have broad utility to both disease and conservation needs, and will be available to the scientific community for advanced modeling applications.
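    A minimal sketch of the two indices as described: cumulative abundance sums the per-species grids, while the prevalence-weighted index scales each species grid by its H5N1 prevalence before summing. The grids and prevalence values below are hypothetical stand-ins for the 1 km rasters and species-level prevalence data.

```python
import numpy as np

# Hypothetical per-species abundance grids (species x rows x cols) and
# H5N1 prevalence values per species; real inputs are 1 km rasters.
rng = np.random.default_rng(1)
abundance = rng.gamma(2.0, 1.5, size=(5, 4, 4))   # 5 species on a 4x4 grid
prevalence = np.array([0.02, 0.10, 0.05, 0.01, 0.08])

# Cumulative waterfowl abundance (WAB): sum across species per cell.
WAB = abundance.sum(axis=0)

# Prevalence-weighted abundance (WPR): weight each species grid by its
# prevalence, then total across species per cell.
WPR = (abundance * prevalence[:, None, None]).sum(axis=0)

print("WAB cell (0,0):", WAB[0, 0], " WPR cell (0,0):", WPR[0, 0])
```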

  12. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs, based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer from any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
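    To make the modeling framework concrete, here is a minimal leaky integrate-and-fire simulation with superposed Poisson excitation and inhibition that measures the interspike-interval (ISI) coefficient of variation. All parameters are illustrative assumptions; the paper's renewal-process or fractional-Gaussian-noise-driven inputs would replace the Poisson draws.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch: superposed Poisson excitation
# and inhibition drive a leaky membrane; we record spike times and the
# interspike-interval (ISI) coefficient of variation. All parameters are
# illustrative, not taken from the paper.
rng = np.random.default_rng(2)
dt, T = 1e-4, 20.0                    # time step and duration (s)
tau, v_th, v_reset = 20e-3, 1.0, 0.0  # membrane time constant, threshold
n_exc, n_inh, rate = 800, 150, 10.0   # input counts and per-input rate (Hz)
w_exc, w_inh = 0.01, -0.02            # synaptic weights

v, spikes = 0.0, []
for k in range(int(T / dt)):
    # Total synaptic drive this step: Poisson spike counts across inputs.
    drive = (w_exc * rng.poisson(n_exc * rate * dt)
             + w_inh * rng.poisson(n_inh * rate * dt))
    v += -v * dt / tau + drive        # leak plus input
    if v >= v_th:
        spikes.append(k * dt)
        v = v_reset

isi = np.diff(spikes)
print(f"{len(spikes)} spikes, ISI CV = {isi.std(ddof=1) / isi.mean():.2f}")
# Renewal-process or fractional-Gaussian-noise-driven inputs, as studied
# in the paper, would replace the two Poisson draws above.
```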

  13. Development of a Test to Evaluate Aerothermal Response of Materials to Hypersonic Flow Using a Scramjet Wind Tunnel (Postprint)

    DTIC Science & Technology

    2010-05-01

    [Report documentation page residue; recoverable fields: contract number FA8650-10-D-5226-0002, program element number 62102F. Recoverable abstract fragment: "...prototype scramjet engine as a wind tunnel. A sample holder was designed using combustion fluid dynamics results as inputs into structural models. The..." Title: Development of a Test to Evaluate Aerothermal Response of Materials to Hypersonic Flow Using a Scramjet Wind Tunnel. Author fragment: Triplicane A]

  14. Using Landsat data to estimate evapotranspiration of winter wheat

    NASA Technical Reports Server (NTRS)

    Kanemasu, E. T.; Heilman, J. L.; Bagley, J. O.; Powers, W. L.

    1977-01-01

    Results obtained from an evapotranspiration model as applied to Kansas winter wheat fields were compared with results determined by a weighing lysimeter; the standard deviation was found to be less than 0.5 mm/day (the 95% confidence interval, however, was within plus or minus 0.2 mm/day). Model inputs are solar radiation, temperature, precipitation, and leaf area index; an equation was developed to estimate the leaf area index from Landsat data. The model provides estimates of transpiration, evaporation, and soil moisture.

  15. Complex time series analysis of PM10 and PM2.5 for a coastal site using artificial neural network modelling and k-means clustering

    NASA Astrophysics Data System (ADS)

    Elangasinghe, M. A.; Singhal, N.; Dirks, K. N.; Salmond, J. A.; Samarasinghe, S.

    2014-09-01

    This paper uses artificial neural networks (ANN), combined with k-means clustering, to understand the complex time series of PM10 and PM2.5 concentrations at a coastal location of New Zealand based on data from a single site. Out of the available meteorological parameters from the network (wind speed, wind direction, solar radiation, temperature, relative humidity), key factors governing the pattern of the time series concentrations were identified through input sensitivity analysis performed on the trained neural network model. The transport pathways of particulate matter under these key meteorological parameters were further analysed through bivariate concentration polar plots and k-means clustering techniques. The analysis shows that external sources such as marine aerosols and local sources such as traffic and biomass burning contribute equally to the particulate matter concentrations at the study site. These results are in agreement with the results of receptor modelling by the Auckland Council based on Positive Matrix Factorization (PMF). Our findings also show that contrasting concentration-wind speed relationships exist between marine aerosols and local traffic sources, resulting in very noisy and seemingly random PM10 concentrations. The inclusion of cluster rankings as an input parameter to the ANN model showed a statistically significant (p < 0.005) improvement in the performance of the ANN time series model and better performance in capturing high concentrations. For the presented case study, the correlation coefficient between observed and predicted concentrations improved from 0.77 to 0.79 for PM2.5 and from 0.63 to 0.69 for PM10, and the root mean squared error (RMSE) was reduced from 5.00 to 4.74 for PM2.5 and from 6.77 to 6.34 for PM10. The techniques presented here enable the user to obtain an understanding of potential sources and their transport characteristics prior to the implementation of costly chemical analysis techniques or advanced air dispersion models.
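    A hedged sketch of the cluster-rank idea: cluster hypothetical meteorological records with k-means, rank the clusters by mean concentration, and feed the rank to a neural network as an extra input. The data, network size, and ranking rule are assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical hourly data: meteorology (wind speed, wind direction as
# sin/cos, solar radiation, temperature, RH) and PM10 concentrations.
rng = np.random.default_rng(3)
met = rng.normal(size=(2000, 6))
pm10 = 20 + 5 * met[:, 0] - 3 * met[:, 3] + rng.normal(0, 4, size=2000)

# Cluster the meteorological conditions; rank clusters by mean observed
# concentration and use the rank as an additional ANN input feature.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(met)
cluster_means = [pm10[labels == c].mean() for c in range(6)]
rank_of = {c: r for r, c in enumerate(np.argsort(cluster_means))}
ranks = np.array([rank_of[c] for c in labels], dtype=float)

X = np.column_stack([met, ranks])
X_tr, X_te, y_tr, y_te = train_test_split(X, pm10, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((ann.predict(X_te) - y_te) ** 2))
print(f"RMSE with cluster-rank input: {rmse:.2f}")
```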

  16. Formulation of a mathematical model using parameter estimation techniques from flight test data for the Bell 427 helicopter and the F/A-18 aircraft for use in aeroservoelasticity research

    NASA Astrophysics Data System (ADS)

    Nadeau-Beaulieu, Michel

    In this thesis, three mathematical models are built from flight test data for different aircraft design applications: a ground dynamics model for the Bell 427 helicopter, a prediction model for the rotor and engine parameters of the same helicopter type, and a simulation model for the aeroelastic deflections of the F/A-18. In the ground dynamics application, the model structure is derived from physics: the normal force between the helicopter and the ground is modelled as a vertical spring, and the frictional force is modelled with static and dynamic friction coefficients. The ground dynamics model coefficients are optimized to ensure that the model matches the landing data within the FAA (Federal Aviation Administration) tolerance bands for a Level D flight simulator. In the rotor and engine application, the rotor torques (main and tail), the engine torque, and the main rotor speed are estimated using a state-space model. The model inputs are nonlinear terms derived from the pilot control inputs and the helicopter states. The model parameters are identified using the subspace method and are further optimized with the Levenberg-Marquardt minimization algorithm. The model built with the subspace method provides an excellent estimate of the outputs within the FAA tolerance bands. The F/A-18 aeroelastic state-space model is built from flight test data. The research concerning this model is divided into two parts. First, the deflection of a given structural surface on the aircraft following a differential aileron control input is represented by a multiple-input single-output linear model whose inputs are the aileron positions and the structural surface deflections. Second, a single state-space model is used to represent the deflection of the aircraft wings and trailing-edge flaps following any control input. In this case the model is made nonlinear by multiplying model inputs into higher-order terms and using these terms as the inputs of the state-space equations. In both cases, the identification method is the subspace method. Most fit coefficients between the estimated and measured signals are above 73%, and most correlation coefficients are higher than 90%.
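    A minimal sketch of the refinement step described above, assuming a generic discrete state-space structure: parameters are refined by Levenberg-Marquardt (SciPy's least_squares with method="lm") against measured outputs, starting from an initial estimate such as the subspace-identification result. The data and model order are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, u, n_states, n_out):
    """Simulate y[k] = C x[k], x[k+1] = A x[k] + B u[k] from flat params."""
    n_in = u.shape[1]
    i = 0
    A = theta[i:i + n_states**2].reshape(n_states, n_states); i += n_states**2
    B = theta[i:i + n_states*n_in].reshape(n_states, n_in); i += n_states*n_in
    C = theta[i:].reshape(n_out, n_states)
    x = np.zeros(n_states)
    y = np.empty((len(u), n_out))
    for k in range(len(u)):
        y[k] = C @ x
        x = A @ x + B @ u[k]
    return y

# Hypothetical data: pilot control inputs u and measured outputs y_meas,
# standing in for flight-test records of rotor and engine parameters.
rng = np.random.default_rng(4)
u = rng.normal(size=(400, 2))
theta_true = rng.normal(scale=0.3, size=2*2 + 2*2 + 1*2)
y_meas = simulate(theta_true, u, 2, 1) + rng.normal(0, 0.01, size=(400, 1))

# Levenberg-Marquardt refinement from a perturbed initial guess (in
# practice, the subspace-identification estimate).
theta0 = theta_true + rng.normal(scale=0.05, size=theta_true.size)
res = least_squares(
    lambda th: (simulate(th, u, 2, 1) - y_meas).ravel(),
    theta0, method="lm")
print("residual norm:", np.linalg.norm(res.fun))
```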

  17. Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL with CONED Model

    NASA Technical Reports Server (NTRS)

    Emmons, D.; Acebal, A.; Pulkkinen, A.; Taktakishvili, A.; MacNeice, P.; Odstricil, D.

    2013-01-01

    The combination of the Wang-Sheeley-Arge (WSA) coronal model, ENLIL heliospherical model version 2.7, and CONED Model version 1.3 (WSA-ENLIL with CONED Model) was employed to form ensemble forecasts for 15 halo coronal mass ejections (halo CMEs). The input parameter distributions were formed from 100 sets of CME cone parameters derived from the CONED Model. The CONED Model used image processing along with the bootstrap approach to automatically calculate cone parameter distributions from SOHO/LASCO imagery, based on techniques described by Pulkkinen et al. (2010). The input parameter distributions were used as input to WSA-ENLIL to calculate the temporal evolution of the CMEs, which were analyzed to determine the propagation times to the L1 Lagrangian point and the maximum Kp indices due to the impact of the CMEs on the Earth's magnetosphere. The Newell et al. (2007) Kp index formula was employed to calculate the maximum Kp indices based on the predicted solar wind parameters near Earth, assuming two magnetic field orientations: a completely southward magnetic field and a uniformly distributed clock angle. For 5 of the 15 events, the actual propagation time fell within one standard deviation of the ensemble average. Using the completely southward magnetic field assumption, 10 of the 15 events contained the actual maximum Kp index within the range of the ensemble forecast, compared to 9 of the 15 events when using a uniformly distributed clock angle.
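    The Newell et al. (2007) coupling function is dPhi_MP/dt = v^(4/3) B_T^(2/3) sin^(8/3)(theta_c/2), where v is the solar wind speed, B_T the transverse magnetic field magnitude, and theta_c the clock angle. The sketch below, with hypothetical stand-ins for WSA-ENLIL solar wind output at L1, evaluates this function across ensemble members under the abstract's two clock-angle assumptions; mapping the result to a Kp index would use the regression published by Newell et al., whose coefficients are not reproduced here.

```python
import numpy as np

# Ensemble evaluation of the Newell et al. (2007) coupling function
# d(Phi_MP)/dt = v**(4/3) * B_T**(2/3) * sin(theta_c/2)**(8/3)
# under the two clock-angle assumptions used in the abstract.
rng = np.random.default_rng(5)
n = 100                                    # ensemble members
v = rng.normal(550.0, 80.0, n)             # km/s, hypothetical speeds at L1
B_T = rng.normal(12.0, 3.0, n).clip(0.1)   # nT, transverse field magnitude

def coupling(v, B_T, theta):
    return v**(4/3) * B_T**(2/3) * np.abs(np.sin(theta / 2))**(8/3)

south = coupling(v, B_T, np.pi)                         # fully southward IMF
uniform = coupling(v, B_T, rng.uniform(0, 2*np.pi, n))  # random clock angle

for name, c in [("southward", south), ("uniform clock angle", uniform)]:
    print(f"{name}: mean = {c.mean():.0f}, std = {c.std(ddof=1):.0f}")
```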

  18. Propagation-of-uncertainty from contact angle and streaming potential measurements to XDLVO model assessments of membrane-colloid interactions.

    PubMed

    Muthu, Satish; Childress, Amy; Brant, Jonathan

    2014-08-15

    Membrane fouling was assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcomes of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of its impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽2.5°, while that for the zeta potential values must be ⩽7 mV.
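    A hedged sketch of the propagation-of-uncertainty idea: sample the measured inputs from distributions whose standard deviations equal the standard-error thresholds quoted above, and push the samples through the interaction-energy calculation. The interaction_energy function here is a hypothetical placeholder, not the actual XDLVO expressions, and the mean input values are assumed.

```python
import numpy as np

# Monte Carlo propagation-of-uncertainty sketch. interaction_energy() is a
# hypothetical placeholder for the XDLVO free-energy calculation, which in
# practice combines acid-base, Lifshitz-van der Waals, and electrostatic
# terms derived from contact angles and zeta potentials.
rng = np.random.default_rng(6)

def interaction_energy(theta_deg, zeta_mV):
    # Placeholder monotonic dependence, NOT the real XDLVO expressions.
    return -50.0 * np.cos(np.radians(theta_deg)) + 0.8 * zeta_mV

n = 10_000
theta = rng.normal(65.0, 2.5, n)   # contact angle, deg (SE from abstract)
zeta = rng.normal(-20.0, 7.0, n)   # zeta potential, mV (SE from abstract)

energy = interaction_energy(theta, zeta)
print(f"energy = {energy.mean():.1f} +/- {energy.std(ddof=1):.1f} (1 sigma)")
```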

  19. An urban runoff model designed to inform stormwater management decisions.

    PubMed

    Beck, Nicole G; Conley, Gary; Kanner, Lisa; Mathias, Margaret

    2017-05-15

    We present an urban runoff model designed for stormwater managers to quantify the runoff reduction benefits of mitigation actions, with lower input data and user expertise requirements than most commonly used models. The stormwater tool to estimate load reductions (TELR) employs a semi-distributed approach, in which landscape characteristics and process representation are spatially lumped within urban catchments on the order of 100 acres (40 ha). Hydrologic computations use a set of metrics that describe a 30-year rainfall distribution, combined with well-tested algorithms for rainfall-runoff transformation and routing, to generate average annual runoff estimates for each catchment. User inputs include the locations and specifications for a range of structural best management practice (BMP) types. The model was tested in a set of urban catchments within the Lake Tahoe Basin of California, USA, where modeled annual flows matched observed flows within 18% relative error for 5 of the 6 catchments and showed good regional performance across a suite of performance metrics. Comparisons with continuous simulation models showed an average difference of 3% from TELR-predicted runoff for a range of hypothetical urban catchments. The model usually identified the dominant BMP outflow components within 5% relative error of event-based measured flow data and simulated the correct proportionality between outflow components. TELR has been implemented as a web-based platform for use by municipal stormwater managers to inform prioritization, report program benefits, and meet regulatory reporting requirements (www.swtelr.com).
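    As an illustration of the semi-distributed, spatially-lumped approach (and emphatically not TELR's actual algorithm), the sketch below computes one average-annual runoff estimate per catchment from a runoff coefficient and applies a simple volumetric BMP reduction; all catchment values and the coefficient relation are assumptions.

```python
# Semi-distributed sketch (NOT the TELR algorithm): one lumped runoff
# estimate per catchment from annual rainfall and an impervious-area
# runoff coefficient, with a simple volumetric BMP capture reduction.
catchments = {
    # name: (area_ha, impervious_fraction, bmp_capture_fraction)
    "catchment_A": (42.0, 0.55, 0.30),
    "catchment_B": (38.0, 0.35, 0.10),
}
annual_rain_mm = 780.0  # hypothetical 30-year average

for name, (area_ha, imperv, bmp) in catchments.items():
    runoff_coeff = 0.05 + 0.85 * imperv          # illustrative relation
    depth_mm = annual_rain_mm * runoff_coeff     # runoff depth
    volume_m3 = depth_mm / 1000 * area_ha * 1e4  # depth x area
    volume_m3 *= (1.0 - bmp)                     # BMP capture reduction
    print(f"{name}: {volume_m3:,.0f} m^3/yr")
```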

  20. Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model

    DTIC Science & Technology

    2017-03-01

    ...set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction... We propose two optimization models. The first, the Trade Sanction Inoperability Input-Output Model (TS-IIM), selects the sector or set of sectors that... [Reference fragment: Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection. Unpublished doctoral dissertation.]
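    TS-IIM builds on the inoperability input-output model (IIM), a Leontief-style formulation in which a demand-side perturbation c* propagates through a normalized interdependency matrix A* to yield sector inoperabilities q = (I − A*)^(-1) c*. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Minimal sketch of the classic inoperability input-output model (IIM)
# that TS-IIM builds on: q = A* q + c*, so q = (I - A*)^(-1) c*, where
# A* is the normalized interdependency matrix, c* the demand-side
# perturbation, and q the resulting sector inoperability. Values are
# hypothetical; a sanction design would choose where to place c*.
A_star = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.05, 0.10],
    [0.05, 0.10, 0.15],
])
c_star = np.array([0.0, 0.08, 0.0])  # sanction perturbs sector 2's demand

q = np.linalg.solve(np.eye(3) - A_star, c_star)
print("sector inoperability:", np.round(q, 4))
```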
