Sample records for system modeling code

  1. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
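
    A minimal sketch of the kind of head-flow map a meanline method produces: the Euler work equation with a constant slip factor stands in for PUMPA's loss and slip correlations, and every number below (speed, geometry, slip) is an illustrative assumption rather than a value from the code.

      import numpy as np

      def euler_head(Q, N, r2, b2, beta2, slip=0.85, g=9.81):
          """Ideal pump head from the Euler work equation.

          Q: flow (m^3/s), N: shaft speed (rad/s), r2: impeller exit
          radius (m), b2: exit blade height (m), beta2: exit blade
          angle (rad). The constant slip factor is a placeholder for
          PUMPA's specific-speed/geometry correlations.
          """
          U2 = N * r2                              # blade tip speed
          Cm2 = Q / (2 * np.pi * r2 * b2)          # meridional exit velocity
          Cu2 = slip * (U2 - Cm2 / np.tan(beta2))  # tangential exit velocity
          return U2 * Cu2 / g                      # head in meters

      # A small map: head vs. flow at three shaft speeds
      for N in (800.0, 1000.0, 1200.0):            # rad/s, illustrative
          Q = np.linspace(0.02, 0.12, 6)           # m^3/s
          H = euler_head(Q, N, r2=0.08, b2=0.01, beta2=np.radians(65))
          print(f"N = {N:6.1f} rad/s, H [m] =", np.round(H, 1))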

  2. Rocketdyne/Westinghouse nuclear thermal rocket engine modeling

    NASA Technical Reports Server (NTRS)

    Glass, James F.

    1993-01-01

    The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; Rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.

  3. End-to-End Modeling with the Heimdall Code to Scope High-Power Microwave Systems

    DTIC Science & Technology

    2007-06-01

    END-TO-END MODELING WITH THE HEIMDALL CODE TO SCOPE HIGH-POWER MICROWAVE SYSTEMS. John A. Swegle, Savannah River National Laboratory, 743A...describe the expert-system code HEIMDALL, which is used to model full high-power microwave systems using over 60 systems-engineering models, developed in...of our calculations of the mass of a Supersystem producing 500-MW, 15-ns output pulses in the X band for bursts of 1 s, interspersed with 10-s

  4. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  5. The adaption and use of research codes for performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

    1987-05-01

    Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
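
    The contrast drawn here, a PA code running many fast simulations where direct experiments are impossible, amounts to a Monte Carlo loop over an inexpensive surrogate of the detailed RS code. A schematic sketch; the response function and parameter distributions are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(42)

      def release_fraction(conductivity, porosity, t_years):
          # Stand-in for a fast surrogate of a detailed RS transport code;
          # a real PA code would call an abstracted process model here.
          return 1.0 - np.exp(-conductivity * t_years / (porosity * 1e4))

      n_trials = 10_000
      # Sample uncertain inputs from assumed distributions
      k = rng.lognormal(mean=-18.0, sigma=1.0, size=n_trials)  # conductivity, m/s
      phi = rng.uniform(0.05, 0.25, size=n_trials)             # porosity

      releases = release_fraction(k, phi, t_years=1e4)
      print("mean release fraction:", releases.mean())
      print("95th percentile:      ", np.quantile(releases, 0.95))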

  6. Nuclear thermal propulsion engine system design analysis code development

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.

    1992-01-01

    A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output, and was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.

  7. Performance analysis of optical wireless communication system based on two-fold turbo code

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Huang, Dexiu; Yuan, Xiuhua

    2005-11-01

    Optical wireless communication (OWC) is beginning to emerge in the telecommunications market as a strategy to meet last-mile demand owing to its unique combination of features. Turbo codes have an impressive near-Shannon-limit error-correcting performance. Twofold turbo codes have recently been introduced as the least complex member of the multifold turbo code family. In this paper, we first present the mathematical model of the signal and the optical wireless channel with fading, together with a bit-error-rate model that accounts for scintillation; we then apply a new turbo code method to the OWC system. The OWC system achieves a better BER curve with the twofold turbo code than with a common turbo code.
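
    A hedged illustration of the kind of BER-versus-SNR experiment the paper describes: on-off keying through a lognormal scintillation channel, evaluated by Monte Carlo. The turbo encoder/decoder pair is omitted, and the turbulence strength and decision threshold are assumed values, so this reproduces only the uncoded baseline such a study starts from.

      import numpy as np

      rng = np.random.default_rng(0)

      def ook_ber_lognormal(snr_db, sigma_ln=0.3, n=200_000):
          """Monte Carlo BER of on-off keying through lognormal scintillation.

          sigma_ln is the log-amplitude spread of the assumed turbulence
          model; a coded system (e.g., the paper's twofold turbo code)
          would wrap an encoder/decoder around this channel.
          """
          bits = rng.integers(0, 2, n)
          # Unit-mean lognormal fading amplitude
          amp = rng.lognormal(mean=-sigma_ln**2 / 2, sigma=sigma_ln, size=n)
          snr = 10 ** (snr_db / 10)
          noise = rng.normal(0.0, 1.0 / np.sqrt(snr), n)
          rx = bits * amp + noise
          decided = rx > 0.5               # fixed mid-amplitude threshold
          return np.mean(decided != bits)

      for snr_db in (6, 10, 14):
          print(snr_db, "dB ->", ook_ber_lognormal(snr_db))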

  8. CELCAP: A Computer Model for Cogeneration System Analysis

    NASA Technical Reports Server (NTRS)

    1985-01-01

    A description of the CELCAP cogeneration analysis program is presented. A detailed description of the methodology used by the Naval Civil Engineering Laboratory in developing the CELCAP code and the procedures for analyzing cogeneration systems for a given user are given. The four engines modeled in CELCAP are: gas turbine with exhaust heat boiler, diesel engine with waste heat boiler, single automatic-extraction steam turbine, and back-pressure steam turbine. Both the design-point and part-load performances are taken into account in the engine models. The load model describes how the hourly electric and steam demand of the user is represented by 24 hourly profiles. The economic model describes how the annual and life-cycle operating costs, which include the costs of fuel, purchased electricity, and operation and maintenance of engines and boilers, are calculated. The CELCAP code structure and the principal functions of the code are described to show how the various components of the code are related to each other. Three examples of the application of CELCAP are given to illustrate the versatility of the code. The examples represent cases of system selection, system modification, and system optimization.
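
    A toy version of the hourly bookkeeping the load and economic models perform: an engine follows the electric load profile, recovered exhaust heat offsets boiler steam, and the residuals are priced. All profiles, ratings, and prices are invented placeholders, not CELCAP data.

      # Schematic of the cost accounting a cogeneration analysis performs.
      HOURS = range(24)
      elec_demand = [300 + 150 * (8 <= h <= 18) for h in HOURS]    # kW
      steam_demand = [900 + 300 * (6 <= h <= 20) for h in HOURS]   # kg/h

      GT_ELEC_CAP = 350.0         # kW, gas turbine rating (assumed)
      HEAT_PER_KWH = 2.2          # kg steam recovered per kWh (assumed)
      FUEL_PER_KWH = 0.105        # $ fuel per kWh generated (assumed)
      GRID_PRICE = 0.14           # $ per purchased kWh (assumed)
      BOILER_COST_PER_KG = 0.012  # $ per kg supplemental steam (assumed)

      cost = 0.0
      for h in HOURS:
          gen = min(elec_demand[h], GT_ELEC_CAP)    # engine follows electric load
          bought = elec_demand[h] - gen             # grid makes up the rest
          steam_recovered = gen * HEAT_PER_KWH      # exhaust-heat boiler
          supplemental = max(0.0, steam_demand[h] - steam_recovered)
          cost += (gen * FUEL_PER_KWH + bought * GRID_PRICE
                   + supplemental * BOILER_COST_PER_KG)
      print(f"daily operating cost: ${cost:,.2f}")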

  9. Galen-In-Use: using artificial intelligence terminology tools to improve the linguistic coherence of a national coding system for surgical procedures.

    PubMed

    Rodrigues, J M; Trombert-Paviot, B; Baud, R; Wagner, J; Meusnier-Carriot, F

    1998-01-01

    GALEN has developed a language-independent common reference model based on a medically oriented ontology, together with practical tools and techniques for managing healthcare terminology, including natural language processing. GALEN-IN-USE is the current phase, which applies the modelling and the tools to the development or updating of coding systems for surgical procedures in national coding centres co-operating within the European Federation of Coding Centres (EFCC), to create a language-independent knowledge repository for multicultural Europe. We used an integrated set of artificial intelligence terminology tools, the CLAssification Manager workbench, to process French professional medical language rubrics into intermediate dissections and then into the Grail reference ontology model representation. From this language-independent concept model representation we generate controlled French natural language. The French national coding centre is then able to retrieve the initial professional rubrics with different categories of concepts, to compare the professional language proposed by expert clinicians with the French generated controlled vocabulary, and to finalize the linguistic labels of the coding system in relation to the meanings of the conceptual system structure.

  10. Opening up Architectures of Software-Intensive Systems: A Functional Decomposition to Support System Comprehension

    DTIC Science & Technology

    2007-10-01

    Excerpt from the report's list of figures: ...Architecture; Figure 2. Eclipse Java Model; Figure 3. Eclipse Java Model at the Source Code Level; Figure 9. Java Source Code

  11. Centrifugal and Axial Pump Design and Off-Design Performance Prediction

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1995-01-01

    A meanline pump-flow modeling method has been developed to provide a fast capability for modeling pumps of cryogenic rocket engines. Based on this method, a meanline pump-flow code PUMPA was written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The design-point rotor efficiency and slip factors are obtained from empirical correlations to rotor-specific speed and geometry. The pump code can model axial, inducer, mixed-flow, and centrifugal pumps and can model multistage pumps in series. The rapid input setup and computer run time for this meanline pump flow code make it an effective analysis and conceptual design tool. The map-generation capabilities of the code provide the information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of PUMPA permit the user to do parametric design space exploration of candidate pump configurations and to provide head-flow maps for engine system evaluation.

  12. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance for generalized asynchronous DS-CDMA systems in Gaussian noise with Rayleigh fading. In this paper, and the sequel, new theoretical work is contributed that substantially enhances existing performance analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions and non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system model is used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper, and the sequel, support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding with minimal computational complexity.
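
    A small sketch of the spreading codes under comparison: the Sylvester construction of Walsh-Hadamard sequences, their orthogonality at zero lag, and the nonzero cross-correlation at other lags that asynchronous operation exposes (the motivation for ZCZ designs). The sequence length is arbitrary.

      import numpy as np

      def walsh_hadamard(n):
          """Return the n x n Walsh-Hadamard code matrix (n a power of 2)."""
          H = np.array([[1]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])   # Sylvester construction
          return H

      N = 8
      wh = walsh_hadamard(N)
      # Rows are mutually orthogonal at zero lag, which synchronous
      # CDMA exploits:
      print(wh @ wh.T)                          # N times the identity
      # Asynchronous users see nonzero cross-correlation at other lags,
      # which is why zero-correlation-zone constructions matter:
      x, y = wh[1], wh[2]
      print("cross-corr at lag 1:", np.dot(x, np.roll(y, 1)))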

  13. A Three-Phase Decision Model of Computer-Aided Coding for the Iranian Classification of Health Interventions (IRCHI).

    PubMed

    Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail

    2017-06-01

    Accurate coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool to improve clinical coding, in particular when a new classification is being developed and implemented. But determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements for each type, the available infrastructure, and the classification scheme. The aim of the study was the development of a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. First, a sample of existing CAC systems was reviewed. Then the feasibility of each CAC type was examined with regard to the prerequisites for its implementation. In the next step, a proper model was proposed according to the structure of the classification scheme and was implemented as an interactive system. There is a significant relationship between the level of assistance of a CAC system and its integration with electronic medical documents. Implementation of fully automated CAC systems is impossible due to the immature development of electronic medical records and problems in using language for medical documenting. So, a model was proposed to develop a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme and on the logic of decision making, to specify the characters of a code step by step through a web-based interactive user interface. It was composed of three phases to select the Target, Action, and Means, respectively, for an intervention. The proposed model suited the current status of clinical documentation and coding in Iran, as well as the structure of the new classification scheme. Our results show it was practical. However, the model needs to be evaluated in the next stage of the research.
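
    A toy rendering of the three-phase dialogue the model describes, selecting a Target, then an Action, then a Means, each choice narrowing the next, as a console prompt over a tiny hierarchy. The entries and code characters are invented examples, not actual IRCHI content.

      # Toy three-phase Target -> Action -> Means code assignment.
      SCHEME = {
          "eye lens": {                        # Target
              "extraction": {                  # Action
                  "phacoemulsification": "A",  # Means -> code character
                  "manual": "B",
              },
          },
          "knee joint": {
              "replacement": {"prosthesis": "C"},
          },
      }

      def choose(prompt, options):
          print(prompt)
          for i, opt in enumerate(options):
              print(f"  {i}: {opt}")
          return options[int(input("select> "))]

      def build_code():
          target = choose("Phase 1 - Target:", list(SCHEME))
          action = choose("Phase 2 - Action:", list(SCHEME[target]))
          means = choose("Phase 3 - Means:", list(SCHEME[target][action]))
          return f"{target}/{action}/{SCHEME[target][action][means]}"

      if __name__ == "__main__":
          print("assigned code:", build_code())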

  14. Modeling Guidelines for Code Generation in the Railway Signaling Context

    NASA Technical Reports Server (NTRS)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tools components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] are a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore need to be tailored to comply with the characteristics of each industrial context. Customization of these recommendations has been performed for the automotive control systems domain in order to enforce code generation [7]. The MAAB guidelines have been found profitable also in the aerospace/avionics sector [1] and have been adopted by the MathWorks Aerospace Leadership Council (MALC). General Electric Transportation Systems (GETS) is a well-known railway signaling systems manufacturer leading in Automatic Train Protection (ATP) systems technology. As part of an effort to adopt formal methods within its own development process, GETS decided to introduce system modeling by means of the MathWorks tools [2], and in 2008 chose to move to code generation. This article reports the experience of GETS in developing its own modeling standard by customizing the MAAB rules for the railway signaling domain and shows the result of this experience with a successful product development story.

  15. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and an associated computer code have been developed which compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.
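
    A minimal sketch of the reliability bookkeeping described above: the performance model supplies the mission timeline, and the reliability model evaluates the probability that a series system of components survives through each successive orbit correction. Failure rates and timing are illustrative assumptions.

      import math

      # Assumed constant failure rates (per hour) for a series system.
      LAMBDA = {
          "thruster_valve":  2e-6,
          "catalyst_bed":    1e-6,
          "pressurant_tank": 5e-8,
      }

      # Cumulative mission time at the end of each orbit correction,
      # as a performance model would supply it (hours, invented).
      burn_end_hours = [500, 2500, 6000, 8760]

      total_rate = sum(LAMBDA.values())
      for k, t in enumerate(burn_end_hours, start=1):
          R = math.exp(-total_rate * t)   # exponential series-system model
          print(f"P(corrections 1..{k} succeed) = {R:.5f}")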

  16. Coding response to a case-mix measurement system based on multiple diagnoses.

    PubMed

    Preyra, Colin

    2004-08-01

    To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.

  17. An Initial Study of the Sensitivity of Aircraft Vortex Spacing System (AVOSS) Spacing Sensitivity to Weather and Configuration Input Parameters

    NASA Technical Reports Server (NTRS)

    Riddick, Stephen E.; Hinton, David A.

    2000-01-01

    A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate reduced approach separation criteria needed to increase airport productivity. This report evaluates model sensitivity toward various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option, and wake demise definition), and post-processing techniques (rounding of provided spacing values, and controller time variance).

  18. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  19. Fundamentals, current state of the development of, and prospects for further improvement of the new-generation thermal-hydraulic computational HYDRA-IBRAE/LM code for simulation of fast reactor systems

    NASA Astrophysics Data System (ADS)

    Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.

    2016-02-01

    The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and explains why a new-generation HYDRA-IBRAE/LM code needs to be developed. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena that require detailed analysis and model development are singled out so that they can be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models to describe the processes that take place during steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and practical application of the code will make it possible in the near future to carry out computations to analyze the safety of potential NPP projects at a qualitatively higher level.

  20. Power optimization of wireless media systems with space-time block codes.

    PubMed

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-07-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission of multiple-transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
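
    The canonical space-time block code such a formulation builds on is the two-antenna Alamouti scheme; below is a short sketch of its encoding and linear combining over a quasi-static Rayleigh channel. The modulation, channel draw, and noise level are arbitrary choices, not the paper's system parameters.

      import numpy as np

      rng = np.random.default_rng(1)

      def alamouti_encode(s1, s2):
          # Two symbols over two antennas in two slots:
          # slot 1: (s1, s2); slot 2: (-s2*, s1*)
          return np.array([[s1, s2],
                           [-np.conj(s2), np.conj(s1)]])

      # Two random QPSK symbols
      syms = (rng.integers(0, 2, 2) * 2 - 1
              + 1j * (rng.integers(0, 2, 2) * 2 - 1)) / np.sqrt(2)
      # Quasi-static Rayleigh channel, one coefficient per antenna
      h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

      X = alamouti_encode(*syms)
      noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
      r = X @ h + noise                 # received sample per time slot

      # Linear combining restores orthogonality:
      s1_hat = np.conj(h[0]) * r[0] + h[1] * np.conj(r[1])
      s2_hat = np.conj(h[1]) * r[0] - h[0] * np.conj(r[1])
      gain = np.sum(np.abs(h) ** 2)
      print("sent:   ", np.round(syms, 3))
      print("decoded:", np.round(np.array([s1_hat, s2_hat]) / gain, 3))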

  1. Viewing hybrid systems as products of control systems and automata

    NASA Technical Reports Server (NTRS)

    Grossman, R. L.; Larson, R. G.

    1992-01-01

    The purpose of this note is to show how hybrid systems may be modeled as products of nonlinear control systems and finite state automata. By a hybrid system, we mean a network consisting of continuous, nonlinear control systems connected to discrete, finite state automata. Our point of view is that the automaton switches between the control systems, and that this switching is a function of the discrete input symbols or letters that it receives. We show how a nonlinear control system may be viewed as a pair consisting of a bialgebra of operators coding the dynamics, and an algebra of observations coding the state space. We also show that a finite automaton has a similar representation. A hybrid system is then modeled by taking suitable products of the bialgebras coding the dynamics and the observation algebras coding the state spaces.
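
    A concrete toy matching the note's picture of a hybrid system, an automaton that switches between continuous vector fields on threshold events, using a two-mode thermostat with invented constants:

      # Two-mode hybrid system: continuous dynamics per mode, discrete
      # switching on guard conditions. All constants are illustrative.
      MODES = {
          "heat": lambda T: 0.5 * (75.0 - T),   # dT/dt while heating
          "cool": lambda T: -0.1 * T,           # dT/dt while coasting
      }
      GUARDS = {  # current mode -> next mode, read from the state
          "heat": lambda T: "cool" if T >= 22.0 else "heat",
          "cool": lambda T: "heat" if T <= 18.0 else "cool",
      }

      T, mode, dt = 20.0, "cool", 0.01
      trace = []
      for step in range(3000):
          T += MODES[mode](T) * dt          # integrate current mode (Euler)
          mode = GUARDS[mode](T)            # automaton switches vector field
          trace.append((round(step * dt, 2), mode, round(T, 2)))

      print(trace[::600])                   # sample the hybrid trajectory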

  2. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed so that the system applies to a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  3. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed so that the system applies to a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  4. Finding Resolution for the Responsible Transparency of Economic Models in Health and Medicine.

    PubMed

    Padula, William V; McQueen, Robert Brett; Pronovost, Peter J

    2017-11-01

    The Second Panel on Cost-Effectiveness in Health and Medicine recommendations for the conduct, methodological practices, and reporting of cost-effectiveness analyses leave a number of questions unanswered with respect to the implementation of a transparent, open source code interface for economic models. Making economic model source code open could be positive and progressive for the field; however, several unintended consequences of this system should first be considered before its complete implementation. First, there is the concern regarding the intellectual property rights that modelers have to their analyses. Second, the open source code could make analyses more accessible to inexperienced modelers, leading to inaccurate or misinterpreted results. We propose several resolutions to these concerns. The field should establish a licensing system for open source code such that the model originators maintain control of the code's use and grant permissions to other investigators who wish to use it. The field should also be more forthcoming about the teaching of cost-effectiveness analysis in medical and health services education so that providers and other professionals are familiar with economic modeling and able to conduct analyses with open source code. These types of unintended consequences need to be fully considered before the field is prepared to move forward into an era of model transparency with open source code.

  5. Are Models Easier to Understand than Code? An Empirical Study on Comprehension of Entity-Relationship (ER) Models vs. Structured Query Language (SQL) Code

    ERIC Educational Resources Information Center

    Sanchez, Pablo; Zorrilla, Marta; Duque, Rafael; Nieto-Reyes, Alicia

    2011-01-01

    Models in Software Engineering are considered as abstract representations of software systems. Models highlight relevant details for a certain purpose, whereas irrelevant ones are hidden. Models are supposed to make system comprehension easier by reducing complexity. Therefore, models should play a key role in education, since they would ease the…

  6. ANN modeling of DNA sequences: new strategies using DNA shape code.

    PubMed

    Parbhane, R V; Tambe, S S; Kulkarni, B D

    2000-09-01

    Two new encoding strategies, namely, wedge and twist codes, which are based on the DNA helical parameters, are introduced to represent DNA sequences in artificial neural network (ANN)-based modeling of biological systems. The performance of the new coding strategies has been evaluated by conducting three case studies involving mapping (modeling) and classification applications of ANNs. The proposed coding schemes have been compared rigorously and shown to outperform the existing coding strategies especially in situations wherein limited data are available for building the ANN models.
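
    A sketch of what a helical-parameter encoding looks like in practice: each dinucleotide step maps to a numeric twist value, turning a sequence into a fixed-scale vector for ANN input. The numeric table below is a placeholder; the paper draws its wedge and twist values from published DNA helical data.

      # Dinucleotide twist angles in degrees -- placeholder values, not
      # the published helical parameters the paper uses.
      TWIST_DEG = {
          "AA": 35.6, "AT": 31.5, "AG": 27.7, "AC": 34.4,
          "TA": 36.0, "TT": 35.6, "TG": 34.5, "TC": 36.9,
          "GA": 36.9, "GT": 34.4, "GG": 33.7, "GC": 40.0,
          "CA": 34.5, "CT": 27.7, "CG": 29.8, "CC": 33.7,
      }

      def encode_twist(seq):
          """Represent a DNA sequence as its dinucleotide twist profile,
          a numeric vector normalized to (0, 1] for ANN input."""
          seq = seq.upper()
          return [TWIST_DEG[seq[i:i + 2]] / 40.0
                  for i in range(len(seq) - 1)]

      print(encode_twist("ATGCGTA"))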

  7. Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

    PubMed Central

    Preyra, Colin

    2004-01-01

    Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940

  8. Forecasting of construction and demolition waste in Brazil.

    PubMed

    Paz, Diogo Hf; Lafayette, Kalinny Pv

    2016-08-01

    The objective of this article is to develop a computerised tool (software) that facilitates the analysis of strategies for waste management on construction sites through the use of indicators of construction and demolition waste generation. The development involved the following steps: knowledge acquisition, structuring the system, coding, and system evaluation. The knowledge acquisition step aims to provide the inputs needed to represent that knowledge through models. The step of structuring the system presents the structuring and formalisation of knowledge for the development of the system, and has two stages: the construction of the conceptual model and the subsequent instantiation of the model. The coding step aims to implement (code) the conceptual model as a model executed by computer (digital). The results showed that the system is very useful and applicable on construction sites, helping to improve the quality of waste management and creating a database that will support new research. © The Author(s) 2016.

  9. A Three-Phase Decision Model of Computer-Aided Coding for the Iranian Classification of Health Interventions (IRCHI)

    PubMed Central

    Azadmanjir, Zahra; Safdari, Reza; Ghazisaeedi, Marjan; Mokhtaran, Mehrshad; Kameli, Mohammad Esmail

    2017-01-01

    Introduction: Accurate coded data in healthcare are critical. Computer-Assisted Coding (CAC) is an effective tool to improve clinical coding, in particular when a new classification is being developed and implemented. But determining the appropriate development method requires considering the specifications of existing CAC systems, the requirements for each type, the available infrastructure, and the classification scheme. Aim: The aim of the study was the development of a decision model for determining the accurate code of each medical intervention in the Iranian Classification of Health Interventions (IRCHI) that can be implemented as a suitable CAC system. Methods: First, a sample of existing CAC systems was reviewed. Then the feasibility of each CAC type was examined with regard to the prerequisites for its implementation. In the next step, a proper model was proposed according to the structure of the classification scheme and was implemented as an interactive system. Results: There is a significant relationship between the level of assistance of a CAC system and its integration with electronic medical documents. Implementation of fully automated CAC systems is impossible due to the immature development of electronic medical records and problems in using language for medical documenting. So, a model was proposed to develop a semi-automated CAC system based on hierarchical relationships between entities in the classification scheme and on the logic of decision making, to specify the characters of a code step by step through a web-based interactive user interface. It was composed of three phases to select the Target, Action, and Means, respectively, for an intervention. Conclusion: The proposed model suited the current status of clinical documentation and coding in Iran, as well as the structure of the new classification scheme. Our results show it was practical. However, the model needs to be evaluated in the next stage of the research. PMID:28883671

  10. Object-oriented approach for gas turbine engine simulation

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Felder, James L.

    1995-01-01

    An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
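
    A minimal sketch of the component-based design described above: each engine component is an object with a common run interface over a flow-state record, and an engine is just an ordered assembly of components. Class names, ports, and numbers are invented for illustration, not NPSS code.

      from dataclasses import dataclass

      @dataclass
      class FlowState:
          mdot: float   # mass flow, kg/s
          Tt: float     # total temperature, K
          Pt: float     # total pressure, kPa

      class Compressor:
          def __init__(self, pr, eff):
              self.pr, self.eff = pr, eff
          def run(self, inlet: FlowState) -> FlowState:
              gamma = 1.4   # ideal-gas temperature rise with efficiency
              t_ratio = 1 + (self.pr ** ((gamma - 1) / gamma) - 1) / self.eff
              return FlowState(inlet.mdot, inlet.Tt * t_ratio, inlet.Pt * self.pr)

      class Duct:
          def __init__(self, dp_frac):
              self.dp_frac = dp_frac
          def run(self, inlet: FlowState) -> FlowState:
              return FlowState(inlet.mdot, inlet.Tt, inlet.Pt * (1 - self.dp_frac))

      # An "engine" is an ordered collection of components -- new
      # configurations are assembled, not re-coded.
      engine = [Compressor(pr=12.0, eff=0.85), Duct(dp_frac=0.03)]
      state = FlowState(mdot=50.0, Tt=288.15, Pt=101.325)
      for component in engine:
          state = component.run(state)
      print(state)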

  11. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    NASA Astrophysics Data System (ADS)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

    Wavefront coding as an athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the added advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize how the system behaves in non-ideal temperature environments and supports verification of the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional process of structural/thermal/optical integrated analysis. Besides, we design and build the optical model and corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different degrees are applied to the whole system using the SolidWorks software, and the resulting changes in curvature, refractive index, and the distances between the lenses are obtained. Then, we import the deformed model into ZEMAX for ray tracing and obtain the changes in the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and the image restorability, which provides a basis and reference for the optimal design of wavefront coding athermal systems. The results show that the adaptability of the single-material infrared wavefront coding athermal system to an axial temperature gradient reaches an upper limit of temperature fluctuation of 60°C, much higher than that for a radial temperature gradient.
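
    A compact numerical illustration of why wavefront coding athermalizes: adding a cubic phase mask to a pupil makes the MTF far less sensitive to defocus, which is the first-order effect a temperature change produces. Grid size, mask strength, and defocus coefficients are arbitrary.

      import numpy as np

      N = 256
      x = np.linspace(-1, 1, N)
      X, Y = np.meshgrid(x, x)
      pupil = (X**2 + Y**2 <= 1.0).astype(float)   # circular aperture

      def mtf_slice(alpha, w20):
          """1-D MTF slice for cubic mask strength alpha and defocus
          coefficient w20 (both in waves at the pupil edge)."""
          phase = 2 * np.pi * (alpha * (X**3 + Y**3) + w20 * (X**2 + Y**2))
          field = pupil * np.exp(1j * phase)
          psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
          otf = np.abs(np.fft.fft2(psf))
          otf /= otf[0, 0]                          # normalize to DC
          return otf[0, :N // 2]

      for w20 in (0.0, 1.0, 2.0):                   # growing (thermal) defocus
          plain = mtf_slice(alpha=0.0, w20=w20)
          coded = mtf_slice(alpha=5.0, w20=w20)
          print(f"w20={w20}: mid-band MTF  plain={plain[N // 8]:.3f}"
                f"  coded={coded[N // 8]:.3f}")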

  12. Computer code for analyzing the performance of aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.

    1985-05-01

    A code called the Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides an ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting the experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.
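
    A toy energy ledger in the spirit of ATESSS: heat is injected on a seasonal schedule and recovered with a loss factor, with stochastic demand standing in for the code's stochastic system parameters. All numbers are invented.

      import numpy as np

      rng = np.random.default_rng(7)
      RECOVERY = 0.7                # recoverable fraction of stored heat (assumed)
      stored_MWh = 0.0
      delivered_total = 0.0
      for week in range(52):
          if week < 26:                                 # charging season
              injected = max(0.0, rng.normal(40, 10))   # MWh injected
              stored_MWh += injected
          else:                                         # discharge season
              demand = max(0.0, rng.normal(35, 15))     # stochastic demand, MWh
              delivered = min(demand, RECOVERY * stored_MWh)
              stored_MWh -= delivered / RECOVERY        # losses stay in aquifer
              delivered_total += delivered
      print("heat delivered over the year (MWh):", round(delivered_total, 1))
      print("end-of-year storage balance (MWh): ", round(stored_MWh, 1))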

  13. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
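
    A one-dimensional toy of the model-based least squares idea: the forward operator folds the coded mask and an assumed nonuniform source profile into a single system matrix, and the object is recovered by least squares. Everything here (mask, source shape, geometry) is illustrative, not the ORNL HFIR CG1D model.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 64
      mask = rng.integers(0, 2, 9)                # 1-D coded aperture (random)
      src = np.exp(-np.linspace(-2, 2, 9) ** 2)   # assumed source flux profile
      kernel = mask * src                         # source-weighted code

      # Circular-convolution forward model A: measurement = A @ object
      A = np.zeros((n, n))
      for i in range(n):
          for j, k in enumerate(kernel):
              A[i, (i + j - 4) % n] += k

      x_true = np.zeros(n)
      x_true[20:28] = 1.0                         # simple test object
      b = A @ x_true + 0.01 * rng.normal(size=n)  # noisy measurement
      x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
      err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
      print("relative reconstruction error:", round(err, 4))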

  14. Advances in modelling of condensation phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W.S.; Zaltsgendler, E.; Hanna, B.

    1997-07-01

    The physical parameters in the modelling of condensation phenomena in the CANDU reactor system codes are discussed. The experimental programs used for thermal-hydraulic code validation in the Canadian nuclear industry are briefly described. The modelling of vapour generation and in particular condensation plays a key role in modelling of postulated reactor transients. The condensation models adopted in the current state-of-the-art two-fluid CANDU reactor thermal-hydraulic system codes (CATHENA and TUF) are described. As examples of the modelling challenges faced, the simulation of a cold water injection experiment by CATHENA and the simulation of a condensation induced water hammer experiment by TUF are described.

  15. Understanding Engagement in Dementia Through Behavior. The Ethographic and Laban-Inspired Coding System of Engagement (ELICSE) and the Evidence-Based Model of Engagement-Related Behavior (EMODEB)

    PubMed Central

    Perugia, Giulia; van Berkel, Roos; Díaz-Boladeras, Marta; Català-Mallofré, Andreu; Rauterberg, Matthias; Barakova, Emilia

    2018-01-01

    Engagement in activities is of crucial importance for people with dementia. State-of-the-art assessment techniques rely exclusively on behavior observation to measure engagement in dementia. These techniques are either too general to grasp how engagement is naturally expressed through behavior or too complex to be traced back to an overall engagement state. We carried out a longitudinal study to develop a coding system of engagement-related behavior that could tackle these issues and to create an evidence-based model of engagement to make meaning of such a coding system. Fourteen elderly people with mild to moderate dementia took part in the study. They were involved in two activities: a game-based cognitive stimulation and a robot-based free play. The coding system was developed with a mixed approach: ethographic and Laban-inspired. First, we developed two ethograms to describe the behavior of participants in the two activities in detail. Then, we used Laban Movement Analysis (LMA) to identify a common structure to the behaviors in the two ethograms and unify them in a unique coding system. The inter-rater reliability (IRR) of the coding system proved to be excellent for cognitive games (kappa = 0.78) and very good for robot play (kappa = 0.74). From the scoring of the videos, we developed an evidence-based model of engagement. This was based on the most frequent patterns of body part organization (i.e., the way body parts are connected in movement) observed during activities. Each pattern was given a meaning in terms of engagement by making reference to the literature. The model was tested using structural equation modeling (SEM). It achieved an excellent goodness of fit and all the hypothesized relations between variables were significant. We called the coding system that we developed the Ethographic and Laban-Inspired Coding System of Engagement (ELICSE) and the model the Evidence-based Model of Engagement-related Behavior (EMODEB). To the best of our knowledge, the ELICSE and the EMODEB constitute the first formalization of engagement-related behavior for dementia that describes how behavior unfolds over time and what it means in terms of engagement. PMID:29881360
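
    The IRR figures quoted above (kappa = 0.78 and 0.74) are Cohen's kappa, observed agreement corrected for chance. A self-contained sketch with an invented two-rater example:

      import numpy as np

      def cohens_kappa(rater_a, rater_b):
          """Cohen's kappa for two raters' categorical codings."""
          a, b = np.asarray(rater_a), np.asarray(rater_b)
          cats = np.union1d(a, b)
          po = np.mean(a == b)                          # observed agreement
          pe = sum(np.mean(a == c) * np.mean(b == c)    # chance agreement
                   for c in cats)
          return (po - pe) / (1 - pe)

      # Invented example: two observers coding 12 clips into 3 behaviors
      r1 = ["gaze", "reach", "gaze", "idle", "reach", "gaze",
            "idle", "gaze", "reach", "gaze", "idle", "reach"]
      r2 = ["gaze", "reach", "gaze", "idle", "gaze", "gaze",
            "idle", "gaze", "reach", "gaze", "reach", "reach"]
      print(round(cohens_kappa(r1, r2), 2))   # -> 0.74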

  16. Understanding Engagement in Dementia Through Behavior. The Ethographic and Laban-Inspired Coding System of Engagement (ELICSE) and the Evidence-Based Model of Engagement-Related Behavior (EMODEB).

    PubMed

    Perugia, Giulia; van Berkel, Roos; Díaz-Boladeras, Marta; Català-Mallofré, Andreu; Rauterberg, Matthias; Barakova, Emilia

    2018-01-01

    Engagement in activities is of crucial importance for people with dementia. State-of-the-art assessment techniques rely exclusively on behavior observation to measure engagement in dementia. These techniques are either too general to grasp how engagement is naturally expressed through behavior or too complex to be traced back to an overall engagement state. We carried out a longitudinal study to develop a coding system of engagement-related behavior that could tackle these issues and to create an evidence-based model of engagement to make meaning of such a coding system. Fourteen elderly people with mild to moderate dementia took part in the study. They were involved in two activities: a game-based cognitive stimulation and a robot-based free play. The coding system was developed with a mixed approach: ethographic and Laban-inspired. First, we developed two ethograms to describe the behavior of participants in the two activities in detail. Then, we used Laban Movement Analysis (LMA) to identify a common structure to the behaviors in the two ethograms and unify them in a unique coding system. The inter-rater reliability (IRR) of the coding system proved to be excellent for cognitive games (kappa = 0.78) and very good for robot play (kappa = 0.74). From the scoring of the videos, we developed an evidence-based model of engagement. This was based on the most frequent patterns of body part organization (i.e., the way body parts are connected in movement) observed during activities. Each pattern was given a meaning in terms of engagement by making reference to the literature. The model was tested using structural equation modeling (SEM). It achieved an excellent goodness of fit and all the hypothesized relations between variables were significant. We called the coding system that we developed the Ethographic and Laban-Inspired Coding System of Engagement (ELICSE) and the model the Evidence-based Model of Engagement-related Behavior (EMODEB). To the best of our knowledge, the ELICSE and the EMODEB constitute the first formalization of engagement-related behavior for dementia that describes how behavior unfolds over time and what it means in terms of engagement.

  17. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. The accessibility of the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
    Based on the gap analysis results, we have made the following recommendations for code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.

  18. Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.

    System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating their system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.

  19. A Monte Carlo model system for core analysis and epithermal neutron beam design at the Washington State University Radiation Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, T.D. Jr.

    1996-05-01

    The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as a method for the neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP) and the BeamPort Shell Particle Transport Model (BSPT). The CASP model incorporates the S(α,β) thermal treatment and is run as a criticality problem, yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT model. The BSPT model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of the given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs, and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.

  20. Tailored Codes for Small Quantum Memories

    NASA Astrophysics Data System (ADS)

    Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.

    2017-12-01

    We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison using the observed error model of a recent seven-qubit trapped-ion experiment.
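
    The advantage of tailoring can be illustrated with a toy model far simpler than the stabilizer codes compared in the paper: a three-qubit repetition code corrects, by majority vote, only the error type it is oriented against, while any odd number of errors of the other type flips the logical qubit. The Monte Carlo sketch below (hypothetical error rates, not the paper's noise model) shows the tailored orientation winning under biased noise.

        import random

        def logical_failure(p_corrected, p_uncorrected, trials=100_000):
            """Toy 3-qubit repetition code: majority vote fails when two or
            more errors of the corrected type occur; any odd number of the
            uncorrected error type causes a logical error."""
            fails = 0
            for _ in range(trials):
                corrected = sum(random.random() < p_corrected for _ in range(3))
                uncorrected = sum(random.random() < p_uncorrected for _ in range(3))
                if corrected >= 2 or uncorrected % 2 == 1:
                    fails += 1
            return fails / trials

        random.seed(1)
        px, pz = 0.10, 0.01  # biased noise: bit flips dominate (hypothetical)
        print("tailored to the bias:", logical_failure(px, pz))   # corrects X
        print("tailored against it :", logical_failure(pz, px))   # corrects Z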

  1. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  2. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time-dependent, Favre-averaged, finite-volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinate (BFC) capability. Higher-order differencing methodologies such as the MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulence models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model the effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  3. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.
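
    As a toy illustration of the scenario-to-model idea (not the authors' formalism, which produces a provably equivalent formal model suitable for verification), the sketch below parses hypothetical restricted-natural-language scenarios of the form "in STATE, on EVENT, go to STATE" into a transition table that could seed downstream code generation.

        import re

        SCENARIOS = [
            "in idle, on start, go to sensing",
            "in sensing, on fault, go to safe_mode",
            "in sensing, on stop, go to idle",
        ]  # hypothetical requirements in restricted natural language

        PATTERN = re.compile(r"in (\w+), on (\w+), go to (\w+)")

        def build_model(scenarios):
            """Translate each scenario sentence into a transition-table entry."""
            model = {}
            for line in scenarios:
                state, event, target = PATTERN.fullmatch(line).groups()
                model.setdefault(state, {})[event] = target
            return model

        model = build_model(SCENARIOS)
        state = "idle"
        for event in ["start", "fault"]:
            state = model[state][event]
        print("final state:", state)  # safe_mode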

  4. RELAP-7 Closure Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Berry, R. A.; Martineau, R. C.

    The RELAP-7 code is the next-generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's and TRACE's capabilities and extends their analysis capabilities for all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed 7-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document provides a summary of the closure correlations that are currently implemented in the RELAP-7 code. The closure correlations include sub-grid models that describe interactions between the fluids and the flow channel, and interactions between the two phases.
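
    Closure correlations of this kind supply sub-grid quantities (wall friction, interfacial drag, heat transfer) to the averaged field equations. As a flavor of what implementing one looks like, the sketch below selects a single-phase Darcy friction factor by flow regime, using the classical laminar result and the Blasius correlation; it is an illustrative stand-in, not a correlation taken from RELAP-7 or TRACE.

        def wall_friction_factor(reynolds):
            """Darcy friction factor from classical single-phase correlations:
            64/Re for laminar flow, Blasius 0.316*Re**-0.25 for turbulent flow."""
            if reynolds <= 0.0:
                raise ValueError("Reynolds number must be positive")
            if reynolds < 2300.0:                 # laminar regime
                return 64.0 / reynolds
            return 0.316 * reynolds ** -0.25      # Blasius, up to Re ~ 1e5

        def wall_friction_pressure_gradient(rho, velocity, diameter, mu):
            """dP/dx source term fed back to a 1-D momentum equation."""
            re = rho * abs(velocity) * diameter / mu
            f = wall_friction_factor(re)
            return -f * rho * velocity * abs(velocity) / (2.0 * diameter)

        # Example: water-like fluid at 2 m/s in a 2 cm channel
        print(wall_friction_pressure_gradient(rho=1000.0, velocity=2.0,
                                              diameter=0.02, mu=1.0e-3))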

  5. Combining Thermal And Structural Analyses

    NASA Technical Reports Server (NTRS)

    Winegar, Steven R.

    1990-01-01

    Computer code makes programs compatible so stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
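
    A minimal sketch of the interfacing problem SNIP addresses: temperatures computed at thermal-model nodes must be assigned to structural elements that share no common numbering. The nearest-node lookup below is only a plausible stand-in (the abstract does not state SNIP's actual mapping scheme), with hypothetical node and element data.

        import math

        # Hypothetical SINDA-style thermal nodes: id -> ((x, y, z), temperature)
        thermal_nodes = {
            101: ((0.0, 0.0, 0.0), 350.0),
            102: ((1.0, 0.0, 0.0), 420.0),
            103: ((0.0, 1.0, 0.0), 390.0),
        }

        # Hypothetical NASTRAN-style element centroids: id -> (x, y, z)
        element_centroids = {1: (0.1, 0.1, 0.0), 2: (0.9, 0.2, 0.0)}

        def nearest_temperature(point):
            """Assign the temperature of the closest thermal node, so no
            node-to-element correlation between the models is required."""
            return min(thermal_nodes.values(),
                       key=lambda node: math.dist(point, node[0]))[1]

        thermal_loads = {eid: nearest_temperature(c)
                         for eid, c in element_centroids.items()}
        print(thermal_loads)  # {1: 350.0, 2: 420.0}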

  6. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images that have extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. We then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images and display an HDR image. We build an optical simulation model and obtain simulation images to verify the novel system.
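
    The exposure-fusion step can be sketched independently of the coded-aperture reconstruction: each LDR image is weighted toward well-exposed pixels and merged in the linear radiance domain. The triangular weighting and the two synthetic "images" below are illustrative choices, not the algorithm used by the authors.

        def fuse_hdr(images, exposures):
            """Merge LDR images (pixel values in [0, 1]) into relative radiance.
            Weights favor mid-range pixels; radiance estimate is value/exposure."""
            fused = []
            for pixels in zip(*images):
                num = den = 0.0
                for value, t in zip(pixels, exposures):
                    w = 1.0 - abs(2.0 * value - 1.0)  # triangular weight, peak at 0.5
                    num += w * (value / t)
                    den += w
                fused.append(num / den if den > 1e-9 else pixels[0] / exposures[0])
            return fused

        # Two hypothetical 4-pixel LDR captures at different exposure times
        short = [0.02, 0.10, 0.45, 0.98]   # 1 ms exposure
        long_ = [0.20, 0.85, 1.00, 1.00]   # 10 ms exposure, bright pixels clipped
        print(fuse_hdr([short, long_], [0.001, 0.010]))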

  7. General Electromagnetic Model for the Analysis of Complex Systems (GEMACS) Computer Code Documentation (Version 3). Volume 3, Part 4.

    DTIC Science & Technology

    1983-09-01

    GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3), prepared by the BDM Corporation. Final technical report, February 1981 - July 1983. [Only fragments of the report text survive extraction: a method section giving the electric field at a segment observation point due to source patch j, with t1 and t2 the tangent directions on the source patch.]

  8. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use a Terry turbine, which is composed of a wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories' original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code, and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided according to a reduced-order model, which was obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and the supersonic velocity at the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated against experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand RCIC behavior. Cases with two-phase flow at the turbine inlet will be pursued in future work.
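
    For a choked convergent nozzle and steam idealized as a perfect gas, the adiabatic expansion reduces to closed-form relations, and the sketch below evaluates the standard choked mass flow and isentropic exit velocity formulas. The inlet conditions and the fixed gamma, R, and cp values are simplifying assumptions for illustration, not numbers from the report.

        import math

        def choked_mass_flow(p0, t0, area, gamma=1.3, r_gas=461.5):
            """Ideal-gas choked mass flow through a nozzle throat:
            mdot = A*p0*sqrt(gamma/(R*T0))*(2/(gamma+1))**((gamma+1)/(2*(gamma-1)))."""
            term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
            return area * p0 * math.sqrt(gamma / (r_gas * t0)) * term

        def exit_velocity(t0, p0, p_exit, gamma=1.3, cp=1900.0):
            """Isentropic expansion from stagnation conditions to p_exit:
            v = sqrt(2*cp*T0*(1 - (p_exit/p0)**((gamma-1)/gamma)))."""
            return math.sqrt(2.0 * cp * t0 *
                             (1.0 - (p_exit / p0) ** ((gamma - 1.0) / gamma)))

        # Hypothetical inlet: 7 MPa, 560 K steam through a 1 cm^2 throat
        print("mdot  [kg/s]:", choked_mass_flow(7.0e6, 560.0, 1.0e-4))
        print("v_exit [m/s]:", exit_velocity(560.0, 7.0e6, 0.1e6))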

  9. Bayesian decision support for coding occupational injury data.

    PubMed

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them with the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of the SW and TW models, and various prediction strength thresholds, for autocoding and for filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, the TW model, and the SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data, as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data are useful for surveillance as well as for prevention activities that aim to make workplaces safer.
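
    A minimal sketch of the decision-support flow described above: a naive Bayes model scores candidate codes from injury-narrative words, autocodes a case when the top prediction is strong enough, and otherwise flags it for manual review along with its top-k choices. The tiny training set, the threshold, and the smoothing constants are hypothetical.

        import math
        from collections import Counter, defaultdict

        # Hypothetical labeled injury narratives: (words, assigned code)
        TRAIN = [
            ("fell from ladder".split(), "FALL"),
            ("slipped on wet floor".split(), "FALL"),
            ("cut hand on saw blade".split(), "CUT"),
            ("laceration from box cutter".split(), "CUT"),
        ]

        prior = Counter(code for _, code in TRAIN)
        word_counts = defaultdict(Counter)
        for words, code in TRAIN:
            word_counts[code].update(words)

        def scores(words):
            """Laplace-smoothed log-posterior (up to a constant) per code."""
            out = {}
            for code in prior:
                total = sum(word_counts[code].values())
                lp = math.log(prior[code] / len(TRAIN))
                for w in words:
                    lp += math.log((word_counts[code][w] + 1) / (total + 100))
                out[code] = lp
            return out

        def autocode(words, k=2, threshold=0.9):
            probs = {c: math.exp(v) for c, v in scores(words).items()}
            z = sum(probs.values())
            ranked = sorted(probs, key=probs.get, reverse=True)
            if probs[ranked[0]] / z >= threshold:
                return ("AUTO", ranked[0])
            return ("REVIEW", ranked[:k])  # human coder sees top-k choices

        print(autocode("fell off wet ladder".split()))   # autocoded as FALL
        print(autocode("injured by machine".split()))    # flagged for review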

  10. A CellML simulation compiler and code generator using ODE solving schemes

    PubMed Central

    2012-01-01

    Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach in which the system generates, in the first stage, the equation set associating the physiological model variable values at a certain time t with the values at t + Δt. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
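
    The core idea, generating simulation code from a declarative solving-scheme description, can be sketched in a few lines. Here a scheme specification (a plain dict standing in for the authors' XML language) is expanded into the update expressions for one time step and compiled on the fly; the two schemes and the output format are a hypothetical miniature, not the CellML Compiler's.

        # Hypothetical scheme descriptions; the real system uses an XML language.
        SCHEMES = {
            "euler": ["k1 = f(t, y)",
                      "y_next = y + dt * k1"],
            "heun":  ["k1 = f(t, y)",
                      "k2 = f(t + dt, y + dt * k1)",
                      "y_next = y + dt * (k1 + k2) / 2.0"],
        }

        def generate_stepper(scheme):
            """Emit Python source for one integration step of the named scheme."""
            lines = ["def step(f, t, y, dt):"]
            lines += ["    " + stmt for stmt in SCHEMES[scheme]]
            lines.append("    return y_next")
            return "\n".join(lines)

        source = generate_stepper("heun")
        namespace = {}
        exec(source, namespace)        # compile the generated simulation code
        step = namespace["step"]

        # One Heun step for dy/dt = -y from y(0) = 1
        print(source)
        print(step(lambda t, y: -y, 0.0, 1.0, 0.1))  # ~0.905, vs exp(-0.1)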

  11. An approach to the origin of self-replicating system. I - Intermolecular interactions

    NASA Technical Reports Server (NTRS)

    Macelroy, R. D.; Coeckelenbergh, Y.; Rein, R.

    1978-01-01

    The present paper deals with the characteristics and potentialities of a recently developed computer-based molecular modeling system. Some characteristics of current coding systems are examined and are extrapolated to the apparent requirements of primitive prebiological coding systems.

  12. The development of a thermal hydraulic feedback mechanism with a quasi-fixed point iteration scheme for control rod position modeling for the TRIGSIMS-TH application

    NASA Astrophysics Data System (ADS)

    Karriem, Veronica V.

    Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics, and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool that incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the data and fuel isotopic content and stores them for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations, as well as the thermal hydraulics modeling capability for the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures in the cross-section modeling for the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying the control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF. CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own computer language, performs its own function, and outputs a set of data. TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways, and there are no "off-the-shelf" codes that can model this design in its entirety. In particular, the PSBR has an open core design, which is cooled by natural convection. Combining several codes into a unique system brings many challenges, and it requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development in both PSBR thermal hydraulics and neutronics. Measured data are also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code. This was needed because the previous data were not generated with thermal hydraulic feedback, and the all-rods-out (ARO) position was used as the critical rod position. The B4C data were re-evaluated for this update. The data exchange between ADMARC-H and MCNP5 was modified. The basic core model is given flexibility to allow for various changes within the core model, and this feature was implemented in TRIGSIMS-TH. The PSBR core in the new code model can be expanded and changed. This allows the new code to be used as a modeling tool for design and analysis of future core loadings.
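
    The control-rod-position methodology amounts to an outer iteration: adjust the rod position until the coupled neutronics/thermal-hydraulics calculation returns k_eff = 1. The sketch below shows that outer loop with a mock linear k_eff function standing in for a full MCNP/ADMARC-H/CTF evaluation; the functional form, bounds, and tolerance are hypothetical.

        def k_eff(rod_position):
            """Mock coupled neutronics/TH evaluation (hypothetical): withdrawing
            the rod (larger position) adds reactivity."""
            return 0.97 + 0.08 * rod_position   # critical near 0.375

        def find_critical_position(lo=0.0, hi=1.0, tol=1.0e-5):
            """Bisect on rod position until k_eff is within tol of unity. In the
            real code each k_eff call is itself an iterated multi-physics run."""
            while True:
                mid = 0.5 * (lo + hi)
                k = k_eff(mid)
                if abs(k - 1.0) < tol:
                    return mid
                if k < 1.0:
                    lo = mid    # subcritical: withdraw further
                else:
                    hi = mid    # supercritical: insert further

        print("critical rod position:", find_critical_position())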

  13. Accuracy comparison among different machine learning techniques for detecting malicious codes

    NASA Astrophysics Data System (ADS)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM), and Neural Network classifiers for detecting malicious code have been compared for the proposed model. In the experiment, 400 benign files, 100 system files, and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, and achieved 95% and 82.8% for sensitivity and specificity, respectively.
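
    A minimal sketch of the comparison protocol, using scikit-learn implementations of the three classifiers on synthetic opcode-frequency vectors (the real study used operation codes extracted from 1,000 Android files; the feature distributions here are invented).

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        # Synthetic opcode-frequency vectors for benign vs malicious files
        X = np.vstack([rng.normal(0.0, 1.0, size=(500, 20)),
                       rng.normal(0.7, 1.0, size=(500, 20))])
        y = np.array([0] * 500 + [1] * 500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        models = {
            "Naive Bayes": GaussianNB(),
            "SVM": SVC(),
            "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            print(name, accuracy_score(y_te, model.predict(X_te)))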

  14. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  15. Automatic Testcase Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an onboard knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code that runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions, giving us more focused testing of those sections.
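
    The blackbox idea can be sketched without JPF: exhaustively enumerate every string a small input grammar derives, and feed the results to the system under test. The toy command grammar below is a hypothetical stand-in for the SCL grammar (which is far larger), and the enumeration is bounded because the grammar is non-recursive.

        # Toy input grammar (hypothetical stand-in for the SCL grammar):
        # dict keys are nonterminals; each alternative is a symbol sequence.
        GRAMMAR = {
            "script": [["cmd"], ["cmd", ";", "cmd"]],
            "cmd": [["set", "dev", "mode"]],
            "dev": [["heater"], ["valve"]],
            "mode": [["on"], ["off"]],
        }

        def expand(symbol):
            """Yield every terminal string derivable from symbol, exhaustively
            covering the specified input space."""
            if symbol not in GRAMMAR:              # terminal symbol
                yield symbol
                return
            for alternative in GRAMMAR[symbol]:
                partials = [""]
                for part in alternative:
                    partials = [p + " " + s if p else s
                                for p in partials for s in expand(part)]
                yield from partials

        scripts = list(expand("script"))
        print(len(scripts), "generated test inputs")   # 4 + 16 = 20
        print(scripts[:3])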

  16. An address geocoding method for improving rural spatial information infrastructure

    NASA Astrophysics Data System (ADS)

    Pan, Yuchun; Chen, Baisong; Lu, Zhou; Li, Shuhua; Zhang, Jingbo; Zhou, YanBing

    2010-11-01

    The transition of rural and agricultural management from divisional to integrated mode has highlighted the importance of data integration and sharing. Current data are mostly collected by specific departments to satisfy their own needs, with little consideration of wider potential uses. This has led to great differences in data format, semantics, and precision, even within the same area, which is a significant barrier to constructing an integrated rural spatial information system to support integrated management and decision-making. Considering the rural cadastral management system and postal zones, this paper designs a rural address geocoding method based on the rural cadastral parcel. It puts forward a geocoding standard that consists of an absolute position code, a relative position code, and an extended code. It designs a rural geocoding database model and an address collection and update model. Then, based on the rural address geocoding model, it proposes a data model for rural agricultural resources management. The results show that the address coding based on the postal code is stable and easy to memorize, the two-dimensional coding based on direction and distance is easy to locate and memorize, and the extended code enhances the extensibility and flexibility of the address geocoding.
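
    A toy illustration of the proposed three-part structure: an address code composed of an absolute part (the postal code), a relative part (direction and distance from the zone reference point), and an optional extended part. The field widths and encodings are invented for the sketch, not taken from the proposed standard.

        def encode_address(postal_code, direction_deg, distance_m, extension=""):
            """Compose a rural address geocode (hypothetical layout):
            absolute = postal code; relative = 3-digit bearing in degrees plus
            5-digit distance in metres; extended = free suffix for flexibility."""
            relative = f"{int(direction_deg) % 360:03d}{int(distance_m):05d}"
            return f"{postal_code}-{relative}" + (f"-{extension}" if extension else "")

        def decode_address(code):
            parts = code.split("-")
            return {
                "postal_code": parts[0],
                "direction_deg": int(parts[1][:3]),
                "distance_m": int(parts[1][3:]),
                "extension": parts[2] if len(parts) > 2 else "",
            }

        code = encode_address("100076", direction_deg=45, distance_m=1210,
                              extension="P07")
        print(code)                  # 100076-04501210-P07
        print(decode_address(code))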

  17. A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.

    PubMed

    Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H

    2001-03-01

    The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180-degree-geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms, and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement arises from the basic assumptions that the two codes use in their calculations. Both codes assume a "free" electron model for Compton interactions. This assumption underestimates the results and invalidates comparisons between the predicted and experimental spectra.

  18. A High Altitude Ionization Structure and Scintillation Model.

    DTIC Science & Technology

    1979-02-19

    [Only fragments of this report survive extraction.] The report describes the incorporation of a high-altitude ionization structure and convection model into an existing systems code, with the purpose of updating estimates of the scintillation effects of striation structuring. Figure residue: Fig. 16 shows isodensity contours of plasma density at t = 0 s, with an initial distribution that is Gaussian in y, centered at y = 12.1 km. A section titled "5. Implementation into an Existing Code" notes that the model can be implemented in any existing systems code that uses RANC phenomenology; modeling of striations is identified as a task for future work.

  19. Using Modified-ISS Model to Evaluate Medication Administration Safety During Bar Code Medication Administration Implementation in Taiwan Regional Teaching Hospital.

    PubMed

    Ma, Pei-Luen; Jheng, Yan-Wun; Jheng, Bi-Wei; Hou, I-Ching

    2017-01-01

    Bar code medication administration (BCMA) can reduce medication errors and promote patient safety. This research uses a modified information systems success model (M-ISS model) to evaluate nurses' acceptance of BCMA. The results showed moderate correlations between medication administration safety (MAS) and system quality, information quality, service quality, user satisfaction, and limited satisfaction.

  1. Fluid dynamic modeling of junctions in internal combustion engine inlet and exhaust systems

    NASA Astrophysics Data System (ADS)

    Chalet, David; Chesse, Pascal

    2010-10-01

    The modeling of the inlet and exhaust systems of an internal combustion engine is very important in order to evaluate engine performance. This paper presents new pressure loss models that can be included in a one-dimensional engine simulation code. In the first part, a CFD analysis is made in order to show the importance of the density in the modeling approach. The CFD code is then used, as a numerical test bench, for the development of the pressure loss models. The resulting coefficients depend on the geometrical characteristics of the junction, and an experimental validation is made with the use of a shock tube test bench. All the models are then included in the engine simulation code of the laboratory. The numerical calculation of unsteady compressible flow in each pipe of the inlet and exhaust systems is made, and the calculated engine torque is compared with experimental measurements.
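
    In a one-dimensional engine code, a junction model of this kind reduces to a loss coefficient applied to the local dynamic pressure, with the density dependence highlighted by the CFD study entering explicitly. The angle- and area-dependent coefficient below is a hypothetical placeholder, not one of the paper's correlations.

        import math

        def junction_loss_coefficient(branch_angle_deg, area_ratio):
            """Hypothetical geometric correlation: losses grow with branch
            angle and with flow-area contraction (area_ratio = A_out / A_in)."""
            angle = math.radians(branch_angle_deg)
            return (0.3 + 1.2 * math.sin(angle / 2.0) ** 2
                    + 0.4 * max(0.0, 1.0 - area_ratio))

        def junction_pressure_loss(rho, velocity, branch_angle_deg, area_ratio):
            """delta_p = K * rho * v**2 / 2, applied between 1-D pipe cells."""
            k = junction_loss_coefficient(branch_angle_deg, area_ratio)
            return k * 0.5 * rho * velocity ** 2

        # Exhaust-like gas entering a 90 degree junction at 80 m/s
        print(junction_pressure_loss(rho=0.6, velocity=80.0,
                                     branch_angle_deg=90.0, area_ratio=1.0))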

  2. An Infrastructure for UML-Based Code Generation Tools

    NASA Astrophysics Data System (ADS)

    Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.

    The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a means of coping with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach, which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models and also performs weaving of aspects that have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.

  3. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  4. The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions

    NASA Astrophysics Data System (ADS)

    Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.

    2016-01-01

    A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to the alpha particle, and photons. The code is written in the C++ programming language using object-oriented technology. It was first applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has been extensively used in various nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions. It was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, indicating the concept and design of its coding and inputs. Details of the formulations for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission are mentioned.

  5. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1986-01-01

    A multiyear program is being performed with the objective of developing generic load models, with multiple levels of progressive sophistication, to simulate the composite (combined) load spectra that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress in the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. The code has had from its inception an expert system philosophy that can be added to throughout the program and in the future. The initial operational code is applicable only to turbine blade type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified discrete probabilistic distribution termed RASCAL, a barrier crossing method, and a Monte Carlo method. An initial load model was developed by Battelle and is currently used for the slowly varying duty cycle type loading. The intent is to use the model and related codes essentially in their current form for all loads that are based on measured or calculated data that follow a slowly varying profile.

  6. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. T. Clark; M. J. Russell; R. E. Spears

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components, with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach, available in Section III of the ASME Boiler and Pressure Vessel Code and the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the non-standard component finite element model has been matched to the loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under those loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
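
    The iteration described above can be reduced to an abstract fixed-point loop: the linear system analysis produces loads that depend on the component's flexibility factor, while the component finite element analysis produces a flexibility factor that depends on the applied load. Both response functions below are invented placeholders; only the structure of the loop is the point.

        def system_load(flexibility_factor):
            """Mock linear piping system analysis (hypothetical): a more
            flexible component attracts less moment."""
            return 1000.0 / (1.0 + 0.5 * flexibility_factor)

        def component_flexibility(load):
            """Mock nonlinear component FEA (hypothetical): flexibility grows
            with load as geometric/material nonlinearity develops."""
            return 1.0 + 0.002 * load

        def iterate(tol=1.0e-6, max_iter=100):
            k = 1.0  # initial flexibility factor guess
            for i in range(max_iter):
                moment = system_load(k)
                k_new = component_flexibility(moment)
                if abs(k_new - k) < tol:
                    return k_new, moment, i + 1
                k = k_new
            raise RuntimeError("flexibility/load iteration did not converge")

        k, moment, iters = iterate()
        print(f"converged in {iters} iterations: k = {k:.4f}, M = {moment:.1f}")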

  7. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    NASA Astrophysics Data System (ADS)

    Ganander, Hans

    2003-10-01

    For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing, and the relations between the aeroelastic properties of these new large turbines are changing. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models to be made relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues: the code and the design optimization. This technique can be used for the rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation and using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
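
    The workflow, symbolic derivation of the equations of motion followed by automatic emission of Fortran, can be reproduced in miniature with SymPy in place of Mathematica. The single-degree-of-freedom pendulum below is a hypothetical stand-in for one degree of freedom of a turbine model.

        import sympy as sp

        t, m, g, l = sp.symbols("t m g l", positive=True)
        theta = sp.Function("theta")(t)

        # Lagrangian of a point-mass pendulum (stand-in for one turbine DOF)
        L = (sp.Rational(1, 2) * m * l**2 * theta.diff(t) ** 2
             + m * g * l * sp.cos(theta))

        # Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = 0, solved for q''
        eq = sp.diff(L.diff(theta.diff(t)), t) - L.diff(theta)
        theta_ddot = sp.solve(sp.Eq(eq, 0), theta.diff(t, 2))[0]

        # Emit the acceleration as Fortran, as an automatic generator would
        accel = theta_ddot.subs(theta, sp.Symbol("q"))
        print(sp.fcode(sp.simplify(accel), assign_to="qddot"))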

  8. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.

    This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for both. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.

  9. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  10. Assessment of Effectiveness of Geologic Isolation Systems. Variable thickness transient ground-water flow model. Volume 2. Users' manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisenauer, A.E.

    1979-12-01

    A system of computer codes to aid in the preparation and evaluation of ground-water model input, as well as the computer codes and auxiliary programs developed and adapted for use in modeling major ground-water aquifers, is described. The ground-water model is interactive, rather than a batch-type model. Interactive models have been demonstrated to be superior to batch models in the ground-water field; for example, looking through reams of numerical lists can be avoided with the much superior graphical output forms or summary-type numerical output. The system of computer codes provides the flexibility to rapidly develop the model-required data files from engineering data and geologic maps, as well as to efficiently manipulate the voluminous data generated. Central to these codes is the Ground-water Model, which, given the boundary value problem, produces either the steady-state or transient time-plane solutions. A sizeable part of the available codes provides rapid evaluation of the results. Besides contouring the new water potentials, the model allows graphical review of streamlines of flow, travel times, and detailed comparisons of surfaces or points at designated wells. Use of the graphics scopes provides immediate but temporary displays, which can be used for evaluation of input and output and which can be reproduced easily on hard-copy devices such as a line printer, Calcomp plotter, and image photographs.

  11. Fostering Team Awareness in Earth System Modeling Communities

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.; Lawson, A.; Strong, S.

    2009-12-01

    Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations - e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre and have started to develop tools to overcome them. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by extracting the data in an existing project code repository. The tool presents the results of this analysis to modelers and model users in a number of ways: recommendations for who has expertise on particular code modules, suggestions for code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.

  12. A Verification-Driven Approach to Traceability and Documentation for Auto-Generated Mathematical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W.; Fischer, Bernd

    2009-01-01

    Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.

  13. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling, and through simulation to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment that lets programmers easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  14. System analysis with improved thermo-mechanical fuel rod models for modeling current and advanced LWR materials in accident scenarios

    NASA Astrophysics Data System (ADS)

    Porter, Ian Edward

    A nuclear reactor systems code has the ability to model the system response in an accident scenario based on known initial conditions at the onset of the transient. However, these codes have tended to lack the detailed thermo-mechanical fuel rod response models needed for accurate prediction of fuel rod failure. This proposed work will couple today's most widely used steady-state (FRAPCON) and transient (FRAPTRAN) fuel rod models with the systems code TRACE for best-estimate modeling of system response in accident scenarios such as a loss of coolant accident (LOCA). In doing so, code modifications will be made to model gamma heating in LWRs during steady-state and accident conditions and to improve fuel rod thermal/mechanical analysis by allowing axial nodalization of burnup-dependent phenomena such as swelling, cladding creep and oxidation. With the ability to model both burnup-dependent parameters and transient fuel rod response, a fuel dispersal study will be conducted using a hypothetical accident scenario under both PWR and BWR conditions to determine the amount of fuel dispersed under varying conditions. Because the fuel fragmentation size and internal rod pressure are both dependent on burnup, this analysis will be conducted at the beginning, middle and end of cycle to examine the effects that cycle time can have on fuel rod failure and dispersal. Current fuel rod and system codes used by the Nuclear Regulatory Commission (NRC) are compilations of legacy codes with only commonly used light water reactor materials: uranium dioxide (UO2), mixed oxide (U/PuO2) and zirconium alloys. However, the events at Fukushima Daiichi and Three Mile Island have shown the need for exploration into advanced materials possessing improved accident tolerance. This work looks to further modify the NRC codes to include silicon carbide (SiC), an advanced cladding material proposed by current DOE-funded research on accident tolerant fuels (ATF). Several additional fuels will also be analyzed, including uranium nitride (UN), uranium carbide (UC) and uranium silicide (U3Si2). Focusing on the system response in an accident scenario, an emphasis is placed on the fracture mechanics of the ceramic cladding, with the fuel rods designed to eliminate pellet-cladding mechanical interaction (PCMI). The time to failure, and how much of the fuel in the reactor fails, with an advanced fuel design will be analyzed and compared to the current UO2/Zircaloy design using a full-scale reactor model.

  15. An Integrated Model of Cognitive Control in Task Switching

    ERIC Educational Resources Information Center

    Altmann, Erik M.; Gray, Wayne D.

    2008-01-01

    A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task. The main constraint on access to the current task code is proactive interference from old task codes. This interference and the mechanisms that…

  16. Development of the FHR advanced natural circulation analysis code and application to FHR safety analysis

    DOE PAGES

    Guo, Z.; Zweibaum, N.; Shao, M.; ...

    2016-04-19

    The University of California, Berkeley (UCB) is performing thermal hydraulics safety analysis to develop the technical basis for design and licensing of fluoride-salt-cooled, high-temperature reactors (FHRs). FHR designs investigated by UCB use natural circulation for emergency, passive decay heat removal when normal decay heat removal systems fail. The FHR advanced natural circulation analysis (FANCY) code has been developed for assessment of passive decay heat removal capability and safety analysis of these innovative system designs. The FANCY code uses a one-dimensional, semi-implicit scheme to solve the pressure-linked mass, momentum and energy conservation equations. Graph theory is used to automatically generate a staggered mesh for complicated pipe network systems. Heat structure models have been implemented for three types of boundary conditions (Dirichlet, Neumann and Robin). Heat structures can be composed of several layers of different materials, and are used for simulation of heat structure temperature distribution and heat transfer rate. Control models are used to simulate sequences of events or trips of safety systems. A proportional-integral controller is also used to automatically bring thermal hydraulic systems to desired steady-state conditions. A point kinetics model is used to model reactor kinetics behavior with temperature reactivity feedback. The underlying large sparse linear systems in these models are efficiently solved using direct and iterative solvers provided by the SuperLU code on high-performance machines. Input interfaces are designed to increase the flexibility of simulation for complicated thermal hydraulic systems. This paper mainly focuses on the methodology used to develop the FANCY code, and safety analysis of the Mark 1 pebble-bed FHR under development at UCB is performed.
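
    To make the proportional-integral control idea above concrete, here is a minimal sketch that drives a one-node lumped thermal model to a setpoint with a PI controller. The energy balance, capacitance, loss coefficient and gains are all invented for illustration; FANCY's actual models are far more detailed.

    # One-node energy balance: C dT/dt = q_control - UA (T - T_amb),
    # with q_control set by a PI law on the setpoint error.
    dt, t_end = 0.1, 2000.0                  # time step and simulated span (s)
    T, T_set, T_amb = 500.0, 650.0, 300.0    # temperatures (K), assumed
    C, UA = 1.0e4, 50.0                      # capacity (J/K), loss coeff (W/K)
    Kp, Ki = 25.0, 1.5                       # illustrative PI gains
    integral = 0.0

    for _ in range(int(t_end / dt)):
        error = T_set - T
        integral += error * dt
        q = max(Kp * error + Ki * integral, 0.0)   # heater power demand (W)
        T += dt / C * (q - UA * (T - T_amb))       # explicit Euler update

    print(f"temperature after {t_end:.0f} s: {T:.1f} K (setpoint {T_set} K)")

    The integral term is what removes the steady-state offset: at equilibrium it supplies exactly the heat lost to ambient, which is the role such a controller plays in initializing a thermal-hydraulic system at a desired steady state.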

  17. CMCpy: Genetic Code-Message Coevolution Models in Python

    PubMed Central

    Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.

    2013-01-01

    Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
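
    The computational core of such models is a leading-eigenpair problem: the leading eigenvalue of the replication-mutation matrix is the mean population fitness and its eigenvector the stationary genotype distribution. Below is a minimal power-iteration sketch of that underlying mathematics (not CMCpy's API); the two-site genotype space, fitnesses and mutation rate are illustrative assumptions.

    import numpy as np

    mu = 0.05                              # per-site mutation probability (assumed)
    f = np.array([1.0, 0.6, 0.6, 0.3])     # fitnesses of genotypes 00, 01, 10, 11

    def q_entry(i, j, L=2):
        """Probability that genotype j replicates into genotype i."""
        d = bin(i ^ j).count("1")          # Hamming distance between genotypes
        return mu ** d * (1.0 - mu) ** (L - d)

    Q = np.array([[q_entry(i, j) for j in range(4)] for i in range(4)])
    W = Q @ np.diag(f)                     # replication-mutation matrix

    x = np.full(4, 0.25)                   # start from a uniform population
    for _ in range(500):                   # power iteration
        x = W @ x
        x /= x.sum()

    mean_fitness = (W @ x).sum()           # leading eigenvalue for normalized x
    print("stationary frequencies:", np.round(x, 4))
    print("mean fitness:", round(float(mean_fitness), 4))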

  18. The Marriage of Residential Energy Codes and Rating Systems: Conflict Resolution or Just Conflict?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Zachary T.; Mendon, Vrushali V.

    2014-08-21

    After three decades of coexistence at a distance, model residential energy codes and residential energy rating systems have come together in the 2015 International Energy Conservation Code (IECC). At the October 2013 International Code Council Public Comment Hearing, a new compliance path based on an Energy Rating Index was added to the IECC. Although not specifically named in the code, RESNET's HERS rating system is the likely candidate Index for most jurisdictions. While HERS has been a mainstay of various beyond-code programs for many years, its direct incorporation into the most popular model energy code raises questions about the equivalence of a HERS-based compliance path and the traditional IECC performance compliance path, especially because the two approaches use different efficiency metrics, are governed by different simulation rules, and have different scopes with regard to energy-impacting house features. A detailed simulation analysis of more than 15,000 house configurations reveals a very large range of HERS Index values that achieve equivalence with the IECC's performance path. This paper summarizes the results of that analysis and evaluates those results against the specific Energy Rating Index values required by the 2015 IECC. Based on the home characteristics most likely to result in disparities between HERS-based compliance and performance path compliance, potential impacts on the compliance process, state and local adoption of the new code, energy efficiency in the next generation of homes subject to this new code, and future evolution of model code formats are discussed.

  19. Psychometric Properties of the System for Coding Couples’ Interactions in Therapy - Alcohol

    PubMed Central

    Owens, Mandy D.; McCrady, Barbara S.; Borders, Adrienne Z.; Brovko, Julie M.; Pearson, Matthew R.

    2014-01-01

    Few systems are available for coding in-session behaviors for couples in therapy. Alcohol Behavior Couples Therapy (ABCT) is an empirically supported treatment, but little is known about its mechanisms of behavior change. In the current study, an adapted version of the Motivational Interviewing for Significant Others coding system was developed into the System for Coding Couples’ Interactions in Therapy – Alcohol (SCCIT-A), which was used to code couples’ interactions and behaviors during ABCT. Results showed good inter-rater reliability of the SCCIT-A and provided evidence that the SCCIT-A may be a promising measure for understanding couples in therapy. A three factor model of the SCCIT-A was examined (Positive, Negative, and Change Talk/Counter-Change Talk) using a confirmatory factor analysis, but model fit was poor. Due to poor model fit, ratios were computed for Positive/Negative ratings and for Change Talk/Counter-Change Talk codes based on previous research in the couples and Motivational Interviewing literature. Post-hoc analyses examined correlations between specific SCCIT-A codes and baseline characteristics and indicated some concurrent validity. Correlations were run between ratios and baseline characteristics; ratios may be an alternative to using the factors from the SCCIT-A. Reliability and validity analyses suggest that the SCCIT-A has the potential to be a useful measure for coding in-session behaviors of both partners in couples therapy and could be used to identify mechanisms of behavior change for ABCT. Additional research is needed to improve the reliability of some codes and to further develop the SCCIT-A and other measures of couples’ interactions in therapy. PMID:25528049

  20. Model-Driven Engineering: Automatic Code Generation and Beyond

    DTIC Science & Technology

    2015-03-01

    and Weblogic as well as cloud environments such as Microsoft Azure and Amazon Web Services®. Finally, while the generated code has dependencies on... code generation in the context of the full system lifecycle from development to sustainment. Acquisition programs in government or large commercial... Acquirers are concerned with the full system lifecycle, and they need confidence that the development methods will enable the system to meet the functional

  1. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi one-dimensional model of the two-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. The model also implements the variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated in both the time and frequency domains against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code, written in FORTRAN, is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed-loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified mixed compression inlet model with the same steady-state and dynamic performance as the legacy code that can also be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.
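
    The compressible-flow relation behind the oblique-shock modeling mentioned above is the standard theta-beta-Mach equation. A minimal sketch of solving it for the weak-branch wave angle follows; the freestream Mach number, ramp angle, and the 64-degree search cap are illustrative assumptions, not values from this inlet model.

    import math

    def theta_from_beta(M, beta, gamma=1.4):
        """Flow deflection angle produced by a wave angle beta (radians)."""
        return math.atan(2.0 / math.tan(beta)
                         * (M ** 2 * math.sin(beta) ** 2 - 1.0)
                         / (M ** 2 * (gamma + math.cos(2.0 * beta)) + 2.0))

    def weak_shock_angle(M, theta, gamma=1.4):
        """Bisect for the weak-branch wave angle; the 64-degree cap keeps the
        search on the weak branch for this Mach number (an assumption)."""
        lo = math.asin(1.0 / M) + 1e-9     # Mach angle: attached-shock lower bound
        hi = math.radians(64.0)
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if theta_from_beta(M, mid, gamma) < theta:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    M, ramp = 2.5, math.radians(10.0)      # illustrative freestream and ramp angle
    print(f"wave angle: {math.degrees(weak_shock_angle(M, ramp)):.2f} deg")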

  2. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
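
    The communication overhead discussed above comes from halo (ghost-cell) exchanges between neighbouring subdomains. As a minimal sketch of that pattern, here is a CPU/MPI illustration using mpi4py with a one-dimensional decomposition; the grid size is an assumption and this is not SMAUG+'s GPU implementation.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nx, ny = 256, 256                          # local subdomain size (assumed)
    u = np.full((nx + 2, ny), float(rank))     # one ghost row above and below

    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Send the last interior row down and receive the upper neighbour's row
    # into the top ghost row; then the mirror exchange in the other direction.
    comm.Sendrecv(u[-2].copy(), dest=down, sendtag=0,
                  recvbuf=u[0], source=up, recvtag=0)
    comm.Sendrecv(u[1].copy(), dest=up, sendtag=1,
                  recvbuf=u[-1], source=down, recvtag=1)

    # After the exchange, each interior stencil can be updated independently,
    # which is what lets each GPU (or MPI rank) advance its own block.

    Run with, e.g., mpiexec -n 4 python halo.py; the cost of these exchanges relative to the per-block compute is exactly the granularity trade-off the benchmark results describe.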

  3. Portal of medical data models: information infrastructure for medical research and healthcare.

    PubMed

    Dugas, Martin; Neuhaus, Philipp; Meidt, Alexandra; Doods, Justin; Storck, Michael; Bruland, Philipp; Varghese, Julian

    2016-01-01

    Information systems are a key success factor for medical research and healthcare. Currently, most of these systems apply heterogeneous and proprietary data models, which impede data exchange and integrated data analysis for scientific purposes. Due to the complexity of medical terminology, the overall number of medical data models is very high. At present, the vast majority of these models are not available to the scientific community. The objective of the Portal of Medical Data Models (MDM, https://medical-data-models.org) is to foster sharing of medical data models. MDM is a registered European information infrastructure. It provides a multilingual platform for exchange and discussion of data models in medicine, both for medical research and healthcare. The system is developed in collaboration with the University Library of Münster to ensure sustainability. A web front-end enables users to search, view, download and discuss data models. Eleven different export formats are available (ODM, PDF, CDA, CSV, MACRO-XML, REDCap, SQL, SPSS, ADL, R, XLSX). MDM contents were analysed with descriptive statistics. MDM contains 4387 current versions of data models (in total 10,963 versions). 2475 of these models belong to oncology trials. The most common keyword (n = 3826) is 'Clinical Trial'; most frequent diseases are breast cancer, leukemia, lung and colorectal neoplasms. Most common languages of data elements are English (n = 328,557) and German (n = 68,738). Semantic annotations (UMLS codes) are available for 108,412 data items, 2453 item groups and 35,361 code list items. Overall 335,087 UMLS codes are assigned with 21,847 unique codes. Few UMLS codes are used several thousand times, but there is a long tail of rarely used codes in the frequency distribution. Expected benefits of the MDM portal are improved and accelerated design of medical data models by sharing best practice, more standardised data models with semantic annotation and better information exchange between information systems, in particular Electronic Data Capture (EDC) and Electronic Health Records (EHR) systems. Contents of the MDM portal need to be further expanded to reach broad coverage of all relevant medical domains. Database URL: https://medical-data-models.org.

  4. Modeling and simulation of CANDU reactor and its regulating system

    NASA Astrophysics Data System (ADS)

    Javidnia, Hooman

    Analytical computer codes are indispensable tools in the design, optimization, and control of nuclear power plants. Numerous codes have been developed to perform different types of analyses related to nuclear power plants. A large number of these codes are designed to perform safety analyses. In the context of safety analyses, the control system is often neglected. Although there are good reasons for such a decision, that does not mean that the study of control systems in nuclear power plants should be neglected altogether. In this thesis, a proof-of-concept code is developed as a tool that can be used in the design, optimization, and operation stages of the control system. The main objective in the design of this computer code is providing a tool that is easy to use by its target audience and is capable of producing high-fidelity results that can be trusted to design the control system and optimize its performance. Since the overall plant control system covers a very wide range of processes, the focus of this thesis is on one particular module of the overall plant control system, namely the reactor regulating system. At the center of the reactor regulating system is the CANDU reactor. A nodal model for the reactor is used to represent the spatial neutronic kinetics of the core. The nodal model produces better results than the point kinetics model that is often used in the design and analysis of control systems for nuclear reactors, since it can capture spatial effects to some extent, although it is not as detailed as finite difference methods. The criteria for choosing a nodal model of the core are: (1) the model should provide more detail than point kinetics and capture spatial effects; (2) it should not be so complex or overly detailed as to slow down the simulation and provide details that are extraneous or unnecessary for a control engineer. Other than the reactor itself, there are auxiliary models that describe the dynamics of different phenomena related to the transfer of energy from the core. The main function of the reactor regulating system is to control the power of the reactor. This is achieved by using a set of detectors, reactivity devices, and digital control algorithms. Three main reactivity devices that are activated during short-term or intermediate-term transients are modeled in this thesis. The main elements of the digital control system are implemented in accordance with the program specifications for the actual control system in CANDU reactors. The simulation results are validated against requirements of the reactor regulating system, actual plant data, and pre-validated data from other computer codes. The validation process shows that the simulation results can be trusted in making engineering decisions regarding the reactor regulating system and in predicting system performance in response to upset conditions or disturbances. KEYWORDS: CANDU reactors, reactor regulating system, nodal model, spatial kinetics, reactivity devices, simulation.
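
    For reference, the point-kinetics baseline that the nodal model improves upon can be written in a few lines. The sketch below integrates the one-delayed-group point-kinetics equations with explicit Euler; the kinetics constants and the reactivity step are typical thermal-reactor values assumed for illustration, not CANDU data from this thesis.

    beta, lam, Lam = 0.0065, 0.08, 1.0e-3   # delayed fraction, precursor decay
                                            # constant (1/s), generation time (s)
    rho = 0.001                             # step reactivity insertion (assumed)

    dt, steps = 1.0e-4, 50000               # explicit Euler, 5 s of transient
    n = 1.0                                 # relative power
    C = beta * n / (Lam * lam)              # equilibrium precursor concentration

    for _ in range(steps):
        dn = (rho - beta) / Lam * n + lam * C
        dC = beta / Lam * n - lam * C
        n += dt * dn
        C += dt * dC

    print(f"relative power after {dt * steps:.1f} s: {n:.3f}")

    A nodal model replaces the single pair (n, C) with one such balance per core node plus inter-node coupling terms, which is what lets it resolve the spatial flux tilts that point kinetics cannot.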

  5. Village power options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lilienthal, P.

    1997-12-01

    This paper describes three different computer codes that have been written to model village power applications. The reasons driving the development of these codes include: the existence of limited field data; the ability to model diverse applications; the ability of models to allow cost and performance comparisons; and the insights into cost structures that simulations generate. The models discussed are: Hybrid2, a public code that provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER, the hybrid optimization model for electric renewables, which provides economic screening for sensitivity analyses; and VIPOR, the village power model, which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.

  6. PV_LIB Toolbox v. 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-09

    PV_LIB comprises a library of Matlab code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature with the appropriate references included in comments within the code.

  7. co2amp: A software program for modeling the dynamics of ultrashort pulses in optical systems with CO 2 amplifiers

    DOE PAGES

    Polyanskiy, Mikhail N.

    2015-01-01

    We describe a computer code for simulating the amplification of ultrashort mid-infrared laser pulses in CO 2 amplifiers and their propagation through arbitrary optical systems. This code is based on a comprehensive model that includes an accurate consideration of the CO 2 active medium and a physical optics propagation algorithm, and takes into account the interaction of the laser pulse with the material of the optical elements. Finally, the application of the code for optimizing an isotopic regenerative amplifier is described.

  8. Automatic mathematical modeling for space application

    NASA Technical Reports Server (NTRS)

    Wang, Caroline K.

    1987-01-01

    A methodology for automatic mathematical modeling is described. The major objective is to create a very friendly environment for engineers to design, maintain and verify their model and also automatically convert the mathematical model into FORTRAN code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation mathematical model called Propulsion System Automatic Modeling (PSAM). PSAM provides a very friendly and well organized environment for engineers to build a knowledge base for base equations and general information. PSAM contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. PSAM is then able to automatically generate the model and the FORTRAN code. A future goal is to download the FORTRAN code to the VAX/VMS system for conventional computation.

  9. Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer Codes

    NASA Technical Reports Server (NTRS)

    Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.

    1989-01-01

    The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive equations of motion--Kane's method for TREETOPS and the Lagrangian and Newton-Euler methods, respectively, for the other two codes--are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are useful also as information on multi-body, flexible, pointing payload dynamics.

  10. Performance Analysis and Optimization on the UCLA Parallel Atmospheric General Circulation Model Code

    NASA Technical Reports Server (NTRS)

    Lou, John; Ferraro, Robert; Farrara, John; Mechoso, Carlos

    1996-01-01

    An analysis is presented of several factors influencing the performance of a parallel implementation of the UCLA atmospheric general circulation model (AGCM) on massively parallel computer systems. Several modifications to the original parallel AGCM code aimed at improving its numerical efficiency, interprocessor communication cost, load balance, and single-node code performance are discussed.

  11. National Combustion Code: Parallel Performance

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2001-01-01

    This report discusses the National Combustion Code (NCC). The NCC is an integrated system of codes for the design and analysis of combustion systems. The advanced features of the NCC meet designers' requirements for model accuracy and turn-around time. The fundamental features at the inception of the NCC were parallel processing and unstructured mesh. The design and performance of the NCC are discussed.

  12. Moving from Batch to Field Using the RT3D Reactive Transport Modeling System

    NASA Astrophysics Data System (ADS)

    Clement, T. P.; Gautam, T. R.

    2002-12-01

    The public domain reactive transport code RT3D (Clement, 1997) is a general-purpose numerical code for solving coupled, multi-species reactive transport in saturated groundwater systems. The code uses MODFLOW to simulate flow and several modules of MT3DMS to simulate the advection and dispersion processes. RT3D employs an operator-split strategy that allows the code to solve the coupled reactive transport problem in a modular fashion. The coupling between reaction and transport is defined through a separate module where the reaction equations are specified. The code supports a versatile user-defined reaction option that allows users to define their own reaction system through a Fortran-90 subroutine, known as the RT3D reaction package. Furthermore, a utility code known as BATCHRXN allows users to independently test and debug their reaction package. To analyze a new reaction system at batch scale, users should first run BATCHRXN to test the ability of their reaction package to model the batch data. After testing, the reaction package can simply be ported to the RT3D environment to study the model response under 1-, 2-, or 3-dimensional transport conditions. This paper presents example problems that demonstrate the methods for moving from batch- to field-scale simulations using the BATCHRXN and RT3D codes. The first example describes a simple first-order reaction system for simulating the sequential degradation of tetrachloroethene (PCE) and its daughter products. The second example uses a relatively complex reaction system for describing the multiple degradation pathways of tetrachloroethane (PCA) and its daughter products. References: 1) Clement, T.P., RT3D - A modular computer code for simulating reactive multi-species transport in 3-Dimensional groundwater aquifers, Battelle Pacific Northwest National Laboratory Research Report, PNNL-SA-28967, September 1997. Available at: http://bioprocess.pnl.gov/rt3d.htm.
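
    The batch stage of the first example reduces to a set of coupled first-order ODEs. A minimal sketch of that reaction system (PCE -> TCE -> DCE -> VC -> ethene) is shown below in Python rather than the Fortran-90 of an actual RT3D reaction package; the rate constants and initial concentration are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    k = {"PCE": 0.005, "TCE": 0.003, "DCE": 0.002, "VC": 0.001}  # 1/day, assumed

    def rates(t, y):
        """Sequential first-order decay chain; ethene accumulates at the end."""
        pce, tce, dce, vc, eth = y
        return [-k["PCE"] * pce,
                k["PCE"] * pce - k["TCE"] * tce,
                k["TCE"] * tce - k["DCE"] * dce,
                k["DCE"] * dce - k["VC"] * vc,
                k["VC"] * vc]

    y0 = [1.0, 0.0, 0.0, 0.0, 0.0]          # 1 mg/L of PCE at t = 0 (assumed)
    sol = solve_ivp(rates, (0.0, 1000.0), y0, t_eval=np.linspace(0, 1000, 11))
    for name, row in zip(["PCE", "TCE", "DCE", "VC", "ETH"], sol.y):
        print(f"{name}: {row[-1]:.4f} mg/L after 1000 days")

    Once a rate function like this reproduces the batch data, the same kinetics are what the user's Fortran reaction package expresses for RT3D's operator-split transport solve.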

  13. Reconciliation of international administrative coding systems for comparison of colorectal surgery outcome.

    PubMed

    Munasinghe, A; Chang, D; Mamidanna, R; Middleton, S; Joy, M; Penninckx, F; Darzi, A; Livingston, E; Faiz, O

    2014-07-01

    Significant variation in colorectal surgery outcomes exists between different countries. Better understanding of the sources of variable outcomes using administrative data requires alignment of differing clinical coding systems. We aimed to map similar diagnoses and procedures across administrative coding systems used in different countries. Administrative data were collected in a central database as part of the Global Comparators (GC) Project. In order to unify these data, a systematic translation of diagnostic and procedural codes was undertaken. Codes for colorectal diagnoses, resections, operative complications and reoperative interventions were mapped across the respective national healthcare administrative coding systems. Discharge data from January 2006 to June 2011 for patients who had undergone colorectal surgical resections were analysed to generate risk-adjusted models for mortality, length of stay, readmissions and reoperations. In all, 52,544 case records were collated from 31 institutions in five countries. Mapping of all the coding systems was achieved so that diagnoses and procedures from the participant countries could be compared. Using the aligned coding systems to develop risk-adjusted models, the 30-day mortality rate for colorectal surgery was 3.95% (95% CI 0.86-7.54), the 30-day readmission rate was 11.05% (5.67-17.61), the 28-day reoperation rate was 6.13% (3.68-9.66) and the mean length of stay was 14 (7.65-46.76) days. The linkage of international hospital administrative data that we developed enabled comparison of documented surgical outcomes between countries. This methodology may facilitate international benchmarking.

  14. RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H

    2010-06-01

    ITER inductive power operation is modeled and simulated using a system-level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are summarized in this report as well. A major feature of ITER is pulsed operation: the plasma does not burn continuously, but the power is pulsed, with large periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history for nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time-dependent power forcing functions, which are used as input in the RELAP5 calculations.

  15. COBRA-SFS thermal-hydraulic analysis code for spent fuel storage and transportation casks: Models and methods

    DOE PAGES

    Michener, Thomas E.; Rector, David R.; Cuta, Judith M.

    2017-09-01

    COBRA-SFS, a thermal-hydraulics code developed for steady-state and transient analysis of multi-assembly spent-fuel storage and transportation systems, has been incorporated into the Used Nuclear Fuel-Storage, Transportation and Disposal Analysis Resource and Data System tool as a module devoted to spent fuel package thermal analysis. This paper summarizes the basic formulation of the equations and models used in the COBRA-SFS code, showing that COBRA-SFS fully captures the important physical behavior governing the thermal performance of spent fuel storage systems, with internal and external natural convection flow patterns, and heat transfer by convection, conduction, and thermal radiation. Of particular significance is the capability for detailed thermal radiation modeling within the fuel rod array.

  17. Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.

    1999-09-01

    A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.

  18. 76 FR 58857 - Privacy Act of 1974: System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-22

    ... Management, Room W12-140, 1200 New Jersey Ave. SE, Washington, DC 20590. Instructions: All submissions... system of records to the Office of Management and Budget and to Congress. SYSTEM OF RECORDS DOT/ALL-23... Aircraft model code; aircraft style code; aircraft tail number. Attachment: ...

  19. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were... movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an... OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a

  20. Analysis of airborne antenna systems using geometrical theory of diffraction and moment method computer codes

    NASA Technical Reports Server (NTRS)

    Hartenstein, Richard G., Jr.

    1985-01-01

    Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.

  1. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as that of running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on a programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation results.

  2. A Working Model for the System Alumina-Magnesia.

    DTIC Science & Technology

    1983-05-01

    Several regions in the resulting diagram appear rather uncertain, including the liquidus (National Bureau of Standards, JANAF Thermochemical Tables, by D. R. Stull). (The remainder of this record is a distribution list.)

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to, e.g., detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including, e.g., code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  4. ASTROP2-LE: A Mistuned Aeroelastic Analysis System Based on a Two Dimensional Linearized Euler Solver

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral

    2002-01-01

    An aeroelastic analysis system for flutter and forced response analysis of turbomachines based on a two-dimensional linearized unsteady Euler solver has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two dimensional aerodynamic model with a three dimensional structural model. The code was modified to include forced response capability. The formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D is added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. The stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE for ASTROP2 code using Linearized Euler aerodynamics, is validated by comparing the predictions with those obtained using linear unsteady aerodynamic solutions.

  5. Performance of a parallel code for the Euler equations on hypercube computers

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.

    1990-01-01

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, and consideration was given to the parallel environment issues that must be addressed, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but yet physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
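
    Execution-time models of the kind mentioned above typically split a time step into perfectly parallel computation plus communication that scales with subdomain surface area. The sketch below illustrates that style of model; all coefficients are invented for illustration and are not the paper's fitted parameters.

    def predicted_time(n_grid, p, t_flop=1e-7, flops_per_pt=500,
                       t_msg=1e-3, t_byte=1e-6):
        """Toy per-step time model: work divided by p, plus halo exchange."""
        comp = flops_per_pt * (n_grid / p) * t_flop          # parallel work
        comm = 4 * (t_msg + t_byte * 8 * (n_grid / p) ** 0.5)  # boundary traffic
        return comp + comm

    for p in (16, 64, 256, 512):
        print(f"p={p:4d}: predicted step time {predicted_time(1.0e6, p)*1e3:.2f} ms")

    Because computation shrinks like 1/p while the per-message latency term does not, a model of this shape reproduces the diminishing returns such studies observe at high node counts.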

  6. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performance of >1 TFlop/s for general-purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  7. A Thermal Management Systems Model for the NASA GTX RBCC Concept

    NASA Technical Reports Server (NTRS)

    Traci, Richard M.; Farr, John L., Jr.; Laganelli, Tony; Walker, James (Technical Monitor)

    2002-01-01

    The Vehicle Integrated Thermal Management Analysis Code (VITMAC) was further developed to aid the analysis, design, and optimization of propellant and thermal management concepts for advanced propulsion systems. The computational tool is based on engineering level principles and models. A graphical user interface (GUI) provides a simple and straightforward method to assess and evaluate multiple concepts before undertaking more rigorous analysis of candidate systems. The tool incorporates the Chemical Equilibrium and Applications (CEA) program and the RJPA code to permit heat transfer analysis of both rocket and air breathing propulsion systems. Key parts of the code have been validated with experimental data. The tool was specifically tailored to analyze rocket-based combined-cycle (RBCC) propulsion systems being considered for space transportation applications. This report describes the computational tool and its development and verification for NASA GTX RBCC propulsion system applications.

  8. SAM Theory Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    The System Analysis Module (SAM) is an advanced and modern system analysis tool being developed at Argonne National Laboratory under the U.S. DOE Office of Nuclear Energy’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM development aims for advances in physical modeling, numerical methods, and software engineering to enhance its user experience and usability for reactor transient analyses. To facilitate the code development, SAM utilizes an object-oriented application framework (MOOSE), and its underlying meshing and finite-element library (libMesh) and linear and non-linear solvers (PETSc), to leverage modern advanced software environments and numerical methods. SAM focuses on modeling advanced reactor concepts such as SFRs (sodium fast reactors), LFRs (lead-cooled fast reactors), and FHRs (fluoride-salt-cooled high temperature reactors) or MSRs (molten salt reactors). These advanced concepts are distinguished from light-water reactors in their use of single-phase, low-pressure, high-temperature, and low Prandtl number (sodium and lead) coolants. As a new code development, the initial effort has been focused on modeling and simulation capabilities of heat transfer and single-phase fluid dynamics responses in Sodium-cooled Fast Reactor (SFR) systems. The system-level simulation capabilities of fluid flow and heat transfer in general engineering systems and typical SFRs have been verified and validated. This document provides the theoretical and technical basis of the code to help users understand the underlying physical models (such as governing equations, closure models, and component models), system modeling approaches, numerical discretization and solution methods, and the overall capabilities in SAM. As the code is still under ongoing development, this SAM Theory Manual will be updated periodically to keep it consistent with the state of the development.

  9. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford

    2018-04-01

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  10. Intrasystem Analysis Program (IAP) code summaries

    NASA Astrophysics Data System (ADS)

    Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.

    1983-05-01

    This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: the Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), the General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), the Nonlinear Circuit Analysis Program (NCAP), and the Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, and is applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code that uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The WIRE programs deal with the application of multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.

  11. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  12. Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knight, Samuel; Baker, Gavin Matthew; Gamell, Marc

    2015-10-01

    Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of those, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor sufficiently developed for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion and UINTAH) to examine the feasibility of inserting new programming model elements into an existing code base.

  13. Coded spread spectrum digital transmission system design study

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.

    1974-01-01

    Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.
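
    For orientation, a constraint-length-7, rate-1/3 convolutional encoder of the kind analyzed here fits in a few lines. The sketch below is a generic encoder; the generator polynomials are illustrative assumptions, not necessarily those of the space shuttle links.

    G = [0o133, 0o171, 0o165]      # three generators, rate 1/3, K = 7 (assumed)

    def encode(bits):
        """Shift each input bit into a 7-bit register; emit 3 parity bits."""
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0x7F          # 7-bit shift register
            for g in G:
                out.append(bin(state & g).count("1") & 1)  # mod-2 inner product
        return out

    msg = [1, 0, 1, 1, 0, 0, 1] + [0] * 6              # tail bits flush the register
    print("coded:", "".join(map(str, encode(msg))))

    A Viterbi decoder then searches the 64-state trellis implied by this register for the most likely transmitted sequence, which is where the sensitivity to carrier-tracking and bit-synchronization errors studied in this report enters.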

  14. Airport-Noise Levels and Annoyance Model (ALAMO) system's reference manual

    NASA Technical Reports Server (NTRS)

    Deloach, R.; Donaldson, J. L.; Johnson, M. J.

    1986-01-01

    The Airport-Noise Levels and Annoyance Model (ALAMO) is described in terms of its constituent modules, the execution of the ALAMO procedure files necessary for system execution, and the source code documentation associated with code development at Langley Research Center. The modules constituting ALAMO are presented both in flow graph form and through a description of the subroutines and functions that comprise them.

  15. A Standard-Driven Data Dictionary for Data Harmonization of Heterogeneous Datasets in Urban Geological Information Systems

    NASA Astrophysics Data System (ADS)

    Liu, G.; Wu, C.; Li, X.; Song, P.

    2013-12-01

    The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in urban geological databases. Various models and vocabularies have been drafted and applied by industrial companies in urban geological data. Issues such as duplicate and ambiguous definitions of terms and differing coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured, standardized data storage. The overall purpose of this work is to set up a common data platform to provide information sharing services. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of various urban geological data entity models are reduced to several categories according to their application phases and domains. A logical data model is then set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary is used to manage system database files and ease maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; and the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods. A comprehensive data dictionary manages system operation and security. (3) An extension of the system data management functions based on the data dictionary. The constrained-input function uses the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent use of terms for fields. The model dictionary is used to automatically generate a database operation interface with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
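
    The constrained-input idea in item (3) can be illustrated in miniature: a field accepts a value only if its code appears in the standard vocabulary. The codes and terms below are invented placeholders, not entries from GB 9649-88.

    STANDARD_TERMS = {
        "GEO-001": "silty clay",
        "GEO-002": "fine sand",
        "GEO-003": "weathered granite",
    }

    def validated_input(code):
        """Accept a field value only if its code exists in the standard vocabulary."""
        try:
            return STANDARD_TERMS[code]
        except KeyError:
            raise ValueError(f"{code!r} is not a standard code") from None

    print(validated_input("GEO-002"))      # -> "fine sand"

    Driving every input form and generated interface from one such dictionary is what keeps term use consistent across the database tables described above.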

  16. Development of high-fidelity multiphysics system for light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Magedanz, Jeffrey W.

    There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. While coupling involves combining codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of the code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily-understandable manner. 
The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
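
    The coupling architecture described above can be illustrated with a brief sketch. The class and method names below are hypothetical, not taken from the actual CTF/TORT-TD/FRAPTRAN interfaces; the point is that the driver sees each code only through high-level operations, so a new code version, or a different code entirely, can be swapped in behind the same interface.

        from abc import ABC, abstractmethod

        class CoupledCode(ABC):
            """Hypothetical abstract interface hiding a legacy code's internals."""

            @abstractmethod
            def advance(self, dt: float) -> None:
                """Advance this physics module by one coupled time step."""

            @abstractmethod
            def get_field(self, name: str) -> list:
                """Return a pin-resolved field, e.g. 'fuel_temperature'."""

            @abstractmethod
            def set_field(self, name: str, values: list) -> None:
                """Receive feedback data computed by another module."""

        class FuelPerformance(CoupledCode):
            """Stand-in for a fuel-performance code such as FRAPTRAN."""
            def __init__(self, n_pins):
                self.temps = [600.0] * n_pins     # placeholder initial state
            def advance(self, dt):
                pass                              # would call the legacy solver here
            def get_field(self, name):
                return self.temps
            def set_field(self, name, values):
                self.power = values               # e.g. pin linear power from neutronics

        def coupled_step(modules, dt):
            """Driver logic depends only on the interface, not on code details."""
            for m in modules:
                m.advance(dt)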

  17. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    PubMed

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso logistic regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
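
    As a rough illustration of the baseline side of the comparison above, the sketch below fits an L1-penalized ("lasso") logistic regression on bag-of-words session features and scores it with ROC AUC. The four miniature "sessions" and their binary code labels are synthetic stand-ins, the L-LDA half of the comparison is omitted since it is not part of standard libraries, and the model is scored on its training data purely for brevity.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Synthetic stand-ins for session transcripts and one binary session code.
        sessions = ["feeling anxious about work", "discussed medication dosage",
                    "worried and anxious again", "reviewed the treatment plan"]
        labels = np.array([1, 0, 1, 0])

        X = CountVectorizer().fit_transform(sessions)   # bag-of-words features
        clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, labels)
        print("session-level AUC:", roc_auc_score(labels, clf.predict_proba(X)[:, 1]))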

  19. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation will use new and improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against measured data aboard ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  20. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph, and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct the code with the highest known energy barrier for spin Hamiltonians in three dimensions: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: its memory lifetime increases with system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians whose interaction terms are local, of bounded strength, and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.
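
    To give a flavor of error correction by a local update rule, the toy sketch below repeatedly replaces each cell of a 2-D bit array with the majority vote of itself and its four neighbors, which wipes out sparse stochastic errors with purely local dynamics. This is only an illustration of the idea; it is not the toric-code cellular automaton studied in the thesis.

        import numpy as np

        def majority_step(grid):
            """One cellular-automaton sweep: majority vote over the von Neumann
            neighborhood (self plus 4 neighbors, periodic boundaries)."""
            votes = (grid
                     + np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
                     + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
            return (votes >= 3).astype(int)

        rng = np.random.default_rng(0)
        state = np.zeros((32, 32), dtype=int)
        state[rng.random(state.shape) < 0.05] = 1    # 5% stochastic bit flips
        for _ in range(5):
            state = majority_step(state)
        print("residual errors:", int(state.sum()))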

  1. Development of an object-oriented ORIGEN for advanced nuclear fuel modeling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skutnik, S.; Havloej, F.; Lago, D.

    2013-07-01

    The ORIGEN package serves as the core depletion and decay calculation module within the SCALE code system. A recent major re-factor of the ORIGEN code architecture, part of an overall modernization of the SCALE code system, has both greatly enhanced its maintainability and afforded several new capabilities useful for incorporating depletion analysis into other code frameworks. This paper presents an overview of the improved ORIGEN code architecture (including the methods and data structures introduced) as well as current and potential future applications utilizing the new ORIGEN framework.

  2. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    PubMed

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
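
    The dynamic-weight rule lends itself to a very small sketch. The notation below is assumed for illustration rather than copied from the paper: the signal a coding node transmits is the rectified difference between its activation y and an adaptive threshold tau, and thresholds only rise, apportioned by each node's degree of activation.

        import numpy as np

        def dynamic_weight(y, tau):
            """Rectified difference [y - tau]^+ in place of a stored path weight."""
            return np.maximum(y - tau, 0.0)

        def learn(y, tau, rate=0.1):
            """Monotonically increasing thresholds ("atrophy due to disuse"),
            with learned change apportioned by coding-node activation."""
            return tau + rate * np.maximum(y - tau, 0.0) * y

        y = np.array([0.9, 0.3, 0.0])     # a distributed code activation pattern
        tau = np.zeros_like(y)
        for _ in range(10):
            tau = learn(y, tau)
        print(dynamic_weight(y, tau))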

  3. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of the simulation task to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU MC code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
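
    The task-assignment idea — split the histories evenly across devices and merge the tallies afterward — is easy to sketch. Everything below is illustrative (plain Python batches standing in for GPUs, a toy one-dimensional attenuation kernel), not the authors' CUDA implementation.

        import numpy as np

        def run_batch(n_histories, seed):
            """Toy photon kernel: exponential free paths in a 5 cm slab."""
            rng = np.random.default_rng(seed)
            depth = rng.exponential(scale=2.0, size=n_histories)  # mean free path 2 cm
            return int((depth < 5.0).sum())       # photons absorbed inside the slab

        n_total, n_devices = 1_000_000, 2         # two "devices", equal shares
        tallies = [run_batch(n_total // n_devices, seed=d) for d in range(n_devices)]
        p = sum(tallies) / n_total
        rel_err = np.sqrt((1 - p) / (p * n_total))  # binomial relative error
        print(f"absorbed fraction = {p:.4f}, relative error = {rel_err:.1e}")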

  4. Modeling Terrorism Risk to the Air Transportation System: An Independent Assessment of TSA’s Risk Management Analysis Tool and Associated Methods

    DTIC Science & Technology

    2012-01-01

    our own work for this discussion. DoD Instruction 5000.61 defines model validation as “the process of determining the degree to which a model and its... determined that RMAT is highly concrete code, potentially leading to redundancies in the code itself and making RMAT more difficult to maintain... system conceptual models valid, and are the data used to support them adequate? (Chapters Two and Three) 2. Are the sources and methods for populating

  5. Simulink Model of the Ares I Upper Stage Main Propulsion System

    NASA Technical Reports Server (NTRS)

    Burchett, Bradley T.

    2008-01-01

    A numerical model of the Ares I upper stage main propulsion system is formulated based on first principles. Equations are written as nonlinear ordinary differential equations. The GASP Fortran code is used to compute thermophysical properties of the working fluids. Complicated algebraic constraints are numerically solved. The model is implemented in Simulink and provides a rudimentary simulation of the time history of important pressures and temperatures during re-pressurization, boost and upper stage firing. The model is validated against an existing reliable code, and typical results are shown.

  6. Hybrid 3D visualization of the chest and virtual endoscopy of the tracheobronchial system: possibilities and limitations of clinical application.

    PubMed

    Seemann, M D; Claussen, C D

    2001-06-01

    A hybrid rendering method is described which combines a color-coded surface rendering method and a volume rendering method, enabling virtual endoscopic examinations using different representation models. Fourteen patients with malignancies of the lung and mediastinum (n=11) and lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both a simultaneous visualization of an airway, an airway lesion and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated or refused. Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention and palliative therapy, and is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as a control examination in the aftercare of patients with malignant diseases.

  7. Modeling transonic aerodynamic response using nonlinear systems theory for use with modern control theory

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1993-01-01

    The presentation begins with a brief description of the motivation and approach that has been taken for this research. This will be followed by a description of the Volterra Theory of Nonlinear Systems and the CAP-TSD code which is an aeroelastic, transonic CFD (Computational Fluid Dynamics) code. The application of the Volterra theory to a CFD model and, more specifically, to a CAP-TSD model of a rectangular wing with a NACA 0012 airfoil section will be presented.
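
    For reference, the second-order truncation of the Volterra functional series that underlies this approach (standard textbook form, with h_0, h_1, h_2 the zeroth-, first- and second-order kernels and u the input) is:

        y(t) = h_0 + \int_{-\infty}^{t} h_1(t-\tau)\,u(\tau)\,d\tau
                   + \int_{-\infty}^{t}\int_{-\infty}^{t}
                     h_2(t-\tau_1,\,t-\tau_2)\,u(\tau_1)\,u(\tau_2)\,d\tau_1\,d\tau_2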

  8. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
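
    A bare-bones version of the sampling idea — draw Weibull (non-constant rate) failure times for redundant components and estimate the probability that the system fails before its mission time — is sketched below. The parameter values and the perfect-switching standby assumption are illustrative; the real MC-HARP adds the variance reduction techniques and the HARP fault/error-handling models.

        import numpy as np

        def mission_failure_prob(n_trials=100_000, mission=1000.0,
                                 shape=1.5, scale=5000.0, spares=2):
            """Monte Carlo estimate for one active unit plus cold spares with
            Weibull lifetimes and perfect fault detection/switching."""
            rng = np.random.default_rng(1)
            lives = scale * rng.weibull(shape, size=(n_trials, 1 + spares))
            system_life = lives.sum(axis=1)   # spares consumed one after another
            return (system_life < mission).mean()

        print("P(system fails before mission):", mission_failure_prob())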

  9. FUN3D and CFL3D Computations for the First High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.

    2011-01-01

    Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.

  10. Dynamic Burning Effects in the Combustion of Solid Propellants with Cracks, and the Use of Granular Bed Combustion Models

    DTIC Science & Technology

    1980-12-01

    Dynamic burning effects in the combustion of solid propellants with cracks, and the use of granular bed combustion models, by Kenneth K. Kuo and Mridul Kumar, The Pennsylvania State University. From the foreword: "This is the final report for a research program conducted by Systems Associates, Pennsylvania State..."

  11. Logistics Support Analysis Techniques Guide

    DTIC Science & Technology

    1985-03-15

    LANGUAGE (DATA RECORDS): FORTRAN, CDC 6600. REMARKS: Program consists of approximately 4000 lines of coding... The model consists of approximately 367 lines of coding (SINCGARS, PERSHING II)... LSA TASK INTERFACE... system supported by Computer Systems Command. The current version of LADEN is coded totally in FORTRAN '77 for a virtual memory operating system

  12. Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors

    DOE PAGES

    Epiney, A.; Canepa, S.; Zerkak, O.; ...

    2016-11-02

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to, e.g., detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of a specific analysis aspect, including, e.g., code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: first, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.

  13. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
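
    The dependence of decoded noise on the coding matrix can be made concrete numerically. The sketch below builds the order-7 cyclic S-matrix, decodes through its inverse, and evaluates a mixed-noise variance assumed to have the form (constant + proportional × number of open sources) per measurement; that functional form and the noise levels are assumptions for illustration, not the paper's exact model.

        import numpy as np

        n = 7
        row = np.array([1, 1, 1, 0, 1, 0, 0])              # quadratic-residue pattern
        S = np.array([np.roll(row, k) for k in range(n)])  # cyclic S-matrix

        def decoded_variance(W, const=1.0, prop=0.5):
            """Mean per-element variance after decoding y = Wx + noise, where
            each measurement has variance const + prop * (open sources)."""
            G = np.linalg.inv(W)
            meas_var = const + prop * W.sum(axis=1)        # assumed mixed-noise model
            return float((G**2 @ meas_var).mean())

        print("multiplexed (S-matrix): ", decoded_variance(S))
        print("one-at-a-time (identity):", decoded_variance(np.eye(n)))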

  14. Feasibility of self-correcting quantum memory and thermal stability of topological order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Beni, E-mail: rouge@mit.edu

    2011-10-15

    Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memory. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as self-correcting quantum memory due to a certain topological constraint on geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions. Highlights: We define a class of physically realizable quantum codes. We determine their coding and physical properties completely. We establish the connection between topological order and self-correcting memory. We find they do not work as self-correcting quantum memory. We find they do not have thermally stable topological order.

  15. Model-based software design

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael

    1992-01-01

    Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.

  16. Adaptive Transmission and Channel Modeling for Frequency Hopping Communications

    DTIC Science & Technology

    2009-09-21

    proposed adaptive transmission method has much greater system capacity than the conventional non-adaptive MC direct-sequence (DS)-CDMA system. • We... several mobile radio systems. First, a new improved allocation algorithm was proposed for the multicarrier code-division multiple access (MC-CDMA) system... Multicarrier code-division multiple access (MC-CDMA) with adaptive frequency hopping (AFH) has attracted the attention of researchers due to its

  17. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    DTIC Science & Technology

    2014-02-05

    desirable that the code can be run on a Windows operating system on a laptop, desktop, or workstation. The focus on Windows machines allows for... transition to such systems as operated on the Navy Marine Corps Intranet (NMCI). For each code the initial cost and yearly maintenance are identified...

  18. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program, jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them, so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access these models. In some cases, the CME system also provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes. These codes include seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group. Also, the CME system hosts anelastic wave propagation codes, including Kim Olsen's finite difference code and Carnegie Mellon's Hercules finite element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations using the output of one processing step as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources, as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.

  19. Systems, methods and apparatus for modeling, specifying and deploying policies in autonomous and autonomic systems using agent-oriented software engineering

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy (Inventor); Hinchey, Michael G. (Inventor); Penn, Joaquin (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which, in some embodiments, an agent-oriented specification modeled with MaCMAS is analyzed, flaws in the agent-oriented specification modeled with MaCMAS are corrected, and an implementation is derived from the corrected agent-oriented specification. Described herein are systems, methods and apparatus that produce fully (mathematically) tractable development of agent-oriented specification(s) modeled with the methodology fragment for analyzing complex multiagent systems (MaCMAS) and policies for autonomic systems, from requirements through to code generation. The systems, methods and apparatus described herein are illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming systems, methods and apparatus described herein may provide faster, higher quality development and maintenance of autonomic systems based on user formulation of policies.

  20. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.

    PubMed

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-10-27

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable for modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors (3D root mean square) were reduced from 150.6 to 56.3 cm.
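
    The shape of the correction — a piece-wise linear function of elevation per satellite group plus a gridded near-field term over line-of-sight direction — can be sketched with simple interpolation. All node values below are made up for illustration; they are not the fitted BeiDou coefficients.

        import numpy as np

        # Hypothetical piece-wise linear satellite-induced bias, nodes every 10 deg.
        elev_nodes = np.arange(0, 91, 10)                  # elevation (deg)
        bias_nodes = np.array([0.8, 0.7, 0.55, 0.4, 0.3,   # bias (m), illustrative
                               0.2, 0.1, 0.05, 0.0, 0.0])

        def satellite_bias(elev):
            return np.interp(elev, elev_nodes, bias_nodes)

        # Hypothetical near-field bias on a 10 deg x 10 deg (azimuth, elevation) grid.
        nearfield = np.zeros((36, 9))

        def corrected_pseudorange(p, az, elev):
            cell = nearfield[int(az // 10) % 36, min(int(elev // 10), 8)]
            return p - satellite_bias(elev) - cell

        print(corrected_pseudorange(2.1e7, az=135.0, elev=37.5))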

  2. ETF system code: composition and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, R.L.; Wu, K.F.

    1980-01-01

    A computer code has been developed for application to ETF tokamak system and conceptual design studies. The code determines cost, performance, configuration, and technology requirements as a function of tokamak parameters. The ETF code is structured in a modular fashion in order to allow independent modeling of each major tokamak component. The primary benefit of modularization is that it allows updating of a component module, such as the TF coil module, without disturbing the remainder of the system code as long as the input/output to the modules remains unchanged. The modules may be run independently to perform specific design studies, such as determining the effect of allowable strain on TF coil structural requirements, or the modules may be executed together as a system to determine global effects, such as defining the impact of aspect ratio on the entire tokamak system.

  3. Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Ashe, Thomas L.; Otting, William D.

    1993-01-01

    The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
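
    At the heart of such a model sits a parametric cycle calculation. The sketch below reduces this to a recuperator-free, ideal-gas closed Brayton cycle whose efficiency follows from the cycle temperature ratio, pressure ratio, and component efficiencies; the numbers and simplifications are assumptions for illustration, while the actual code also resolves mass, geometry, and the heat exchangers.

        def brayton_efficiency(t_ratio=3.0, p_ratio=2.0, eta_c=0.85, eta_t=0.90,
                               gamma=1.67):
            """Simple closed Brayton cycle, ideal gas with helium-like gamma.
            t_ratio = turbine inlet T / compressor inlet T; work and heat are
            expressed in units of cp * (compressor inlet T)."""
            phi = p_ratio ** ((gamma - 1.0) / gamma)      # isentropic temperature ratio
            w_comp = (phi - 1.0) / eta_c                  # compressor specific work
            w_turb = eta_t * t_ratio * (1.0 - 1.0 / phi)  # turbine specific work
            q_in = t_ratio - (1.0 + w_comp)               # heat added by the source
            return (w_turb - w_comp) / q_in

        for pr in (1.5, 2.0, 2.5, 3.0):
            print(pr, round(brayton_efficiency(p_ratio=pr), 3))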

  4. Entropy Production during Fatigue as a Criterion for Failure. The Critical Entropy Threshold: A Mathematical Model for Fatigue.

    DTIC Science & Technology

    1983-08-15

    Measurement of Material Damping," Experimental Mechanics, 297-302 (Aug 1977). 4. Feltner, C. E., and J. D. Morrow, "Microplastic Strain Hysteresis Energy as...

  5. DSN telemetry system performance with convolutionally coded data

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.

    1975-01-01

    The results obtained to date and the plans for future experiments for the DSN telemetry system were presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
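
    For readers unfamiliar with the data format being decoded, a rate-1/2, constraint-length-3 convolutional encoder fits in a few lines. The generator pair (7, 5) octal is a standard textbook choice, not necessarily the code flown on these missions.

        def conv_encode(bits, g1=0b111, g2=0b101, k=3):
            """Rate-1/2 convolutional encoder: two generator polynomials
            applied to a sliding window of the last k input bits."""
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | b) & ((1 << k) - 1)
                out += [bin(state & g1).count("1") % 2,
                        bin(state & g2).count("1") % 2]
            return out

        print(conv_encode([1, 0, 1, 1]))   # two output symbols per input bit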

  6. Learning of spatio-temporal codes in a coupled oscillator system.

    PubMed

    Orosz, Gábor; Ashwin, Peter; Townley, Stuart

    2009-07-01

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
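
    The frequency-adaptation mechanism can be caricatured with two small Kuramoto-type systems in which the learner nudges its natural frequencies toward agreement with the teacher's phases. The coupling form and gains below are assumptions for illustration, not the paper's cluster-state model.

        import numpy as np

        rng = np.random.default_rng(2)
        n, dt, K, eps = 5, 0.01, 1.5, 0.05
        w_teacher = rng.normal(0.0, 1.0, n)     # fixed teaching frequencies
        w_learner = rng.normal(0.0, 1.0, n)     # adapted during learning
        th_t, th_l = rng.uniform(0.0, 2 * np.pi, (2, n))

        def kuramoto(theta, omega):
            """Phase velocities for globally coupled phase oscillators."""
            return omega + (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(1)

        for _ in range(20_000):
            th_t += dt * kuramoto(th_t, w_teacher)
            th_l += dt * (kuramoto(th_l, w_learner) + np.sin(th_t - th_l))
            w_learner += dt * eps * np.sin(th_t - th_l)   # frequency adaptation

        print("max frequency mismatch:", np.abs(w_teacher - w_learner).max())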

  7. A Model-Driven Architecture Approach for Modeling, Specifying and Deploying Policies in Autonomous and Autonomic Systems

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel

    2006-01-01

    Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.

  8. Advances in Engineering Software for Lift Transportation Systems

    NASA Astrophysics Data System (ADS)

    Kazakoff, Alexander Borisoff

    2012-03-01

    In this paper an attempt is made at computer modelling of ropeway ski lift systems. These systems provide travel between two terminals, operating with high-capacity cabins, chairs, gondolas or draw-bars. The computer codes AUTOCAD, MATLAB and Compaq Visual Fortran version 6.6 are used in the modelling. The computer modelling of the rope systems is organized in two stages. The first stage is the organization of the ground relief profile and the design of the lift system as a whole, according to the terrain profile and the climatic and atmospheric conditions. The ground profile is prepared by the geodesists and is presented in an AUTOCAD view. The next step is the design of the lift itself, which is performed by programmes using the computer code MATLAB. The second stage of the computer modelling is performed after the optimization of the co-ordinates and the lift profile using MATLAB. The co-ordinates and the parameters are then inserted into a program written in Compaq Visual Fortran version 6.6, which calculates 171 lift parameters, organized in 42 tables. The objective of the work presented in this paper is the computer modelling of the design and parameter derivation of ropeway systems, and their computational variation and optimization.

  9. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    NASA Astrophysics Data System (ADS)

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.

  10. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE PAGES

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...

    2018-02-26

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. Here, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs, resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  12. Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coryell, E.W.; Siefken, L.J.; Harvego, E.A.

    1997-07-01

    The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion of the code describes the response of the core and associated vessel structures. The COUPLE portion of the code describes the response of lower plenum structures and debris and the failure of the lower head. The code uses a modular approach with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. The code uses a building-block approach to allow the code user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models, and arranging them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. The flow of information between the different portions of the code occurs at each system-level time step advancement. The RELAP5 portion of the code describes the fluid transport around the system. These fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.

  13. Modeling Planet-Building Stellar Disks with Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Swearingen, Jeremy R.; Sitko, Michael L.; Whitney, Barbara; Grady, Carol A.; Wagner, Kevin Robert; Champney, Elizabeth H.; Johnson, Alexa N.; Warren, Chelsea C.; Russell, Ray W.; Hammel, Heidi B.; Lisse, Casey M.; Cure, Michel; Kraus, Stefan; Fukagawa, Misato; Calvet, Nuria; Espaillat, Catherine; Monnier, John D.; Millan-Gabet, Rafael; Wilner, David J.

    2015-01-01

    Understanding the nature of the many planetary systems found outside of our own solar system cannot be complete without knowledge of the beginnings of these systems. By detecting planets in very young systems and modeling the disks of material around stars from which they form, we can gain a better understanding of planetary origin and evolution. The efforts presented here have been in modeling two pre-transitional disk systems using a radiative transfer code. For the first of these systems, V1247 Ori, a model has been achieved that fits the spectral energy distribution (SED) well and whose parameters are consistent with existing interferometry data (Kraus et al. 2013). The second of these two systems, SAO 206462, has presented a different set of challenges, but encouraging SED agreement between the model and known data gives hope that the model can produce images that can be used in future interferometry work. This work was supported by NASA ADAP grant NNX09AC73G and the IR&D program at The Aerospace Corporation.

  14. The Therapeutic Collaboration in Life Design Counselling: The Case of Ryan

    ERIC Educational Resources Information Center

    do Céu Taveira, Maria; Ribeiro, Eugénia; Cardoso, Paulo; Silva, Filipa

    2017-01-01

    This study examined the therapeutic collaboration in a case of Life Design Counseling (LDC) with narrative change and positive career outcomes. The therapeutic collaboration-change model and corresponding coding system were used to intensively study the helping relationship throughout three sessions of LDC. The collaboration coding system enables…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Z.; Zweibaum, N.; Shao, M.

    The University of California, Berkeley (UCB) is performing thermal hydraulics safety analysis to develop the technical basis for design and licensing of fluoride-salt-cooled, high-temperature reactors (FHRs). FHR designs investigated by UCB use natural circulation for emergency, passive decay heat removal when normal decay heat removal systems fail. The FHR advanced natural circulation analysis (FANCY) code has been developed for assessment of passive decay heat removal capability and safety analysis of these innovative system designs. The FANCY code uses a one-dimensional, semi-implicit scheme to solve for pressure-linked mass, momentum and energy conservation equations. Graph theory is used to automatically generate a staggered mesh for complicated pipe network systems. Heat structure models have been implemented for three types of boundary conditions (Dirichlet, Neumann and Robin boundary conditions). Heat structures can be composed of several layers of different materials, and are used for simulation of heat structure temperature distribution and heat transfer rate. Control models are used to simulate sequences of events or trips of safety systems. A proportional-integral controller is also used to automatically make thermal hydraulic systems reach desired steady state conditions. A point kinetics model is used to model reactor kinetics behavior with temperature reactivity feedback. The underlying large sparse linear systems in these models are efficiently solved by using direct and iterative solvers provided by the SuperLU code on high performance machines. Input interfaces are designed to increase the flexibility of simulation for complicated thermal hydraulic systems. In conclusion, this paper mainly focuses on the methodology used to develop the FANCY code, and safety analysis of the Mark 1 pebble-bed FHR under development at UCB is performed.
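
    One ingredient named above, point kinetics with temperature reactivity feedback, is compact enough to sketch. A single delayed-neutron group, explicit Euler integration, and made-up parameter values are used for brevity; the actual FANCY implementation and constants may differ.

        # One-group point kinetics with a toy fuel-temperature feedback model.
        beta, lam, Lam = 0.0065, 0.08, 5e-5    # delayed fraction, decay const, gen. time
        alpha, T0 = -2.0e-5, 900.0             # feedback coefficient (1/K), ref. temp (K)
        heat, cool, Tc = 20.0, 0.05, 500.0     # toy lumped thermal model

        n, c, T = 1.0, beta / (lam * Lam), T0  # steady initial state
        rho_ext, dt = 0.002, 1.0e-4            # step reactivity insertion, time step
        for _ in range(200_000):               # simulate 20 s
            rho = rho_ext + alpha * (T - T0)   # net reactivity with feedback
            dn = ((rho - beta) / Lam) * n + lam * c
            dc = (beta / Lam) * n - lam * c
            dT = heat * n - cool * (T - Tc)
            n, c, T = n + dt * dn, c + dt * dc, T + dt * dT

        print(f"relative power = {n:.2f}, fuel temperature = {T:.0f} K")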

  16. Comparative study between single core model and detail core model of CFD modelling on reactor core cooling behaviour

    NASA Astrophysics Data System (ADS)

    Darmawan, R.

    2018-01-01

    The nuclear power industry has been facing uncertainties since the occurrence of the unfortunate accident at the Fukushima Daiichi Nuclear Power Plant. The issue of nuclear power plant safety has become the major hindrance in the planning of nuclear power programs for newcomer countries. Thus, understanding the behaviour of reactor systems is very important to ensure the continuous development and improvement of reactor safety. Throughout the development of nuclear reactor technology, investigation and analysis of reactor safety have gone through several phases. In the early days, analytical and experimental methods were employed. For the last four decades, 1D system-level codes were widely used. The continuous development of nuclear reactor technology has brought about more complex systems and processes of nuclear reactor operation, and more spatially detailed simulation codes are needed to assess these new reactors. Recently, 2D and 3D codes such as CFD are being explored. This paper discusses a comparative study of two different approaches to CFD modelling of reactor core cooling behaviour.

  17. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.

    2016-01-01

    This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
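
    The core move in a grand canonical (transmutation) Monte Carlo simulation of an alloy is a species swap accepted with a Metropolis criterion that includes the chemical-potential difference. The sketch below uses a made-up on-site energy model and assumed values of kT and delta-mu; the production code adds parallelism and realistic interatomic potentials.

        import numpy as np

        rng = np.random.default_rng(3)
        kT, dmu = 0.025, 0.01                 # eV; dmu = mu_B - mu_A (assumed)
        species = np.zeros(1000, dtype=int)   # 0 = species A, 1 = species B

        def energy_change(i, new):
            """Toy model: a fixed on-site energy cost for holding species B."""
            return 0.05 * (new - species[i])

        for _ in range(20_000):
            i = rng.integers(len(species))
            new = 1 - species[i]              # attempt an A <-> B transmutation
            dE = energy_change(i, new) - dmu * (new - species[i])
            if rng.random() < np.exp(-dE / kT):   # Metropolis acceptance
                species[i] = new

        print("equilibrium fraction of B:", species.mean())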

  18. National Combustion Code Parallel Performance Enhancements

    NASA Technical Reports Server (NTRS)

    Quealy, Angela; Benyo, Theresa (Technical Monitor)

    2002-01-01

    The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.

  19. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
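
    The trade-off described above can be made concrete with textbook reliability expressions for two of the techniques compared: k-of-n modularized sparing and triple modular redundancy as the simplest massive-replication scheme. The sketch below evaluates these standard formulas; it is an illustration, not the paper's system-specific equation sets.

        # Textbook reliability expressions for two fault-tolerance strategies.
        # Illustrative only; the paper derives its own equation sets.
        from math import comb

        def r_k_of_n(r, k, n):
            """System survives if at least k of n identical modules (reliability r) survive."""
            return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

        def r_tmr(r):
            """Triple modular redundancy with a perfect voter: at least 2 of 3 must work."""
            return 3 * r**2 - 2 * r**3

        r = 0.95
        print(f"4 modules + 1 spare (4-of-5): {r_k_of_n(r, 4, 5):.4f}")
        print(f"TMR:                          {r_tmr(r):.4f}")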

  20. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
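
    Since the model's accuracy is quoted in PSNR, the standard relation between PSNR and mean squared error for 8-bit images is worth recalling; the snippet below computes just that baseline quantity and is not the authors' statistical prediction model.

        # Standard PSNR computation for 8-bit images; the paper's distortion
        # model predicts this quantity (within ~2 dB) without decoding.
        import numpy as np

        def psnr(original, degraded, peak=255.0):
            mse = np.mean((original.astype(np.float64) - degraded.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)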

  1. Analytical modeling of intumescent coating thermal protection system in a JP-5 fuel fire environment

    NASA Technical Reports Server (NTRS)

    Clark, K. J.; Shimizu, A. B.; Suchsland, K. E.; Moyer, C. B.

    1974-01-01

    The thermochemical response of Coating 313 when exposed to a fuel fire environment was studied to provide a tool for predicting the reaction time. The existing Aerotherm Charring Material Thermal Response and Ablation (CMA) computer program was modified to treat swelling materials. The modified code is now designated Aerotherm Transient Response of Intumescing Materials (TRIM) code. In addition, thermophysical property data for Coating 313 were analyzed and reduced for use in the TRIM code. An input data sensitivity study was performed, and performance tests of Coating 313/steel substrate models were carried out. The end product is a reliable computational model, the TRIM code, which was thoroughly validated for Coating 313. The tasks reported include: generation of input data, development of swell model and implementation in TRIM code, sensitivity study, acquisition of experimental data, comparisons of predictions with data, and predictions with intermediate insulation.

  2. Assessment of Turbulence-Chemistry Interaction Models in the National Combustion Code (NCC) - Part I

    NASA Technical Reports Server (NTRS)

    Wey, Thomas Changju; Liu, Nan-suey

    2011-01-01

    This paper describes the implementations of the linear-eddy model (LEM) and an Eulerian FDF/PDF model in the National Combustion Code (NCC) for the simulation of turbulent combustion. The impacts of these two models, along with the so-called laminar chemistry model, are then illustrated via preliminary results from two combustion systems: a nine-element gas-fueled combustor and a single-element liquid-fueled combustor.

  3. Performance of concatenated Reed-Solomon/Viterbi channel coding

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1982-01-01

    The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new, simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally, we analyze the effects of a noisy carrier reference and slow fading on system performance.
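
    As a concrete instance of the kind of expression such an analysis yields, the sketch below computes the RS word error probability from an input symbol error probability, under the common assumptions of independent symbol errors and bounded-distance decoding; the paper's functional model for the symbol error probability itself is not reproduced here.

        # RS(n, k) word error probability for independent symbol errors p_s,
        # assuming bounded-distance decoding (up to t symbol errors corrected).
        from math import comb

        def rs_word_error(p_s, n=255, k=223):
            t = (n - k) // 2   # error-correcting capability, e.g. t = 16 for RS(255,223)
            return sum(comb(n, i) * p_s**i * (1 - p_s)**(n - i) for i in range(t + 1, n + 1))

        print(f"P_word = {rs_word_error(1e-2):.3e}")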

  4. Auto Code Generation for Simulink-Based Attitude Determination Control System

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) for use on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used to carry out hardware-in-the-loop testing of satellite components in a convenient manner with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, the simulation code cannot be used directly, as is, on the target platform and must first be converted into C code; this process is known as auto code generation. To generate C code from the simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can introduce new complications into the simulation; the execution order of these models can change based on these modifications. Great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the original simulation and that of the generated code is within acceptable bounds. The process is therefore considered a success, since all the output requirements are met. Based on these results, the generated C code can be used effectively by any desired platform as long as it follows the specific memory requirements established in the Simulink model.

  5. Analysis of SMA Hybrid Composite Structures using Commercial Codes

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2004-01-01

    A thermomechanical model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures has been recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilevered beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilevered beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  6. Automatic mathematical modeling for real time simulation system

    NASA Technical Reports Server (NTRS)

    Wang, Caroline; Purinton, Steve

    1988-01-01

    A methodology for automatic mathematical modeling and simulation model generation is described. The models are verified by running in a test environment using standard profiles, with the results compared against known results. The major objective is to create a user-friendly environment for engineers to design, maintain, and verify their models, and to automatically convert the mathematical models into conventional code for conventional computation. A demonstration program was designed for modeling the Space Shuttle Main Engine simulation. It is written in LISP and MACSYMA and runs on a Symbolics 3670 Lisp machine. The program provides a friendly, well-organized environment for engineers to build a knowledge base of base equations and general information. It contains an initial set of component process elements for the Space Shuttle Main Engine simulation and a questionnaire that allows the engineer to answer a set of questions to specify a particular model. The system is then able to generate the model and FORTRAN code automatically. A future goal, currently under development, is to download the FORTRAN code to a VAX/VMS system for conventional computation. The SSME mathematical model will be verified in a test environment and the solution compared with the real data profile. The use of artificial intelligence techniques has shown that the simulation modeling process can be simplified.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki

    A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes, SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharging of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through the analyses of the experiment on water vapor discharging in liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict the tube failure occurrence. The numerical models integrated into the TACT code were verified through some related experiments. The RELAP5 code evaluates thermal hydraulic behavior of water inside the tube. The original heat transfer correlations were corrected for the tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of the failure propagation.

  8. A tactile paging system for deaf-blind people, phase 1. [human factors engineering of bioinstrumentation

    NASA Technical Reports Server (NTRS)

    Baer, J. A.

    1976-01-01

    A tactile paging system for deaf-blind people has been brought from the concept stage to the development of a first model. The model consists of a central station that transmits coded information via radio link to an on-body (i.e., worn on the wrist) receiving unit, the output from which is a coded vibrotactile signal. The model is a combination of commercially available equipment, customized electronic circuits, and electromechanical transducers. The paging system facilitates communication to deaf-blind clients in an institutional environment as an aid in their training and other activities. Several subunits of the system were individually developed, tested, and integrated into an operating system ready for experimentation and evaluation. The operation and characteristics of the system are described and photographs are shown.

  9. Interfacing a General Purpose Fluid Network Flow Program with the SINDA/G Thermal Analysis Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Popok, Daniel

    1999-01-01

    A general purpose, one dimensional fluid flow code is currently being interfaced with the thermal analysis program Systems Improved Numerical Differencing Analyzer/Gaski (SINDA/G). The flow code, Generalized Fluid System Simulation Program (GFSSP), is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.

  10. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.

  11. Matrix-Product-State Algorithm for Finite Fractional Quantum Hall Systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhao; Bhatt, R. N.

    2015-09-01

    Exact diagonalization is a powerful tool to study fractional quantum Hall (FQH) systems. However, its capability is limited by the exponentially increasing computational cost. In order to overcome this difficulty, density-matrix-renormalization-group (DMRG) algorithms were developed for much larger system sizes. Very recently, it was realized that some model FQH states have exact matrix-product-state (MPS) representation. Motivated by this, here we report an MPS code, which is closely related to, but different from, traditional DMRG language, for finite FQH systems on the cylinder geometry. By representing the many-body Hamiltonian as a matrix-product-operator (MPO) and using single-site update and density matrix correction, we show that our code can efficiently search the ground state of various FQH systems. We also compare the performance of our code with traditional DMRG. The possible generalization of our code to infinite FQH systems and other physical systems is also discussed.

  12. Developmental assessment of the Fort St. Vrain version of the Composite HTGR Analysis Program (CHAP-2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroh, K.R.

    1980-01-01

    The Composite HTGR Analysis Program (CHAP) consists of a model-independent systems analysis mainframe named LASAN and model-dependent linked code modules, each representing a component, subsystem, or phenomenon of an HTGR plant. The Fort St. Vrain (FSV) version (CHAP-2) includes 21 coded modules that model the neutron kinetics and thermal response of the core; the thermal-hydraulics of the reactor primary coolant system, secondary steam supply system, and balance-of-plant; the actions of the control system and plant protection system; the response of the reactor building; and the relative hazard resulting from fuel particle failure. FSV steady-state and transient plant data are being used to partially verify the component modeling and dynamic simulation techniques used to predict plant response to postulated accident sequences.

  13. JSPAM: A restricted three-body code for simulating interacting galaxies

    NASA Astrophysics Data System (ADS)

    Wallin, J. F.; Holincheck, A. J.; Harvey, A.

    2016-07-01

    Restricted three-body codes have a proven ability to recreate much of the disturbed morphology of actual interacting galaxies. As more sophisticated n-body models were developed and computer speed increased, restricted three-body codes fell out of favor. However, their supporting role for performing wide searches of parameter space when fitting orbits to real systems demonstrates a continuing need for their use. Here we present the model and algorithm used in the JSPAM code. A precursor of this code was originally described in 1990, and was called SPAM. We have recently updated the software with an alternate potential and a treatment of dynamical friction to more closely mimic the results from n-body tree codes. The code is released publicly for use under the terms of the Academic Free License ("AFL") v. 3.0 and has been added to the Astrophysics Source Code Library.

  14. Analysis of Intelligent Transportation Systems Using Model-Driven Simulations.

    PubMed

    Fernández-Isabel, Alberto; Fuentes-Fernández, Rubén

    2015-06-15

    Intelligent Transportation Systems (ITSs) integrate information, sensor, control, and communication technologies to provide transport related services. Their users range from everyday commuters to policy makers and urban planners. Given the complexity of these systems and their environment, their study in real settings is frequently unfeasible. Simulations help to address this problem, but present their own issues: there can be unintended mistakes in the transition from models to code; their platforms frequently bias modeling; and it is difficult to compare works that use different models and tools. In order to overcome these problems, this paper proposes a framework for a model-driven development of these simulations. It is based on a specific modeling language that supports the integrated specification of the multiple facets of an ITS: people, their vehicles, and the external environment; and a network of sensors and actuators conveniently arranged and distributed that operates over them. The framework works with a model editor to generate specifications compliant with that language, and a code generator to produce code from them using platform specifications. There are also guidelines to help researchers in the application of this infrastructure. A case study on advanced management of traffic lights with cameras illustrates its use.

  15. Analysis of Intelligent Transportation Systems Using Model-Driven Simulations

    PubMed Central

    Fernández-Isabel, Alberto; Fuentes-Fernández, Rubén

    2015-01-01

    Intelligent Transportation Systems (ITSs) integrate information, sensor, control, and communication technologies to provide transport related services. Their users range from everyday commuters to policy makers and urban planners. Given the complexity of these systems and their environment, their study in real settings is frequently unfeasible. Simulations help to address this problem, but present their own issues: there can be unintended mistakes in the transition from models to code; their platforms frequently bias modeling; and it is difficult to compare works that use different models and tools. In order to overcome these problems, this paper proposes a framework for a model-driven development of these simulations. It is based on a specific modeling language that supports the integrated specification of the multiple facets of an ITS: people, their vehicles, and the external environment; and a network of sensors and actuators conveniently arranged and distributed that operates over them. The framework works with a model editor to generate specifications compliant with that language, and a code generator to produce code from them using platform specifications. There are also guidelines to help researchers in the application of this infrastructure. A case study on advanced management of traffic lights with cameras illustrates its use. PMID:26083232

  16. Earth Observing System (EOS) Communication (Ecom) Modeling, Analysis, and Testbed (EMAT) activity

    NASA Technical Reports Server (NTRS)

    Desai, Vishal

    1994-01-01

    This paper describes the Earth Observing System (EOS) Communication (Ecom) Modeling, Analysis, and Testbed (EMAT) activity performed by Code 540 in support of the Ecom project. Ecom is the ground-to-ground data transport system for operational EOS traffic. The National Aeronautics and Space Administration (NASA) Communications (Nascom) Division, Code 540, is responsible for implementing Ecom. Ecom interfaces with various systems to transport EOS forward link commands, return link telemetry, and science payload data. To understand the complexities surrounding the design and implementation of Ecom, it is necessary that sufficient testbedding, modeling, and analysis be conducted prior to the design phase. These activities, when grouped, are referred to as the EMAT activity. This paper describes work accomplished to date in each of the three major EMAT activities: modeling, analysis, and testbedding.

  17. Functional Requirements of a Target Description System for Vulnerability Analysis

    DTIC Science & Technology

    1979-11-01

    ...called GIFT. Together the COMGEOM description model and GIFT codes make up the BRL's target description system. The significance of a target ... and modifying target descriptions are described. References: Lawrence W. Bain, Jr. and Mathew J. Reisinger, "The GIFT Code User Manual; Volume I ..." and "The GIFT Code User Manual; Volume II, The Output Options," unpublished draft of BRL report.

  18. Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language

    NASA Astrophysics Data System (ADS)

    Heaphy, R. T.; Burke, M. P.; Love, J. T.

    2015-12-01

    Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems have limitations in their ability to integrate with modern software and hardware and to leverage parallel computing, which have left voids in optimization, pre-, and post-processing tools. Advances in technology and our scientific understanding of environmental processes that have occurred over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premiere tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert code to a modern widely accepted, open-source, high-performance computing (hpc) code; and (2) convert model input and output files to modern widely accepted, open-source, data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages, such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times while using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications and improvement upon execution times by incorporating an intelligent network change detection tool is currently underway, and preliminary results will be presented.
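
    The abstract gives no code, but the Numba pattern it describes can be suggested with a hypothetical time-stepping kernel; the routine below, its name, and its linear-reservoir storage equation are all illustrative assumptions, not material from the converted HSPF source.

        # Hypothetical example of the Numba pattern described above: a serial
        # time-stepping loop (linear-reservoir outflow) compiled just in time.
        # Illustrative only; not HSPF's actual hydrologic kernel.
        import numpy as np
        from numba import njit

        @njit
        def route(inflow, k=0.2, dt=1.0):
            storage = 0.0
            outflow = np.empty_like(inflow)
            for i in range(inflow.size):   # serial loops like this are where Numba pays off
                storage += dt * (inflow[i] - k * storage)
                outflow[i] = k * storage
            return outflow

        q = route(np.random.rand(10_000_000))  # first call compiles; later calls run at compiled speed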

  19. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model -- an executable program -- is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.

  20. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  1. RivGen, Igiugig Deployment, Control System Specifications and Models

    DOE Data Explorer

    Forbush, Dominic; Cavagnaro, Robert J.; Guerra, Maricarmen; Donegan, James; McEntee, Jarlath; Thomson, Jim; Polagye, Brian; Fabien, Brian; Kilcher, Levi

    2016-03-21

    Control system simulation models, case studies, and processing codes for analyzing field data. Raw data files from the VFD and SCADA systems are included. MATLAB and Simulink are required to open some data files and all model files.

  2. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  3. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  4. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  5. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  6. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  7. Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices

    NASA Astrophysics Data System (ADS)

    Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando

    2017-10-01

    We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse-approximate-inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix ("mass" matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a second-order vector-wave (curl-curl) equation but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.

  8. Overview of High-Fidelity Modeling Activities in the Numerical Propulsion System Simulations (NPSS) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2002-01-01

    A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.

  9. Study of information transfer optimization for communication satellites

    NASA Technical Reports Server (NTRS)

    Odenwalder, J. P.; Viterbi, A. J.; Jacobs, I. M.; Heller, J. A.

    1973-01-01

    The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high data rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding with emphasis on video data compression is reviewed, and the experimental facility utilized to test promising techniques is fully described.

  10. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  11. Hybrid reduced order modeling for assembly calculations

    DOE PAGES

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...

    2015-08-14

    While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in the small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
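
    One generic ingredient of reduced order modeling of this kind is a low-rank basis extracted from full-model snapshots, for example by proper orthogonal decomposition (POD) via the SVD. The sketch below shows only that generic step, using a synthetic snapshot matrix; it is not the coupled-code ROM construction of the paper.

        # Generic POD/SVD step often used in reduced order modeling: build a
        # low-rank basis from full-model snapshots, then project a new state.
        # Illustrative only; not the coupled-code ROM developed in the paper.
        import numpy as np

        snapshots = np.random.rand(5000, 40)   # columns = full-model outputs for 40 sampled inputs
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
        basis = U[:, :r]                       # rank-r subspace capturing ~99.99% of snapshot energy

        x = snapshots[:, 0]
        x_reduced = basis.T @ x                # r coefficients instead of 5000 values
        x_approx = basis @ x_reduced           # reconstruction from the reduced coordinates
        print(r, np.linalg.norm(x - x_approx) / np.linalg.norm(x))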

  12. Creating and Testing Simulation Software

    NASA Technical Reports Server (NTRS)

    Heinich, Christina M.

    2013-01-01

    The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provide a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
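
    As a minimal illustration of the write-tests-that-cover-the-code idea described above, the snippet below exercises a toy function with a unittest case; both the function and the tests are generic stand-ins, not part of the actual PLCIF test suite.

        # Minimal unit-test illustration of the test-first workflow described
        # above. Generic example; not the SCCS PLCIF test suite.
        import unittest

        def scale_signal(raw, gain=2.0, offset=1.0):
            """Toy conversion routine used only to give the tests a target."""
            return gain * raw + offset

        class TestScaleSignal(unittest.TestCase):
            def test_known_value(self):
                self.assertAlmostEqual(scale_signal(3.0), 7.0)

            def test_zero_input(self):
                self.assertAlmostEqual(scale_signal(0.0), 1.0)

        if __name__ == "__main__":
            unittest.main()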

  13. The Composite Analytic and Simulation Package or RFI (CASPR) on a coded channel

    NASA Technical Reports Server (NTRS)

    Freedman, Jeff; Berman, Ted

    1993-01-01

    CASPR is an analysis package which determines the performance of a coded signal in the presence of Radio Frequency Interference (RFI) and Additive White Gaussian Noise (AWGN). It can analyze a system with convolutional coding, Reed-Solomon (RS) coding, or a concatenation of the two. The signals can either be interleaved or non-interleaved. The model measures the system performance in terms of either the Eb/N0 required to achieve a given Bit Error Rate (BER) or the BER needed for a constant Eb/N0.
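
    For orientation on the two performance measures CASPR trades off, the snippet below evaluates the textbook relation between Eb/N0 and BER for uncoded BPSK on an AWGN channel; CASPR's coded and RFI-degraded channel models are more elaborate and are not reproduced.

        # Textbook uncoded BPSK bit error rate on an AWGN channel:
        #   Pb = Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))
        # A baseline only; CASPR's coded/RFI models are far richer.
        from math import erfc, sqrt

        def bpsk_ber(ebn0_db):
            ebn0 = 10.0 ** (ebn0_db / 10.0)
            return 0.5 * erfc(sqrt(ebn0))

        for db in (0, 4, 8, 10):
            print(f"Eb/N0 = {db:2d} dB -> BER = {bpsk_ber(db):.2e}")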

  14. The ASC Sequoia Programming Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seager, M

    2008-08-06

    In the late 1980's and early 1990's, Lawrence Livermore National Laboratory was deeply engrossed in determining the next generation programming model for the Integrated Design Codes (IDC) beyond vectorization for the Cray 1s series of computers. The vector model, developed in the mid 1970's first for the CDC 7600 and later extended from stack-based vector operation to memory-to-memory operations for the Cray 1s, lasted approximately 20 years (See Slide 5). The Cray vector era was deemed an extremely long lived era as it allowed vector codes to be developed over time (the Cray 1s were faster in scalar mode than the CDC 7600) with vector unit utilization increasing incrementally over time. The other attributes of the Cray vector era at LLNL were that we developed, supported and maintained the Operating System (LTSS and later NLTSS), communications protocols (LINCS), Compilers (Civic Fortran77 and Model), operating system tools (e.g., batch system, job control scripting, loaders, debuggers, editors, graphics utilities, you name it) and math and highly machine optimized libraries (e.g., SLATEC, and STACKLIB). Although LTSS was adopted by Cray for early system generations, they later developed COS and UNICOS operating systems and environment on their own. In the late 1970s and early 1980s two trends appeared that made the Cray vector programming model (described above including both the hardware and system software aspects) seem potentially dated and slated for major revision. These trends were the appearance of low cost CMOS microprocessors and their attendant, departmental and mini-computers and later workstations and personal computers. With the widespread adoption of Unix in the early 1980s, it appeared that LLNL (and the other DOE Labs) would be left out of the mainstream of computing without a rapid transition to these 'Killer Micros' and modern OS and tools environments. The other interesting advance in the period is that systems were being developed with multiple 'cores' in them and called Symmetric Multi-Processor or Shared Memory Processor (SMP) systems. The parallel revolution had begun. The Laboratory started a small 'parallel processing project' in 1983 to study the new technology and its application to scientific computing with four people: Tim Axelrod, Pete Eltgroth, Paul Dubois and Mark Seager. Two years later, Eugene Brooks joined the team. This team focused on Unix and 'killer micro' SMPs. Indeed, Eugene Brooks was credited with coming up with the 'Killer Micro' term. After several generations of SMP platforms (e.g., Sequent Balance 8000 with 8 33-MHz NS32032s, Alliant FX/8 with 8 MC68020s and FPGA-based vector units and finally the BBN Butterfly with 128 cores), it became apparent to us that the killer micro revolution would indeed take over Crays and that we definitely needed a new programming and systems model. The model developed by Mark Seager and Dale Nielsen focused on both the system aspects (Slide 3) and the code development aspects (Slide 4). Although now succinctly captured in two attached slides, at the time there was tremendous ferment in the research community as to what parallel programming model would emerge, dominate and survive. In addition, we wanted a model that would provide portability between platforms of a single generation but also longevity over multiple--and hopefully--many generations. Only after we developed the 'Livermore Model' and worked it out in considerable detail did it become obvious that what we came up with was the right approach.
In a nutshell, the applications programming model of the Livermore Model posited that SMP parallelism would ultimately not scale indefinitely and one would have to bite the bullet and implement MPI parallelism within the Integrated Design Code (IDC). We also had a major emphasis on doing everything in a completely standards based, portable methodology with POSIX/Unix as the target environment. We decided against specialized libraries like STACKLIB for performance, but kept as many general purpose, portable math libraries as were needed by the codes. Third, we assumed that the SMPs in clusters would evolve in time to become more powerful, feature rich and, in particular, offer more cores. Thus, we focused on OpenMP and POSIX Pthreads for programming SMP parallelism. These code porting efforts were led by Dale Nielsen, A-Division code group leader, and Randy Christensen, B-Division code group leader. Most of the porting effort revolved around removing 'Crayisms' in the codes: artifacts of LTSS/NLTSS, Civic compiler extensions beyond Fortran77, IO libraries and dealing with new code control languages (we switched to Perl and later to Python). Adding MPI to the codes was initially problematic and error prone because the programmers used MPI directly and sprinkled the calls throughout the code.

  15. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Song, Y T; Chao, Y

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.
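
    The abstract does not detail the MPI strategy, but domain-decomposed codes of this kind commonly exchange halo (ghost) rows between neighboring subdomains each step. The mpi4py sketch below shows that generic pattern only; it is not ROMS source code, and the file name in the comment is hypothetical.

        # Generic 1-D domain-decomposition halo exchange with mpi4py; a common
        # pattern in parallel ocean models, shown purely for illustration.
        # Run with e.g.: mpiexec -n 4 python halo.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        up = rank - 1 if rank > 0 else MPI.PROC_NULL
        down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        field = np.full((10 + 2, 64), float(rank))  # local rows plus one ghost row on each side

        # send top interior row up, receive bottom ghost row from below (and vice versa)
        comm.Sendrecv(field[1], dest=up, recvbuf=field[-1], source=down)
        comm.Sendrecv(field[-2], dest=down, recvbuf=field[0], source=up)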

  16. Robust Modeling of Stellar Triples in PHOEBE

    NASA Astrophysics Data System (ADS)

    Conroy, Kyle E.; Prsa, Andrej; Horvat, Martin; Stassun, Keivan G.

    2017-01-01

    The number of known mutually-eclipsing stellar triple and multiple systems has increased greatly during the Kepler era. These systems provide significant opportunities both to determine fundamental stellar parameters of benchmark systems to unprecedented precision and to study the dynamical interaction and formation mechanisms of stellar and planetary systems. Modeling these systems to their full potential, however, has not been feasible until recently. Most available codes are restricted to the two-body binary case, and those that do provide N-body support for more components sacrifice precision by assuming no stellar surface distortion. We have completely redesigned and rewritten the PHOEBE binary modeling code to incorporate support for triple and higher-order systems while also robustly modeling data with Kepler precision. Here we present our approach, demonstrate several test cases based on real data, and discuss the current status of PHOEBE's support for modeling these types of systems. PHOEBE is funded in part by NSF grant #1517474.

  17. Code-Time Diversity for Direct Sequence Spread Spectrum Systems

    PubMed Central

    Hassan, A. Y.

    2014-01-01

    Time diversity is achieved in direct sequence spread spectrum by receiving different faded delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme is proposed for spread spectrum systems. It is called code-time diversity. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models for Rayleigh flat and frequency-selective fading channels. The probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combiner (MRC) and multiple-input and multiple-output (MIMO) systems. PMID:24982925
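
    To ground the terminology, the snippet below shows ordinary single-code DSSS spreading and despreading, the baseline that the proposed code-time scheme generalizes by cycling through N codes over N symbol intervals. The code sequence and noise level are arbitrary illustrative choices, not the paper's signal model.

        # Baseline DSSS spread/despread with one length-7 code; the paper's
        # scheme cycles through N such codes over N symbol intervals.
        import numpy as np

        code = np.array([1, -1, 1, 1, -1, -1, 1])  # illustrative +/-1 spreading code
        bits = np.array([1, -1, 1, 1])             # BPSK data symbols

        chips = np.kron(bits, code)                # each symbol spread over 7 chips
        rx = chips + 0.5 * np.random.randn(chips.size)

        # correlate each symbol interval against the code and take the sign
        decided = np.sign(rx.reshape(-1, code.size) @ code)
        print(decided)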

  18. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  19. Potential capabilities of Reynolds stress turbulence model in the COMMIX-RSM code

    NASA Technical Reports Server (NTRS)

    Chang, F. C.; Bottoni, M.

    1994-01-01

    A Reynolds stress turbulence model has been implemented in the COMMIX code, together with transport equations describing turbulent heat fluxes, variance of temperature fluctuations, and dissipation of turbulence kinetic energy. The model has been verified partially by simulating homogeneous turbulent shear flow, and stable and unstable stratified shear flows with strong buoyancy-suppressing or enhancing turbulence. This article outlines the model, explains the verifications performed thus far, and discusses potential applications of the COMMIX-RSM code in several domains, including, but not limited to, analysis of thermal striping in engineering systems, simulation of turbulence in combustors, and predictions of bubbly and particulate flows.

  20. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
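
    The stiffness LSODE is built for can be seen in miniature in Robertson's classic three-species kinetics problem, whose rate constants span nine orders of magnitude. The SciPy sketch below solves it with a BDF method from the same solver family; it is an analogy, not LSENS itself.

        # Robertson's stiff chemical kinetics problem, solved with an implicit
        # BDF method (the same family of solvers as LSODE). Not LSENS itself.
        from scipy.integrate import solve_ivp

        def robertson(t, y):
            y1, y2, y3 = y
            return [-0.04 * y1 + 1.0e4 * y2 * y3,
                     0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
                     3.0e7 * y2**2]

        sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                        method="BDF", rtol=1e-6, atol=1e-10)
        print(sol.y[:, -1])  # an explicit solver would need absurdly small steps here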

  1. Simulation of Combustion Systems with Realistic g-jitter

    NASA Technical Reports Server (NTRS)

    Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.

    2003-01-01

    In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid and nonpremixed or premixed gaseous combustion in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency, since this is meant to be an engineering code. Sophisticated turbulence models were likewise not pursued (a simple Smagorinsky-type model can be implemented if deemed appropriate), because if flow velocities are large enough for turbulence to develop in a reduced-gravity combustion scenario, it is unlikely that g-jitter disturbances in NASA's reduced-gravity facilities will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) are presented. The results of this series of simulations suggested that a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.

  2. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  3. Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1994-01-01

    An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.

  4. Combining Satellite Ocean Color Imagery and Circulation Modeling to Forecast Bio-Optical Properties: Comparison of Models and Advection Schemes

    DTIC Science & Technology

    2008-10-01

    ...from the Navy Operational Global Atmospheric Prediction System (NOGAPS; Hogan and Rosmond, 1991) and assimilates data via the Navy Coupled Ocean ... forecasts using Global, Atlantic, Gulf of Mexico, and northern Gulf of Mexico configurations of HYCOM. Proceedings, Ocean Optics XIX, Castelvecchio Pascoli.

  5. Use of statecharts in the modelling of dynamic behaviour in the ATLAS DAQ prototype-1

    NASA Astrophysics Data System (ADS)

    Croll, P.; Duval, P.-Y.; Jones, R.; Kolos, S.; Sari, R. F.; Wheeler, S.

    1998-08-01

    Many applications within the ATLAS DAQ prototype-1 system have complicated dynamic behaviour which can be successfully modelled in terms of states and transitions between states. Previously, state diagrams implemented as finite-state machines have been used. Although effective, they become ungainly as system size increases. Harel statecharts address this problem by implementing additional features such as hierarchy and concurrency. The CHSM object-oriented language system is freeware which implements Harel statecharts as concurrent, hierarchical, finite-state machines (CHSMs). An evaluation of this language system by the ATLAS DAQ group has shown it to be suitable for describing the dynamic behaviour of typical DAQ applications. The language is currently being used to model the dynamic behaviour of the prototype-1 run-control system. The design is specified by means of a CHSM description file, and C++ code is obtained by running the CHSM compiler on the file. In parallel with the modelling work, a code generator has been developed which translates statecharts, drawn using the StP CASE tool, into the CHSM language. C++ code, describing the dynamic behaviour of the run-control system, has been successfully generated directly from StP statecharts using the CHSM generator and compiler. The validity of the design was tested using the simulation features of the Statemate CASE tool.
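
    To make the hierarchy idea concrete, here is a minimal sketch (in Python, not the C++-based CHSM language) of a statechart-like run-control machine: a composite RUNNING state owns child states, so an event such as 'stop' is handled once by the parent rather than once per child. All state and event names are invented.

      # Transition table for a two-level statechart. Dotted names denote children
      # of the composite RUNNING state; entering RUNNING enters its initial child.
      transitions = {
          ("IDLE", "start"): "RUNNING.TAKING_DATA",
          ("RUNNING.TAKING_DATA", "pause"): "RUNNING.PAUSED",
          ("RUNNING.PAUSED", "resume"): "RUNNING.TAKING_DATA",
      }
      parent_transitions = {("RUNNING", "stop"): "IDLE"}  # one rule covers all children

      def step(state, event):
          # Hierarchy: if the child state has no rule, defer to the parent's rules.
          if (state, event) in transitions:
              return transitions[(state, event)]
          parent = state.rsplit(".", 1)[0]
          if (parent, event) in parent_transitions:
              return parent_transitions[(parent, event)]
          return state  # unhandled events are ignored

      state = "IDLE"
      for event in ["start", "pause", "resume", "stop"]:
          state = step(state, event)
          print(f"{event:>7s} -> {state}")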

  6. Standardized verification of fuel cycle modeling

    DOE PAGES

    Feng, B.; Dixon, B.; Sunny, E.; ...

    2016-04-05

    A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.
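
    The spreadsheet checks described above amount to simple year-by-year bookkeeping. The sketch below illustrates the pattern on an invented scenario; the capacities, retirement rate, and discharge rate are placeholders, not the benchmark's actual specifications.

      # Hypothetical, highly simplified transition bookkeeping: LWRs retire on a
      # fixed schedule, SFRs are built to keep total capacity flat, and cumulative
      # spent-fuel mass is tracked year by year for code-to-code comparison.
      lwr, sfr, spent = 100.0, 0.0, 0.0   # GWe, GWe, tHM (invented values)
      for year in range(2025, 2035):
          retired = min(10.0, lwr)        # assumed retirement rate, GWe/yr
          lwr -= retired
          sfr += retired                  # replacement builds keep capacity flat
          spent += lwr * 20.0             # assumed ~20 tHM discharged per GWe-yr
          print(year, round(lwr), round(sfr), round(spent))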

  7. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  8. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  9. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  10. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  11. NCI HPC Scaling and Optimisation in Climate, Weather, Earth system science and the Geosciences

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Bermous, I.; Freeman, J.; Roberts, D. S.; Ward, M. L.; Yang, R.

    2016-12-01

    The Australian National Computational Infrastructure (NCI) has a national focus in the Earth system sciences, including climate, weather, ocean, water management, environment, and geophysics. NCI leads a Program across its partners from the Australian science agencies and research communities to identify priority computational models to scale up. Typically, these cases place a large overall demand on the available computer time, need to scale to higher resolutions, consume scarce resources such as large memory or bandwidth, or in some cases need to meet requirements for transition to a separate operational forecasting system with set time windows. The model codes include the UK Met Office Unified Model atmospheric model (UM), GFDL's Modular Ocean Model (MOM), both the UK Met Office's GC3 and Australian ACCESS coupled-climate systems (including sea ice), 4D-Var data assimilation and satellite processing, the Regional Ocean Model (ROMS), and WaveWatch3, as well as geophysics codes covering hazards, magnetotellurics, seismic inversions, and geodesy. Many of these codes use significant compute resources, both for research applications and within the operational systems. Some of these models are particularly complex, and their behaviour had not been critically analysed for effective use of the NCI supercomputer or for how they could be improved. As part of the Program, we have established a common profiling methodology that uses a suite of open source tools for performing scaling analyses. The most challenging cases are profiling multi-model coupled systems where the component models have their own complex algorithms and performance issues. We have also found issues within the current suite of profiling tools, and no single tool fully exposes the nature of the code performance. As a result of this work, international collaborations are now in place to ensure that improvements are incorporated within the community models, and our effort can be targeted in a coordinated way. This coordination has involved user stakeholders, the model developer community, and dependent software libraries. For example, we have spent significant time characterising I/O scalability and improving the use of libraries such as NetCDF and HDF5.

  12. Cerebral Laterality and Verbal Processes

    ERIC Educational Resources Information Center

    Sherman, Jay L.; And Others

    1976-01-01

    Research suggests that we process information by way of two distinct and functionally separate coding systems. Their location, somewhat dependent on cerebral laterality, varies in right- and left-handed persons. Tests this dual coding model. (Editor/RK)

  13. Integrated Devices and Systems | Grid Modernization | NREL

    Science.gov Websites

    Topics include energy storage models, microgrids, grid simulation and power hardware-in-the-loop, and grid standards and codes. Contact: Barry Mather, Ph.D.

  14. Utilization of recently developed codes for high power Brayton and Rankine cycle power systems

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    1993-01-01

    Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.

  15. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the behavior of metamorphic code by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.

  16. Applicability of Existing C3 (Command, Control and Communications) Vulnerability and Hardness Analyses to Sentry System Issues.

    DTIC Science & Technology

    1983-01-13

    ...Naval Ordnance Systems Command) codes are detailed propagation simulations, mostly at lower frequencies. These are combined with WEPH code phenomenology...AD B062349L. Scope/Abstract: This report describes a simple model for predicting the loads on box-like target structures subject to air blast. A...model and applying it to targets which can be approximated by a series of rectangular parallelepipeds. In this report the physical phenomena of high

  17. Errors from approximation of ODE systems with reduced order models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevska, Tanya

    2016-12-30

    This is a code to calculate the error from approximating systems of ordinary differential equations (ODEs) with Proper Orthogonal Decomposition (POD) Reduced Order Model (ROM) methods, and to compare and analyze the errors for two POD ROM variants. The first variant is the standard POD ROM; the second is a modification of the method that uses the values of the time derivatives (a.k.a. time-derivative snapshots). The code compares the errors from the two variants under different conditions.
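
    The comparison the code automates can be illustrated on a toy linear ODE system: build a POD basis from solution snapshots, optionally augmenting them with time-derivative snapshots, and measure a reconstruction error. A minimal sketch with invented sizes, not the released code:

      import numpy as np

      def pod_basis(S, r):
          # Leading r left singular vectors of snapshot matrix S (columns = snapshots).
          U, _, _ = np.linalg.svd(S, full_matrices=False)
          return U[:, :r]

      # Toy linear ODE y' = A y standing in for the full-order model.
      rng = np.random.default_rng(0)
      n, r = 60, 6
      A = -np.diag(np.linspace(0.5, 5.0, n)) + 0.05 * rng.standard_normal((n, n))
      y = rng.standard_normal(n)

      snaps, dsnaps = [], []
      for _ in range(40):               # explicit Euler time stepping
          snaps.append(y.copy())
          dsnaps.append(A @ y)          # time-derivative snapshots
          y = y + 0.01 * (A @ y)

      S = np.column_stack(snaps)
      bases = {
          "standard POD": pod_basis(S, r),
          "with d/dt snapshots": pod_basis(np.column_stack(snaps + dsnaps), r),
      }
      for name, U in bases.items():
          err = np.linalg.norm(S - U @ (U.T @ S)) / np.linalg.norm(S)
          print(f"{name:>22s}: relative snapshot projection error = {err:.2e}")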

  18. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  19. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  20. Engineering High Assurance Distributed Cyber Physical Systems

    DTIC Science & Technology

    2015-01-15

    decisions: number of interacting agents and co-dependent decisions made in real time without causing interference. To engineer a high assurance DART...environment specification, architecture definition, domain-specific languages, design patterns, code generation, analysis, test generation, and simulation...include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service

  1. Umbra (core)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Jon David; Oppel III, Fred J.; Hart, Brian E.

    Umbra is a flexible simulation framework for complex systems that can be used by itself for modeling, simulation, and analysis, or to create specific applications. It has been applied to many operations, primarily dealing with robotics and system-of-systems simulations. This version, from 4.8 to 4.8.3b, incorporates bug fixes, refactored code, and new managed C++ wrapper code that can be used to bridge new applications written in C# to the C++ libraries. The new managed C++ wrapper code includes (project/directories) BasicSimulation, CSharpUmbraInterpreter, LogFileView, UmbraAboutBox, UmbraControls, UmbraMonitor and UmbraWrapper.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobromir Panayotov; Andrew Grief; Brad J. Merrill

    'Fusion for Energy' (F4E) develops the designs of and implements the European Test Blanket Systems (TBS) in ITER: the Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB) systems. Safety demonstration is an essential element for the integration of the TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses was developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with AMEC and INL efforts, resulted in a comprehensive methodology for fusion breeding blanket accident analyses that addresses the specifics of breeding blanket design, materials, and phenomena while remaining consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and TBS models; the limitations of the codes are thereby identified, and possible solutions to be built into the models are proposed, including, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. Code selection and issue of the accident analysis specifications conclude this second step. Next, the breeding blanket and ancillary system models are built. This paper shares the challenges met and the solutions used in developing both MELCOR and RELAP5 models of the HCLL and HCPB TBSs. The developed models are then qualified by comparison with finite element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used to execute the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated with a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as the applications of the methodology to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platania, P., E-mail: platania@ifp.cnr.it; Figini, L.; Farina, D.

    The purpose of this work is the optical modeling and evaluation of the physical performance of the JT-60SA ECRF launcher system. The beams have been simulated with the electromagnetic code GRASP® and used as input for ECCD calculations performed with the beam-tracing code GRAY, which is capable of modeling the propagation, absorption, and current drive of an EC Gaussian beam with general astigmatism. Full details of the optical analysis have been taken into account to model the launched beams. Inductive and advanced reference scenarios have been analysed for the physical evaluations over the full poloidal and toroidal steering ranges for two slightly different layouts of the launcher system.

  4. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  5. Constructing graph models for software system development and analysis

    NASA Astrophysics Data System (ADS)

    Pogrebnoy, Andrey V.

    2017-01-01

    We propose a concept for creating instrumentation to support the rationale of functional and structural decisions during software system (SS) development. We propose developing the SS simultaneously on two models: a functional model (FM) and a structural model (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for adequate representation of the SS source code. We propose three levels of GM detail: GM1 for visual analysis of the source code and for SS version control, GM2 for resource optimization and analysis of connections between SS components, and GM3 for analysis of the SS functioning in dynamics. The paper includes examples of constructing all levels of the GM.
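
    As a flavor of the lowest level of detail, a GM1-style graph can be extracted directly from source code. The sketch below builds a call graph between top-level functions using Python's ast module; it is an analogy to the paper's tooling, not the authors' implementation.

      import ast

      def call_graph(source):
          # Edges (caller, callee) for direct calls between top-level functions.
          tree = ast.parse(source)
          funcs = {n.name: n for n in tree.body if isinstance(n, ast.FunctionDef)}
          edges = set()
          for name, node in funcs.items():
              for sub in ast.walk(node):
                  if isinstance(sub, ast.Call) and isinstance(sub.func, ast.Name):
                      if sub.func.id in funcs:
                          edges.add((name, sub.func.id))
          return sorted(edges)

      example = """
      def load(): pass
      def process(): load()
      def main():
          process()
          load()
      """
      # [('main', 'load'), ('main', 'process'), ('process', 'load')]
      print(call_graph(example))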

  6. Incorporating spike-rate adaptation into a rate code in mathematical and biological neurons

    PubMed Central

    Ralston, Bridget N.; Flagg, Lucas Q.; Faggin, Eric

    2016-01-01

    For a slowly varying stimulus, the simplest relationship between a neuron's input and output is a rate code, in which the spike rate is a unique function of the stimulus at that instant. In the case of spike-rate adaptation, there is no unique relationship between input and output, because the spike rate at any time depends both on the instantaneous stimulus and on prior spiking (the “history”). To improve the decoding of spike trains produced by neurons that show spike-rate adaptation, we developed a simple scheme that incorporates “history” into a rate code. We utilized this rate-history code successfully to decode spike trains produced by 1) mathematical models of a neuron in which the mechanism for adaptation (IAHP) is specified, and 2) the gastropyloric receptor (GPR2), a stretch-sensitive neuron in the stomatogastric nervous system of the crab Cancer borealis, that exhibits long-lasting adaptation of unknown origin. Moreover, when we modified the spike rate either mathematically in a model system or by applying neuromodulatory agents to the experimental system, we found that changes in the rate-history code could be related to the biophysical mechanisms responsible for altering the spiking. PMID:26888106
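
    The rate-history idea can be captured in a few lines: the decoded rate depends on the instantaneous stimulus minus a term proportional to a low-pass-filtered copy of the cell's own recent firing. The gains and time constant below are arbitrary illustrations, not fitted GPR2 values.

      import numpy as np

      def rate_history_decode(stimulus, dt=0.001, gain=50.0, k=0.6, tau=5.0):
          # Rate-history sketch: spike rate depends on the stimulus and on a
          # history term h that low-pass filters the cell's own past rate.
          h, rates = 0.0, []
          for s in stimulus:
              r = max(gain * s - k * h, 0.0)   # adaptation subtracts the history term
              h += dt * (r - h) / tau          # h relaxes toward r with time constant tau
              rates.append(r)
          return np.array(rates)

      # A step stimulus produces an initial peak that adapts to a lower steady rate.
      t = np.arange(0.0, 20.0, 0.001)
      step = (t > 2.0).astype(float)
      r = rate_history_decode(step)
      print(f"peak rate {r.max():.1f} Hz, adapted rate {r[-1]:.1f} Hz")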

  7. Energy Storage System Safety: Plan Review and Inspection Checklist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Pam C.; Conover, David R.

    Codes, standards, and regulations (CSR) governing the design, construction, installation, commissioning, and operation of the built environment are intended to protect the public health, safety, and welfare. While these documents change over time to address new technology and new safety challenges, there is generally some lag time between the introduction of a technology into the market and the time it is specifically covered in model codes and standards developed in the voluntary sector. After their development, there is also a timeframe of at least a year or two until the codes and standards are adopted. Until existing model codes and standards are updated or new ones are developed and then adopted, one seeking to deploy energy storage technologies or needing to verify the safety of an installation may be challenged in trying to apply currently implemented CSRs to an energy storage system (ESS). The Energy Storage System Guide for Compliance with Safety Codes and Standards (CG), developed in June 2016, is intended to help address the acceptability of the design and construction of stationary ESSs, their component parts, and the siting, installation, commissioning, operations, maintenance, and repair/renovation of ESS within the built environment.

  8. Particle bed reactor modeling

    NASA Technical Reports Server (NTRS)

    Sapyta, Joe; Reid, Hank; Walton, Lew

    1993-01-01

    The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

  9. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion processes of the hydrogen-air system are simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  10. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Astrophysics Data System (ADS)

    Chitsomboon, Tawit

    1992-02-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion processes of the hydrogen-air system are simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.

  11. Convolutional code performance in planetary entry channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.

    1974-01-01

    The planetary entry channel is modeled for communication purposes, representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
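
    The interleaving conclusion is easy to demonstrate: a block interleaver permutes symbols so that a burst of errors from time-correlated fading lands on widely separated source symbols after deinterleaving. A minimal sketch with invented dimensions:

      import numpy as np

      def interleave(symbols, rows, cols):
          # Block interleaver: write row-wise, read column-wise.
          return np.asarray(symbols).reshape(rows, cols).T.flatten()

      def deinterleave(symbols, rows, cols):
          # Inverse permutation: write column-wise, read row-wise.
          return np.asarray(symbols).reshape(cols, rows).T.flatten()

      data = np.arange(24)
      tx = interleave(data, 4, 6)
      assert np.array_equal(deinterleave(tx, 4, 6), data)

      # A deep fade corrupting 4 consecutive channel symbols hits source symbols
      # spaced 6 apart after deinterleaving, so a convolutional decoder sees
      # scattered single errors rather than one long burst.
      print(sorted(tx[8:12]))   # [2, 8, 14, 20]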

  12. FLUSH: A tool for the design of slush hydrogen flow systems

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.

    1990-01-01

    As part of the National Aerospace Plane Project an analytical model was developed to perform calculations for in-line transfer of solid-liquid mixtures of hydrogen. This code, called FLUSH, calculates pressure drop and solid fraction loss for the flow of slush hydrogen through pipe systems. The model solves the steady-state, one-dimensional equation of energy to obtain slush loss estimates. A description of the code is provided as well as a guide for users of the program. Preliminary results are also presented showing the anticipated degradation of slush hydrogen solid content for various piping systems.

  13. Development and Implementation of Dynamic Scripts to Execute Cycled WRF/GSI Forecasts

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Quanli; Watson, Leela

    2014-01-01

    Automating the coupling of data assimilation (DA) and modeling systems is a unique challenge in the numerical weather prediction (NWP) research community. In recent years, the Development Testbed Center (DTC) has released well-documented tools such as the Weather Research and Forecasting (WRF) model and the Gridpoint Statistical Interpolation (GSI) DA system that can be easily downloaded, installed, and run by researchers on their local systems. However, developing a coupled system in which the various preprocessing, DA, model, and postprocessing capabilities are all integrated can be labor-intensive if one has little experience with any of these individual systems. Additionally, operational modeling entities generally have specific coupling methodologies that can take time to understand and develop code to implement properly. To better enable collaborating researchers to perform modeling and DA experiments with GSI, the Short-term Prediction Research and Transition (SPoRT) Center has developed a set of Perl scripts that couple GSI and WRF in a cycling methodology consistent with the use of real-time, regional observation data from the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC). Because Perl is open source, the code can be easily downloaded and executed regardless of the user's native shell environment. This paper will provide a description of this open-source code and descriptions of a number of the use cases that have been performed by SPoRT collaborators using the scripts on different computing systems.
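
    The cycling these scripts implement is, at its core, an alternating analysis-forecast loop. The sketch below (Python rather than the actual Perl, with invented executable names, flags, and file-name patterns, since the real GSI and WRF are configured through namelists and staged input files) shows the control flow only.

      import subprocess
      from datetime import datetime, timedelta

      def run(cmd):
          # Placeholder launcher; on a configured system one would use
          # subprocess.run(cmd, check=True) with the site's real commands.
          print("running:", " ".join(cmd))

      def cycle(start, end, interval_h=6):
          # Alternate analysis (GSI) and forecast (WRF) steps, each forecast
          # supplying the background for the next analysis time.
          t = start
          while t <= end:
              run(["./gsi.exe", "--background", f"wrfout_{t:%Y%m%d%H}"])
              run(["./wrf.exe", "--start", f"{t:%Y%m%d%H}", "--hours", str(interval_h)])
              t += timedelta(hours=interval_h)

      cycle(datetime(2014, 1, 1, 0), datetime(2014, 1, 2, 0))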

  14. Combustion system CFD modeling at GE Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Burrus, D.; Mongia, H.; Tolpadi, Anil K.; Correa, S.; Braaten, M.

    1995-01-01

    This viewgraph presentation discusses key features of current combustion system CFD modeling capabilities at GE Aircraft Engines provided by the CONCERT code; CONCERT development history; modeling applied for designing engine combustion systems; modeling applied to improve fundamental understanding; CONCERT3D results for current production combustors; CONCERT3D model of NASA/GE E3 combustor; HYBRID CONCERT CFD/Monte-Carlo modeling approach; and future modeling directions.

  15. Combustion system CFD modeling at GE Aircraft Engines

    NASA Astrophysics Data System (ADS)

    Burrus, D.; Mongia, H.; Tolpadi, Anil K.; Correa, S.; Braaten, M.

    1995-03-01

    This viewgraph presentation discusses key features of current combustion system CFD modeling capabilities at GE Aircraft Engines provided by the CONCERT code; CONCERT development history; modeling applied for designing engine combustion systems; modeling applied to improve fundamental understanding; CONCERT3D results for current production combustors; CONCERT3D model of NASA/GE E3 combustor; HYBRID CONCERT CFD/Monte-Carlo modeling approach; and future modeling directions.

  16. NEAMS Update. Quarterly Report for October - December 2011.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, K.

    2012-02-16

    The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011; a detailed discussion of this new simulation tool is given. (2) A coolant sub-channel model and a preliminary UO{sub 2} smeared-cracking model were implemented in BISON, the single-pin fuel code; more information on how these models were developed and benchmarked is given. (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations, and a discussion of the significance of this advance is given. (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given. (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP; this is a new initiative. (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability. (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed; this important bridge between subcontinuum and continuum phenomena is discussed. (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm. (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed; an explanation of the difficulty of this simulation is given. (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron. (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report. (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE); more details on the planned NEAMS computing environment are given. (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.

  17. Automatic Processing of Reactive Polymers

    NASA Technical Reports Server (NTRS)

    Roylance, D.

    1985-01-01

    A series of process modeling computer codes were examined. The codes use finite element techniques to determine the time-dependent process parameters operative during nonisothermal reactive flows such as can occur in reaction injection molding or composites fabrication. The use of these analytical codes to perform experimental control functions is examined; since the models can determine the state of all variables everywhere in the system, they can be used in a manner similar to currently available experimental probes. A small but well instrumented reaction vessel in which fiber-reinforced plaques are cured using computer control and data acquisition was used. The finite element codes were also extended to treat this particular process.

  18. SWB-A modified Thornthwaite-Mather Soil-Water-Balance code for estimating groundwater recharge

    USGS Publications Warehouse

    Westenbroek, S.M.; Kelson, V.A.; Dripps, W.R.; Hunt, R.J.; Bradbury, K.R.

    2010-01-01

    A Soil-Water-Balance (SWB) computer code has been developed to calculate spatial and temporal variations in groundwater recharge. The SWB model calculates recharge by use of commonly available geographic information system (GIS) data layers in combination with tabular climatological data. The code is based on a modified Thornthwaite-Mather soil-water-balance approach, with components of the soil-water balance calculated at a daily timestep. Recharge calculations are made on a rectangular grid of computational elements that may be easily imported into a regional groundwater-flow model. Recharge estimates calculated by the code may be output as daily, monthly, or annual values.
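
    The core of such a soil-water-balance calculation is a daily bucket per grid cell. The sketch below shows only that kernel; the real SWB code also routes runoff via curve numbers, handles snow, and operates on GIS grids, so the capacities and fluxes used here are invented.

      def daily_recharge(precip, pet, soil_capacity=100.0, sm0=50.0):
          # Thornthwaite-Mather-style daily bucket: water above field capacity
          # becomes recharge; otherwise ET draws the store down. Units: mm.
          sm, recharge = sm0, []
          for p, e in zip(precip, pet):
              sm = max(sm + p - e, 0.0)          # add rain, remove evapotranspiration
              r = max(sm - soil_capacity, 0.0)   # surplus percolates past the root zone
              sm -= r
              recharge.append(r)
          return recharge

      precip = [0, 30, 80, 0, 10]   # mm/day, illustrative
      pet    = [3,  3,  3, 3,  3]
      print(daily_recharge(precip, pet))   # recharge occurs only on wet days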

  19. Complete analysis of steady and transient missile aerodynamic/propulsive/plume flowfield interactions

    NASA Astrophysics Data System (ADS)

    York, B. J.; Sinha, N.; Dash, S. M.; Hosangadi, A.; Kenzakowski, D. C.; Lee, R. A.

    1992-07-01

    The analysis of steady and transient aerodynamic/propulsive/plume flowfield interactions utilizing several state-of-the-art computer codes (PARCH, CRAFT, and SCHAFT) is discussed. These codes have been extended to include advanced turbulence models, generalized thermochemistry, and multiphase nonequilibrium capabilities. Several specialized versions of these codes have been developed for specific applications. This paper presents a brief overview of these codes followed by selected cases demonstrating steady and transient analyses of conventional as well as advanced missile systems. Areas requiring upgrades include turbulence modeling in a highly compressible environment and the treatment of particulates in general. Recent progress in these areas is highlighted.

  20. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include a static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
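
    The kind of stiff system LSENS integrates can be illustrated with the classic Robertson kinetics problem and an off-the-shelf BDF integrator (SciPy here, rather than LSENS's own FORTRAN machinery); rate constants spanning nine orders of magnitude are what make the system "stiff".

      import numpy as np
      from scipy.integrate import solve_ivp

      def robertson(t, y):
          # Three-species kinetics with widely separated rate constants.
          a, b, c = y
          return [-0.04 * a + 1.0e4 * b * c,
                  0.04 * a - 1.0e4 * b * c - 3.0e7 * b * b,
                  3.0e7 * b * b]

      sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                      method="BDF", rtol=1e-8, atol=1e-10)
      # Mass fractions should still sum to 1 at the end of the integration.
      print(sol.y[:, -1], "sum =", sol.y[:, -1].sum())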

  1. A user's manual for DELSOL3: A computer code for calculating the optical performance and optimal system design for solar thermal central receiver plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kistler, B.L.

    DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.

  2. ASTROP2 users manual: A program for aeroelastic stability analysis of propfans

    NASA Technical Reports Server (NTRS)

    Narayanan, G. V.; Kaza, K. R. V.

    1991-01-01

    A user's manual is presented for the aeroelastic stability and response of propulsion systems computer program called ASTROP2. The ASTROP2 code performs aeroelastic stability analysis of rotating propfan blades. This analysis uses a two-dimensional, unsteady cascade aerodynamics model and a three-dimensional, normal-mode structural model. Analytical stability results from this code are compared with published experimental results of a rotating composite advanced turboprop model and of a nonrotating metallic wing model.

  3. INDOS: conversational computer codes to implement ICRP-10-10A models for estimation of internal radiation dose to man

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Killough, G.G.; Rohwer, P.S.

    1974-03-01

    INDOS1, INDOS2, and INDOS3 (the INDOS codes) are conversational FORTRAN IV programs, implemented for use in time-sharing mode on the ORNL PDP-10 system. These codes use ICRP-10-10A models to estimate the radiation dose to an organ of the body of Reference Man resulting from the ingestion or inhalation of any one of various radionuclides. Two patterns of intake are simulated: intakes at discrete times and continuous intake at a constant rate. The INDOS codes provide tabular output of dose rate and dose vs. time, graphical output of dose vs. time, and punched-card output of organ burden and dose vs. time. The models of internal dose calculation are discussed, and instructions for the use of the INDOS codes are provided. The INDOS codes are available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, P.O. Box X, Oak Ridge, Tennessee 37830. (auth)

  4. RELAP-7 Development Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Zhao, Haihua; Gleicher, Frederick Nathan

    RELAP-7 is a nuclear systems safety analysis code being developed at the Idaho National Laboratory, and is the next-generation tool in the RELAP reactor safety/systems analysis application series. RELAP-7 development began in 2011 to support the Risk Informed Safety Margins Characterization (RISMC) Pathway of the Light Water Reactor Sustainability (LWRS) program. The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical methods, and physical models in order to provide capabilities needed for the RISMC methodology and to support nuclear power safety analysis. The code is being developed based on Idaho National Laboratory's modern scientific software development framework, MOOSE (the Multi-Physics Object-Oriented Simulation Environment). The initial development goal of the RELAP-7 approach focused primarily on the development of an implicit algorithm capable of strong (nonlinear) coupling of the dependent hydrodynamic variables contained in the 1-D/2-D flow models with the various 0-D system reactor components that compose boiling water reactor (BWR) and pressurized water reactor nuclear power plants (NPPs). During Fiscal Year (FY) 2015, the RELAP-7 code was further improved with expanded capability to support BWR and pressurized water reactor NPP analysis. An accumulator model has been developed. The code has also been coupled with other MOOSE-based applications, such as the neutronics code RattleSnake and the fuel performance code BISON, to perform multiphysics analysis. A major design requirement for the implicit algorithm in RELAP-7 is that it be capable of second-order discretization accuracy in both space and time, which eliminates the traditional first-order approximation errors. Second-order temporal accuracy is achieved by a second-order backward temporal difference, and second-order accurate one-dimensional spatial discretization is achieved with the Galerkin approximation of Lagrange finite elements. During FY 2015, numerical verification work was performed to confirm that the RELAP-7 code indeed achieves second-order accuracy in both time and space for single-phase models at the system level.
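
    The temporal part of that order-verification exercise can be mimicked on a scalar model problem: integrate y' = -y with the same second-order backward difference (BDF2) and confirm that the error drops by roughly a factor of four each time the step is halved. This is a minimal sketch, not the RELAP-7 verification suite.

      import numpy as np

      def bdf2_solve(dt, t_end=1.0, lam=1.0):
          # BDF2 for y' = -lam*y: 3*y[n+1] - 4*y[n] + y[n-1] = -2*dt*lam*y[n+1].
          n = int(round(t_end / dt))
          y_prev, y = 1.0, np.exp(-lam * dt)    # seed the two-step method exactly
          for _ in range(n - 1):
              y_prev, y = y, (4.0 * y - y_prev) / (3.0 + 2.0 * dt * lam)
          return y

      for dt in [0.1, 0.05, 0.025, 0.0125]:
          err = abs(bdf2_solve(dt) - np.exp(-1.0))
          print(f"dt={dt:7.4f}  error={err:.3e}")   # successive ratios approach 4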

  5. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.

  6. Verification of the predictive capabilities of the 4C code cryogenic circuit model

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi

    2014-01-01

    The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; an object-oriented a-causal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including also data coming from the supercritical He loop HELIOS at CEA Grenoble. However, all this analysis work was done each time after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To do that, a set of ad-hoc experimental scenarios have been designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C have then been performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  7. Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.

    2013-12-01

    Models of modern hydrologic systems can be complex and involve a variety of operators with varying character. The goal is to implement approximations of such models that are both efficient for the developer and computationally efficient, which is a set of naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus and a variety of numerical methods were compared to hand coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high quality code for solving existing and evolving computational science models.

  8. Multiobjective Collaborative Optimization of Systems of Systems

    DTIC Science & Technology

    2005-06-01

    Front-matter excerpts: appendix listings for the HSC model and optimization description (Appendix K) and the HSC optimization code (Appendix L), and Table 6, system variables of the FPF data set showing minimal HSC impact.

  9. TRAC-PF1 code verification with data from the OTIS test facility. [Once-Through Integral System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childerson, M.T.; Fujita, R.K.

    1985-01-01

    A computer code (TRAC-PF1/MOD1) developed for predicting the transient thermal and hydraulic response of an integral nuclear steam supply system (NSSS) was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small-break LOCA data set for TRAC verification. The major phases of a small-break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool- and auxiliary-feedwater-initiated boiler-condenser mode heat transfer.

  10. Coherent direct sequence optical code multiple access encoding-decoding efficiency versus wavelength detuning.

    PubMed

    Pastor, D; Amaya, W; García-Olcina, R; Sales, S

    2007-07-01

    We present a simple theoretical model, and its experimental verification, for the vanishing of the autocorrelation peak due to wavelength detuning in the coding-decoding process of coherent direct-sequence optical code multiple access systems based on a superstructured fiber Bragg grating. Moreover, this detuning effect has been explored to provide an additional degree of multiplexing and/or optical code tuning.

  11. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space, and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite that integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Second, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We therefore use a fully implicit Crank-Nicolson time-stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivatives symbolically and then outputs code to calculate them.
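
    The Jacobian code-generation step can be illustrated with a symbolic toolkit: differentiate a discrete residual symbolically and emit plain C for the entries. The sketch below uses SymPy on a toy two-variable residual; the MRC's generator operates on its own model description, so the names and forms here are illustrative only.

      import sympy as sp

      u, v, dt = sp.symbols("u v dt")
      # Toy residual of one implicit step for a two-variable nonlinear system.
      f = sp.Matrix([u - dt * (u * v - u**3),
                     v - dt * (u - v)])
      J = f.jacobian([u, v])          # symbolic Jacobian of the residual
      print(J)
      for i in range(2):
          for j in range(2):
              # Emit C code for each Jacobian entry, as a code generator would.
              print(f"J[{i}][{j}] = {sp.ccode(J[i, j])};")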

  12. Development of a Stirling System Dynamic Model With Enhanced Thermodynamics

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2005-01-01

    The Stirling Convertor System Dynamic Model developed at NASA Glenn Research Center is a software model developed from first principles that includes the mechanical and mounting dynamics, the thermodynamics, the linear alternator, and the controller of a free-piston Stirling power convertor, along with the end user load. As such it represents the first detailed modeling tool for fully integrated Stirling convertor-based power systems. The thermodynamics of the model were originally a form of the isothermal Stirling cycle. In some situations it may be desirable to improve the accuracy of the Stirling cycle portion of the model. An option under consideration is to enhance the SDM thermodynamics by coupling the model with Gedeon Associates' Sage simulation code. The result will be a model that gives a more accurate prediction of the performance and dynamics of the free-piston Stirling convertor. A method of integrating the Sage simulation code with the System Dynamic Model is described. Results of SDM and Sage simulation are compared to test data. Model parameter estimation and model validation are discussed.

  13. Development of a Stirling System Dynamic Model with Enhanced Thermodynamics

    NASA Astrophysics Data System (ADS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2005-02-01

    The Stirling Convertor System Dynamic Model developed at NASA Glenn Research Center is a software model developed from first principles that includes the mechanical and mounting dynamics, the thermodynamics, the linear alternator, and the controller of a free-piston Stirling power convertor, along with the end user load. As such it represents the first detailed modeling tool for fully integrated Stirling convertor-based power systems. The thermodynamics of the model were originally a form of the isothermal Stirling cycle. In some situations it may be desirable to improve the accuracy of the Stirling cycle portion of the model. An option under consideration is to enhance the SDM thermodynamics by coupling the model with Gedeon Associates' Sage simulation code. The result will be a model that gives a more accurate prediction of the performance and dynamics of the free-piston Stirling convertor. A method of integrating the Sage simulation code with the System Dynamic Model is described. Results of SDM and Sage simulation are compared to test data. Model parameter estimation and model validation are discussed.
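
    One plausible shape for the SDM/Sage integration described in these two records is a per-time-step exchange between the dynamic model and an external thermodynamics code. The sketch below is purely illustrative: the function names, constants, and the crude pressure closure are invented stand-ins, not the actual SDM/Sage interface.

    ```python
    import math

    # Hypothetical co-simulation loop (all names and constants invented; the
    # abstract does not publish the actual SDM/Sage interface). The dynamic
    # model advances the piston's mechanical state; an external thermodynamic
    # code is queried each step for working-space pressure given the current
    # piston and displacer kinematics.

    def sage_pressure(x_piston, x_displacer):
        """Stand-in for a call into the external cycle code (in practice a
        file or socket exchange); here a crude isothermal p*V = const closure."""
        V0, A = 1.0e-4, 5.0e-4                    # assumed volume and piston area
        V = V0 - A * (x_piston + 0.5 * x_displacer)
        return 1.0e5 * V0 / V                     # illustrative closure only

    def step(state, t, dt):
        x, v = state                              # piston position and velocity
        m, k, c = 0.5, 4.0e4, 2.0                 # assumed mass, spring, damping
        xd = 0.002 * math.sin(2 * math.pi * 60 * t)  # prescribed displacer motion
        p = sage_pressure(x, xd)                  # per-step coupling call
        a = (-k * x - c * v + (p - 1.0e5) * 5.0e-4) / m
        return (x + dt * v, v + dt * a)

    s, t, dt = (0.0, 0.0), 0.0, 1.0e-5
    for _ in range(20000):                        # 0.2 s of coupled simulation
        s = step(s, t, dt)
        t += dt
    print(f"piston x = {s[0]:.6f} m, v = {s[1]:.4f} m/s")
    ```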

  14. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
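
    The Pauli-twirling result mentioned above rests on a standard construction: twirling a channel with Kraus operators {K_k} over the Pauli group yields a Pauli channel with probabilities p_i = (1/d^2) * sum_k |Tr(P_i^dag K_k)|^2. A minimal single-qubit sketch of that construction (ours, not the authors' decoder):

    ```python
    import numpy as np

    # Pauli-twirl a single-qubit channel given its Kraus operators; the twirl
    # over the Pauli group gives a Pauli channel with
    #   p_i = (1/4) * sum_k |Tr(P_i^dag K_k)|^2   (d = 2).
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [I, X, Y, Z]

    def pauli_twirl(kraus):
        return np.array([sum(abs(np.trace(P.conj().T @ K)) ** 2 for K in kraus)
                         for P in paulis]) / 4.0

    # Example: amplitude damping with decay probability gamma (non-Pauli noise).
    gamma = 0.1
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    pI, pX, pY, pZ = pauli_twirl([K0, K1])
    print(f"p_I={pI:.4f} p_X={pX:.4f} p_Y={pY:.4f} p_Z={pZ:.4f}")
    ```

    For amplitude damping, a canonically non-Pauli channel, the twirl produces asymmetric X/Y/Z rates, which illustrates why a decoder optimized for depolarizing noise can be suboptimal for realistic noise.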

  15. Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation

    NASA Astrophysics Data System (ADS)

    Ragni, Matteo

    There are Computer Algebra Systems (CAS) on the market with complete solutions for manipulation of analytical models. But exporting a model that implements specific algorithms on specific platforms, for target languages or for a particular numerical library, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code generation rules, in which best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
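
    An analogous illustration of that separation in Python/sympy (Mr.CAS itself is a Ruby library, so this is a parallel, not its API): the expression object is manipulated once, then handed to independent target-language backends.

    ```python
    import sympy as sp

    # The mathematical expression lives independently of any target language;
    # each backend applies its own generation rules to the same object.
    x, y = sp.symbols("x y")
    expr = sp.simplify((x**2 - y**2) / (x - y))    # CAS simplification -> x + y

    print(sp.ccode(expr, assign_to="out"))         # C backend:       out = x + y;
    print(sp.fcode(expr, assign_to="out"))         # Fortran backend
    print(sp.pycode(expr))                         # Python backend
    ```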

  16. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The models and strategies used in the compiler development are: a flexible granularity model, which allows a compromise between two extreme granularity models; a communication model, which is capable of precisely describing interprocessor communication timings and patterns; a loop type detection strategy, which identifies different types of loops; a critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with associated communication costs; and a loop allocation strategy, which realizes optimum overlapped operation between computation and communication in the system. Using these models, several sample routines of the AIR3D package are examined and tested. The automatically generated codes are highly parallelized to provide the maximum degree of parallelism, achieving speedups of up to 28 on a 32-processor system. A comparison of parallel codes for both the existing and the proposed communication model is performed, and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well toward completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  17. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.

  18. TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFANS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFANS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFANS against full-scale and ADP 22" rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFANS Version 1.4; Volume III: Evaluation of System Codes.

  19. Complexity, information loss, and model building: from neuro- to cognitive dynamics

    NASA Astrophysics Data System (ADS)

    Arecchi, F. Tito

    2007-06-01

    A scientific problem described within a given code is mapped to a corresponding computational problem. We call (algorithmic) complexity the bit length of the shortest instruction which solves the problem. Deterministic chaos in general affects a dynamical system by making the corresponding problem experimentally and computationally heavy, since one must reset the initial conditions at a rate higher than that of information loss (Kolmogorov entropy). One can control chaos by adding to the system new degrees of freedom (information swapping: information lost by chaos is replaced by that arising from the new degrees of freedom). This implies a change of code, or a new augmented model. Within a single code, changing hypotheses is equivalent to fixing different sets of control parameters, each with a different a-priori probability, to be then confirmed and transformed into an a-posteriori probability via Bayes theorem. Sequential application of Bayes rule is nothing else than the Darwinian strategy in evolutionary biology. The sequence is a steepest ascent algorithm, which stops once maximum probability has been reached. At this point the hypothesis exploration stops. By changing code (and hence the set of relevant variables) one can start again to formulate new classes of hypotheses. We call semantic complexity the number of accessible scientific codes, or models, that describe a situation. It is however a fuzzy concept, in so far as this number changes due to interaction of the operator with the system under investigation. These considerations are illustrated with reference to a cognitive task, starting from synchronization of neuron arrays in a perceptual area and tracing the putative path toward model building.
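
    The sequential-Bayes scheme sketched in this abstract reduces, in the simplest case, to repeatedly multiplying hypothesis probabilities by likelihoods until one hypothesis dominates. A minimal sketch, with assumed Gaussian likelihoods and an arbitrary stopping threshold:

    ```python
    import numpy as np

    # A fixed set of hypotheses (control-parameter settings) with prior
    # probabilities, updated observation by observation via Bayes rule;
    # exploration stops once one hypothesis dominates ("steepest ascent").
    rng = np.random.default_rng(1)
    mus = np.array([-1.0, 0.0, 1.0])        # three hypotheses about a mean
    post = np.full(3, 1 / 3)                # a-priori probabilities

    data = rng.normal(0.8, 1.0, size=50)    # truth closest to hypothesis mu=1
    for obs in data:
        like = np.exp(-0.5 * (obs - mus) ** 2)   # Gaussian likelihoods
        post = post * like
        post /= post.sum()                        # sequential Bayes update
        if post.max() > 0.99:                     # assumed stopping rule
            break

    print("selected hypothesis mean:", mus[post.argmax()], "posterior:", post.round(3))
    ```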

  20. A COMPREHENSIVE APPROACH FOR PHYSIOLOGICALLY BASED PHARMACOKINETIC (PBPK) MODELS USING THE EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM) SYSTEM

    EPA Science Inventory

    The implementation of a comprehensive PBPK modeling approach resulted in ERDEM, a complex PBPK modeling system. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. ERDEM efficiently m...

  1. Analysis of SMA Hybrid Composite Structures in MSC.Nastran and ABAQUS

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Patel, Hemant D.

    2005-01-01

    A thermoelastic constitutive model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures was recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilever beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilever beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.

  2. Modeling low-temperature geochemical processes: Chapter 2

    USGS Publications Warehouse

    Nordstrom, D. Kirk; Campbell, Kate M.

    2014-01-01

    This chapter provides an overview of geochemical modeling that applies to water–rock interactions under ambient conditions of temperature and pressure. Topics include modeling definitions, historical background, issues of activity coefficients, popular codes and databases, examples of modeling common types of water–rock interactions, and issues of model reliability. Examples include speciation, microbial redox kinetics and ferrous iron oxidation, calcite dissolution, pyrite oxidation, combined pyrite and calcite dissolution, dedolomitization, seawater–carbonate groundwater mixing, reactive-transport modeling in streams, modeling catchments, and evaporation of seawater. The chapter emphasizes limitations to geochemical modeling: that a proper understanding and ability to communicate model results well are as important as completing a set of useful modeling computations and that greater sophistication in model and code development is not necessarily an advancement. If the goal is to understand how a particular geochemical system behaves, it is better to collect more field data than rely on computer codes.

  3. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in the cladding, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  4. Assessment of Programming Language Learning Based on Peer Code Review Model: Implementation and Experience Report

    ERIC Educational Resources Information Center

    Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying

    2012-01-01

    The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…

  5. Control Law Design in a Computational Aeroelasticity Environment

    NASA Technical Reports Server (NTRS)

    Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.

    2003-01-01

    A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
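
    The central step of this methodology, realizing a state-space model from Markov parameters via the Eigensystem Realization Algorithm, admits a compact sketch. The following is a textbook ERA in Python/numpy, not the paper's implementation:

    ```python
    import numpy as np

    # Textbook ERA: given impulse-response Markov parameters Y[k] = C A^k B,
    # build a block Hankel matrix, truncate its SVD, and realize (A, B, C).
    def era(Y, n, r=20, s=20):
        """Y: list of p x m Markov parameters; n: desired model order."""
        H0 = np.block([[Y[i + j] for j in range(s)] for i in range(r)])
        H1 = np.block([[Y[i + j + 1] for j in range(s)] for i in range(r)])
        U, S, Vt = np.linalg.svd(H0, full_matrices=False)
        U, S, Vt = U[:, :n], np.diag(np.sqrt(S[:n])), Vt[:n]
        A = np.linalg.inv(S) @ U.T @ H1 @ Vt.T @ np.linalg.inv(S)
        p, m = Y[0].shape
        B = (S @ Vt)[:, :m]
        C = (U @ S)[:p, :]
        return A, B, C

    # Quick self-test on a known 2-state SISO system.
    A0 = np.array([[0.9, 0.1], [0.0, 0.8]])
    B0 = np.array([[1.0], [0.5]])
    C0 = np.array([[1.0, 0.0]])
    Y = [C0 @ np.linalg.matrix_power(A0, k) @ B0 for k in range(45)]
    A, B, C = era(Y, n=2)
    print(np.sort(np.linalg.eigvals(A)).round(4))   # should recover 0.8, 0.9
    ```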

  6. HEC Applications on Columbia Project

    NASA Technical Reports Server (NTRS)

    Taft, Jim

    2004-01-01

    NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems is 3 TFLOP/s in peak performance - approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single-system-image machine with one Linux OS, one high performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as provide significant improvements in the codes and the rate of scientific discovery across the scientific disciplines within NASA's missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10240 CPUs) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.

  7. Bio-Physical Ocean Modeling in the Gulf of Mexico

    DTIC Science & Technology

    2009-01-01

    up to 120-hour forecasts for the region. In this configuration, the model receives (initial) boundary information from the operational 1/8° Global NCOM, and it is forced by 3-hourly 1/2° momentum and heat fluxes from the Navy Operational Global Atmospheric Prediction System (NOGAPS). The NCOMGOM model...

  8. Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trambauer, K.

    1997-07-01

    The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, as well as fission product and aerosol release from the core and their transport in the reactor coolant system. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of some basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, initial and boundary conditions; (2) initialization of derived quantities; (3) steady state calculation or input of restart data; and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. The first is the conservation of mass and energy in the different subsystems (fluid, structures, and fission products and aerosols). The second is the convergence of the numerical solution and stability of the calculation. The third aspect is related to code performance and running time.

  9. Benchmark Simulation of Natural Circulation Cooling System with Salt Working Fluid Using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, K. K.; Scarlat, R. O.; Hu, R.

    Liquid salt-cooled reactors, such as the Fluoride Salt-Cooled High-Temperature Reactor (FHR), offer passive decay heat removal through natural circulation using Direct Reactor Auxiliary Cooling System (DRACS) loops. The behavior of such systems should be well-understood through performance analysis. The advanced system thermal-hydraulics tool System Analysis Module (SAM) from Argonne National Laboratory has been selected for this purpose. The work presented here is part of a larger study in which SAM modeling capabilities are being enhanced for the system analyses of FHR or Molten Salt Reactors (MSR). Liquid salt thermophysical properties have been implemented in SAM, as well as properties of Dowtherm A, which is used as a simulant fluid for scaled experiments, for future code validation studies. Additional physics modules to represent phenomena specific to salt-cooled reactors, such as freezing of coolant, are being implemented in SAM. This study presents a useful first benchmark for the applicability of SAM to liquid salt-cooled reactors: it provides steady-state and transient comparisons for a salt reactor system. A RELAP5-3D model of the Mark-1 Pebble-Bed FHR (Mk1 PB-FHR), and in particular its DRACS loop for emergency heat removal, provides steady state and transient results for flow rates and temperatures in the system that are used here for code-to-code comparison with SAM. The transient studied is a loss of forced circulation with SCRAM event. To the knowledge of the authors, this is the first application of SAM to FHR or any other molten salt reactors. While building these models in SAM, any gaps in the code's capability to simulate such systems are identified and addressed immediately, or listed as future improvements to the code.

  10. GUI to Facilitate Research on Biological Damage from Radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, Frances A.; Ponomarev, Artem Lvovich

    2010-01-01

    A graphical-user-interface (GUI) computer program has been developed to facilitate research on the damage caused by highly energetic particles and photons impinging on living organisms. The program brings together, into one computational workspace, computer codes that have been developed over the years, plus codes that will be developed during the foreseeable future, to address diverse aspects of radiation damage. These include codes that implement radiation-track models, codes for biophysical models of breakage of deoxyribonucleic acid (DNA) by radiation, pattern-recognition programs for extracting quantitative information from biological assays, and image-processing programs that aid visualization of DNA breaks. The radiation-track models are based on transport models of interactions of radiation with matter and solution of the Boltzmann transport equation by use of both theoretical and numerical models. The biophysical models of breakage of DNA by radiation include biopolymer coarse-grained and atomistic models of DNA, stochastic-process models of deposition of energy, and Markov-based probabilistic models of placement of double-strand breaks in DNA. The program is designed for use in the NT, 95, 98, 2000, ME, and XP variants of the Windows operating system.

  11. Software Cost Estimating,

    DTIC Science & Technology

    1982-05-13

    Size Of The Software. A favourite measure for software system size is lines of operational code, or deliverable code (operational code plus...regression models, these conversions are either derived from productivity measures using the "cost per instruction" type of equation or they are...appropriate to different development organisations, different project types, different sets of units for measuring e and s, and different items

  12. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, A.M.M.; Paulson, C.C.; Peacock, M.A.

    1995-10-01

    A beamline systems code, that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies, is described. The overall program is a joint Grumman, G.H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  13. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, Alan M. M.; Paulson, C. C.; Peacock, M. A.

    1995-09-15

    A beamline systems code, that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies, is described. The overall program is a joint Grumman, G. H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  14. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  15. Using Kintsch's Discourse Comprehension Theory To Model the User's Coding of an Informative Message from an Enabling Information Retrieval System.

    ERIC Educational Resources Information Center

    Cole, Charles; Mandelblatt, Bertie

    2000-01-01

    Uses Kintsch's proposition-based construction-integration theory of discourse comprehension to detail the user coding operations that occur in each of the three subsystems (Perception, Comprehension, Application) in which users process an information retrieval system (IRS) message. Describes an IRS device made up of two separate parts that enable…

  16. Multi-GNSS precise point positioning (MGPPP) using raw observations

    NASA Astrophysics Data System (ADS)

    Liu, Teng; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Tan, Bingfeng; Chen, Yongchang

    2017-03-01

    A joint-processing model for multi-GNSS (GPS, GLONASS, BDS and GALILEO) precise point positioning (PPP) is proposed, in which raw code and phase observations are used. In the proposed model, inter-system biases (ISBs) and GLONASS code inter-frequency biases (IFBs) are carefully considered, among which GLONASS code IFBs are modeled as a linear function of frequency numbers. To obtain a full-rank function model, the unknowns are re-parameterized, and the estimable slant ionospheric delays and ISBs/IFBs are derived and estimated simultaneously. One month of data (April 2015) from 32 stations of the International GNSS Service (IGS) Multi-GNSS Experiment (MGEX) tracking network has been used to validate the proposed model. Preliminary results show that RMS values of the positioning errors (with respect to external double-difference solutions) for static/kinematic solutions (four systems) are 6.2 mm/2.1 cm (north), 6.0 mm/2.2 cm (east) and 9.3 mm/4.9 cm (up). One-day stabilities of the estimated ISBs, described by STD values, are 0.36 and 0.38 ns for GLONASS and BDS, respectively. Significant ISB jumps are identified between adjacent days for all stations, which are caused by the different satellite clock datums on different days and for different systems. Unlike ISBs, the estimated GLONASS code IFBs are quite stable for all stations, with an average STD of 0.04 ns over a month. A single-difference experiment on a short baseline shows that PPP ionospheric delays are more precise than traditional leveling ionospheric delays.
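
    The IFB parameterization described here, one slope parameter multiplying the GLONASS frequency number, can be illustrated with a deliberately simplified single-epoch least-squares setup (all values below are invented; real PPP additionally estimates clocks, troposphere, ionosphere, and ambiguities):

    ```python
    import numpy as np

    # Sketch of the linear IFB model: b_ifb(k) = beta * k, where k is the
    # GLONASS satellite frequency number (-7..+6), so a single slope
    # parameter beta enters the least-squares design matrix per receiver.
    freq_numbers = np.array([-7, -4, -1, 0, 2, 5, 6])   # tracked GLONASS sats
    true_beta = 0.7                                      # metres per freq number
    rng = np.random.default_rng(2)

    # Simplified code observable: a common range term plus the linear IFB
    # plus noise (everything else deliberately omitted).
    rho = 22_000_000.0
    obs = rho + true_beta * freq_numbers + rng.normal(0, 0.3, freq_numbers.size)

    A = np.column_stack([np.ones(freq_numbers.size), freq_numbers])
    x, *_ = np.linalg.lstsq(A, obs, rcond=None)
    print(f"estimated range {x[0]:.2f} m, IFB slope {x[1]:.3f} m/freq number")
    ```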

  17. 76 FR 50898 - Metconazole; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    .../oppefed1/models/water/index.htm. Based on the Pesticide Root Zone Model/Exposure Analysis Modeling System... affected. The North American Industry Classification System (NAICS) codes have been provided to assist... supporting the petition, EPA has modified the levels at which tolerances are being established for the...

  18. An RL10A-3-3A rocket engine model using the rocket engine transient simulator (ROCETS) software

    NASA Technical Reports Server (NTRS)

    Binder, Michael

    1993-01-01

    Steady-state and transient computer models of the RL10A-3-3A rocket engine have been created using the Rocket Engine Transient Simulation (ROCETS) code. These models were created for several purposes. The RL10 engine is a critical component of past, present, and future space missions; the model will give NASA an in-house capability to simulate the performance of the engine under various operating conditions and mission profiles. The RL10 simulation activity is also an opportunity to further validate the ROCETS program. The ROCETS code is an important tool for modeling rocket engine systems at NASA Lewis. ROCETS provides a modular and general framework for simulating the steady-state and transient behavior of any desired propulsion system. Although the ROCETS code is being used in a number of different analysis and design projects within NASA, it has not been extensively validated for any system using actual test data. The RL10A-3-3A has a ten year history of test and flight applications; it should provide sufficient data to validate the ROCETS program capability. The ROCETS models of the RL10 system were created using design information provided by Pratt & Whitney, the engine manufacturer. These models are in the process of being validated using test-stand and flight data. This paper includes a brief description of the models and comparison of preliminary simulation output against flight and test-stand data.

  19. System Modeling and Diagnostics for Liquefying-Fuel Hybrid Rockets

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Ou, Jeremy; Sanderfer, Dwight; Patterson-Hine, Ann

    2003-01-01

    A Hybrid Combustion Facility (HCF) was recently built at NASA Ames Research Center to study the combustion properties of a new fuel formulation that burns approximately three times faster than conventional hybrid fuels. Researchers at Ames working in the area of Integrated Vehicle Health Management recognized a good opportunity to apply IVHM techniques to a candidate technology for next generation launch systems. Five tools were selected to examine various IVHM techniques for the HCF. Three of the tools, TEAMS (Testability Engineering and Maintenance System), L2 (Livingstone2), and RODON, are model-based reasoning (or diagnostic) systems. Two other tools in this study, ICS (Interval Constraint Simulator) and IMS (Inductive Monitoring System), do not attempt to isolate the cause of the failure but may be used for fault detection. Models of varying scope and completeness were created, both qualitative and quantitative. In each of the models, the structure and behavior of the physical system are captured. In the qualitative models, the temporal aspects of the system behavior and the abstraction of sensor data are handled outside of the model and require the development of additional code. In the quantitative model, less extensive processing code is necessary. Examples of fault diagnoses are given.

  20. A New Design Method of Automotive Electronic Real-time Control System

    NASA Astrophysics Data System (ADS)

    Zuo, Wenying; Li, Yinguo; Wang, Fengjuan; Hou, Xiaobo

    The structure and functionality of automotive electronic control systems are becoming more and more complex, and the traditional manual-programming development mode can no longer satisfy development needs. To meet the diversity and speed demanded of real-time control system development, this paper combines the model-based design approach with automatic code generation technology and proposes a new design method for automotive electronic control systems based on Simulink/RTW. First, algorithms are designed and a control system model is built in Matlab/Simulink. Then, embedded code is generated automatically by RTW, and the automotive real-time control system is developed in an OSEK/VDX operating system environment. The new development mode can significantly shorten the development cycle of automotive electronic control systems; improve a program's portability, reusability and scalability; and has practical value for the development of real-time control systems.

  1. Automated target recognition using passive radar and coordinated flight models

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2003-09-01

    Rather than emitting pulses, passive radar systems rely on illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. These systems are particularly attractive since they allow receivers to operate without emitting energy, rendering them covert. Many existing passive radar systems estimate the locations and velocities of targets. This paper focuses on adding an automatic target recognition (ATR) component to such systems. Our approach to ATR compares the Radar Cross Section (RCS) of targets detected by a passive radar system to the simulated RCS of known targets. To make the comparison as accurate as possible, the received signal model accounts for aircraft position and orientation, propagation losses, and antenna gain patterns. The estimated positions become inputs for an algorithm that uses a coordinated flight model to compute probable aircraft orientation angles. The Fast Illinois Solver Code (FISC) simulates the RCS of several potential target classes as they execute the estimated maneuvers. The RCS is then scaled by the Advanced Refractive Effects Prediction System (AREPS) code to account for propagation losses that occur as functions of altitude and range. The Numerical Electromagnetic Code (NEC2) computes the antenna gain pattern, so that the RCS can be further scaled. The Rician model compares the RCS of the illuminated aircraft with those of the potential targets. This comparison results in target identification.
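
    The final classification step can be read as a Rician likelihood test: the measured RCS amplitude sequence is scored against each candidate class, with the simulated RCS for that class as the line-of-sight component. A hedged sketch (the amplitudes and the noise scale are invented):

    ```python
    import numpy as np
    from scipy.stats import rice

    # Score measured RCS amplitudes against each candidate class under a
    # Rician model; the class with the highest log-likelihood wins.
    sigma = 0.4                                  # assumed diffuse-scatter scale
    candidates = {"fighter": 1.2, "airliner": 3.5, "bizjet": 2.0}  # simulated RCS (arb. units)

    rng = np.random.default_rng(3)
    truth = candidates["airliner"]
    measured = rice.rvs(b=truth / sigma, scale=sigma, size=64, random_state=rng)

    scores = {name: rice.logpdf(measured, b=a / sigma, scale=sigma).sum()
              for name, a in candidates.items()}
    print(max(scores, key=scores.get), {k: round(v, 1) for k, v in scores.items()})
    ```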

  2. CPMIP: measurements of real computational performance of Earth system models in CMIP6

    NASA Astrophysics Data System (ADS)

    Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett

    2017-01-01

    A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions, and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O and/or memory-bound. Such weak-scaling, I/O, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency such as performance counters and scaling curves do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as a basis for a CPMIP, a computational performance model intercomparison project (MIP).

  3. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.

  4. Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.

    PubMed

    Wallace, Rodrick

    2015-06-01

    A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.

  5. Air carrier operations system model

    DOT National Transportation Integrated Search

    2001-03-01

    Representatives from the Federal Aviation Administration (FAA) and several 14 Code of Federal Regulations (CFR) Part 121 air carriers met several times during 1999-2000 to develop a system engineering model of the generic functions of air carrier ope...

  6. Inventory of Safety-related Codes and Standards for Energy Storage Systems with some Experiences related to Approval and Acceptance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, David R.

    The purpose of this document is to identify laws, rules, model codes, codes, standards, regulations, and specifications (CSR) related to safety that could apply to stationary energy storage systems (ESS), along with experiences to date in securing approval of ESS in relation to CSR. This information is intended to assist in securing approval of ESS under current CSR and in identifying new CSR, or revisions to existing CSR, together with the necessary supporting research and documentation that can foster the deployment of safe ESS.

  7. Evaluating a Control System Architecture Based on a Formally Derived AOCS Model

    NASA Astrophysics Data System (ADS)

    Ilic, Dubravka; Latvala, Timo; Varpaaniemi, Kimmo; Vaisanen, Pauli; Troubitsyna, Elena; Laibinis, Linas

    2010-08-01

    Attitude & Orbit Control System (AOCS) refers to a wider class of control systems which are used to determine and control the attitude of the spacecraft while in orbit, based on the information obtained from various sensors. In this paper, we propose an approach to evaluate a typical (yet somewhat simplified) AOCS architecture using formal development - based on the Event-B method. As a starting point, an Ada specification of the AOCS is translated into a formal specification and further refined to incorporate all the details of its original source code specification. This way we are able not only to evaluate the Ada specification by expressing and verifying specific system properties in our formal models, but also to determine how well the chosen modelling framework copes with the level of detail required for an actual implementation and code generation from the derived models.

  8. Recent Improvements of Particle and Heavy Ion Transport code System: PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Iwamoto, Yosuke; Hashimoto, Shintaro; Ogawa, Tatsuhiko; Furuta, Takuya; Abe, Shin-ichiro; Kai, Takeshi; Matsuda, Norihiro; Okumura, Keisuke; Kai, Tetsuya; Iwase, Hiroshi; Sihver, Lembit

    2017-09-01

    The Particle and Heavy Ion Transport code System, PHITS, has been developed through the collaboration of several research institutes in Japan and Europe. This system can simulate the transport of most particles with energies up to 1 TeV (per nucleon for ions) using various nuclear reaction models and data libraries. More than 2,500 registered researchers and technicians have used this system for applications such as accelerator design, radiation shielding and protection, medical physics, and space- and geo-sciences. This paper summarizes the physics models and functions recently implemented in PHITS between versions 2.52 and 2.88, especially those related to source generation useful for simulating brachytherapy and internal exposure to radioisotopes.

  9. RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Andrs; Ray Berry; Derek Gaston

    The document contains the simulation results of a steady state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework - MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that the RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1st, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (Partial Differential Equations) and ODEs (Ordinary Differential Equations) and experimentally based closure models. RELAP-7 will eventually utilize well-posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written in the object-oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes. Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE (Multiphysics Object-Oriented Simulation Environment) is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open source software packages, such as PETSc (a nonlinear solver developed at Argonne National Laboratory) and LibMesh (a Finite Element Analysis package developed at the University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE. Therefore RELAP-7 code developers only need to focus on physics and user experience. By using the MOOSE development environment, the RELAP-7 code is developed by following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE-based applications ranging from 3-D transient neutron transport and detailed 3-D transient fuel performance analysis to long-term material aging. Multi-physics and multiple dimensional analysis capabilities can be obtained by coupling RELAP-7 with other MOOSE-based applications and by leveraging capabilities developed by other DOE programs. This allows restricting the focus of RELAP-7 to systems analysis-type simulations and gives priority to retaining and significantly extending RELAP5's capabilities.

  10. The Environment-Power System Analysis Tool development program. [for spacecraft power supplies

    NASA Technical Reports Server (NTRS)

    Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Wilcox, Katherine G.; Stevens, N. John; Putnam, Rand M.; Roche, James C.

    1989-01-01

    The Environment Power System Analysis Tool (EPSAT) is being developed to provide engineers with the ability to assess the effects of a broad range of environmental interactions on space power systems. A unique user-interface-data-dictionary code architecture oversees a collection of existing and future environmental modeling codes (e.g., neutral density) and physical interaction models (e.g., sheath ionization). The user-interface presents the engineer with tables, graphs, and plots which, under supervision of the data dictionary, are automatically updated in response to parameter change. EPSAT thus provides the engineer with a comprehensive and responsive environmental assessment tool and the scientist with a framework into which new environmental or physical models can be easily incorporated.

  11. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.

  12. Consumer acceptance of a quick response (QR) code for the food traceability system: Application of an extended technology acceptance model (TAM).

    PubMed

    Kim, Yeong Gug; Woo, Eunju

    2016-07-01

    The objectives of this study are to apply the TAM, with the addition of perceived information, to individuals' behavioral intention to use the QR code for the food traceability system, and to determine the moderating effect of food involvement on the relationship between perceived information and perceived usefulness. Results from a survey of 420 respondents are analyzed using structural equation modeling. The study findings reveal that the extended TAM has a satisfactory fit to the data and that the underlying dimensions have a significant effect on consumers' intention to use the QR code for the food traceability system. In addition, food involvement plays a significant moderating role in the relationship between perceived information and perceived usefulness. The implications of this study for future research are discussed.
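
    In regression form, the moderation this study tests appears as an interaction term between perceived information and food involvement. The sketch below uses synthetic data and ordinary least squares purely as an illustration of that logic; the study itself used full structural equation modeling:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Moderation shows up as the info:involve interaction coefficient:
    # synthetic data generated with a true interaction effect of 0.3.
    rng = np.random.default_rng(4)
    n = 420
    info = rng.normal(size=n)
    involve = rng.normal(size=n)
    useful = (0.4 * info + 0.2 * involve + 0.3 * info * involve
              + rng.normal(scale=0.8, size=n))

    df = pd.DataFrame({"useful": useful, "info": info, "involve": involve})
    fit = smf.ols("useful ~ info * involve", data=df).fit()
    print(fit.params.round(3))   # interaction term near 0.3 indicates moderation
    ```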

  13. Design of a recovery system for a reentry vehicle

    NASA Technical Reports Server (NTRS)

    Von Eckroth, Wulf; Garrard, William L.; Miller, Norman

    1993-01-01

    Engineers are often required to design decelerator systems which are deployed in cross-wind orientations. If the system is not designed to minimize 'line sail', damage to the parachutes could result. A Reentry Vehicle Analysis Code (RVAC) and an accompanying graphics animation software program (DISPLAY) are presented in this paper. These computer codes allow the user to quickly apply the Purvis line sail modeling technique to any vehicle and then observe the relative motion of the vehicle, nose cap, suspension lines, pilot and drogue bags and canopies on a computer screen. Data files are created which allow plots of velocities, spatial positions, and dynamic pressures versus time to be generated. The code is an important tool for the design engineer because it integrates two degrees of freedom (DOF) line sail equations with a three DOF model of the reentry body and jettisoned nose cap to provide an animated output.

  14. Assessment of Spacecraft Systems Integration Using the Electric Propulsion Interactions Code (EPIC)

    NASA Technical Reports Server (NTRS)

    Mikellides, Ioannis G.; Kuharski, Robert A.; Mandell, Myron J.; Gardner, Barbara M.; Kauffman, William J. (Technical Monitor)

    2002-01-01

    SAIC is currently developing the Electric Propulsion Interactions Code 'EPIC', an interactive computer tool that allows the construction of a 3-D spacecraft model, and the assessment of interactions between its subsystems and the plume from an electric thruster. EPIC unites different computer tools to address the complexity associated with the interaction processes. This paper describes the overall architecture and capability of EPIC including the physics and algorithms that comprise its various components. Results from selected modeling efforts of different spacecraft-thruster systems are also presented.

  15. ARES: A System for Real-Time Operational and Tactical Decision Support

    DTIC Science & Technology

    1986-12-01

    [Scanned DTIC record; the OCR front matter is largely unreadable. Recoverable information: a Naval Postgraduate School thesis (Monterey, California) describing ARES, a system for real-time operational and tactical decision support. Subject terms: Decision Support System, Logistics Model, Operational.]

  16. A proposal of monitoring and forecasting system for crustal activity in and around Japan using a large-scale high-fidelity finite element simulation codes

    NASA Astrophysics Data System (ADS)

    Hori, T.; Ichimura, T.

    2015-12-01

    Here we propose a system for monitoring and forecasting of crustal activity, especially great interplate earthquake generation and its preparation processes in subduction zones. Basically, we model great earthquake generation as frictional instability on the subducting plate boundary, so the spatio-temporal variation in slip velocity on the plate interface should be monitored and forecasted. Although we can obtain continuous dense surface deformation data on land and partly at the sea bottom, the data obtained are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) a calculation code for crustal deformation and seismic wave propagation using (1), and (3) an inverse analysis or data assimilation code both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop a highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2014, SC14) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 10.7 BlnDOF x 30 K time-steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry at 1 km mesh resolution. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, this meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we will apply it to 3D heterogeneous structure with the high-fidelity FE model.
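
    The ensemble-based sequential assimilation mentioned here is typically an ensemble Kalman filter. A minimal NumPy sketch of one analysis step, shown only to illustrate the idea (names and shapes are assumptions, not the authors' implementation):

      import numpy as np

      # Ensemble Kalman filter analysis step. Each column of X is one
      # ensemble member of the state (e.g., slip velocities on the interface);
      # y is the observation vector, H the observation operator, R the
      # observation error covariance.
      def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
          n, m = X.shape
          A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
          P_HT = A @ (H @ A).T / (m - 1)               # cross covariance
          S = (H @ A) @ (H @ A).T / (m - 1) + R        # innovation covariance
          K = P_HT @ np.linalg.inv(S)                  # Kalman gain
          # perturbed observations, one draw per ensemble member
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
          return X + K @ (Y - H @ X)

      # usage: X = enkf_update(X, y_obs, H_obs, R_obs) at each observation time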

  17. GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT

    NASA Astrophysics Data System (ADS)

    Strubbe, David A.

    GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.

  18. Heart Pump Design for Cleveland Clinic Foundation

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Through a Lewis CommTech Program project with the Cleveland Clinic Foundation, the NASA Lewis Research Center is playing a key role in the design and development of a permanently implantable, artificial heart pump assist device. Known as the Innovative Ventricular Assist System (IVAS), this device will take on the pumping role of the damaged left ventricle of the heart. The key part of the IVAS is a nonpulsatile (continuous flow) artificial heart pump with centrifugal impeller blades, driven by an electric motor. Lewis is part of an industry and academia team, led by the Ohio Aerospace Institute (OAI), that is working with the Cleveland Clinic Foundation to make IVAS a reality. This device has the potential to save tens of thousands of lives each year, since 80 percent of heart attack victims suffer irreversible damage to the left ventricle, the part of the heart that does most of the pumping. Impeller blade design codes and flow-modeling analytical codes will be used in the project. These codes were developed at Lewis for the aerospace industry but will be applicable to the IVAS design project. The analytical codes, which currently simulate the flow through the compressor and pump systems, will be used to simulate the flow within the blood pump in the artificial heart assist device. The Interdisciplinary Technology Office heads up Lewis' efforts in the IVAS project. With the aid of numerical modeling, the blood pump will address many design issues, including some fluid-dynamic design considerations that are unique to the properties of blood. Some of the issues that will be addressed in the design process include hemolysis, deposition, recirculation, pump efficiency, rotor thrust balance, and bearing lubrication. Optimum pumping system performance will be achieved by modeling all the interactions between the pump components. The interactions can be multidisciplinary and, therefore, are influenced not only by the fluid dynamics of adjacent components but also by thermal and structural effects. Lewis-developed flow-modeling codes to be used in the pump simulations will include a one-dimensional code and an incompressible three-dimensional Navier-Stokes flow code. These codes will analyze the prototype pump designed by the Cleveland Clinic Foundation. With an improved understanding of the flow phenomena within the prototype pump, design changes to improve the performance of the pump system can be verified by computer prior to fabrication in order to reduce risks. The use of Lewis flow modeling codes during the design and development process will improve pump system performance and reduce the number of prototypes built in the development phase. The first phase of the IVAS project is to fully develop the prototype in a laboratory environment that uses a water/glycerin mixture as the surrogate fluid to simulate blood. A later phase of the project will include testing in animals for final validation. Lewis will be involved in the IVAS project for 3 to 5 years.

  19. Fault trees for decision making in systems analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Howard E.

    1975-10-09

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
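
    Birnbaum's measure, one of the classical importance metrics such a code can compute, ranks a basic event by how much its failure changes the top-event probability: I_B(i) = P(top | i failed) - P(top | i works). A small Python sketch on a toy fault tree (an illustration of the concept, not the IMPORTANCE code itself):

      from itertools import product

      # Toy fault tree: TOP = A OR (B AND C), with basic event probabilities.
      p = {"A": 0.01, "B": 0.05, "C": 0.10}

      def top(state):                       # state: dict event -> failed?
          return state["A"] or (state["B"] and state["C"])

      def p_top(fixed):
          """Top-event probability with some events fixed failed/working."""
          total = 0.0
          events = [e for e in p if e not in fixed]
          for bits in product([True, False], repeat=len(events)):
              state = dict(fixed, **dict(zip(events, bits)))
              pr = 1.0
              for e, failed in zip(events, bits):
                  pr *= p[e] if failed else 1 - p[e]
              total += pr * top(state)      # bool counts as 0/1
          return total

      for e in p:                           # rank events by importance
          ib = p_top({e: True}) - p_top({e: False})
          print(e, round(ib, 4))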

  20. Incorporation of Condensation Heat Transfer in a Flow Network Code

    NASA Technical Reports Server (NTRS)

    Anthony, Miranda; Majumdar, Alok; McConnaughey, Paul K. (Technical Monitor)

    2001-01-01

    In this paper we have investigated the condensation of water vapor in a short tube. A numerical model of condensation heat transfer was incorporated in a flow network code. The flow network code that we have used in this paper is the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a finite volume based flow network code. Four different condensation models were presented in the paper. Soliman's correlation has been found to be the most stable at low flow rates, which are of particular interest in this application. Another highlight of this investigation is conjugate, or coupled, heat transfer between solid and fluid. This work was done in support of NASA's International Space Station program.
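
    As a representative condensation model (the paper's Soliman correlation is not reproduced here), the classical Nusselt result for laminar film condensation on a vertical surface can be coded directly:

      # Classical Nusselt laminar film condensation, average coefficient for a
      # vertical surface of height L with wall dT below saturation. Shown only
      # as a representative condensation closure for a network code.
      def h_nusselt(k_l, rho_l, rho_v, mu_l, h_fg, g, L, dT):
          """Average condensation heat transfer coefficient, W/m^2-K."""
          return 0.943 * (g * rho_l * (rho_l - rho_v) * h_fg * k_l**3
                          / (mu_l * L * dT)) ** 0.25

      # water vapor at ~1 atm condensing on a 0.1 m surface 10 K below Tsat
      print(h_nusselt(k_l=0.68, rho_l=958.0, rho_v=0.6, mu_l=2.8e-4,
                      h_fg=2.257e6, g=9.81, L=0.1, dT=10.0))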

  1. Multiple Access Interference Reduction Using Received Response Code Sequence for DS-CDMA UWB System

    NASA Astrophysics Data System (ADS)

    Toh, Keat Beng; Tachikawa, Shin'ichi

    This paper proposes combining a novel Received Response (RR) sequence at the transmitter with a Matched Filter-RAKE (MF-RAKE) combining receiver for the Direct Sequence-Code Division Multiple Access Ultra Wideband (DS-CDMA UWB) multipath channel model. It also demonstrates the effectiveness of the RR sequence in Multiple Access Interference (MAI) reduction for the DS-CDMA UWB system. It suggests that using a conventional binary code sequence such as the M sequence or the Gold sequence can generate extra MAI in the UWB system, making it difficult to collect the energy efficiently even when RAKE reception is applied at the receiver. The main purpose of the proposed system is to overcome the performance degradation of UWB transmission due to MAI that occurs during multiple access in the DS-CDMA UWB system. The proposed system improves performance by improving RAKE reception with the RR sequence, which reduces the MAI effect significantly. Simulation results verify that significant improvement can be obtained by the proposed system in UWB multipath channel models.
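
    Direct-sequence spreading and matched-filter despreading are easy to sketch. The NumPy toy below uses a random binary code as a placeholder; the RR sequence construction and the multipath RAKE combining are not reproduced:

      import numpy as np

      rng = np.random.default_rng(1)
      code = rng.choice([-1, 1], size=31)            # placeholder spreading code
      bits = rng.choice([-1, 1], size=8)             # data symbols

      tx = np.concatenate([b * code for b in bits])  # spread transmit signal
      rx = tx + 0.8 * rng.standard_normal(tx.size)   # AWGN channel, no multipath

      # matched filter: correlate each symbol interval with the code
      rx_sym = rx.reshape(len(bits), len(code))
      decisions = np.sign(rx_sym @ code)             # despread + hard decision
      print((decisions == bits).mean())              # symbol accuracy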

  2. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    NASA Astrophysics Data System (ADS)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparison to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001% when comparing the output of the accelerated kernel to the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
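
    The KPP-generated solvers are Rosenbrock (linearly implicit) methods. A one-stage Rosenbrock step, the simplest member of that family, can be sketched in a few lines of Python; this illustrates the numerics only, not the generated three-stage CUDA kernel:

      import numpy as np

      # One-stage Rosenbrock (linearly implicit Euler) step for a stiff
      # kinetics ODE y' = f(y): y_next = y + h * (I - h*gamma*J)^(-1) f(y).
      def ros1_step(f, jac, y, h, gamma=1.0):
          A = np.eye(y.size) - h * gamma * jac(y)
          return y + h * np.linalg.solve(A, f(y))

      # toy 2-species stiff system: fast loss of y0 feeding y1
      f = lambda y: np.array([-1e4 * y[0], 1e4 * y[0] - y[1]])
      jac = lambda y: np.array([[-1e4, 0.0], [1e4, -1.0]])

      y = np.array([1.0, 0.0])
      for _ in range(100):                  # stable despite h*lambda = -100
          y = ros1_step(f, jac, y, h=0.01)
      print(y)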

  3. Identification and Analysis of Critical Gaps in Nuclear Fuel Cycle Codes Required by the SINEMA Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adrian Miron; Joshua Valentine; John Christenson

    2009-10-01

    The current state of the art in nuclear fuel cycle (NFC) modeling is an eclectic mixture of codes with various levels of applicability, flexibility, and availability. In support of advanced fuel cycle systems analyses, especially those by the Advanced Fuel Cycle Initiative (AFCI), the University of Cincinnati, in collaboration with Idaho State University, carried out a detailed review of the existing codes describing various aspects of the nuclear fuel cycle and identified the research and development needs required for a comprehensive model of the global nuclear energy infrastructure and the associated nuclear fuel cycles. Relevant information obtained on the NFC codes was compiled into a relational database that allows easy access to various codes' properties. Additionally, the research analyzed the gaps in the NFC computer codes with respect to their potential integration into programs that perform comprehensive NFC analysis.

  4. Topological quantum error correction in the Kitaev honeycomb model

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
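
    For reference, the 2-body Hamiltonian in question is the standard Kitaev honeycomb model, which sums Ising-like couplings over the three bond orientations of the honeycomb lattice (a LaTeX sketch of the textbook form):

      H = -J_x \sum_{x\text{-links}} \sigma^x_j \sigma^x_k
          -J_y \sum_{y\text{-links}} \sigma^y_j \sigma^y_k
          -J_z \sum_{z\text{-links}} \sigma^z_j \sigma^z_k

    In the strongly anisotropic limit (e.g., J_z much larger than J_x and J_y), its low-energy physics reduces perturbatively to the toric code, which is the sense in which the two models share a phase.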

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    TESP combines existing domain simulators of the electric power grid with new transactive agents, growth models, and evaluation scripts. The existing domain simulators include GridLAB-D for the distribution grid and single-family residential buildings, MATPOWER for transmission and bulk generation, and EnergyPlus for large buildings. More are planned for subsequent versions of TESP. The new elements are: TEAgents - simulate market participants and transactive systems for market clearing; some of this functionality was extracted from GridLAB-D and implemented in Python for customization by PNNL and others. Growth Model - a means for simulating system changes over a multiyear period, including both normal load growth and specific investment decisions, customizable in Python code. Evaluation Script - a means of evaluating different transactive systems through customizable post-processing in Python code. TESP provides a method for other researchers and vendors to design transactive systems and test them in a virtual environment. It allows customization of the key components by modifying Python code.

  6. A proposal of monitoring and forecasting system for crustal activity in and around Japan using a large-scale high-fidelity finite element simulation codes

    NASA Astrophysics Data System (ADS)

    Hori, Takane; Ichimura, Tsuyoshi; Takahashi, Narumi

    2017-04-01

    Here we propose a system for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface (including earthquakes), seismic wave propagation, and crustal deformation. Although we can obtain continuous dense surface deformation data on land and partly on the sea floor, the obtained data are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) a calculation code for crustal deformation and seismic wave propagation using (1), and (3) an inverse analysis or data assimilation code both for structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop a highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2015, SC15) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry at 1 km mesh resolution. Fujita et al. (2016, SC16) improved the code for crustal deformation and achieved 2.05 T-DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, AGU Fall Meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model.

  7. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promise to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
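
    The key identity, that the product of two orthogonal codes is itself an orthogonal code, can be checked numerically. A NumPy toy with real ±1 Walsh codes (the paper's two-bit phase codes extend this to complex visibilities):

      import numpy as np

      # With +/-1 Walsh codes, squaring the code-modulated sum and correlating
      # against the product code recovers the real part of the correlation of
      # the two input signals.
      c1 = np.array([1, 1, -1, -1])
      c2 = np.array([1, -1, 1, -1])
      prod = c1 * c2                        # [1, -1, -1, 1], orthogonal to both

      s1, s2 = 1.0 + 0.5j, 0.7 - 0.2j       # antenna signals, constant per frame
      x = c1 * s1 + c2 * s2                 # code-modulated, power-combined
      vis = np.abs(x) ** 2 @ prod / (2 * len(c1))
      print(vis, (s1 * np.conj(s2)).real)   # both ~ Re{s1 s2*}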

  8. Prediction of plant lncRNA by ensemble machine learning classifiers.

    PubMed

    Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian

    2018-05-02

    In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances in discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have previously been used to identify this enigmatic RNA class, with applications largely focused on animal systems. Our approach uses a training set composed only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply as computationally predicted hypothetical proteins. Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
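
    The stacking meta-learner described here maps naturally onto scikit-learn's StackingClassifier. A minimal sketch on synthetic stand-in features (the paper's sequence-derived features and curated training set are not reproduced):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import (GradientBoostingClassifier,
                                    RandomForestClassifier, StackingClassifier)
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for transcript sequence features and labels.
      X, y = make_classification(n_samples=500, n_features=20, random_state=0)

      # Gradient boosting + random forest base learners, combined by a
      # logistic regression meta-learner.
      stack = StackingClassifier(
          estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                      ("rf", RandomForestClassifier(random_state=0))],
          final_estimator=LogisticRegression(max_iter=1000))

      print(cross_val_score(stack, X, y, cv=5).mean())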

  9. Performance of a Bounce-Averaged Global Model of Super-Thermal Electron Transport in the Earth's Magnetic Field

    NASA Technical Reports Server (NTRS)

    McGuire, Tim

    1998-01-01

    In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer in modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was to also have code which would run well on a variety of architectures.
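
    The figures quoted are consistent with Amdahl's Law: with vectorized fraction f and vector speedup v, the overall speedup (in LaTeX) is

      S(v) = \frac{1}{(1-f) + f/v}, \qquad
      \lim_{v \to \infty} S = \frac{1}{1-f} = \frac{1}{1 - 0.6} = 2.5

    so a 60% vectorized code is capped at the observed speedup of about 2.5, no matter how fast the vector units are.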

  10. Delay Analysis of Car-to-Car Reliable Data Delivery Strategies Based on Data Mulling with Network Coding

    NASA Astrophysics Data System (ADS)

    Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok

    Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of an accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner; a reliable and timely data dissemination service is thus the key building block of VANETs. A data mulling technique combined with three strategies, network coding, erasure coding, and repetition coding, is proposed for the reliable and timely data dissemination service. In particular, vehicles in the opposite direction on a highway are exploited as data mules, mobile nodes physically delivering data to destinations, to overcome intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network coding based strategy outperforms the erasure coding and repetition based strategies.

  11. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    NASA Astrophysics Data System (ADS)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners active in the real-time embedded systems domain. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  12. Adaptive distributed source coding.

    PubMed

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
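
    For context, the Slepian-Wolf bound referenced is the classical admissible rate region for separate encoding of correlated sources X and Y (in LaTeX):

      R_X \ge H(X \mid Y), \qquad
      R_Y \ge H(Y \mid X), \qquad
      R_X + R_Y \ge H(X, Y)

    The syndrome-plus-doping scheme operates near the corner point R_X = H(X | Y) when Y is available at the decoder as side information.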

  13. FY17 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Jung, Y. S.; Smith, M. A.

    2017-09-30

    Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.

  14. Anomalous Upwelling in Nan Wan: July 2008

    DTIC Science & Technology

    2009-12-01

    Front-matter residue removed; recoverable abstract fragments: Oregon State University (OSU) tidal forcing drives the tidal currents ... a global weather forecast model (Navy Operational Global Atmospheric Prediction System) ... the system derives its open ocean boundary conditions from NRL global NCOM (Navy Coastal Ocean Model) (Rhodes et al. 2002), which operates daily.

  15. Channel modeling, signal processing and coding for perpendicular magnetic recording

    NASA Astrophysics Data System (ADS)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.

  16. Modeling the Effects of Ice Accretion on the Low Pressure Compressor and the Overall Turbofan Engine System Performance

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Jorgenson, Philip C. E.; Wright, William B.

    2011-01-01

    The focus of this study is on utilizing a mean line compressor flow analysis code coupled to an engine system thermodynamic code to estimate the effects of ice accretion on the low pressure compressor and to quantify its effects on the engine system throughout a notional flight trajectory. In this paper a temperature range in which engine icing would occur was assumed. This provided a mechanism to locate potential component icing sites and allowed the computational tools to add blockages due to ice accretion in a parametric fashion. Ultimately the location and level of blockage due to icing would be provided by an ice accretion code. To proceed, an engine system modeling code and a mean line compressor flow analysis code were utilized to calculate the flow conditions in the fan-core and low pressure compressor and to identify potential locations within the compressor where ice may accrete. In this study, an "additional blockage" due to the accretion of ice on the metal surfaces has been added to the baseline aerodynamic blockage due to the boundary layer, as well as the blade metal blockage. Once the potential locations of ice accretion were identified, the levels of additional blockage due to accretion were parametrically varied to estimate the effects on low pressure compressor blade row performance within the engine system environment. This study includes detailed analysis of compressor and engine performance during cruise and descent operating conditions at several altitudes within the notional flight trajectory. The purpose of this effort is to develop computer codes that provide a predictive capability to forecast the onset of engine icing events, such that they could ultimately help in the avoidance of these events.
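
    The first-order effect of blockage on a blade row follows directly from continuity: at fixed mass flow, reducing the effective flow area raises the axial velocity. A minimal sketch with illustrative numbers (not the mean line code's actual values):

      # Continuity sketch: ice blockage raises axial velocity at fixed mass flow.
      mdot, rho, A_geom = 20.0, 1.2, 0.3      # kg/s, kg/m^3, m^2 annulus area

      for extra_blockage in (0.00, 0.05, 0.10):  # ice adds to aero blockage
          blockage = 0.04 + extra_blockage       # baseline boundary-layer value
          A_eff = A_geom * (1.0 - blockage)      # effective flow area
          Vx = mdot / (rho * A_eff)              # axial velocity from continuity
          print(f"blockage {blockage:.2f}: Vx = {Vx:.1f} m/s")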

  17. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    FY 2013 annual report focuses on the following areas: vehicle modeling and simulation, component and systems evaluations, laboratory and field evaluations, codes and standards, industry projects, and vehicle systems optimization.

  19. Higher order turbulence closure models

    NASA Technical Reports Server (NTRS)

    Amano, Ryoichi S.; Chai, John C.; Chen, Jau-Der

    1988-01-01

    Theoretical models are developed and numerical studies conducted on various types of flows, both elliptic and parabolic. The purpose of this study is to find better higher order closure models for the computation of complex flows. This report summarizes three new achievements: (1) completion of the Reynolds-stress closure by developing a new pressure-strain correlation; (2) development of a parabolic code to compute jets and wakes; and (3) application to a flow through a 180 deg turnaround duct by adopting a boundary-fitted coordinate system. In the above-mentioned models, near-wall models are developed for the pressure-strain correlation and third moments and incorporated into the transport equations. This addition improved the results considerably and is recommended for future computations. A new parabolic code to solve shear flows without coordinate transformations is developed and incorporated in this study. This code uses the structure of the finite volume method to solve the governing equations implicitly. The code was validated with the experimental results available in the literature.

  20. Radiative Transfer Modeling in Proto-planetary Disks

    NASA Astrophysics Data System (ADS)

    Kasper, David; Jang-Condell, Hannah; Kloster, Dylan

    2016-01-01

    Young Stellar Objects (YSOs) are rich astronomical research environments. Planets form in circumstellar disks of gas and dust around YSOs. With ever increasing capabilities of the observational instruments designed to look at these proto-planetary disks, most notably GPI, SPHERE, and ALMA, more accurate interfaces must be made to connect modeling of the disks with observation. PaRTY (Parallel Radiative Transfer in YSOs) is a code developed previously to model the observable density and temperature structure of such a disk by self-consistently calculating the structure of the disk based on radiative transfer physics. We present upgrades we are implementing to the PaRTY code to improve its accuracy and flexibility. These upgrades include: creating a two-sided disk model, implementing a spherical coordinate system, and implementing wavelength-dependent opacities. These upgrades will address problems in the PaRTY code of infinite optical thickness, calculation under/over-resolution, and wavelength-independent photon penetration depths, respectively. The upgraded code will be used to better model disk perturbations resulting from planet formation.

  1. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  2. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  3. Similarity-based prediction for Anatomical Therapeutic Chemical classification of drugs by integrating multiple data sources.

    PubMed

    Liu, Zhongyang; Guo, Feifei; Gu, Jiangyong; Wang, Yong; Li, Yang; Wang, Dan; Lu, Liang; Li, Dong; He, Fuchu

    2015-06-01

    The Anatomical Therapeutic Chemical (ATC) classification system, widely applied in almost all drug utilization studies, is currently the most widely recognized classification system for drugs. Currently, new drug entries are added into the system only on users' requests, which leads to seriously incomplete drug coverage of the system, and bioinformatics prediction is helpful during this process. Here we propose a novel prediction model of drug-ATC code associations, using logistic regression to integrate multiple heterogeneous data sources including chemical structures, target proteins, gene expression, side-effects and chemical-chemical associations. The model obtains good performance for the prediction not only on ATC codes of unclassified drugs but also on new ATC codes of classified drugs, as assessed by cross-validation and independent test sets, and its efficacy exceeds previous methods. Further, to facilitate use, the model has been developed into a user-friendly web service, SPACE (Similarity-based Predictor of ATC CodE), which, for each submitted compound, gives candidate ATC codes (ranked according to the decreasing probability_score predicted by the model) together with the corresponding supporting evidence. This work not only contributes to knowing drugs' therapeutic, pharmacological and chemical properties, but also provides clues for drug repositioning and side-effect discovery. In addition, the construction of the prediction model provides a general framework for similarity-based data integration which is suitable for other drug-related studies such as target and side-effect prediction. The web service SPACE is available at http://www.bprc.ac.cn/space.
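
    The integration scheme amounts to logistic regression over heterogeneous similarity features. A minimal sketch with random stand-in scores (the real feature construction and training labels are not reproduced):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Each feature is one similarity score between a drug pair (chemical
      # structure, targets, expression, side-effects, ...); the label is
      # whether the pair shares an ATC code. Scores below are synthetic.
      rng = np.random.default_rng(0)
      X = rng.random((1000, 5))          # 5 heterogeneous similarity scores
      y = (X @ [0.8, 0.5, 0.3, 0.2, 0.1]
           + 0.2 * rng.standard_normal(1000)) > 1.0

      clf = LogisticRegression().fit(X, y)
      print(clf.coef_)                         # learned weight per data source
      print(clf.predict_proba(X[:3])[:, 1])    # probability_score-style output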

  4. Development and application of structural dynamics analysis capabilities

    NASA Technical Reports Server (NTRS)

    Heinemann, Klaus W.; Hozaki, Shig

    1994-01-01

    Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to the NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed to render the code truly multidisciplinary and largely automated. Also, work was performed in pre- and post-processing of engineering analysis data.

  5. Validation of a Three-Dimensional Ablation and Thermal Response Simulation Code

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Milos, Frank S.; Gokcen, Tahir

    2010-01-01

    The 3dFIAT code simulates pyrolysis, ablation, and shape change of thermal protection materials and systems in three dimensions. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid system to simulate the shape change due to surface recession. This work is the first part of a code validation study for new capabilities that were added to 3dFIAT. These expanded capabilities include a multi-block moving grid system and an orthotropic thermal conductivity model. This paper focuses on conditions with minimal shape change in which the fluid/solid coupling is not necessary. Two groups of test cases of 3dFIAT analyses of Phenolic Impregnated Carbon Ablator in an arc-jet are presented. In the first group, axisymmetric iso-q shaped models are studied to check the accuracy of three-dimensional multi-block grid system. In the second group, similar models with various through-the-thickness conductivity directions are examined. In this group, the material thermal response is three-dimensional, because of the carbon fiber orientation. Predictions from 3dFIAT are presented and compared with arcjet test data. The 3dFIAT predictions agree very well with thermocouple data for both groups of test cases.

  6. Galen: a third generation terminology tool to support a multipurpose national coding system for surgical procedures.

    PubMed

    Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H

    1999-01-01

    GALEN has developed a new generation of terminology tools based on a language-independent concept reference model that uses a compositional formalism allowing computer processing and multiple reuse. During the 4th Framework Programme project Galen-In-Use, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On one hand, we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand, we support the traditional process for creating a new coding system in medicine, which is very labour-intensive, with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW to process French professional medical language rubrics produced by the national colleges of surgeons into intermediate dissections and into the Grail reference ontology model representation. From this language-independent concept model representation, on one hand we generate controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure. On the other hand, the third-generation classification manager proves to be very powerful for retrieving the initial professional rubrics with different categories of concepts within a semantic network.

  7. Design applications for supercomputers

    NASA Technical Reports Server (NTRS)

    Studerus, C. J.

    1987-01-01

    The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming part of it. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.

  8. Development of a new EMP code at LANL

    NASA Astrophysics Data System (ADS)

    Colman, J. J.; Roussel-Dupré, R. A.; Symbalisty, E. M.; Triplett, L. A.; Travis, B. J.

    2006-05-01

    A new code for modeling the generation of an electromagnetic pulse (EMP) by a nuclear explosion in the atmosphere is being developed. The source of the EMP is the Compton current produced by the prompt radiation (γ-rays, X-rays, and neutrons) of the detonation. As a first step in building a multidimensional EMP code we have written three kinetic codes, Plume, Swarm, and Rad. Plume models the transport of energetic electrons in air. The Plume code solves the relativistic Fokker-Planck equation over a specified energy range that can include ~ 3 keV to 50 MeV and computes the resulting electron distribution function at each cell in a two dimensional spatial grid. The energetic electrons are allowed to transport, scatter, and experience Coulombic drag. Swarm models the transport of lower energy electrons in air, spanning 0.005 eV to 30 keV. The Swarm code performs a full 2-D solution to the Boltzmann equation for electrons in the presence of an applied electric field. Over this energy range the relevant processes to be tracked are elastic scattering, three body attachment, two body attachment, rotational excitation, vibrational excitation, electronic excitation, and ionization. All of these occur due to collisions between the electrons and neutral bodies in air. The Rad code solves the full radiation transfer equation in the energy range of 1 keV to 100 MeV. It includes effects of photo-absorption, Compton scattering, and pair-production. All of these codes employ a spherical coordinate system in momentum space and a cylindrical coordinate system in configuration space. The "z" axis of the momentum and configuration spaces is assumed to be parallel and we are currently also assuming complete spatial symmetry around the "z" axis. Benchmarking for each of these codes will be discussed as well as the way forward towards an integrated modern EMP code.

  9. An Energy Model of Place Cell Network in Three Dimensional Space.

    PubMed

    Wang, Yihong; Xu, Xuying; Wang, Rubin

    2018-01-01

    Place cells are important elements in the spatial representation system of the brain. A considerable amount of experimental data and classical models have been achieved in this area. However, an important question has not been addressed: how is three dimensional space represented by place cells? This question is preliminarily surveyed by the energy coding method in this research. The energy coding method argues that neural information can be expressed by neural energy, and that it is convenient for modeling and computation in neural systems due to the global and linearly additive properties of neural energy. Nevertheless, models of functional neural networks based on the energy coding method have not been established. In this work, we construct a place cell network model to represent three dimensional space on an energy level. We then define the place field and place field center and test the locating performance in three dimensional space. The results imply that the model successfully simulates the basic properties of place cells. Each individual place cell obtains unique spatial selectivity. The place fields in three dimensional space vary in size and energy consumption. Furthermore, the locating error is limited to a certain level, and the simulated place field agrees with experimental results. In conclusion, this is an effective model to represent three dimensional space by the energy method. The research verifies the energy efficiency principle of the brain during neural coding of three dimensional spatial information. It is a first step toward a complete three dimensional spatial representation system of the brain, and helps us further understand how the energy efficiency principle directs the locating, navigating, and path planning functions of the brain.
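
    A common idealization of a place field, in any dimension, is a Gaussian bump of firing rate around the field center. A three-dimensional sketch in NumPy (parameters are illustrative, not fitted to the paper's energy model):

      import numpy as np

      # 3D Gaussian place field: firing rate peaks at the field center and
      # falls off with distance.
      center = np.array([0.5, 0.5, 0.5])      # place field center (m)
      sigma = 0.15                            # field width (m)

      def rate(pos, r_max=20.0):
          """Firing rate in Hz at position pos (shape (..., 3))."""
          d2 = np.sum((pos - center) ** 2, axis=-1)
          return r_max * np.exp(-d2 / (2 * sigma**2))

      print(rate(np.array([0.5, 0.5, 0.5])))  # peak rate at the center
      print(rate(np.array([0.8, 0.5, 0.5])))  # lower rate off-center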

  10. An abstract model of rogue code insertion into radio frequency wireless networks. The effects of computer viruses on the Program Management Office

    NASA Astrophysics Data System (ADS)

    Feudo, Christopher V.

    1994-04-01

    This dissertation demonstrates that inadequately protected wireless LANs are more vulnerable to rogue program attack than traditional LANs. Wireless LANs not only run the same risks as traditional LANs, but they also run additional risks associated with an open transmission medium. Intruders can scan radio waves and, given enough time and resources, intercept, analyze, decipher, and reinsert data into the transmission medium. This dissertation describes the development and instantiation of an abstract model of the rogue code insertion process into a DOS-based wireless communications system using radio frequency (RF) atmospheric signal transmission. The model is general enough to be applied to widely used target environments such as UNIX, Macintosh, and DOS operating systems. The methodology and three modules, the prober, activator, and trigger modules, to generate rogue code and insert it into a wireless LAN were developed to illustrate the efficacy of the model. Also incorporated into the model are defense measures against remotely introduced rogue programs and a cost-benefit analysis that determined that such defenses for a specific environment were cost-justified.

  11. Evolutionary models of rotating dense stellar systems: challenges in software and hardware

    NASA Astrophysics Data System (ADS)

    Fiestas, Jose

    2016-02-01

    We present evolutionary models of rotating self-gravitating systems (e.g., globular clusters, galaxy cores). These models are characterized by the presence of initial axisymmetry due to rotation. Central black hole seeds are alternatively included in our models, and black hole growth due to consumption of stellar matter is simulated until the central potential dominates the kinematics in the core. The goal is to study the long-term evolution (~ Gyr) of relaxed dense stellar systems that deviate from spherical symmetry, their morphology, and their final kinematics. With this purpose, we developed a 2D Fokker-Planck analytical code, whose results we confirm by detailed N-body techniques using a high-performance code developed for GPU machines. We compare our models to available observations of galactic rotating globular clusters and conclude that initial rotation significantly modifies the shape and lifetime of these systems and cannot be neglected in studying the evolution of globular clusters, and the galaxy itself.

  12. National Combustion Code, a Multidisciplinary Combustor Design System, Will Be Transferred to the Commercial Sector

    NASA Technical Reports Server (NTRS)

    Steele, Gynelle C.

    1999-01-01

    The NASA Lewis Research Center and Flow Parametrics will enter into an agreement to commercialize the National Combustion Code (NCC). This multidisciplinary combustor design system utilizes computer-aided design (CAD) tools for geometry creation, advanced mesh generators for creating solid model representations, a common framework for fluid flow and structural analyses, modern postprocessing tools, and parallel processing. This integrated system can facilitate and enhance various phases of the design and analysis process.

  13. 77 FR 25904 - Acequinocyl; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-02

    .../oppefed1/models/water/index.htm . Based on the Pesticide Root Zone Model/Exposure Analysis Modeling System... Classification System (NAICS) codes have been provided to assist you and others in determining whether this... comments received in response to the notice of filing. Based upon review of the data supporting the...

  14. 75 FR 40741 - Hexythiazox; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-14

    .../oppefed1/models/water/index.htm . Based on the Pesticide Root Zone Model /Exposure Analysis Modeling System... affected. The North American Industrial Classification System (NAICS) codes have been provided to assist... review of the data supporting the petition, EPA issued a notice in the Federal Register of March 17, 2010...

  15. Industrial Demand Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.

  16. Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems

    NASA Astrophysics Data System (ADS)

    de Almeida, André LF; Favier, Gérard

    2013-12-01

    This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
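
    The trilinear (PARAFAC) structure underlying such tensor models can be sketched as follows; the factor matrices and dimensions are illustrative stand-ins, not the paper's exact SFSM construction:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical dimensions: R transmitted streams, F subcarriers,
        # T symbol periods, K receive antennas.
        R, F, T, K = 2, 8, 10, 4
        C = rng.choice([-1.0, 1.0], size=(F, R))   # frequency-domain spreading codes
        S = rng.choice([-1.0, 1.0], size=(T, R))   # transmitted symbols (BPSK here)
        H = (rng.normal(size=(K, R)) + 1j * rng.normal(size=(K, R))) / np.sqrt(2)  # flat channel

        # PARAFAC model: x[f, t, k] = sum_r C[f, r] * S[t, r] * H[k, r]
        X = np.einsum('fr,tr,kr->ftk', C, S.astype(complex), H)
        print(X.shape)   # (F, T, K) noiseless received-signal tensor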

  17. Upgrades of Two Computer Codes for Analysis of Turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Liou, Meng-Sing

    2005-01-01

    Major upgrades have been made in two of the programs reported in “Five Computer Codes for Analysis of Turbomachinery”. The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by the addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include the addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, the addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and a modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and the addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation, and both were modified for ease of use in both UNIX and Windows operating systems.

  18. Calculation of natural convection test at Phenix using the NETFLOW++ code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mochizuki, H.; Kikuchi, N.; Li, S.

    2012-07-01

    The present paper describes the modeling and analysis of a natural convection test of the pool-type fast breeder reactor Phenix. The natural convection test was carried out as one of the End of Life Tests of the Phenix. The objective of the present study is to assess the applicability of the NETFLOW++ code, which has been verified thus far using various water facilities and validated using plant data from the loop-type FBR 'Monju' and the loop-type experimental fast reactor 'Joyo'. The Phenix primary heat transport system is modeled based on the benchmark documents available from the IAEA. The calculational model consists of only the primary heat transport system, with boundary conditions on the secondary side of the IHX. The coolant temperature at the primary pump inlet, the primary coolant temperature at the IHX inlet and outlet, the secondary coolant temperatures, and other parameters are calculated by the code, where the heat transfer between the hot and cold pools is explicitly taken into account. A model including the secondary and tertiary systems was also prepared, and its calculated results likewise agree well with the measured data in general. (authors)

  19. A Proposal of Monitoring and Forecasting Method for Crustal Activity in and around Japan with 3-dimensional Heterogeneous Medium Using a Large-scale High-fidelity Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.

    2017-12-01

    Although continuous, dense surface deformation data can now be obtained on land and partly on the sea floor, these data are not fully utilized for monitoring and forecasting crustal activity, such as spatio-temporal variation in slip velocity on the plate interface (including earthquakes), seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation codes for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation codes for both structure and fault slip using (1) and (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation codes that calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. An unstructured finite-element non-linear seismic wave simulation code has been developed; it achieved physics-based urban earthquake simulation with 1.08 T DOF x 6.6 K time steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan, with complicated surface topography and subducting-plate geometry, at 1 km mesh resolution. The crustal deformation code has since been improved and achieved 2.05 T DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of changes in the stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been extended with an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. We are also developing methods for forecasting the slip velocity variation on the plate interface. Although the prototype assumes an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model. Furthermore, the large-scale simulation codes for monitoring are being implemented on GPU clusters, and the analysis tools are being extended with other functions such as the examination of model errors.

  20. A Hypermedia Model for Teaching Technology.

    ERIC Educational Resources Information Center

    Savage, Ernest N.

    Ohio's Model Industrial Technology Systems (MITS) project was initiated in 1987 to achieve the following: identify good activities in the areas of physical, communication, and bio-related technology; standardize the activities' format; and provide a coding system for their eventual use in a hypermedia system. To date, 220 activities have been…

  1. Methodology, status and plans for development and assessment of Cathare code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented, along with the general strategy used for the development and assessment of the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of each Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future developments of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the 3-D modeling.

  2. Transformation of Graphical ECA Policies into Executable PonderTalk Code

    NASA Astrophysics Data System (ADS)

    Romeikat, Raphael; Sinsel, Markus; Bauer, Bernhard

    Rules are becoming more and more important in business modeling and systems engineering and are recognized as a high-level programming paradigm. For the effective development of rules it is desirable to start at a high level, e.g. with graphical rules, and to refine them later into code of a particular rule language for implementation purposes. A model-driven approach is presented in this paper to transform graphical rules into executable code in a fully automated way. The focus is on event-condition-action policies as a special rule type. These are modeled graphically and translated into the PonderTalk language, as sketched below. The approach may be extended to integrate other rule types and languages as well.
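
    A toy version of such a transformation might look as follows; the PonderTalk-flavored output is schematic and does not reproduce real Ponder2 syntax:

        from dataclasses import dataclass

        @dataclass
        class EcaPolicy:
            """A minimal event-condition-action policy, as in the graphical model."""
            name: str
            event: str
            condition: str
            action: str

        def to_pondertalk(p: EcaPolicy) -> str:
            # Schematic PonderTalk-like rendering; the real Ponder2 syntax differs.
            return (f"policy := root/factory/ecapolicy create.\n"
                    f"policy name: \"{p.name}\".\n"
                    f"policy event: \"{p.event}\";\n"
                    f"       condition: [ {p.condition} ];\n"
                    f"       action: [ {p.action} ].\n"
                    f"policy active: true.")

        print(to_pondertalk(EcaPolicy("highLoad", "cpuLoadChanged",
                                      "load > 0.9", "manager notify: \"overload\"")))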

  3. 77 FR 21670 - Acibenzolar-S-

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-11

    .../Exposure Analysis Modeling System and Screening Concentration in Ground Water (SCI-GROW) models, the... Classification System (NAICS) codes have been provided to assist you and others in determining whether this... reliable information.'' This includes exposure through drinking water and in residential settings, but does...

  4. Proposed standards for peer-reviewed publication of computer code

    USDA-ARS?s Scientific Manuscript database

    Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...

  5. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
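
    A minimal numeric sketch of the noise analysis idea, with signal-dependent photon noise followed by a smoothing network layer with additive processing-unit noise (all values invented):

        import numpy as np

        rng = np.random.default_rng(2)

        # A 1D "image": smooth intensity profile sampled by 64 receptors.
        x = np.linspace(0, 1, 64)
        signal = 50 + 40 * np.sin(2 * np.pi * 3 * x)   # mean photon counts per receptor

        photons = rng.poisson(signal)                  # signal-dependent (photon) noise
        kernel = np.array([0.25, 0.5, 0.25])           # simple network smoothing layer
        smoothed = np.convolve(photons, kernel, mode='same')
        output = smoothed + rng.normal(0, 2.0, size=smoothed.shape)  # processing-unit noise

        # Poisson input SNR scales as sqrt(mean count).
        print("input SNR ~", np.sqrt(signal.mean()),
              "output RMS error:", np.sqrt(np.mean((output - signal) ** 2)))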

  6. Environmental Fluid Dynamics Code

    EPA Science Inventory

    The Environmental Fluid Dynamics Code (EFDC)is a state-of-the-art hydrodynamic model that can be used to simulate aquatic systems in one, two, and three dimensions. It has evolved over the past two decades to become one of the most widely used and technically defensible hydrodyn...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    BRISC is a developmental prototype for a next-generation “systems-level” integrated performance and safety code (IPSC) for nuclear reactors. Its development served to demonstrate how a lightweight multi-physics coupling approach can be used to tightly couple the physics models in several different physics codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled “burner” nuclear reactor. For example, the RIO Fluid Flow and Heat Transfer code developed at Sandia (SNL: Chris Moen, Dept. 08005) is used in BRISC to model fluid flow and heat transfer, as well as conduction heat transfer in solids. Because BRISC is a prototype, its most practical application is as a foundation or starting point for developing a true production code. The sub-codes and the associated models and correlations currently employed within BRISC were chosen to cover the required application space and demonstrate feasibility, but were not optimized or validated against experimental data within the context of their use in BRISC.

  8. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons, as either single particles or coupled particles, can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.
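
    For intuition only, here is a toy 1D analogue of Monte Carlo particle transport (not MCNP's physics): photons traverse a slab with sampled free flights, isotropic scattering, and absorption:

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy 1D slab problem: photons enter at x=0, slab thickness L (mean free paths).
        L = 3.0            # slab thickness in units of mean free path
        p_absorb = 0.4     # probability a collision is absorption (else isotropic scatter)
        n = 20_000

        transmitted = 0
        for _ in range(n):
            x, mu = 0.0, 1.0                     # position and direction cosine
            while True:
                x += mu * rng.exponential(1.0)   # sample free flight to next collision
                if x >= L:
                    transmitted += 1
                    break
                if x < 0.0:                      # reflected back out of the slab
                    break
                if rng.random() < p_absorb:      # absorbed at the collision site
                    break
                mu = rng.uniform(-1.0, 1.0)      # isotropic scatter (new cosine)

        print("transmission probability ~", transmitted / n)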

  9. AutoBayes Program Synthesis System Users Manual

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd

    2008-01-01

    Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.

  10. PopCORN: Hunting down the differences between binary population synthesis codes

    NASA Astrophysics Data System (ADS)

    Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.

    2014-02-01

    Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org

  11. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
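
    A minimal sketch of the "reduced mechanism" idea: a one-step global Arrhenius reaction coupled to a heat-release equation and integrated as a stiff ODE system (all constants are illustrative, not fitted to a real fuel):

        import numpy as np
        from scipy.integrate import solve_ivp

        # One-step global model: Fuel -> Products, rate k(T) = A * exp(-Ea / (R T)).
        A, Ea, Rgas = 1.0e9, 1.2e5, 8.314     # 1/s, J/mol, J/(mol K)
        q, cp = 2.0e6, 1200.0                 # heat release (J/kg fuel), specific heat

        def rhs(t, state):
            Y, T = state                       # fuel mass fraction and temperature
            w = A * np.exp(-Ea / (Rgas * T)) * Y
            return [-w, q * w / cp]            # species consumption and heating

        sol = solve_ivp(rhs, (0.0, 0.05), [1.0, 1000.0], method='LSODA', max_step=1e-4)
        print("final T:", sol.y[1, -1], "K, fuel left:", sol.y[0, -1])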

  13. Building Interactive Simulations in Web Pages without Programming.

    PubMed

    Mailen Kootsey, J; McAuley, Grant; Bernal, Julie

    2005-01-01

    A software system is described for building interactive simulations and other numerical calculations in Web pages. The system is based on a new Java-based software architecture named NumberLinX (NLX) that isolates each function required to build the simulation, so that a library of reusable objects could be assembled. The NLX objects are integrated into a commercial Web design program for coding-free page construction. The model description is entered through a wizard-like utility program that also functions as a model editor. The complete system permits very rapid construction of interactive simulations without coding. A wide range of applications beyond interactive calculations is possible with the system, including remote data collection and processing and collaboration over a network.

  14. FASTGRASS implementation in BISON and fission gas behavior characterization in UO2 and connection to validating MARMOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Di; Mo, Kun; Ye, Bei

    2015-09-30

    This activity is supported by the US Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Product Line (FPL). Two major accomplishments in FY 15 are summarized in this report: (1) implementation of the FASTGRASS module in the BISON code; and (2) a Xe implantation experiment for large-grained UO2. Both the BISON and MARMOT codes have been developed by Idaho National Laboratory (INL) to enable next-generation fuel performance modeling capability as part of the NEAMS Program FPL. To contribute to the development of the Moose-Bison-Marmot (MBM) code suite, we have implemented the FASTGRASS fission gas model as a module in the BISON code. Based on rate-theory formulations, the coupled FASTGRASS module in BISON is capable of modeling LWR oxide fuel fission gas behavior and fission gas release. In addition, we conducted a Xe implantation experiment at the Argonne Tandem Linac Accelerator System (ATLAS) in order to produce UO2 samples with the desired bubble morphology. With these samples, further experiments to study fission gas diffusivity are planned to provide validation data for the fission gas release model in the MARMOT code.
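
    For orientation, the classic Booth equivalent-sphere estimate below gives the flavor of fractional fission gas release calculations; FASTGRASS itself solves coupled rate-theory equations, and the constants here are assumed for illustration only:

        import numpy as np

        # Classic Booth equivalent-sphere estimate of fractional fission gas release.
        D = 1.0e-20          # effective gas-atom diffusivity in UO2 (m^2/s), assumed
        a = 5.0e-6           # equivalent sphere radius ~ grain radius (m), assumed
        t = 3.0e7            # time at temperature (s), roughly one year

        tau = D * t / a**2   # dimensionless diffusion time
        f = 6.0 * np.sqrt(tau / np.pi) - 3.0 * tau   # short-time Booth approximation (valid for f < ~0.57)
        print("fractional release ~", f)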

  15. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herman, M.; Capote, R.; Carlson, B.V.

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~ keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the accompanying code EMPEND and completed with neutron resonances extracted from the existing evaluations. The package contains the full EXFOR (CSISRS) library of experimental reaction data that are automatically retrieved during the calculations. Publication quality graphs can be obtained using the powerful and flexible plotting package ZVView. The graphic user interface, written in Tcl/Tk, provides for easy operation of the system. This paper describes the capabilities of the code, outlines physical models and indicates parameter libraries used by EMPIRE to predict reaction cross sections and spectra, mainly for nucleon-induced reactions. Selected applications of EMPIRE are discussed, the most important being an extensive use of the code in evaluations of neutron reactions for the new US library ENDF/B-VII.0. Future extensions of the system are outlined, including a neutron resonance module as well as capabilities of generating covariances, using both KALMAN and Monte-Carlo methods, that are still being advanced and refined.

  16. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell (PAFC) power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the plates per stack. The nonlinear programming code COMPUTE was used to solve this model; the method of a mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem. A sketch of this optimization approach follows.
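
    Below is a compact sketch of the named approach, pattern search plus a penalty function, using a simple exterior quadratic penalty in place of the report's mixed penalty; the objective and constraint are toy stand-ins:

        import numpy as np

        def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
            """Minimize f by Hooke-Jeeves exploratory and pattern moves."""
            def explore(base, s):
                x = base.copy()
                for i in range(len(x)):
                    for d in (+s, -s):
                        trial = x.copy(); trial[i] += d
                        if f(trial) < f(x):
                            x = trial
                            break
                return x
            x = np.asarray(x0, float)
            while step > tol:
                new = explore(x, step)
                if f(new) < f(x):
                    # Pattern move: keep going in the successful direction.
                    pattern = explore(new + (new - x), step)
                    x = pattern if f(pattern) < f(new) else new
                else:
                    step *= shrink
            return x

        # Toy plant model: minimize "cost" subject to g(x) <= 0, via a quadratic
        # exterior penalty (a simplification of the mixed penalty in the report).
        cost = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2
        g = lambda x: x[0] + x[1] - 4.0            # constraint: x0 + x1 <= 4
        mu = 100.0
        penalized = lambda x: cost(x) + mu * max(0.0, g(x)) ** 2

        print(hooke_jeeves(penalized, [0.0, 0.0]))  # ~ (2.5, 1.5), on the constraint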

  17. NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1992-01-01

    The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing of performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.

  18. Evaluation and utilization of beam simulation codes for the SNS ion source and low energy beam transport development

    NASA Astrophysics Data System (ADS)

    Han, B. X.; Welton, R. F.; Stockli, M. P.; Luciano, N. P.; Carmichael, J. R.

    2008-02-01

    The beam simulation codes PBGUNS, SIMION, and LORENTZ-3D were evaluated by modeling the well-diagnosed SNS baseline ion source and low energy beam transport (LEBT) system. An investigation was then conducted using these codes to assist our ion source and LEBT development effort, which is directed at meeting the SNS operational goals as well as the power-upgrade project goals. A high-efficiency H- extraction system, as well as magnetic and electrostatic LEBT configurations capable of transporting up to 100 mA, are studied using these simulation tools.

  19. Proposal for a new content model for the Austrian Procedure Catalogue.

    PubMed

    Neururer, Sabrina B; Pfeiffer, Karl P

    2013-01-01

    The Austrian Procedure Catalogue is used for procedure coding in Austria. Its architecture and content have some major weaknesses. The aim of this study is the presentation of a new potential content model for this classification system, consisting of the main characteristics of health interventions. It is visualized using a UML class diagram. Based on this proposal, an implementation of an ontology for procedure coding is planned.

  20. [Hybrid 3-D rendering of the thorax and surface-based virtual bronchoscopy in surgical and interventional therapy control].

    PubMed

    Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D

    2001-07-01

    The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models, in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization combines the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationships of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for a time-consuming detailed analysis and presentation of the source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.

  1. Automotive Gas Turbine Power System-Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    1997-01-01

    An open cycle gas turbine numerical modelling code suitable for thermodynamic performance analysis (i.e. thermal efficiency, specific fuel consumption, cycle state points, working fluid flowrates, etc.) of automotive and aircraft powerplant applications has been generated at the NASA Lewis Research Center's Power Technology Division. This code can be made available to automotive gas turbine preliminary design efforts, either in its present version or, assuming that resources can be obtained to incorporate empirical models for component weight and packaging volume, in a later version that includes a weight-volume estimator feature. The paper contains a brief discussion of the capabilities of the presently operational version of the code, including a listing of input and output parameters and actual sample output listings.
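
    The kind of state-point arithmetic such a cycle code performs can be sketched in a few lines; the inputs below are illustrative values, not the NASA code's models:

        # Simple open Brayton cycle state-point and efficiency estimate.
        gamma, cp = 1.4, 1005.0          # air properties (J/kg-K)
        T1, r = 288.0, 4.0               # compressor inlet temperature (K), pressure ratio
        T3 = 1300.0                      # turbine inlet temperature (K)
        eta_c, eta_t = 0.85, 0.88        # compressor / turbine isentropic efficiencies

        T2s = T1 * r ** ((gamma - 1) / gamma)    # isentropic compressor exit temperature
        w_c = cp * (T2s - T1) / eta_c            # actual compressor work (J/kg)
        T2 = T1 + w_c / cp
        T4s = T3 * r ** (-(gamma - 1) / gamma)   # isentropic turbine exit temperature
        w_t = eta_t * cp * (T3 - T4s)            # actual turbine work (J/kg)

        q_in = cp * (T3 - T2)
        eta = (w_t - w_c) / q_in
        sfc = 3.6e6 / (eta * 43.0e6)             # kg fuel per kWh at 43 MJ/kg LHV
        print(f"thermal efficiency = {eta:.3f}, SFC = {sfc:.3f} kg/kWh")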

  2. Modeling and Implementation of Cattle/Beef Supply Chain Traceability Using a Distributed RFID-Based Framework in China.

    PubMed

    Liang, Wanjie; Cao, Jing; Fan, Yan; Zhu, Kefeng; Dai, Qiwei

    2015-01-01

    In recent years, traceability systems have been developed as effective tools for improving the transparency of supply chains, thereby guaranteeing the quality and safety of food products. In this study, we proposed a cattle/beef supply chain traceability model and a traceability system based on radio frequency identification (RFID) technology and the EPCglobal network. First of all, the transformations of traceability units were defined and analyzed throughout the cattle/beef chain. Secondly, we described the internal and external traceability information acquisition, transformation, and transmission processes throughout the beef supply chain in detail, and explained a methodology for modeling traceability information using the electronic product code information service (EPCIS) framework. Then, the traceability system was implemented based on the Fosstrak and FreePastry software packages, and animal ear tag codes and electronic product codes (EPCs) were employed to identify traceability units. Finally, a cattle/beef supply chain including a breeding business, a slaughter and processing business, a distribution business, and a sales outlet was used as a case study to evaluate the beef supply chain traceability system. The results demonstrated that the major advantages of the traceability system are the effective sharing of information among businesses and the gapless traceability of the cattle/beef supply chain.
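
    A highly simplified illustration of EPC-keyed trace events and a backward trace query follows; the identifiers are invented, and this is neither the EPCIS schema nor the Fosstrak API:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class TraceEvent:
            """Simplified stand-in for an EPCIS object event."""
            epc: str                         # electronic product code of the unit
            step: str                        # business step in the chain
            location: str
            parent_epc: Optional[str] = None # set when a unit is transformed/repacked

        events = [
            TraceEvent("urn:epc:id:cattle:001", "breeding", "Farm A"),
            TraceEvent("urn:epc:id:beef:501", "slaughter", "Plant B", "urn:epc:id:cattle:001"),
            TraceEvent("urn:epc:id:beef:501", "distribution", "DC C"),
            TraceEvent("urn:epc:id:beef:501", "retail", "Outlet D"),
        ]

        def trace_back(epc: str):
            """Walk the chain from a retail EPC back to the source animal."""
            chain = []
            while epc:
                hits = [e for e in events if e.epc == epc]
                chain.extend(hits)
                parents = [e.parent_epc for e in hits if e.parent_epc]
                epc = parents[0] if parents else None
            return [(e.step, e.location) for e in chain]

        print(trace_back("urn:epc:id:beef:501"))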

  4. Dual coding: a cognitive model for psychoanalytic research.

    PubMed

    Bucci, W

    1985-01-01

    Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model: perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code formulation, and applying an investigative methodology derived from experimental cognitive psychology, a new approach to the verification of interpretations is possible. Some constructions of a patient's story may be seen as more accurate than others, by virtue of their linkage to stored perceptual representations in long-term memory. We can demonstrate that such linking has occurred in functional or operational terms, through evaluating the representation of imagistic content in the patient's speech.

  5. Investigation of Advanced Counterrotation Blade Configuration Concepts for High Speed Turboprop Systems. Task 8: Cooling Flow/heat Transfer Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Topp, David A.; Heidegger, Nathan J.; Delaney, Robert A.

    1994-01-01

    The focus of this task was to validate the ADPAC code for heat transfer calculations. To accomplish this goal, the ADPAC code was modified to allow for a Cartesian coordinate system capability and to add boundary conditions that handle spanwise periodicity and transpiration boundaries. The primary validation case was the film-cooled C3X vane. The cooling hole modeling included both a porous region and a grid in each discrete hole. Predictions for these models, as well as for the smooth wall, compared well with the experimental data.

  6. Advances in Geologic Disposal System Modeling and Application to Crystalline Rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mariner, Paul E.; Stein, Emily R.; Frederick, Jennifer M.

    The Used Fuel Disposition Campaign (UFDC) of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (OFCT) is conducting research and development (R&D) on geologic disposal of used nuclear fuel (UNF) and high-level nuclear waste (HLW). Two of the high priorities for UFDC disposal R&D are design concept development and disposal system modeling (DOE 2011). These priorities are directly addressed in the UFDC Generic Disposal Systems Analysis (GDSA) work package, which is charged with developing a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media (e.g., salt, granite, clay, and deep borehole disposal). This report describes specific GDSA activities in fiscal year 2016 (FY 2016) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code and the Dakota uncertainty sampling and propagation code. Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through engineered barriers and natural geologic barriers to the biosphere. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.

  7. Reactive transport modeling of stable carbon isotope fractionation in a multi-phase multi-component system during carbon sequestration

    DOE PAGES

    Zhang, Shuo; DePaolo, Donald J.; Zheng, Liange; ...

    2014-12-31

    Carbon stable isotopes can be used in characterization and monitoring of CO2 sequestration sites to track the migration of the CO2 plume and identify leakage sources, and to evaluate the chemical reactions that take place in the CO2-water-rock system. However, there are few tools available to incorporate stable isotope information into flow and transport codes used for CO2 sequestration problems. We present a numerical tool for modeling the transport of stable carbon isotopes in multiphase reactive systems relevant to geologic carbon sequestration. The code is an extension of the reactive transport code TOUGHREACT. The transport module of TOUGHREACT was modified to include separate isotopic species of CO2 gas and dissolved inorganic carbon (CO2, CO3^2-, HCO3^-, ...). Any process of transport or reaction influencing a given carbon species also influences its isotopic ratio. Isotopic fractionation is thus fully integrated within the dynamic system. The chemical module and database have been expanded to include isotopic exchange and fractionation between the carbon species in both gas and aqueous phases. The performance of the code is verified by modeling ideal systems and comparing with theoretical results. Efforts are also made to fit field data from the Pembina CO2 injection project in Canada. We show that the exchange of carbon isotopes between dissolved and gaseous carbon species, combined with fluid flow and transport, produces isotopic effects that are significantly different from simple two-component mixing. These effects are important for understanding the isotopic variations observed in field demonstrations.
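
    The delta-notation bookkeeping behind such isotope modeling can be sketched as follows; the delta values and the fractionation number are assumed for illustration only:

        # Delta-notation arithmetic for carbon isotopes in a CO2-DIC system.
        R_STD = 0.0112372          # 13C/12C ratio of the VPDB standard

        def ratio(delta):          # per-mil delta -> isotope ratio
            return R_STD * (1.0 + delta / 1000.0)

        def delta(r):              # isotope ratio -> per-mil delta
            return (r / R_STD - 1.0) * 1000.0

        # Two-component mixing of injected CO2 with baseline DIC (carbon-weighted).
        d_co2, d_dic = -35.0, -5.0                # per mil, assumed
        for f in (0.0, 0.25, 0.5, 0.75, 1.0):     # fraction of carbon from injected CO2
            r_mix = f * ratio(d_co2) + (1 - f) * ratio(d_dic)
            print(f"f={f:.2f}  mixture delta13C = {delta(r_mix):+.1f} per mil")

        # Equilibrium exchange shifts dissolved HCO3- relative to CO2(g) by roughly
        # +8 per mil near 25 C (illustrative value), unlike simple mixing:
        eps = 8.0
        print("HCO3- in equilibrium with injected CO2 ~", d_co2 + eps, "per mil")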

  8. Thermoelectric pump performance analysis computer code

    NASA Technical Reports Server (NTRS)

    Johnson, J. L.

    1973-01-01

    A computer program is presented that was used to analyze and design dual-throat electromagnetic dc conduction pumps for the 5-kWe ZrH reactor thermoelectric system. In addition to a listing of the code and corresponding identification of symbols, the bases for this analytical model are provided.

  9. Calculation of the small scale self-focusing ripple gain spectrum for the CYCLOPS laser system: a status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleck, J.A. Jr.; Morris, J.R.; Thompson, P.F.

    1976-10-01

    The FLAC code (Fourier Laser Amplifier Code) was used to simulate the CYCLOPS laser system up to the third B-module and to calculate the maximum ripple gain spectrum. The model of this portion of CYCLOPS consists of 33 segments that correspond to 20 optical elements (simulation of the cell requires 2 segments and 12 external air spaces). (MHR)

  10. Large liquid rocket engine transient performance simulation system

    NASA Technical Reports Server (NTRS)

    Mason, J. R.; Southwick, R. D.

    1989-01-01

    Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing verification. During this period specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor; all functions necessary for multiple-module operation were completed, but the SOLVER implementation is still under development. A verification system, the Verification Checkout Facility (VCF), allows interactive comparison of module results to stored data as well as providing an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE model.

  11. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  12. ARCADIA{sup R} - A New Generation of Coupled Neutronics / Core Thermal- Hydraulics Code System at AREVA NP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curca-Tivig, Florin; Merk, Stephan; Pautz, Andreas

    2007-07-01

    Anticipating future needs of our customers and wishing to concentrate the synergies and competences existing in the company for the benefit of our customers, AREVA NP decided in 2002 to develop the next generation of coupled neutronics/core thermal-hydraulic (TH) code systems for fuel assembly and core design calculations for both PWR and BWR applications. The global CONVERGENCE project was born: after a feasibility study of one year (2002) and a conceptual phase of another year (2003), development was started at the beginning of 2004. The present paper introduces the CONVERGENCE project, presents the main features of the new code system ARCADIA(R), and concludes on customer benefits. ARCADIA(R) is designed to meet AREVA NP market and customers' requirements worldwide. Besides state-of-the-art physical modeling, numerical performance and industrial functionality, the ARCADIA(R) system features state-of-the-art software engineering. The new code system will bring a series of benefits for our customers: e.g. improved accuracy for heterogeneous cores (MOX/UOX, Gd...), better description of nuclide chains, and access to local neutronics/thermal-hydraulics and possibly thermal-mechanical information (3D pin-by-pin full core modeling). ARCADIA is a registered trademark of AREVA NP. (authors)

  13. Automating the generation of finite element dynamical cores with Firedrake

    NASA Astrophysics Data System (ADS)

    Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas

    2017-04-01

    The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops written at the same high level. When the programme is run, Firedrake generates high performance C code for the resulting numerics which are executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation: A vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows. High aspect ratio layered meshes suitable for ocean and atmosphere domains. Curved elements for high accuracy representations of the sphere. Support for non-finite element operators, such as parametrisations. Access to PETSc, a world-leading library of programmable linear and nonlinear solvers. High performance adjoint models generated automatically by symbolically reasoning about the forward model. This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
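
    As a flavor of the high-level notation described above, here is a minimal Poisson solve written against Firedrake's documented Python interface; the mesh size, function space, and forcing are arbitrary choices for illustration, and the exact API may vary between releases:

        from firedrake import *

        # Weak form of the Poisson problem -div(grad(u)) = f on the unit square,
        # written at the mathematical level; Firedrake generates the low-level C code.
        mesh = UnitSquareMesh(32, 32)
        V = FunctionSpace(mesh, "CG", 1)        # continuous piecewise-linear elements

        u = TrialFunction(V)
        v = TestFunction(V)
        x, y = SpatialCoordinate(mesh)
        f = Function(V)
        f.interpolate(sin(pi * x) * sin(pi * y))

        a = inner(grad(u), grad(v)) * dx        # bilinear form
        L = f * v * dx                          # linear form

        uh = Function(V)
        bc = DirichletBC(V, 0, "on_boundary")   # homogeneous Dirichlet boundary
        solve(a == L, uh, bcs=[bc])             # assembled and solved via PETSc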

  14. Shuttle Global Positioning System (GPS) system design study

    NASA Technical Reports Server (NTRS)

    Nilsen, P. W.

    1979-01-01

    The various integration problems in the Shuttle GPS system were investigated, and the Shuttle GPS link was analyzed. A preamplifier was designed, since the Shuttle GPS antennas must be located remotely from the receiver. Several GPS receiver architecture trade-offs were discussed. The Shuttle RF harmonics and intermodulation products that fall within the GPS receiver bandwidth were analyzed, and GPS PN code acquisition was examined. Since the receiver clock strongly affects both GPS carrier and code acquisition performance, a clock model was developed.

  15. The investigation of bandwidth efficient coding and modulation techniques

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The New Mexico State University Center for Space Telemetering and Telecommunications systems has been, and is currently, engaged in the investigation of trellis-coded modulation (TCM) communication systems. In particular, TCM utilizing M-ary phase shift keying is being studied. The study of carrier synchronization in a TCM environment, or in MPSK systems in general, has been one of the two main thrusts of this grant. This study has involved both theoretical modelling and software simulation of the carrier synchronization problem.
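
    As a toy illustration of why carrier synchronization is critical for MPSK, the following sketch (hypothetical parameters, uncoded 8-PSK rather than TCM) measures symbol error rate under a fixed residual carrier phase error:

        import numpy as np

        rng = np.random.default_rng(4)

        # Uncoded 8-PSK over AWGN with a fixed residual carrier phase error.
        # TCM would add a convolutional encoder before the mapper; this toy only
        # illustrates the sensitivity that makes carrier synchronization critical.
        M, n, snr_db = 8, 200_000, 18.0
        sym = rng.integers(0, M, n)
        tx = np.exp(1j * 2 * np.pi * sym / M)

        sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))
        noise = sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))

        for phase_err_deg in (0.0, 5.0, 10.0):
            rx = tx * np.exp(1j * np.deg2rad(phase_err_deg)) + noise
            det = np.round(np.angle(rx) * M / (2 * np.pi)) % M   # nearest-phase decision
            print(f"phase error {phase_err_deg:4.1f} deg -> SER = {(det != sym).mean():.2e}")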

  16. A hadron-nucleus collision event generator for simulations at intermediate energies

    NASA Astrophysics Data System (ADS)

    Ackerstaff, K.; Bisplinghoff, J.; Bollmann, R.; Cloth, P.; Diehl, O.; Dohrmann, F.; Drüke, V.; Eisenhardt, S.; Engelhardt, H. P.; Ernst, J.; Eversheim, P. D.; Filges, D.; Fritz, S.; Gasthuber, M.; Gebel, R.; Greiff, J.; Gross, A.; Gross-Hardt, R.; Hinterberger, F.; Jahn, R.; Lahr, U.; Langkau, R.; Lippert, G.; Maschuw, R.; Mayer-Kuckuk, T.; Mertler, G.; Metsch, B.; Mosel, F.; Paetz gen. Schieck, H.; Petry, H. R.; Prasuhn, D.; von Przewoski, B.; Rohdjeß, H.; Rosendaal, D.; Roß, U.; von Rossen, P.; Scheid, H.; Schirm, N.; Schulz-Rojahn, M.; Schwandt, F.; Scobel, W.; Sterzenbach, G.; Theis, D.; Weber, J.; Wellinghausen, A.; Wiedmann, W.; Woller, K.; Ziegler, R.; EDDA-Collaboration

    2002-10-01

    Several available codes for hadronic event generation and shower simulation are discussed and their predictions are compared to experimental data in order to obtain a satisfactory description of hadronic processes in Monte Carlo studies of detector systems for medium energy experiments. The most reasonable description is found for the intra-nuclear-cascade (INC) model of Bertini, which employs a microscopic description of the INC, taking into account elastic and inelastic pion-nucleon and nucleon-nucleon scattering. The isobar model of Sternheimer and Lindenbaum is used to simulate the inelastic elementary collisions inside the nucleus via formation and decay of the Δ33-resonance, which, however, limits the model at higher energies. To overcome this limitation, the INC model has been extended with the resonance model of the HADRIN code, considering all resonances in elementary collisions contributing more than 2% to the total cross-section up to kinetic energies of 5 GeV. In addition, angular distributions based on phase shift analysis are used for elastic nucleon-nucleon as well as elastic and charge-exchange pion-nucleon scattering. Kaons and antinucleons can also be treated as projectiles. Good agreement with experimental data is found predominantly at lower projectile energies, i.e. in the regime of the Bertini code. Both the original and the extended Bertini model have been implemented as shower codes in the high energy detector simulation package GEANT-3.14, now allowing its use in full Monte Carlo studies of detector systems at intermediate energies. GEANT-3.14 was used here mainly for its powerful geometry and analysis packages, which the complex EDDA detector system requires.

  17. Decaf: Decoupled Dataflows for In Situ High-Performance Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreher, M.; Peterka, T.

    Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components has been demonstrated in situ on HPC systems.
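
    The abstract mentions Decaf's Python API for describing workflow graphs but does not show it. Purely as an illustration of the idea, and explicitly not Decaf's actual API (every name below is hypothetical), a graph of two coupled tasks and one dataflow link could be described as plain Python data:

      # Hypothetical sketch of a two-task workflow graph description.
      # Real Decaf API names differ; this only illustrates the structure.
      workflow = {
          "nodes": [
              {"name": "simulation", "nprocs": 64, "func": "md_run"},
              {"name": "viz",        "nprocs": 8,  "func": "render"},
          ],
          "edges": [
              # A dataflow link with its own resources and redistribution policy.
              {"source": "simulation", "target": "viz",
               "nprocs": 4, "redistribute": "round_robin"},
          ],
      }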

  18. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code for modeling the performance of wave energy converters (WECs) in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion in 6 degrees of freedom using the Cummins time-domain impulse response formulation. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power-take-off (PTO) system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open source AQWA/WAMIT/NEMOH coefficient parser).
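
    The Cummins formulation that WEC-Sim solves takes, in a single degree of freedom, the standard form (reproduced here for context, not transcribed from the WEC-Sim documentation):

      (m + A_\infty)\,\ddot{x}(t) + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,d\tau + C\,x(t) = F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t)

    where m is the body mass, A_infinity the infinite-frequency added mass, K the radiation impulse-response kernel, C the hydrostatic restoring stiffness, and F_exc and F_PTO the wave-excitation and power-take-off forces.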

  19. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamics (CFD) volume grids for Phase 1 of the X34 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and the Volume Grid Manipulator (VGM) code are used for the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  20. Multi-scale modeling of irradiation effects in spallation neutron source materials

    NASA Astrophysics Data System (ADS)

    Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.

    2011-07-01

    Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) for nuclear reactions, and modeled the interactions between high energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte-Carlo (kMC) methods in each subcascade. The third part estimated the evolution of the damage structure by reaction kinetics analysis. The fourth part estimated the change in mechanical properties using three-dimensional discrete dislocation dynamics (DDD). Using this four-part code, stress-strain curves for high energy proton irradiated Ni were obtained.

  1. 3D-DIVIMP-HC modeling analysis of methane injection into DIII-D using the DiMES porous plug injector

    NASA Astrophysics Data System (ADS)

    Mu, Y.; McLean, A. G.; Elder, J. D.; Stangeby, P. C.; Bray, B. D.; Brooks, N. H.; Davis, J. W.; Fenstermacher, M. E.; Groth, M.; Lasnier, C. J.; Rudakov, D. L.; Watkins, J. G.; West, W. P.; Wong, C. P. C.

    2009-06-01

    A self-contained gas injection system for the Divertor Material Evaluation System (DiMES) on DIII-D, the porous plug injector (PPI), has been employed for in situ study of chemical erosion in the tokamak divertor environment by injection of CH4 [A.G. McLean et al., these Proceedings]. A new interpretive code, 3D-DIVIMP-HC, has been developed and applied to the interpretation of the CH, CI, and CII emissions. Particular emphasis is placed on the interpretation of 2D filtered-camera (TV) pictures in CH, CI and CII light taken from a view essentially straight down on the PPI. The code replicates sufficient measurements to conclude that most of the basic elements of the controlling physics and chemistry have been identified and incorporated in the code model.

  2. A Comparison of Air Force Data Systems

    DTIC Science & Technology

    1993-08-01

    [Only fragments of this report's abstract survive extraction.] The study used the SPQR software cost model, chosen because it provides a straightforward means of modeling the proposed enhancements. The effort estimated by SPQR (23,917 hours) was costed at $69 per hour, for a total of $1,650,273, with an additional 10 percent added for generating or modifying the Middleware. Subject terms include: SLOC, source lines of code; SPO, System Program Office; SPQR, System Product Quality Reporting; SSC, Standard Systems Center; SSI, system-to-system.

  3. Python-Assisted MODFLOW Application and Code Development

    NASA Astrophysics Data System (ADS)

    Langevin, C.

    2013-12-01

    The U.S. Geological Survey (USGS) has a long history of developing and maintaining free, open-source software for hydrological investigations. The MODFLOW program is one of the most popular hydrologic simulation programs released by the USGS, and it is considered to be the most widely used groundwater flow simulation code. MODFLOW was written using a modular design and a procedural FORTRAN style, which resulted in code that could be understood, modified, and enhanced by many hydrologists. The code is fast, and because it uses standard FORTRAN it can be run on most operating systems. Most MODFLOW users rely on proprietary graphical user interfaces for constructing models and viewing model results. Some recent efforts, however, have focused on construction of MODFLOW models using open-source Python scripts. Customizable Python packages, such as FloPy (https://code.google.com/p/flopy), can be used to generate input files, read simulation results, and visualize results in two and three dimensions. Automating this sequence of steps leads to models that can be reproduced directly from original data and rediscretized in space and time. Python is also being used in the development and testing of new MODFLOW functionality. New packages and numerical formulations can be quickly prototyped and tested first with Python programs before implementation in MODFLOW. This is made possible by the flexible object-oriented design capabilities available in Python, the ability to call FORTRAN code from Python, and the ease with which linear systems of equations can be solved using SciPy, for example. Once new features are added to MODFLOW, Python can then be used to automate comprehensive regression testing and ensure reliability and accuracy of new versions prior to release.
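
    A minimal FloPy script of the kind described, building and running a one-layer steady-state MODFLOW-2005 model, might look as follows (package names are from FloPy's public API; the grid values are arbitrary, and the mf2005 executable must be on the PATH):

      import flopy

      # Build a one-layer steady-state MODFLOW-2005 model entirely from script.
      m = flopy.modflow.Modflow("demo", exe_name="mf2005")
      dis = flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                                     delr=100.0, delc=100.0, top=10.0, botm=0.0)
      bas = flopy.modflow.ModflowBas(m, ibound=1, strt=10.0)
      lpf = flopy.modflow.ModflowLpf(m, hk=5.0)   # horizontal hydraulic conductivity
      pcg = flopy.modflow.ModflowPcg(m)           # solver
      oc = flopy.modflow.ModflowOc(m)             # output control

      m.write_input()   # generate the MODFLOW input files
      m.run_model()     # run MODFLOW and collect the listing file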

  4. Definition of the Semisubmersible Floating System for Phase II of OC4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, A.; Jonkman, J.; Masciola, M.

    Phase II of the Offshore Code Comparison Collaboration Continuation (OC4) project involved modeling of a semisubmersible floating offshore wind system. This report documents the specifications of the floating system, which were needed by the OC4 participants for building aero-hydro-servo-elastic models.

  5. IDEF3 Formalization Report

    DTIC Science & Technology

    1991-10-01

    [Only report-documentation-page fragments of this abstract survive extraction.] Subject terms: engineering management information systems; method formalization; information engineering; process modeling; information systems requirements definition methods; knowledge acquisition methods; systems engineering. A cited reference: Corynen, G. C., 1975, A Mathematical Theory of Modeling and Simulation, Ph.D. Dissertation.

  6. Proposal for a National Serials Data System.

    ERIC Educational Resources Information Center

    Adams, Scott

    A hypothetical model is given for a National Serials Data System based on the best educated guesses of what the system should do and how, therefore, it should function. The model focuses attention on the ultimate goal rather than on the decision-making processes relating to choice of data elements, unique identification codes, etc. This conceptual…

  7. Exploration of Uncertainty in Glacier Modelling

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    1999-01-01

    There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.

  8. An OpenMI Implementation of a Water Resources System using Simple Script Wrappers

    NASA Astrophysics Data System (ADS)

    Steward, D. R.; Aistrup, J. A.; Kulcsar, L.; Peterson, J. M.; Welch, S. M.; Andresen, D.; Bernard, E. A.; Staggenborg, S. A.; Bulatewicz, T.

    2013-12-01

    This team has developed an adaptation of the Open Modelling Interface (OpenMI) that utilizes Simple Script Wrappers. Code is made OpenMI compliant through organization within three modules that initialize, perform time steps, and finalize results. A configuration file is prepared that specifies the variables a model expects to receive as input and those it will make available as output. An example is presented for groundwater, economic, and agricultural production models in the High Plains Aquifer region of Kansas. Our models use the programming environments in Scilab and Matlab, along with legacy Fortran code, and our Simple Script Wrappers can also use Python. These models are collectively run within this interdisciplinary framework from initial conditions into the future. It will be shown that, by applying constraints to one model, the impact on changes to the water resources system may be assessed.
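
    A model wrapped this way exposes the three-module structure as plain script entry points. The sketch below is illustrative only; the function names and the toy groundwater balance are our assumptions, not the actual Simple Script Wrapper conventions:

      # Illustrative three-part model script for a wrapper expecting
      # initialize / time-step / finalize entry points (names assumed).

      state = {}

      def initialize(config_file):
          # Read initial heads, parameters, etc. (hard-coded here).
          state["head"] = 100.0

      def perform_time_step(pumping_rate):
          # One annual step of a toy groundwater balance: the head declines
          # in proportion to net withdrawal (illustrative physics only).
          state["head"] -= 0.001 * pumping_rate
          return state["head"]   # exposed as an OpenMI output variable

      def finalize():
          print("final head:", state["head"])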

  9. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
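
    The decomposition relies on the standard asymmetric assume-guarantee rule: if component M_1 satisfies property P under assumption A, and component M_2 discharges A, then their composition satisfies P:

      \frac{\langle A \rangle\; M_1\; \langle P \rangle \qquad \langle \mathit{true} \rangle\; M_2\; \langle A \rangle}{\langle \mathit{true} \rangle\; M_1 \parallel M_2\; \langle P \rangle}

    The automatically generated component assumption plays the role of A, so neither component ever has to be model checked against the full state space of the other.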

  10. Residential Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.

  11. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    NASA Astrophysics Data System (ADS)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu

    2017-01-01

    This work aims to improve HTGR neutron physics design calculations through uncertainty analysis with the use of cross-section covariance information. Methodology and codes for the preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of the SCALE-6 code system were developed. A 69-group library of covariance information in a special format for the main isotopes and elements typical of high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of the uncertainties, associated with nuclear data, in analyses of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for the main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model, were performed. These uncertainties were estimated by the developed technology with the use of the WIMS-D code and modules of the SCALE-6 code system, namely TSUNAMI, KENO-VI and SAMS. The eight most important reactions on isotopes for the MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).

  12. Initial Kernel Timing Using a Simple PIM Performance Model

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David

    2005-01-01

    This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: linked list traversal, sum of leaf nodes on a tree, bitonic sort, vector sum, and Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including which programming styles seem to work best from the point of view of both expressiveness and performance.
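
    A performance model of the kind referred to can be as simple as a cycle-count formula. The sketch below is a hypothetical illustration with made-up constants, not the Cascade model:

      # Toy PIM timing model (all constants are illustrative, not Cascade numbers).
      T_LOCAL_MEM = 10    # cycles: load from the PIM node's own memory
      T_PARCEL = 200      # cycles: send a parcel to a remote node
      T_OP = 1            # cycles: one ALU operation

      def linked_list_traversal(n_nodes, p_remote):
          """Expected cycles to chase n_nodes pointers when a fraction
          p_remote of the hops leave the local PIM node."""
          local = (1.0 - p_remote) * T_LOCAL_MEM
          remote = p_remote * (T_PARCEL + T_LOCAL_MEM)
          return n_nodes * (local + remote + T_OP)

      print(linked_list_traversal(10_000, p_remote=0.2))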

  13. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.
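
    Among the features added, Woodcock (delta) tracking samples free paths against a majorant cross-section and rejects 'virtual' collisions, which avoids stopping particles at every voxel boundary. A minimal 1D sketch of the technique (ours, not gPENELOPE source):

      import math
      import random

      def woodcock_step(x, direction, sigma_of, sigma_max):
          """Advance a particle to its next *real* collision site.
          sigma_of(x): total cross-section at position x (1/cm)
          sigma_max:   majorant >= sigma_of(x) everywhere (1/cm)."""
          while True:
              # Sample a flight distance as if the medium were homogeneous
              # with the majorant cross-section.
              x += direction * (-math.log(1.0 - random.random()) / sigma_max)
              # Accept a real collision with probability sigma(x)/sigma_max;
              # otherwise the collision is virtual and the flight continues.
              if random.random() < sigma_of(x) / sigma_max:
                  return x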

  14. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model

    PubMed Central

    Wang, Yuhe; Mazur, Thomas R.; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H. Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H. Harold

    2016-01-01

    Purpose: The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. Methods: PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian’s KMC. Results: An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). Conclusions: A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems. PMID:27370123

  15. A GPU-accelerated Monte Carlo dose calculation platform and its application toward validating an MRI-guided radiation therapy beam model.

    PubMed

    Wang, Yuhe; Mazur, Thomas R; Green, Olga; Hu, Yanle; Li, Hua; Rodriguez, Vivian; Wooten, H Omar; Yang, Deshan; Zhao, Tianyu; Mutic, Sasa; Li, H Harold

    2016-07-01

    The clinical commissioning of IMRT subject to a magnetic field is challenging. The purpose of this work is to develop a GPU-accelerated Monte Carlo dose calculation platform based on PENELOPE and then use the platform to validate a vendor-provided MRIdian head model toward quality assurance of clinical IMRT treatment plans subject to a 0.35 T magnetic field. PENELOPE was first translated from FORTRAN to C++ and the result was confirmed to produce equivalent results to the original code. The C++ code was then adapted to CUDA in a workflow optimized for GPU architecture. The original code was expanded to include voxelized transport with Woodcock tracking, faster electron/positron propagation in a magnetic field, and several features that make gPENELOPE highly user-friendly. Moreover, the vendor-provided MRIdian head model was incorporated into the code in an effort to apply gPENELOPE as both an accurate and rapid dose validation system. A set of experimental measurements were performed on the MRIdian system to examine the accuracy of both the head model and gPENELOPE. Ultimately, gPENELOPE was applied toward independent validation of patient doses calculated by MRIdian's KMC. An acceleration factor of 152 was achieved in comparison to the original single-thread FORTRAN implementation with the original accuracy being preserved. For 16 treatment plans including stomach (4), lung (2), liver (3), adrenal gland (2), pancreas (2), spleen (1), mediastinum (1), and breast (1), the MRIdian dose calculation engine agrees with gPENELOPE with a mean gamma passing rate of 99.1% ± 0.6% (2%/2 mm). A Monte Carlo simulation platform was developed based on a GPU-accelerated version of PENELOPE. This platform was used to validate that both the vendor-provided head model and fast Monte Carlo engine used by the MRIdian system are accurate in modeling radiation transport in a patient using 2%/2 mm gamma criteria. Future applications of this platform will include dose validation and accumulation, IMRT optimization, and dosimetry system modeling for next generation MR-IGRT systems.

  16. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  17. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  18. 76 FR 18895 - Hexythiazox; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Zone Model/Exposure Analysis Modeling System (PRZM/EXAMS), the estimated drinking water concentration... Classification System (NAICS) codes have been provided to assist you and others in determining whether this.... Based upon review of the data supporting the petition, EPA has revised the proposed tolerance levels for...

  19. 78 FR 25396 - Glyphosate; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-01

    .../water/index.htm . Based on the Pesticide Root Zone Model/Exposure Analysis Modeling System (PRZM/EXAMS.... The following list of North American Industrial Classification System (NAICS) codes is not intended to... the data supporting the petition, EPA has modified the levels at which tolerances are being...

  20. 77 FR 10962 - Flazasulfuron; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    .../water/index.htm . Based on the Pesticide Root Zone Model/Exposure Analysis Modeling System (PRZM/EXAMS... Classification System (NAICS) codes have been provided to assist you and others in determining whether this... reliable information.'' This includes exposure through drinking water and in residential settings, but does...

  1. Software Considerations for Subscale Flight Testing of Experimental Control Laws

    NASA Technical Reports Server (NTRS)

    Murch, Austin M.; Cox, David E.; Cunningham, Kevin

    2009-01-01

    The NASA AirSTAR system has been designed to address the challenges associated with safe and efficient subscale flight testing of research control laws in adverse flight conditions. In this paper, software elements of this system are described, with an emphasis on components which allow for rapid prototyping and deployment of aircraft control laws. Through model-based design and automatic coding, a common code base is used for desktop analysis, piloted simulation and real-time flight control. The flight control system provides the ability to rapidly integrate and test multiple research control laws and to emulate component or sensor failures. Integrated integrity monitoring systems provide aircraft structural load protection, isolate the system from control algorithm failures, and monitor the health of telemetry streams. Finally, issues associated with software configuration management and code modularity are briefly discussed.

  2. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
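
    In standard quasi-one-dimensional form (written here for context, not transcribed from LAPIN itself), the governing equations with source terms read:

      \frac{\partial (\rho A)}{\partial t} + \frac{\partial (\rho u A)}{\partial x} = S_1, \qquad
      \frac{\partial (\rho u A)}{\partial t} + \frac{\partial [(\rho u^2 + p) A]}{\partial x} = p\,\frac{\partial A}{\partial x} + S_2, \qquad
      \frac{\partial (\rho e_0 A)}{\partial t} + \frac{\partial [(\rho e_0 + p) u A]}{\partial x} = S_3

    where A(x, t) is the duct area and the S_i carry the engineering models: bleed and bypass mass removal in S_1, and component forces, work and heat addition in S_2 and S_3.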

  3. acme: The Amendable Coal-Fire Modeling Exercise. A C++ Class Library for the Numerical Simulation of Coal-Fires

    NASA Astrophysics Data System (ADS)

    Wuttke, Manfred W.

    2017-04-01

    At LIAG, we use numerical models to develop and enhance understanding of coupled transport processes and to predict the dynamics of the system under consideration. Topics include geothermal heat utilization, subrosion processes, and spontaneous underground coal fires. Although the details make it inconvenient, if not impossible, to apply a single code implementation to all systems, their investigations follow similar paths: they all depend on the solution of coupled transport equations. We thus saw a need for a modular code system with open access for the various communities, to maximize the shared synergistic effects. To this purpose we develop the oops! (open object-oriented parallel solutions) toolkit, a C++ class library for the numerical solution of mathematical models of coupled thermal, hydraulic and chemical processes. This is used to develop problem-specific libraries like acme (amendable coal-fire modeling exercise), a class library for the numerical simulation of coal fires, and applications like kobra (Kohlebrand, German for coal fire), a numerical simulation code for standard coal-fire models. The basic principle of the oops! code system is the provision of data types for the description of space- and time-dependent data fields, the description of the terms of partial differential equations (PDEs), and their discretisation and solution methods. Coupling of different processes, each described by its particular PDE, is modeled by an automatic timescale-ordered operator-splitting technique. acme is a derived coal-fire-specific application library depending on oops!. If specific functionalities of general interest are implemented and tested, they will be assimilated into the main oops! library. Interfaces to external pre- and post-processing tools are easily implemented. A construction kit which can be arbitrarily amended is thus formed. With the kobra application constructed with acme, we study the processes and propagation of shallow coal seam fires, in particular in Xinjiang, China, and analyze and interpret results from lab experiments.
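
    The operator-splitting coupling composes the individual process solvers sequentially within each time step. A minimal first-order (Lie) splitting sketch in Python (the oops! library itself is C++; the process updates below are illustrative placeholders, not the library's physics):

      import numpy as np

      def thermal_step(u, dt):
          # Placeholder: explicit 1D diffusion update on a periodic grid.
          return u + dt * 0.1 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

      def reaction_step(u, dt):
          # Placeholder: first-order consumption over the same increment.
          return u * np.exp(-0.5 * dt)

      def split_step(u, dt):
          """First-order operator splitting: apply each process solver in
          turn over the same time increment dt."""
          return reaction_step(thermal_step(u, dt), dt)

      u = np.ones(100)
      for _ in range(1000):
          u = split_step(u, dt=1e-3)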

  4. A software engineering perspective on environmental modeling framework design: The object modeling system

    USDA-ARS?s Scientific Manuscript database

    The environmental modeling community has historically been concerned with the proliferation of models and the effort associated with collective model development tasks (e.g., code generation, data provisioning and transformation, etc.). Environmental modeling frameworks (EMFs) have been developed to...

  5. Remote coding scheme based on waveguide Bragg grating in PLC splitter chip for PON monitoring.

    PubMed

    Zhang, Xuan; Lu, Fengjun; Chen, Si; Zhao, Xingqun; Zhu, Min; Sun, Xiaohan

    2016-03-07

    A remote coding scheme based on distributed waveguide Bragg gratings (WBGs) arranged in a PLC splitter chip is proposed and analyzed for passive optical network (PON) monitoring, by which the management system can identify each drop fiber link through the same reflector at the terminal of each optical network unit, even when several users are equidistant. The corresponding coding and capacity models are established and investigated to obtain the minimum number of WBGs needed under the distributed structure. A signal-to-noise ratio (SNR) model related to the number of equidistant users is also developed to extend the analysis to the overall performance of the system. Simulation results show the proposed scheme is feasible and allows the monitoring of a 64-user PON with an SNR range of 7.5-10.6 dB. The scheme can address some of the construction-site difficulties at lower user cost for PON systems.

  6. A three-dimensional turbulent compressible flow model for ejector and fluted mixers

    NASA Technical Reports Server (NTRS)

    Rushmore, W. L.; Zelazny, S. W.

    1978-01-01

    A three-dimensional finite element computer code was developed to analyze ejector and axisymmetric fluted mixer systems whose flow fields are not significantly influenced by streamwise diffusion effects. A two-equation turbulence model was used to make comparisons between theory and data for various flow fields which are components of the ejector system, i.e., (1) turbulent boundary layer in a duct; (2) rectangular nozzle (free jet); (3) axisymmetric nozzle (free jet); (4) hypermixing nozzle (free jet); and (5) plane wall jet. Likewise, comparisons of the code with analytical results and/or other numerical solutions were made for components of the axisymmetric fluted mixer system. These included: (1) developing pipe flow; (2) developing flow in an annular pipe; (3) developing flow in an axisymmetric pipe with conical center body and no fluting; and (4) developing fluted pipe flow. Finally, two demonstration cases are presented which show the code's ability to analyze both the ejector and axisymmetric fluted mixers.

  7. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1992-01-01

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
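
    The abstract does not name the burst-error model used in CLEAN. A common stand-in for bursty channels, offered here only as a named substitute, is the two-state Gilbert-Elliott model, sketched in Python:

      import random

      def gilbert_elliott(n_bits, p_gb=1e-4, p_bg=1e-1, e_good=1e-6, e_bad=1e-1):
          """Generate an error pattern from a two-state Markov channel.
          p_gb / p_bg: good->bad and bad->good transition probabilities;
          e_good / e_bad: bit error rates within each state."""
          errors, bad = [], False
          for _ in range(n_bits):
              # Update the channel state, then draw an error for this bit.
              bad = (random.random() < p_gb) if not bad else (random.random() >= p_bg)
              errors.append(1 if random.random() < (e_bad if bad else e_good) else 0)
          return errors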

  8. A Steady State and Quasi-Steady Interface Between the Generalized Fluid System Simulation Program and the SINDA/G Thermal Analysis Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Majumdar, Alok; Tiller, Bruce

    2001-01-01

    A general purpose, one-dimensional fluid flow code is currently being interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena, including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.
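
    The quasi-steady mode alternates a steady fluid solution with a transient solid step. A toy Python sketch of that loop (the placeholder physics and names are ours, not the GFSSP/SINDA interfaces):

      def solve_steady_fluid_network(wall_T):
          # Placeholder steady fluid solve: the bulk fluid temperature relaxes
          # part-way from a 300 K inlet toward the wall temperature.
          return 300.0 + 0.5 * (wall_T - 300.0)

      def advance_solid_one_step(wall_T, fluid_T, dt, h=50.0, c=1.0e4):
          # Lumped solid node heated/cooled by convection (illustrative only).
          return wall_T + dt * h * (fluid_T - wall_T) / c

      t, dt, wall_T = 0.0, 1.0, 500.0
      while t < 100.0:
          fluid_T = solve_steady_fluid_network(wall_T)           # steady fluid
          wall_T = advance_solid_one_step(wall_T, fluid_T, dt)   # transient solid
          t += dt
      print(wall_T)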

  9. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of the spiral CT scan (scan range, initial angle, rotational direction, pitch, slice thickness, etc.). Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
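
    Simulating table movement by shifting the isocenter with gantry angle reduces to a simple helical relation. A hedged Python sketch (variable names are ours, not from the DOSXYZnrc source):

      import math

      def isocenter_z(theta, z_start, pitch, slice_thickness, direction=+1):
          """z-position of the isocenter after a cumulative gantry angle theta
          (radians): the table advances pitch * slice_thickness per rotation."""
          return z_start + direction * pitch * slice_thickness * theta / (2.0 * math.pi)

      # One rotation of a pitch-1.5, 5 mm collimation scan moves the table 7.5 mm.
      print(isocenter_z(2.0 * math.pi, z_start=0.0, pitch=1.5, slice_thickness=5.0))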

  10. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of the spiral CT scan (scan range, initial angle, rotational direction, pitch, slice thickness, etc.). Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.

  11. Computational models for the analysis/design of hypersonic scramjet components. I - Combustor and nozzle models

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.

    1986-01-01

    An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code, which is based upon the SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code, which is based upon the SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and progress towards future hybrid and 3D versions are discussed in this paper.

  12. Active magnetic bearing control loop modeling for a finite element rotordynamics code

    NASA Technical Reports Server (NTRS)

    Genta, Giancarlo; Delprete, Cristiana; Carabelli, Stefano

    1994-01-01

    A mathematical model of an active electromagnetic bearing which includes the actuator, the sensor and the control system is developed and implemented in a specialized finite element code for rotordynamic analysis. The element formulation and its incorporation in the model of the machine are described in detail. A solution procedure, based on a modal approach in which the number of retained modes is controlled by the user, is then shown, together with other procedures for computing the steady-state response to both static and unbalance forces. A sample application shows the numerical results obtained for a model of an electric motor suspended on a five-active-axis magnetic suspension. The comparison of some of these results with the experimental characteristics of the actual system shows the ability of the present model to predict its performance.

  13. Extensions and Adjuncts to the BRL-COMGEOM Program

    DTIC Science & Technology

    1974-08-01

    [Only front-matter fragments of this report survive extraction.] Keywords: MAGIC Code, GIFT Code, computer simulation, target description, geometric modeling techniques, vulnerability analysis. The surviving contents list a section on the arbitrary quadric surface and a section titled 'BRITL: A Geometry Preprocessor Program for Input to the GIFT System'. The tasks completed under this contract and described in the report include additions to the list of available body types in the BRL-GIFT code.

  14. SENR/NRPy+: Numerical relativity in singular curvilinear coordinate systems

    NASA Astrophysics Data System (ADS)

    Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.

    2018-03-01

    We report on a new open-source, user-friendly numerical relativity code package called SENR/NRPy+. Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy+ provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy+ into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time, in the context of moving puncture black hole evolutions, we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.
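
    NRPy+'s own optimized output routines are not reproduced here, but the Python-to-C idea it builds on can be illustrated with plain SymPy, which already emits C from a symbolic expression:

      import sympy as sp

      # Write an expression in natural mathematical form...
      r, M = sp.symbols("r M", positive=True)
      lapse = sp.sqrt(1 - 2 * M / r)   # e.g. the Schwarzschild lapse

      # ...and emit C code for it.
      print(sp.ccode(lapse, assign_to="alpha"))
      # Prints something like: alpha = sqrt(-2*M/r + 1);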

  15. Customer-Driven Reliability Models for Multistate Coherent Systems

    DTIC Science & Technology

    1992-01-01

    [Only report-documentation-page fragments of this record survive extraction.] Title: Customer-Driven Reliability Models for Multistate Coherent Systems. A 1992 dissertation submitted to the Graduate Faculty of the University of Oklahoma (Norman, Oklahoma) by Boedigheimer.

  16. Do Over or Make Do? Climate Models as a Software Development Challenge (Invited)

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.

    2010-12-01

    We present the results of a comparative study of the software engineering culture and practices at four different earth system modeling centers: the UK Met Office Hadley Centre, the National Center for Atmospheric Research (NCAR), the Max-Planck-Institut für Meteorologie (MPI-M), and the Institut Pierre Simon Laplace (IPSL). The study investigated the software tools and techniques used at each center to assess their effectiveness. We also investigated how differences in the organizational structures, collaborative relationships, and technical infrastructures constrain the software development and affect software quality. Specific questions for the study included 1) Verification and Validation - What techniques are used to ensure that the code matches the scientists' understanding of what it should do? How effective are these at eliminating errors of correctness and errors of understanding? 2) Coordination - How are the contributions from across the modeling community coordinated? For coupled models, how are the differences in the priorities of different, overlapping communities of users addressed? 3) Division of responsibility - How are the responsibilities for coding, verification, and coordination distributed between different roles (scientific, engineering, support) in the organization? 4) Planning and release processes - How do modelers decide on priorities for model development, and how do they decide which changes to tackle in a particular release of the model? 5) Debugging - How do scientists debug the models, what types of bugs do they find in their code, and how do they find them? The results show that each center has evolved a set of model development practices tailored to its needs and organizational constraints. These practices emphasize scientific validity, but tend to neglect other software qualities, and all the centers struggle frequently with software problems. The testing processes are effective at removing software errors prior to release, but the code is hard to understand and hard to change. Software errors and model configuration problems are common during model development, and appear to have a serious impact on scientific productivity. These problems have grown dramatically in recent years with the growth in size and complexity of earth system models. Much of the success in obtaining valid simulations from the models depends on the scientists developing their own code, experimenting with alternatives, running frequent full system tests, and exploring patterns in the results. Blind application of generic software engineering processes is unlikely to work well. Instead, each center needs to learn how to balance the need for better coordination through a more disciplined approach with the freedom to explore, and the value of having scientists work directly with the code. This suggests that each center can learn a lot from comparing its practices with others, but that each might need to develop a different set of best practices.

  17. The feasibility of paper-based Tracking Codes and electronic medical record systems to monitor tobacco-use assessment and intervention in an Individual Practice Association (IPA) Model health maintenance organization (HMO).

    PubMed

    Bentz, Charles J; Davis, Nancy; Bayley, Bruce

    2002-01-01

    Despite evidence of its effectiveness, tobacco cessation is not systematically addressed in routine healthcare settings. Its measurement is part of the problem. A pilot study was designed to develop and implement two different tobacco tracking systems in two independent primary care offices that participated in an IPA Model health maintenance organization in Portland, Oregon. The first clinic, which utilized a paper-based charting system, implemented CPT-like tracking codes to measure and report tobacco-cessation activities, which were eventually included in the managed-care organization's (MCO) claims database. The second clinic implemented an electronic tracking system based on its computerized electronic medical record (EMR) charting system. This paper describes the pilot study, including the processes involved in building provider acceptance for the new tracking systems in these two clinics, the barriers and successes encountered during implementation, and the resources expended by the clinics and by the MCO during the pilot. The findings from the 3-month implementation period were that documentation of tobacco-use status remained stable at 42-45% in the paper-based clinic and increased from 79% to 88% in the EMR clinic. This pilot study demonstrated that Tracking Codes are a feasible preventive-care tracking system in paper-based medical offices. However, high levels of effort and support are needed, and a critical mass of insurers and health plans would need to adopt Tracking Codes before widespread use could be expected. Results of the EMR-based tracking system are also reviewed and discussed.

  18. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE PAGES

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP-based process.
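
    To make the emulator-plus-MCMC workflow concrete, here is a minimal Python sketch with an invented stand-in "system code"; the kernel settings, prior, and noise level are assumptions, not values from the paper, and the paper's FFGP construction is more elaborate than the plain GP used here. A squared-exponential GP is fit to a few code runs, and a random-walk Metropolis sampler then calibrates the input against a synthetic observation, using the fast emulator in place of the expensive code.

      import numpy as np

      rng = np.random.default_rng(0)

      # Stand-in for an expensive system code: a pressure-drop-like response
      # as a function of a friction factor f (purely illustrative).
      def system_code(f):
          return 2.0 * f + 0.1 * np.sin(5.0 * f)

      # Train a squared-exponential GP emulator on a handful of code runs.
      X = np.linspace(0.0, 1.0, 8)
      y = system_code(X)

      def kern(a, b, ell=0.2, var=1.0):
          return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

      Kinv = np.linalg.inv(kern(X, X) + 1e-8 * np.eye(len(X)))

      def emulate(f):
          return (kern(np.atleast_1d(f), X) @ Kinv @ y).item()  # GP posterior mean

      # Synthetic observation of the response at a "true" friction factor of 0.62.
      obs, sigma = system_code(0.62) + 0.01, 0.05

      def log_post(f):                 # uniform prior on [0, 1] + Gaussian likelihood
          if not 0.0 <= f <= 1.0:
              return -np.inf
          return -0.5 * ((obs - emulate(f)) / sigma) ** 2

      # Random-walk Metropolis: feasible only because the emulator is fast.
      chain, f, lp = [], 0.5, log_post(0.5)
      for _ in range(5000):
          prop = f + 0.05 * rng.standard_normal()
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              f, lp = prop, lp_prop
          chain.append(f)

      print("posterior mean of calibrated friction factor:", np.mean(chain[1000:]))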

  19. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser-plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed performance evaluation of the code in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  20. Tipjet 80-inch Model Rotor Hover Test: Test No. 1198

    DTIC Science & Technology

    1993-09-01

    primarily working papers intended for internal use. They carry an identifying number which indicates their type and the numerical code of the originating... rotor lifting system, while exhibiting the highest augmentation ratio ever recorded for a CC rotor, suffers an induced power penalty due to the nonlifting region... INFORMATION: This work was conducted by the Ship Systems and Programs Directorate (Code 22) of the Carderock Division, Naval Surface Warfare Center

  1. A Note on NCOM Temperature Forecast Error Calibration Using the Ensemble Transform

    DTIC Science & Technology

    2009-01-01

    Local unbiased (correlation) and persistent (bias) errors of the Navy Coastal Ocean Modeling (NCOM) System nested in global ocean domains are... system were made available in real time without performing local data assimilation, though remote-sensing and global data were assimilated on the...

  2. Integrated Earth System Model (iESM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Peter Edmond; Mao, Jiafu; Shi, Xiaoying

    2016-12-02

    The iESM is a simulation code that represents the physical and biological aspects of Earth's climate system, and also includes the macro-economic and demographic properties of human societies. The human aspect of the simulation code is focused in particular on the effects of human activities on land use and land cover change, but also includes aspects such as energy economies. The time frame for predictions with iESM is approximately 1970 through 2100.

  3. Neural representation of objects in space: a dual coding account.

    PubMed Central

    Humphreys, G W

    1998-01-01

    I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227

  4. A Computational Method for Determining the Equilibrium Composition and Product Temperature in a LH2/LOX Combustor

    NASA Technical Reports Server (NTRS)

    Sozen, Mehmet

    2003-01-01

    In what follows, we describe the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under a chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.

  5. Assessment of the MHD capability in the ATHENA code using data from the ALEX facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1989-03-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.

  6. Proceedings of the 21st DOE/NRC Nuclear Air Cleaning Conference; Sessions 1-8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    First, M.W.

    1991-02-01

    Separate abstracts have been prepared for the papers presented at the meeting on nuclear facility air cleaning technology in the following specific areas of interest: air cleaning technologies for the management and disposal of radioactive wastes; Canadian waste management program; radiological health effects models for nuclear power plant accident consequence analysis; filter testing; US standard codes on nuclear air and gas treatment; European community nuclear codes and standards; chemical processing off-gas cleaning; incineration and vitrification; adsorbents; nuclear codes and standards; mathematical modeling techniques; filter technology; safety; containment system venting; and nuclear air cleaning programs around the world. (MB)

  7. File compression and encryption based on LLS and arithmetic coding

    NASA Astrophysics Data System (ADS)

    Yu, Changzhi; Li, Hengjian; Wang, Xiyu

    2018-03-01

    We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability. To achieve the purpose of encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model achieves the purpose of data encryption while attaining almost the same compression efficiency as arithmetic coding.
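
    A toy illustration of this mechanism, under assumptions of our own: a plain logistic map stands in for the paper's LLS construction, the perturbation rule is invented, and floats replace the integer arithmetic a real codec would use. The chaotic keystream reshapes every symbol's interval bounds at each step, so the same interval-narrowing arithmetic both compresses and encrypts, and decoding with the wrong key yields gibberish.

      import math

      # Logistic-map keystream standing in for the paper's logistic+sine (LLS) system.
      def keystream(x0=0.631, r=3.99):
          x = x0
          while True:
              x = r * x * (1.0 - x)
              yield x

      ALPHABET = "abcd"

      def bounds(k):
          # Chaotic value k in (0, 1) perturbs the (here uniform) symbol weights,
          # shifting every symbol's lower/upper cumulative-probability limits.
          w = [1.0 + 0.9 * math.modf(k * (i + 1))[0] for i in range(len(ALPHABET))]
          total, cum, out = sum(w), 0.0, {}
          for i, (c, wi) in enumerate(zip(ALPHABET, w)):
              hi_c = 1.0 if i == len(ALPHABET) - 1 else cum + wi / total
              out[c] = (cum, hi_c)
              cum = hi_c
          return out

      def encode(msg, x0=0.631):
          ks, lo, hi = keystream(x0), 0.0, 1.0
          for c in msg:
              l, h = bounds(next(ks))[c]
              lo, hi = lo + (hi - lo) * l, lo + (hi - lo) * h
          return (lo + hi) / 2.0            # any number in the final interval

      def decode(code, n, x0=0.631):
          ks, lo, hi, out = keystream(x0), 0.0, 1.0, []
          for _ in range(n):
              t = (code - lo) / (hi - lo)
              for c, (l, h) in bounds(next(ks)).items():
                  if l <= t < h:
                      out.append(c)
                      lo, hi = lo + (hi - lo) * l, lo + (hi - lo) * h
                      break
          return "".join(out)

      msg = "abacadabba"
      print(decode(encode(msg), len(msg)))            # round-trips with the right key
      print(decode(encode(msg), len(msg), x0=0.632))  # wrong key -> gibberish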

  8. TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.

    PubMed

    Kurosawa, Masahiko

    2005-01-01

    For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but it has difficulty with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to obtain a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. This TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced-radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.

  9. Establishment and assessment of code scaling capability

    NASA Astrophysics Data System (ADS)

    Lim, Jaehyok

    In this thesis, a method for using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that has been used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small break loss of coolant (SB LOCA) accident scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS as an emergency core cooling system provided adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by the Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system that is composed of Drywell (DW) and Wetwell (WW) below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactor (LWR). The major components of the CSAU methodology that were highlighted particularly focused on the scaling issues of experiments and models and their applicability to the nuclear power plant transient and accidents. The major thermal-hydraulic phenomena to be analyzed were identified and the predictive models adopted in RELAP5/MOD3.3 (Patch03) code were briefly reviewed.

  10. 76 FR 18899 - Indaziflam; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Model/Exposure Analysis Modeling System (PRZM/EXAMS) and Screening Concentration in Ground Water (SCI... Classification System (NAICS) codes have been provided to assist you and others in determining whether this... response to the notice of filing. Based upon review of the data supporting the petitions, EPA has modified...

  11. 75 FR 70143 - Acequinocyl; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-17

    .../water/index.htm . Based on the Pesticide Root Zone Model/Exposure Analysis Modeling System (PRZM/EXAMS... be affected. The North American Industrial Classification System (NAICS) codes have been provided to... the data supporting the petition, EPA has revised the proposed tolerance for hop dried cones from 3.5...

  12. 78 FR 29041 - Sulfoxaflor; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-17

    ... Modeling System (PRZM/EXAMS) and Screening Concentration in Ground Water (SCI-GROW) models, the estimated.... The following list of North American Industrial Classification System (NAICS) codes is not intended to... response to these comments is discussed in Unit IV.C. Based upon review of the data supporting the petition...

  13. 77 FR 4248 - Cyazofamid; Pesticide Tolerances for Emergency Exemptions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-27

    .../water/index.htm . Based on the Pesticide Root Zone Model/Exposure Analysis Modeling System (PRZM/EXAMS... Classification System (NAICS) codes have been provided to assist you and others in determining whether this... reliable information.'' This includes exposure through drinking water and in residential settings, but does...

  14. System Simulation of Nuclear Power Plant by Coupling RELAP5 and Matlab/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng Lin; Dong Hou; Zhihong Xu

    2006-07-01

    Since the RELAP5 code has general and advanced features in thermal-hydraulic computation, it has been widely used in transient and accident safety analysis, experiment planning analysis, and system simulation. We therefore wish to design, analyze, and verify a new instrumentation and control (I and C) system of a nuclear power plant (NPP) based on this best-estimate code, and even to develop our own engineering simulator. But because RELAP5's ability to simulate control and protection systems is limited, it is necessary to expand this function for highly efficient, accurate, and flexible design and simulation of I and C systems. Matlab/Simulink, a scientific computation package and a powerful tool for research and simulation of plant process control, can compensate for this limitation. This software was selected as the I and C part to be coupled with the RELAP5 code to realize system simulation of NPPs. There are two key techniques to be solved. One is dynamic data exchange, by which Matlab/Simulink receives plant parameters and returns control results. A database is used for communication between the two codes. Accordingly, a Dynamic Link Library (DLL) is applied to link the database in RELAP5, while a DLL and an S-Function are applied in Matlab/Simulink. The other problem is synchronization between the two codes to ensure consistency in global simulation time. Because Matlab/Simulink always computes faster than RELAP5, the simulation time is sent by RELAP5 and received by Matlab/Simulink. A time-control subroutine is added to the simulation procedure of Matlab/Simulink to control its advancement. Through these means, Matlab/Simulink is dynamically coupled with RELAP5. Thus, in Matlab/Simulink, we can freely design the control and protection logic of NPPs and test it with best-estimate plant-model feedback. A test will be shown to illustrate that the results of the coupled calculation are nearly the same as those of RELAP5 alone with control logic. In practice, a real pressurized water reactor (PWR) is modeled with the RELAP5 code, and its main control and protection system is duplicated in Matlab/Simulink. Some steady states and transients are calculated under the control of these I and C systems, and the results are compared with plant test curves. The application showed that exact system simulation of NPPs can be done by coupling RELAP5 and Matlab/Simulink. This paper focuses mainly on the coupling method, the plant thermal-hydraulic model, the main control logics, and test and application results. (authors)
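
    The time-synchronization handshake can be sketched generically. In the Python sketch below, threads and queues play the role of the shared database/DLL link, and the plant dynamics and controller gain are invented for illustration: the fast "controller" side (the Matlab/Simulink role) blocks until the slow "plant" side (the RELAP5 role) publishes its simulation time and state, which mirrors how the fast code is held back to the slow code's clock.

      import threading, queue, time

      plant_out, ctrl_out = queue.Queue(), queue.Queue()  # stand-ins for the shared database
      DT, T_END = 0.1, 1.0

      def plant():                       # plays the RELAP5 role: slow, owns the clock
          temp, heater = 300.0, 0.0
          for step in range(1, int(round(T_END / DT)) + 1):
              time.sleep(0.05)                               # pretend the step is costly
              temp += DT * (heater - 0.5 * (temp - 290.0))   # made-up thermal dynamics
              plant_out.put((step * DT, temp))               # publish time + state
              heater = ctrl_out.get()                        # wait for the control reply
          print("final temperature:", round(temp, 2))

      def controller():                  # plays the Matlab/Simulink role: fast, waits
          while True:
              t, temp = plant_out.get()                      # sync: block on plant time
              ctrl_out.put(max(0.0, 2.0 * (305.0 - temp)))   # proportional heater demand
              if t >= T_END - 1e-9:
                  break

      threads = [threading.Thread(target=plant), threading.Thread(target=controller)]
      for th in threads: th.start()
      for th in threads: th.join()

    The blocking get() on each side is the whole synchronization story: neither code can run ahead of the data the other has published, which is what the time-control subroutine described above enforces.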

  15. Terminal Area Simulation System User's Guide - Version 10.0

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Proctor, Fred H.

    2014-01-01

    The Terminal Area Simulation System (TASS) is a three-dimensional, time-dependent, large eddy simulation model that has been developed for studies of wake vortex and weather hazards to aviation, along with other atmospheric turbulence and cloud-scale weather phenomenology. This document describes the source code for TASS version 10.0 and provides users with the documentation needed to run the model. The source code is programmed in Fortran and is formulated to take advantage of vectorization and efficient multi-processor scaling for execution on massively parallel supercomputer clusters. The code contains different initialization modules allowing the study of aircraft wake vortex interaction with the atmosphere and ground, atmospheric turbulence, atmospheric boundary layers, precipitating convective clouds, hail storms, gust fronts, microburst windshear, supercell and mesoscale convective systems, tornadic storms, and ring vortices. The model is able to operate in either two or three dimensions with equations numerically formulated on a Cartesian grid. The primary output from TASS is time-dependent domain fields generated by the prognostic equations and diagnosed variables. This document will enable a user to understand the general logic of TASS and will show how to configure and initialize the model domain. Also described are the formats of the input and output files, as well as the parameters that control the input and output.

  16. Uses of Computer Simulation Models in Ag-Research and Everyday Life

    USDA-ARS?s Scientific Manuscript database

    When the news media talks about models they could be talking about role models, fashion models, conceptual models like the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...

  17. Coarse-grained component concurrency in Earth system modeling: parallelizing atmospheric radiative transfer in the GFDL AM3 model using the Flexible Modeling System coupling framework

    NASA Astrophysics Data System (ADS)

    Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac

    2016-10-01

    Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
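
    The scheduling idea is easy to show in miniature. In the hypothetical Python sketch below, a slow "radiation" component for the current step runs concurrently with the rest of the "atmosphere", which uses the radiative tendency computed at the previous coupling step (a one-step lag is the usual price of such concurrency); all of the physics here is invented.

      from concurrent.futures import ThreadPoolExecutor

      def radiation(temp):                       # slow component (hypothetical physics)
          return -0.01 * (temp - 250.0)          # radiative cooling tendency

      def rest_of_atmosphere(temp, rad_tendency):
          return temp + 1.0 + rad_tendency       # dynamics + all other physics, also made up

      temp, rad = 260.0, 0.0
      with ThreadPoolExecutor(max_workers=2) as pool:
          for step in range(10):
              fut = pool.submit(radiation, temp)        # radiation for this step runs...
              temp = rest_of_atmosphere(temp, rad)      # ...in parallel with everything else
              rad = fut.result()                        # exchange at the coupling point
      print("temperature after 10 steps:", round(temp, 2))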

  18. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with Tram Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  20. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model, such as MODFLOW, that represents a system of interest. At times, values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations.
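
    The adjustment logic lends itself to a compact illustration. The Python sketch below is an analogue, not the program itself (SIM_ADJUST is Fortran 90 built on the JUPITER API, and the flag value, names, and file handling here are simplified assumptions): it walks the expected-name list and substitutes the first usable alternative, whether another simulated name or a constant, for any missing or defaulted value.

      DEFAULT = -999.0        # hypothetical "model gave no useful value" flag

      def adjust(expected, simulated, alternatives):
          """expected: observation/prediction names in order; simulated: {name: value}
          parsed from the process-model output; alternatives: {name: sequence of
          fallbacks, each another simulated name or a constant, tried in order}."""
          adjusted = {}
          for name in expected:
              value = simulated.get(name, DEFAULT)
              for alt in alternatives.get(name, []):
                  if value != DEFAULT:
                      break
                  value = simulated.get(alt, DEFAULT) if isinstance(alt, str) else alt
              adjusted[name] = value
          return adjusted

      sim = {"head_A": 12.3, "head_B": DEFAULT, "head_B_below": 11.7}  # head_B's cell went dry
      alts = {"head_B": ["head_B_below", 10.0]}   # try the cell below, then a constant
      print(adjust(["head_A", "head_B"], sim, alts))   # {'head_A': 12.3, 'head_B': 11.7}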

  1. Computational Control of Flexible Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression, and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years of the project. The main accomplishments can be summarized as follows. A new version of the PDEMOD code has been completed. A theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modeling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future modified versions of the PDEMOD code.

  2. Automatic programming of simulation models

    NASA Technical Reports Server (NTRS)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1990-01-01

    The concepts of software engineering were used to improve the simulation modeling environment. Emphasis was placed on the application of an element of rapid prototyping, or automatic programming, to assist the modeler in defining the problem specification. Once the problem specification has been defined, an automatic code generator is used to write the simulation code. The following two domains were selected for evaluating the concepts of software engineering for discrete event simulation: a manufacturing domain and a spacecraft countdown network sequence. The specific tasks were to: (1) define the software requirements for a graphical user interface to the Automatic Manufacturing Programming System (AMPS); (2) develop a graphical user interface for AMPS; and (3) compare the AMPS graphical interface with the AMPS interactive user interface.

  3. Using SPARK as a Solver for Modelica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Wetter, Michael; Haves, Philip

    Modelica is an object-oriented acausal modeling language that is well positioned to become a de facto standard for expressing models of complex physical systems. To simulate a model expressed in Modelica, it needs to be translated into executable code. To generate run-time efficient code, such a translation needs to employ algebraic formula manipulations. As the SPARK solver has been shown to be competitive for generating such code but currently cannot be used with the Modelica language, we report in this paper how SPARK's symbolic and numerical algorithms can be implemented in OpenModelica, an open-source implementation of a Modelica modeling and simulation environment. We also report benchmark results showing that, for our airflow network simulation benchmark, the SPARK solver is competitive with Dymola, which is believed to provide the best solver for Modelica.

  4. Integrating Geochemical Reactions with a Particle-Tracking Approach to Simulate Nitrogen Transport and Transformation in Aquifers

    NASA Astrophysics Data System (ADS)

    Cui, Z.; Welty, C.; Maxwell, R. M.

    2011-12-01

    Lagrangian, particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator split into two steps: (1) the physical movement of the particles including the attachment/detachment to solid surfaces, which is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions including biodegradation are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1, the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain with variably saturated conditions. Potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. Implementation details, verification results of the enhanced code with one-dimensional analytical solutions and other existing numerical models will be presented in addition to a discussion of implementation challenges.
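
    A one-dimensional miniature of the hybrid scheme described above, with invented velocity, dispersion, grid, and a single Monod-type decay standing in for the coupled reaction network: particles take a random-walk step (the Lagrangian half of the operator split), are binned to grid masses for the reaction step, and the reacted-to-unreacted ratio is mapped back onto the particle masses.

      import numpy as np

      rng = np.random.default_rng(1)

      L, nx, dt, v, D = 100.0, 50, 1.0, 1.0, 0.5   # domain, grid, step, velocity, dispersion
      edges = np.linspace(0.0, L, nx + 1)

      x = np.full(2000, 10.0)       # particle positions: a pulse of solute (nitrate, say)
      m = np.full(2000, 1.0)        # particle masses

      def monod_decay(c, dt, vmax=0.2, K=0.5):
          return c - dt * vmax * c / (K + c)       # explicit-Euler Monod biodegradation

      for step in range(30):
          # (1) Lagrangian step: advection plus random-walk dispersion
          x = np.clip(x + v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size), 0.0, L)
          # (2) particles -> grid mass, react on the grid, map the change back to particles
          mass, _ = np.histogram(x, bins=edges, weights=m)
          factor = np.divide(monod_decay(mass, dt), mass, out=np.ones_like(mass), where=mass > 0)
          m *= factor[np.digitize(x, edges[1:-1])]

      print("fraction of solute mass remaining:", round(m.sum() / 2000.0, 3))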

  5. A new nuclide transport model in soil in the GENII-LIN health physics code

    NASA Astrophysics Data System (ADS)

    Teodori, F.

    2017-11-01

    The nuclide soil-transfer model originally included in the GENII-LIN software system was intended for residual contamination from long-term activities and from waste-form degradation. Short-lived nuclides were assumed to be absent or in equilibrium with their long-lived parents. Here we present an enhanced soil transport model in which short-lived nuclide contributions are correctly accounted for. This improvement extends the code's capability to handle accidental releases of contaminants to soil by evaluating exposure from the very beginning of the contamination event, before radioactive-decay-chain equilibrium is reached.

  6. A High-Level Language for Modeling Algorithms and Their Properties

    NASA Astrophysics Data System (ADS)

    Akhtar, Sabina; Merz, Stephan; Quinson, Martin

    Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the TLC model checker.
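
    To make the pseudo-code-versus-transition-system gap concrete, the Python sketch below hand-codes the kind of check TLC performs for a trivial algorithm: it enumerates the reachable states of a two-process loop (idle to trying to critical, with an atomic entry guard) and checks a mutual-exclusion invariant. The algorithm and guard are invented for illustration; PlusCal would express the same thing as pseudo-code and compile it to TLA+.

      from collections import deque

      def next_states(state):                # interleaving semantics: one move at a time
          for i in (0, 1):
              pc = list(state)
              if pc[i] == "idle":
                  pc[i] = "trying"
              elif pc[i] == "trying" and state[1 - i] != "critical":  # atomic entry guard
                  pc[i] = "critical"
              elif pc[i] == "critical":
                  pc[i] = "idle"
              else:
                  continue
              yield tuple(pc)

      init = ("idle", "idle")
      seen, frontier = {init}, deque([init])
      while frontier:                        # breadth-first reachability, as in TLC
          s = frontier.popleft()
          assert s != ("critical", "critical"), "mutual exclusion violated"
          for t in next_states(s):
              if t not in seen:
                  seen.add(t)
                  frontier.append(t)
      print(f"explored {len(seen)} states; mutual exclusion holds in all of them")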

  7. 49 CFR 579.21 - Reporting requirements for manufacturers of 5,000 or more light vehicles annually.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., 05 parking brake, 06 engine and engine cooling system, 07 fuel system, 10 power train, 11 electrical... model, the model year, the type, the platform, the fuel and/or propulsion system type coded as follows: CNG (compressed natural gas), CIF (compression ignition fuel), EBP (electric battery power), FCP (fuel...

  8. Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems

    NASA Astrophysics Data System (ADS)

    Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J. T.; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J.; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A. L.; Rubio, Angel

    Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.

  9. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal orthogonal-Oriented quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers and one low-pass (blob) layer.
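
    The defining pyramid property (an image split into detail layers by scale plus a residual low-pass "blob" layer, exactly invertible) can be shown on an ordinary square lattice. The Python sketch below is generic and deliberately ignores HOP's hexagonal lattice and seven oriented kernels; it uses plain 2x2 averaging as the low-pass step.

      import numpy as np

      def build_pyramid(img, levels):            # img sides must be divisible by 2**levels
          layers = []
          for _ in range(levels):
              h, w = img.shape
              coarse = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x2 low-pass
              layers.append(img - np.kron(coarse, np.ones((2, 2))))         # detail layer
              img = coarse
          layers.append(img)                     # residual low-pass ("blob") layer
          return layers

      def reconstruct(layers):
          img = layers[-1]
          for detail in reversed(layers[:-1]):
              img = np.kron(img, np.ones((2, 2))) + detail
          return img

      img = np.random.default_rng(2).random((16, 16))
      pyr = build_pyramid(img, 3)                # 3 detail layers + 1 low-pass layer
      print([l.shape for l in pyr])              # [(16, 16), (8, 8), (4, 4), (2, 2)]
      assert np.allclose(reconstruct(pyr), img)  # the code is exactly invertible

    Note how the layer shapes shrink by scale: fewer coefficients are devoted to larger features, which is the pyramid property the abstract describes.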

  10. Self-consistent modeling of electron cyclotron resonance ion sources

    NASA Astrophysics Data System (ADS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to model the different parts of these sources accurately: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code that takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters feed a self-consistent code that calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  11. Polarized skylight navigation in insects: model and electrophysiology of e-vector coding by neurons in the central complex.

    PubMed

    Sakura, Midori; Lambrinos, Dimitrios; Labhart, Thomas

    2008-02-01

    Many insects exploit skylight polarization for visual compass orientation or course control. As found in crickets, the peripheral visual system (optic lobe) contains three types of polarization-sensitive neurons (POL neurons), which are tuned to different (approximately 60° diverging) e-vector orientations. Thus each e-vector orientation elicits a specific combination of activities among the POL neurons, coding any e-vector orientation by just three neural signals. In this study, we hypothesize that in the presumed orientation center of the brain (the central complex) e-vector orientation is population-coded by a set of "compass neurons." Using computer modeling, we present a neural network model transforming the signal triplet provided by the POL neurons into compass neuron activities coding e-vector orientation by a population code. Using intracellular electrophysiology and cell marking, we present evidence that neurons with the response profile of the presumed compass neurons do indeed exist in the insect brain: each of these compass-neuron-like (CNL) cells is activated by a specific e-vector orientation only and otherwise remains silent. Morphologically, CNL cells are tangential neurons extending from the lateral accessory lobe to the lower division of the central body. Surpassing the modeled compass neurons in performance, CNL cells are insensitive to the degree of polarization of the stimulus from 99% down to at least 18% polarization, and thus largely disregard variations of skylight polarization due to changing solar elevations or atmospheric conditions. This suggests that the polarization vision system includes a gain control circuit keeping the output activity at a constant level.
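
    A worked toy version of the three-channel code, under two assumptions that are standard for this system but not taken from the paper: each POL neuron has a sinusoidal tuning curve with a 180° period, and the three preferred orientations lie about 60° apart. Under those assumptions the response triplet determines the e-vector exactly, as the population-vector readout in this Python sketch shows.

      import numpy as np

      prefs = np.radians([10.0, 70.0, 130.0])    # three POL-neuron preferred orientations

      def pol_responses(evec_deg):
          # period-180 tuning: maximal at the preferred orientation, minimal 90 deg away
          return np.cos(2.0 * (np.radians(evec_deg) - prefs))

      def decode(responses):
          # population vector in double-angle space undoes the 180-deg symmetry
          z = np.sum(responses * np.exp(2j * prefs))
          return np.degrees(np.angle(z) / 2.0) % 180.0

      for true in (0.0, 45.0, 120.0, 155.0):
          print(f"e-vector {true:5.1f} deg -> decoded {decode(pol_responses(true)):5.1f} deg")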

  12. Design Aspects of the Rayleigh Convection Code

    NASA Astrophysics Data System (ADS)

    Featherstone, N. A.

    2017-12-01

    Understanding the long-term generation of planetary or stellar magnetic fields requires complementary knowledge of the large-scale fluid dynamics pervading large fractions of the object's interior. Such large-scale motions are sensitive to the system's geometry, which, in planets and stars, is spherical to a good approximation. As a result, computational models designed to study such systems often solve the MHD equations in spherical geometry, frequently employing a spectral approach involving spherical harmonics. We present computational and user-interface design aspects of one such modeling tool, the Rayleigh convection code, which is suitable for deployment on desktop and petascale HPC architectures alike. In this poster, we will present an overview of the code's parallel design and its built-in diagnostics-output package. Rayleigh has been developed with NSF support through the Computational Infrastructure for Geodynamics and is expected to be released as open-source software in winter 2017/2018.

  13. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    NASA Astrophysics Data System (ADS)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

    Sandia National Laboratories (Sandia), a U.S. Department of Energy national laboratory, has over 30 years of experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling, which facilitates technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after import/export control and copyright requirements are addressed. From a programmatic view, it is easier to utilize existing codes than to develop new ones. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software that meets the rigors of both domestic regulatory requirements and international peer review. Therefore, revitalization of deterministic legacy codes, as well as adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in Goldsim, and the techniques applied to facilitate this process, will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC-sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility, which was used in a cooperative technology transfer project between Sandia National Laboratories and Taiwan's Institute of Nuclear Energy Research (INER) for the preliminary assessment of several candidate low-level waste repository sites. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

  14. Feasibility of a computer-assisted feedback system between dispatch centre and ambulances.

    PubMed

    Lindström, Veronica; Karlsten, Rolf; Falk, Ann-Charlotte; Castrèn, Maaret

    2011-06-01

    The aim of the study was to evaluate the feasibility of a newly developed computer-assisted feedback system between the dispatch centre and ambulances in Stockholm, Sweden. A computer-assisted feedback system based on a Finnish model was designed to fit the Swedish emergency medical system. Feedback codes were identified and divided into three categories: assessment of the patient's primary condition when the ambulance arrives at the scene, no transport by the ambulance, and level of priority. Two ambulances and one emergency medical communication centre (EMCC) in Stockholm participated in the study. A sample of 530 feedback codes sent through the computer-assisted feedback system was reviewed. The information in the ambulance medical records was compared with the feedback codes used, and 240 assignments were further analyzed. The feedback codes sent from ambulance to EMCC were correct in 92% of the assignments. The most commonly used feedback code sent to the emergency medical dispatchers was 'agree with the dispatchers' assessment'. In addition, in 160 assignments there was a mismatch between the emergency medical dispatchers' and ambulance nurses' assessments. Our results have shown high agreement between medical dispatcher and ambulance nurse assessments. The feasibility of the feedback codes seems acceptable based on the small margin of error. The computer-assisted feedback system may, when used on a daily basis, make it possible for the medical dispatchers to receive feedback in a structured way. The EMCC organization can directly evaluate any changes in the assessment protocol through structured feedback sent from the ambulance.

  15. User's guide to the SEPHIS computer code for calculating the Thorex solvent extraction system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, S.B.; Rainey, R.H.

    1979-05-01

    The SEPHIS computer program was developed to simulate the countercurrent solvent extraction process. The code has now been adapted to model the Acid Thorex flowsheet. This report is a practical user's guide to SEPHIS-Thorex, containing a program description, user information, a program listing, and sample input and output.

  16. Simulations of inspiraling and merging double neutron stars using the Spectral Einstein Code

    NASA Astrophysics Data System (ADS)

    Haas, Roland; Ott, Christian D.; Szilagyi, Bela; Kaplan, Jeffrey D.; Lippuner, Jonas; Scheel, Mark A.; Barkett, Kevin; Muhlberger, Curran D.; Dietrich, Tim; Duez, Matthew D.; Foucart, Francois; Pfeiffer, Harald P.; Kidder, Lawrence E.; Teukolsky, Saul A.

    2016-06-01

    We present results on the inspiral, merger, and postmerger evolution of a neutron star-neutron star (NSNS) system. Our results are obtained using the hybrid pseudospectral-finite volume Spectral Einstein Code (SpEC). To test our numerical methods, we evolve an equal-mass system for ≈22 orbits before merger. This waveform is the longest waveform obtained from fully general-relativistic simulations for NSNSs to date. Such long (and accurate) numerical waveforms are required to further improve semianalytical models used in gravitational wave data analysis, for example, the effective-one-body models. We discuss in detail the improvements to SpEC's ability to simulate NSNS mergers, in particular mesh-refined grids to better resolve the merger and postmerger phases. We provide a set of consistency checks and compare our results to NSNS merger simulations with the independent BAM code. We find agreement between them, which increases confidence in results obtained with either code. This work paves the way for future studies using long waveforms and more complex microphysical descriptions of neutron star matter in SpEC.

  17. Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.

    2006-07-01

    The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime-dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test predictions of the P-series of experiments (i.e., tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and the relevant underlying thermal-hydraulic and heat transfer models. (authors)

  18. The EUCLID/V1 Integrated Code for Safety Assessment of Liquid Metal Cooled Fast Reactors. Part 1: Basic Models

    NASA Astrophysics Data System (ADS)

    Mosunova, N. A.

    2018-05-01

    The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and dioxide, mixed-oxide, or nitride uranium-plutonium fuel pellets, under normal operation, anticipated operational occurrences, and accident conditions, by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. Modeled objects, the equation systems in differential form solved in each module of the EUCLID/V1 integrated code (the thermal-hydraulic module, the neutronics module, the fuel rod analysis module, and the burnup and decay heat calculation modules), the main calculated quantities, and the limitations on application of the code are presented. The article also gives data on the scope of functions performed by the integrated code's thermal-hydraulic module, with which it is possible to describe both one- and two-phase processes occurring in the coolant. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.

  19. Introduction to the internal fluid mechanics research session

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Povinelli, Louis A.

    1990-01-01

    Internal fluid mechanics research at LeRC is directed toward an improved understanding of the important flow physics affecting aerospace propulsion systems, and toward applying this improved understanding to formulate accurate predictive codes. To this end, research is conducted involving detailed experimentation and analysis. The following three papers summarize ongoing work and indicate future emphasis in three major research thrusts: inlets, ducts, and nozzles; turbomachinery; and chemically reacting flows. The underlying goal of the research in each of these areas is to bring internal computational fluid mechanics to a state of practical application for aerospace propulsion systems. Achievement of this goal requires that carefully planned and executed experiments be conducted in order to develop and validate useful codes. It is critical that numerical code development work and experimental work be closely coupled. The insights gained are represented by mathematical models that form the basis for code development. The resultant codes are then tested by comparing them with appropriate experiments in order to ensure their validity and determine their applicable range. The ultimate user community must be a part of this process to assure relevancy of the work and to hasten its practical application. Propulsion systems are characterized by highly complex and dynamic internal flows. Many complex, three-dimensional flow phenomena may be present, including unsteadiness, shocks, and chemical reactions. By focusing on specific portions of a propulsion system, it is often possible to identify the dominant phenomena that must be understood and modeled for obtaining accurate predictive capability. The three major research thrusts serve as a focus leading to greater understanding of the relevant physics and to an improvement in analytic tools. This in turn will hasten continued advancements in propulsion system performance and capability.

  20. Definition of the Floating System for Phase IV of OC3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonkman, J.

    Phase IV of the IEA Annex XXIII Offshore Code Comparison Collaboration (OC3) involves the modeling of an offshore floating wind turbine. This report documents the specifications of the floating system, which are needed by the OC3 participants for building aero-hydro-servo-elastic models.

  1. 78 FR 33731 - Propamocarb; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... Modeling System (PRZM/EXAMS) and Screening Concentration in Ground Water (SCI- GROW) models, the estimated... Classification System (NAICS) codes is not intended to be exhaustive, but rather provides a guide to help readers... to the notice of filing. Based upon review of the data supporting the petition, EPA has revised the...

  2. System for the Analysis of Global Energy Markets - Vol. II, Model Documentation

    EIA Publications

    2003-01-01

    The second volume provides a data implementation guide that lists all naming conventions and model constraints. In addition, Volume 1 has two appendixes that provide a schematic of the System for the Analysis of Global Energy Markets (SAGE) structure and a listing of the source code, respectively.

  3. Calculated criticality for ²³⁵U/graphite systems using the VIM Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, P.J.; Grasseschi, G.L.; Olsen, D.N.

    1992-01-01

    Calculations for highly enriched uranium and graphite systems gained renewed interest recently for the new production modular high-temperature gas-cooled reactor (MHTGR). Experiments to validate the physics calculations for these systems are being prepared for the Transient Reactor Test Facility (TREAT) reactor at Argonne National Laboratory (ANL-West) and in the Compact Nuclear Power Source facility at Los Alamos National Laboratory. The continuous-energy Monte Carlo code VIM, or equivalently the MCNP code, can utilize fully detailed models of the MHTGR and serve as benchmarks for the approximate multigroup methods necessary in full reactor calculations. Validation of these codes and their associated nuclear data did not exist for highly enriched {sup 235}U/graphite systems. Experimental data, used in the development of more approximate methods, date back to the 1960s. The authors have selected two independent sets of experiments for calculation with the VIM code. The carbon-to-uranium (C/U) ratios span the range from 2,000, representative of the new production MHTGR, to 10,000, the ratio in the fuel of TREAT. Calculations used the ENDF/B-V data.

  4. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. The project's primary mission is to develop the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: improving the code and adding new modules that provide capabilities needed for accelerator design; producing a User's Guide that documents the use of the code for all users; releasing the code and the User's Guide to accelerator laboratories for their own use and obtaining feedback from them; building an interactive user interface for setting up ARGUS calculations; and exploring the use of ARGUS on high-power workstation platforms.

  5. Modeling moisture content of fine dead wildland fuels: Input to the BEHAVE fire prediction system

    Treesearch

    Richard C. Rothermel; Ralph A. Wilson; Glen A. Morris; Stephen S. Sackett

    1986-01-01

    Describes a model for predicting moisture content of fine fuels for use with the BEHAVE fire behavior and fuel modeling system. The model is intended to meet the need for more accurate predictions of fine fuel moisture, particularly in northern conifer stands and on days following rain. The model is based on the Canadian Fine Fuel Moisture Code (FFMC), modified to...

  6. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurosu, K; Department of Medical Physics ' Engineering, Osaka University Graduate School of Medicine, Osaka; Takashina, M

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) computed by the GATE and PHITS codes have not been reported. These relationships are studied here for PDD and proton range, with the FLUKA code and experimental data as references. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport models. The dependence of the PDDs on these customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE, and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle, and then defined the optimal parameters for the simulation by referring to the calculation results. The physics models, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health, Labor and Welfare of Japan, Grants-in-Aid for Scientific Research (No. 23791419), and the JSPS Core-to-Core program (No. 23003). The authors have no conflict of interest.

  7. Modeling And Simulation Of Bar Code Scanners Using Computer Aided Design Software

    NASA Astrophysics Data System (ADS)

    Hellekson, Ron; Campbell, Scott

    1988-06-01

    Many optical systems must meet demanding requirements to package the system in a small three-dimensional space. The use of computer graphic tools can be a tremendous aid to the designer in analyzing the optical problems created by smaller and less costly systems. The Spectra Physics grocery store bar code scanner employs an especially complex three-dimensional scan pattern to read bar code labels. By using a specially written program which interfaces with a computer aided design system, we have simulated many of the functions of this complex optical system. In this paper we illustrate how a recent version of the scanner has been designed. We discuss the use of computer graphics in the design process, including interactive tweaking of the scan pattern, analysis of collected light, analysis of the scan pattern density, and analysis of the manufacturing tolerances used to build the scanner.

  8. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept among several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and computing the weight. The tighter integration between NPSS and WATE greatly enhances system-level analysis and optimization capabilities. It also facilitates the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case.
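
    As a rough illustration of the component-based structure such an object-oriented conversion enables (hypothetical classes and scaling constants, not the actual WATE++ architecture), each component can expose a common weight interface that the engine model aggregates:

    ```python
    # Hypothetical sketch, not WATE++ itself: components share a weight
    # interface; the engine sums component weights for the system total.
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        def weight_kg(self) -> float:
            raise NotImplementedError

    @dataclass
    class Fan(Component):
        diameter_m: float
        k: float = 135.0  # illustrative empirical constant, not a WATE value
        def weight_kg(self) -> float:
            # Simple scaling law: weight grows with the cube of diameter.
            return self.k * self.diameter_m ** 3

    @dataclass
    class Engine:
        components: list
        def weight_kg(self) -> float:
            return sum(c.weight_kg() for c in self.components)

    engine = Engine([Fan("fan", diameter_m=3.1)])
    print(f"total weight: {engine.weight_kg():.0f} kg")
    ```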

  9. Model verification of large structural systems

    NASA Technical Reports Server (NTRS)

    Lee, L. T.; Hasselman, T. K.

    1977-01-01

    A methodology was formulated, and a general computer code implemented for processing sinusoidal vibration test data to simultaneously make adjustments to a prior mathematical model of a large structural system, and resolve measured response data to obtain a set of orthogonal modes representative of the test model. The derivation of estimator equations is shown along with example problems. A method for improving the prior analytic model is included.
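
    The report's estimator equations are not reproduced here, but the flavor of such a model-adjustment step can be sketched as a linear minimum-variance update that blends prior model parameters with measured modal data (illustrative matrices only):

    ```python
    # Minimal sketch of a prior-model adjustment step of the kind the report
    # derives (not its actual estimator equations): a linear Bayesian update.
    import numpy as np

    p0 = np.array([1.0, 2.0])          # prior model parameters (e.g., stiffnesses)
    P = np.diag([0.1, 0.2])            # prior parameter covariance
    H = np.array([[1.0, 0.5],
                  [0.0, 1.0]])         # sensitivity of measurements to parameters
    R = np.diag([0.05, 0.05])          # measurement noise covariance
    y = np.array([2.3, 2.1])           # measured modal quantities

    # Gain and update (standard linear minimum-variance estimator):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    p = p0 + K @ (y - H @ p0)
    print("adjusted parameters:", p)
    ```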

  10. Development of Automated Procedures to Generate Reference Building Models for ASHRAE Standard 90.1 and India’s Building Energy Code and Implementation in OpenStudio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Andrew; Haves, Philip; Jegi, Subhash

    This paper describes a software system for automatically generating a reference (baseline) building energy model from the proposed (as-designed) building energy model. This system is built using the OpenStudio Software Development Kit (SDK) and is designed to operate on building energy models in the OpenStudio file format.

  11. A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems

    NASA Astrophysics Data System (ADS)

    Portegies Zwart, Simon; McMillan, Steve; O'Nualláin, Breanndán; Heggie, Douglas; Lombardi, James; Hut, Piet; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Fuji, Michiko; Gaburov, Evghenii; Glebbeek, Evert; Groen, Derek; Harfst, Stefan; Izzard, Rob; Jurić, Mario; Justham, Stephen; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel

    We present MUSE, a software framework for tying together existing computational tools for different astrophysical domains into a single multiphysics, multiscale workload. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for a generalized stellar systems workload. MUSE has now reached a "Noah's Ark" milestone, with two available numerical solvers for each domain. MUSE can treat small stellar associations, galaxies and everything in between, including planetary systems, dense stellar clusters and galactic nuclei. Here we demonstrate an example calculated with MUSE: the merger of two galaxies. In addition we demonstrate the working of MUSE on a distributed computer. The current MUSE code base is publicly available as open source at http://muse.li.
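
    A minimal sketch of the kind of module contract such a framework relies on (hypothetical method names, not MUSE's actual interface) might look like this:

    ```python
    # Illustrative sketch of a domain-module contract a coupling framework
    # could standardize (hypothetical names, not MUSE's API): each solver
    # conforms to a small interface so the framework can advance them together.
    from abc import ABC, abstractmethod

    class PhysicsModule(ABC):
        @abstractmethod
        def initialize(self, particles): ...
        @abstractmethod
        def evolve(self, t_end: float): ...
        @abstractmethod
        def get_state(self): ...

    class StellarDynamics(PhysicsModule):
        def initialize(self, particles):
            self.state = list(particles)
        def evolve(self, t_end):
            pass   # an N-body kernel would be called here
        def get_state(self):
            return self.state

    def run_coupled(modules, t_end, dt):
        """Advance all domain modules in lockstep, one coupling step at a time."""
        t = 0.0
        while t < t_end:
            for m in modules:
                m.evolve(t + dt)
            t += dt
    ```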

  12. Pulse Vector-Excitation Speech Encoder

    NASA Technical Reports Server (NTRS)

    Davidson, Grant; Gersho, Allen

    1989-01-01

    Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high-quality reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually based error criterion to synthesize natural-sounding speech.
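
    A toy sketch of the codebook-search idea (squared error only; PVXC additionally applies perceptual weighting and a vocal-tract synthesis filter) picks the excitation vector and gain that best match a target frame:

    ```python
    # Toy sketch of vector-excitation selection, not the PVXC algorithm itself:
    # choose the codebook vector whose gain-scaled version minimizes squared
    # error against the target frame.
    import numpy as np

    rng = np.random.default_rng(0)
    codebook = rng.standard_normal((64, 40))   # 64 excitation vectors, 40 samples
    target = rng.standard_normal(40)           # target (residual) frame

    def best_excitation(codebook, target):
        # Optimal gain per codevector, then squared error after gain scaling.
        gains = codebook @ target / np.sum(codebook**2, axis=1)
        errors = np.sum((target - gains[:, None] * codebook) ** 2, axis=1)
        i = int(np.argmin(errors))
        return i, gains[i]

    index, gain = best_excitation(codebook, target)
    print(f"selected vector {index} with gain {gain:.3f}")
    ```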

  13. Construction and Testing of an 80C86 Based Communications Controller for the Petite Amateur Navy Satellite (PANSAT)

    DTIC Science & Technology

    1990-12-01

    Naval Postgraduate School (Code 33). ...system's individual components. Then one derives the overall system reliability from that information, using a simple mathematical model, to be...
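
    Assuming independent components in series, the "simple mathematical model" referred to in the fragment is presumably of the following form, where system reliability is the product of the component reliabilities:

    ```python
    # Series-system reliability under an independence assumption (illustrative
    # component values; this is a generic textbook model, not the report's data).
    component_reliabilities = [0.99, 0.97, 0.995]

    r_system = 1.0
    for r in component_reliabilities:
        r_system *= r
    print(f"series system reliability: {r_system:.4f}")  # ~0.9555
    ```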

  14. Examination of Airborne FDEM System Attributes for UXO Mapping and Detection

    DTIC Science & Technology

    2009-11-01

    quadrature output should only occur when there is a distortion in the transmitter waveform signal that correlates with the quadrature part of the...suggested that the S/N performance of the quadrature output of the two FDEM designs would be similar to the observed S/N of TEM systems, though...the semi-airborne configuration. We propose to extend the current SAIC codes to address this need, and to perform additional modeling using codes

  15. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
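
    As a schematic illustration only (not RADTRAD's actual models or data), this kind of time-dependent dose bookkeeping can be thought of as decay-plus-removal of airborne activity, integrated against a dose conversion factor:

    ```python
    # Schematic sketch: airborne activity decays and is removed by engineered
    # features; receptor dose is the time-integrated activity scaled by a dose
    # conversion factor and a dilution term. All constants are illustrative.
    import numpy as np

    lam_decay = 1e-5      # radioactive decay constant (1/s), illustrative
    lam_removal = 2e-4    # spray/filter removal constant (1/s), illustrative
    dcf = 1e-11           # dose conversion factor, illustrative units
    chi_q = 1e-4          # atmospheric dilution factor (s/m^3), illustrative
    a0 = 1e10             # initial airborne activity (Bq), illustrative

    t = np.linspace(0.0, 3600.0, 3601)            # one hour in 1 s steps
    activity = a0 * np.exp(-(lam_decay + lam_removal) * t)
    dose = dcf * chi_q * np.trapz(activity, t)    # time-integrated exposure
    print(f"receptor dose over 1 h: {dose:.3e} Sv")
    ```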

  16. Optical Performance Modeling of FUSE Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument, designed and developed by the Johns Hopkins University and launched in June 1999, is an astrophysics satellite which provides high resolution spectra (λ/Δλ = 20,000-25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument is comprised of four co-aligned, normal incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC by using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence systems or Legendre-Fourier polynomials for grazing incidence systems. It removes low order terms from the metrology data, calculates statistical ACV or PSD functions, and fits these data to OSAC models for the scatter analysis. In this paper we briefly describe the laboratory image testing of a FUSE spare mirror performed in the near and vacuum ultraviolet at Johns Hopkins University and the OSAC modeling of the test setup performed at NASA/GSFC. The test setup is a double-pass configuration consisting of a Hg discharge source, the FUSE off-axis parabolic mirror under test, an autocollimating flat mirror, and a tomographic imaging detector. Two additional, small fold flats are used in the optical train to accommodate the light source and the detector. The modeling is based on Zernike fitting and PSD analysis of surface metrology data measured by both the mirror vendor (Tinsley) and JHU. The results of our models agree well with the laboratory imaging data, thus validating our theoretical model. Finally, we predict the imaging performance of FUSE mirrors in their flight configuration at far-ultraviolet wavelengths.
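
    The Zernike-fitting step that METDAT performs for normal incidence optics can be sketched with a plain least-squares fit of a few low-order terms to surface heights (synthetic data and a simplified term set, not the METDAT implementation):

    ```python
    # Simplified sketch of low-order Zernike fitting on the unit disk; the
    # residual after removing fitted terms would feed the ACV/PSD scatter step.
    import numpy as np

    rng = np.random.default_rng(1)
    r = np.sqrt(rng.uniform(0.0, 1.0, 500))       # radial sample points, unit disk
    th = rng.uniform(0.0, 2.0 * np.pi, 500)
    z = 0.2 * (2 * r**2 - 1) + 0.05 * r * np.cos(th)  # synthetic surface heights

    # Design matrix of low-order Zernike terms: piston, tilts, defocus.
    A = np.column_stack([
        np.ones_like(r),      # piston
        r * np.cos(th),       # x tilt
        r * np.sin(th),       # y tilt
        2 * r**2 - 1,         # defocus
    ])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = z - A @ coeffs   # low-order terms removed
    print("fitted coefficients:", np.round(coeffs, 3))
    ```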

  17. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    2015-02-01

    The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics across multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users; an unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the needs of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Center will be a resource for industry, DOE programs, and academia validation efforts.

  18. Modelling the performance of the monogroove with screen heat pipe for use in the radiator of the solar dynamic power system of the NASA Space Station

    NASA Technical Reports Server (NTRS)

    Evans, Austin Lewis

    1987-01-01

    A computer code to model the steady-state performance of a monogroove heat pipe for the NASA Space Station is presented, including the effects on heat pipe performance of a screen in the evaporator section which deals with transient surges in the heat input. Errors in a previous code have been corrected, and the new code adds additional loss terms in order to model several different working fluids. Good agreement with existing performance curves is obtained. From a preliminary evaluation of several of the radiator design parameters it is found that an optimum fin width could be achieved but that structural considerations limit the thickness of the fin to a value above optimum.

  19. Vaccine Hesitancy in Discussion Forums: Computer-Assisted Argument Mining with Topic Models.

    PubMed

    Skeppstedt, Maria; Kerren, Andreas; Stede, Manfred

    2018-01-01

    Arguments used when vaccination is debated on Internet discussion forums might give us valuable insights into the reasons behind vaccine hesitancy. In this study, we applied automatic topic modelling to a collection of 943 discussion posts in which vaccines were debated, and six distinct discussion topics were detected by the algorithm. When manually coding the posts ranked as most typical for these six topics, a set of semantically coherent arguments was identified for each extracted topic. This indicates that topic modelling is a useful method for automatically identifying vaccine-related discussion topics and for identifying debate posts where these topics are discussed. This functionality could facilitate manual coding of salient arguments, and thereby form an important component in a system for computer-assisted coding of vaccine-related discussions.
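
    The paper does not spell out its algorithm settings, but a minimal sketch of this kind of topic-modelling step, assuming an LDA-style model over a post-term matrix, could look like:

    ```python
    # Illustrative topic-modelling sketch (toy corpus standing in for the 943
    # posts; the paper's actual algorithm and settings are not given here).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    posts = [
        "vaccines cause side effects in children",
        "measles outbreaks follow low vaccination rates",
        "I distrust pharmaceutical companies and their studies",
    ]

    X = CountVectorizer(stop_words="english").fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    doc_topics = lda.transform(X)   # per-post topic mixture, used to rank
    print(doc_topics.round(2))      # the posts most typical of each topic
    ```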

  20. A model that integrates eye velocity commands to keep track of smooth eye displacements.

    PubMed

    Blohm, Gunnar; Optican, Lance M; Lefèvre, Philippe

    2006-08-01

    Past results have reported conflicting findings on the oculomotor system's ability to keep track of smooth eye movements in darkness. Whereas some results indicate that saccades cannot compensate for smooth eye displacements, others report that memory-guided saccades during smooth pursuit are spatially correct. Recently, it was shown that the amount of time before the saccade made a difference: short-latency saccades were retinotopically coded, whereas long-latency saccades were spatially coded. Here, we propose a model of the saccadic system that can explain the available experimental data. The novel part of this model consists of a delayed integration of efferent smooth eye velocity commands. Two alternative physiologically realistic neural mechanisms for this integration stage are proposed. Model simulations accurately reproduced prior findings. Thus, this model reconciles the earlier contradictory reports from the literature about compensation for smooth eye movements before saccades because it involves a slow integration process.
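
    A conceptual sketch of the proposed integration stage (greatly simplified; the delay value and pursuit profile below are illustrative assumptions, not the paper's parameters) integrates delayed efference-copy velocity to track smooth eye displacement:

    ```python
    # Simplified sketch of delayed integration of efferent eye velocity.
    import numpy as np

    dt = 0.001                             # 1 ms steps
    t = np.arange(0.0, 0.5, dt)            # 500 ms of smooth pursuit
    eye_velocity = 10.0 * np.ones_like(t)  # efference copy (deg/s), illustrative

    delay = 0.05                           # assumed 50 ms integration-stage delay
    n_delay = int(delay / dt)
    delayed_v = np.concatenate([np.zeros(n_delay), eye_velocity[:-n_delay]])

    displacement = np.cumsum(delayed_v) * dt   # integrated smooth displacement
    print(f"tracked displacement after 500 ms: {displacement[-1]:.2f} deg")
    # A retinotopically stored saccade goal can then be corrected by this
    # tracked displacement to produce a spatially accurate saccade.
    ```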

  1. Stable isotope reactive transport modeling in water-rock interactions during CO2 injection

    NASA Astrophysics Data System (ADS)

    Hidalgo, Juan J.; Lagneau, Vincent; Agrinier, Pierre

    2010-05-01

    Stable isotopes can be of great use in the characterization and monitoring of CO2 sequestration sites. They can be used to track the migration of the CO2 plume and to identify leakage sources. Moreover, they provide unique information about the chemical reactions that take place in the CO2-water-rock system. However, there is a lack of appropriate tools that help modelers incorporate stable isotope information into the flow and transport models used in CO2 sequestration problems. In this work, we present a numerical tool for modeling the transport of stable isotopes in groundwater reactive systems. The code is an extension of the groundwater single-phase flow and reactive transport code HYTEC [2]. HYTEC's transport module was modified to include element isotopes as separate species. This way, it is able to track the isotope composition of the system by computing the mixing between the background water and the injected solution, accounting for the dependence of diffusion on the isotope mass. The chemical module and database have been expanded to include isotopic exchange with minerals and the isotope fractionation associated with chemical reactions and mineral dissolution or precipitation. The performance of the code is illustrated through a series of synthetic column models. The code is also used to model the aqueous-phase CO2 injection test carried out at the Lamont-Doherty Earth Observatory site (Palisades, New York, USA) [1]. References: [1] N. Assayag, J. Matter, M. Ader, D. Goldberg, and P. Agrinier. Water-rock interactions during a CO2 injection field-test: Implications on host rock dissolution and alteration effects. Chemical Geology, 265(1-2):227-235, July 2009. [2] Jan van der Lee, Laurent De Windt, Vincent Lagneau, and Patrick Goblet. Module-oriented modeling of reactive transport with HYTEC. Computers & Geosciences, 29(3):265-275, April 2003.
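
    A minimal sketch of the isotope bookkeeping described above (not HYTEC code; concentrations are illustrative) shows how the delta value of a mixture follows from mass balance of the two isotope species:

    ```python
    # Carbon isotopes carried as separate species; the delta value of a mixture
    # is recovered from the mass balance of 13C and 12C.
    R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB standard

    def delta13C(c12, c13):
        """Delta notation (permil) from the concentrations of each isotope."""
        return ((c13 / c12) / R_VPDB - 1.0) * 1000.0

    def mix(c12_a, c13_a, c12_b, c13_b, f):
        """Conservative mixing: fraction f of water A with (1 - f) of water B."""
        return (f * c12_a + (1 - f) * c12_b,
                f * c13_a + (1 - f) * c13_b)

    # Background water (~ -12 permil) mixing 50:50 with injected CO2-rich water:
    c12, c13 = mix(1.0, 0.011102, 2.0, 0.022696, 0.5)
    print(f"mixture delta13C = {delta13C(c12, c13):+.1f} permil")
    ```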

  2. tran-SAS v1.0: a numerical model to compute catchment-scale hydrologic transport using StorAge Selection functions

    NASA Astrophysics Data System (ADS)

    Benettin, Paolo; Bertuzzo, Enrico

    2018-04-01

    This paper presents the tran-SAS package, which includes a set of codes to model solute transport and water residence times through a hydrological system. The model is based on a catchment-scale approach that aims at reproducing the integrated response of the system at one of its outlets. The codes are implemented in MATLAB and are meant to be easy to edit, so that users with minimal programming knowledge can adapt them to the desired application. The problem of large-scale solute transport has both theoretical and practical implications. On the one side, the ability to represent the ensemble of water flow trajectories through a heterogeneous system helps unraveling streamflow generation processes and allows us to make inferences on plant-water interactions. On the other side, transport models are a practical tool that can be used to estimate the persistence of solutes in the environment. The core of the package is based on the implementation of an age master equation (ME), which is solved using general StorAge Selection (SAS) functions. The age ME is first converted into a set of ordinary differential equations, each addressing the transport of an individual precipitation input through the catchment, and then it is discretized using an explicit numerical scheme. Results show that the implementation is efficient and allows the model to run in short times. The numerical accuracy is critically evaluated and it is shown to be satisfactory in most cases of hydrologic interest. Additionally, a higher-order implementation is provided within the package to evaluate and, if necessary, to improve the numerical accuracy of the results. The codes can be used to model streamflow age and solute concentration, but a number of additional outputs can be obtained by editing the codes to further advance the ability to understand and model catchment transport processes.
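
    A toy sketch of the age-tracking idea behind SAS-function transport (far simpler than tran-SAS itself: one storage, a uniform SAS function, constant fluxes) can be written as an explicit time-stepping loop:

    ```python
    # Single storage with a uniform (well-mixed) SAS function: outflow samples
    # every age class in proportion to its stored volume. Values illustrative.
    import numpy as np

    dt = 1.0                        # time step (d)
    n_steps = 100
    J = Q = 5.0                     # steady inflow and outflow (mm/d)

    ages = np.zeros(n_steps + 1)    # stored water volume per age class (mm)
    ages[0] = 500.0                 # initial storage, all assigned age zero here

    for _ in range(n_steps):
        ages[1:] = ages[:-1].copy() # stored water ages by one step
        ages[0] = J * dt            # new precipitation enters at age zero
        frac_out = Q * dt / ages.sum()
        ages *= (1.0 - frac_out)    # uniform SAS: all ages sampled evenly

    mean_age = (np.arange(ages.size) * dt * ages).sum() / ages.sum()
    print(f"mean age of stored water after {n_steps} d: {mean_age:.0f} d")
    ```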

  3. User's manual for PANDA II: A computer code for calculating equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerley, G.I.

    1991-07-18

    PANDA is an interactive computer code that is used to compute equations of state (EOS) for many classes of materials over a wide range of densities and temperatures. The first step in the development of a general EOS model is to determine the EOS for a one-component system, consisting of a single solid or fluid phase and a single chemical species. The results of several such calculations can then be combined to construct EOS for multiphase and multicomponent systems. For one-component solids and fluids, PANDA offers a variety of options for modeling various contributions to the EOS: the zero-Kelvin isotherm, lattice vibrations, fluid degrees of freedom, thermal electronic excitation and ionization, and molecular vibrational and rotational degrees of freedom. Two options are available for computing EOS for multicomponent systems from separate EOS for the individual species and phases. The phase transition model is used for a system of immiscible phases, each having the same chemical composition. In the mixture model, the components can be either miscible or immiscible and can have different chemical compositions; mixtures can be either inert or reactive. PANDA provides over 50 commands that are used to define the EOS models, to make calculations and compare the models to experimental data, and to generate and maintain tabular EOS libraries for use in hydrocodes and other applications. Versions of the code are available for the Cray (UNICOS and CTSS), SUN (UNIX), and VAX (VMS) machines, and a small version is available for personal computers (DOS). This report describes the EOS models, use of the commands, and several sample problems. 92 refs., 7 figs., 10 tabs.

  4. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, the simulation software itself becomes complex in these environments, and thus it is difficult to modify the simulation conditions, target computation resources, or calculation methods. Complex biological function simulation software comprises 1) model equations, 2) boundary conditions, and 3) calculation schemes. Use of a description model file is useful for the first point and partly for the second, but the third is difficult to handle for the various calculation schemes required by simulation models constructed from two or more elementary models. We introduce a simulation software generation system that uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. Using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
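
    A conceptual sketch of such description-driven code generation (a made-up dictionary format rather than the paper's markup languages) emits an explicit-Euler update function from a model description:

    ```python
    # Hypothetical model-description format and a tiny code generator; the
    # paper's actual markup languages and schemes are richer than this.
    MODEL = {
        "name": "decay",
        "state": ["x"],
        "equations": {"x": "-k * x"},   # right-hand side of dx/dt = -k*x
        "params": {"k": 0.3},
    }

    def generate_solver(model):
        """Emit Python source for one explicit-Euler step of the model."""
        states = ", ".join(model["state"])
        params = ", ".join(model["params"])
        lines = [f"def step_{model['name']}({states}, dt, {params}):"]
        for var in model["state"]:
            lines.append(f"    d{var} = {model['equations'][var]}")
        updated = ", ".join(f"{v} + d{v} * dt" for v in model["state"])
        lines.append(f"    return {updated}")
        return "\n".join(lines)

    src = generate_solver(MODEL)
    exec(src)                        # defines step_decay in this namespace
    x = 1.0
    for _ in range(10):
        x = step_decay(x, 0.1, k=MODEL["params"]["k"])
    print(f"x after 10 steps: {x:.4f}")   # ~0.7374 for k=0.3, dt=0.1
    ```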

  5. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; et al.

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy system and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the earth atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effect of the solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA is using the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate the reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  7. Project summaries

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Lunar base projects, including a reconfigurable lunar cargo launcher, a thermal and micrometeorite protection system, a versatile lifting machine with robotic capabilities, a cargo transport system, the design of a road construction system for a lunar base, and the design of a device for removing lunar dust from material surfaces, are discussed. The emphasis on the Gulf of Mexico project was on the development of a computer simulation model for predicting vessel station keeping requirements. An existing code, used in predicting station keeping requirements for oil drilling platforms operating in North Shore (Alaska) waters was used as a basis for the computer simulation. Modifications were made to the existing code. The input into the model consists of satellite altimeter readings and water velocity readings from buoys stationed in the Gulf of Mexico. The satellite data consists of altimeter readings (wave height) taken during the spring of 1989. The simulation model predicts water velocity and direction, and wind velocity.

  8. Multi-agent modelling framework for water, energy and other resource networks

    NASA Astrophysics Data System (ADS)

    Knox, S.; Selby, P. D.; Meier, P.; Harou, J. J.; Yoon, J.; Lachaut, T.; Klassert, C. J. A.; Avisse, N.; Mohamed, K.; Tomlinson, J.; Khadem, M.; Tilmant, A.; Gorelick, S.

    2015-12-01

    Bespoke modelling tools are often needed when planning future engineered interventions in the context of various climate, socio-economic and geopolitical futures. Such tools can help improve system operating policies or assess infrastructure upgrades and their risks. A frequently used approach is to simulate and/or optimise the impact of interventions in engineered systems. Modelling complex infrastructure systems can involve incorporating multiple aspects into a single model, for example physical, economic and political, which presents the challenge of combining research from diverse areas into a single system effectively. We present the Pynsim 'Python Network Simulator' framework, a library for building simulation models capable of representing the physical, institutional and economic aspects of an engineered resource system. Pynsim is an open-source, object-oriented code base aiming to promote the integration of different modelling processes through a single code library. We present two case studies that demonstrate important features of Pynsim's design. The first is a large interdisciplinary project on a national water system in the Middle East, with modellers from fields including water resources, economics, hydrology and geography, each considering different facets of a multi-agent system. It includes: modelling water supply and demand for households and farms; a water tanker market with transfer of water between farms and households; and policy decisions made by government institutions at district, national and international level. This study demonstrates that a well-structured library of code can provide a hub for development and act as a catalyst for integrating models. The second focuses on optimising the location of new run-of-river hydropower plants. Using a multi-objective evolutionary algorithm, this study analyses different network configurations to identify the optimal placement of new power plants within a river network, demonstrating that Pynsim can be used to evaluate a multitude of topologies for identifying the optimal location of infrastructure investments. Pynsim is available on GitHub or via standard Python installer packages such as pip. It comes with several examples and online documentation, making it attractive for those less experienced in software engineering.
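
    In the spirit of that design (hypothetical class names, not Pynsim's actual API), a network simulation couples node agents and institution agents stepped together each timestep:

    ```python
    # Hypothetical mini-example in the spirit of Pynsim: nodes hold physical
    # state, institutions set policy, and the loop steps every agent in turn.
    class Reservoir:
        def __init__(self, storage, inflow):
            self.storage, self.inflow, self.release = storage, inflow, 0.0
        def step(self):
            self.storage += self.inflow - self.release

    class RegulatorInstitution:
        """Policy agent: caps releases, scaling them down when storage is low."""
        def __init__(self, reservoirs):
            self.reservoirs = reservoirs
        def step(self):
            for r in self.reservoirs:
                r.release = min(8.0, r.storage * 0.1)

    res = Reservoir(storage=100.0, inflow=5.0)
    agents = [RegulatorInstitution([res]), res]   # institutions act before nodes
    for t in range(12):
        for a in agents:
            a.step()
    print(f"storage after a year: {res.storage:.1f}")
    ```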

  9. Generating performance portable geoscientific simulation code with Firedrake (Invited)

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Bercea, G.; Cotter, C. J.; Kelly, P. H.; Loriant, N.; Luporini, F.; McRae, A. T.; Mitchell, L.; Rathgeber, F.

    2013-12-01

    This presentation will demonstrate how a change in simulation programming paradigm can be exploited to deliver sophisticated simulation capability which is far easier to programme than are conventional models, is capable of exploiting different emerging parallel hardware, and is tailored to the specific needs of geoscientific simulation. Geoscientific simulation represents a grand challenge computational task: many of the largest computers in the world are tasked with this field, and the requirements of resolution and complexity of scientists in this field are far from being sated. However, single thread performance has stalled, even sometimes decreased, over the last decade, and has been replaced by ever more parallel systems: both as conventional multicore CPUs and in the emerging world of accelerators. At the same time, the needs of scientists to couple ever-more complex dynamics and parametrisations into their models makes the model development task vastly more complex. The conventional approach of writing code in low level languages such as Fortran or C/C++ and then hand-coding parallelism for different platforms by adding library calls and directives forces the intermingling of the numerical code with its implementation. This results in an almost impossible set of skill requirements for developers, who must simultaneously be domain science experts, numericists, software engineers and parallelisation specialists. Even more critically, it requires code to be essentially rewritten for each emerging hardware platform. Since new platforms are emerging constantly, and since code owners do not usually control the procurement of the supercomputers on which they must run, this represents an unsustainable development load. The Firedrake system, conversely, offers the developer the opportunity to write PDE discretisations in the high-level mathematical language UFL from the FEniCS project (http://fenicsproject.org). Non-PDE model components, such as parametrisations, can be written as short C kernels operating locally on the underlying mesh, with no explicit parallelism. The executable code is then generated in C, CUDA or OpenCL and executed in parallel on the target architecture. The system also offers features of special relevance to the geosciences. In particular, the large scale separation between the vertical and horizontal directions in many geoscientific processes can be exploited to offer the flexibility of unstructured meshes in the horizontal direction, without the performance penalty usually associated with those methods.
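
    For example, a Poisson problem written in UFL is only a few lines; Firedrake then generates and executes the low-level parallel code (API as in recent Firedrake releases, so details may differ by version):

    ```python
    # High-level UFL specification of -laplace(u) = f on the unit square;
    # Firedrake compiles this to efficient code for the target architecture.
    from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction,
                           TestFunction, Function, Constant, DirichletBC,
                           dot, grad, dx, solve)

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)          # piecewise-linear elements

    u = TrialFunction(V)
    v = TestFunction(V)
    f = Constant(1.0)

    a = dot(grad(u), grad(v)) * dx            # weak form, written mathematically
    L = f * v * dx
    bc = DirichletBC(V, 0.0, (1, 2, 3, 4))    # zero BC on all four sides

    uh = Function(V)
    solve(a == L, uh, bcs=bc)                 # code generation + parallel solve
    ```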

  10. Modeling of Radiowave Propagation in a Forested Environment

    DTIC Science & Technology

    2014-09-01

    is unlimited 12b. DISTRIBUTION CODE 13. ABSTRACT (maximum 200 words) Propagation models used in wireless communication system design play an...domains. Applications in both domains require communication devices and sensors to be operated in forested environments. Various methods have been...wireless communication system design play an important role in overall link performance. Propagation models in a forested environment, in particular

  11. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability and in-situ data analytics for Earth system model simulation, as well as model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  13. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    PubMed

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

    Evidence that neurosensory systems use sparse signal representations, as well as the improved performance of signal processing algorithms using sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large-data-set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, it means that care should be taken when directly transferring findings of optimality from technical to biological systems.
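
    Plain matching pursuit with a generic unit-norm dictionary (the paper uses an accelerated variant specialized to gammatone dictionaries) can be sketched as:

    ```python
    # Greedy sparse approximation by matching pursuit; dictionary rows are
    # unit-norm atoms. Synthetic data, for illustration only.
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms):
        residual = signal.copy()
        atoms, coefs = [], []
        for _ in range(n_atoms):
            corr = dictionary @ residual          # correlate atoms with residual
            k = int(np.argmax(np.abs(corr)))      # best-matching atom
            atoms.append(k)
            coefs.append(corr[k])
            residual = residual - corr[k] * dictionary[k]
        return atoms, coefs, residual

    rng = np.random.default_rng(0)
    D = rng.standard_normal((256, 128))
    D /= np.linalg.norm(D, axis=1, keepdims=True)  # normalize atoms
    x = 2.0 * D[10] - 0.5 * D[42]                  # sparse synthetic signal
    atoms, coefs, r = matching_pursuit(x, D, 4)
    print(atoms, np.round(coefs, 2), np.linalg.norm(r))
    ```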

  14. Modelling of the reactive transport for rock salt-brine in geological repository systems based on improved thermodynamic database (Invited)

    NASA Astrophysics Data System (ADS)

    Müller, W.; Alkan, H.; Xie, M.; Moog, H.; Sonnenthal, E. L.

    2009-12-01

    The release and migration of toxic contaminants from disposed wastes is one of the main issues in the long-term safety assessment of geological repositories. In the engineered and geological barriers around nuclear waste emplacements, chemical interactions between the components of the system may affect the isolation properties considerably. As chemical processes change the transport properties in the near and far field of a nuclear repository, modelling of the transport should also take the chemistry into account. Reactive transport modelling consists of two main components: a code that combines the possible chemical reactions with thermo-hydrogeological processes interactively, and a thermodynamic databank supplying the parameters required for the calculation of the chemical reactions. In the last decade many thermo-hydrogeological codes were upgraded to include the modelling of chemical processes. TOUGHREACT is one of these codes. It is an extension of the well-known simulator TOUGH2 for modelling geoprocesses. The code was developed by LBNL (Lawrence Berkeley National Laboratory, Univ. of California) for the simulation of the multi-phase transport of gas and liquid in porous media, including heat transfer. After the release of its first version in 1998, this code has been applied and improved many times in conjunction with considerations for nuclear waste emplacement. A recent version has been extended to calculate ion activities in concentrated salt solutions by applying the Pitzer model. In TOUGHREACT, the incorporated equation-of-state module ECO2N is applied as the EOS module for non-isothermal multiphase flow in a fluid system of H2O-NaCl-CO2. The partitioning of H2O and CO2 between liquid and gas phases is modelled as a function of temperature, pressure, and salinity. This module is applicable for waste repositories that are expected to generate CO2 or that originally contain it in the fluid system. The enhanced TOUGHREACT uses an EQ3/6-formatted database for both Pitzer ion-interaction parameters and thermodynamic equilibrium constants. The reliability of the parameters is as important as the accuracy of the modelling tool. For this purpose the project THEREDA (www.thereda.de) was set up. The project aims at a comprehensive and internally consistent thermodynamic reference database for geochemical modelling of near- and far-field processes occurring in repositories for radioactive wastes in various host rock formations. In the framework of the project, all data necessary to perform thermodynamic equilibrium calculations at elevated temperature in the system of oceanic salts are under revision, and it is expected that related data will be available for download by 2010-03. In this paper the geochemical issues that can play an essential role in the transport of radioactive contaminants within and around waste repositories are discussed. Some generic calculations are given to illustrate the geochemical interactions and their probable effects on the transport properties around HLW emplacements and on CO2 generating and/or containing repository systems.

  15. Computer modeling of the mineralogy of the Martian surface, as modified by aqueous alteration

    NASA Technical Reports Server (NTRS)

    Zolensky, M. E.; Bourcier, W. L.; Gooding, J. L.

    1988-01-01

    Mineralogical constraints can be placed on the Martian surface by assuming chemical equilibria among the surface rocks, atmosphere and hypothesized percolating groundwater. A study was made of possible Martian surface mineralogy, as modified by the action of aqueous alteration, using the EQ3/6 computer codes. These codes calculate gas fugacities, aqueous speciation, ionic strength, pH, Eh and concentration and degree of mineral saturation for complex aqueous systems. Thus, these codes are also able to consider mineralogical solid solutions. These codes are able to predict the likely alteration phases which will occur as the result of weathering on the Martian surface. Knowledge of the stability conditions of these phases will then assist in the definition of the specifications for the sample canister of the proposed Martian sample return mission. The model and its results are discussed.

  16. Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1

    NASA Technical Reports Server (NTRS)

    Wright, Michael J.; White, Todd; Mangini, Nancy

    2009-01-01

    Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamic (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.

  17. Stochastic many-body problems in ecology, evolution, neuroscience, and systems biology

    NASA Astrophysics Data System (ADS)

    Butler, Thomas C.

    Using the tools of many-body theory, I analyze problems in four areas of biology dominated by strong fluctuations: the evolutionary history of the genetic code, spatiotemporal pattern formation in ecology, spatiotemporal pattern formation in neuroscience, and the robustness of a model circadian rhythm circuit in systems biology. In the first two research chapters, I demonstrate that the genetic code is extremely optimal (in the sense that it manages the effects of point mutations or mistranslations efficiently), more than an order of magnitude beyond what was previously thought. I further show that the structure of the genetic code implies that early proteins were probably only loosely defined. Both the nature of early proteins and the extreme optimality of the genetic code are interpreted in light of recent theory [1] as evidence that the evolution of the genetic code was driven by evolutionary dynamics dominated by horizontal gene transfer. I then explore the optimality of a proposed precursor to the genetic code. The results show that the precursor code has only limited optimality, which is interpreted as evidence that the precursor emerged prior to translation, or else never existed. In the next part of the dissertation, I introduce a many-body formalism for reaction-diffusion systems described at the mesoscopic scale by master equations. I first apply this formalism to spatially extended predator-prey ecosystems, resulting in the prediction that many-body correlations and fluctuations drive population cycles in time, called quasi-cycles. Most of these results were previously known, but were derived using the system size expansion [2, 3]. I next apply the analytical techniques developed in the study of quasi-cycles to a simple model of Turing patterns in a predator-prey ecosystem. This analysis shows that fluctuations drive the formation of a new kind of spatiotemporal pattern that I name "quasi-patterns." These quasi-patterns exist over a much larger range of physically accessible parameters than the patterns predicted by mean-field theory, and therefore account for apparent observations in ecology of patterns in regimes where Turing patterns do not occur. I further show that quasi-patterns have statistical properties that allow them to be distinguished empirically from mean-field Turing patterns. I next analyze a model of the visual cortex that has striking similarities to the activator-inhibitor model of ecosystem quasi-pattern formation. Through analysis of the resulting phase diagram, I show that the architecture of the neural network in the visual cortex is configured to make the visual cortex robust to unwanted internally generated spatial structure that would interfere with normal visual function. I also predict that some geometric visual hallucinations are quasi-patterns, and that the visual cortex supports a new phase of spatially scale-invariant behavior far from criticality. In the final chapter, I explore the effects of fluctuations on cycles in systems biology, specifically the pervasive phenomenon of circadian rhythms. By exploring the behavior of a generic stochastic model of circadian rhythms, I show that the circadian rhythm circuit exploits leaky mRNA production to safeguard the cycle from failure, and that this safeguard mechanism is highly robust to changes in the rate of leaky mRNA production. Finally, I explore the failure of the deterministic model in two contexts: one in which it predicts cycles that do not exist, and another in which it fails to predict cycles that do.
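    The mesoscopic description above treats predator-prey dynamics as a birth-death master equation, which can be simulated exactly with Gillespie's stochastic simulation algorithm. As a minimal sketch of how demographic noise produces quasi-cycles, the Python snippet below simulates an illustrative predator-prey reaction scheme; the scheme, rate constants, and system size V are assumptions for illustration, not the dissertation's actual model.

    ```python
    import numpy as np

    # Exact (Gillespie) simulation of an illustrative stochastic predator-prey
    # birth-death process of the kind such master equations describe. The
    # reaction scheme, rate constants, and system size V below are assumptions
    # for illustration only.

    rng = np.random.default_rng(0)
    b, comp, c, d = 1.0, 0.5, 1.0, 0.5   # prey birth, competition, predation, death
    V = 200                              # system size (noise strength ~ 1/sqrt(V))
    prey, pred, t = V // 2, 3 * V // 4, 0.0
    series = [(t, prey, pred)]

    while t < 200.0 and prey > 0 and pred > 0:
        rates = np.array([b * prey,                # X -> 2X      (prey birth)
                          comp * prey * prey / V,  # X + X -> X   (competition)
                          c * prey * pred / V,     # X + Y -> 2Y  (predation)
                          d * pred])               # Y -> 0       (predator death)
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # waiting time to next event
        event = rng.choice(4, p=rates / total)     # which reaction fires
        if event == 0:
            prey += 1
        elif event == 1:
            prey -= 1
        elif event == 2:
            prey, pred = prey - 1, pred + 1
        else:
            pred -= 1
        series.append((t, prey, pred))

    # Mean-field theory predicts damped oscillations for these parameters, but
    # demographic noise keeps re-exciting them: the power spectrum of the
    # predator series shows the resonant peak characteristic of quasi-cycles.
    print(f"{len(series)} reactions; final populations: prey={prey}, pred={pred}")
    ```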

  18. Applications of the microdosimetric function implemented in the macroscopic particle transport simulation code PHITS.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji

    2012-01-01

    Microdosimetric quantities such as lineal energy are generally considered better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, microdosimetric tools such as track-structure simulation codes must be integrated with macroscopic particle transport simulation codes. As such an integration, a mathematical model for calculating the PD of microdosimetric quantities, developed on the basis of track-structure simulations, was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and to risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.
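    As an illustration of the risk-estimation application, a dose-mean quality factor follows from folding the dose probability density of lineal energy, d(y), with a quality factor Q(y). The sketch below assumes an ICRU Report 40-style Q(y) parameterization and a stand-in lognormal d(y); a real calculation would take d(y) from the PHITS microdosimetric function rather than constructing it by hand.

    ```python
    import numpy as np

    # Sketch: fold the dose probability density of lineal energy, d(y), with a
    # quality factor Q(y) to obtain a dose-mean quality factor. The Q(y) form
    # (ICRU Report 40 style) and the stand-in lognormal d(y) are assumptions
    # for illustration, not PHITS output.

    def q_of_y(y):
        """ICRU-40-style quality factor vs. lineal energy y (keV/um)."""
        a1, a2, a3 = 5510.0, 5.0e-5, 2.0e-7
        return (a1 / y) * (1.0 - np.exp(-a2 * y**2 - a3 * y**3))

    def trapz(f, x):
        """Trapezoidal integration (kept explicit for NumPy-version safety)."""
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

    y = np.logspace(-1, 3, 500)                  # lineal energy grid, keV/um

    # Stand-in dose distribution d(y): lognormal shape, normalized on the grid.
    d_y = np.exp(-0.5 * ((np.log(y) - np.log(20.0)) / 0.8) ** 2) / y
    d_y /= trapz(d_y, y)

    q_bar = trapz(q_of_y(y) * d_y, y)            # dose-mean quality factor
    print(f"dose-mean quality factor: {q_bar:.2f}")
    ```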

  19. Generalized gas-solid adsorption modeling: Single-component equilibria

    DOE PAGES

    Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas; ...

    2015-01-07

    Over the last several decades, modeling of gas-solid adsorption at equilibrium has generally been accomplished through the use of isotherms such as the Freundlich, Langmuir, Tóth, and other similar models. While these models are relatively easy to adapt for describing experimental data, their simplicity limits their generality across many different sets of data. This limitation forces engineers and scientists to test each model in order to evaluate which one best describes their data. Additionally, the parameters of these models each have a different physical interpretation, which affects how they can be extended into kinetic, thermodynamic, and/or mass transfer models for engineering applications. Therefore, it is paramount to adopt not only a more general isotherm model, but also a concise methodology to reliably obtain the parameters of that model. A model of particular interest is the Generalized Statistical Thermodynamic Adsorption (GSTA) isotherm. The GSTA isotherm has enormous flexibility, which could potentially be used to describe a variety of different adsorption systems, but that same flexibility can make the model fairly difficult to use. To circumvent this complication, a comprehensive methodology and computer code have been developed that can perform a full equilibrium analysis of adsorption data for any gas-solid system using the GSTA model. The code is written in C/C++ and uses a Levenberg-Marquardt algorithm to handle the non-linear optimization of the model parameters. Since the GSTA model has an adjustable number of parameters, the code iterates through all plausible numbers of parameters for each data set and returns the best solution based on a set of scrutiny criteria. Data sets at different temperatures are analyzed serially, and linear correlations with temperature are then made for the parameters of the model. The end result is a full set of optimal GSTA parameters, both dimensional and non-dimensional, as well as the corresponding thermodynamic parameters needed to predict the behavior of the system at temperatures for which data were not available. This code, using the GSTA model, was able to describe a wide variety of gas-solid adsorption systems at equilibrium. In addition, a physical interpretation of the results is provided, along with an alternate derivation of the GSTA model that reaffirms its physical meaning.
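    A minimal sketch of that fitting loop is given below, assuming the commonly quoted GSTA form q(p) = (q_max/m) Σ_n n K_n p^n / (1 + Σ_n K_n p^n) for n = 1..m: each candidate number of sites m is fitted with Levenberg-Marquardt, and the best m is kept by a simple RMSE criterion standing in for the paper's fuller scrutiny criteria. The data, starting guesses, and the use of Python/SciPy instead of the authors' C/C++ code are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Fit the GSTA isotherm for each candidate number of sites m, keep the
    # best m by RMSE. Fitting log(K_n) keeps the equilibrium constants
    # positive while allowing unconstrained Levenberg-Marquardt.

    def gsta(p, qmax, lnK):
        """GSTA loading at pressures p; lnK holds log equilibrium constants."""
        n = np.arange(1, lnK.size + 1)
        terms = np.exp(lnK)[None, :] * p[:, None] ** n[None, :]
        return qmax / lnK.size * (terms * n).sum(axis=1) / (1.0 + terms.sum(axis=1))

    # Illustrative single-temperature data set (pressure in bar, loading in mol/kg).
    p_data = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
    q_data = np.array([0.30, 0.55, 0.90, 1.55, 2.05, 2.45, 2.80])

    best = None
    for m in range(1, 4):                      # try m = 1..3 adsorption sites
        x0 = np.concatenate(([q_data.max()], np.zeros(m)))  # [qmax, lnK_1..lnK_m]
        fit = least_squares(lambda x: gsta(p_data, x[0], x[1:]) - q_data,
                            x0, method="lm")   # Levenberg-Marquardt
        rmse = np.sqrt(np.mean(fit.fun ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, m, fit.x)

    rmse, m, x = best
    print(f"best m={m}: qmax={x[0]:.3f}, K={np.exp(x[1:])}, RMSE={rmse:.4f}")
    ```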

  20. Development of Graphical User Interface for ARRBOD (Acute Radiation Risk and BRYNTRN Organ Dose Projection)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Hu, Shaowen; Nounu, Hatem N.; Cucinotta, Francis A.

    2010-01-01

    The space radiation environment, particularly solar particle events (SPEs), poses the risk of acute radiation sickness (ARS) to humans, and organ doses from SPE exposure may reach critical levels during extravehicular activities (EVAs) or within lightly shielded spacecraft. NASA has developed an organ dose projection model using the BRYNTRN and SUMDOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). BRYNTRN and SUMDOSE, written in FORTRAN, are a baryon transport code and an output data processing code, respectively; the ARR code is written in C. The risk projection models for organ doses and ARR take the output from BRYNTRN as input to their calculations. BRYNTRN operation requires extensive input preparation. With a graphical user interface (GUI) handling input and output for BRYNTRN, the response models can be connected to BRYNTRN easily, correctly, and in a user-friendly way. The GUI for the Acute Radiation Risk and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required to operate the ARRBOD modules: BRYNTRN, SUMDOSE, and the ARR probabilistic response model. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations staff in the Mission Operations Directorate (MOD), and space biophysics researchers. It will also serve as a proof-of-concept example for the future integration of other human space applications risk projection models. The current version of the ARRBOD GUI is a self-contained product and will have follow-on versions as options are added: 1) human geometries of MAX/FAX in addition to CAM/CAF; 2) shielding distributions for spacecraft, the Mars surface, and the Martian atmosphere; 3) various space environmental and biophysical models; and 4) other response models to be connected to BRYNTRN. The major components of the overall system, the subsystem interconnections, and external interfaces are described in this report, and the ARRBOD GUI product is explained step by step to serve as a tutorial.
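    The chaining that the GUI automates can be pictured as a three-step pipeline: prepare a BRYNTRN input deck, run the transport step, post-process with SUMDOSE, and feed organ doses to the ARR response model. The sketch below is purely schematic; every executable name, input keyword, and file name is hypothetical, since the abstract does not document the real interfaces.

    ```python
    import subprocess
    from pathlib import Path

    # Schematic of the chaining the ARRBOD GUI automates. Every executable
    # name, input keyword, and file name here is hypothetical; the real
    # interfaces are defined by the ARRBOD distribution, not by this sketch.

    workdir = Path("arrbod_run")
    workdir.mkdir(exist_ok=True)

    # Hypothetical input deck standing in for BRYNTRN's extensive input.
    (workdir / "bryntrn.inp").write_text(
        "SPE_SPECTRUM   AUG1972\n"
        "SHIELD_AL_GCM2 5.0\n"
        "GEOMETRY       CAM\n")

    for step in (["bryntrn", "bryntrn.inp"],   # baryon transport (hypothetical CLI)
                 ["sumdose", "bryntrn.out"],   # output data processing
                 ["arr", "organ_doses.dat"]):  # probabilistic ARR response model
        subprocess.run(step, cwd=workdir, check=True)
    ```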
