Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6. [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrix, C.E.
Comparisons between RELAP4/MOD6, Update 4 code-calculated and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.
Posttest analysis of international standard problem 10 using RELAP4/MOD7. [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, M.; Davis, C.B.; Peterson, A.C. Jr.
RELAP4/MOD7, a best estimate computer code for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This paper evaluates the capability of RELAP4/MOD7 to calculate refill/reflood phenomena. This evaluation uses the data of International Standard Problem 10, which is based on West Germany's KWU PKL refill/reflood experiment K9A. The PKL test facility represents a typical West German four-loop, 1300 MW pressurized water reactor (PWR) in reduced scale while maintaining a prototypical volume-to-power ratio. The PKL facility was designed specifically to simulate the refill/reflood phase of a hypothetical loss-of-coolant accident (LOCA).
Pretest prediction of Semiscale Test S-07-10B. [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobbe, C.A.
A best estimate prediction of Semiscale Test S-07-10B was performed at the INEL by EG&G Idaho as part of the RELAP4/MOD6 code assessment effort and as the Nuclear Regulatory Commission pretest calculation for the Small Break Experiment. The RELAP4/MOD6 Update 4 and RELAP4/MOD7 computer codes were used to analyze Semiscale Test S-07-10B, a 10% communicative cold leg break experiment. The Semiscale Mod-3 system utilized an electrically heated simulated core operating at a power level of 1.94 MW. The initial system pressure and temperature in the upper plenum were 2276 psia and 604°F, respectively.
Simulation of German PKL refill/reflood experiment K9A using RELAP4/MOD7. [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, M.T.; Davis, C.B.; Behling, S.R.
This paper describes a RELAP4/MOD7 simulation of West Germany's Kraftwerk Union (KWU) Primary Coolant Loop (PKL) refill/reflood experiment K9A. RELAP4/MOD7, a best-estimate computer program for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This study was the first major simulation using RELAP4/MOD7 since its release by the Idaho National Engineering Laboratory (INEL). The PKL facility is a reduced scale (1:134) representation of a typical West German four-loop 1300 MW pressurized water reactor (PWR). A prototypical total volume-to-power ratio was maintained. The test facility was designed specifically for an experiment simulating the refill/reflood phase of a Loss-of-Coolant Accident (LOCA).
Methodology, status, and plans for development and assessment of the RELAP5 code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, G.W.; Riemke, R.A.
1997-07-01
RELAP5/MOD3 is a computer code used for the simulation of transients and accidents in light-water nuclear power plants. The objective of the program to develop and maintain RELAP5 was and is to provide the U.S. Nuclear Regulatory Commission with an independent tool for assessing reactor safety. This paper describes code requirements, models, solution scheme, language and structure, user interface, validation, and documentation. The paper also describes the current and near-term development program and provides an assessment of the code's strengths and limitations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Andrs; Ray Berry; Derek Gaston
The document contains the simulation results of a steady-state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework - MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single-phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that the RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (partial differential equations) and ODEs (ordinary differential equations) as well as experiment-based closure models. RELAP-7 will eventually utilize well-posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher-order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written in the object-oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes. Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE (Multiphysics Object-Oriented Simulation Environment) is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open-source software packages, such as PETSc (a nonlinear solver library developed at Argonne National Laboratory) and libMesh (a finite element analysis package developed at the University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE. Therefore RELAP-7 code developers only need to focus on physics and the user experience. By using the MOOSE development environment, the RELAP-7 code is developed following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE-based applications, ranging from 3-D transient neutron transport and detailed 3-D transient fuel performance analysis to long-term material aging. Multi-physics and multi-dimensional analysis capabilities can be obtained by coupling RELAP-7 with other MOOSE-based applications and by leveraging capabilities developed by other DOE programs. This allows the focus of RELAP-7 to be restricted to systems-analysis-type simulations and gives priority to retaining and significantly extending RELAP5's capabilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Berry, R. A.; Martineau, R. C.
The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's and TRACE's capabilities and extends their analysis capabilities for all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed 7-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document provides a summary of the closure correlations that are currently implemented in the RELAP-7 code. The closure correlations include sub-grid models that describe interactions between the fluids and the flow channel, and interactions between the two phases.
RELAP-7 Code Assessment Plan and Requirement Traceability Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.
2016-10-01
RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods, and physical models over the last decades. Recently, INL has also been working to establish a code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (e.g., international and domestic reports, research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate the code development process and its present capabilities.
RELAP-7 Software Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling
This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework – MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability for all reactor system simulation scenarios.
System Simulation of Nuclear Power Plant by Coupling RELAP5 and Matlab/Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng Lin; Dong Hou; Zhihong Xu
2006-07-01
Because the RELAP5 code has general and advanced thermal-hydraulic computation features, it has been widely used in transient and accident safety analysis, experiment planning analysis, and system simulation. We therefore wish to design, analyze, and verify a new instrumentation and control (I&C) system of a nuclear power plant (NPP) based on this best-estimate code, and eventually to develop our own engineering simulator. However, because RELAP5 has only limited capability for simulating control and protection systems, that capability must be extended for efficient, accurate, and flexible design and simulation of I&C systems. Matlab/Simulink, a scientific computing package and a powerful tool for research and simulation of plant process control, can compensate for this limitation. It was selected as the I&C part to be coupled with the RELAP5 code to realize system simulation of NPPs. Two key techniques had to be resolved. The first is dynamic data exchange, by which Matlab/Simulink receives plant parameters and returns control results; a database is used for communication between the two codes. Accordingly, a dynamic link library (DLL) is used to link the database in RELAP5, while a DLL and an S-Function are used in Matlab/Simulink. The second is synchronization between the two codes to ensure consistency in global simulation time. Because Matlab/Simulink always computes faster than RELAP5, the simulation time is sent by RELAP5 and received by Matlab/Simulink, and a time control subroutine added to the Matlab/Simulink simulation procedure governs its advancement. Through these means, Matlab/Simulink is dynamically coupled with RELAP5, so control and protection logic of NPPs can be freely designed in Matlab/Simulink and tested with best-estimate plant model feedback. A test case demonstrates that the results of the coupled calculation are nearly the same as those of a standalone RELAP5 run with built-in control logic. In practice, a real pressurized water reactor (PWR) was modeled with the RELAP5 code, and its main control and protection system was duplicated in Matlab/Simulink. Several steady states and transients were calculated under control of these I&C systems, and the results were compared with plant test curves. The application showed that accurate system simulation of NPPs can be achieved by coupling RELAP5 and Matlab/Simulink. This paper focuses mainly on the coupling method, the plant thermal-hydraulic model, the main control logic, and test and application results. (authors)
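As an aside, the lock-step synchronization idea described above can be sketched in a few lines of Python (illustrative only; the actual coupling used a shared database, DLLs, and an S-Function, and the function names below are hypothetical stand-ins):

def plant_step(t, dt):
    # Stand-in for one RELAP5 advancement: returns the new simulation time and
    # a "measured" plant parameter (a hypothetical coolant temperature in K).
    t_new = t + dt
    return t_new, 300.0 + 5.0 * t_new

def controller_step(measurement, setpoint=310.0, gain=0.1):
    # Stand-in for the Simulink I&C logic: a simple proportional controller.
    return gain * (setpoint - measurement)

t, t_end, dt = 0.0, 5.0, 0.5
while t < t_end:
    # The thermal-hydraulic side publishes its state and current time first ...
    t, meas = plant_step(t, dt)
    # ... and only then may the (faster) controller side advance to that same
    # time, which is what keeps both codes consistent in global simulation time.
    control = controller_step(meas)
    print(f"t = {t:3.1f} s  T = {meas:6.1f} K  control = {control:+.3f}")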
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. George L Mesina
Our ultimate goal is to create and maintain RELAP5-3D as the best software tool available to analyze nuclear power plants. This begins with writing excellent code and requires thorough testing. This document covers development of the RELAP5-3D software, the behavior of the RELAP5-3D program that must be maintained, and code testing. RELAP5-3D must perform in a manner consistent with previous code versions, with backward compatibility for the sake of the users. Thus file operations, code termination, input and output must remain consistent in form and content while appropriate new files, input and output are added as new features are developed. As computer hardware, operating systems, and other software change, RELAP5-3D must adapt and maintain performance. The code must be thoroughly tested to ensure that it continues to perform robustly on the supported platforms. The coding must be written in a consistent manner that makes the program easy to read, to reduce the time and cost of development, maintenance and error resolution. The programming guidelines presented here are intended to institutionalize a consistent way of writing FORTRAN code for the RELAP5-3D computer program that will minimize errors and rework. A common format and organization of program units creates a unifying look and feel to the code. This in turn increases readability and reduces the time required for maintenance, development and debugging. It also aids new programmers in reading and understanding the program. Therefore, when undertaking development of the RELAP5-3D computer program, the programmer must write computer code that follows these guidelines. This set of programming guidelines creates a framework of good programming practices, such as initialization, structured programming, and vector-friendly coding. It sets out formatting rules for lines of code, such as indentation, capitalization, spacing, etc. It creates limits on program units, such as subprograms, functions, and modules. It establishes documentation guidance on internal comments. The guidelines apply to both existing and new subprograms. They are written for both FORTRAN 77 and FORTRAN 95. The guidelines are not so rigorous as to inhibit a programmer's unique style, but do restrict the variations in acceptable coding to create sufficient commonality that new readers will find the coding in each new subroutine familiar. It is recognized that this is a "living" document and must be updated as languages, compilers, and computer hardware and software evolve.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.
2005-09-15
The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
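As general background on the adjoint procedure used above (a standard formulation, not quoted from the paper): for a response R(u, \alpha) of a model constrained by N(u, \alpha) = 0, a single solve of the adjoint equation (\partial N / \partial u)^{T} \lambda = -(\partial R / \partial u)^{T} yields the sensitivities with respect to all parameters \alpha at once, via dR/d\alpha = \partial R / \partial \alpha + \lambda^{T} \, \partial N / \partial \alpha. This is why the adjoint approach is efficient for computing time-dependent sensitivities such as those of the cladding temperature to the initial conditions.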
RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
2015-10-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3.4i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.
Thermal hydraulic-severe accident code interfaces for SCDAP/RELAP5/MOD3.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coryell, E.W.; Siefken, L.J.; Harvego, E.A.
1997-07-01
The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The code is the result of merging the RELAP5, SCDAP, and COUPLE codes. The RELAP5 portion of the code calculates the overall reactor coolant system thermal-hydraulics and associated reactor system responses. The SCDAP portion of the code describes the response of the core and associated vessel structures. The COUPLE portion of the code describes the response of lower plenum structures and debris and the failure of the lower head. The code uses a modular approach with the overall structure, input/output processing, and data structures following the pattern established for RELAP5. The code uses a building block approach to allow the code user to easily represent a wide variety of systems and conditions through a powerful input processor. The user can represent a wide variety of experiments or reactor designs by selecting fuel rods and other assembly structures from a range of representative core component models, and arranging them in a variety of patterns within the thermal-hydraulic network. The COUPLE portion of the code uses two-dimensional representations of the lower plenum structures and debris beds. The flow of information between the different portions of the code occurs at each system-level time step advancement. The RELAP5 portion of the code describes the fluid transport around the system. These fluid conditions are used as thermal and mass transport boundary conditions for the SCDAP and COUPLE structures and debris beds.
RELAP5-3D Developmental Assessment: Comparison of Versions 4.3.4i and 4.2.1i
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
2015-10-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code using versions 4.3.4i and 4.2.1i. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions changed between these two code versions and can be used to identify cases in which the assessment judgment may need to be changed in Volume III of the code manual. Changes to the assessment judgments made after reviewing all of the assessment cases are also provided.
RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul D.
2014-06-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2.1i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.
RELAP5-3D Developmental Assessment: Comparison of Versions 4.2.1i and 4.1.3i
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul D.
2014-06-01
Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code using versions 4.2.1i and 4.1.3i. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions changed between these two code versions and can be used to identify cases in which the assessment judgment may need to be changed in Volume III of the code manual. Changes to the assessment judgments made after reviewing all of the assessment cases are also provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hongbin; Zhao, Haihua; Gleicher, Frederick Nathan
RELAP-7 is a nuclear systems safety analysis code being developed at the Idaho National Laboratory, and is the next generation tool in the RELAP reactor safety/systems analysis application series. RELAP-7 development began in 2011 to support the Risk Informed Safety Margins Characterization (RISMC) Pathway of the Light Water Reactor Sustainability (LWRS) program. The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical methods, and physical models in order to provide capabilities needed for the RISMC methodology and to support nuclear power safety analysis. The code is being developed based on Idaho National Laboratory's modern scientific software development framework – MOOSE (the Multi-Physics Object-Oriented Simulation Environment). The initial development goal of the RELAP-7 approach focused primarily on the development of an implicit algorithm capable of strong (nonlinear) coupling of the dependent hydrodynamic variables contained in the 1-D/2-D flow models with the various 0-D system reactor components that compose various boiling water reactor (BWR) and pressurized water reactor (PWR) nuclear power plants (NPPs). During Fiscal Year (FY) 2015, the RELAP-7 code was further improved with expanded capability to support BWR and PWR NPP analysis. The accumulator model has been developed. The code has also been coupled with other MOOSE-based applications, such as the neutronics code RattleSnake and the fuel performance code BISON, to perform multiphysics analysis. A major design requirement for the implicit algorithm in RELAP-7 is that it be capable of second-order discretization accuracy in both space and time, which eliminates the traditional first-order approximation errors. Second-order temporal accuracy is achieved with a second-order backward temporal difference, and second-order accurate one-dimensional spatial discretization is achieved with a Galerkin approximation using Lagrange finite elements. During FY 2015, numerical verification work confirmed that the RELAP-7 code indeed achieves second-order accuracy in both time and space for single-phase models at the system level.
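For reference, the second-order backward temporal difference mentioned above is the textbook BDF2 formula (general background, not quoted from the report): with time step \Delta t, the solution u is advanced implicitly by (3u^{n+1} - 4u^{n} + u^{n-1}) / (2 \Delta t) = f(u^{n+1}, t^{n+1}), which is second-order accurate in time, in contrast to the first-order backward Euler difference (u^{n+1} - u^{n}) / \Delta t = f(u^{n+1}, t^{n+1}).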
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Ray Alden; Zou, Ling; Zhao, Haihua
This document summarizes the physical models and mathematical formulations used in the RELAP-7 code. In summary, the MOOSE-based RELAP-7 code development is an ongoing effort. The MOOSE framework enables rapid development of the RELAP-7 code. The developmental efforts and results demonstrate that the RELAP-7 project is on a path to success. This theory manual documents the main features implemented in the RELAP-7 code. Because the code is an ongoing development effort, this RELAP-7 Theory Manual will evolve with periodic updates to keep it current with the state of the development, implementation, and model additions/revisions.
Peer review of RELAP5/MOD3 documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craddick, W.G.
1993-12-31
A peer review was performed on a portion of the documentation of the RELAP5/MOD3 computer code. The review was performed in two phases. The first phase was a review of Volume 3, Developmental Assessment Problems, and Volume 4, Models and Correlations. The reviewers for this phase were Dr. Peter Griffith, Dr. Yassin Hassan, Dr. Gerald S. Lellouche, Dr. Marino di Marzo and Mr. Mark Wendel. The reviewers recommended a number of improvements, including using a frozen version of the code for assessment guided by a validation plan, better justification for flow regime maps, and extension of models beyond their data base. The second phase was a review of Volume 6, Quality Assurance of Numerical Techniques in RELAP5/MOD3. The reviewers for the second phase were Mr. Mark Wendel and Dr. Paul T. Williams. Recommendations included correction of numerous grammatical and typographical errors and better justification for the use of Lax's Equivalence Theorem.
RELAP5-3D Resolution of Known Restart/Backup Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesina, George L.; Anderson, Nolan A.
2014-12-01
The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, and thereafter applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
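As an aside, the regression check built around such a verification file can be pictured with a minimal Python sketch (purely illustrative; the actual RELAP5-3D file format and variable names are not reproduced here):

def verification_sums(state):
    # Sum each key solution variable over all control volumes. Storing only
    # these double-precision sums keeps the verification file small while still
    # detecting essentially any change in the computed solution.
    return {name: sum(values) for name, values in state.items()}

def compare(sums_old, sums_new, tol=0.0):
    # Sequential-verification check between consecutive code versions: any
    # difference beyond tol flags an unintended change in the calculations.
    return {k: abs(sums_old[k] - sums_new[k])
            for k in sums_old if abs(sums_old[k] - sums_new[k]) > tol}

# Two hypothetical runs of the same test case with consecutive code versions.
run_a = {"pressure": [15.5e6, 15.4e6, 15.3e6], "void_fraction": [0.0, 0.02, 0.05]}
run_b = {"pressure": [15.5e6, 15.4e6, 15.3e6], "void_fraction": [0.0, 0.02, 0.05]}
print(compare(verification_sums(run_a), verification_sums(run_b)) or "no unintended changes")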
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides tests to ensure that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. Finally, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virtanen, E.; Haapalehto, T.; Kouhia, J.
1995-09-01
Three experiments were conducted to study the behavior of the new horizontal steam generator construction of the PACTEL test facility. In the experiments, the secondary side coolant level was reduced stepwise. The experiments were calculated with two computer codes, RELAP5/MOD3.1 and APROS version 2.11. A similar nodalization scheme was used for both codes so that the results may be compared. Only the steam generator was modelled and the rest of the facility was given as a boundary condition. The results show that both codes calculate the behaviour of the primary side of the steam generator well. On the secondary side, both codes calculate lower steam temperatures in the upper part of the heat exchange tube bundle than were measured in the experiments.
NASA Astrophysics Data System (ADS)
Antariksawan, Anhar R.; Wahyono, Puradwi I.; Taxwim
2018-02-01
Safety is the priority for nuclear installations, including research reactors. At the same time, many studies have been performed to validate the applicability of best-estimate computer codes developed for nuclear power plants to research reactors. This study aims to assess the applicability of the RELAP5/SCDAP code to the Kartini research reactor. Model development and calculations of the steady state and of a LOCA transient were conducted using RELAP5/SCDAP. The calculation results are compared with available measurement data from the Kartini research reactor. The results show that the RELAP5/SCDAP steady-state calculation agrees quite well with the available measurement data. In the case of the LOCA transient simulations, the model reproduced reasonable physical phenomena during the transient, showing the characteristics and performance of the reactor under the LOCA transient. The roles of the siphon breaker hole and of natural circulation in the reactor tank as passive systems were important in keeping the reactor in a safe condition. It is concluded that RELAP5/SCDAP can be used as one of the tools to analyse the thermal-hydraulic safety of the Kartini reactor. However, further assessment to improve the model is still needed.
IJS procedure for RELAP5 to TRACE input model conversion using SNAP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prosek, A.; Berar, O. A.
2012-07-01
The TRAC/RELAP Advanced Computational Engine (TRACE) advanced, best-estimate reactor systems code developed by the U.S. Nuclear Regulatory Commission comes with a graphical user interface called the Symbolic Nuclear Analysis Package (SNAP). Much effort has been devoted in the past to developing RELAP5 input decks. The purpose of this study is to demonstrate the Institut 'Josef Stefan' (IJS) procedure for converting the BETHSY facility input model from RELAP5 to TRACE. The IJS conversion procedure consists of eleven steps and is based on the use of SNAP. For calculations of the selected BETHSY 6.2TC test, RELAP5/MOD3.3 Patch 4 and TRACE V5.0 Patch 1 were used. The selected BETHSY 6.2TC test was a 15.24 cm equivalent diameter horizontal cold-leg break in the reference pressurized water reactor without high-pressure and low-pressure safety injection. The application of the IJS procedure for conversion of the BETHSY input model showed that it is important to perform the steps in the proper sequence. The overall calculated results obtained with TRACE using the converted RELAP5 model were close to experimental data and comparable to the RELAP5/MOD3.3 calculations. Therefore it can be concluded that the proposed IJS conversion procedure was successfully demonstrated on the BETHSY integral test facility input model. (authors)
Posttest RELAP5 simulations of the Semiscale S-UT series experiments. [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leonard, M.T.
The RELAP5/MOD1 computer code was used to perform posttest calculations simulating six experiments run in the Semiscale Mod-2A facility that investigated the effects of upper head injection on small-break transient behavior. The results of these calculations and the corresponding test data are presented in this report. An evaluation is made of the capability of RELAP5 to calculate the thermal-hydraulic response of the Mod-2A system over a spectrum of break sizes, with and without the use of upper head injection.
Break modeling for RELAP5 analyses of ISP-27 Bethsy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petelin, S.; Gortnar, O.; Mavko, B.
This paper presents pre- and posttest analyses of International Standard Problem (ISP) 27 on the Bethsy facility and separate RELAP5 break model tests considering the measured boundary condition at the break inlet. This contribution also demonstrates modifications that significantly improved the model response in the posttest simulations. Calculations were performed using the RELAP5/MOD2/36.05 and RELAP5/MOD3.5M5 codes on MicroVAX, SUN, and CONVEX computers. Bethsy is an integral test facility that simulates a typical 900-MW (electric) Framatome pressurized water reactor. The ISP-27 scenario involves a 2-in. cold-leg break without high-pressure safety injection (HPSI) and with delayed operator procedures for secondary system depressurization.
Posttest RELAP4 analysis of LOFT experiment L1-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grush, W.H.; Holmstrom, H.L.O.
Results of posttest analysis of LOFT loss-of-coolant experiment L1-4 with the RELAP4 code are presented. The results are compared with the pretest prediction and the test data. Differences between the RELAP4 model used for this analysis and that used for the pretest prediction are in the areas of initial conditions, nodalization, emergency core cooling system, broken loop hot leg, and steam generator secondary. In general, these changes made only minor improvement in the comparison of the analytical results to the data. Also presented are the results of a limited study of LOFT downcomer modeling which compared the performance of the conventional single downcomer model with that of the new split downcomer model. A RELAP4 sensitivity calculation with artificially elevated emergency core coolant temperature was performed to highlight the need for an ECC mixing model in RELAP4.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezsoel, G.; Guba, A.; Perneczky, L.
Results of a small-break loss-of-coolant accident experiment, conducted on the PMK-2 integral-type test facility, are presented. The experiment simulated a 1% break in the cold leg of a VVER-440-type reactor. The main phenomena of the experiment are discussed, and in the case of selected events, a more detailed interpretation is given with the help of the void fraction measured by a special measurement device. Two thermohydraulic computer codes, RELAP5 and ATHLET, are used for posttest calculations. The aim of these calculations is to investigate the code capability for modeling natural circulation phenomena in VVER-440-type reactors. Therefore, the results of the experiment and both calculations are compared. Both codes predict most of the transient events well, with the exception that RELAP5 fails to predict the dryout period in the core. In the experiment, the hot- and cold-leg loop-seal clearing is accompanied by natural circulation instabilities, which can be explained by means of the ATHLET calculation.
An assessment of RELAP5-3D using the Edwards-O'Brien Blowdown problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, E.T.; Aumiller, D.L.
1999-07-01
The RELAP5-3D (version bt) computer code was assessed using the United States Nuclear Regulatory Commission's Standard Problem 1 (the Edwards-O'Brien Blowdown Test). The RELAP5-3D standard installation problem based on the Edwards-O'Brien Blowdown Test was modified to model the appropriate initial conditions and to represent the proper location of the instruments present in the experiment. The results obtained using the modified model are significantly different from the original calculation, indicating the need to accurately model the experimental conditions if an accurate assessment of the calculational model is to be obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo; Choi, Yong Joon; Smith, Curtis Lee
2016-09-01
This document addresses two subjects involved with the RELAP-7 Software Verification and Validation Plan (SVVP): (i) the principles and plan to assure the independence of the RELAP-7 assessment through the code development process, and (ii) the work performed to establish the RELAP-7 assessment plan, i.e., the assessment strategy, literature review, and identification of RELAP-7 requirements. Then, the Requirements Traceability Matrices (RTMs) proposed in the previous document (INL-EXT-15-36684) are updated. These RTMs provide an efficient way to evaluate the RELAP-7 development status as well as the maturity of the RELAP-7 assessment through the development process.
Posttest analysis of LOFT LOCE L2-3 using the ESA RELAP4 blowdown model. [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perryman, J.L.; Samuels, T.K.; Cooper, C.H.
A posttest analysis of the blowdown portion of Loss-of-Coolant Experiment (LOCE) L2-3, which was conducted in the Loss-of-Fluid Test (LOFT) facility, was performed using the experiment safety analysis (ESA) RELAP4/MOD5 computer model. Measured experimental parameters were compared with the calculations in order to assess the conservatisms in the ESA RELAP4/MOD5 model.
RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H
2010-06-01
ITER inductive power operation is modeled and simulated using a system-level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are summarized in this report as well. A major feature of ITER is pulsed operation. The plasma does not burn continuously; the power is pulsed, with long periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history for nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of the blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models the FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time-dependent power forcing functions, which are used as input in the RELAP5 calculations.
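As an illustration only (the ramp, burn, and dwell durations below are placeholder values, not ITER design numbers), a time-dependent power forcing function of the kind described above can be tabulated with a few lines of Python:

def pulse_power(t, ramp=50.0, burn=400.0, dwell=1350.0, p_flat=1.0):
    # Fractional power at time t within one pulse period (ramp-flat-ramp-dwell).
    period = 2 * ramp + burn + dwell
    t = t % period
    if t < ramp:                        # ramp-up to flat-top
        return p_flat * t / ramp
    if t < ramp + burn:                 # flat-top burn
        return p_flat
    if t < 2 * ramp + burn:             # ramp-down
        return p_flat * (2 * ramp + burn - t) / ramp
    return 0.0                          # dwell between pulses

# Tabulate a (time, power) forcing function over slightly more than one pulse.
for t in range(0, 2000, 100):
    print(f"{t:5d} s  {pulse_power(t):4.2f}")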
IMPLEMENTATION AND VALIDATION OF A FULLY IMPLICIT ACCUMULATOR MODEL IN RELAP-7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Haihua; Zou, Ling; Zhang, Hongbin
2016-01-01
This paper presents the implementation and validation of an accumulator model in RELAP-7 under the framework of the preconditioned Jacobian-free Newton-Krylov (JFNK) method, based on the similar model used in RELAP5. RELAP-7 is a new nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). RELAP-7 is a fully implicit system code. The JFNK and preconditioning methods used in RELAP-7 are briefly discussed. The slightly modified accumulator model is summarized for completeness. The implemented model was validated with the LOFT L3-1 test and benchmarked against RELAP5 results. RELAP-7 and RELAP5 had almost identical results for the accumulator gas pressure and water level, although there were some minor differences in other parameters such as the accumulator gas temperature and tank wall temperature. One advantage of the JFNK method is the ease of maintaining and modifying models owing to the full separation of numerical methods from physical models. It would be straightforward to extend the current RELAP-7 accumulator model to simulate the advanced accumulator design.
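As an aside, the core of the JFNK idea can be sketched in a few lines of Python (illustrative only, not RELAP-7 code): the Krylov solver needs only Jacobian-vector products, which are approximated by finite-differencing the residual, J(u) v ≈ (F(u + eps v) - F(u)) / eps, so the Jacobian matrix is never formed and the physics models remain cleanly separated from the solver.

import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    # A small stand-in nonlinear system F(u) = 0 (not a reactor model).
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

# newton_krylov only ever evaluates residual(u); the Jacobian-vector products
# it needs internally are formed by finite differences of that residual.
solution = newton_krylov(residual, np.array([1.0, 1.0]))
print(solution, residual(solution))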
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo; Choi, Yong Joon
The RELAP-7 code verification and validation activities are ongoing under the code assessment plan proposed in the previous document (INL-EXT-16-40015). Among the list of V&V test problems in the ‘RELAP-7 code V&V RTM (Requirements Traceability Matrix)’, the RELAP-7 7-equation model has been tested with additional demonstration problems and the results of these tests are reported in this document. In this report, we describe the testing process, the test cases that were conducted, and the results of the evaluation.
Metal-water reaction and cladding deformation models for RELAP5/MOD3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caraher, D.L.; Shumway, R.W.
1989-06-01
A model for calculating the reaction of zirconium with steam according to the Cathcart-Pawel correlation has been incorporated into RELAP5/MOD3. A cladding deformation model, which computes swelling and rupture of the cladding according to the empirical correlations of Powers and Meyer, has also been incorporated into RELAP5/MOD3. This report gives the background of the models, documents their implementation in the RELAP5 subroutines, and reports the developmental assessment done on the models. 4 refs., 9 figs., 9 tabs.
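For orientation (the standard parabolic-kinetics form; the fitted constants are not reproduced here): the Cathcart-Pawel correlation expresses the zirconium-steam reaction as parabolic, Arrhenius-type kinetics, \delta \, d\delta/dt = A \exp(-Q/(RT)), so the oxidation rate rises steeply with cladding temperature T but slows as the oxide layer thickness \delta grows.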
Test prediction for the German PKL Test K5A using RELAP4/MOD6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.S.; Haigh, W.S.; Sullivan, L.H.
RELAP4/MOD6 is the most recent modification in the series of RELAP4 computer programs developed to describe the thermal-hydraulic conditions attendant to postulated transients in light water reactor systems. The major new features in RELAP4/MOD6 include best-estimate pressurized water reactor (PWR) reflood transient analytical models for core heat transfer, local entrainment, and core vapor superheat, and a new set of heat transfer correlations for PWR blowdown and reflood. These new features were used for a test prediction of the Kraftwerk Union three-loop Primärkreislauf (PKL) Reflood Test K5A. The results of the prediction were in good agreement with the experimental thermal and hydraulic system data. Comparisons include heater rod surface temperature, system pressure, mass flow rates, and core mixture level. It is concluded that RELAP4/MOD6 is capable of accurately predicting transient reflood phenomena in the 200% cold-leg break test configuration of the PKL reflood facility.
Development of Fuel Shuffling Module for PHISICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allan Mabe; Andrea Alfonsi; Cristian Rabiti
2013-06-01
The PHISICS (Parallel and Highly Innovative Simulation for the INL Code System) [4] code toolkit has been under development at the Idaho National Laboratory. This package is intended to provide a modern analysis tool for reactor physics investigation. It is designed to maximize accuracy for a given availability of computational resources and to give state-of-the-art tools to the modern nuclear engineer. This is obtained by implementing several different algorithms and meshing approaches among which the user is able to choose, in order to balance computational resources against accuracy needs. The software is completely modular in order to simplify the independent development of modules by different teams and future maintenance. The package is coupled with the thermal-hydraulic code RELAP5-3D [3]. In the following, the structure of the different PHISICS modules is briefly recalled, with focus on the new shuffling module (SHUFFLE), which is the subject of this paper.
Pump-stopping water hammer simulation based on RELAP5
NASA Astrophysics Data System (ADS)
Yi, W. S.; Jiang, J.; Li, D. D.; Lan, G.; Zhao, Z.
2013-12-01
RELAP5 was originally designed to analyze the complex thermal-hydraulic interactions that occur during postulated large- or small-break loss-of-coolant accidents in PWRs. However, as development continued, the code was expanded to cover many of the transient scenarios that can occur in thermal-hydraulic systems. When flowing liquid is decelerated rapidly, its kinetic energy is converted into pressure, producing a temporary pressure surge; this phenomenon is called water hammer. Water hammer can occur in any thermal-hydraulic system and becomes dangerous when the surge pressure is high: if the pressure exceeds what the pipe or the fittings along the pipeline can withstand, the integrity of the pipeline fails. The purpose of this article is to apply RELAP5 to the simulation and analysis of water hammer. Based on the RELAP5 code manuals and related documents, the authors use RELAP5 to set up an example of a water-supply system fed by an impeller pump and simulate the pump-stopping water hammer. Through the simulation of this sample case and the subsequent analysis of the results, a better understanding is gained both of water hammer and of the suitability of the RELAP5 code for water-hammer problems. The authors also compare the results of the RELAP5-based model with those of other fluid-transient analysis software, namely PIPENET, draw conclusions about the peculiarities of RELAP5 when applied to water-hammer research, and offer several modelling tips for using the code to simulate a water-hammer case.
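As general background (standard water-hammer theory, not a result quoted from this paper), the magnitude of such a surge is estimated by the Joukowsky relation \Delta p = \rho \, a \, \Delta v, where \rho is the liquid density, a the pressure-wave speed in the pipe, and \Delta v the change in flow velocity. For example, suddenly stopping water (\rho \approx 1000 kg/m^3) flowing at 2 m/s in a pipe with a \approx 1000 m/s gives \Delta p \approx 2 MPa, about 20 bar, which illustrates why pump-stop transients can threaten pipeline integrity.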
Code Development in Coupled PARCS/RELAP5 for Supercritical Water Reactor
Hu, Po; Wilson, Paul
2014-01-01
A new capability is added to the existing coupled code package PARCS/RELAP5 in order to analyze SCWR designs under supercritical pressure with separated water coolant and moderator channels. This expansion is carried out in both codes. In PARCS, modification is focused on extending the water property tables to supercritical pressure, modifying the variable mapping input file and the related code module for processing thermal-hydraulic information from the separated coolant/moderator channels, and modifying the neutronics feedback module to handle the separated coolant/moderator channels. In RELAP5, modification is focused on incorporating into the code more accurate water properties near the SCWR operating and transient pressures and temperatures. Confirming tests of the modifications are presented, and the major analysis results from the extended code package are summarized.
Verification and Validation Strategy for LWRS Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carl M. Stoots; Richard R. Schultz; Hans D. Gougar
2012-09-01
One intention of the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program is to create advanced computational tools for safety assessment that enable more accurate representation of a nuclear power plant safety margin. These tools are to be used to study the unique issues posed by lifetime extension and relicensing of the existing operating fleet of nuclear power plants well beyond their first license extension period. The extent to which new computational models and codes such as RELAP-7 can be used for reactor licensing and relicensing activities depends mainly upon the thoroughness with which they have been verified and validated (V&V). This document outlines the LWRS program strategy by which RELAP-7 code V&V planning is to be accomplished. From the perspective of developing and applying thermal-hydraulic and reactivity-specific models to reactor systems, the US Nuclear Regulatory Commission (NRC) Regulatory Guide 1.203 gives key guidance to numeric model developers and those tasked with the validation of numeric models. By creating Regulatory Guide 1.203, the NRC defined a framework for development, assessment, and approval of transient and accident analysis methods. As a result, this methodology is very relevant and is recommended as the path forward for RELAP-7 V&V. However, the unique issues posed by lifetime extension will require considerations in addition to those addressed in Regulatory Guide 1.203. Some of these include prioritization of which plants and designs should be studied first, coupling modern supporting experiments to the stringent needs of new high-fidelity models and codes, and scaling of aging effects.
Initial Coupling of the RELAP-7 and PRONGHORN Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Ortensi; D. Andrs; A.A. Bingham
2012-10-01
Modern nuclear reactor safety codes require the ability to solve detailed coupled neutronics-thermal fluids problems. For larger cores, this implies fully coupled, higher-dimensionality spatial dynamics with appropriate feedback models that can provide enough resolution to accurately compute core heat generation and removal during steady and unsteady conditions. The reactor analysis code PRONGHORN is being coupled to RELAP-7 as a first step to extend RELAP's current capabilities. This report details the mathematical models, the type of coupling, and the testing results from the integrated system. RELAP-7 is a MOOSE-based application that solves the continuity, momentum, and energy equations in 1-D for a compressible fluid. The pipe and joint capabilities enable it to model parts of the power conversion unit. The PRONGHORN application, also developed on the MOOSE infrastructure, solves the coupled equations that define the neutron diffusion, fluid flow, and heat transfer in a full core model. The two systems are loosely coupled to simplify the transition towards a more complex infrastructure. The integration is tested on a simplified version of the OECD/NEA MHTGR-350 Coupled Neutronics-Thermal Fluids benchmark model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, J.J.
1995-12-31
This study compares results obtained with two U.S. Nuclear Regulatory Commission (NRC)-sponsored codes, MELCOR version 1.8.3 (1.8PQ) and SCDAP/RELAP5 Mod3.1 release C, for the same transient - a low-pressure, short-term station blackout accident at the Browns Ferry nuclear plant. This work is part of MELCOR assessment activities to compare core damage progression calculations of MELCOR against SCDAP/RELAP5 since the two codes model core damage progression very differently.
Current and anticipated uses of thermal-hydraulic codes in Germany
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teschendorff, V.; Sommer, F.; Depisch, F.
1997-07-01
In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.
Assessment of PWR Steam Generator modelling in RELAP5/MOD2. International Agreement Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putney, J.M.; Preece, R.J.
1993-06-01
An assessment of Steam Generator (SG) modelling in the PWR thermal-hydraulic code RELAP5/MOD2 is presented. The assessment is based on a review of code assessment calculations performed in the UK and elsewhere, detailed calculations against a series of commissioning tests carried out on the Wolf Creek PWR, and analytical investigations of the phenomena involved in normal and abnormal SG operation. A number of modelling deficiencies are identified and their implications for PWR safety analysis are discussed -- including methods for compensating for the deficiencies through changes to the input deck. Consideration is also given as to whether the deficiencies will still be present in the successor code RELAP5/MOD3.
Modeling of the Edwards pipe experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiselj, I.; Petelin, S.
1995-12-31
The Edwards pipe experiment is used as one of the basic benchmarks for two-phase flow codes due to its simple geometry and the wide range of phenomena that it covers. Edwards and O'Brien filled a 4-m-long pipe with liquid water at 7 MPa and 502 K and ruptured one end of the tube. They measured pressure and void fraction during the blowdown. Important phenomena observed were the pressure rarefaction wave, flashing onset, critical two-phase flow, and a void fraction wave. The experimental data were used to analyze the capabilities of the RELAP5/MOD3.1 six-equation two-phase flow model and to examine two different numerical schemes: one from the RELAP5/MOD3.1 code and one from our own code, which was based on a characteristic upwind discretization.
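For orientation (textbook form, not taken from the paper): for a scalar model equation \partial_t u + a \, \partial_x u = 0 with a > 0, a first-order characteristic upwind scheme advances the solution as u_i^{n+1} = u_i^n - a (\Delta t / \Delta x)(u_i^n - u_{i-1}^n), taking the spatial difference from the upstream side of each cell; the scheme referred to above applies the same idea characteristic by characteristic to the six-equation two-phase flow system.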
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi
2012-10-01
PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The PHISICS modules currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU) and a cross section interpolation (MIXER) module. The INSTANT module is the most developed of those mentioned above. Basic functionalities are ready to use, but the code is still in continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal hydraulics system code RELAP5-3D, to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE) offers. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics thermal fluids exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.
Post-test analysis of PIPER-ONE PO-IC-2 experiment by RELAP5/MOD3 codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovalini, R.; D'Auria, F.; Galassi, G.M.
1996-11-01
RELAP5/MOD3.1 was applied to the PO-IC-2 experiment performed in the PIPER-ONE facility, which has been modified to reproduce typical isolation condenser thermal-hydraulic conditions. RELAP5 is a well known code widely used at the University of Pisa during the past seven years. RELAP5/MOD3.1 was the latest version of the code made available by the Idaho National Engineering Laboratory at the time of the reported study. PIPER-ONE is an experimental facility simulating a General Electric BWR-6 with volume and height scaling ratios of 1/2,200 and 1/1, respectively. In the frame of the present activity a once-through heat exchanger immersed in a pool of ambient temperature water, installed approximately 10 m above the core, was utilized to reproduce qualitatively the phenomenologies expected for the Isolation Condenser in the simplified BWR (SBWR). The PO-IC-2 experiment is the flood up of the PO-SD-8 and has been designed to solve some of the problems encountered in the analysis of the PO-SD-8 experiment. A very wide analysis is presented hereafter, including the use of different code versions.
NASA Astrophysics Data System (ADS)
Class, G.; Meyder, R.; Stratmanns, E.
1985-12-01
The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture, extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This agreement can be improved by updating the phase separation models in the codes.
PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Robin Ivey; Balestra, Paolo; Strydom, Gerhard
A collaborative effort between the Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, was used to model the transient. The focus of this report is to summarize the changes made to the PHISICS/RELAP5-3D code to implement an adaptive time step methodology for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results, as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required, without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did, however, result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings are very model-specific and cannot be generalized to other PHISICS/RELAP5-3D models.
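The abstract above describes adaptive time-step selection driven by flux or power convergence criteria. The controller below is a minimal sketch of that general idea, not the PHISICS/RELAP5-3D implementation; the tolerance, growth/shrink factors, and step bounds are hypothetical.

```python
def next_time_step(dt, rel_change, tol=1e-4,
                   grow=1.5, shrink=0.5, dt_min=1e-3, dt_max=60.0):
    """Grow the step when the monitored quantity changes slowly, shrink it otherwise.

    rel_change : relative change of the monitored quantity (e.g., fission power)
                 over the last accepted step.
    """
    if rel_change < tol:
        dt = min(dt * grow, dt_max)
    elif rel_change > 10.0 * tol:
        dt = max(dt * shrink, dt_min)
    return dt

# toy usage: power decays slowly, so the step keeps growing toward dt_max
dt, power = 1.0, 100.0
for _ in range(20):
    new_power = power * 0.9999
    dt = next_time_step(dt, abs(new_power - power) / power)
    power = new_power
print(dt)
```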
A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit
Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...
2015-05-17
In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smooth particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of timing and sequencing of events on system safety. The impact of power uprate is determined in terms of both core damage probability and safety margins.
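A minimal sketch of the statistical side of such an analysis is shown below: event timings are sampled, a surrogate stands in for the system-code run, and the fraction of samples exceeding a damage limit estimates the core damage probability. The surrogate temperature model, the sampling distribution, and the limit value are invented for illustration and are not the RELAP-7/RAVEN/NEUTRINO models.

```python
import random

def peak_clad_temp(recovery_time_s, uprate_factor):
    """Hypothetical surrogate: peak clad temperature grows with time to recovery and power level."""
    return 600.0 + 0.45 * uprate_factor * recovery_time_s  # K

def core_damage_probability(n_samples=100_000, uprate_factor=1.2, limit_k=1477.0):
    failures = 0
    for _ in range(n_samples):
        # sample the timing of AC power recovery (log-normal, hypothetical parameters)
        recovery_time = random.lognormvariate(mu=7.0, sigma=0.5)  # seconds
        if peak_clad_temp(recovery_time, uprate_factor) > limit_k:
            failures += 1
    return failures / n_samples

# comparing the estimate at nominal power vs. uprated power mimics the margin question
print(core_damage_probability(uprate_factor=1.0), core_damage_probability(uprate_factor=1.2))
```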
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Haihua; Zhang, Hongbin; Zou, Ling
2014-10-01
The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The RELAP-7 code development effort started in October of 2011 and, by the end of the second development year, a number of physical components with simplified two phase flow capability have been developed to support the simplified boiling water reactor (BWR) extended station blackout (SBO) analyses. The demonstration case includes the major components for the primary system of a BWR, as well as the safety system components for the safety relief valve (SRV), the reactor core isolation cooling (RCIC) system, and the wet well. Three scenarios for the SBO simulations have been considered. Since RELAP-7 is not a severe accident analysis code, the simulation stops when fuel clad temperature reaches the damage point. Scenario I represents an extreme station blackout accident without any external cooling and cooling water injection. The system pressure is controlled by automatically releasing steam through SRVs. Scenario II includes the RCIC system but without SRV. The RCIC system is fully coupled with the reactor primary system and all the major components are dynamically simulated. The third scenario includes both the RCIC system and the SRV to provide a more realistic simulation. This paper will describe the major models and discuss the results for the three scenarios. The RELAP-7 simulations for the three simplified SBO scenarios show the importance of dynamically simulating the SRVs, the RCIC system, and the wet well system to the reactor safety during extended SBO accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback on the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, R.E.
Results are presented of studies conducted by Aerojet Nuclear Company (ANC) in FY 1975 to support the Nuclear Regulatory Commission (NRC) on the boiling water reactor blowdown heat transfer (BWR-BDHT) program. The support provided by ANC is that of an independent assessor of the program to ensure that the data obtained are adequate for verification of analytical models used for predicting reactor response to a postulated loss-of-coolant accident. The support included reviews of program plans, objectives, measurements, and actual data. Additional activity included analysis of experimental system performance and evaluation of the RELAP4 computer code as applied to the experiments.
Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...
2015-12-02
The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and modules performing criticality searches, fuel shuffling and generalized perturbation. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal fluids solution at End of Equilibrium Cycle for the Modular High Temperature Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D “ring” model approach against a much more detailed model that includes kinetics feedback on the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity that can be obtained by this “block” model is illustrated with comparison results on the temperature, power density and flux distributions. Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua
2014-11-01
Passive systems, structures and components (SSCs) will degrade over their operating life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] does consider physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as to RELAP5 [4]. The overall methodology aims to: (1) address multiple aging mechanisms involving a large number of components in a computationally feasible manner, where the sequencing of events is conditioned on the physical conditions predicted in a simulation environment such as RELAP-7; (2) identify the risk-significant passive components, their failure modes and anticipated rates of degradation; (3) incorporate surveillance and maintenance activities and their effects into the plant state and into component aging progress; and (4) assess aging effects in a dynamic simulation environment. References: 1. C. L. SMITH, V. N. SHAH, T. KAO, G. APOSTOLAKIS, “Incorporating Ageing Effects into Probabilistic Risk Assessment - A Feasibility Study Utilizing Reliability Physics Models,” NUREG/CR-5632, USNRC, (2001). 2. T. ALDEMIR, “A Survey of Dynamic Methodologies for Probabilistic Safety Assessment of Nuclear Power Plants,” Annals of Nuclear Energy, 52, 113-124, (2013). 3. C. RABITI, A. ALFONSI, J. COGLIATI, D. MANDELLI and R. KINOSHITA, “Reactor Analysis and Virtual Control Environment (RAVEN) FY12 Report,” INL/EXT-12-27351, (2012). 4. D. ANDERS et al., "RELAP-7 Level 2 Milestone Report: Demonstration of a Steady State Single Phase PWR Simulation with RELAP-7," INL/EXT-12-25924, (2012).
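A minimal sketch of the kind of aging-with-inspection treatment described above: a passive component's lifetime is drawn from an aging (Weibull) model and periodic inspections may renew the component before it fails, so the estimated mission survival probability reflects the maintenance policy. The distribution parameters, inspection interval, and detection probability are hypothetical and do not represent the report's methodology.

```python
import random

def sample_failure_time(shape=3.0, scale_yr=40.0):
    """Weibull aging model: hazard rate increases with age when shape > 1."""
    return random.weibullvariate(scale_yr, shape)

def survives_mission(mission_yr=60.0, inspection_interval_yr=10.0, p_detect=0.8):
    """One simulated history: failure unless an inspection renews the component first."""
    age = 0.0          # current age of the installed component
    t = 0.0            # plant time
    t_fail = sample_failure_time()
    while t < mission_yr:
        t_next = t + inspection_interval_yr
        # does the component fail before the next inspection or the end of the mission?
        if t + (t_fail - age) <= min(t_next, mission_yr):
            return False
        age += t_next - t
        t = t_next
        if random.random() < p_detect:     # degradation found: renew the component
            age = 0.0
            t_fail = sample_failure_time()
    return True

n = 50_000
print(sum(survives_mission() for _ in range(n)) / n)
```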
RELAP-7 Progress Report. FY-2015 Optimization Activities Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Ray Alden; Zou, Ling; Andrs, David
2015-09-01
This report summarizes the optimization activities on RELAP-7 for FY-2015. It includes the migration from the analytical stiffened gas equation of state for both the vapor and liquid phases to accurate and efficient property evaluations for both equilibrium and metastable (nonequilibrium) states using the Spline-Based Table Look-up (SBTL) method with the IAPWS-95 properties for steam and water. It also includes the initiation of realistic closure models based, where appropriate, on the U.S. Nuclear Regulatory Commission’s TRACE code, and describes an improved entropy viscosity numerical stabilization method for the nonequilibrium two-phase flow model of RELAP-7. For ease of presentation to the reader, the nonequilibrium two-phase flow model used in RELAP-7 is briefly presented; for a detailed explanation the reader is referred to the RELAP-7 Theory Manual [R.A. Berry, J.W. Peterson, H. Zhang, R.C. Martineau, H. Zhao, L. Zou, D. Andrs, “RELAP-7 Theory Manual,” Idaho National Laboratory INL/EXT-14-31366 (rev. 1), February 2014].
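The table look-up idea mentioned above, replacing repeated calls to an expensive equation of state with interpolation of a pre-built table, can be illustrated in miniature as follows. The expensive_property function is only a hypothetical stand-in for an IAPWS-95 evaluation, and the grid size and temperature range are arbitrary; the actual SBTL method uses specialized spline constructions and transformed variables.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def expensive_property(T):
    """Hypothetical stand-in for a full equation-of-state call (e.g., IAPWS-95)."""
    return 1.0e3 * np.exp(-3000.0 / T) + 4.18 * T

# Build the table once (offline cost), then evaluate via the spline (cheap and smooth).
T_grid = np.linspace(300.0, 600.0, 64)
table = expensive_property(T_grid)
spline = CubicSpline(T_grid, table)

T_query = np.linspace(310.0, 590.0, 1000)
err = np.max(np.abs(spline(T_query) - expensive_property(T_query)))
print("max interpolation error:", err)
```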
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchibori, Akihiro; Kurihara, Akikazu; Ohshima, Hiroyuki
A multiphysics analysis system for sodium-water reaction phenomena in a steam generator of sodium-cooled fast reactors was newly developed. The analysis system consists of the mechanistic numerical analysis codes SERAPHIM, TACT, and RELAP5. The SERAPHIM code calculates the multicomponent multiphase flow and sodium-water chemical reaction caused by discharging of pressurized water vapor. Applicability of the SERAPHIM code was confirmed through the analyses of the experiment on water vapor discharging in liquid sodium. The TACT code was developed to calculate heat transfer from the reacting jet to the adjacent tube and to predict the tube failure occurrence. The numerical models integrated into the TACT code were verified through some related experiments. The RELAP5 code evaluates thermal hydraulic behavior of water inside the tube. The original heat transfer correlations were corrected for the tube rapidly heated by the reacting jet. The developed system enables evaluation of the wastage environment and the possibility of the failure propagation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arroyo, R.; Rebollo, L.
1993-06-01
This document presents the comparison between the simulation results and the plant measurements of a real event that took place at the JOSE CABRERA nuclear power plant on August 30, 1984. The event was originated by the total, continuous and inadvertent opening of the pressurizer spray valve PCV-400A. JOSE CABRERA is a single loop Westinghouse PWR belonging to UNION ELECTRICA FENOSA, S.A. (UNION FENOSA), a Spanish utility which participates in the International Code Assessment and Applications Program (ICAP) as a member of UNIDAD ELECTRICA, S.A. (UNESA). This is the second of its two contributions to the Program: the first one was an application case and this is an assessment one. The simulation has been performed using the RELAP5/MOD2 cycle 36.04 code, running on a CDC CYBER 180/830 computer under the NOS 2.5 operating system. The main phenomena have been calculated correctly, and some conclusions have been drawn about the 3D characteristics of the condensation due to the spray and its simulation with a 1D tool.
RELAP5 posttest calculation of IAEA-SPE-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petelin, S.; Mavko, B.; Parzer, I.
The International Atomic Energy Agency's Fourth Standard Problem Exercise (IAEA-SPE-4) was performed at the PMK-2 facility. The PMK-2 facility is designed to study processes following small- and medium-size breaks in the primary system and natural circulation in VVER-440 plants. The IAEA-SPE-4 experiment represents a cold-leg side small break, similar to the IAEA-SPE-2, with the exception of the high-pressure safety injection being unavailable, and the secondary side bleed and feed initiation. The break valve was located at the dead end of a vertical downcomer, which in fact simulates a break in the reactor vessel itself, and should be unlikely to happen in a real nuclear power plant (NPP). Three different RELAP5 code versions were used for the transient simulation in order to assess the calculations with test results.
Analyses of 1/15 scale Creare bypass transient experiments. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kmetyk, L.N.; Buxton, L.D.; Cole, R.K. Jr.
1982-09-01
RELAP4 analyses of several 1/15 scale Creare H-series bypass transient experiments have been done to investigate the effect of using different downcomer nodalizations, physical scales, slip models, and vapor fraction donoring methods. Most of the analyses were thermal equilibrium calculations performed with RELAP4/MOD5, but a few such calculations were done with RELAP4/MOD6 and RELAP4/MOD7, which contain improved slip models. In order to estimate the importance of nonequilibrium effects, additional analyses were performed with TRAC-PD2, RELAP5 and the nonequilibrium option of RELAP4/MOD7. The purpose of these studies was to determine whether results from Westinghouse's calculation of the Creare experiments, which were done with a UHI-modified version of SATAN, were sufficient to guarantee SATAN would be conservative with respect to ECC bypass in full-scale plant analyses.
Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stosic, Z.; Preusche, G.
1996-08-01
When the scope of standard thermal-hydraulic code applications is extended beyond their stand-alone capabilities, i.e. by coupling with a one- and/or three-dimensional core kinetics model, the void fraction transferred from the thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs to be coupled are compatible enough to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to give the basis for concluding whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. To this end, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety licensing code FRANCESCA and by the best-estimate model for pre- and post-dryout calculation in a BWR heated channel, HECHAN. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
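A typical constitutive void-quality relation of the kind being compared above is the Zuber-Findlay drift-flux form, sketched below with approximate saturation densities near 7 MPa and hypothetical drift parameters; the actual correlations in RELAP5, FRANCESCA and HECHAN differ.

```python
def drift_flux_void_fraction(x, G, rho_l, rho_g, C0=1.13, Vgj=0.2):
    """Zuber-Findlay drift-flux relation: alpha = j_g / (C0 * j + Vgj).

    x     : flow quality (-)
    G     : mass flux (kg/m^2/s)
    rho_l : liquid density (kg/m^3)
    rho_g : vapor density (kg/m^3)
    C0    : distribution parameter (-), Vgj : drift velocity (m/s) -- assumed values
    """
    j_g = G * x / rho_g                       # vapor superficial velocity
    j = G * (x / rho_g + (1.0 - x) / rho_l)   # total volumetric flux
    return j_g / (C0 * j + Vgj)

# BWR-like operating point: approximate saturation densities at ~7 MPa
for x in (0.02, 0.05, 0.10, 0.20):
    print(x, round(drift_flux_void_fraction(x, G=1500.0, rho_l=740.0, rho_g=36.5), 3))
```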
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph
2015-10-01
RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs). These APIs allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible by input files or via python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, Grid, or Latin Hypercube sampling schemes, but its strength is focused toward system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just the identification of the frequency of an event potentially leading to a system failure, but the closeness (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN has been focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and, likely, the future replacement of the RELAP5-3D code. Most of the capabilities that have been implemented with RELAP-7 as the principal focus are easily deployable for other system codes. For this reason, several side activities are currently ongoing for coupling RAVEN with software such as RELAP5-3D. The aim of this document is the explanation of the input requirements, focusing on the input structure.
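The limit-surface idea described above can be illustrated with a small stand-alone sketch: sample the input space, label each sample as failure or success using a surrogate in place of the system code, and train a classifier whose decision boundary approximates the limit surface. The surrogate failure criterion, the sampling ranges, and the classifier choice below are assumptions and are far simpler than RAVEN's adaptive algorithms.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def system_fails(recovery_time, decay_power):
    """Hypothetical surrogate for a system-code run: defines the failure region."""
    return 0.002 * recovery_time * decay_power > 1.0

# Monte Carlo sampling of the input space (recovery time [s], decay power [relative units])
X = np.column_stack([rng.uniform(0.0, 1000.0, 2000),
                     rng.uniform(0.5, 2.0, 2000)])
y = np.array([system_fails(t, p) for t, p in X])

# The classifier's decision boundary approximates the limit surface between
# the success and failure regions of the input space.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

# Probe the trained surface along a line of constant decay power
probe = np.column_stack([np.linspace(0.0, 1000.0, 11), np.full(11, 1.0)])
print(clf.predict(probe))
```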
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph
2016-02-01
RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs). These APIs allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible by input files or via python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, Grid, or Latin Hypercube sampling schemes, but its strength is focused toward system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just the identification of the frequency of an event potentially leading to a system failure, but the closeness (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN has been focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and, likely, the future replacement of the RELAP5-3D code. Most of the capabilities that have been implemented with RELAP-7 as the principal focus are easily deployable for other system codes. For this reason, several side activities are currently ongoing for coupling RAVEN with software such as RELAP5-3D. The aim of this document is the explanation of the input requirements, focusing on the input structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua Joseph
2017-03-01
RAVEN is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at the Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating agnostically with any system code. This agnosticism includes providing Application Programming Interfaces (APIs). These APIs allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible by input files or via python interfaces. RAVEN is capable of investigating the system response and the input space using Monte Carlo, Grid, or Latin Hypercube sampling schemes, but its strength is focused toward system feature discovery, such as limit surfaces separating regions of the input space leading to system failure, using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework became stronger. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just the identification of the frequency of an event potentially leading to a system failure, but the closeness (or not) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. for an important process such as peak pressure in a pipe) is exceeded under certain conditions. The initial development of RAVEN has been focused on providing dynamic risk assessment capability to RELAP-7, currently under development at the INL and, likely, the future replacement of the RELAP5-3D code. Most of the capabilities that have been implemented with RELAP-7 as the principal focus are easily deployable for other system codes. For this reason, several side activities are currently ongoing for coupling RAVEN with software such as RELAP5-3D. The aim of this document is the explanation of the input requirements, focusing on the input structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Prescott, Steven R; Smith, Curtis L
2011-07-01
In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how might we increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling both probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where sequencing/timing of events have been changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how power uprate affects the system recovery measures needed to avoid core damage after the PWR lost all available AC power by a tsunami induced flooding. The simulation of the actual flooding is performed by using a smooth particle hydrodynamics code: NEUTRINO.
Molecular Tagging Velocimetry Development for In-situ Measurement in High-Temperature Test Facility
NASA Technical Reports Server (NTRS)
Andre, Matthieu A.; Bardet, Philippe M.; Burns, Ross A.; Danehy, Paul M.
2015-01-01
The High Temperature Test Facility, HTTF, at Oregon State University (OSU) is an integral-effect test facility designed to model the behavior of a Very High Temperature Gas Reactor (VHTR) during a Depressurized Conduction Cooldown (DCC) event. It also has the ability to conduct limited investigations into the progression of a Pressurized Conduction Cooldown (PCC) event, in addition to phenomena occurring during normal operations. Both of these phenomena will be studied with in-situ velocity field measurements. Experimental measurements of velocity are critical to provide proper boundary conditions to validate CFD codes, as well as to develop correlations for system-level codes such as RELAP5 (http://www4vip.inl.gov/relap5/). Such data will be the first acquired in the HTTF and will introduce a diagnostic with numerous other applications to the field of nuclear thermal hydraulics. A laser-based optical diagnostic under development at The George Washington University (GWU) is presented; the technique is demonstrated with velocity data obtained in ambient temperature air, and adaptation to high-pressure, high-temperature flow is discussed.
Analysis of flow reversal test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, L.Y.; Tichler, P.R.
A series of tests has been conducted to measure the dryout power associated with a flow transient whereby the coolant in a heated channel undergoes a change in flow direction. An analysis of the test was made with the aid of a system code, RELAP5. A dryout criterion was developed in terms of a time-averaged void fraction calculated by RELAP5 for the heated channel. The dryout criterion was also compared with several CHF correlations developed for the channel geometry.
RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2012-06-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.
RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, G.; Epiney, A. S.
2012-07-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2. (authors)
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted with a model of the thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, in which initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of the selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparing with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.
A Comprehensive Validation Approach Using The RAVEN Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J
2015-06-01
The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. The state-of-the-art validation approaches use neither exploration of the input space through sampling strategies, nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code makes it possible to address both of these shortcomings. In the following sections, the employed methodology and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
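One of the simpler metrics such a validation framework might include is a normalized root-mean-square deviation between a code-calculated trace and the measured one, sketched below; the two short data arrays are invented placeholders, not data from the EG&G Idaho experiments.

```python
import numpy as np

def normalized_rmse(code_values, measured_values):
    """Normalized RMS deviation between a code prediction and experimental data."""
    code = np.asarray(code_values, dtype=float)
    meas = np.asarray(measured_values, dtype=float)
    rmse = np.sqrt(np.mean((code - meas) ** 2))
    return rmse / (meas.max() - meas.min())

# placeholder traces: e.g., normalized natural-circulation mass flow, predicted vs. measured
measured = np.array([0.00, 0.45, 0.80, 1.00, 0.95, 0.90])
predicted = np.array([0.00, 0.50, 0.85, 0.98, 0.92, 0.88])
print(normalized_rmse(predicted, measured))
```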
DOE Office of Scientific and Technical Information (OSTI.GOV)
Page, R.; Jones, J.R.
1997-07-01
Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and the SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell 'B' loss of offsite power fault transient.
I-NERI Quarterly Technical Report (April 1 to June 30, 2005)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Oh; Prof. Hee Cheon NO; Prof. John Lee
2005-06-01
The objective of this Korean/United States laboratory/university collaboration is to develop new advanced computational methods for safety analysis codes for very-high-temperature gas-cooled reactors (VHTGRs) and to perform numerical and experimental validation of these computer codes. This study consists of five tasks for FY-03: (1) development of computational methods for the VHTGR, (2) theoretical modification of the aforementioned computer codes for molecular diffusion (RELAP5/ATHENA) and modeling of CO and CO2 equilibrium (MELCOR), (3) development of a state-of-the-art methodology for VHTGR neutronic analysis and calculation of accurate power distributions and decay heat deposition rates, (4) a reactor cavity cooling system experiment, and (5) a graphite oxidation experiment. Second quarter of Year 3: (A) Prof. NO and Kim continued Task 1. As a further plant application of the GAMMA code, we conducted two analyses: the IAEA GT-MHR benchmark calculation for LPCC and an air ingress analysis for the PMR 600MWt. The GAMMA code shows a comparable peak fuel temperature trend to those of other country codes. The analysis results for air ingress show a much different trend from that of the previous PBR analysis: later onset of natural circulation and a less significant rise in graphite temperature. (B) Prof. Park continued Task 2. We have designed a new separate effect test device having the same heat transfer area but a different diameter and total number of U-bends of the air cooling pipe. The new design has a smaller pressure drop in the air cooling pipe than the previous one, as it was designed with a larger diameter and fewer U-bends. With the device, additional experiments have been performed to obtain temperature distributions of the water tank and of the surface and center of the cooling pipe along the axis. The results will be used to optimize the design of the SNU-RCCS. (C) Prof. NO continued Task 3. The experimental work on air ingress is going on without any concern: with nuclear graphite IG-110, various kinetic parameters and reaction rates for the C/CO2 reaction were measured. Then, the rates of the C/CO2 reaction were compared to those of the C/O2 reaction. The rate equation for C/CO2 has been developed. (D) INL added models to RELAP5/ATHENA to calculate the chemical reactions in a VHTR during an air ingress accident. Limited testing of the models indicates that they are calculating a correct spatial distribution in gas compositions. (E) INL benchmarked NACOK natural circulation data. (F) Professor Lee et al. at the University of Michigan (UM) worked on Task 5. The funding was received from the DOE Richland Office at the end of May and the subcontract paperwork was delivered to the UM on the sixth of June. The objective of this task is to develop a state-of-the-art neutronics model for determining power distributions and decay heat deposition rates in a VHTGR core. Our effort during the reporting period covered reactor physics analysis of coated particles and coupled nuclear-thermal-hydraulic (TH) calculations, together with initial calculations for decay heat deposition rates in the core.
SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Haihua; Zhang, Hongbin; Zou, Ling
All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use the Terry turbine, which is composed of the wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts on the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories’ original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is a fully implicit code and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models into RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided according to a reduced-order model, which was obtained from a large number of CFD simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine bucket inlet. The models include both the adiabatic expansion process inside the nozzle and the free expansion process out of the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model. The nozzle analytical models were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand the RCIC behaviors. The cases with two-phase flow at the turbine inlet will be pursued in future work.
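A minimal single-phase sketch in the spirit of the models described above: treat the steam as an ideal gas, check whether the nozzle is choked, and compute the throat mass flux and exit velocity from isentropic relations. The ratio of specific heats, gas constant, and inlet state are idealizations chosen for illustration; the actual RELAP-7 Terry turbine models are more elaborate and include the free expansion outside the nozzle.

```python
import math

def nozzle_exit(p0, T0, p_back, gamma=1.3, R=461.5):
    """Isentropic ideal-gas nozzle: throat mass flux and exit velocity.

    p0, T0   : stagnation pressure [Pa] and temperature [K]
    p_back   : ambient (back) pressure [Pa]
    gamma, R : ratio of specific heats and gas constant, ideal-gas idealization of steam
    """
    p_crit = p0 * (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    p_exit = max(p_back, p_crit)             # choked if back pressure is below critical
    cp = gamma * R / (gamma - 1.0)
    v_exit = math.sqrt(2.0 * cp * T0 * (1.0 - (p_exit / p0) ** ((gamma - 1.0) / gamma)))
    if p_back <= p_crit:
        # choked mass flux per unit throat area
        G = p0 * math.sqrt(gamma / (R * T0)) * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    else:
        rho_exit = p_exit / (R * T0 * (p_exit / p0) ** ((gamma - 1.0) / gamma))
        G = rho_exit * v_exit
    return G, v_exit

# hypothetical RCIC-like inlet: ~7 MPa saturated-steam stagnation state, near-atmospheric casing
print(nozzle_exit(p0=7.0e6, T0=559.0, p_back=0.1e6))
```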
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis; Mandelli, Diego; Prescott, Steven
The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated from these plants via power uprates. In order to evaluate the impact of these factors on the safety of the plant, the Risk Informed Safety Margin Characterization (RISMC) project aims to provide insight to decision makers through a series of simulations of the plant dynamics for different initial conditions (e.g., probabilistic analysis and uncertainty quantification). This report focuses, in particular, on the application of a RISMC detailed demonstration case study for an emergent issue using the RAVEN and RELAP-7 tools. This case study looks at the impact of several challenges to a hypothetical pressurized water reactor, including: (1) a power uprate, (2) a potential loss of off-site power followed by the possible loss of all diesel generators (i.e., a station blackout event), (3) an earthquake-induced station blackout, and (4) a potential earthquake-induced tsunami flood. The analysis is performed by using a set of codes: a thermal-hydraulic code (RELAP-7), a flooding simulation tool (NEUTRINO) and a stochastic analysis tool (RAVEN), which are currently under development at the Idaho National Laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Gonzalez, R.; Petruzzi, A.; D'Auria, F.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and design peculiarities (e.g., oblique control rods, positive void coefficient) required the development and validation of a complex three-dimensional (3D) neutron kinetics (NK) coupled thermal-hydraulic (TH) model. Reactor shut-down is obtained by oblique CRs and, during accidental conditions, by an emergency shut-down system (JDJ) injecting a highly concentrated boron solution (boron clouds) into the moderator tank; the boron cloud reconstruction is obtained using a CFD (CFX) code calculation. A complete LBLOCA calculation implies the application of the RELAP5-3D system code. Within the framework of the third Agreement 'NA-SA - Univ. of Pisa', a new RELAP5-3D control system for the boron injection system was developed and implemented in the validated coupled RELAP5-3D/NESTLE model of the Atucha-2 NPP. The aim of this activity is to find the limiting case (maximum break area size) for the Peak Cladding Temperature for LOCAs under fixed boundary conditions. (authors)
Multiloop integral system test (MIST): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J.R.
1991-04-01
The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in 11 volumes. Volumes 2 through 8 pertain to groups of Phase 3 tests by type; Volume 9 presents inter-group comparisons; Volume 10 provides comparisons between the RELAP5/MOD2 calculations and MIST observations, and Volume 11 (with addendum) presents the later Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include Test Advisory Group (TAG) issues, facility scaling and design, test matrix, observations, comparison of RELAP5 calculations to MIST observations, and MIST versus the TAG issues. MIST generated consistent integral-system data covering a wide range of transient interactions. MIST provided insight into integral system behavior and assisted the code effort. The MIST observations addressed each of the TAG issues. 11 refs., 29 figs., 9 tabs.
Study of steam condensation at sub-atmospheric pressure: setting a basic research using MELCOR code
NASA Astrophysics Data System (ADS)
Manfredini, A.; Mazzini, M.
2017-11-01
One of the most serious accidents that can occur in the experimental nuclear fusion reactor ITER is the break of one of the headers of the refrigeration system of the first wall of the Tokamak. This results in a water-steam mixture discharge into the vacuum vessel (VV), with consequent pressurization of this container. To prevent the pressure in the VV from exceeding 150 kPa absolute, a system discharges the steam into a suppression pool, at an absolute pressure of 4.2 kPa. The computer codes used to analyze such an incident (e.g. RELAP5 or MELCOR) are not validated experimentally for such conditions. Therefore, we planned a basic research program in order to have experimental data useful to validate the heat transfer correlations used in these codes. After a thorough literature search on this topic, ACTA, in collaboration with the staff of ITER, defined the experimental matrix and performed the design of the experimental apparatus. For the thermal-hydraulic design of the experiments, we executed a series of calculations with MELCOR. This code, however, was used in an unconventional mode, with the development of models suited respectively to low and high steam flow-rate tests. The article concludes with a discussion of the placement of the experimental data within the map featuring the phenomenon characteristics, showing the importance of the new knowledge acquired, particularly in the case of chugging.
An Update on Improvements to NiCE Support for RELAP-7
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alex; Wojtowicz, Anna; Deyton, Jordan H.
The Multiphysics Object-Oriented Simulation Environment (MOOSE) is a framework that facilitates the development of applications that rely on finite-element analysis to solve a coupled, nonlinear system of partial differential equations. RELAP-7 represents an update to the venerable RELAP-5 simulator that is built upon this framework and attempts to model the balance-of-plant concerns in a full nuclear plant. This report details the continued support and integration of RELAP-7 and the NEAMS Integrated Computational Environment (NiCE). RELAP-7 is fully supported by NiCE due to ongoing work to tightly integrate NiCE with the MOOSE framework, and subsequently the applications built upon it. NiCE development throughout the first quarter of FY15 has focused on improvements, bug fixes, and feature additions to existing MOOSE-based application support. Specifically, this report will focus on improvements to the NiCE MOOSE Model Builder, the MOOSE application job launcher, and the 3D Nuclear Plant Viewer. This report also includes a comprehensive tutorial that guides RELAP-7 users through the basic NiCE workflow: from input generation and 3D plant modeling, to massively parallel job launch and post-simulation data visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. George L. Mesina; Steven P. Miller
The XMGR5 graphing package [1] for drawing RELAP5 [2] plots is being re-written in Java [3]. Java is a robust programming language that is available at no cost for most computer platforms from Sun Microsystems, Inc. XMGR5 is an extension of an XY plotting tool called ACE/gr, extended to plot data from several US Nuclear Regulatory Commission (NRC) applications. It is also the most popular graphing package worldwide for making RELAP5 plots. In Section 1, a short review of XMGR5 is given, followed by a brief overview of Java. In Section 2, shortcomings of both tkXMGR [4] and XMGR5 are discussed and the value of converting to Java is given. Details of the conversion to Java are given in Section 3. The progress to date, some conclusions and future work are given in Section 4. Some screen shots of the Java version are shown.
Import Manipulate Plot RELAP5/MOD3 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, K. R.
1999-10-05
XMGR5 was derived from an XY plotting tool called ACE/gr, which is copyrighted by Paul J. Turner and in the public domain. The interactive version of ACE/gr is xmgr, which includes a graphical interface to the X Window System. Enhancements to xmgr have been developed which import, manipulate, and plot data from the RELAP5/MOD3, MELCOR, FRAPCON, and SINDA codes, and from NRC databank files. Added capabilities include two-phase property table lookup functions, an equation interpreter, arithmetic library functions, and units conversion. Plot titles, labels, legends, and narrative can be displayed using Latin or Cyrillic alphabets.
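For readers who want to reproduce this kind of import/convert/plot workflow outside the xmgr tooling, the sketch below shows the same pattern in Python; the export file name, channel label, and two-column layout are hypothetical and are not an XMGR5 or RELAP5 file format.

```python
# A minimal sketch (not part of XMGR5) of importing a time/pressure channel,
# converting units, and plotting it. File name and column layout are assumed.
import numpy as np
import matplotlib.pyplot as plt

PA_PER_PSI = 6894.757  # conversion factor, psia -> Pa

# Assume a plot variable exported to two-column ASCII: time [s], pressure [Pa].
data = np.loadtxt("p_280010000.txt")
time_s, pressure_pa = data[:, 0], data[:, 1]

plt.plot(time_s, pressure_pa / PA_PER_PSI, label="p-280010000")
plt.xlabel("Time (s)")
plt.ylabel("Pressure (psia)")
plt.legend()
plt.savefig("pressure_history.png")
```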
Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.
2006-07-01
The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test predictions of the P-series of experiments (i.e., the tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and relevant underlying thermal-hydraulic and heat transfer models. (authors)
[Mechanisms of myeloid cell RelA/p65 in cigarette smoking-induced lung cancer growth in mice].
Yao, Yiwen; Wu, Junlu; Quan, Wenqiang; Zhou, Hong; Zhang, Yu; Wan, Haiying; Li, Dong
2014-06-01
The aim of this study was to investigate the mechanism of cigarette smoking (CS)-induced lung cancer growth in mice. RelA/p65⁻/⁻ mice and WT mice were used to establish mouse models of lung cancer, and each genotype was divided into an air group and a CS group. Tumor number on the lung surface was counted and maximal tumor size was evaluated using HE staining. Kaplan-Meier (K-M) survival curves were used to analyze the survival rate of the mice. Expression of Ki-67, TNF-α and CD68 in the tumor tissue was determined by immunohistochemical analysis, and cyclin D1 and c-myc proteins were examined by Western blot. Apoptosis of tumor cells was analyzed using TUNEL staining. The concentrations of the inflammatory cytokines TNF-α, IL-6 and KC in the mouse lung tissues were evaluated by ELISA. The lung weight, lung tumor multiplicity, and maximum tumor size in the WT mice exposed to CS were (1.5 ± 0.1) g, (64.8 ± 4.1) and (7.6 ± 0.2) mm, respectively, significantly higher than those in the WT mice not exposed to CS (P < 0.05 for all). However, there were no statistically significant differences between RelA/p65⁻/⁻ mice before and after CS exposure (P > 0.05 for all). Kaplan-Meier survival analysis showed that CS exposure significantly shortened the lifespan of WT mice (P < 0.05), and deletion of RelA/p65 in myeloid cells resulted in increased survival compared with that of the WT mice (P < 0.05 for all). The ratios of Ki-67-positive tumor cells were (43.4 ± 2.9)%, (60.6 ± 5.4)%, (12.8 ± 3.6)% and (15.0 ± 4.2)% in the WT air, WT CS, RelA/p65⁻/⁻ air and RelA/p65⁻/⁻ CS groups, respectively. After smoking, the number of Ki-67-positive cells was significantly increased in the WT mice (P < 0.05). However, there was no significant difference between the RelA/p65⁻/⁻ groups before and after smoking (P > 0.05). The apoptosis rates of the WT air, WT CS, RelA/p65⁻/⁻ air and RelA/p65⁻/⁻ CS groups were (11.6 ± 1.7)%, (13.0 ± 2.0)%, (13.2 ± 2.0)% and (11.0 ± 1.4)%, respectively, with no significant difference among them (P > 0.05). Expression of cyclin D1 and c-myc was induced in response to CS exposure in lung tumor cells of WT mice. In contrast, their expression was not significantly changed in the RelA/p65⁻/⁻ mice after smoke exposure. CS exposure was associated with an increased number of macrophages infiltrating the tumor tissue in both WT and RelA/p65⁻/⁻ mice (P < 0.05). The concentrations of IL-6, KC and TNF-α were significantly increased after CS exposure in the lungs of WT mice (P < 0.05). Cigarette smoking promotes lung cancer growth in mice. Myeloid cell RelA/p65 mediates CS-induced tumor growth. TNF-α regulated by RelA/p65 may be involved in lung cancer development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
This document outlines the development of a high fidelity, best estimate nuclear power plant severe transient simulation capability that will complement or enhance the integral system codes historically used for licensing and analysis of severe accidents. As with other tools in the Risk Informed Safety Margin Characterization (RISMC) Toolkit, the ultimate user of the Enhanced Severe Transient Analysis and Prevention (ESTAP) capability is the plant decision-maker; the deliverable to that customer is a modern, simulation-based safety analysis capability, applicable to a much broader class of safety issues than is traditional Light Water Reactor (LWR) licensing analysis. Currently, the RISMC pathway's major emphasis is placed on developing RELAP-7, a next-generation safety analysis code, and on showing how to use RELAP-7 to analyze margin from a modern point of view: that is, by characterizing margin in terms of the probabilistic spectra of the “loads” applied to systems, structures, and components (SSCs), and the “capacity” of those SSCs to resist those loads without failing. The first objective of the ESTAP task, and the focus of one task of this effort, is to augment RELAP-7 analyses with user-selected multi-dimensional, multi-phase models of specific plant components to simulate complex phenomena that may lead to, or exacerbate, severe transients and core damage. Such phenomena include: coolant crossflow between PWR assemblies during a severe reactivity transient, stratified single or two-phase coolant flow in primary coolant piping, inhomogeneous mixing of emergency coolant water or boric acid with hot primary coolant, and water hammer. These are well-documented phenomena associated with plant transients that are generally not captured in system codes. They are, however, generally limited to specific components, structures, and operating conditions. The second ESTAP task is to similarly augment a severe (post-core damage) accident integral analysis code with high fidelity simulations that would allow investigation of multi-dimensional, multi-phase containment phenomena that are only treated approximately in established codes.
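As a rough illustration of the load-versus-capacity view of margin described above, the following sketch estimates a failure probability and a margin percentile by Monte Carlo sampling; the distributions and their parameters are invented for illustration and are not RELAP-7 or RISMC results.

```python
# Illustrative sketch of margin characterization: probability that a sampled
# "load" exceeds a sampled "capacity". All distributions are assumed.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical peak cladding temperature "load" (K) from uncertain transients,
# and a "capacity" (failure threshold, K) with its own uncertainty.
load = rng.normal(loc=1050.0, scale=60.0, size=n)
capacity = rng.normal(loc=1478.0, scale=25.0, size=n)

margin = capacity - load
failure_probability = np.mean(margin <= 0.0)
print(f"P(load >= capacity) ~ {failure_probability:.2e}")
print(f"5th percentile of margin: {np.percentile(margin, 5):.1f} K")
```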
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J.R.
1991-08-01
The multiloop integral system test (MIST) was part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock & Wilcox-designed plants. MIST was sponsored by the US Nuclear Regulatory Commission, the Babcock & Wilcox Owners Group, the Electric Power Research Institute, and Babcock & Wilcox. The unique features of the Babcock & Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral system facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the once-through integral system (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in eleven volumes; Volumes 2 through 8 pertain to groups of Phase 3 tests by type, and Volume 9 presents intergroup comparisons. Volume 10 provides comparisons between the RELAP5 MOD2 calculations and MIST observations, and Volume 11 (with addendum) presents the later, Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include: test advisory group (TAG) issues; facility scaling and design; test matrix; observations; comparisons of RELAP5 calculations to MIST observations; and MIST versus the TAG issues. 11 refs., 29 figs., 9 tabs.
INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Javier Ortensi; Sonat Sen
2013-09-01
The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.
Kim, Jung-Woong; Jang, Sang-Min; Kim, Chul-Hong; An, Joo-Hee; Kang, Eun-Jin; Choi, Kyung-Hee
2012-01-01
The nuclear factor-κB (NF-κB) family is involved in the expressions of numerous genes, in development, apoptosis, inflammatory responses, and oncogenesis. In this study we identified four NF-κB target genes that are modulated by TIP60. We also found that TIP60 interacts with the NF-κB RelA/p65 subunit and increases its transcriptional activity through protein-protein interaction. Although TIP60 binds with RelA/p65 using its histone acetyltransferase domain, TIP60 does not directly acetylate RelA/p65. However, TIP60 maintained acetylated Lys-310 RelA/p65 levels in the TNF-α-dependent NF-κB signaling pathway. In chromatin immunoprecipitation assay, TIP60 was primarily recruited to the IL-6, IL-8, C-IAP1, and XIAP promoters in TNF-α stimulation followed by acetylation of histones H3 and H4. Chromatin remodeling by TIP60 involved the sequential recruitment of acetyl-Lys-310 RelA/p65 to its target gene promoters. Furthermore, we showed that up-regulated TIP60 expression was correlated with acetyl-Lys-310 RelA/p65 expressions in hepatocarcinoma tissues. Taken together these results suggest that TIP60 is involved in the NF-κB pathway through protein interaction with RelA/p65 and that it modulates the transcriptional activity of RelA/p65 in NF-κB-dependent gene expression. PMID:22249179
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rebollo, L.
1992-04-01
Several beyond-design-basis cold leg small-break LOCA postulated scenarios based on the "lessons learned" in the OECD-LOFT LP-SB-3 experiment have been analyzed for the Westinghouse single loop Jose Cabrera Nuclear Power Plant belonging to the Spanish utility UNION ELECTRICA FENOSA, S.A. The analysis has been done by the utility in the Thermal-Hydraulic & Accident Analysis Section of the Engineering Department of the Nuclear Division. The RELAP5/MOD2/36.04 code has been used on a CYBER 180/830 computer and the simulation includes the 6 in. RHRS charging line, the 2 in. pressurizer spray, and the 1.5 in. CVCS make-up line piping breaks. The assumption of a "total black-out condition" coincident with the occurrence of the event has been made in order to consider a plant degraded condition with total active failure of the ECCS. As a result of the analysis, estimates of the "time to core overheating startup" as well as an evaluation of alternate operator measures to mitigate the consequences of the event have been obtained. Finally a proposal for improving the LOCA emergency operating procedure (E-1) has been suggested.
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with a two-fluid, six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, (high-order) fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. As a result, this in turn provides the possibility to utilize more sophisticated flow regime maps in the future to further improve simulation accuracy.
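The following toy sketch illustrates the Jacobian-free Newton-Krylov idea on a small nonlinear system (it is not the two-fluid six-equation model): the Jacobian is never formed, and Jacobian-vector products are approximated by finite differences inside a Krylov linear solve, here via SciPy's newton_krylov. The boundary-value problem below is chosen only for demonstration.

```python
# Minimal JFNK demonstration on a toy 1D reaction-diffusion residual:
# solve u'' = exp(u) with u(0) = 0 and u(1) = 1 on a uniform grid.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    n = u.size
    h = 1.0 / (n + 1)
    uext = np.concatenate(([0.0], u, [1.0]))  # Dirichlet boundary values
    # Discrete residual: second difference minus the nonlinear source term.
    return (uext[2:] - 2.0 * uext[1:-1] + uext[:-2]) / h**2 - np.exp(uext[1:-1])

u0 = np.linspace(0.0, 1.0, 50)  # initial guess at interior nodes
solution = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
print("max residual:", np.abs(residual(solution)).max())
```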
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, A.; Garner, P.; Hanan, N.
Thermal-hydraulic simulations have been performed using computational fluid dynamics (CFD) for the highly-enriched uranium (HEU) design of the IVG.1M reactor at the Institute of Atomic Energy (IAE) at the National Nuclear Center (NNC) in the Republic of Kazakhstan. Steady-state simulations were performed for both types of fuel assembly (FA), i.e. the FA in rows 1 & 2 and the FA in row 3, as well as for single pins in those FA (600 mm and 800 mm pins). Both single-pin calculations and bundle sectors have been simulated for the most conservative operating conditions corresponding to the 10 MW output power, which corresponds to a pin unit cell Reynolds number of only about 7500. Simulations were performed using the commercial code STAR-CCM+ for the actual twisted pin geometry as well as a straight-pin approximation. Various Reynolds-Averaged Navier-Stokes (RANS) turbulence models gave different results, and so some validation runs with a higher-fidelity Large Eddy Simulation (LES) code were performed given the lack of experimental data. These singled out the Realizable Two-Layer k-ε as the most accurate turbulence model for estimating surface temperature. Single-pin results for the twisted case, based on the average flow rate per pin and peak pin power, were conservative for peak clad surface temperature compared to the bundle results. The straight-pin calculations were also conservative compared to the twisted-pin simulations, as expected, but the single-pin straight case was not always conservative with regard to the straight-pin bundle. This was due to the straight-pin temperature distribution being strongly influenced by the pin orientation, particularly near the outer boundary. The straight-pin case also predicted the peak temperature to be in a different location than the twisted-pin case. This is a limitation of the straight-pin approach. The peak temperature pin was in a different location from the peak power pin in every case simulated, and occurred at an inner pin just before the enrichment change. The 600 mm case demonstrated a peak clad surface temperature of 370.4 K, while the 800 mm case had a temperature of 391.6 K. These temperatures are well below the temperatures necessary for boiling to occur at the rated pressure. Fuel temperatures are also well below the melting point. Future bundle work will include simulations of the proposed low-enriched uranium (LEU) design. Two transient scenarios were also investigated for the single-pin geometries. Both were “model” problems that were focused on pure thermal-hydraulic behavior, and as such were simple power changes that did not incorporate neutron kinetics modeling. The first scenario was a high-power ramp increase, while the second scenario was a low-power step increase. A cylindrical RELAP model was also constructed to investigate its accuracy as compared to the higher-fidelity CFD. Comparisons between the two codes showed good agreement for peak temperatures in the fuel and at the cladding surface for both cases. In the step transient, temperatures at four axial levels were also computed. These showed greater but reasonable discrepancy, with RELAP predicting higher temperatures. These results provide some evidence that RELAP can be used with confidence in modeling transients for IVG.
NASA Astrophysics Data System (ADS)
Kaliatka, T.; Povilaitis, M.; Kaliatka, A.; Urbonavicius, E.
2012-10-01
The Wendelstein 7-X (W7-X) nuclear fusion device is a stellarator-type experimental device developed by the Max Planck Institute of Plasma Physics. Rupture of one of the 40 mm inner diameter coolant pipes providing water for the divertor targets during the "baking" regime of facility operation is considered to be the most severe accident in terms of plasma vessel pressurization. The "baking" regime is the operating regime during which the plasma vessel structures are heated to temperatures acceptable for plasma ignition in the vessel. This paper presents a model of the W7-X cooling system (pumps, valves, pipes, hydro-accumulators, and heat exchangers), developed using the state-of-the-art thermal-hydraulic code RELAP5 Mod3.3, and a model of the plasma vessel, developed by employing the lumped-parameter code COCOSYS. Using both models, numerical simulations of the processes in the W7-X cooling system and plasma vessel have been performed. The simulation results showed that an automatic valve closure time of 1 s is the most acceptable (no water hammer effect occurs) and that the selected burst disk area is sufficient to prevent overpressure in the plasma vessel.
Importins α and β signaling mediates endothelial cell inflammation and barrier disruption.
Leonard, Antony; Rahman, Arshad; Fazal, Fabeha
2018-04-01
Nucleocytoplasmic shuttling via importins is central to the function of eukaryotic cells and an integral part of the processes that lead to many human diseases. In this study, we addressed the role of α and β importins in the mechanism of endothelial cell (EC) inflammation and permeability, important pathogenic features of many inflammatory diseases such as acute lung injury and atherosclerosis. RNAi-mediated knockdown of importin α4 or α3 each inhibited NF-κB activation, proinflammatory gene (ICAM-1, VCAM-1, and IL-6) expression, and thereby endothelial adhesivity towards HL-60 cells, upon thrombin challenge. The inhibitory effect of α4 and α3 knockdown was associated with impaired nuclear import and consequently, DNA binding of RelA/p65 subunit of NF-κB and occurred independently of IκBα degradation. Intriguingly, knockdown of importins α4 and α3 also inhibited thrombin-induced RelA/p65 phosphorylation at Ser 536 , showing a novel role of α importins in regulating transcriptional activity of RelA/p65. Similarly, knockdown of importin β1, but not β2, blocked thrombin-induced activation of RelA/p65 and its target genes. In parallel studies, TNFα-mediated inflammatory responses in EC were refractory to knockdown of importins α4, α3 or β1, indicating a stimulus-specific regulation of RelA/p65 and EC inflammation by these importins. Importantly, α4, α3, or β1 knockdown also protected against thrombin-induced EC barrier disruption by inhibiting the loss of VE-cadherin at adherens junctions and by regulating actin cytoskeletal rearrangement. These results identify α4, α3 and β1 as critical mediators of EC inflammation and permeability associated with intravascular coagulation. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego
2016-06-01
RAVEN is a software framework able to perform parametric and stochastic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose stochastic and uncertainty quantification platform, capable of communicating with any system code. In fact, the provided Application Programming Interfaces (APIs) allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible through input files or via Python interfaces. RAVEN is capable of investigating system response and exploring the input space using various sampling schemes such as Monte Carlo, grid, or Latin hypercube. However, RAVEN's strength lies in its system feature discovery capabilities such as: constructing limit surfaces, separating regions of the input space leading to system failure, and using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework arose. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to identify the frequency of an event potentially leading to a system failure, but the proximity (or lack thereof) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. peak pressure in a pipe) is exceeded under certain conditions. Most of the capabilities, implemented with RELAP-7 as the principal focus, are easily deployable to other system codes. For this reason, several side activities have been carried out (e.g. for RELAP5-3D and any MOOSE-based application) or are currently ongoing to couple RAVEN with several different software packages. The aim of this document is to provide a set of commented examples that can help the user become familiar with RAVEN code usage.
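A minimal sketch of the input-perturbation idea that RAVEN automates is given below; it is not RAVEN's API. Latin hypercube samples of two assumed uncertain parameters are written into a templated input deck, and the parameter names, ranges, and file layout are all hypothetical.

```python
# Conceptual sketch of driving a system code by input-file perturbation:
# Latin hypercube samples of two assumed parameters written into a template.
from pathlib import Path
from scipy.stats import qmc

template = "break_area = {break_area:.5e}\npump_coastdown = {coastdown:.2f}\n"

sampler = qmc.LatinHypercube(d=2, seed=7)
unit_samples = sampler.random(n=10)
# Scale the unit-interval samples to assumed physical ranges.
scaled = qmc.scale(unit_samples, l_bounds=[1.0e-4, 5.0], u_bounds=[5.0e-4, 20.0])

for i, (area, coastdown) in enumerate(scaled):
    Path(f"run_{i:03d}.inp").write_text(
        template.format(break_area=area, coastdown=coastdown)
    )
```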
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew; Hu, Rui; Lisowski, Darius
2016-04-17
The Reactor Cavity Cooling System (RCCS) is an important passive safety system being incorporated into the overall safety strategy for high temperature advanced reactor concepts such as the High Temperature Gas-Cooled Reactor (HTGR). The Natural Convection Shutdown Heat Removal Test Facility (NSTF) at Argonne National Laboratory (Argonne) reflects a 1/2-scale model of the primary features of one conceptual air-cooled RCCS design. The project conducts ex-vessel, passive heat removal experiments in support of the Department of Energy Office of Nuclear Energy's Advanced Reactor Technology (ART) program, while also generating data for code validation purposes. While experiments are being conducted at the NSTF to evaluate the feasibility of the passive RCCS, parallel modeling and simulation efforts are ongoing to support the design, fabrication, and operation of these natural convection systems. Both system-level and high-fidelity computational fluid dynamics (CFD) analyses were performed to gain a complete understanding of the complex flow and heat transfer phenomena in natural convection systems. This paper provides a summary of the RELAP5-3D NSTF model development efforts and provides comparisons between simulation results and experimental data from the NSTF. Overall, the simulation results compared favorably to the experimental data; however, further analyses need to be conducted to investigate any identified differences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggles, A.E.; Morris, D.G.
The RELAP5/MOD2 code was used to predict the thermal-hydraulic behavior of the HFIR core during decay heat removal through boiling natural circulation. The low system pressure and low mass flux values associated with boiling natural circulation are far from conditions for which RELAP5 is well exercised. Therefore, some simple hand calculations are used herein to establish the physics of the results. The interpretation and validation effort is divided between the time average flow conditions and the time varying flow conditions. The time average flow conditions are evaluated using a lumped parameter model and heat balance. The Martinelli-Nelson correlations are used to model the two-phase pressure drop and void fraction vs flow quality relationship within the core region. Systems of parallel channels are susceptible to both density wave oscillations and pressure drop oscillations. Periodic variations in the mass flux and exit flow quality of individual core channels are predicted by RELAP5. These oscillations are consistent with those observed experimentally and are of the density wave type. The impact of the time varying flow properties on local wall superheat is bounded herein. The conditions necessary for Ledinegg flow excursions are identified. These conditions do not fall within the envelope of decay heat levels relevant to HFIR in boiling natural circulation. 14 refs., 5 figs., 1 tab.
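The sketch below shows the flavor of such a hand calculation: a lumped heat balance giving the core exit flow quality from an assumed decay power and natural circulation flow rate. All numbers are illustrative round values, not HFIR design data.

```python
# Lumped heat balance for a boiling natural circulation channel:
# sensible heating to saturation, then evaporation sets the exit quality.
decay_power_w  = 50.0e3   # assumed decay heat removed by the channel, W
mass_flow_kg_s = 0.5      # assumed natural circulation mass flow, kg/s
cp_j_kg_k      = 4200.0   # liquid specific heat near saturation, J/(kg K)
t_inlet_k      = 360.0    # assumed inlet temperature, K
t_sat_k        = 373.0    # saturation temperature near atmospheric pressure, K
h_fg_j_kg      = 2.26e6   # latent heat of vaporization near atmospheric pressure, J/kg

q_sensible = mass_flow_kg_s * cp_j_kg_k * (t_sat_k - t_inlet_k)  # W needed to reach saturation
x_exit = max(0.0, (decay_power_w - q_sensible) / (mass_flow_kg_s * h_fg_j_kg))
print(f"estimated core exit quality: {x_exit:.3f}")
```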
Condensation model for the ESBWR passive condensers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Revankar, S. T.; Zhou, W.; Wolf, B.
2012-07-01
In General Electric's economic simplified boiling water reactor (GE-ESBWR), the passive containment cooling system (PCCS) plays a major role in containment pressure control in the case of a loss-of-coolant accident. The PCCS condenser must be able to remove sufficient energy from the reactor containment to prevent the containment from exceeding its design pressure following a design basis accident. There are three PCCS condensation modes depending on the containment pressurization due to coolant discharge: complete condensation, cyclic venting, and through-flow mode. The present work reviews the models and presents their predictive capability along with comparisons with existing data from separate effects tests. The condensation models in the thermal-hydraulics code RELAP5 are also assessed to examine their application to the various condensation flow modes. The default model in the code predicts complete condensation well and is basically the Nusselt solution. The UCB model predicts through-flow well. None of the condensation models in RELAP5 predicts complete condensation, cyclic venting, and through-flow condensation consistently. New condensation correlations are given that accurately predict all three modes of PCCS condensation. (authors)
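For reference, the classical Nusselt laminar-film solution mentioned above can be evaluated directly; the sketch below uses rough saturated-water properties near atmospheric pressure and an assumed wall subcooling and condensing length, so the result is only indicative.

```python
# Classical Nusselt solution for the average laminar film condensation
# heat transfer coefficient on a vertical surface (rough water properties).
g = 9.81        # gravitational acceleration, m/s^2
rho_l = 958.0   # liquid density, kg/m^3
rho_v = 0.6     # vapor density, kg/m^3
k_l = 0.68      # liquid thermal conductivity, W/(m K)
mu_l = 2.8e-4   # liquid dynamic viscosity, Pa s
h_fg = 2.257e6  # latent heat, J/kg
L = 1.0         # condensing length, m (assumed)
dT = 10.0       # T_sat - T_wall, K (assumed)

h_avg = 0.943 * ((g * rho_l * (rho_l - rho_v) * h_fg * k_l**3)
                 / (mu_l * L * dT)) ** 0.25
print(f"average film condensation HTC ~ {h_avg:.0f} W/(m^2 K)")
```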
Development of fission-products transport model in severe-accident scenarios for Scdap/Relap5
NASA Astrophysics Data System (ADS)
Honaiser, Eduardo Henrique Rangel
The understanding and estimation of the release of fission products during a severe accident became one of the priorities of the nuclear community after the Three Mile Island Unit 2 (TMI-2) accident in 1979 and the Chernobyl accident in 1986. Since then, theoretical developments and experiments have shown that the primary circuit systems of light water reactors (LWRs) have the potential to attenuate the release of fission products, a fact that had previously been neglected. An advanced tool, compatible with nuclear thermal-hydraulics integral codes, is developed to predict the retention and physical evolution of the fission products in the primary circuit of LWRs, without considering chemistry effects. The tool embodies state-of-the-art models for the phenomena involved and adds new models. The capabilities acquired after the implementation of this tool in the Scdap/Relap5 code can be used to increase the accuracy of Level 2 probabilistic safety assessment (PSA), enhance reactor accident management procedures, and design new emergency safety features.
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte-Carlo and Latin hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of the timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, “extracting information” means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
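A toy example of the "extract information from many runs" workflow is sketched below: each time history is reduced to a few scalar features, and outlier runs are flagged with a simple z-score rule. The runs are synthetic stand-ins; a real application would load system-code output instead.

```python
# Feature extraction and outlier flagging over many synthetic transient runs.
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_times = 200, 500
time = np.linspace(0.0, 100.0, n_times)

# Synthetic peak-and-decay temperature histories with random magnitude and width.
peaks = rng.normal(900.0, 40.0, n_runs)
tau = rng.normal(30.0, 5.0, n_runs)
runs = 600.0 + (peaks[:, None] - 600.0) * np.exp(
    -((time[None, :] - 20.0) / tau[:, None]) ** 2)

# Reduce each run to two features: peak value and time of peak.
features = np.column_stack([runs.max(axis=1), time[runs.argmax(axis=1)]])

# Flag runs whose features sit far from the population mean (z-score > 3).
z = (features - features.mean(axis=0)) / features.std(axis=0)
outliers = np.where(np.abs(z).max(axis=1) > 3.0)[0]
print("outlier run indices:", outliers)
```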
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Haihua; Zou, Ling; Zhang, Hongbin
As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and to extend the BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia's original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model which was obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input information for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand the RCIC behavior. The newly developed nozzle models and the turbine rotor model modified from Sandia's original work have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model. An input model was developed to test the Terry turbine RCIC system, which generates reasonable results. Both the INL RCIC model and the Sandia RCIC model produce results matching major rated parameters such as the rotational speed, pump torque, and the turbine shaft work for the normal operation condition. The Sandia model is more sensitive to the turbine outlet pressure than the INL model. The next step will be further refining the Terry turbine models by including two-phase flow cases so that off-design conditions can be simulated. The pump model could also be enhanced with the use of the homologous curves.
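The sketch below illustrates only the adiabatic expansion step, using an ideal-gas approximation for steam; it is not the paper's under-expanded jet model (which also handles the free expansion to ambient pressure), and the supply and exit conditions are assumed.

```python
# Ideal-gas isentropic nozzle expansion: exit velocity and Mach number
# from assumed stagnation and exit pressures. Steam treated as ideal gas.
import math

gamma = 1.3   # approximate isentropic exponent for superheated steam
R = 461.5     # specific gas constant of steam, J/(kg K)
cp = gamma * R / (gamma - 1.0)

p0, T0 = 7.0e6, 560.0  # assumed stagnation (steam supply) conditions: Pa, K
p_exit = 1.0e6         # assumed nozzle exit pressure, Pa

T_exit = T0 * (p_exit / p0) ** ((gamma - 1.0) / gamma)  # isentropic temperature drop
v_exit = math.sqrt(2.0 * cp * (T0 - T_exit))            # energy balance, negligible inlet velocity
mach = v_exit / math.sqrt(gamma * R * T_exit)
print(f"exit velocity ~ {v_exit:.0f} m/s, Mach ~ {mach:.2f}")
```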
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pecchia, M.; D'Auria, F.; Mazzantini, O.
2012-07-01
Atucha-2 is a Siemens-designed PHWR under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
Analysis of the SL-1 Accident Using RELAP5-3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Francisco, A. D.; Tomlinson, E. T.
2007-11-08
On January 3, 1961, at the National Reactor Testing Station in Idaho Falls, Idaho, the Stationary Low Power Reactor No. 1 (SL-1) experienced a major nuclear excursion, killing three people and destroying the reactor core. The SL-1 reactor, a 3 MW{sub t} boiling water reactor, was shut down and undergoing routine maintenance work at the time. This paper presents an analysis of the SL-1 reactor excursion using the RELAP5-3D thermal-hydraulic and nuclear analysis code, with the intent of simulating the accident from the point of reactivity insertion to destruction and vaporization of the fuel. Results are presented, along with a discussion of the sensitivity to some reactor and transient parameters (many of the details are known only with a high level of uncertainty).
MOOSE IPL Extensions (Control Logic)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Permann, Cody
In FY-2015, the development of MOOSE was driven by the needs of the NEAMS MOOSE-based applications, BISON, MARMOT, and RELAP-7. Emphasis was placed on the continued upkeep and improvement of MOOSE in support of the product line integration goals. New unified documentation tools have been developed, several improvements to regression testing have been made, and overall better software quality practices have been implemented. In addition, the Multiapps and Transfers systems have seen significant refactoring and robustness improvements, as has the “Restart and Recover” system in support of Multiapp simulations. Finally, a completely new “Control Logic” system has been engineered to replace the prototype system currently in use in the RELAP-7 code. The development of this system continues and is expected to handle existing needs as well as support future enhancements.
Mukherjee, Tapas; Taye, Nandaraj; Vijayaragavan, Bharath; Chattopadhyay, Samit; Gomes, James; Basak, Soumen
2017-01-01
The nuclear factor κB (NF-κB) transcription factors coordinate the inflammatory immune response during microbial infection. Pathogenic substances engage canonical NF-κB signaling through the heterodimer RelA:p50, which is subjected to rapid negative feedback by inhibitor of κBα (IκBα). The noncanonical NF-κB pathway is required for the differentiation of immune cells; however, crosstalk between both pathways can occur. Concomitantly activated noncanonical signaling generates p52 from the p100 precursor. The synthesis of p100 is induced by canonical signaling, leading to formation of the late-acting RelA:p52 heterodimer. This crosstalk prolongs inflammatory RelA activity in epithelial cells to ensure pathogen clearance. We found that the Toll-like receptor 4 (TLR4)–activated canonical NF-κB signaling pathway is insulated from lymphotoxin β receptor (LTβR)–induced noncanonical signaling in mouse macrophage cell lines. Combined computational and biochemical studies indicated that the extent of NF-κB–responsive expression of Nfkbia, which encodes IκBα, inversely correlated with crosstalk. The Nfkbia promoter showed enhanced responsiveness to NF-κB activation in macrophages compared to that in fibroblasts. We found that this hyperresponsive promoter engaged the RelA:p52 dimer generated during costimulation of macrophages through TLR4 and LTβR to trigger synthesis of IκBα at late time points, which prevented the late-acting RelA crosstalk response. Together, these data suggest that despite the presence of identical signaling networks in cells of diverse lineages, emergent crosstalk between signaling pathways is subject to cell type–specific regulation. We propose that the insulation of canonical and noncanonical NF-κB pathways limits the deleterious effects of macrophage-mediated inflammation. PMID:27923915
RAVEN: a GUI and an Artificial Intelligence Engine in a Dynamic PRA Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Rabiti; D. Mandelli; A. Alfonsi
Increases in computational power and pressure for more accurate simulations and estimations of accident scenario consequences are driving the need for Dynamic Probabilistic Risk Assessment (PRA) [1] of very complex models. While more sophisticated algorithms and computational power address the back end of this challenge, the front end is still handled by engineers who need to extract meaningful information from the large amount of data and build these complex models. Compounding this problem are the difficulty of knowledge transfer and retention and the increasing speed of software development. The above-described issues would have negatively impacted deployment of the new high-fidelity plant simulator RELAP-7 (Reactor Excursion and Leak Analysis Program) at Idaho National Laboratory. Therefore RAVEN, which was initially focused on serving as the plant controller for RELAP-7, will also help mitigate future RELAP-7 software engineering risks. To accomplish this task, the Reactor Analysis and Virtual Control Environment (RAVEN) has been designed to provide an easy-to-use Graphical User Interface (GUI) for building plant models and to leverage artificial intelligence algorithms in order to reduce computational time, improve results, and help the user identify the behavioral patterns of Nuclear Power Plants (NPPs). In this paper we present the GUI implementation and its current capability status. We also introduce the support vector machine algorithms and show our evaluation of their potential for increasing the accuracy and reducing the computational cost of PRA analysis. In this evaluation we refer to preliminary studies performed under the Risk Informed Safety Margins Characterization (RISMC) project of the Light Water Reactors Sustainability (LWRS) campaign [3]. RISMC simulation needs and algorithm testing are currently used as guidance to prioritize RAVEN developments relevant to PRA.
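As a hint of how a support vector machine can stand in for expensive system-code runs, the sketch below trains a classifier on hypothetical "failure / no failure" outcomes over a two-dimensional input space and then classifies new points without further code runs; the input names, ranges, and failure rule are invented for illustration.

```python
# Train an SVM surrogate on hypothetical pass/fail outcomes, then query it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical inputs: break area (m^2) and delay before injection (s).
X = np.column_stack([rng.uniform(1e-4, 5e-4, 300), rng.uniform(0.0, 600.0, 300)])
# Stand-in for the system-code outcome: failure when both values are large.
y = ((X[:, 0] > 3e-4) & (X[:, 1] > 300.0)).astype(int)

surrogate = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
query = np.array([[4.0e-4, 450.0], [1.5e-4, 100.0]])
print("predicted outcome (1 = failure):", surrogate.predict(query))
```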
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-08-24
This study presents a numerical investigation on using the Jacobian-free Newton–Krylov (JFNK) method to solve the two-phase flow four-equation drift flux model with realistic constitutive correlations (‘closure models’). The drift flux model is based on the work of Ishii and his collaborators. Additional constitutive correlations for vertical channel flow, such as two-phase flow pressure drop, flow regime map, wall boiling and interfacial heat transfer models, were taken from the RELAP5-3D Code Manual and included to complete the model. The staggered-grid finite volume method and the fully implicit backward Euler method were used as the spatial discretization and time integration schemes, respectively. The Jacobian-free Newton–Krylov method shows no difficulty in solving the two-phase flow drift flux model with a discrete flow regime map. In addition to the Jacobian-free approach, the preconditioning matrix is obtained by using the default finite differencing method provided in the PETSc package, and consequently the labor-intensive implementation of a complex analytical Jacobian matrix is avoided. Extensive and successful numerical verification and validation have been performed to prove the correct implementation of the models and methods. Code-to-code comparison with RELAP5-3D has further demonstrated the successful implementation of the drift flux model.
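The basic closure underlying any drift-flux formulation can be evaluated in a few lines; the sketch below computes a void fraction from assumed superficial velocities using a typical distribution parameter and drift velocity, not the specific RELAP5-3D correlations cited above.

```python
# Drift-flux closure: void fraction from superficial velocities,
# alpha = j_g / (C0 * j + V_gj), with assumed round-number coefficients.
j_g = 0.8    # gas superficial velocity, m/s (assumed)
j_f = 1.5    # liquid superficial velocity, m/s (assumed)
C0 = 1.13    # distribution parameter (typical churn/slug value)
v_gj = 0.25  # drift velocity, m/s (assumed)

j = j_g + j_f
alpha = j_g / (C0 * j + v_gj)
print(f"void fraction ~ {alpha:.3f}")
```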
Problems with numerical techniques: Application to mid-loop operation transients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryce, W.M.; Lillington, J.N.
1997-07-01
There has been an increasing need to consider accidents at shutdown, which have been shown in some PSAs to provide a significant contribution to overall risk. In the UK, experience has been gained at three levels: (1) Assessment of codes against experiments; (2) Plant studies specifically for Sizewell B; and (3) Detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. The authors believe that these kinds of problems are probably generic to most of the present generation of system thermal-hydraulic codes for the conditions present in mid-loop transients. Thus, as far as possible, these problems and their solutions are presented in generic terms. The areas addressed include: condensables at low pressure, poor time step calculation detection, water packing, inadequate physical modelling, numerical heat transfer, and mass errors. In general, single code modifications have been proposed to solve the problems. These have been very much concerned with means of improving existing models rather than with formulating a completely new approach. They have been produced after a particular problem has arisen. Thus, and this has been borne out in practice, the danger is that when new transients are attempted, new problems arise which then also require patching.
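One simple diagnostic for the mass-error problem mentioned above is to reconstruct the inventory from time-integrated boundary flows and compare it with the inventory the code reports; the sketch below applies that check to synthetic histories, with an artificial drift standing in for numerical mass error.

```python
# Global mass-error check: reported inventory vs. inventory reconstructed
# from boundary flows. Histories below are synthetic stand-ins for code output.
import numpy as np

def mass_error(reported_mass, inflow, outflow, dt):
    """Reported inventory minus inventory reconstructed from boundary flows (kg)."""
    reconstructed = reported_mass[0] + np.cumsum((inflow - outflow) * dt)
    return reported_mass - reconstructed

dt = 0.1
t = np.arange(0.0, 100.0, dt)
inflow = np.full_like(t, 2.0)                  # kg/s, assumed injection
outflow = 2.0 + 0.05 * np.sin(0.1 * t)         # kg/s, assumed break flow
# "Reported" inventory with a small artificial drift added.
reported = 5000.0 + np.cumsum((inflow - outflow) * dt) + 1.0e-4 * t**2

err = mass_error(reported, inflow, outflow, dt)
print(f"cumulative mass error at end of transient ~ {err[-1]:.2f} kg")
```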
NEAMS Update. Quarterly Report for October - December 2011.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, K.
2012-02-16
The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights for October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011; a detailed discussion of this new simulation tool is given. (2) A coolant sub-channel model and a preliminary UO{sub 2} smeared-cracking model were implemented in BISON, the single-pin fuel code; more information on how these models were developed and benchmarked is given. (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations, and a discussion of the significance of this advance is given. (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given. (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP; this is a new initiative. (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability. (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed; this important bridge between subcontinuum and continuum phenomena is discussed. (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm. (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed; an explanation of the difficulty of this simulation is given. (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron. (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report. (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE); more details on the planned NEAMS computing environment are given. (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, J.C.
This report discusses comparisons of a RELAP5 posttest calculation of the recovery portion of Semiscale Mod-2B test S-SG-1 with the test data. The posttest calculation was performed with the RELAP5/MOD2 cycle 36.02 code without updates. The recovery procedure that was calculated mainly consisted of secondary feed and steam using auxiliary feedwater injection and the atmospheric dump valve of the unaffected steam generator (the steam generator without the tube rupture). A second procedure was initiated after the trends of the secondary feed and steam procedure had been established, and this was to stop the safety injection that had been provided by two trains of both the charging and high pressure injection systems. The Semiscale Mod-2B configuration is a small-scale (1/1705), nonnuclear, instrumented model of a Westinghouse four-loop pressurized water reactor power plant. S-SG-1 was a single-tube, cold-side, steam generator tube rupture experiment. The comparison of the posttest calculation and data included comparing the general trends and the driving mechanisms of the responses, the phenomena, and the individual responses of the main parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schultz, R.R.; Wagoner, S.R.
1983-01-01
As a part of the charter of the Severe Accident Sequence Analysis (SASA) Program, station blackout transients have been analyzed using a RELAP5 model of the Browns Ferry Unit 1 Plant. The task was conducted in partial fulfillment of the needs of the US Nuclear Regulatory Commission in examining Unresolved Safety Issue A-44: Station Blackout (1). The station blackout transients were examined (a) to define the equipment needed to maintain a well-cooled core, (b) to determine when core uncovery would occur given equipment failure, and (c) to characterize the behavior of the vessel thermal-hydraulics during the station blackout transients (in part as the plant operator would see it). These items are discussed in the paper. Conclusions and observations specific to the station blackout are presented.
ITER Port Interspace Pressure Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, Juan J; Van Hove, Walter A
The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between the VV closure lid and the port plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a Tokamak Cooling Water System (TCWS) pipe carrying high-temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 °C) and a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 °C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.
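A back-of-the-envelope check on such a blowdown is the flashing fraction of the released water; the sketch below performs a rough isenthalpic flash with approximate average properties and an assumed back pressure, and is in no way a substitute for the RELAP5-3D calculation.

```python
# Rough flash fraction for hot pressurized water released to a lower pressure:
# x ~ cp * (T_initial - T_sat,back) / h_fg, with approximate properties.
cp_liquid = 4600.0  # J/(kg K), rough mean liquid specific heat over the range
h_fg = 2.2e6        # J/kg, rough latent heat at the assumed back pressure
T_initial = 523.0   # K, baking-regime water temperature (250 C)
T_sat_back = 385.0  # K, assumed saturation temperature at the port back pressure

x_flash = cp_liquid * (T_initial - T_sat_back) / h_fg
print(f"flashed steam fraction ~ {x_flash:.2f} "
      f"(~{100 * x_flash:.0f}% of the released water)")
```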
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dionne, B.; Tzanos, C. P.
To support the safety analyses required for the conversion of the Belgian Reactor 2 (BR2) from highly-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, the simulation of a number of loss-of-flow tests, with or without loss of pressure, has been undertaken. These tests were performed at BR2 in 1963 and used instrumented fuel assemblies (FAs) with thermocouples (TCs) embedded in the cladding as well as probes to measure the FA power on the basis of the coolant temperature rise. The availability of experimental data for these tests offers an opportunity to better establish the credibility of the RELAP5-3D model and methodology used in the conversion analysis. In order to support the HEU to LEU conversion safety analyses of the BR2 reactor, RELAP simulations of a number of loss-of-flow/loss-of-pressure tests have been undertaken. Preliminary analyses showed that the conservative power distributions used historically in the BR2 RELAP model resulted in a significant overestimation of the peak cladding temperature during the transient. Therefore, it was concluded that better estimates of the steady-state and decay power distributions were needed to accurately predict the cladding temperatures measured during the tests and establish the credibility of the RELAP model and methodology. The new approach ('best estimate' methodology) uses the MCNP5, ORIGEN-2 and BERYL codes to obtain steady-state and decay power distributions for the BR2 core during the tests A/400/1, C/600/3 and F/400/1. This methodology can be easily extended to simulate any BR2 core configuration. Comparisons with measured peak cladding temperatures showed much better agreement when power distributions obtained with the new methodology are used.
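For orientation, decay power shortly after shutdown can be scoped with a commonly quoted Way-Wigner-type fit, as sketched below; this is only a rough stand-in for the MCNP5/ORIGEN-2 based best-estimate decay power distributions described above, and the operating time is assumed.

```python
# Way-Wigner-type decay heat estimate: fraction of pre-shutdown fission power
# as a function of time after scram, for an assumed operating period.
def decay_power_fraction(t_after_shutdown_s, operating_time_s):
    """Approximate P(t)/P0 for t seconds after shutdown."""
    return 0.066 * (t_after_shutdown_s ** -0.2
                    - (t_after_shutdown_s + operating_time_s) ** -0.2)

operating_time = 20.0 * 24 * 3600.0  # assumed 20-day operating cycle, s
for t in (1.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:6.0f} s: P/P0 ~ {decay_power_fraction(t, operating_time):.4f}")
```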
ISP33 standard problem on the PACTEL facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purhonen, H.; Kouhia, J.; Kalli, H.
ISP33 is the first OECD/NEA/CSNI standard problem related to VVER-type pressurized water reactors. The reference reactor of the PACTEL test facility, which was used to carry out the ISP33 experiment, is the VVER-440 reactor, two of which are located near the Finnish city of Loviisa. The objective of the ISP33 test was to study the natural circulation behaviour of VVER-440 reactors at different coolant inventories. Natural circulation was considered a suitable phenomenon for the first VVER-related ISP to focus on because of its importance in most accidents and transients. The natural circulation behaviour was expected to differ from that of Western-type PWRs as a result of the horizontal steam generators and the hot leg loop seals. This ISP was conducted as a blind problem. The experiment was started at full coolant inventory. Single-phase natural circulation transported the energy from the core to the steam generators. The inventory was then reduced stepwise at about 900 s intervals by draining 60 kg each time from the bottom of the downcomer. The core power was about 3.7% of the nominal value. The test was terminated after the cladding temperatures began to rise. The ATHLET, CATHARE, RELAP5 (MODs 3, 2.5 and 2), RELAP4/MOD6, DINAMIKA and TECH-M4 codes were used in the 21 pretest and 20 posttest calculations submitted for ISP33.
Thermal-hydraulic modeling needs for passive reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, J.M.
1997-07-01
The U.S. Nuclear Regulatory Commission has received an application for design certification from the Westinghouse Electric Corporation for an Advanced Light Water Reactor design known as the AP600. As part of the design certification process, the USNRC uses its thermal-hydraulic system analysis codes to independently audit the vendor calculations. The focus of this effort has been the small break LOCA transients that rely upon the passive safety features of the design to depressurize the primary system sufficiently so that gravity driven injection can provide a stable source for long term cooling. Of course, large break LOCAs have also been considered, but as the involved phenomena do not appear to be appreciably different from those of current plants, they are not discussed in this paper. Although the SBLOCA scenario does not appear to threaten core coolability - indeed, heatup is not even expected to occur - there have been concerns as to the performance of the passive safety systems. For example, the passive systems drive flows with small heads, consequently requiring more precision in the analysis for passive plants than for current plants with active systems. For the analysis of SBLOCAs and operating transients, the USNRC uses the RELAP5 thermal-hydraulic system analysis code. To assure the applicability of RELAP5 to the analysis of these transients for the AP600 design, a four-year program of code development and assessment has been undertaken.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
Assessment and Application of the ROSE Code for Reactor Outage Thermal-Hydraulic and Safety Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Thomas K.S.; Ko, F.-K.; Dai, L.-C
The currently available tools, such as RELAP5, RETRAN, and others, cannot easily and correctly perform the task of analyzing system behavior during plant outages. Therefore, a medium-sized program aimed at reactor outage simulation and evaluation, such as midloop operation (MLO) with loss of residual heat removal (RHR), has been developed. Important thermal-hydraulic processes involved during MLO with loss of RHR can be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. A two-region approach with a modified two-fluid model has been adopted as the theoretical basis of the ROSE code. To verify the analytical model in the first step, posttest calculations against the integral midloop experiments with loss of RHR have been performed. The excellent simulation capability of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility test data is demonstrated. To further mature the ROSE code in simulating a full-sized pressurized water reactor, assessment against the WGOTHIC code and the Maanshan momentary-loss-of-RHR event has been undertaken. The successfully assessed ROSE code is then applied to evaluate the abnormal operation procedure (AOP) with loss of RHR during MLO (AOP 537.4) for the Maanshan plant. The ROSE code has also been successfully transplanted into the Maanshan training simulator to support operator training. How the simulator was upgraded with the ROSE code for MLO will be presented in future work.
Status of thermalhydraulic modelling and assessment: Open issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestion, D.; Barre, F.
1997-07-01
This paper presents the status of the physical modelling in the present codes used for nuclear reactor thermalhydraulics (TRAC, RELAP5, CATHARE, ATHLET, ...) and attempts to list the unresolved or partially resolved issues. First, the capabilities and limitations of the present codes are presented. They are mainly known from a synthesis of the assessment calculations performed for both separate effect tests and integral effect tests. It is also instructive to list all the assumptions and simplifications made in establishing the system of equations and the constitutive relations. Many of the present limitations are associated with physical situations where these assumptions are not valid. Recommendations are then proposed to extend the capabilities of these codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleicher, Frederick; Ortensi, Javier; DeHart, Mark
Accurate calculation of desired quantities to predict fuel behavior requires the solution of interlinked equations representing different physics. Traditional fuel performance codes often rely on internal empirical models for the pin power density and a simplified boundary condition on the cladding edge. These simplifications are made because of the difficulty of coupling applications or codes on differing domains and mapping the required data. To demonstrate an approach closer to first principles, the neutronics application Rattlesnake and the thermal hydraulics application RELAP-7 were coupled to the fuels performance application BISON under the master application MAMMOTH. A single fuel pin was modeled based on the dimensions of a Westinghouse 17x17 fuel rod. The simulation consisted of a depletion period of 1343 days, roughly equal to three full operating cycles, followed by a station blackout (SBO) event. The fuel rod was depleted for 1343 days at a near-constant total power loading of 65.81 kW. After 1343 days the fission power was reduced to zero (simulating a reactor shutdown). Decay heat calculations provided the time-varying energy source after this time. For this problem, Rattlesnake, BISON, and RELAP-7 are coupled under MAMMOTH in a split operator approach. Each system solves its physics on a separate mesh and, for RELAP-7 and BISON, on only a subset of the full problem domain. Rattlesnake solves the neutronics over the whole domain, which includes the fuel, cladding, gaps, water, and top and bottom rod holders. BISON is applied to the fuel and cladding with a 2D axisymmetric domain, and RELAP-7 is applied to the flow of the circular outer water channel with a set of 1D flow equations. The mesh on the Rattlesnake side can either be 3D (for low order transport) or 2D (for diffusion). BISON has a matching ring structure mesh for the fuel so both the power density and local burnup are copied accurately from Rattlesnake. At each depletion time step, Rattlesnake calculates a power density, fission density rate, burnup distribution, and fast flux based on the current water density and fuel temperature. These are then mapped to the BISON mesh for a fuels performance solve. BISON calculates the fuel temperature and cladding surface temperature based upon the current power density and bulk fluid temperature. RELAP-7 then calculates the fluid temperature, water density fraction, and water phase velocity based upon the cladding surface temperature. The fuel temperature and the fluid density are then passed back to Rattlesnake for another neutronics calculation. Six Picard or fixed-point style iterations are performed in this manner to obtain consistent, tightly coupled, and stable results. For this paper, a set of results from the detailed calculation is provided both during depletion and during the SBO event. We demonstrate that a detailed calculation closer to first principles can be done under MAMMOTH between different applications on differing domains.
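As a rough illustration of the split-operator Picard coupling described above, the sketch below iterates three stand-in single-physics solves that exchange fields on each pass. The solver functions are simple made-up surrogates, not Rattlesnake, BISON, or RELAP-7, and the feedback coefficients are arbitrary; only the iteration pattern (a fixed number of fixed-point passes per step) mirrors the paper.

```python
# Minimal sketch: split-operator (Picard) coupling of three toy physics solves.
# All models and coefficients below are invented stand-ins for illustration only.

def neutronics_solve(fuel_temp, coolant_density):
    # toy power density [W/m^3] with negative fuel-temperature and density feedback
    return 3.0e8 * (1.0 - 2.0e-5 * (fuel_temp - 900.0)) * (coolant_density / 700.0)

def fuel_solve(power_density, coolant_temp):
    # toy fuel temperature [K] from power density and bulk coolant temperature
    return coolant_temp + power_density / 1.0e6

def thermal_hydraulics_solve(clad_temp):
    # toy coolant temperature [K] and density [kg/m^3] from cladding temperature
    coolant_temp = 560.0 + 0.05 * (clad_temp - 560.0)
    density = 740.0 - 0.5 * (coolant_temp - 560.0)
    return coolant_temp, density

fuel_temp, coolant_temp, density = 900.0, 560.0, 700.0
for it in range(6):                     # six fixed-point passes, as in the paper
    power = neutronics_solve(fuel_temp, density)
    fuel_temp = fuel_solve(power, coolant_temp)
    coolant_temp, density = thermal_hydraulics_solve(fuel_temp)
    print(f"pass {it + 1}: power = {power:.3e} W/m^3, T_fuel = {fuel_temp:.1f} K")
```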
Thermal-hydraulic analysis of N Reactor graphite and shield cooling system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Low, J.O.; Schmitt, B.E.
1988-02-01
A series of bounding (worst-case) calculations was performed using a detailed hydrodynamic RELAP5 model of the N Reactor graphite and shield cooling system (GSCS). These calculations were specifically aimed at answering issues raised by the Westinghouse Independent Safety Review (WISR) committee. These questions address the operability of the GSCS during a worst-case degraded-core accident that requires the GSCS to mitigate the consequences of the accident. An accident scenario previously developed was designated as the hydrogen-mitigation design-basis accident (HMDBA). Previous HMDBA heat transfer analysis, using the TRUMP-BD code, was used to define the thermal boundary conditions to which the GSCS may be exposed. These TRUMP/HMDBA analysis results were used to define the bounding operating conditions of the GSCS during the course of an HMDBA transient. Nominal and degraded GSCS scenarios were investigated using RELAP5 within or at the bounds of the HMDBA transient. 10 refs., 42 figs., 10 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Alfonsi; C. Rabiti; D. Mandelli
The Reactor Analysis and Virtual control ENviroment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities: (1) derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/control in the phase space; (2) perform both Monte-Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and (3) facilitate input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.
Design of an Experimental Facility for Passive Heat Removal in Advanced Nuclear Reactors
NASA Astrophysics Data System (ADS)
Bersano, Andrea
With reference to innovative heat exchangers to be used in passive safety systems of Generation IV nuclear reactors and Small Modular Reactors, it is necessary to study natural circulation and the efficiency of heat removal systems. Especially in safety systems, such as the decay heat removal systems of many reactors, the use of passive components is increasing in order to improve availability and reliability during possible accident scenarios, reducing the need for human intervention. Many of these systems are based on natural circulation, so they require intensive analysis due to the possible instability of the related phenomena. The aim of this thesis work is to build a scaled facility which can reproduce, in a simplified way, the decay heat removal system (DHR2) of the lead-cooled fast reactor ALFRED and, in particular, the bayonet heat exchanger, which transfers heat from lead to water. Given the thermal power to be removed, the natural circulation flow rate and the pressure drops will be studied both experimentally and numerically using the code RELAP5-3D. The first phase of preliminary analysis and design includes the calculations to size the heat source and heat sink, the choice of materials and components, and CAD drawings of the facility. After that, a numerical study is performed using the thermal-hydraulic code RELAP5-3D in order to simulate the behavior of the system. The purpose is to run pretest simulations of the facility to optimize the dimensioning by setting the operating parameters (temperature, pressure, etc.) and to choose the most adequate measurement devices. The model of the system is continually developed to better simulate the system studied. Particular attention is dedicated to the control logic of the system to obtain acceptable results. The initial experimental test phase consists of cold zero power tests of the facility in order to characterize and calibrate the pressure drops. In future work the experimental results will be compared to the values predicted by the system code and differences will be discussed, with the ultimate goal of qualifying RELAP5-3D for the analysis of decay heat removal systems in natural circulation. The numerical data will also be used to understand the key parameters related to heat transfer in natural circulation and to optimize the operation of the system.
Interface requirements for coupling a containment code to reactor system thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baratta, A.J.
1997-07-01
To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Transients and accidents such as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
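The per-time-step data exchange described above can be sketched as follows. Both models are crude stand-ins (an ideal-gas containment and an ad-hoc break-flow relation), not RELAP/RETRAN or CONTAIN/CONTEMPT; the point is only the linkage pattern in which the system side sends the break flow and receives the containment pressure back as its boundary condition for the next step.

```python
# Minimal sketch: explicit, per-time-step coupling of a "system" model and a
# "containment" model. All relations and numbers are illustrative stand-ins.

R_STEAM = 461.5          # J/(kg K), specific gas constant used for the toy containment

def break_flow(p_system, p_containment):
    # toy break flow [kg/s]; "choked" (back-pressure independent) below a pressure ratio
    if p_containment / p_system < 0.55:
        return 0.04 * p_system / 1.0e5
    return 0.04 * (p_system - p_containment) / 1.0e5

p_sys, p_cont = 7.0e6, 1.0e5                          # Pa
V_cont, T_cont = 5.0e4, 330.0                         # m^3, K
m_cont = p_cont * V_cont / (R_STEAM * T_cont)         # initial containment gas mass [kg]
dt = 0.5                                              # s
for step in range(200):
    w = break_flow(p_sys, p_cont)                     # system-side calculation
    p_sys = max(p_sys - 2.0e3 * w * dt, p_cont)       # toy primary depressurization
    m_cont += w * dt                                  # containment-side mass balance
    p_cont = m_cont * R_STEAM * T_cont / V_cont       # feedback for the next step
print(f"final: p_sys = {p_sys:.3e} Pa, p_cont = {p_cont:.3e} Pa")
```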
Chapman, Neil R; Webster, Gill A; Gillespie, Peter J; Wilson, Brian J; Crouch, Dorothy H; Perkins, Neil D
2002-01-01
Members of both Myc and nuclear factor kappaB (NF-kappaB) families of transcription factors are found overexpressed or inappropriately activated in many forms of human cancer. Furthermore, NF-kappaB can induce c-Myc gene expression, suggesting that the activities of these factors are functionally linked. We have discovered that both c-Myc and v-Myc can induce a previously undescribed, truncated form of the RelA(p65) NF-kappaB subunit, RelA(p37). RelA(p37) encodes the N-terminal DNA binding and dimerization domain of RelA(p65) and would be expected to function as a trans-dominant negative inhibitor of NF-kappaB. Surprisingly, we found that RelA(p37) no longer binds to kappaB elements. This result is explained, however, by the observation that RelA(p37), but not RelA(p65), forms a high-molecular-mass complex with c-Myc. These results demonstrate a previously unknown functional and physical interaction between RelA and c-Myc with many significant implications for our understanding of the role that both proteins play in the molecular events underlying tumourigenesis. PMID:12027803
Current and anticipated uses of thermal hydraulic codes in Korea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyung-Doo; Chang, Won-Pyo
1997-07-01
In Korea, the current uses of thermal hydraulic codes fall into three areas. The first application is in designing both nuclear fuel and the NSSS. The codes have usually been introduced through technology transfer programs agreed between KAERI and the foreign vendors. Another area is the support of plant operations and licensing by the utility. The third category is research. In this area, assessments and some applications to safety issue resolutions are the major activities, using best estimate thermal hydraulic codes such as RELAP5/MOD3 and CATHARE2. Recently, KEPCO plans to couple thermal hydraulic codes with a neutronics code for the design of an evolutionary type reactor by 2004. KAERI also plans to develop its own best estimate thermal hydraulic code; however, its application range is different from that of the code KEPCO is developing. Considering these activities, it is anticipated that use of a best estimate thermal hydraulic analysis code developed in Korea may be possible in the area of safety evaluation within 10 years.
NASA Astrophysics Data System (ADS)
Bertani, C.; Falcone, N.; Bersano, A.; Caramello, M.; Matsushita, T.; De Salve, M.; Panella, B.
2017-11-01
High safety and reliability of advanced nuclear reactors, Generation IV and Small Modular Reactors (SMR), have a crucial role in the acceptance of these new plant designs. Among all the possible safety systems, particular efforts are dedicated to the study of passive systems because they rely on simple physical principles like natural circulation, without the need of an external energy source to operate. Taking inspiration from the second Decay Heat Removal system (DHR2) of ALFRED, the European Generation IV demonstrator of the fast lead-cooled reactor, an experimental facility has been built at the Energy Department of Politecnico di Torino (PROPHET facility) to study single and two-phase flow natural circulation. The facility behavior is simulated using the thermal-hydraulic system code RELAP5-3D, which is widely used in nuclear applications. In this paper, the effect of the initial water inventory on natural circulation is analyzed, together with the experimental time behaviors of temperatures and pressures. The experimental matrix ranges between 69% and 93% of water inventory; the influence of the opposing effects of the increase of the volume available for expansion and the pressure rise due to phase change is discussed. Simulations of the experimental tests are carried out using a 1D model at constant heat power and fixed liquid and air mass; the code predictions are compared with experimental results. Two typical responses are observed: subcooled or two-phase saturated circulation. The steady state pressure is a strong function of the liquid and air mass inventory. The numerical results show that, at low initial liquid mass inventory, the natural circulation is not stable but pulsating.
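For readers unfamiliar with the principle such facilities exploit, a textbook single-phase balance (not the RELAP5-3D model of PROPHET) gives a first estimate of the natural circulation flow rate by equating the buoyancy head to the loop friction:

```latex
% Hedged sketch of a standard single-phase natural circulation balance,
% not the two-phase model used in the paper.
\[
  \rho \, g \, \beta \, \Delta T \, H
  \;=\; \frac{K}{2\,\rho}\,\frac{\dot{m}^{2}}{A^{2}}
  \quad\Longrightarrow\quad
  \dot{m} \;=\; A \sqrt{\frac{2\,\rho^{2} g\,\beta\,\Delta T\, H}{K}} ,
\]
% where $\Delta T$ is the heater-to-cooler temperature rise, $H$ the elevation
% difference between thermal centres, $\beta$ the thermal expansion coefficient,
% $A$ a reference flow area, and $K$ the loop resistance lumped at that area.
```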
Implicit time-integration method for simultaneous solution of a coupled non-linear system
NASA Astrophysics Data System (ADS)
Watson, Justin Kyle
Historically, large physical problems have been divided into smaller problems based on the physics involved. This is no different in reactor safety analysis. The problem of analyzing a nuclear reactor for design basis accidents is handled by a handful of computer codes, each solving a portion of the problem. The reactor thermal hydraulic response to an event is determined using a system code like the TRAC/RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like the Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to solve the overall problems of nuclear reactor design basis accidents. Traditionally, each of these codes operates independently of the others, using only the global results from one calculation as boundary conditions to another. Industry's drive to uprate power for reactors has motivated analysts to move from a conservative approach to design basis accidents towards a best estimate method. To achieve a best estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature. During a calculation time-step, data are passed between the two codes. The individual codes solve their portion of the calculation and converge to a solution before the calculation is allowed to proceed to the next time-step. This thesis presents a fully implicit method of simultaneously solving the neutron balance equations, heat conduction equations, and the constitutive fluid dynamics equations. It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems. The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations, and the thermal hydraulic equations, which are coupled to form a fully implicit nonlinear system of equations. The coupling of separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; however, implementing them in an implicit manner and solving the system simultaneously is, and the application to reactor safety codes is new and has not been done with thermal hydraulics and neutronics codes on realistic applications in the past. The coupling technique described in this thesis is applicable to other similar coupled thermal hydraulic and core physics reactor safety codes. This technique is demonstrated using coupled input decks to show that the system is solved correctly, and it is then verified using two derivative test problems based on international benchmark problems: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
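A minimal sketch of the simultaneous-solution idea, using a toy three-equation system in place of the neutron balance, heat conduction, and fluid dynamics equations: one residual vector is assembled for all unknowns and Newton's method with a finite-difference Jacobian updates them together. The equations and coefficients are invented for illustration and are not taken from TRACE or PARCS.

```python
# Minimal sketch: fully implicit (simultaneous) Newton solve of a coupled toy system,
# in contrast to the sequential per-physics convergence described above.

import numpy as np

def residual(x):
    """Coupled toy system: x = [power P, fuel temperature T, coolant enthalpy h]."""
    P, T, h = x
    r = np.empty(3)
    r[0] = P - 100.0 * (1.0 - 1.0e-4 * (T - 900.0))   # "neutronics": power with feedback
    r[1] = T - (600.0 + 3.0 * P)                      # "heat conduction": T from power
    r[2] = h - (1.2e6 + 2.0e4 * P)                    # "thermal-hydraulics": enthalpy rise
    return r

def newton(x0, tol=1e-10, max_iter=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((3, 3))
        for j in range(3):                            # finite-difference Jacobian, column by column
            eps = 1e-6 * max(1.0, abs(x[j]))
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        x = x - np.linalg.solve(J, r)                 # simultaneous update of all unknowns
    return x

print(newton([100.0, 900.0, 1.5e6]))                  # converges to P=100, T=900, h=3.2e6
```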
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Yong Joon; Yoo, Jun Soo; Smith, Curtis Lee
2015-09-01
This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) for the main physics and numerical methods of RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7.
BISON Modeling of Reactivity-Initiated Accident Experiments in a Static Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folsom, Charles P.; Jensen, Colby B.; Williamson, Richard L.
2016-09-01
In conjunction with the restart of the TREAT reactor and the design of test vehicles, modeling and simulation efforts are being used to model the response of Accident Tolerant Fuel (ATF) concepts under reactivity insertion accident (RIA) conditions. The purpose of this work is to model a baseline case of a 10 cm long UO2-Zircaloy fuel rodlet using BISON and RELAP5 over a range of energy depositions and with varying reactor power pulse widths. The results show the effect of varying the pulse width and energy deposition on both thermal and mechanical parameters that are important for predicting failure of the fuel rodlet. The combined BISON/RELAP5 model captures coupled thermal and mechanical effects on the fuel-to-cladding gap conductance, cladding-to-coolant heat transfer coefficient, and water temperature and pressure that would not be possible with each code individually. These combined effects allow for a more accurate modeling of the thermal and mechanical response of the fuel rodlet and the thermal-hydraulics of the test vehicle.
BWR station blackout: A RISMC analysis using RAVEN and RELAP5-3D
Mandelli, D.; Smith, C.; Riley, T.; ...
2016-01-01
The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated from these plants via power uprates and improved operations. In order to evaluate the impact of these factors on the safety of the plant, the Risk-Informed Safety Margin Characterization (RISMC) project aims to provide insights to decision makers through a series of simulations of the plant dynamics for different initial conditions and accident scenarios. This paper presents a case study to show the capabilities of the RISMC methodology to assess the impact of a power uprate on a Boiling Water Reactor system during a Station Black-Out accident scenario. We employ a system simulator code, RELAP5-3D, coupled with RAVEN, which performs the stochastic analysis. Our analysis is performed by: 1) sampling values of a set of parameters from the uncertainty space of interest, 2) simulating the system behavior for that specific set of parameter values, and 3) analyzing the outcomes from the set of simulation runs.
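A minimal sketch of the three-step workflow listed above, with a cheap analytic stand-in in place of RELAP5-3D and made-up parameter distributions (power uprate fraction, battery life, AC recovery time); it only illustrates sampling the uncertainty space, running one simulation per sample, and analyzing the outcomes against a limit.

```python
# Minimal sketch: (1) sample uncertain parameters, (2) "simulate" each sample with a
# toy station-blackout model, (3) analyze the outcomes. Distributions and model are
# hypothetical stand-ins, not the RISMC/RELAP5-3D analysis itself.

import random

def simulate_sbo(power_uprate_frac, battery_life_hr, recovery_time_hr):
    # toy peak cladding temperature [K]: heatup only after DC power is exhausted
    heatup_hr = max(recovery_time_hr - battery_life_hr, 0.0)
    return 600.0 + 900.0 * heatup_hr * (1.0 + power_uprate_frac)

random.seed(1)
N, limit, exceedances = 10_000, 1477.0, 0
for _ in range(N):
    uprate = random.uniform(0.0, 0.2)             # 0-20 % power uprate
    battery = random.uniform(4.0, 8.0)            # hours of DC power
    recovery = random.lognormvariate(1.6, 0.5)    # hours to restore AC power
    if simulate_sbo(uprate, battery, recovery) > limit:
        exceedances += 1
print(f"estimated P(PCT > {limit} K) = {exceedances / N:.4f}")
```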
Supplemental Thermal-Hydraulic Transient Analyses of BR2 in Support of Conversion to LEU Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Licht, J.; Dionne, B.; Sikik, E.
2016-01-01
Belgian Reactor 2 (BR2) is a research and test reactor located in Mol, Belgium, and is primarily used for radioisotope production and materials testing. The Materials Management and Minimization (M3) Reactor Conversion Program of the National Nuclear Security Administration (NNSA) is supporting the conversion of the BR2 reactor from Highly Enriched Uranium (HEU) fuel to Low Enriched Uranium (LEU) fuel. The RELAP5/Mod 3.3 code has been used to perform transient thermal-hydraulic safety analyses of the BR2 reactor to support reactor conversion. A RELAP5 model of BR2 has been validated against select transient BR2 reactor experiments performed in 1963 by showing agreement with measured cladding temperatures. Following the validation, the RELAP5 model was then updated to represent the current use of the reactor, taking into account core configuration, neutronic parameters, trip settings, component changes, etc. Simulations of the 1963 experiments were repeated with this updated model to re-evaluate the boiling risks associated with the currently allowed maximum heat flux limit of 470 W/cm² and the temporary heat flux limit of 600 W/cm². This document provides analysis of additional transient simulations that are required as part of a modern BR2 safety analysis report (SAR). The additional simulations included in this report are the effect of pool temperature, reduced steady-state flow rate, in-pool loss of coolant accidents, and loss of external cooling. The simulations described in this document have been performed for both an HEU- and an LEU-fueled core.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Morlang, G.M.
1996-06-01
The use of neutron radiography for visualization of fluid flow through flow visualization modules has been very successful. Current experiments at the Penn State Breazeale Reactor serve to verify the mixing and transport of soluble boron under natural flow conditions as would be experienced in a pressurized water reactor. Different flow geometries have been modeled, including holes, slots, and baffles. Flow modules are constructed of aluminum box material 1 1/2 inches by 4 inches in varying lengths. An experimental flow system was built which pumps fluid to a head tank; natural circulation flow occurs from the head tank through the flow visualization module to be radiographed. The entire flow system is mounted on a portable assembly to allow placement of the flow visualization module in front of the neutron beam port. A neutron-transparent Fluorinert fluid is used to simulate water at different densities. Boron is modeled by gadolinium oxide powder as a tracer element, which is placed in a mixing assembly and injected into the system by a remotely operated electric valve once the reactor is at power. The entire sequence is recorded on real-time video. Still photographs are made frame-by-frame from the video tape. Computers are used to digitally enhance the video and still photographs. The data obtained from the enhancement will be used for verification of simple geometry predictions using the TRAC and RELAP thermal-hydraulic codes. A detailed model of a reactor vessel inlet plenum, downcomer region, flow distribution area, and core inlet is being constructed to model the AP600 plenum. Successive radiography experiments of each section of the model under identical conditions will provide a complete vessel/core model for comparison with the thermal-hydraulic codes.
Domínguez-Acosta, O; Vega, L; Estrada-Muñiz, E; Rodríguez, M S; Gonzalez, F J; Elizondo, G
2018-06-21
Several studies have identified the aryl hydrocarbon receptor (AhR) as a negative regulator of the innate and adaptive immune responses. However, the molecular mechanisms by which this transcription factor exerts such modulatory effects are not well understood. Interaction between AhR and RelA/p65 has previously been reported. RelA/p65 is the major NFκB subunit that plays a critical role in immune responses to infection. The aim of the present study was to determine whether the activation of AhR disrupted RelA/p65 signaling in mouse peritoneal macrophages by decreasing its half-life. The data demonstrate that the activation of AhR by TCDD and β-naphthoflavone (β-NF) decreased protein levels of the pro-inflammatory cytokines TNF-α, IL-6 and IL-12 after macrophage activation with LPS/IFNγ. In an AhR-dependent manner, TCDD treatment induces RelA/p65 ubiquitination and proteasomal degradation, an effect dependent on AhR transcriptional activity. Activation of AhR also induced lysosome-like membrane structure formation in mouse peritoneal macrophages and RelA/p65 lysosome-dependent degradation. In conclusion, these results demonstrate that AhR activation promotes RelA/p65 protein degradation through the ubiquitin proteasome system, as well as through the lysosomes, resulting in decreased pro-inflammatory cytokine levels in mouse peritoneal macrophages. Copyright © 2018. Published by Elsevier Inc.
Summary of papers on current and anticipated uses of thermal-hydraulic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, R.
1997-07-01
The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, and fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; and build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).
Posttest RELAP4 analysis of LOFT experiment L1-3A
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, J.R.; Holmstrom, H.L.O.
This report presents selected results of posttest RELAP4 modeling of LOFT loss-of-coolant experiment L1-3A, a double-ended isothermal cold leg break with lower plenum emergency core coolant injection. Comparisons are presented between the pretest prediction, the posttest analysis, and the experimental data. It is concluded that pressurizer modeling is important for accurately predicting system behavior during the initial portion of saturated blowdown. Using measured initial conditions rather than nominal specified initial conditions did not influence the system model results significantly. Using finer nodalization in the reactor vessel improved the prediction of the system pressure history by minimizing steam condensation effects. Unequal steam condensation between the downcomer and core volumes appears to cause the manometer oscillations observed in both the pretest and posttest RELAP4 analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobromir Panayotov; Andrew Grief; Brad J. Merrill
'Fusion for Energy' (F4E) develops, designs, and implements the European Test Blanket Systems (TBS) in ITER - Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of the TBSs in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses has been established under the F4E contract on TBS safety analyses. F4E technical requirements and AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials, and phenomena and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and TBS models. Thus the limitations of the codes are identified and possible solutions to be built into the models are proposed; these include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. The code selection and the issue of the accident analysis specifications conclude this second step. The breeding blanket and ancillary system models are then built. In this work, the challenges met and the solutions used in the development of both MELCOR and RELAP5 models of the HCLL and HCPB TBSs are shared. Next, the developed models are qualified by comparison with finite element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analysis of a specific scenario. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. A detailed description of each phase and its results, as well as the methodology's application to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.
Establishment and assessment of code scaling capability
NASA Astrophysics Data System (ADS)
Lim, Jaehyok
In this thesis, a method for using RELAP5/MOD3.3 (Patch03) code models is described to establish and assess the code scaling capability and to corroborate the scaling methodology that has been used in the design of the Purdue University Multi-Dimensional Integral Test Assembly for ESBWR applications (PUMA-E) facility. It was sponsored by the United States Nuclear Regulatory Commission (USNRC) under the program "PUMA ESBWR Tests". PUMA-E facility was built for the USNRC to obtain data on the performance of the passive safety systems of the General Electric (GE) Nuclear Energy Economic Simplified Boiling Water Reactor (ESBWR). Similarities between the prototype plant and the scaled-down test facility were investigated for a Gravity-Driven Cooling System (GDCS) Drain Line Break (GDLB). This thesis presents the results of the GDLB test, i.e., the GDLB test with one Isolation Condenser System (ICS) unit disabled. The test is a hypothetical multi-failure small break loss of coolant (SB LOCA) accident scenario in the ESBWR. The test results indicated that the blow-down phase, Automatic Depressurization System (ADS) actuation, and GDCS injection processes occurred as expected. The GDCS as an emergency core cooling system provided adequate supply of water to keep the Reactor Pressure Vessel (RPV) coolant level well above the Top of Active Fuel (TAF) during the entire GDLB transient. The long-term cooling phase, which is governed by the Passive Containment Cooling System (PCCS) condensation, kept the reactor containment system that is composed of Drywell (DW) and Wetwell (WW) below the design pressure of 414 kPa (60 psia). In addition, the ICS continued participating in heat removal during the long-term cooling phase. A general Code Scaling, Applicability, and Uncertainty (CSAU) evaluation approach was discussed in detail relative to safety analyses of Light Water Reactor (LWR). The major components of the CSAU methodology that were highlighted particularly focused on the scaling issues of experiments and models and their applicability to the nuclear power plant transient and accidents. The major thermal-hydraulic phenomena to be analyzed were identified and the predictive models adopted in RELAP5/MOD3.3 (Patch03) code were briefly reviewed.
Analysis of the Space Propulsion System Problem Using RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diego Mandelli; Curtis Smith; Cristian Rabiti
This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENviroment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte-Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN can also interface with several numerical codes such as RELAP5 and RELAP-7 and with ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator has been developed in the Python language and interfaced to RAVEN. This simulator fully models both deterministic behavior (e.g., system dynamics and interactions between system components) and stochastic behavior (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte-Carlo). This analysis is carried out to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to derive risk-informed insights, such as the conditions under which different strategies can be followed.
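The stochastic part of the analysis can be sketched as below. The component layout, failure probabilities, and success criterion are hypothetical (the actual ad-hoc simulator interfaced to RAVEN is not reproduced here); the sketch shows only the Monte-Carlo estimation of system reliability with a simple confidence half-width.

```python
# Minimal sketch: Monte-Carlo reliability estimate for a hypothetical redundant
# thruster/distribution-line layout. All data below are made up for illustration.

import random

P_LINE_FAIL = 0.02        # per-mission failure probability of each distribution line
P_THRUSTER_FAIL = 0.05    # per-mission failure probability of each thruster

def mission_success():
    # two distribution lines, each feeding two thrusters; mission needs >= 2 thrusters
    working = 0
    for _line in range(2):
        line_ok = random.random() > P_LINE_FAIL
        for _thruster in range(2):
            if line_ok and random.random() > P_THRUSTER_FAIL:
                working += 1
    return working >= 2

random.seed(7)
N = 100_000
successes = sum(mission_success() for _ in range(N))
p = successes / N
half_width = 1.96 * (p * (1 - p) / N) ** 0.5          # ~95 % confidence half-width
print(f"estimated reliability = {p:.4f} +/- {half_width:.4f}")
```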
NASA Astrophysics Data System (ADS)
Takeda, Takeshi; Maruyama, Yu; Watanabe, Tadashi; Nakamura, Hideo
Experiments simulating PWR intermediate-break loss-of-coolant accidents (IBLOCAs) with a 17% break at the hot leg or cold leg were conducted in the OECD/NEA ROSA-2 Project using the Large Scale Test Facility (LSTF). In the hot leg IBLOCA test, core uncovery started simultaneously with the liquid level drop in the crossover leg downflow-side before loop seal clearing (LSC), which was induced by steam condensation on accumulator coolant injected into the cold leg. Water remained on the upper core plate in the upper plenum due to counter-current flow limiting (CCFL) caused by significant upward steam flow from the core. In the cold leg IBLOCA test, core dryout took place due to a rapid liquid level drop in the core before LSC. Liquid accumulated in the upper plenum, the steam generator (SG) U-tube upflow-side, and the SG inlet plenum before the LSC due to CCFL by high velocity vapor flow, causing an enhanced decrease in the core liquid level. RELAP5/MOD3.2.1.2 post-test analyses of the two LSTF experiments were performed employing the critical flow model in the code with a discharge coefficient of 1.0. In the hot leg IBLOCA case, the cladding surface temperature of the simulated fuel rods was underpredicted due to overprediction of the core liquid level after core uncovery. In the cold leg IBLOCA case, the cladding surface temperature was also underpredicted, due to later core uncovery than in the experiment. These results suggest that the code has remaining problems in properly predicting the primary coolant distribution.
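The critical flow treatment referred to above can be summarized, in hedged textbook form rather than as the exact RELAP5 correlation set, as the choked mass flux scaled by the break area and the discharge coefficient:

```latex
% Hedged sketch of the break-flow relation implied by a discharge coefficient of 1.0.
\[
  \dot{m}_{\mathrm{break}} \;=\; C_d \, A_{\mathrm{break}} \,
  G_{\mathrm{crit}}\!\left(P_0, h_0\right),
  \qquad C_d = 1.0 ,
\]
% where $G_{\mathrm{crit}}$ is the critical (choked) mass flux evaluated by the code's
% critical flow model from the upstream stagnation pressure $P_0$ and enthalpy $h_0$.
```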
An approach to model reactor core nodalization for deterministic safety analysis
NASA Astrophysics Data System (ADS)
Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd
2016-01-01
Adopting a good nodalization strategy is essential to produce an accurate and high quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies from small research reactors (e.g. 250 kW) up to larger research reactors (e.g. 30 MW). As such, this paper aims to discuss the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including UZrH1.6, the stainless steel cladding, and the graphite reflector), have been collected, analyzed, and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made on the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented during the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted as EARTH-M.
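As an illustration of how the channel flow area and hydraulic diameter listed among the geometrical data are typically derived for a nodalized coolant channel, the short sketch below computes them for a single rod in a square lattice cell; the dimensions are made up and are not the actual RTP-TRIGA geometry.

```python
# Minimal sketch: flow area and hydraulic diameter of a unit coolant cell,
# D_h = 4 * A_flow / P_wetted. Dimensions below are hypothetical.

import math

def unit_cell_channel(d_rod_m, pitch_m):
    """Flow area and hydraulic diameter for one rod in a square lattice cell."""
    a_flow = pitch_m**2 - math.pi * d_rod_m**2 / 4.0    # coolant area per rod [m^2]
    p_wetted = math.pi * d_rod_m                         # wetted perimeter (rod surface only) [m]
    return a_flow, 4.0 * a_flow / p_wetted

# hypothetical TRIGA-like numbers, for illustration only
a, dh = unit_cell_channel(d_rod_m=0.0373, pitch_m=0.0450)
print(f"flow area = {a * 1e4:.2f} cm^2, hydraulic diameter = {dh * 100:.2f} cm")
```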
Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Smith, Curtis L.; Alfonsi, Andrea
2015-09-01
The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that, even though computational power is steadily growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
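A minimal sketch of the surrogate-model idea, using synthetic data and a generic quadratic least-squares fit rather than the reduced order techniques implemented in RAVEN: a handful of "expensive" runs train a response surface that is then evaluated many times in place of the simulation code.

```python
# Minimal sketch: fit a cheap response surface to a few expensive-code runs, then
# reuse the surrogate for large-sample analysis. Data and model are synthetic stand-ins.

import numpy as np

rng = np.random.default_rng(0)

def expensive_code(x1, x2):
    # stand-in for a simulation figure of merit, e.g. a peak cladding temperature [K]
    return 800.0 + 120.0 * x1 + 60.0 * x2**2 + 25.0 * x1 * x2

# (1) a small design of computer experiments -> "training" runs
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = np.array([expensive_code(a, b) for a, b in X])

# (2) fit a quadratic response surface by least squares
def basis(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X[:, 0], X[:, 1]), y, rcond=None)

# (3) evaluate the surrogate many times (microseconds per call instead of hours per run)
Xs = rng.uniform(-1.0, 1.0, size=(100_000, 2))
ys = basis(Xs[:, 0], Xs[:, 1]) @ coef
print(f"surrogate mean = {ys.mean():.1f} K, 99th percentile = {np.percentile(ys, 99):.1f} K")
```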
PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, Gerhard
2016-03-01
The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. The paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedbacks on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of the natural convection conditions that dominate the DCC and PCC events. Differences up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring models, and it therefore seems that the additional development and calculation time required for the block model could be worth the gain that can be obtained in spatial resolution.
Fazal, Fabeha; Minhajuddin, Mohd; Bijli, Kaiser M; McGrath, James L; Rahman, Arshad
2007-02-09
Activation of the transcription factor NF-kappaB involves its release from the inhibitory protein IkappaBalpha in the cytoplasm and subsequently, its translocation to the nucleus. Whereas the events responsible for its release have been elucidated, mechanisms regulating the nuclear transport of NF-kappaB remain elusive. We now provide evidence for actin cytoskeleton-dependent and -independent mechanisms of RelA/p65 nuclear transport using the proinflammatory mediators, thrombin and tumor necrosis factor alpha, respectively. We demonstrate that thrombin alters the actin cytoskeleton in endothelial cells and interfering with these alterations, whether by stabilizing or destabilizing the actin filaments, prevents thrombin-induced NF-kappaB activation and consequently, expression of its target gene, ICAM-1. The blockade of NF-kappaB activation occurs downstream of IkappaBalpha degradation and is associated with impaired RelA/p65 nuclear translocation. Importantly, thrombin induces association of RelA/p65 with actin and this interaction is sensitive to stabilization/destabilization of the actin filaments. In parallel studies, stabilizing or destabilizing the actin filaments fails to inhibit RelA/p65 nuclear accumulation and ICAM-1 expression by tumor necrosis factor alpha, consistent with its inability to induce actin filament formation comparable with thrombin. Thus, these studies reveal the existence of actin cytoskeleton-dependent and -independent pathways that may be engaged in a stimulus-specific manner to facilitate RelA/p65 nuclear import and thereby ICAM-1 expression in endothelial cells.
VICTORIA: A mechanistic model for radionuclide behavior in the reactor coolant system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaperow, J.H.; Bixler, N.E.
1996-12-31
VICTORIA is the U.S. Nuclear Regulatory Commission's (NRC's) mechanistic, best-estimate code for analysis of fission product release from the core and subsequent transport in the reactor vessel and reactor coolant system. VICTORIA requires thermal-hydraulic data (i.e., temperatures, pressures, and velocities) as input. In the past, these data have been taken from the results of calculations from thermal-hydraulic codes such as SCDAP/RELAP5, MELCOR, and MAAP. Validation and assessment of VICTORIA 1.0 have been completed. An independent peer review of VICTORIA, directed by Brookhaven National Laboratory and supported by experts in the areas of fuel release, fission product chemistry, and aerosol physics, has been undertaken. This peer review, which will independently assess the code's capabilities, is nearing completion, with the peer review committee's final report expected in December 1996. A limited amount of additional development is expected as a result of the peer review. Following this additional development, the NRC plans to release VICTORIA 1.1 and an updated and improved code manual. Future plans mainly involve use of the code for plant calculations to investigate specific safety issues as they arise. Also, the code will continue to be used in support of the Phebus experiments.
Targeting NF-κB RelA/p65 phosphorylation overcomes RITA resistance.
Bu, Yiwen; Cai, Guoshuai; Shen, Yi; Huang, Chenfei; Zeng, Xi; Cao, Yu; Cai, Chuan; Wang, Yuhong; Huang, Dan; Liao, Duan-Fang; Cao, Deliang
2016-12-28
Inactivation of p53 occurs frequently in various cancers. RITA is a promising anticancer small molecule that dissociates p53-MDM2 interaction, reactivates p53 and induces exclusive apoptosis in cancer cells, but acquired RITA resistance remains a major drawback. This study found that the site-differential phosphorylation of nuclear factor-κB (NF-κB) RelA/p65 creates a barcode for RITA chemosensitivity in cancer cells. In naïve MCF7 and HCT116 cells where RITA triggered vast apoptosis, phosphorylation of RelA/p65 increased at Ser536, but decreased at Ser276 and Ser468; oppositely, in RITA-resistant cells, RelA/p65 phosphorylation decreased at Ser536, but increased at Ser276 and Ser468. A phosphomimetic mutation at Ser536 (p65/S536D) or silencing of endogenous RelA/p65 resensitized the RITA-resistant cells to RITA while the phosphomimetic mutant at Ser276 (p65/S276D) led to RITA resistance of naïve cells. In mouse xenografts, intratumoral delivery of the phosphomimetic p65/S536D mutant increased the antitumor activity of RITA. Furthermore, in the RITA-resistant cells ATP-binding cassette transporter ABCC6 was upregulated, and silencing of ABCC6 expression in these cells restored RITA sensitivity. In the naïve cells, ABCC6 delivery led to RITA resistance and blockage of p65/S536D mutant-induced RITA sensitivity. Taken together, these data suggest that the site-differential phosphorylation of RelA/p65 modulates RITA sensitivity in cancer cells, which may provide an avenue to manipulate RITA resistance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
Investigation on the Core Bypass Flow in a Very High Temperature Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin
2013-10-22
Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high-temperature reactor (VHTR). Designers will attempt to configure the core geometry so the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility. The team will attempt to obtain full-field temperature distributions using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information. These experimental data are urgently needed for validation of the CFD codes. The following are the project tasks: • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow. Wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested. • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy. • Develop state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling. • Develop models to be used in systems thermal-hydraulics codes to account for and estimate the bypass flows. These computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET. The actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are on the high side, the quantity of cooling flow through the core may be considerably less than the nominal design value, causing some regions of the core to operate at temperatures in excess of the design values. These effects are postulated to lead to localized hot regions in the core that must be considered when evaluating VHTR operational and accident scenarios.
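As a rough illustration of why the bypass fraction matters, the sketch below applies a simple steady-state energy balance to estimate how the mean core outlet temperature rises as more coolant bypasses the fuel. The power, total flow, inlet temperature, and helium heat capacity are assumed, illustrative values, not data from the project described above.

```python
# Steady-state energy balance: core outlet temperature vs. bypass fraction.
# Power, flow, inlet temperature, and helium cp are assumed, illustrative values.
CP_HELIUM = 5193.0   # J/(kg K)

def core_outlet_temp_c(q_core_mw, m_dot_total_kg_s, bypass_fraction, t_inlet_c):
    m_dot_core = m_dot_total_kg_s * (1.0 - bypass_fraction)
    return t_inlet_c + q_core_mw * 1.0e6 / (m_dot_core * CP_HELIUM)

for f in (0.08, 0.15, 0.25):   # the Fort St. Vrain range quoted above
    print(f, round(core_outlet_temp_c(600.0, 320.0, f, 490.0), 1))
```

Even this crude balance shows a rise of roughly 100 K in the mean outlet temperature between the low and high ends of the quoted bypass range.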
Bistatic radar cross section of a perfectly conducting rhombus-shaped flat plate
NASA Astrophysics Data System (ADS)
Fenn, Alan J.
1990-05-01
The bistatic radar cross section of a perfectly conducting flat plate that has a rhombus shape (equilateral parallelogram) is investigated. The Ohio State University electromagnetic surface patch code (ESP version 4) is used to compute the theoretical bistatic radar cross section of a 35- x 27-in rhombus plate at 1.3 GHz over the bistatic angles 15 deg to 142 deg. The ESP-4 computer code is a method of moments FORTRAN-77 program which can analyze general configurations of plates and wires. This code has been installed and modified at Lincoln Laboratory on a SUN 3 computer network. Details of the code modifications are described. Comparisons of the method of moments simulations and measurements of the rhombus plate are made. It is shown that the ESP-4 computer code provides a high degree of accuracy in the calculation of copolarized and cross-polarized bistatic radar cross section patterns.
ASR4: A computer code for fitting and processing 4-gage anelastic strain recovery data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
A computer code for analyzing four-gage Anelastic Strain Recovery (ASR) data has been modified for use on a personal computer. This code fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientation directly, and computes stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available. The program is written in FORTRAN, compiled with Ryan-McFarland Version 2.4. Graphics use PLOT88 software by Plotworks, Inc., but the graphics software must be obtained by the user because of licensing restrictions. A version without graphics can also be run. This code is available through the National Energy Software Center (NESC), operated by Argonne National Laboratory. 5 refs., 3 figs.
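The strain-rosette step mentioned above can be illustrated with a standard three-gage (0°/45°/90°) rectangular rosette reduction. This is a generic textbook sketch, not the four-gage ASR4 algorithm or the Warpinski-Teufel viscoelastic fit, and the gage readings are made up.

```python
import math

# Generic 0/45/90 (rectangular) rosette reduction: principal strains and the
# principal direction measured from gage 'a'. Gage readings below are made up.
def rosette_principal(e_a, e_b, e_c):
    avg = 0.5 * (e_a + e_c)
    r = math.hypot(0.5 * (e_a - e_c), 0.5 * (2.0 * e_b - e_a - e_c))
    theta_deg = 0.5 * math.degrees(math.atan2(2.0 * e_b - e_a - e_c, e_a - e_c))
    return avg + r, avg - r, theta_deg

print(rosette_principal(350e-6, 120e-6, -80e-6))
```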
NASA Technical Reports Server (NTRS)
Rodal, J. J. A.; French, S. E.; Witmer, E. A.; Stagliano, T. R.
1979-01-01
The CIVM-JET 4C computer program for the 'finite strain' analysis of two-dimensional transient structural responses of complete or partial rings and beams subjected to fragment impact is stored on tape as a series of individual files. The subroutines contained in each of these files are described in detail. All references to the CIVM-JET 4C program assume that the user has a copy of NASA CR-134907 (ASRL TR 154-9), which serves as a user's guide to (1) the CIVM-JET 4B computer code and (2) the CIVM-JET 4C computer code 'with the use of the modified input instructions' attached hereto.
The probability of containment failure by direct containment heating in Zion. Supplement 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, M.M.; Allen, M.D.; Stamps, D.W.
1994-12-01
Supplement 1 of NUREG/CR-6075 brings to closure the DCH issue for the Zion plant. It includes the documentation of the peer review process for NUREG/CR-6075, the assessments of four new splinter scenarios defined in working group meetings, and modeling enhancements recommended by the working groups. In the four new scenarios, consistency of the initial conditions has been implemented by using insights from systems-level codes. SCDAP/RELAP5 was used to analyze three short-term station blackout cases with different leak rates. In all three cases, the hot leg or surge line failed well before the lower head, and thus the primary system depressurized to a point where DCH was no longer considered a threat. However, these calculations were continued to lower head failure in order to gain insights that were useful in establishing the initial and boundary conditions. The most useful insights are that the RCS pressure is low at vessel breach, metallic blockages in the core region do not melt and relocate into the lower plenum, and melting of upper plenum steel is correlated with hot leg failure. The SCDAP/RELAP5 output was used as input to CONTAIN to assess the containment conditions at vessel breach. The containment-side conditions predicted by CONTAIN are similar to those originally specified in NUREG/CR-6075.
Analysis on the Role of RSG-GAS Pool Cooling System during Partial Loss of Heat Sink Accident
NASA Astrophysics Data System (ADS)
Susyadi; Endiah, P. H.; Sukmanto, D.; Andi, S. E.; Syaiful, B.; Hendro, T.; Geni, R. S.
2018-02-01
RSG-GAS is a 30 MW reactor that is mostly used for radioisotope production and experimental activities. Recently, it has been regularly operated at half of its capacity for efficiency reasons. During an accident, especially a loss of heat sink, the role of its pool cooling system in removing decay heat is very important. An analysis using a single-failure approach and partial RELAP5 modeling performed by S. Dibyo (2010) shows that there is no significant increase in the coolant temperature if this system functions properly. However, lessons learned from the Fukushima accident revealed that an accident can happen due to multiple failures. Considering ageing of the reactor, in this research the role of the pool cooling system is investigated for a partial loss of heat sink accident during which, at the same time, the protection system fails to scram the reactor while it is operated at 15 MW. The purpose is to clarify the transient characteristics and the final state of the coolant temperature. The method is to simulate the system with the RELAP5 code. Calculation results show that the pool cooling systems reduce the coolant temperature by about 1 K compared to the case in which they are not activated. The results also reveal that when the reactor is operated at half of its rated power, it remains in a safe condition for a partial loss of heat sink accident without scram.
Computer Description of the Field Artillery Ammunition Supply Vehicle
1983-04-01
A Combinatorial Geometry (COM-GEOM) target description of the Field Artillery Ammunition Supply Vehicle is presented. The "Geometric Information for Targets" (GIFT) computer code accepts the COM-GEOM description and uses it as input to generate target vulnerability data.
Modeling the transport of nitrogen in an NPP-2006 reactor circuit
NASA Astrophysics Data System (ADS)
Stepanov, O. E.; Galkin, I. Yu.; Sledkov, R. M.; Melekh, S. S.; Strebnev, N. A.
2016-07-01
Efficient radiation protection of the public and personnel requires detecting an accident-initiating event quickly. Specifically, if a heat-exchange tube in a steam generator is ruptured, the 16N radioactive nitrogen isotope, which contributes to a sharp increase in the steam activity before the turbine, may serve as the signaling component. This isotope is produced in the core coolant and is transported along the circulation circuit. The aim of the present study was to model the transport of 16N in the primary and the secondary circuits of a VVER-1000 reactor facility (RF) under nominal operation conditions. KORSAR/GP and RELAP5/Mod.3.2 codes were used to perform the calculations. Computational models incorporating the major components of the primary and the secondary circuits of an NPP-2006 RF were constructed. These computational models were subjected to cross-verification, and the calculation results were compared to the experimental data on the distribution of the void fraction over the steam generator height. The models were proven to be valid. It was found that the time of nitrogen transport from the core to the heat-exchange tube leak was no longer than 1 s under RF operation at a power level of 100% N nom with all primary circuit pumps activated. The time of nitrogen transport from the leak to the γ-radiation detection unit under the same operating conditions was no longer than 9 s, and the nitrogen concentration in steam was no less than 1.4% (by mass) of its concentration at the reactor outlet. These values were obtained using conservative approaches to estimating the leak flow and the transport time, but the radioactive decay of nitrogen was not taken into account. Further research concerned with the calculation of thermohydraulic processes should be focused on modeling the transport of nitrogen under RF operation with some primary circuit pumps deactivated.
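The abstract notes that the radioactive decay of 16N during transport was conservatively neglected. A minimal sketch of what that correction would look like, assuming the well-known 16N half-life of about 7.13 s and the transport times quoted above, is given below.

```python
import math

T_HALF_N16 = 7.13   # s, half-life of nitrogen-16

def surviving_fraction(transport_time_s):
    """Fraction of 16N activity remaining after the given transport time."""
    return math.exp(-math.log(2.0) * transport_time_s / T_HALF_N16)

# Transport times quoted above: <= 1 s core-to-leak plus <= 9 s leak-to-detector
print(surviving_fraction(1.0), surviving_fraction(10.0))
```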
High Temperature Test Facility Preliminary RELAP5-3D Input Model Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayless, Paul David
A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.
Melo, A D B; Silveira, H; Bortoluzzi, C; Lara, L J; Garbossa, C A P; Preis, G; Costa, L B; Rostagno, M H
2016-10-17
In this study, we evaluated the effect of intestinal alkaline phosphatase (IAP) and sodium butyrate (NaBu) on lipopolysaccharide (LPS)-induced intestinal inflammation. Intestinal alkaline phosphatase and RelA/p65 (NF-κB) gene expressions in porcine jejunum explants were evaluated following exposure to sodium butyrate (NaBu) and essential oil from Brazilian red pepper (EO), alone or in combination with NaBu, as well as exogenous IAP with or without LPS challenge. Five piglets weighing approximately 20 kg each were sacrificed, and their jejunum were extracted. The tissues were segmented into 10 parts, which were exposed to 10 treatments. Gene expressions of IAP and RelA/p65 (NF-κB) in jejunal explants were evaluated via RT-PCR. We found that EO, NaBu, and exogenous IAP were able to up-regulate endogenous IAP and enhance RelA/p65 (NF-κB) gene expression. However, only NaBu and exogenous IAP down-regulated LPS-induced inflammatory response via RelA/p65 (NF-κB). In conclusion, we demonstrated that exogenous IAP and NaBu may be beneficial in attenuating LPS-induced intestinal inflammation.
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
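As a toy illustration of the scenario-clustering step described above, the sketch below runs a minimal k-means on synthetic end-state vectors. It stands in for, and is far simpler than, the traditional and topological clustering techniques in the authors' tool; the feature choices are hypothetical.

```python
import numpy as np

# Minimal k-means over synthetic scenario end-state vectors (e.g., peak clad
# temperature and time of core uncovery); feature choices are hypothetical.
def kmeans(x, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    labels = np.argmin(((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return labels, centers

scenarios = np.random.default_rng(1).normal(size=(500, 2))   # synthetic DPRA outcomes
labels, centers = kmeans(scenarios, k=3)
print(np.bincount(labels), centers.round(2))
```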
THERMAL DESIGN OF THE ITER VACUUM VESSEL COOLING SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, Juan J; Yoder Jr, Graydon L; Kim, Seokho H
RELAP5-3D models of the ITER Vacuum Vessel (VV) Primary Heat Transfer System (PHTS) have been developed. The design of the cooling system is described in detail, and RELAP5 results are presented. Two parallel pump/heat exchanger trains comprise the design: one train is for full-power operation and the other is for emergency operation or operation at decay heat levels. All the components are located inside the Tokamak building (a significant change from the original configurations). The results presented include operation at full power, decay heat operation, and baking operation. The RELAP5-3D results confirm that the design can operate satisfactorily during both normal pulsed power operation and decay heat operation. All the temperatures in the coolant and in the different system components are maintained within acceptable operating limits.
Pretest analysis document for Test S-FS-7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.
This report documents the pretest calculations completed for Semiscale Test S-FS-7. This test will simulate a transient initiated by a 14.3% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear power plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The results of a RELAP5/MOD2/CY21 code calculation indicate that the test objectives for Test S-FS-7 can be achieved. The primary system overpressurization will occur but pose no threat to personnel or to plant integrity. 3 refs., 15 figs., 5 tabs.
Pretest analysis document for Test S-FS-11
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Shaw, R.A.
This report documents the pretest calculations completed for Semiscale Test S-FS-11. This test will simulate a transient initiated by a 50% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The results of a RELAP5/MOD2/CY21 code calculation indicate that the test objectives for Test S-FS-11 can be achieved. The primary system overpressurization will occur but pose no threat to personnel or plant integrity. 3 refs., 15 figs., 5 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hove, W.; Van Laeken, K.; Bartsoen, L.
1995-09-01
To enable a more realistic and accurate calculation of the radiological consequences of an SGTR, a fission product transport model was developed. As the radiological releases strongly depend on the thermal-hydraulic transient, the model was included in the RELAP5 input decks of the Belgian NPPs. This enables the coupled calculation of the thermal-hydraulic transient and the radiological release. The fission product transport model tracks the concentration of the fission products in the primary circuit, in each of the SGs as well as in the condenser. This leads to a system of six coupled, first-order ordinary differential equations with time-dependent coefficients. Flashing, scrubbing, atomisation and dry-out of the break flow are accounted for. Coupling with the thermal-hydraulic calculation and correct modelling of the break position enables an accurate calculation of the mixture level above the break. Pre- and post-accident spiking in the primary circuit are introduced. The transport times in the FW-system and the SG blowdown system are also taken into account, as is the decontaminating effect of the primary make-up system and of the SG blowdown system. Physical input parameters such as the partition coefficients, half-lives and spiking coefficients are explicitly introduced so that the same model can be used for iodine, caesium and noble gases.
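A minimal sketch of the mathematical structure described above, a small system of coupled, first-order ODEs for fission-product inventories integrated in time, is shown below. The compartments and transfer rates are invented placeholders; the real model obtains time-dependent coefficients from the coupled RELAP5 thermal-hydraulic calculation and includes the flashing, scrubbing and spiking effects listed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 6-compartment inventory model dC/dt = A(t) C: one primary circuit,
# four steam generators, and a condenser. Rates (1/s) are placeholders only.
def rhs(t, c):
    leak = 0.01 * np.exp(-t / 1800.0)       # decaying break flow, purely illustrative
    a = np.array([
        [-4.0 * leak, 0.0,   0.0,   0.0,   0.0,   0.0],
        [ leak,      -0.02,  0.0,   0.0,   0.0,   0.0],
        [ leak,       0.0,  -0.02,  0.0,   0.0,   0.0],
        [ leak,       0.0,   0.0,  -0.02,  0.0,   0.0],
        [ leak,       0.0,   0.0,   0.0,  -0.02,  0.0],
        [ 0.0,        0.02,  0.02,  0.02,  0.02,  0.0],
    ])
    return a @ c

sol = solve_ivp(rhs, (0.0, 3600.0), [1.0, 0, 0, 0, 0, 0], max_step=10.0)
print(sol.y[:, -1])   # relative inventories after one hour
```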
A Computational Model for Observation in Quantum Mechanics.
1987-03-16
The report describes a computational model for observation in quantum mechanics, covering an interferometer experiment, the EPR paradox experiment, an overview of the computational model, and its implementation, including code for the EPR paradox experiment and for the double-slit interferometer experiment. A surviving fragment notes that the EPR paradox experiment (see section 2.3) is hard to resolve with the class of models collectively called hidden-variable models.
Luco, Sophie; Delmas, Olivier; Vidalain, Pierre-Olivier; Tangy, Frédéric; Weil, Robert; Bourhy, Hervé
2012-01-01
NF-κB transcription factors are crucial for many cellular processes. NF-κB is activated by viral infections to induce expression of antiviral cytokines. Here, we identified a novel member of the human NF-κB family, denoted RelAp43, the nucleotide sequence of which contains several exons as well as an intron of the RelA gene. RelAp43 is expressed in all cell lines and tissues tested and exhibits all the properties of a NF-κB protein. Although its sequence does not include a transactivation domain, identifying it as a class I member of the NF-κB family, it is able to potentiate RelA-mediated transactivation and stabilize dimers comprising p50. Furthermore, RelAp43 stimulates the expression of HIAP1, IRF1, and IFN-β - three genes involved in cell immunity against viral infection. It is also targeted by the matrix protein of lyssaviruses, the agents of rabies, resulting in an inhibition of the NF-κB pathway. Taken together, our data provide the description of a novel functional member of the NF-κB family, which plays a key role in the induction of anti-viral innate immune response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lv, Q.; Kraus, A.; Hu, R.
CFD analysis has been focused on important component-level phenomena using STARCCM+ to supplement the system analysis of integral system behavior. A notable area of interest was the cavity region. This area is of particular interest for CFD analysis due to the multi-dimensional flow and complex heat transfer (thermal radiation heat transfer and natural convection), which are not simulated directly by RELAP5. CFD simulations allow for the estimation of the boundary heat flux distribution along the riser tubes, which is needed in the RELAP5 simulations. The CFD results can also provide additional data to help establish what level of modeling detail is necessary in RELAP5. It was found that the flow profiles in the cavity region are simpler for the water-based concept than for the air-cooled concept. The local heat flux noticeably increases axially, and is higher in the fins than in the riser tubes. These results were utilized in RELAP5 simulations as boundary conditions, to provide better temperature predictions in the system level analyses. It was also determined that temperatures were higher in the fins than the riser tubes, but within design limits for thermal stresses. Higher temperature predictions were identified in the edge fins, in part due to additional thermal radiation from the side cavity walls.
Computer code for the optimization of performance parameters of mixed explosive formulations.
Muthurajan, H; Sivabalan, R; Talawar, M B; Venugopalan, S; Gandhe, B R
2006-08-25
LOTUSES is a novel computer code which has been developed for the prediction of various thermodynamic properties such as heat of formation, heat of explosion, volume of explosion gaseous products and other related performance parameters. In this paper, we report the LOTUSES (Version 1.4) code, which has been utilized for the optimization of various high explosives in different combinations to obtain the maximum possible velocity of detonation. The LOTUSES (Version 1.4) code varies the composition of mixed explosives automatically in the range of 1-100% and computes the oxygen balance as well as the velocity of detonation for various compositions in preset steps. Further, the code suggests the compositions for which the least oxygen balance and the highest velocity of detonation could be achieved. Presently, the code can be applied to two-component explosive compositions. The code has been validated with well-known explosives like TNT, HNS, HNF, TATB, RDX, HMX, AN, DNA, CL-20 and TNAZ in different combinations. The new algorithm incorporated in LOTUSES (Version 1.4) enhances the efficiency and makes it a more powerful tool for the scientists/researchers working in the field of high energy materials/hazardous materials.
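A toy sketch of the composition sweep described above: step a two-component mixture from 0 to 100 wt%, compute a mixture oxygen balance and an estimated detonation velocity, and report the composition that best balances the two. The component values are nominal TNT/AN-like numbers and the linear mixing rules are purely illustrative; LOTUSES itself relies on proper thermochemical calculations rather than this simplification.

```python
# Toy two-component sweep: vary composition in 1 wt% steps, compute a mixture
# oxygen balance (OB) and an estimated velocity of detonation (VOD), and pick
# the composition with OB closest to zero and the highest VOD. Linear mixing
# of OB and VOD is an illustrative assumption, not the LOTUSES method.
def mixture_sweep(ob1, vod1, ob2, vod2, step=1):
    best = None
    for w1 in range(0, 101, step):
        w2 = 100 - w1
        ob = (w1 * ob1 + w2 * ob2) / 100.0
        vod = (w1 * vod1 + w2 * vod2) / 100.0
        if best is None or (abs(ob), -vod) < (abs(best[1]), -best[2]):
            best = (w1, ob, vod)
    return best   # (wt% of component 1, mixture OB [%], estimated VOD [m/s])

print(mixture_sweep(ob1=-74.0, vod1=6900.0, ob2=20.0, vod2=8100.0))
```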
User's manual: Subsonic/supersonic advanced panel pilot code
NASA Technical Reports Server (NTRS)
Moran, J.; Tinoco, E. N.; Johnson, F. T.
1978-01-01
Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation and it should not be construed to represent a finished production program. The pilot code is based on a higher order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses an overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.
Sundar, Isaac K.; Chung, Sangwoon; Hwang, Jae-woong; Lapek, John D.; Bulger, Michael; Friedman, Alan E.; Yao, Hongwei; Davie, James R.; Rahman, Irfan
2012-01-01
Cigarette smoke (CS) causes sustained lung inflammation, which is an important event in the pathogenesis of chronic obstructive pulmonary disease (COPD). We have previously reported that IKKα (I kappaB kinase alpha) plays a key role in CS-induced pro-inflammatory gene transcription by chromatin modifications; however, the underlying role of downstream signaling kinase is not known. Mitogen- and stress-activated kinase 1 (MSK1) serves as a specific downstream NF-κB RelA/p65 kinase, mediating transcriptional activation of NF-κB-dependent pro-inflammatory genes. The role of MSK1 in nuclear signaling and chromatin modifications is not known, particularly in response to environmental stimuli. We hypothesized that MSK1 regulates chromatin modifications of pro-inflammatory gene promoters in response to CS. Here, we report that CS extract activates MSK1 in human lung epithelial (H292 and BEAS-2B) cell lines, human primary small airway epithelial cells (SAEC), and in mouse lung, resulting in phosphorylation of nuclear MSK1 (Thr581), phospho-acetylation of RelA/p65 at Ser276 and Lys310 respectively. This event was associated with phospho-acetylation of histone H3 (Ser10/Lys9) and acetylation of histone H4 (Lys12). MSK1 N- and C-terminal kinase-dead mutants, MSK1 siRNA-mediated knock-down in transiently transfected H292 cells, and MSK1 stable knock-down mouse embryonic fibroblasts significantly reduced CS extract-induced MSK1, NF-κB RelA/p65 activation, and posttranslational modifications of histones. CS extract/CS promotes the direct interaction of MSK1 with RelA/p65 and p300 in epithelial cells and in mouse lung. Furthermore, CS-mediated recruitment of MSK1 and its substrates to the promoters of NF-κB-dependent pro-inflammatory genes leads to transcriptional activation, as determined by chromatin immunoprecipitation. Thus, MSK1 is an important downstream kinase involved in CS-induced NF-κB activation and chromatin modifications, which have implications in pathogenesis of COPD. PMID:22312446
Code of Federal Regulations, 2010 CFR
2010-07-01
... Administration COURT SERVICES AND OFFENDER SUPERVISION AGENCY FOR THE DISTRICT OF COLUMBIA DISCLOSURE OF RECORDS... proprietary interest in the information. (e) Computer software means tools by which records are created, stored, and retrieved. Normally, computer software, including source code, object code, and listings of...
Highly fault-tolerant parallel computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spielman, D.A.
We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^O(1) w processors and time t log^O(1) w. The failure probability of the computation will be at most t · exp(-w^(1/4)). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^O(1) n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
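The encoding half of the coded model can be sketched very simply: a Reed-Solomon codeword is obtained by evaluating the message polynomial at distinct points of a finite field. The prime field and parameters below are arbitrary choices for illustration; the paper's generalized Reed-Solomon codes and near-linear-time encoders/decoders are not reproduced here.

```python
# Toy Reed-Solomon encoder over the prime field GF(257): k message symbols are
# the coefficients of a polynomial, and the codeword is its value at n points.
# Up to floor((n - k) / 2) symbol errors are correctable in principle.
P = 257  # arbitrary prime chosen for the sketch

def rs_encode(message, n):
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule, mod P
            acc = (acc * x + c) % P
        return acc
    return [poly_eval(message, x) for x in range(n)]

print(rs_encode([5, 42, 17, 99], n=12))   # k = 4, n = 12: tolerates 4 symbol errors
```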
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Ronaldo C.; D'Auria, Francesco; Alvim, Antonio Carlos M.
2002-07-01
The Code with the capability of Internal Assessment of Uncertainty (CIAU) is a tool proposed by the 'Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione (DIMNP)' of the University of Pisa. Other institutions, including the Brazilian nuclear regulatory body 'Comissao Nacional de Energia Nuclear', contributed to the development of the tool. The CIAU aims at providing the currently available Relap5/Mod3.2 system code with the integrated capability of performing not only relevant transient calculations but also the related estimates of uncertainty bands. The Uncertainty Methodology based on Accuracy Extrapolation (UMAE) is used to characterize the uncertainty in the prediction of system code calculations for light water reactors and is internally coupled with the above system code. Following an overview of the CIAU development, the present paper deals with the independent qualification of the tool. The qualification test is performed by estimating the uncertainty bands that should envelope the prediction of the Angra 1 NPP transient RES-11.99, originated by an inadvertent complete load rejection that caused the reactor scram when the unit was operating at 99% of nominal power. The current limitation of the 'error' database implemented into the CIAU prevented a final demonstration of the qualification. However, all the steps of the qualification process are demonstrated. (authors)
Joint Services Electronics Program Annual Progress Report.
1985-11-01
Experiments with (one-symbol memory) adaptive Huffman codes were performed, and the compression achieved was compared with that of Ziv-Lempel coding. Report topics include real-time statistical data processing (T. Kailath) and data compression for computer data structures (J. Gill).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.
This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for the steady-state and transient calculations. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
Thermodynamic and transport properties of gaseous tetrafluoromethane in chemical equilibrium
NASA Technical Reports Server (NTRS)
Hunt, J. L.; Boney, L. R.
1973-01-01
Equations and a computer code are presented for the thermodynamic and transport properties of gaseous, undissociated tetrafluoromethane (CF4) in chemical equilibrium. The computer code calculates the thermodynamic and transport properties of CF4 when given any two of five thermodynamic variables (entropy, temperature, volume, pressure, and enthalpy). Equilibrium thermodynamic and transport property data are tabulated and pressure-enthalpy diagrams are presented.
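A minimal sketch of the "any two variables" interface mentioned above, restricted to pressure, temperature, and specific volume and using an ideal-gas stand-in for CF4; the actual code also handles entropy and enthalpy and uses equilibrium, real-gas property relations.

```python
# Ideal-gas stand-in for the "given any two variables" interface (p, T, v only).
R_CF4 = 8.314462618 / 0.088004   # specific gas constant of CF4, J/(kg K)

def state_from(p=None, T=None, v=None):
    """Return (p [Pa], T [K], v [m3/kg]) given any two of the three."""
    if p is None:
        p = R_CF4 * T / v
    elif T is None:
        T = p * v / R_CF4
    elif v is None:
        v = R_CF4 * T / p
    return p, T, v

print(state_from(p=101325.0, T=300.0))
```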
Risk-Informed External Hazards Analysis for Seismic and Flooding Phenomena for a Generic PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisi, Carlo; Prescott, Steve; Ma, Zhegang
This report describes the activities performed during FY2017 for the US-DOE Light Water Reactor Sustainability Risk-Informed Safety Margin Characterization (LWRS-RISMC) Industry Application #2. The scope of Industry Application #2 is to deliver a risk-informed external hazards safety analysis for a representative nuclear power plant. Following the advancements that occurred during the previous FYs (toolkit identification, model development), FY2017 focused on increasing the level of realism of the analysis and improving the tools and the coupling methodologies. In particular, the following objectives were achieved: calculation of building pounding and its effects on component seismic fragility; development of SAPHIRE code PRA models for a 3-loop Westinghouse PWR; set-up of a methodology for performing static-dynamic PRA coupling between the SAPHIRE and EMRALD codes; coupling of RELAP5-3D/RAVEN for performing Best-Estimate Plus Uncertainty analysis and automatic limit surface searches; and execution of sample calculations demonstrating the capabilities of the toolkit in performing a risk-informed external hazards safety analysis.
RELAP5 Application to Accident Analysis of the NIST Research Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baek, J.; Cuadra Gascon, A.; Cheng, L.Y.
Detailed safety analyses have been performed for the 20 MW D2O-moderated research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The time-dependent analysis of the primary system is determined with a RELAP5 transient analysis model that includes the reactor vessel, the pump, heat exchanger, fuel element geometry, and flow channels for both the six inner and twenty-four outer fuel elements. A post-processing of the simulation results has been conducted to evaluate the minimum critical heat flux ratio (CHFR) using the Sudo-Kaminaga correlation. Evaluations are performed for the following accidents: (1) the control rod withdrawal startup accident and (2) the maximum reactivity insertion accident. In both cases the RELAP5 results indicate that there is adequate margin to CHF and no damage to the fuel will occur because of sufficient coolant flow through the fuel channels and the negative scram reactivity insertion.
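The CHFR post-processing step reduces to dividing a correlation-predicted critical heat flux by the local heat flux at each node and taking the minimum. The sketch below shows only that bookkeeping; the Sudo-Kaminaga correlation itself is not reproduced, and the stand-in function, node values, and argument names are hypothetical.

```python
# Bookkeeping only: MCHFR = min over axial nodes of (predicted CHF / local heat flux).
# 'chf_correlation' is a stand-in; the Sudo-Kaminaga correlation is not reproduced.
def mchfr(local_heat_flux, local_conditions, chf_correlation):
    return min(chf_correlation(**cond) / q
               for q, cond in zip(local_heat_flux, local_conditions))

# Hypothetical RELAP5 node values and a constant-CHF placeholder correlation:
q_axial = [0.4e6, 0.9e6, 1.2e6, 0.8e6]                  # local heat flux, W/m2
conds = [dict(mass_flux=300.0, subcooling_k=20.0)] * 4  # illustrative arguments
print(mchfr(q_axial, conds, lambda mass_flux, subcooling_k: 2.0e6))
```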
NASA Technical Reports Server (NTRS)
Norment, H. G.
1980-01-01
Calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Any subsonic, external, non-lifting flow can be accommodated; flow into, but not through, inlets also can be simulated. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Code descriptions include operating instructions, card inputs and printouts for example problems, and listing of the FORTRAN codes. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
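The trajectory codes integrate a water-drop equation of motion that combines aerodynamic drag with gravity settling. The sketch below uses a simple Stokes-type drag and a uniform flow field as placeholders for the experimental drag relations and the computed potential flow, so it only illustrates the form of the calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water-drop equation of motion: drag toward the local air velocity plus gravity.
# Stokes drag and a uniform 80 m/s flow are placeholders for the experimental
# drag relations and the potential-flow field used by the actual codes.
RHO_W, MU_AIR, G = 1000.0, 1.81e-5, 9.81   # SI units

def drop_rhs(t, y, d, u_air):
    vel = y[2:]
    tau = RHO_W * d**2 / (18.0 * MU_AIR)            # Stokes response time of the drop
    acc = (u_air - vel) / tau + np.array([0.0, -G])
    return np.concatenate([vel, acc])

sol = solve_ivp(drop_rhs, (0.0, 0.5), [0.0, 0.0, 0.0, 0.0],
                args=(50e-6, np.array([80.0, 0.0])), max_step=1e-3)
print(sol.y[:2, -1])   # x, y position of a 50-micron drop after 0.5 s
```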
Nonlinear Computational Aeroelasticity: Formulations and Solution Algorithms
2003-03-01
problem is proposed. Fluid-structure coupling algorithms are then discussed with some emphasis on distributed computing strategies. Numerical results...the structure and the exchange of structure motion to the fluid. The computational fluid dynamics code PFES is our finite element code for the numerical ...unstructured meshes). It was numerically demonstrated [1-3] that EBS can be less diffusive than SUPG [4-6] and the standard Finite Volume schemes
NASA Technical Reports Server (NTRS)
Stagliano, T. R.; Witmer, E. A.; Rodal, J. J. A.
1979-01-01
Finite element modeling alternatives as well as the utility and limitations of the two dimensional structural response computer code CIVM-JET 4B for predicting the transient, large deflection, elastic plastic, structural responses of two dimensional beam and/or ring structures which are subjected to rigid fragment impact were investigated. The applicability of the CIVM-JET 4B analysis and code for the prediction of steel containment ring response to impact by complex deformable fragments from a trihub burst of a T58 turbine rotor was studied. Dimensional analysis considerations were used in a parametric examination of data from engine rotor burst containment experiments and data from sphere beam impact experiments. The use of the CIVM-JET 4B computer code for making parametric structural response studies on both fragment-containment structure and fragment-deflector structure was illustrated. Modifications to the analysis/computation procedure were developed to alleviate restrictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, R. W.; Petrov, Yu. V.
2013-12-03
Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal has been demonstrated in research carried out under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-average Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. More efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the USDOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego, the NSTX spherical tokamak at Princeton, New Jersey, and the MST reversed-field pinch in Madison, Wisconsin. The validation studies of the code against the experiments will improve understanding of physics important for magnetic fusion, and will increase our design capabilities for achieving the goals of the International Thermonuclear Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.
Pretest analysis document for Test S-FS-6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, R.A.; Hall, D.G.
This report documents the pretest analyses completed for Semiscale Test S-FS-6. This test will simulate a transient initiated by a 100% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear power plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The enclosed analyses include a RELAP5/MOD2/CY21 code calculation and preliminary results from a facility hot, integrated test which was conducted to near S-FS-6 specifications. The results of these analyses indicate that the test objectives for Test S-FS-6 can be achieved. The primary system overpressurization will pose no threat to personnel or plant integrity.
Pretest analysis document for Semiscale Test S-FS-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, T.H.
This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY21 code for Semiscale Test S-FS-1. The test will simulate the double-ended offset shear of the main steam line at the exit of the broken loop steam generator (downstream of the flow restrictor) and the subsequent plant recovery. The recovery portion of the test consists of a plant stabilization phase and a plant cooldown phase. The recovery procedures involve normal charging/letdown operation, pressurizer heater operation, secondary steam and feed of the unaffected steam generator, and pressurizer auxiliary spray. The test will be terminated after the unaffected steam generator and pressurizer pressures and liquid levels are stable, and the average primary fluid temperature is stable at about 480 K (405°F) for at least 10 minutes.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
CFL3D Version 6.4-General Usage and Aeroelastic Analysis
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Rumsey, Christopher L.; Biedron, Robert T.
2006-01-01
This document contains the course notes on the computational fluid dynamics code CFL3D version 6.4. It is intended to provide users, from basic to advanced, with the information necessary to successfully use the code for a broad range of cases. Much of the course covers capability that has been a part of previous versions of the code, with material compiled from a CFL3D v5.0 manual and from the CFL3D v6 web site prior to the current release. This part of the material is presented to users of the code not familiar with computational fluid dynamics. There is new capability in CFL3D version 6.4 presented here that has not previously been published. There are also outdated features no longer used or recommended in recent releases of the code. The information offered here supersedes earlier manuals and updates outdated usage. Where current usage supersedes older versions, notation of that is made. These course notes also provide hints for usage, code installation, and examples not found elsewhere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
N. A. Anderson; P. Sabharwall
2014-01-01
The Next Generation Nuclear Plant project is aimed at the research and development of a helium-cooled high-temperature gas reactor that could generate both electricity and process heat for the production of hydrogen. The heat from the high-temperature primary loop must be transferred via an intermediate heat exchanger to a secondary loop. Using RELAP5-3D, a model was developed for two of the heat exchanger options: a printed-circuit heat exchanger and a helical-coil steam generator. The RELAP5-3D models were used to simulate an exponential decrease in pressure over a 20 second period. The results of this loss-of-coolant analysis indicate that heat is initially transferred from the primary loop to the secondary loop, but after the decrease in pressure in the primary loop the heat is transferred from the secondary loop to the primary loop. A high-temperature gas reactor model should be developed and connected to the heat transfer component to simulate other transients.
Bhaskaran, Natarajan; Shukla, Sanjeev; Srivastava, Janmejai K; Gupta, Sanjay
2010-01-01
Chamomile has long been used in traditional medicine for the treatment of inflammation-related disorders. In this study we aimed to investigate the inhibitory effects of chamomile on nitric oxide (NO) production and inducible nitric oxide synthase (iNOS) expression, and to explore its potential anti-inflammatory mechanisms using RAW 264.7 macrophages. Chamomile treatment inhibited LPS-induced NO production and significantly blocked IL-1β , IL-6 and TNFα-induced NO levels in RAW 264.7 macrophages. Chamomile caused reduction in LPS-induced iNOS mRNA and protein expression. In RAW 264.7 macrophages, LPS-induced DNA binding activity of RelA/p65 was significantly inhibited by chamomile, an effect that was mediated through the inhibition of IKKβ , the upstream kinase regulating NF-κ B/Rel activity, and degradation of inhibitory factor-κ B. These results demonstrate that chamomile inhibits NO production and iNOS gene expression by inhibiting RelA/p65 activation and supports the utilization of chamomile as an effective anti-inflammatory agent. PMID:21042790
Bhaskaran, Natarajan; Shukla, Sanjeev; Srivastava, Janmejai K; Gupta, Sanjay
2010-12-01
Chamomile has long been used in traditional medicine for the treatment of inflammation-related disorders. In this study we investigated the inhibitory effects of chamomile on nitric oxide (NO) production and inducible nitric oxide synthase (iNOS) expression, and explored its potential anti-inflammatory mechanisms using RAW 264.7 macrophages. Chamomile treatment inhibited LPS-induced NO production and significantly blocked IL-1β, IL-6 and TNFα-induced NO levels in RAW 264.7 macrophages. Chamomile caused reduction in LPS-induced iNOS mRNA and protein expression. In RAW 264.7 macrophages, LPS-induced DNA binding activity of RelA/p65 was significantly inhibited by chamomile, an effect that was mediated through the inhibition of IKKβ, the upstream kinase regulating NF-κB/Rel activity, and degradation of inhibitory factor-κB. These results demonstrate that chamomile inhibits NO production and iNOS gene expression by inhibiting RelA/p65 activation and supports the utilization of chamomile as an effective anti-inflammatory agent.
Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Shin, Bangho; Kim, Chan Hyeong; Furuta, Takuya
2018-05-04
In this study, the multi-threading performance of the Geant4, MCNP6, and PHITS codes was evaluated as a function of the number of threads (N) and the complexity of the tetrahedral-mesh phantom. For this, three tetrahedral-mesh phantoms of varying complexity (simple, moderately complex, and highly complex) were prepared and implemented in the three different Monte Carlo codes, in photon and neutron transport simulations. Subsequently, for each case, the initialization time, calculation time, and memory usage were measured as a function of the number of threads used in the simulation. It was found that for all codes, the initialization time significantly increased with the complexity of the phantom, but not with the number of threads. Geant4 exhibited much longer initialization time than the other codes, especially for the complex phantom (MRCP). The improvement of computation speed due to the use of a multi-threaded code was calculated as the speed-up factor, the ratio of the computation speed on a multi-threaded code to the computation speed on a single-threaded code. Geant4 showed the best multi-threading performance among the codes considered in this study, with the speed-up factor almost linearly increasing with the number of threads, reaching ~30 when N = 40. PHITS and MCNP6 showed a much smaller increase of the speed-up factor with the number of threads. For PHITS, the speed-up factors were low when N = 40. For MCNP6, the increase of the speed-up factors was better, but they were still less than ~10 when N = 40. As for memory usage, Geant4 was found to use more memory than the other codes. In addition, compared to that of the other codes, the memory usage of Geant4 more rapidly increased with the number of threads, reaching as high as ~74 GB when N = 40 for the complex phantom (MRCP). It is notable that compared to that of the other codes, the memory usage of PHITS was much lower, regardless of both the complexity of the phantom and the number of threads, hardly increasing with the number of threads for the MRCP.
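The speed-up factor quoted above is just the ratio of multi-threaded to single-threaded computation speed; the short sketch below computes it from wall-clock times and adds an Amdahl's-law curve as a reference for why the factor can saturate well below the thread count. The times and serial fractions are hypothetical, not measurements from the study.

```python
# Speed-up factor: computation speed with N threads divided by single-thread speed,
# i.e. the inverse ratio of wall-clock times. Times below are hypothetical.
def speed_up(t_single_s, t_multi_s):
    return t_single_s / t_multi_s

print(speed_up(4000.0, 133.0))   # ~30, similar to the Geant4 result at N = 40

# Amdahl's law as a reference: the serial fraction s limits the attainable speed-up.
def amdahl(n_threads, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

print([round(amdahl(40, s), 1) for s in (0.002, 0.03, 0.10)])
```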
Accident Analysis for the NIST Research Reactor Before and After Fuel Conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baek J.; Diamond D.; Cuadra, A.
Postulated accidents have been analyzed for the 20 MW D2O-moderated research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The analysis has been carried out for the present core, which contains high enriched uranium (HEU) fuel, and for a proposed equilibrium core with low enriched uranium (LEU) fuel. The analyses employ state-of-the-art calculational methods. Three-dimensional Monte Carlo neutron transport calculations were performed with the MCNPX code to determine homogenized fuel compositions in the lower and upper halves of each fuel element and to determine the resulting neutronic properties of the core. The accident analysis employed a model of the primary loop with the RELAP5 code. The model includes the primary pumps, shutdown pump outlet valves, heat exchanger, fuel elements, and flow channels for both the six inner and twenty-four outer fuel elements. Evaluations were performed for the following accidents: (1) control rod withdrawal startup accident, (2) maximum reactivity insertion accident, (3) loss-of-flow accident resulting from loss of electrical power with an assumption of failure of shutdown cooling pumps, (4) loss-of-flow accident resulting from a primary pump seizure, and (5) loss-of-flow accident resulting from inadvertent throttling of a flow control valve. In addition, natural circulation cooling at low power operation was analyzed. The analysis shows that the conversion will not lead to significant changes in the safety analysis and the calculated minimum critical heat flux ratio and maximum clad temperature assure that there is adequate margin to fuel failure.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.
1987-01-01
The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, a comparison was provided of the problem size possible on the hypercube with 128 megabytes of memory for a 32-node configuration with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
NASA Technical Reports Server (NTRS)
Norment, H. G.
1985-01-01
Subsonic, external flow about nonlifting bodies, lifting bodies or combinations of lifting and nonlifting bodies is calculated by a modified version of the Hess lifting code. Trajectory calculations can be performed for any atmospheric conditions and for all water drop sizes, from the smallest cloud droplet to large raindrops. Experimental water drop drag relations are used in the water drop equations of motion and effects of gravity settling are included. Inlet flow can be accommodated, and high Mach number compressibility effects are corrected for approximately. Seven codes are described: (1) a code used to debug and plot body surface description data; (2) a code that processes the body surface data to yield the potential flow field; (3) a code that computes flow velocities at arrays of points in space; (4) a code that computes water drop trajectories from an array of points in space; (5) a code that computes water drop trajectories and fluxes to arbitrary target points; (6) a code that computes water drop trajectories tangent to the body; and (7) a code that produces stereo pair plots which include both the body and trajectories. Accuracy of the calculations is discussed, and trajectory calculation results are compared with prior calculations and with experimental data.
Improvement of COBRA-TF for modeling of PWR cold- and hot-legs during reactor transients
NASA Astrophysics Data System (ADS)
Salko, Robert K.
COBRA-TF is a two-phase, three-field (liquid, vapor, droplets) thermal-hydraulic modeling tool that has been developed by the Pacific Northwest Laboratory under sponsorship of the NRC. The code was developed for Light Water Reactor analysis starting in the 1980s; however, its development has continued to this current time. COBRA-TF still finds wide-spread use throughout the nuclear engineering field, including nuclear-power vendors, academia, and research institutions. It has been proposed that extension of the COBRA-TF code-modeling region from vessel-only components to Pressurized Water Reactor (PWR) coolant-line regions can lead to improved Loss-of-Coolant Accident (LOCA) analysis. Improved modeling is anticipated due to COBRA-TF's capability to independently model the entrained-droplet flow-field behavior, which has been observed to impact delivery to the core region[1]. Because COBRA-TF was originally developed for vertically-dominated, in-vessel, sub-channel flow, extension of the COBRA-TF modeling region to the horizontal-pipe geometries of the coolant-lines required several code modifications, including: • Inclusion of the stratified flow regime into the COBRA-TF flow regime map, along with associated interfacial drag, wall drag and interfacial heat transfer correlations, • Inclusion of a horizontal-stratification force between adjacent mesh cells having unequal levels of stratified flow, and • Generation of a new code-input interface for the modeling of coolant-lines. The sheer number of COBRA-TF modifications that were required to complete this work turned this project into a code-development project as much as it was a study of thermal-hydraulics in reactor coolant-lines. The means for achieving these tasks shifted along the way, ultimately leading the development of a separate, nearly completely independent one-dimensional, two-phase-flow modeling code geared toward reactor coolant-line analysis. This developed code has been named CLAP, for Coolant-Line-Analysis Package. Versions were created that were both coupled to COBRA-TF and standalone, with the most recent version being a standalone code. This code performs a separate, simplified, 1-D solution of the conservation equations while making special considerations for coolant-line geometry and flow phenomena. The end of this project saw a functional code package that demonstrates a stable numerical solution and that has gone through a series of Validation and Verification tests using the Two-Phase Testing Facility (TPTF) experimental data[2]. The results indicate that CLAP is under-performing RELAP5-MOD3 in predicting the experimental void of the TPTF facility in some cases. There is no apparent pattern, however, to point to a consistent type of case that the code fails to predict properly (e.g., low-flow, high-flow, discharging to full vessel, or discharging to empty vessel). Pressure-profile predictions are sometimes unrealistic, which indicates that there may be a problem with test-case boundary conditions or with the coupling of continuity and momentum equations in the solution algorithm. The code does predict the flow regime correctly for all cases with the stratification-force model off. Turning the stratification model on can cause the low-flow case void profiles to over-react to the force and the flow regime to transition out of stratified flow. The code would benefit from an increased amount of Validation & Verification testing. 
The development of CLAP was significant, as it is a cleanly written, logical representation of the reactor coolant-line geometry. It is stable and capable of modeling basic flow physics in the reactor coolant line. Code development and debugging required the temporary removal of the energy equation and the mass-transfer terms in the governing equations; reintroducing these terms will allow future coupling to RELAP and re-coupling with COBRA-TF. Adding more applicable entrainment and de-entrainment models would allow the capture of more advanced physics in the coolant line that can be expected during a Loss-of-Coolant Accident. One of the package's benefits is its ability to serve as a platform for future coolant-line model development and implementation, including capture of the important de-entrainment behavior in reactor hot legs (the steam-binding effect) and flow convection in the upper-plenum region of the vessel.
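To make the stratified-flow closure concrete, the sketch below is the kind of geometric relation such a model needs: given a cell void fraction and pipe diameter, it recovers the liquid level and the gas-liquid interface width in a horizontal circular pipe from the circular-segment area relation. This is an illustrative stand-in, not CLAP or COBRA-TF source; the diameter and void fraction in the example are assumed values.

```python
import math

def stratified_geometry(alpha, D, tol=1e-10):
    """Liquid level and interface (chord) width in a horizontal circular pipe.

    alpha : void fraction (gas area / total area), 0 < alpha < 1
    D     : pipe inner diameter [m]
    Returns (liquid_level, interface_width) in metres, found by bisection
    on the circular-segment area relation.
    """
    R = 0.5 * D
    A_liquid = (1.0 - alpha) * math.pi * R**2

    def segment_area(h):
        # Liquid cross-section area for a liquid depth h (0 <= h <= D)
        return R**2 * math.acos(1.0 - h / R) - (R - h) * math.sqrt(max(2.0 * R * h - h * h, 0.0))

    lo, hi = 0.0, D
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if segment_area(mid) < A_liquid:
            lo = mid
        else:
            hi = mid
    h = 0.5 * (lo + hi)
    width = 2.0 * math.sqrt(max(h * (D - h), 0.0))   # chord length at the gas-liquid interface
    return h, width

# Example: 60% void fraction in a 0.7 m cold-leg pipe (illustrative values)
print(stratified_geometry(0.6, 0.7))
```

The interface width is the quantity a stratified interfacial-drag or interfacial-heat-transfer correlation would typically be built on.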
2015-06-01
Data reduction for analysis of Command, Control, Communications, and Computer (C4) network tests: handling of events was ad hoc and problematic due to time constraints and changing requirements, and determining errors in context and heuristics required expertise.
Efficient full wave code for the coupling of large multirow multijunction LH grills
NASA Astrophysics Data System (ADS)
Preinhaelter, Josef; Hillairet, Julien; Milanesio, Daniele; Maggiora, Riccardo; Urban, Jakub; Vahala, Linda; Vahala, George
2017-11-01
The full wave code OLGA, for determining the coupling of a single row lower hybrid launcher (waveguide grills) to the plasma, is extended to handle multirow multijunction active passive structures (like the C3 and C4 launchers on TORE SUPRA) by implementing the scattering matrix formalism. The extended code is still computationally fast because of the use of (i) 2D splines of the plasma surface admittance in the accessibility region of the k-space, (ii) high order Gaussian quadrature rules for the integration of the coupling elements and (iii) utilizing the symmetries of the coupling elements in the multiperiodic structures. The extended OLGA code is benchmarked against the ALOHA-1D, ALOHA-2D and TOPLHA codes for the coupling of the C3 and C4 TORE SUPRA launchers for several plasma configurations derived from reflectometry and interferometry. Unlike nearly all codes (except the ALOHA-1D code), OLGA does not require large computational resources and can be used for everyday usage in planning experimental runs. In particular, it is shown that the OLGA code correctly handles the coupling of the C3 and C4 launchers over a very wide range of plasma densities in front of the grill.
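As an illustration of point (ii), the sketch below shows how a high-order Gauss-Legendre rule resolves a mildly oscillatory integrand with few points. The integrand and the quadrature orders are arbitrary stand-ins, not the OLGA coupling elements.

```python
import numpy as np

def gauss_legendre_integral(f, a, b, order):
    """Integrate f on [a, b] with an 'order'-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(order)   # nodes/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)          # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(xm))

# A mildly oscillatory stand-in for a coupling-element integrand (assumed, not from OLGA)
f = lambda x: np.cos(12.0 * x) * np.exp(-x**2)

reference = gauss_legendre_integral(f, -1.0, 1.0, 200)   # very high order as a reference
for n in (4, 8, 16, 32):
    approx = gauss_legendre_integral(f, -1.0, 1.0, n)
    print(f"{n:3d} points: error = {abs(approx - reference):.2e}")
```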
Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; García, Sebastián Gimeno
2013-05-01
Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
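A minimal sketch of the last two steps of that workflow, assuming a single Lorentzian line and a homogeneous path (illustrative values, not real HITRAN data and not the Py4CAtS API):

```python
import numpy as np

# Assumed single-line example: a Lorentz profile centred at nu0
nu = np.linspace(990.0, 1010.0, 2001)          # wavenumber grid [cm^-1]
nu0, S, gamma = 1000.0, 1.0e-19, 0.08          # line position, strength [cm/molec], HWHM [cm^-1]
xs = S * (gamma / np.pi) / ((nu - nu0)**2 + gamma**2)   # cross section [cm^2/molec]

n_density = 2.5e13        # molecule number density [molec/cm^3] (assumed)
path = 1.0e5              # homogeneous path length [cm] (1 km)

tau = xs * n_density * path        # optical depth along the (uniform) path
transmission = np.exp(-tau)        # Beer-Lambert law

print("peak optical depth:", tau.max())
print("minimum transmission:", transmission.min())
```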
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il
2014-08-15
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.
Laser Signature Prediction Using The VALUE Computer Program
NASA Astrophysics Data System (ADS)
Akerman, Alexander; Hoffman, George A.; Patton, Ronald
1989-09-01
A variety of enhancements are being made to the 1976-vintage LASERX computer code. These include: - Surface characterization with BRDF tabular data - Specular reflection from transparent surfaces - Generation of glint direction maps - Generation of relative range imagery - Interface to the LOWTRAN atmospheric transmission code - Interface to the LEOPS laser sensor code - User friendly menu prompting for easy setup Versions of VALUE have been written for both VAX/VMS and PC/DOS computer environments. Outputs have also been revised to be user friendly and include tables, plots, and images for (1) intensity, (2) cross section, (3) reflectance, (4) relative range, (5) region type, and (6) silhouette.
MAGIC Computer Simulation. Volume 1: User Manual
1970-07-01
vulnerability and MAGIC programs. A three-digit code is assigned to each component of the target, such as armor or gun tube, and a two-digit code is assigned to... A review of the subject MAGIC Computer Simulation User and Analyst Manuals has been conducted based upon a request received from the US Army...
Analysis of Delays in Transmitting Time Code Using an Automated Computer Time Distribution System
1999-12-01
jlevine@clock.bldrdoc.gov Abstract: An automated computer time distribution system broadcasts standard time to users using computers and modems via... contributed to delays - software platform (50% of the delay), transmission speed of time-codes (25%), telephone network (15%), modem and others (10%). The... modems, and telephone lines. Users dial the ACTS server to receive time traceable to the national time scale of Singapore, UTC(PSB). The users can in
DYNAMIC MODELING STRATEGY FOR FLOW REGIME TRANSITION IN GAS-LIQUID TWO-PHASE FLOWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
X. Wang; X. Sun; H. Zhao
In modeling gas-liquid two-phase flows, the concept of flow regime has been used to characterize the global interfacial structure of the flows. Nearly all constitutive relations that provide closures to the interfacial transfers in two-phase flow models, such as the two-fluid model, are often flow regime dependent. Currently, the determination of the flow regimes is primarily based on flow regime maps or transition criteria, which are developed for steady-state, fully-developed flows and widely applied in nuclear reactor system safety analysis codes, such as RELAP5. As two-phase flows are observed to be dynamic in nature (fully-developed two-phase flows generally do not exist in real applications), it is of importance to model the flow regime transition dynamically for more accurate predictions of two-phase flows. The present work aims to develop a dynamic modeling strategy for determining flow regimes in gas-liquid two-phase flows through the introduction of interfacial area transport equations (IATEs) within the framework of a two-fluid model. The IATE is a transport equation that models the interfacial area concentration by considering the creation and destruction of the interfacial area, such as the fluid particle (bubble or liquid droplet) disintegration, boiling and evaporation; and fluid particle coalescence and condensation, respectively. For the flow regimes beyond bubbly flows, a two-group IATE has been proposed, in which bubbles are divided into two groups based on their size and shape (which are correlated), namely small bubbles and large bubbles. A preliminary approach to dynamically identifying the flow regimes is provided, in which discriminators are based on the predicted information, such as the void fraction and interfacial area concentration of small bubble and large bubble groups. This method is expected to be applied to computer codes to improve their predictive capabilities of gas-liquid two-phase flows, in particular for the applications in which flow regime transition occurs.
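As a rough illustration of such a discriminator, a classification driven by the predicted void fraction and the two-group interfacial area concentrations could look like the sketch below. The thresholds are placeholders chosen for illustration, not the criteria proposed in the cited work.

```python
def flow_regime(alpha, a_i1, a_i2):
    """Classify the local flow regime from predicted two-group quantities.

    alpha : total void fraction
    a_i1  : group-1 (small, spherical bubble) interfacial area concentration [1/m]
    a_i2  : group-2 (large cap/slug bubble) interfacial area concentration [1/m]

    The numeric thresholds are illustrative assumptions only; a real
    discriminator would be calibrated against data.
    """
    if alpha < 0.25 and a_i2 < 0.1 * a_i1:
        return "bubbly"
    if alpha < 0.75 and a_i2 >= 0.1 * a_i1:
        return "cap/slug (group-2 dominated)"
    if alpha < 0.95:
        return "churn-turbulent"
    return "annular"

print(flow_regime(0.15, 120.0, 2.0))    # -> "bubbly"
print(flow_regime(0.45, 60.0, 40.0))    # -> "cap/slug (group-2 dominated)"
```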
Leonard, Antony; Marando, Catherine; Rahman, Arshad
2013-01-01
Endothelial cell (EC) inflammation is a central event in the pathogenesis of many pulmonary diseases such as acute lung injury and its more severe form acute respiratory distress syndrome. Alterations in actin cytoskeleton are shown to be crucial for NF-κB regulation and EC inflammation. Previously, we have described a role of actin binding protein cofilin in mediating cytoskeletal alterations essential for NF-κB activation and EC inflammation. The present study describes a dynamic mechanism in which LIM kinase 1 (LIMK1), a cofilin kinase, and slingshot-1Long (SSH-1L), a cofilin phosphatase, are engaged by procoagulant and proinflammatory mediator thrombin to regulate these responses. Our data show that knockdown of LIMK1 destabilizes whereas knockdown of SSH-1L stabilizes the actin filaments through modulation of cofilin phosphorylation; however, in either case thrombin-induced NF-κB activity and expression of its target genes (ICAM-1 and VCAM-1) is inhibited. Further mechanistic analyses reveal that knockdown of LIMK1 or SSH-1L each attenuates nuclear translocation and thereby DNA binding of RelA/p65. In addition, LIMK1 or SSH-1L depletion inhibited RelA/p65 phosphorylation at Ser536, a critical event conferring transcriptional competency to the bound NF-κB. However, unlike SSH-1L, LIMK1 knockdown also impairs the release of RelA/p65 by blocking IKKβ-dependent phosphorylation/degradation of IκBα. Interestingly, LIMK1 or SSH-1L depletion failed to inhibit TNF-α-induced RelA/p65 nuclear translocation and proinflammatory gene expression. Thus this study provides evidence for a novel role of LIMK1 and SSH-1L in selectively regulating EC inflammation associated with intravascular coagulation. PMID:24039253
Leonard, Antony; Marando, Catherine; Rahman, Arshad; Fazal, Fabeha
2013-11-01
Endothelial cell (EC) inflammation is a central event in the pathogenesis of many pulmonary diseases such as acute lung injury and its more severe form acute respiratory distress syndrome. Alterations in actin cytoskeleton are shown to be crucial for NF-κB regulation and EC inflammation. Previously, we have described a role of actin binding protein cofilin in mediating cytoskeletal alterations essential for NF-κB activation and EC inflammation. The present study describes a dynamic mechanism in which LIM kinase 1 (LIMK1), a cofilin kinase, and slingshot-1Long (SSH-1L), a cofilin phosphatase, are engaged by procoagulant and proinflammatory mediator thrombin to regulate these responses. Our data show that knockdown of LIMK1 destabilizes whereas knockdown of SSH-1L stabilizes the actin filaments through modulation of cofilin phosphorylation; however, in either case thrombin-induced NF-κB activity and expression of its target genes (ICAM-1 and VCAM-1) is inhibited. Further mechanistic analyses reveal that knockdown of LIMK1 or SSH-1L each attenuates nuclear translocation and thereby DNA binding of RelA/p65. In addition, LIMK1 or SSH-1L depletion inhibited RelA/p65 phosphorylation at Ser(536), a critical event conferring transcriptional competency to the bound NF-κB. However, unlike SSH-1L, LIMK1 knockdown also impairs the release of RelA/p65 by blocking IKKβ-dependent phosphorylation/degradation of IκBα. Interestingly, LIMK1 or SSH-1L depletion failed to inhibit TNF-α-induced RelA/p65 nuclear translocation and proinflammatory gene expression. Thus this study provides evidence for a novel role of LIMK1 and SSH-1L in selectively regulating EC inflammation associated with intravascular coagulation.
ERIC Educational Resources Information Center
Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.
2016-01-01
For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…
Pretest analysis document for Test S-NH-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owca, W.A.
This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY3601 code for Semiscale MOD-2C Test S-NH-1. The test will simulate the shear of a small diameter penetration of a cold leg, equivalent to 0.5% of the cold leg flow area. The high pressure injection system is assumed to be inoperative throughout the transient. The recovery procedure consists of latching open both steam generator ADV's while feeding with auxiliary feedwater, and accumulator operation. Recovery will be initiated upon a peak cladding temperature of 811 K (1000°F). The test will be terminated when primary pressure has been reduced to the low pressure injection system setpoint of 1.38 MPa (200 psia). The calculated results indicate that the test objectives can be achieved and the proposed test scenario poses no threat to personnel or to plant integrity. 12 figs.
MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1994-01-01
MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416 page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic , Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code and 418K. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
Quantum computing with Majorana fermion codes
NASA Astrophysics Data System (ADS)
Litinski, Daniel; von Oppen, Felix
2018-05-01
We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.
Modeling moving systems with RELAP5-3D
Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; ...
2015-12-04
RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
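A sketch of the standard non-inertial-frame kinematics involved is shown below: the effective body acceleration felt by the fluid combines gravity, the frame's translational acceleration, and the centrifugal, Coriolis, and Euler terms. All input values are assumed for illustration; this is not RELAP5-3D source or input.

```python
import numpy as np

def effective_body_acceleration(g, a_frame, omega, omega_dot, r, u_rel):
    """Acceleration [m/s^2] felt by fluid in a non-inertial (craft-fixed) frame.

    g         : gravitational acceleration expressed in the frame's axes
    a_frame   : translational acceleration of the frame origin
    omega     : angular velocity of the frame
    omega_dot : angular acceleration of the frame
    r         : position of the fluid cell relative to the frame origin
    u_rel     : fluid velocity relative to the frame
    """
    g, a_frame, omega, omega_dot, r, u_rel = map(
        np.asarray, (g, a_frame, omega, omega_dot, r, u_rel))
    centrifugal = -np.cross(omega, np.cross(omega, r))
    coriolis = -2.0 * np.cross(omega, u_rel)
    euler = -np.cross(omega_dot, r)
    return g - a_frame + centrifugal + coriolis + euler

# Illustrative values (assumed): slow roll of 0.5 rad/s about x, cell 2 m off-axis
print(effective_body_acceleration(g=[0, 0, -9.81], a_frame=[0.2, 0, 0],
                                  omega=[0.5, 0, 0], omega_dot=[0, 0, 0],
                                  r=[0, 2.0, 0], u_rel=[0, 0, 1.0]))
```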
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation... Final Technical Report, February 81 - July 83... CODE DOCUMENTATION (Version 3)... the t1 and t2 directions on the source patch. METHOD: The electric field at a segment observation point due to the source patch j is given by...
NASA Technical Reports Server (NTRS)
Rathjen, K. A.
1977-01-01
A digital computer code CAVE (Conduction Analysis Via Eigenvalues), which finds application in the analysis of two dimensional transient heating of hypersonic vehicles is described. The CAVE is written in FORTRAN 4 and is operational on both IBM 360-67 and CDC 6600 computers. The method of solution is a hybrid analytical numerical technique that is inherently stable permitting large time steps even with the best of conductors having the finest of mesh size. The aerodynamic heating boundary conditions are calculated by the code based on the input flight trajectory or can optionally be calculated external to the code and then entered as input data. The code computes the network conduction and convection links, as well as capacitance values, given basic geometrical and mesh sizes, for four generations (leading edges, cooled panels, X-24C structure and slabs). Input and output formats are presented and explained. Sample problems are included. A brief summary of the hybrid analytical-numerical technique, which utilizes eigenvalues (thermal frequencies) and eigenvectors (thermal mode vectors) is given along with aerodynamic heating equations that have been incorporated in the code and flow charts.
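The stability property comes from advancing the spatially discretized system exactly through its eigen-decomposition, so the step size is not limited by the mesh. A minimal 1-D stand-in for that idea is sketched below; the rod problem, property values, and boundary treatment are assumptions, not the CAVE formulation.

```python
import numpy as np

# 1-D rod, ends held at a fixed reference temperature of 0 for simplicity (assumed stand-in)
n, L, alpha = 50, 1.0, 1.0e-5          # interior nodes, length [m], thermal diffusivity [m^2/s]
dx = L / (n + 1)

# Second-difference conduction matrix: dT/dt = A T  (Dirichlet ends folded into A)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * alpha / dx**2

# Eigen-decomposition (A is symmetric): A = V diag(lam) V^T
lam, V = np.linalg.eigh(A)

def advance(T0, t):
    """Exact solution of the semi-discrete system at time t: T = V exp(lam t) V^T T0."""
    return V @ (np.exp(lam * t) * (V.T @ T0))

T0 = np.full(n, 300.0)                 # initial temperature relative to the ends [K]
T0[: n // 2] = 600.0                   # hot half
print(advance(T0, 1000.0)[:5])         # one large step, with no stability limit on its size
```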
Method for rapid high-frequency seismogram calculation
NASA Astrophysics Data System (ADS)
Stabile, Tony Alfredo; De Matteis, Raffaella; Zollo, Aldo
2009-02-01
We present a method for rapid, high-frequency seismogram calculation that makes use of an algorithm to automatically generate an exhaustive set of seismic phases with an appreciable amplitude on the seismogram. The method uses a hierarchical order of ray and seismic-phase generation, taking into account some existing constraints for ray paths and some physical constraints. To compute synthetic seismograms, the COMRAD code (from the Italian: "COdice Multifase per il RAy-tracing Dinamico") uses a dynamic ray-tracing code as its core. To validate the code, we have computed synthetic seismograms in a layered medium using both COMRAD and a code that computes the complete wave field by the discrete wavenumber method. The seismograms are compared according to a time-frequency misfit criterion based on the continuous wavelet transform of the signals. Although the number of phases is considerably reduced by the selection criteria, the results show that the loss in amplitude on the whole seismogram is negligible. Moreover, the time needed to compute the synthetics using the COMRAD code (truncating the ray series at the 10th generation) is 3-4-fold less than that needed for the AXITRA code (up to a frequency of 25 Hz).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1972-01-01
A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, is described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
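The underlying problem is a constrained minimization: total shield weight subject to an exponential dose-thickness relation. The sketch below poses that problem for an assumed two-layer, single-direction case and, instead of DOPEX's steepest-descent iteration, hands it to SciPy's SLSQP solver as a stand-in; all material data and dose parameters are illustrative assumptions, not DOPEX input.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed single-direction, two-layer example (illustrative, not DOPEX input)
rho = np.array([19.3e3, 0.94e3])     # layer densities [kg/m^3] (e.g. tungsten, polyethylene)
area = 1.0                           # shielded area in this direction [m^2]
D0 = np.array([5.0, 8.0])            # unattenuated dose-rate contributions [arb. units]
mu = np.array([60.0, 12.0])          # exponential attenuation coefficients [1/m]
D_limit = 0.01                       # allowed total dose rate [same units as D0]

weight = lambda t: area * float(np.dot(rho, t))              # objective: shield weight
dose = lambda t: float(np.sum(D0 * np.exp(-mu * t)))         # exponential dose model

res = minimize(
    weight,
    x0=np.array([0.2, 0.6]),                                 # feasible initial thicknesses [m]
    jac=lambda t: area * rho,
    constraints=[{"type": "ineq", "fun": lambda t: D_limit - dose(t)}],
    bounds=[(0.0, None), (0.0, None)],
    method="SLSQP",
)
print("thicknesses [m]:", res.x, " weight [kg]:", weight(res.x), " dose:", dose(res.x))
```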
CFD Based Computations of Flexible Helicopter Blades for Stability Analysis
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2011-01-01
As a collaborative effort among government aerospace research laboratories, an advanced version of a widely used computational fluid dynamics code, OVERFLOW, was recently released. This latest version includes additions to model flexible rotating multiple blades. In this paper, the OVERFLOW code is applied to improve the accuracy of airload computations from the linear lifting line theory that uses displacements from a beam model. Data transfers required at every revolution are managed through a Unix based script that runs jobs on large super-cluster computers. Results are demonstrated for the 4-bladed UH-60A helicopter. Deviations of computed data from flight data are evaluated. Fourier analysis post-processing that is suitable for aeroelastic stability computations is performed.
Performance of MCNP4A on seven computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, J.S.; Brockhoff, R.C.
1994-12-31
The performance of seven computer platforms has been evaluated with the MCNP4A Monte Carlo radiation transport code. For the first time we report timing results using MCNP4A and its new test set and libraries. Comparisons are made on platforms not available to us in previous MCNP timing studies. By using MCNP4A and its 25-problem test set, a widely-used and readily-available physics production code is used; the timing comparison is not limited to a single "typical" problem, demonstrating the problem dependence of timing results; the results are reproducible at the more than 100 installations around the world using MCNP; comparison of performance of other computer platforms to the ones tested in this study is possible because we present raw data rather than normalized results; and a measure of the increase in performance of computer hardware and software over the past two years is possible. The computer platforms reported are the Cray-YMP 8/64, IBM RS/6000-560, Sun Sparc10, Sun Sparc2, HP/9000-735, 4 processor 100 MHz Silicon Graphics ONYX, and Gateway 2000 model 4DX2-66V PC. In 1991 a timing study of MCNP4, the predecessor to MCNP4A, was conducted using ENDF/B-V cross-section libraries, which are export protected. The new study is based upon the new MCNP 25-problem test set which utilizes internationally available data. MCNP4A, its test problems and the test data library are available from the Radiation Shielding and Information Center in Oak Ridge, Tennessee, or from the NEA Data Bank in Saclay, France. Anyone with the same workstation and compiler can get the same test problem sets, the same library files, and the same MCNP4A code from RSIC or NEA and replicate our results. And, because we report raw data, comparison of the performance of other computer platforms and compilers can be made.
NASA Technical Reports Server (NTRS)
Walowit, Jed A.; Shapiro, Wilbur
2005-01-01
The SPIRALI code predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures. A derivation of the equations governing the performance of turbulent, incompressible, spiral groove cylindrical and face seals along with a description of their solution is given. The computer codes are described, including an input description, sample cases, and comparisons with results of other codes.
Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
NASA Astrophysics Data System (ADS)
Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector is selected for simulation in the present study. The proposed algorithm for simulation includes four main steps. The first step is the modeling of the neutron/gamma particle transport and their interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is considered in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed computer code is applicable to both neutron and gamma sources. Hence, the discrimination of neutron and gamma in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without needing any sort of post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs source is also compared with the experimental data.
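As an illustration of the final (resolution) step only, the sketch below smears an assumed ideal light-output distribution with an energy-dependent Gaussian and histograms the result. The resolution-function coefficients and the toy spectrum are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy light-output data [MeVee]: a flat Compton continuum up to the Compton edge
edge = 0.48                                    # roughly the Compton edge for a 662 keV gamma
light = rng.uniform(0.0, edge, size=200_000)

def resolution_fwhm(L, a=0.10, b=0.08, c=0.002):
    """FWHM(L) = sqrt(a^2 L^2 + b^2 L + c^2); a, b, c are assumed, detector-specific values."""
    return np.sqrt(a**2 * L**2 + b**2 * L + c**2)

# Broaden each event with a Gaussian of the corresponding width, then histogram
sigma = resolution_fwhm(light) / 2.355         # FWHM -> standard deviation
measured = rng.normal(light, sigma)
counts, bin_edges = np.histogram(measured, bins=256, range=(0.0, 0.7))
print(counts[:10])
```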
Computer Aided Self-Forging Fragment Design,
1978-06-01
This value is reached so quickly that HEMP solutions using work hardening and those using only elastic-perfectly plastic formulations are quite... Elastic-Plastic Flow, UCRL-7322, Lawrence Radiation Laboratory, Livermore, California (1969). 4. Giroux, E. D., HEMP Users Manual, UCRL-51079... Laboratory, the HEMP computer code has been developed to serve as an effective design tool to simplify this task considerably. Using this code, warheads...
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra prediction process of the H.264 video coding standard is used to code the first (intra) frame of a video, giving better coding efficiency than earlier video coding standards. Intra frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with the Rate Distortion Optimization (RDO) method, which increases computational complexity, increases bit rate, and reduces picture quality, making it difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous work on intra frame coding in the H.264 standard using fast mode decision intra prediction algorithms based on different techniques resulted in increased bit rate and degraded picture quality (PSNR) at different quantization parameters. Many earlier fast mode decision approaches for intra frame coding achieved only a reduction in computational complexity (saving encoding time), at the cost of increased bit rate and loss of picture quality. In order to avoid the increase in bit rate and the loss of picture quality, a better approach was developed. This paper applies a Gaussian pulse to intra frame coding, using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, the Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before the quantization process. Multiplying each 4x4 integer-transformed coefficient block by the Gaussian pulse at the macroblock level scales the information of the coefficients in a reversible manner, so the resulting signal becomes abstract: frequency samples are modified in a known and controllable manner without intermixing of coefficients, which prevents the picture from being badly degraded at higher values of the quantization parameter. The proposed work was implemented using MATLAB and the JM 18.6 reference software. The performance parameters PSNR, bit rate, and compression were measured for intra frames of YUV video sequences at QCIF resolution under different values of the quantization parameter, with the Gaussian value applied to the diagonal down-left intra prediction mode. The simulation results of the proposed algorithm are tabulated and compared with the previous algorithm of Tian et al. The proposed algorithm reduced the bit rate by 30.98% on average and maintained consistent picture quality for QCIF sequences compared to the Tian et al. method.
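A minimal sketch of the coefficient-weighting idea follows, using the standard H.264 4x4 forward core transform together with an assumed Gaussian mask and a simplified scalar quantization; the paper's exact pulse width, scaling, and quantization stages are not reproduced here.

```python
import numpy as np

# H.264 4x4 forward core transform matrix (the standard integer approximation of the DCT)
Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def gaussian_weights(sigma=1.5):
    """4x4 weighting mask built from a 2-D Gaussian over coefficient indices.

    The pulse shape and width used by the paper are not specified here;
    sigma is an assumed illustrative value."""
    u = np.arange(4)
    g = np.exp(-(u**2) / (2.0 * sigma**2))
    return np.outer(g, g)

def encode_block(residual, qstep=10.0, sigma=1.5):
    """Residual 4x4 block -> weighted, quantized transform coefficients (simplified)."""
    Y = Cf @ residual @ Cf.T             # core transform (normalisation folded into quantization)
    Yw = Y * gaussian_weights(sigma)     # reversible, element-wise scaling of the coefficients
    return np.round(Yw / qstep).astype(int)

residual = np.arange(16).reshape(4, 4) - 8      # toy residual block
print(encode_block(residual))
```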
Developments in REDES: The rocket engine design expert system
NASA Technical Reports Server (NTRS)
Davidian, Kenneth O.
1990-01-01
The Rocket Engine Design Expert System (REDES) is being developed at the NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP, a nozzle design program named RAO, a regenerative cooling channel performance evaluation code named RTE, and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES is built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.
Developments in REDES: The Rocket Engine Design Expert System
NASA Technical Reports Server (NTRS)
Davidian, Kenneth O.
1990-01-01
The Rocket Engine Design Expert System (REDES) was developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP; a nozzle design program named RAO; a regenerative cooling channel performance evaluation code named RTE; and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES was built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.
PHoToNs–A parallel heterogeneous and threads oriented code for cosmological N-body simulation
NASA Astrophysics Data System (ADS)
Wang, Qiao; Cao, Zong-Yan; Gao, Liang; Chi, Xue-Bin; Meng, Chen; Wang, Jie; Wang, Long
2018-06-01
We introduce a new code for cosmological simulations, PHoToNs, which incorporates features for performing massive cosmological simulations on heterogeneous high performance computer (HPC) systems and threads oriented programming. PHoToNs adopts a hybrid scheme to compute gravitational force, with the conventional Particle-Mesh (PM) algorithm to compute the long-range force, the Tree algorithm to compute the short range force and the direct summation Particle-Particle (PP) algorithm to compute gravity from very close particles. A self-similar, space-filling Peano-Hilbert curve is used to decompose the computing domain. Threads programming is advantageously used to more flexibly manage the domain communication, PM calculation and synchronization, as well as Dual Tree Traversal on the CPU+MIC platform. PHoToNs scales well and the efficiency of the PP kernel achieves 68.6% of peak performance on MIC and 74.4% on CPU platforms. We also test the accuracy of the code against the widely used Gadget-2 code and find excellent agreement.
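For the PP part alone, a direct-summation kernel with Plummer softening might look like the sketch below; the softening, units, and particle data are assumed, and this is not the PHoToNs kernel.

```python
import numpy as np

def pp_accelerations(pos, mass, softening=1.0e-2, G=1.0):
    """Direct particle-particle gravitational accelerations with Plummer softening.

    pos  : (N, 3) particle positions
    mass : (N,) particle masses
    Pairwise O(N^2) summation -- only sensible for the 'very close particle' part
    of a PM + Tree + PP scheme, as described in the abstract.
    """
    dx = pos[None, :, :] - pos[:, None, :]           # (N, N, 3) separation vectors
    r2 = np.sum(dx * dx, axis=-1) + softening**2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                    # exclude self-force
    return G * np.einsum("ij,ijk,j->ik", inv_r3, dx, mass)

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(256, 3))
mass = np.full(256, 1.0 / 256)
print(pp_accelerations(pos, mass)[:2])
```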
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.
User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earth Sciences Division; Zhang, Keni; Zhang, Keni
TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.
Review and verification of CARE 3 mathematical model and code
NASA Technical Reports Server (NTRS)
Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.
1983-01-01
The CARE-III mathematical model and code verification performed by Boeing Computer Services were documented. The mathematical model was verified for permanent and intermittent faults. The transient fault model was not addressed. The code verification was performed on CARE-III, Version 3. A CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C. C.
The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based design, analysis, and tools for theory. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.
Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equation and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
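For reference, a Morison-type strip model of the kind referred to above computes an in-line force per unit length on a cylindrical member from an inertia term and a drag term. The sketch below uses assumed coefficients and regular-wave kinematics and is not HydroDyn code.

```python
import numpy as np

def morison_force_per_length(u, u_dot, D, Cd=1.0, Cm=2.0, rho=1025.0):
    """Morison in-line force per unit length on a fixed circular cylinder [N/m].

    u, u_dot : water particle velocity [m/s] and acceleration [m/s^2] normal to the member
    D        : member diameter [m]
    Cd, Cm   : drag and inertia coefficients (assumed typical values)
    """
    A = np.pi * D**2 / 4.0                      # displaced area per unit length
    inertia = rho * Cm * A * u_dot
    drag = 0.5 * rho * Cd * D * u * np.abs(u)
    return inertia + drag

# Illustrative linear-wave kinematics at one elevation (assumed values)
H, T = 6.0, 10.0                                # wave height [m], period [s]
t = np.linspace(0.0, 10.0, 6)
omega = 2.0 * np.pi / T
u = 0.5 * H * omega * np.cos(omega * t)
u_dot = -0.5 * H * omega**2 * np.sin(omega * t)
print(morison_force_per_length(u, u_dot, D=6.5))
```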
Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies
NASA Technical Reports Server (NTRS)
Reyhner, T. A.
1982-01-01
An analysis was developed and a computer code, P465 Version A, written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.
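Line relaxation of this kind solves a tridiagonal system along one grid line at a time while lagging the neighbouring lines. A minimal sketch on a Laplace stand-in is shown below; it is not the full potential equation and not the P465 implementation, and the grid and boundary values are assumed.

```python
import numpy as np
from scipy.linalg import solve_banded

def line_relax_laplace(phi, n_sweeps=200):
    """Line (column) relaxation for the 2-D Laplace equation on a uniform grid.

    Each sweep solves, column by column, the tridiagonal system obtained by
    treating the current column implicitly and its neighbours explicitly.
    Boundary values of phi are held fixed (Dirichlet).
    """
    ny, nx = phi.shape
    # Tridiagonal operator for one interior column: -phi[j-1] + 4 phi[j] - phi[j+1] = rhs
    ab = np.zeros((3, ny - 2))
    ab[0, 1:] = -1.0      # super-diagonal
    ab[1, :] = 4.0        # diagonal
    ab[2, :-1] = -1.0     # sub-diagonal
    for _ in range(n_sweeps):
        for i in range(1, nx - 1):
            rhs = phi[1:-1, i - 1] + phi[1:-1, i + 1]
            rhs[0] += phi[0, i]        # bottom boundary contribution
            rhs[-1] += phi[-1, i]      # top boundary contribution
            phi[1:-1, i] = solve_banded((1, 1), ab, rhs)
    return phi

phi = np.zeros((41, 41))
phi[-1, :] = 1.0                          # top wall held at 1, all other walls at 0
print(line_relax_laplace(phi)[20, 20])    # interior value after relaxation (about 0.25)
```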
User's manual for COAST 4: a code for costing and sizing tokamaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sink, D. A.; Iwinski, E. M.
1979-09-01
The purpose of this report is to document the computer program COAST 4 for the user/analyst. COAST, COst And Size Tokamak reactors, provides complete and self-consistent size models for the engineering features of D-T burning tokamak reactors and associated facilities involving a continuum of performance including highly beam driven through ignited plasma devices. TNS (The Next Step) devices with no tritium breeding or electrical power production are handled as well as power producing and fissile producing fusion-fission hybrid reactors. The code has been normalized with a TFTR calculation which is consistent with cost, size, and performance data published in the conceptual design report for that device. Information on code development, computer implementation and detailed user instructions are included in the text.
Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R; Doan, Phuc N; Devasahayam, Nallathamby; Matsumoto, Shingo; Johnson, Calvin A; Cook, John A; Mitchell, James B; Subramanian, Sankaran; Krishna, Murali C
2009-01-01
Three-dimensional Oximetric Electron Paramagnetic Resonance Imaging using the Single Point Imaging modality generates unpaired spin density and oxygen images that can readily distinguish between normal and tumor tissues in small animals. It is also possible with fast imaging to track the changes in tissue oxygenation in response to the oxygen content in the breathing air. However, this involves dealing with gigabytes of data for each 3D oximetric imaging experiment involving digital band pass filtering and background noise subtraction, followed by 3D Fourier reconstruction. This process is rather slow in a conventional uniprocessor system. This paper presents a parallelization framework using OpenMP runtime support and parallel MATLAB to execute such computationally intensive programs. The Intel compiler is used to develop a parallel C++ code based on OpenMP. The code is executed on four Dual-Core AMD Opteron shared memory processors, to reduce the computational burden of the filtration task significantly. The results show that the parallel code for filtration has achieved a speed up factor of 46.66 as against the equivalent serial MATLAB code. In addition, a parallel MATLAB code has been developed to perform 3D Fourier reconstruction. Speedup factors of 4.57 and 4.25 have been achieved during the reconstruction process and oximetry computation, for a data set with 23 x 23 x 23 gradient steps. The execution time has been computed for both the serial and parallel implementations using different dimensions of the data and presented for comparison. The reported system has been designed to be easily accessible even from low-cost personal computers through local internet (NIHnet). The experimental results demonstrate that the parallel computing provides a source of high computational power to obtain biophysical parameters from 3D EPR oximetric imaging, almost in real-time.
Computations of the Magnus effect for slender bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Sturek, W. B.; Schiff, L. B.
1980-01-01
A recently reported Parabolized Navier-Stokes code has been employed to compute the supersonic flow field about spinning cone, ogive-cylinder, and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary layer velocity profiles and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to six degrees. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape and Mach number for the selected models for Mach numbers in the range of 2-4.
1980-08-01
[Figure 14. Current profile.]
1975-09-01
This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code to the target description data that can be used in the GIFT computer code.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.
Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H
2001-03-01
The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180 degrees geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement between the results arises from the basic assumptions which the two codes used in their calculations. Both codes assume a "free" electron model for Compton interactions. This assumption will underestimate the results and invalidates any predicted and experimental spectra when compared with each other.
Fazal, Fabeha; Bijli, Kaiser M.; Minhajuddin, Mohd; Rein, Theo; Finkelstein, Jacob N.; Rahman, Arshad
2009-01-01
Activation of RhoA/Rho-associated kinase (ROCK) pathway and the associated changes in actin cytoskeleton induced by thrombin are crucial for activation of NF-κB and expression of its target gene ICAM-1 in endothelial cells. However, the events acting downstream of RhoA/ROCK to mediate these responses remain unclear. Here, we show a central role of cofilin-1, an actin-binding protein that promotes actin depolymerization, in linking RhoA/ROCK pathway to dynamic alterations in actin cytoskeleton that are necessary for activation of NF-κB and thereby expression of ICAM-1 in these cells. Stimulation of human umbilical vein endothelial cells with thrombin resulted in Ser3 phosphorylation/inactivation of cofilin and formation of actin stress fibers in a ROCK-dependent manner. RNA interference knockdown of cofilin-1 stabilized the actin filaments and inhibited thrombin- and RhoA-induced NF-κB activity. Similarly, constitutively inactive mutant of cofilin-1 (Cof1-S3D), known to stabilize the actin cytoskeleton, inhibited NF-κB activity by thrombin. Overexpression of wild type cofilin-1 or constitutively active cofilin-1 mutant (Cof1-S3A), known to destabilize the actin cytoskeleton, also impaired thrombin-induced NF-κB activity. Additionally, depletion of cofilin-1 was associated with a marked reduction in ICAM-1 expression induced by thrombin. The effect of cofilin-1 depletion on NF-κB activity and ICAM-1 expression occurred downstream of IκBα degradation and was a result of impaired RelA/p65 nuclear translocation and consequently, RelA/p65 binding to DNA. Together, these data show that cofilin-1 occupies a central position in RhoA-actin pathway mediating nuclear translocation of RelA/p65 and expression of ICAM-1 in endothelial cells. PMID:19483084
Fazal, Fabeha; Bijli, Kaiser M; Minhajuddin, Mohd; Rein, Theo; Finkelstein, Jacob N; Rahman, Arshad
2009-07-31
Activation of RhoA/Rho-associated kinase (ROCK) pathway and the associated changes in actin cytoskeleton induced by thrombin are crucial for activation of NF-kappaB and expression of its target gene ICAM-1 in endothelial cells. However, the events acting downstream of RhoA/ROCK to mediate these responses remain unclear. Here, we show a central role of cofilin-1, an actin-binding protein that promotes actin depolymerization, in linking RhoA/ROCK pathway to dynamic alterations in actin cytoskeleton that are necessary for activation of NF-kappaB and thereby expression of ICAM-1 in these cells. Stimulation of human umbilical vein endothelial cells with thrombin resulted in Ser(3) phosphorylation/inactivation of cofilin and formation of actin stress fibers in a ROCK-dependent manner. RNA interference knockdown of cofilin-1 stabilized the actin filaments and inhibited thrombin- and RhoA-induced NF-kappaB activity. Similarly, constitutively inactive mutant of cofilin-1 (Cof1-S3D), known to stabilize the actin cytoskeleton, inhibited NF-kappaB activity by thrombin. Overexpression of wild type cofilin-1 or constitutively active cofilin-1 mutant (Cof1-S3A), known to destabilize the actin cytoskeleton, also impaired thrombin-induced NF-kappaB activity. Additionally, depletion of cofilin-1 was associated with a marked reduction in ICAM-1 expression induced by thrombin. The effect of cofilin-1 depletion on NF-kappaB activity and ICAM-1 expression occurred downstream of IkappaBalpha degradation and was a result of impaired RelA/p65 nuclear translocation and consequently, RelA/p65 binding to DNA. Together, these data show that cofilin-1 occupies a central position in RhoA-actin pathway mediating nuclear translocation of RelA/p65 and expression of ICAM-1 in endothelial cells.
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. parallelizing tools and compiler evaluation; 2. code cleanup and serial optimization using automated scripts; 3. development of a code generator for performance prediction; 4. automated partitioning; 5. automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
electromagnetics, eddy current, computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gartling, David
TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.
Culbertson, C N; Wangerin, K; Ghandourah, E; Jevremovic, T
2005-08-01
The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for neutron capture therapy related modeling. A boron neutron capture therapy model was analyzed comparing COG calculational results to results from the widely used MCNP4B (Monte Carlo N-Particle) transport code. The approach for computing neutron fluence rate and each dose component relevant in boron neutron capture therapy is described, and calculated values are shown in detail. The differences between the COG and MCNP predictions are qualified and quantified. The differences are generally small and suggest that the COG code can be applied for BNCT research related problems.
Benchmark Simulation of Natural Circulation Cooling System with Salt Working Fluid Using SAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, K. K.; Scarlat, R. O.; Hu, R.
Liquid salt-cooled reactors, such as the Fluoride Salt-Cooled High-Temperature Reactor (FHR), offer passive decay heat removal through natural circulation using Direct Reactor Auxiliary Cooling System (DRACS) loops. The behavior of such systems should be well-understood through performance analysis. The advanced system thermal-hydraulics tool System Analysis Module (SAM) from Argonne National Laboratory has been selected for this purpose. The work presented here is part of a larger study in which SAM modeling capabilities are being enhanced for the system analyses of FHR or Molten Salt Reactors (MSR). Liquid salt thermophysical properties have been implemented in SAM, as well as properties of Dowtherm A, which is used as a simulant fluid for scaled experiments, for future code validation studies. Additional physics modules to represent phenomena specific to salt-cooled reactors, such as freezing of coolant, are being implemented in SAM. This study presents a useful first benchmark for the applicability of SAM to liquid salt-cooled reactors: it provides steady-state and transient comparisons for a salt reactor system. A RELAP5-3D model of the Mark-1 Pebble-Bed FHR (Mk1 PB-FHR), and in particular its DRACS loop for emergency heat removal, provides steady state and transient results for flow rates and temperatures in the system that are used here for code-to-code comparison with SAM. The transient studied is a loss of forced circulation with SCRAM event. To the knowledge of the authors, this is the first application of SAM to FHR or any other molten salt reactors. While building these models in SAM, any gaps in the code's capability to simulate such systems are identified and addressed immediately, or listed as future improvements to the code.
Modeling Spectra of Icy Satellites and Cometary Icy Particles Using Multi-Sphere T-Matrix Code
NASA Astrophysics Data System (ADS)
Kolokolova, Ludmilla; Mackowski, Daniel; Pitman, Karly M.; Joseph, Emily C. S.; Buratti, Bonnie J.; Protopapa, Silvia; Kelley, Michael S.
2016-10-01
The Multi-Sphere T-matrix code (MSTM) allows rigorous computations of characteristics of the light scattered by a cluster of spherical particles. It was introduced to the scientific community in 1996 (Mackowski & Mishchenko, 1996, JOSA A, 13, 2266). Later it was put online and became one of the most popular codes to study photopolarimetric properties of aggregated particles. Later versions of this code, especially its parallelized version MSTM3 (Mackowski & Mishchenko, 2011, JQSRT, 112, 2182), were used to compute angular and wavelength dependence of the intensity and polarization of light scattered by aggregates of up to 4000 constituent particles (Kolokolova & Mackowski, 2012, JQSRT, 113, 2567). The version MSTM4 considers large thick slabs of spheres (Mackowski, 2014, Proc. of the Workshop "Scattering by aggregates", Bremen, Germany, March 2014, Th. Wriedt & Yu. Eremin, Eds., 6) and is significantly different from the earlier versions. It adopts a Discrete Fourier Convolution, implemented using a Fast Fourier Transform, for evaluation of the exciting field. MSTM4 is able to treat tens of thousands of spheres and is about 100 times faster than the MSTM3 code. This allows us not only to compute the light scattering properties of a large number of electromagnetically interacting constituent particles, but also to perform multi-wavelength and multi-angular computations using computer resources with rather reasonable CPU and computer memory. We used MSTM4 to model near-infrared spectra of icy satellites of Saturn (Rhea, Dione, and Tethys data from Cassini VIMS), and of icy particles observed in the coma of comet 103P/Hartley 2 (data from EPOXI/DI HRII). Results of our modeling show that in the case of icy satellites the best fit to the observed spectra is provided by regolith made of spheres of radius ~1 micron with a porosity in the range 85% - 95%, which slightly varies for the different satellites. Fitting the spectra of the cometary icy particles requires icy aggregates of size larger than 40 micron with constituent spheres in the micron size range.
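At its core, the discrete Fourier convolution mentioned above is the standard replacement of an O(N^2) direct convolution by an O(N log N) transform-domain product. The short Python sketch below illustrates only that generic idea, not the MSTM4 implementation; the array sizes and random data are arbitrary placeholders.

import numpy as np

def direct_convolution(signal, kernel):
    """O(N*M) direct linear convolution, used here only as a reference."""
    return np.convolve(signal, kernel)

def fft_convolution(signal, kernel):
    """O(N log N) linear convolution via the convolution theorem."""
    n = len(signal) + len(kernel) - 1
    # Zero-pad both sequences to the full output length, multiply the
    # spectra, and transform back (real input allows rfft/irfft).
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

rng = np.random.default_rng(0)
sig, ker = rng.standard_normal(4096), rng.standard_normal(512)
assert np.allclose(direct_convolution(sig, ker), fft_convolution(sig, ker))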
"SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres
NASA Astrophysics Data System (ADS)
Sapar, A.; Poolamäe, R.
2003-01-01
A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions in both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal to elaborate a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations) we propose to use physical evolutionary changes, in other words relaxation of quantum state populations from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme enables using, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters the code SMART enables computing the radiative acceleration imparted to the matter of the stellar atmosphere in turbulence clumps. This also enables the model atmosphere to be connected in more detail with the problem of stellar wind triggering. Another problem, which has been incorporated into the computer code SMART, is diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift. As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the code of SMART. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to examine the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided two-column A4 sheets in landscape format. In addition, if well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments on the right-side margin (columns starting from 73). Such short code has been composed by widely using unified input physics (for example the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are ignored. Thus, it can be used only for warm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM), a stellar spectrum with spectral resolution λ/dλ = 100,000 for the spectral interval 700 -- 30,000 Å is computed.
The model input data and the line data used by us are both the ones computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states and to enumerate them for NLTE studies, a C++ code transforming the needed data to a LaTeX version has been compiled. Thus we have composed a quantum state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'. The list enables the concept of super-states, including partly correlating super-states, to be composed more adequately. We are grateful to R. Kurucz for making his computer codes ATLAS and SYNTHE, used by us as a starting point in composing the new computer code, available on CD-ROM and via the Internet. We are also grateful to the Estonian Science Foundation for grant ESF-4701.
Influence of temperature fluctuations on infrared limb radiance: a new simulation code
NASA Astrophysics Data System (ADS)
Rialland, Valérie; Chervet, Patrick
2006-08-01
Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. These systems' performances are limited by the inhomogeneous background in the sensor field of view, which strongly impacts target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of the small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
Computer Code for the Determination of Ejection Seat/Man Aerodynamic Parameters.
1980-08-28
ARMS, and LES (computer code panel names), and the Seat consisted of 4 panels: SEAT, BACK, PADD, and SIDE. A general application of Eq. (1) is for blunt bodies at hypersonic speed, because the accuracy of this equation becomes better at higher Mach number. Therefore ... the pressure coefficient is set equal to zero on those portions of the body that are invisible to a distant observer who views the body from the direction ...
NASA Technical Reports Server (NTRS)
Rudy, David H.; Kumar, Ajay; Thomas, James L.; Gnoffo, Peter A.; Chakravarthy, Sukumar R.
1988-01-01
A comparative study was made using 4 different computer codes for solving the compressible Navier-Stokes equations. Three different test problems were used, each of which has features typical of high speed internal flow problems of practical importance in the design and analysis of propulsion systems for advanced hypersonic vehicles. These problems are the supersonic flow between two walls, one of which contains a 10 deg compression ramp, the flow through a hypersonic inlet, and the flow in a 3-D corner formed by the intersection of two symmetric wedges. Three of the computer codes use similar recently developed implicit upwind differencing technology, while the fourth uses a well established explicit method. The computed results were compared with experimental data where available.
NASA Technical Reports Server (NTRS)
Walowit, Jed A.; Shapiro, Wibur
2005-01-01
This is the source listing of the computer code SPIRALI which predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures.
2008-03-28
in-plane bending stiffness. Figure 4: Non-Symmetric General Buckling. In accordance with equations (4) through (11), ... the DAPS3 version of the code documented in reference 1, the DAPS4 code computes the stresses and deflections, interbay buckling pressure, general ... in-plane and out-of-plane bending, eliminating the simple support assumption at the bay ends. b. Stresses and deflections at all points between the ...
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm can achieve a significant improvement in computation, but the improvement is limited for high bit-rate coding. To improve computation efficiency, in this paper we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in the P frame. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared to the original zero-block decision algorithm.
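The zero-block criterion hinges on counting 4 x 4 transform blocks whose quantized coefficients are all zero. A minimal Python sketch of that counting step is given below; the quantization step qstep, the use of a floating-point DCT, and the flat test residual are illustrative assumptions and differ from the integer transform and quantizer of the actual H.264 reference software.

import numpy as np
from scipy.fft import dctn

def count_zero_blocks(residual, qstep=20.0):
    """Count the 4x4 blocks of a macroblock residual whose quantized
    DCT coefficients are all zero (a proxy for the zero-block test)."""
    h, w = residual.shape
    zero_blocks = 0
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            coeffs = dctn(residual[y:y + 4, x:x + 4], norm="ortho")
            if np.all(np.round(np.abs(coeffs) / qstep) == 0):
                zero_blocks += 1
    return zero_blocks

# A nearly flat 16x16 residual quantizes to all zero-blocks: prints 16.
print(count_zero_blocks(0.5 * np.ones((16, 16))))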
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
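Conceptually, the probabilistic modules of step (5) sample each uncertain input from its assigned distribution and rerun the deterministic dose calculation many times. The Python fragment below sketches that idea; the two parameter distributions and the dose_model function are made-up stand-ins for illustration, not RESRAD's actual parameters or dose pathways.

import numpy as np

def dose_model(soil_concentration, ingestion_rate):
    """Placeholder deterministic dose calculation (stand-in, not RESRAD)."""
    return 1.0e-3 * soil_concentration * ingestion_rate

rng = np.random.default_rng(42)
n_samples = 10_000

# Hypothetical parameter distributions (step 3 of the six-step procedure).
soil_conc = rng.lognormal(mean=1.0, sigma=0.5, size=n_samples)            # pCi/g
ingestion = rng.triangular(left=50, mode=100, right=200, size=n_samples)  # mg/day

doses = dose_model(soil_conc, ingestion)
print(f"mean dose = {doses.mean():.3e}, 95th percentile = {np.percentile(doses, 95):.3e}")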
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... programmability. (ii) Technology and source code. Technology and source code eligible for License Exception APP..., reexports and transfers (in-country) for nuclear, chemical, biological, or missile end-users and end-uses...
Enterovirus 71 2C Protein Inhibits NF-κB Activation by Binding to RelA(p65)
Du, Haiwei; Yin, Peiqi; Yang, Xiaojie; Zhang, Leiliang; Jin, Qi; Zhu, Guofeng
2015-01-01
Viruses evolve multiple ways to interfere with NF-κB signaling, a key regulator of innate and adaptive immunity. Enterovirus 71 (EV71) is one of the primary pathogens that cause hand-foot-and-mouth disease. Here, we identify RelA(p65) as a novel binding partner for the EV71 2C protein from a yeast two-hybrid screen. By interacting with the IPT domain of p65, 2C reduces the formation of the heterodimer p65/p50, the predominant form of NF-κB. We also show that picornavirus 2C family proteins inhibit NF-κB activation and associate with p65 and IKKβ. Our findings provide a novel mechanism by which EV71 antagonizes innate immunity. PMID:26394554
Computer Power: Part 1: Distribution of Power (and Communications).
ERIC Educational Resources Information Center
Price, Bennett J.
1988-01-01
Discussion of the distribution of power to personal computers and computer terminals addresses options such as extension cords, perimeter raceways, and interior raceways. Sidebars explain: (1) the National Electrical Code; (2) volts, amps, and watts; (3) transformers, circuit breakers, and circuits; and (4) power vs. data wiring. (MES)
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
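The kernel operation that pvsolve accelerates is a Cholesky factorization and solve of the symmetric positive-definite stiffness system K u = f. The Python snippet below shows only that numerical operation on a small random SPD matrix, using SciPy; it says nothing about the parallel-vector implementation inside PV-SAP.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Build a small symmetric positive-definite "stiffness" matrix for illustration.
rng = np.random.default_rng(1)
a = rng.standard_normal((200, 200))
K = a @ a.T + 200.0 * np.eye(200)
f = rng.standard_normal(200)

# Factor once, then solve: the operation a direct solver such as pvsolve speeds up.
c, low = cho_factor(K)
u = cho_solve((c, low), f)
print(np.allclose(K @ u, f))  # True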
Quantum computation with realistic magic-state factories
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Campbell, Earl T.
2017-03-01
Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based biosignal lossless data compressor. PMID:25237900
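The joint-coding decision described above keys on the cross correlation between residual signals of a channel pair: strongly correlated residuals favor joint (difference) coding, otherwise the channels are coded independently. The Python sketch below illustrates such a decision rule with a hypothetical correlation threshold; the paper's actual criterion additionally ties the decision to the achieved compression ratio.

import numpy as np

def joint_coding_decision(res_a, res_b, threshold=0.6):
    """Choose joint (difference) or independent coding of two channel
    residuals from their normalized cross correlation (threshold assumed)."""
    corr = np.corrcoef(res_a, res_b)[0, 1]
    if abs(corr) > threshold:
        # Joint coding: entropy-code one reference channel plus the difference.
        return "joint", res_a, res_b - res_a
    return "independent", res_a, res_b

# Two synthetic, strongly correlated biosignal residuals.
t = np.linspace(0.0, 1.0, 1000)
ch1 = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.default_rng(0).standard_normal(1000)
ch2 = 0.9 * ch1 + 0.05 * np.random.default_rng(1).standard_normal(1000)
print(joint_coding_decision(ch1, ch2)[0])  # "joint"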
NASA Technical Reports Server (NTRS)
Steinke, R. J.
1982-01-01
A FORTRAN computer code is presented for off-design performance prediction of axial-flow compressors. Stage and compressor performance is obtained by a stage-stacking method that uses representative velocity diagrams at rotor inlet and outlet meanline radii. The code has options for: (1) direct user input or calculation of nondimensional stage characteristics; (2) adjustment of stage characteristics for off-design speed and blade setting angle; (3) adjustment of rotor deviation angle for off-design conditions; and (4) SI or U.S. customary units. Correlations from experimental data are used to model real flow conditions. Calculations are compared with experimental data.
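Stage stacking marches through the compressor stage by stage, letting each stage's exit conditions become the next stage's inlet. A bare-bones Python sketch of that loop is shown below; the per-stage pressure-ratio and efficiency pairs are invented placeholders, not the correlations or nondimensional characteristics used in the code.

def stack_stages(p_in, t_in, stage_characteristics, gamma=1.4):
    """March inlet total conditions through successive compressor stages.
    Each characteristic is a (pressure_ratio, efficiency) pair at the
    current operating point; the values below are placeholders."""
    p, t = p_in, t_in
    for pr, eta in stage_characteristics:
        ideal_t_ratio = pr ** ((gamma - 1.0) / gamma)
        t = t * (1.0 + (ideal_t_ratio - 1.0) / eta)  # actual temperature rise
        p = p * pr                                   # stage exit becomes next inlet
    return p, t

# Three hypothetical stages on one speed line, starting from sea-level conditions.
stages = [(1.35, 0.88), (1.30, 0.87), (1.25, 0.86)]
print(stack_stages(101325.0, 288.15, stages))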
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siefken, L.J.
1999-01-01
Models were designed to resolve deficiencies in the SCDAP/RELAP5/MOD3.2 calculations of the configuration and integrity of hot, partially oxidized cladding. These models are expected to improve the calculations of several important aspects of fuel rod behavior. First, an improved mapping was established from a compilation of PIE results from severe fuel damage tests of the configuration of melted metallic cladding that is retained by an oxide layer. The improved mapping accounts for the relocation of melted cladding in the circumferential direction. Then, rules based on PIE results were established for calculating the effect of cladding that has relocated from above on the oxidation and integrity of the lower intact cladding upon which it solidifies. Next, three different methods were identified for calculating the extent of dissolution of the oxidic part of the cladding due to its contact with the metallic part. The extent of dissolution affects the stress and thus the integrity of the oxidic part of the cladding. Then, an empirical equation was presented for calculating the stress in the oxidic part of the cladding and evaluating its integrity based on this calculated stress. This empirical equation replaces the current criterion for loss of integrity, which is based on temperature and extent of oxidation. Finally, a new rule based on theoretical and experimental results was established for identifying the regions of a fuel rod with oxidation of both the inside and outside surfaces of the cladding. The implementation of these models is expected to eliminate the tendency of the SCDAP/RELAP5 code to overpredict the extent of oxidation of the upper part of fuel rods and to underpredict the extent of oxidation of the lower part of fuel rods and the part with a high concentration of relocated material. This report is a revision and reissue of the report entitled, Improvements in Modeling of Cladding Oxidation and Meltdown.
Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1994-01-01
Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI
NASA Astrophysics Data System (ADS)
Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan
2016-10-01
Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
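For reference, the system LOKI evolves is the electrostatic Vlasov-Poisson system; in 2+2-dimensional phase space it can be written for each species s as (standard form, not quoted from the poster):

\begin{aligned}
\frac{\partial f_s}{\partial t}
  + \mathbf{v}\cdot\nabla_{\mathbf{x}} f_s
  + \frac{q_s}{m_s}\,\mathbf{E}\cdot\nabla_{\mathbf{v}} f_s &= 0,
  \qquad (\mathbf{x},\mathbf{v}) \in \mathbb{R}^2 \times \mathbb{R}^2, \\
\mathbf{E} = -\nabla_{\mathbf{x}}\phi, \qquad
  -\nabla_{\mathbf{x}}^{2}\phi &= \frac{1}{\varepsilon_0}\sum_s q_s \int f_s \,\mathrm{d}\mathbf{v},
\end{aligned}

where f_s(x, v, t) is the species distribution function discretized on the four-dimensional phase-space grid.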
Decay Heat Removal in GEN IV Gas-Cooled Fast Reactors
Cheng, Lap-Yan; Wei, Thomas Y. C.
2009-01-01
The safety goal of the current designs of advanced high-temperature thermal gas-cooled reactors (HTRs) is that no core meltdown would occur in a depressurization event with a combination of concurrent safety system failures. This study focused on the analysis of passive decay heat removal (DHR) in a GEN IV direct-cycle gas-cooled fast reactor (GFR) which is based on the technology developments of the HTRs. Given the different criteria and design characteristics of the GFR, an approach different from that taken for the HTRs for passive DHR would have to be explored. Different design options based on maintaining core flow were evaluated by performing transient analysis of a depressurization accident using the system code RELAP5-3D. The study also reviewed the conceptual design of autonomous systems for shutdown decay heat removal and recommends that future work in this area should be focused on the potential for Brayton cycle DHRs.
Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.
Meng, Bowen; Pratx, Guillem; Xing, Lei
2011-12-01
Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
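The Map/Reduce decomposition described above can be sketched in a few lines: each Map call filters and backprojects one subset of projections into a partial volume, and the Reduce step sums the partial volumes. The Python fragment below is a serial, toy-sized illustration of that split only; Hadoop, the ramp filtering, and realistic cone-beam geometry are all omitted, and the backprojection is a placeholder.

import numpy as np

def map_backproject(projection_subset, volume_shape):
    """Map step: filter and backproject one subset of projections into a
    partial volume (the uniform smear below is a placeholder, not FDK)."""
    partial = np.zeros(volume_shape)
    for proj in projection_subset:
        partial += proj.mean()
    return partial

def reduce_sum(partial_volumes):
    """Reduce step: aggregate the partial backprojections into the full volume."""
    return sum(partial_volumes)

projections = [np.random.default_rng(i).random((64, 64)) for i in range(40)]
subsets = [projections[i::4] for i in range(4)]                # four "map tasks"
volume = reduce_sum(map_backproject(s, (32, 32, 32)) for s in subsets)
print(volume.shape)  # (32, 32, 32)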
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2011 CFR
2011-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... License Exception. (2) Access and release restrictions. (i)[Reserved] (ii) Technology and source code. Technology and source code eligible for License Exception APP may not be released to nationals of Cuba, Iran...
1983-05-01
empirical erosion model, with use of the debris-layer model optional. 1.1 INTERFACE WITH ISPP. ISPP is a collection of computer codes designed to calculate ... expansion with the ODK code, 4. A two-dimensional, two-phase nozzle expansion with the TD2P code, 5. A turbulent boundary layer solution along the ... [flowchart residue; recoverable steps: input thermodynamic data for temperatures below 300 K (if needed); read SSP namelist (ODE, BAL, ODK, TD2P, TEL, nozzle geometry)]
Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model
2014-09-19
the numerical code OpenFOAM. 1 Introduction. Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical in... performed using the OpenFOAM-2.2.2 computational code utilizing a cell-center-based finite volume method on an unstructured numerical grid. The ... the mean Courant number is always below 0.4. The utilized turbulence models were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM, to ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, J.; Mowrey, J.
1995-12-01
This report describes the design, development and testing of process controls for selected system operations in the Browns Ferry Nuclear Plant (BFNP) Reactor Water Cleanup System (RWCU) using a Computer Simulation Platform which simulates the RWCU System and the BFNP Integrated Computer System (ICS). This system was designed to demonstrate the feasibility of the soft control (video touch screen) of nuclear plant systems through an operator console. The BFNP Integrated Computer System, which has recently been installed at BFNP Unit 2, was simulated to allow for operator control functions of the modeled RWCU system. The BFNP Unit 2 RWCU system was simulated using the RELAP5 Thermal/Hydraulic Simulation Model, which provided the steady-state and transient RWCU process variables and simulated the response of the system to control system inputs. Descriptions of the hardware and software developed are also included in this report. The testing and acceptance program and results are also detailed in this report. A discussion of potential installation of an actual RWCU process control system in BFNP Unit 2 is included. Finally, this report contains a section on industry issues associated with installation of process control systems in nuclear power plants.
A decoding procedure for the Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Lim, R. S.
1978-01-01
A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
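In symbols, with α a primitive element of GF(2^m) and a narrow-sense code (first consecutive root α^1), the quantities referred to above take the standard textbook form below (one common convention, not copied from the paper):

\begin{aligned}
S_i &= r(\alpha^i) = \sum_{j=0}^{n-1} r_j\,\alpha^{ij}, \qquad i = 1,\dots,2t
  &&\text{(syndromes of the received word } r(x)\text{)},\\
\Lambda(x) &= \prod_{k=1}^{\nu}\bigl(1 - X_k x\bigr)
  &&\text{(error-locator polynomial from the Berlekamp iteration)},\\
\Omega(x) &\equiv S(x)\,\Lambda(x) \pmod{x^{2t}}
  &&\text{(error-evaluator polynomial)},\\
e_{i_k} &= \frac{\Omega(X_k^{-1})}{\Lambda'(X_k^{-1})}
  &&\text{(Forney's formula)},
\end{aligned}

where the error-location numbers X_k = α^{i_k} are the reciprocals of the roots of Λ(x) found by the Chien search.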
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon, were developed. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type, (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls, (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor, (4) calculations relating to the collection efficiency of the new AeroChem reactor, and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems
NASA Astrophysics Data System (ADS)
Sandwell, David; Smith-Konter, Bridget
2018-05-01
We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations imbedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
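The convolution-theorem step at the heart of this approach, computing the surface response as the product of the transformed source distribution and a transfer function, can be sketched as follows in Python. The exponential transfer function here is purely illustrative; the actual code uses the analytic Fourier-domain solutions for body-force couples in an elastic layer over a Maxwell half-space.

import numpy as np

def fourier_domain_response(source_grid, dx, decay_length=10.0e3):
    """Surface response = IFFT( FFT(source) * transfer_function(k) ).
    The exponential low-pass transfer function is a stand-in only."""
    ny, nx = source_grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    transfer = np.exp(-k * decay_length)  # hypothetical wavenumber-domain kernel
    return np.real(np.fft.ifft2(np.fft.fft2(source_grid) * transfer))

# A single localized force couple on a 256 x 256 grid with 1 km spacing.
src = np.zeros((256, 256))
src[128, 128] = 1.0
resp = fourier_domain_response(src, dx=1000.0)
print(resp.shape, resp.max())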
32 CFR 295.3 - Definition of OIG records.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., decisions, or procedures of the OIG. Normally, computer software, including source code, object code, and... the underlying data which is processed and produced by such software and which may in some instances be stored with the software.) Exceptions to this position are outlined in § 295.4(c). (3) Anything...
32 CFR 295.3 - Definition of OIG records.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., decisions, or procedures of the OIG. Normally, computer software, including source code, object code, and... the underlying data which is processed and produced by such software and which may in some instances be stored with the software.) Exceptions to this position are outlined in § 295.4(c). (3) Anything...
32 CFR 295.3 - Definition of OIG records.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., decisions, or procedures of the OIG. Normally, computer software, including source code, object code, and... the underlying data which is processed and produced by such software and which may in some instances be stored with the software.) Exceptions to this position are outlined in § 295.4(c). (3) Anything...
32 CFR 295.3 - Definition of OIG records.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., decisions, or procedures of the OIG. Normally, computer software, including source code, object code, and... the underlying data which is processed and produced by such software and which may in some instances be stored with the software.) Exceptions to this position are outlined in § 295.4(c). (3) Anything...
32 CFR 295.3 - Definition of OIG records.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., decisions, or procedures of the OIG. Normally, computer software, including source code, object code, and... the underlying data which is processed and produced by such software and which may in some instances be stored with the software.) Exceptions to this position are outlined in § 295.4(c). (3) Anything...
Annual Report of the ECSU Home-Institution Support Program (1993)
1993-09-30
summer of 1992. Stephanie plans to attend graduate school at the University of Alabama at Birmingham. 3. Deborah Jones has attended the ISSP program for ... computer equipment; Component #2: a visiting lecturer series; Component #3: student pay & faculty release time; Component #4: student/sponsor travel program ... S.O. CODE: 1133; DISBURSING CODE: N00179; AGO CODE: N66005; CAGE CODE: OJLKO. PART I: A succinct narrative which should ...
NASA Technical Reports Server (NTRS)
Tsuchiya, T.; Murthy, S. N. B.
1982-01-01
A computer code is presented for the prediction of off-design axial flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and liquid phases, and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage-stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The Code has options for performance estimation with (1) mixtures of gases and (2) gas-water droplet mixtures, and therefore can take into account the humidity present in ambient conditions. A test case illustrates the method of using the Code. The Code follows closely the methodology and architecture of the NASA-STGSTK Code for the estimation of axial-flow compressor performance with air flow.
NASA Technical Reports Server (NTRS)
Reichel, R. H.; Hague, D. S.; Jones, R. T.; Glatt, C. R.
1973-01-01
This computer program manual describes in two parts the automated combustor design optimization code AUTOCOM. The program code is written in the FORTRAN 4 language. The input data setup and the program outputs are described, and a sample engine case is discussed. The program structure and programming techniques are also described, along with AUTOCOM program analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Thomas K.S.; Ko, F.-K
Although only a few percent of residual power remains during plant outages, the associated risk of core uncovery and corresponding fuel overheating has been identified to be relatively high, particularly under midloop operation (MLO) in pressurized water reactors. However, to analyze the system behavior during outages, the tools currently available, such as RELAP5, RETRAN, etc., cannot easily perform the task. Therefore, a medium-sized program aiming at reactor outage simulation and evaluation, such as MLO with the loss of residual heat removal (RHR), was developed. All important thermal-hydraulic processes involved during MLO with the loss of RHR will be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. Important processes during MLO with loss of RHR involve a pressurizer insurge caused by the hot-leg flooding, reflux condensation, liquid holdup inside the steam generator, loop-seal clearance, core-level depression, etc. Since the accuracy of the pressure distribution from the classical nodal momentum approach will be degraded when the system is stratified and under atmospheric pressure, the two-region approach with a modified two-fluid model will be the theoretical basis of the new program to analyze the nuclear steam supply system during plant outages. To verify the analytical model in the first step, posttest calculations against the closed integral midloop experiments with loss of RHR were performed. The excellent simulation capacity of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility (IIST) test data is demonstrated.
ODECS -- A computer code for the optimal design of S.I. engine control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsie, I.; Pianese, C.; Rizzo, G.
1996-09-01
The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark ignition engine control strategies is presented. This code has been developed starting from the authors' activity in this field, availing of some original contributions on engine stochastic optimization and dynamical models. The code has a modular structure and is composed of a user interface for the definition, execution and analysis of different computations performed with 4 independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, in the last two modules it is possible to account for errors generated by non-deterministic behavior of sensors and actuators and the related influence on global engine performance, and to compute robust strategies that are less sensitive to stochastic effects. In the paper the four modules are described together with significant results corresponding to the simulation and the calculation of optimal control strategies for dynamic transient tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medin, Stanislav A.; Basko, Mikhail M.; Orlov, Yurii N.
2012-07-11
Radiation hydrodynamics 1D simulations were performed with two concurrent codes, DEIRA and RAMPHY. The DEIRA code was used for DT capsule implosion and burn, and the RAMPHY code was used for computation of X-ray and fast ion deposition in the first wall liquid film of the reactor chamber. The simulations were run for a 740 MJ direct drive DT capsule and a Pb thin liquid wall reactor chamber of 10 m diameter. Temporal profiles of the X-ray, neutron and fast 4He ion power leaking from the DT capsule were obtained, and spatial profiles of the liquid film flow parameters were computed and analyzed.
Extensions and improvements on XTRAN3S
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime has concentrated on the following areas: (1) Maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) Extension of the vectorization concepts in XTRAN3S to include additional areas of the code for improved execution speed; (3) Modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) Extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.
POLYSHIFT Communications Software for the Connection Machine System CM-200
George, William; Brickner, Ralph G.; Johnsson, S. Lennart
1994-01-01
We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes that use regular grids, such as finite difference codes in several dimensions, multigrid codes, molecular dynamics computations, and lattice gauge physics computations such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3–4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
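The role such shift routines play in stencil codes can be illustrated in plain NumPy (an analogy only, not the CM-200 interface): np.roll performs the circular shift, and a periodic 5-point Laplacian is assembled from shifted copies of the grid.

import numpy as np

def laplacian_periodic(u):
    # 5-point Laplacian on a periodic grid built from circular shifts, the kind of
    # regular-grid data motion that PSHIFT/CSHIFT provide on the Connection Machine.
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
            np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)

u = np.random.rand(64, 64)
lap = laplacian_periodic(u)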
NASA Technical Reports Server (NTRS)
Talcott, N. A., Jr.
1977-01-01
Equations and computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl3F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCl2F-CCl2F, CClF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons (CCl3F, CCl2F2, CBrF3, CF4, CF3-CF3, and C4F8) and pressure-enthalpy diagrams are presented for CBrF3.
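For orientation only, the following sketch evaluates an ideal-gas isentropic expansion, a far simpler model than the chemical-equilibrium computation of the report; the gamma value for a CF4-like gas is an assumed placeholder.

def isentropic_expansion(T1, p1, p2, gamma):
    # Temperature after an isentropic expansion from (T1, p1) to pressure p2,
    # assuming a calorically perfect (ideal) gas.
    return T1 * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Example: a CF4-like gas (gamma assumed ~1.17) expanding through a 10:1 pressure ratio.
T2 = isentropic_expansion(T1=300.0, p1=1.0e5, p2=1.0e4, gamma=1.17)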
MODELING THE AMBIENT CONDITION EFFECTS OF AN AIR-COOLED NATURAL CIRCULATION SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Lisowski, Darius D.; Bucknor, Matthew
The Reactor Cavity Cooling System (RCCS) is a passive safety concept under consideration for the overall safety strategy of advanced reactors such as the High Temperature Gas-Cooled Reactor (HTGR). One such variant, the air-cooled RCCS, uses natural convection to drive the flow of air from outside the reactor building to remove decay heat during normal operation and accident scenarios. The Natural convection Shutdown heat removal Test Facility (NSTF) at Argonne National Laboratory (“Argonne”) is a half-scale model of the primary features of one conceptual air-cooled RCCS design. The facility was constructed to carry out highly instrumented experiments to study the performance of the RCCS concept for reactor decay heat removal that relies on natural convection cooling. Parallel modeling and simulation efforts were performed to support the design, operation, and analysis of the natural convection system. Throughout the testing program, strong influences of ambient conditions were observed in the experimental data when baseline tests were repeated under the same test procedures. Thus, significant analysis efforts were devoted to gaining a better understanding of these influences and the subsequent response of the NSTF to ambient conditions. It was determined that air humidity had negligible impacts on NSTF system performance and therefore did not warrant consideration in the models. However, temperature differences between the building exterior and interior air, along with the outside wind speed, were shown to be dominant factors. Combining the stack and wind effects together, an empirical model was developed, based on theoretical considerations and experimental data, to correlate zero-power system flow rates with ambient meteorological conditions. Some coefficients in the model were obtained by best fitting the experimental data. The predictive capability of the empirical model was demonstrated by applying it to a new set of experimental data. The empirical model was also implemented in the computational models of the NSTF using both the RELAP5-3D and STARCCM+ codes. Accounting for the effects of ambient conditions, simulations from both codes predicted the natural circulation flow rates very well.
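A minimal sketch of how such an empirical ambient-condition correlation might be fit is shown below; the functional form (a stack term in the square root of the temperature difference plus a linear wind term) and all data are assumptions for illustration, not the actual NSTF model or its coefficients.

import numpy as np
from scipy.optimize import curve_fit

# Assumed form: m_dot = a*sqrt(max(dT, 0)) + b*wind + c, with dT the inside-outside
# air temperature difference (K) and wind the outside wind speed (m/s).
def ambient_model(X, a, b, c):
    dT, wind = X
    return a * np.sqrt(np.clip(dT, 0.0, None)) + b * wind + c

dT = np.array([2.0, 5.0, 8.0, 12.0, 15.0])        # made-up zero-power test conditions
wind = np.array([0.5, 1.0, 2.0, 1.5, 3.0])
m_dot = np.array([0.21, 0.33, 0.45, 0.52, 0.66])  # kg/s, made-up measured flow rates

(a, b, c), _ = curve_fit(ambient_model, (dT, wind), m_dot, p0=(0.1, 0.05, 0.0))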
Smith, Daniel G A; Burns, Lori A; Sirianni, Dominic A; Nascimento, Daniel R; Kumar, Ashutosh; James, Andrew M; Schriber, Jeffrey B; Zhang, Tianyuan; Zhang, Boyi; Abbott, Adam S; Berquist, Eric J; Lechner, Marvin H; Cunha, Leonardo A; Heide, Alexander G; Waldrop, Jonathan M; Takeshita, Tyler Y; Alenaizan, Asem; Neuhauser, Daniel; King, Rollin A; Simmonett, Andrew C; Turney, Justin M; Schaefer, Henry F; Evangelista, Francesco A; DePrince, A Eugene; Crawford, T Daniel; Patkowski, Konrad; Sherrill, C David
2018-06-11
Psi4NumPy demonstrates the use of efficient computational kernels from the open-source Psi4 program through the popular NumPy library for linear algebra in Python to facilitate the rapid development of clear, understandable Python computer code for new quantum chemical methods, while maintaining a relatively low execution time. Using these tools, reference implementations have been created for a number of methods, including self-consistent field (SCF), SCF response, many-body perturbation theory, coupled-cluster theory, configuration interaction, and symmetry-adapted perturbation theory. Furthermore, several reference codes have been integrated into Jupyter notebooks, allowing background, underlying theory, and formula information to be associated with the implementation. Psi4NumPy tools and associated reference implementations can lower the barrier for future development of quantum chemistry methods. These implementations also demonstrate the power of the hybrid C++/Python programming approach employed by the Psi4 program.
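The flavor of such NumPy reference implementations can be conveyed by a bare-bones restricted Hartree-Fock loop; the overlap, core-Hamiltonian, and two-electron integral arrays are assumed to be supplied (in Psi4NumPy they come from Psi4 itself), so this sketch shows only the iteration structure, not the Psi4 API.

import numpy as np

def rhf_scf(S, H, eri, n_occ, e_nuc=0.0, max_iter=50, tol=1e-8):
    # Minimal restricted Hartree-Fock: S overlap, H core Hamiltonian, eri two-electron
    # integrals in chemists' notation (pq|rs), n_occ doubly occupied orbitals.
    evals, evecs = np.linalg.eigh(S)
    A = evecs @ np.diag(evals ** -0.5) @ evecs.T      # symmetric orthogonalization S^-1/2
    D = np.zeros_like(H)
    E_old = 0.0
    for _ in range(max_iter):
        J = np.einsum('pqrs,rs->pq', eri, D)          # Coulomb
        K = np.einsum('prqs,rs->pq', eri, D)          # exchange
        F = H + 2.0 * J - K
        E = np.einsum('pq,pq->', D, H + F) + e_nuc
        eps, C_prime = np.linalg.eigh(A.T @ F @ A)    # diagonalize Fock in orthogonal basis
        C = A @ C_prime
        D = C[:, :n_occ] @ C[:, :n_occ].T             # rebuild density from occupied orbitals
        if abs(E - E_old) < tol:
            break
        E_old = E
    return E

# Toy invocation with made-up 2x2 integrals (identity overlap, no two-electron term).
E = rhf_scf(S=np.eye(2), H=np.array([[-1.0, -0.2], [-0.2, -0.5]]),
            eri=np.zeros((2, 2, 2, 2)), n_occ=1)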
Trellis coding with multidimensional QAM signal sets
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J.
1993-01-01
Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
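For orientation, the sketch below builds the standard square 16-QAM constellation (odd-integer coordinates) and computes its average energy and minimum squared Euclidean distance; the minimum-average-energy, rotationally symmetric sets and the trellis code searches themselves are the subject of the paper and are not reproduced here.

import numpy as np
from itertools import product

levels = [-3, -1, 1, 3]                                # square 16-QAM levels
points = np.array([complex(i, q) for i, q in product(levels, levels)])

avg_energy = np.mean(np.abs(points) ** 2)              # 10.0 for this set
d2_min = min(abs(a - b) ** 2 for k, a in enumerate(points) for b in points[k + 1:])  # 4.0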
Navier-Stokes analysis of cold scramjet-afterbody flows
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.
1989-01-01
The progress of two efforts in coding solutions of Navier-Stokes equations is summarized. The first effort concerns a 3-D space marching parabolized Navier-Stokes (PNS) code being modified to compute the supersonic mixing flow through an internal/external expansion nozzle with multicomponent gases. The 3-D PNS equations, coupled with a set of species continuity equations, are solved using an implicit finite difference scheme. The completed work is summarized and includes code modifications for four chemical species, computing the flow upstream of the upper cowl for a theoretical air mixture, developing an initial plane solution for the inner nozzle region, and computing the flow inside the nozzle for both a N2/O2 mixture and a Freon-12/Ar mixture, and plotting density-pressure contours for the inner nozzle region. The second effort concerns a full Navier-Stokes code. The species continuity equations account for the diffusion of multiple gases. This 3-D explicit afterbody code has the ability to use high order numerical integration schemes such as the 4th order MacCormack, and the Gottlieb-MacCormack schemes. Changes to the work are listed and include, but are not limited to: (1) internal/external flow capability; (2) new treatments of the cowl wall boundary conditions and relaxed computations around the cowl region and cowl tip; (3) the entering of the thermodynamic and transport properties of Freon-12, Ar, O, and N; (4) modification to the Baldwin-Lomax turbulence model to account for turbulent eddies generated by cowl walls inside and external to the nozzle; and (5) adopting a relaxation formula to account for the turbulence in the mixing shear layer.
Pattern-based integer sample motion search strategies in the context of HEVC
NASA Astrophysics Data System (ADS)
Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas
2015-09-01
The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a large share of computational resources while significantly increasing the coding efficiency. Although both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information at a fractional sample level, motion search algorithms operating at the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction of ME computation time to be traded off against a coding efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based, and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content while trading off a bit rate overhead of up to 5.2%.
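A hedged sketch of a pattern-based integer-sample search with a simple early-termination rule is given below; it is a generic diamond-pattern refinement around a predicted start vector, not the specific combination of techniques derived in the paper.

import numpy as np

def sad(cur, ref, x, y, bw, bh):
    # Sum of absolute differences between the current block and a reference patch.
    patch = ref[y:y + bh, x:x + bw]
    return int(np.abs(cur.astype(np.int32) - patch.astype(np.int32)).sum())

def diamond_search(cur, ref, start, bw, bh, max_steps=64, early_exit=64):
    # Refine a predicted integer motion vector with a small diamond pattern,
    # stopping when the center wins or the SAD falls below a threshold.
    h, w = ref.shape
    best = tuple(start)
    best_cost = sad(cur, ref, best[0], best[1], bw, bh)
    pattern = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(max_steps):
        if best_cost <= early_exit:                    # early termination
            break
        candidates = [(best[0] + dx, best[1] + dy) for dx, dy in pattern
                      if 0 <= best[0] + dx <= w - bw and 0 <= best[1] + dy <= h - bh]
        cost, pos = min((sad(cur, ref, x, y, bw, bh), (x, y)) for x, y in candidates)
        if cost >= best_cost:                          # center is the local minimum
            break
        best_cost, best = cost, pos
    return best, best_cost

cur_block = np.random.randint(0, 255, (8, 8), dtype=np.uint8)
ref_frame = np.random.randint(0, 255, (64, 64), dtype=np.uint8)
mv, cost = diamond_search(cur_block, ref_frame, start=(28, 28), bw=8, bh=8)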
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under Contract "Assessment of RBMK reactor safety using modern Western Codes," VNIIEF performed a series of neutronics computations to compare Western and VNIIEF codes and to assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computations. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS and EKRAN codes (an improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), including cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA); the computations were performed in the geometry and with the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) control movement in a core.
Computation of transonic separated wing flows using an Euler/Navier-Stokes zonal approach
NASA Technical Reports Server (NTRS)
Kaynak, Uenver; Holst, Terry L.; Cantwell, Brian J.
1986-01-01
A computer program called Transonic Navier Stokes (TNS) has been developed which solves the Euler/Navier-Stokes equations around wings using a zonal grid approach. In the present zonal scheme, the physical domain of interest is divided into several subdomains called zones and the governing equations are solved interactively. The advantages of the Zonal Grid approach are as follows: (1) the grid for any subdomain can be generated easily; (2) grids can be, in a sense, adapted to the solution; (3) different equation sets can be used in different zones; and, (4) this approach allows for a convenient data base organization scheme. Using this code, separated flows on a NACA 0012 section wing and on the NASA Ames WING C have been computed. First, the effects of turbulence and artificial dissipation models incorporated into the code are assessed by comparing the TNS results with other CFD codes and experiments. Then a series of flow cases is described where data are available. The computed results, including cases with shock-induced separation, are in good agreement with experimental data. Finally, some futuristic cases are presented to demonstrate the abilities of the code for massively separated cases which do not have experimental data.
Development of the 3DHZETRN code for space radiation protection
NASA Astrophysics Data System (ADS)
Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert
Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism capable of evaluation in general geometry is described. Benchmarking will help quantify uncertainty with MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes. Connection of the 3DHZETRN to general geometry will be discussed.
A strong shock tube problem calculated by different numerical schemes
NASA Astrophysics Data System (ADS)
Lee, Wen Ho; Clancy, Sean P.
1996-05-01
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) the MESA code, (2) the UNICORN code, (3) Schulz hydro, and (4) a modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. MESA and UNICORN are both second-order codes and use different monotonic advection methods to avoid the Gibbs phenomenon. Code (3) uses a typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10⁹ and a density ratio of 10³ in an ideal gas. For the case without mass matching, the Schulz hydro is better than the TVD scheme; with mass matching, there is no difference between them. The MESA and UNICORN results are nearly the same. However, the computed positions of features such as the contact discontinuity (i.e., the material interface) are not as accurate as those from the Lagrangian methods.
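For reference, the initial data of this test and one generic first-order finite-volume (local Lax-Friedrichs) step are sketched below in Python; this is none of the four schemes compared in the paper, only an illustration of the problem setup.

import numpy as np

gamma = 1.4
nx = 400
x = np.linspace(0.0, 1.0, nx)
rho = np.where(x < 0.5, 1.0e3, 1.0)          # density ratio 10^3
p = np.where(x < 0.5, 1.0e9, 1.0)            # pressure ratio 10^9
u = np.zeros(nx)
U = np.array([rho, rho * u, p / (gamma - 1.0) + 0.5 * rho * u ** 2])  # conserved variables

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u ** 2)
    return np.array([mom, mom * u + p, (E + p) * u])

def rusanov_step(U, dx, cfl=0.4):
    # One explicit first-order step with the local Lax-Friedrichs (Rusanov) flux.
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u ** 2)
    smax = np.max(np.abs(u) + np.sqrt(gamma * p / rho))
    dt = cfl * dx / smax
    UL, UR = U[:, :-1], U[:, 1:]
    F = 0.5 * (flux(UL) + flux(UR)) - 0.5 * smax * (UR - UL)
    Unew = U.copy()
    Unew[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
    return Unew, dt

U, dt = rusanov_step(U, dx=x[1] - x[0])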
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, W.H.; Clancy, S.P.
Calculated results are presented for the solution of a very strong shock tube problem on a coarse mesh using (1) the MESA code, (2) the UNICORN code, (3) Schulz hydro, and (4) a modified TVD scheme. The first two codes are written in Eulerian coordinates, whereas methods (3) and (4) are in Lagrangian coordinates. MESA and UNICORN are both second-order codes and use different monotonic advection methods to avoid the Gibbs phenomenon. Code (3) uses a typical artificial viscosity for inviscid flow, whereas code (4) uses a modified TVD scheme. The test problem is a strong shock tube problem with a pressure ratio of 10⁹ and a density ratio of 10³ in an ideal gas. For the case without mass matching, the Schulz hydro is better than the TVD scheme; with mass matching, there is no difference between them. The MESA and UNICORN results are nearly the same. However, the computed positions of features such as the contact discontinuity (i.e., the material interface) are not as accurate as those from the Lagrangian methods. © 1996 American Institute of Physics.
NEAMS update quarterly report for January - March 2012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, K.S.; Hayes, S.; Pointer, D.
Quarterly highlights are: (1) The integration of Denovo and AMP was demonstrated in an AMP simulation of the thermo-mechanics of a complete fuel assembly; (2) Bison was enhanced with a mechanistic fuel cracking model; (3) Mechanistic algorithms were incorporated into various lower-length-scale models to represent fission gases and dislocations in UO2 fuels; (4) Marmot was improved to allow faster testing of mesoscale models using larger problem domains; (5) Component models of reactor piping were developed for use in Relap-7; (6) The mesh generator of Proteus was updated to accept a mesh specification from Moose and equations were formulated for the intermediate-fidelity Proteus-2D1D module; (7) A new pressure solver was implemented in Nek5000 and demonstrated to work 2.5 times faster than the previous solver; (8) Work continued on volume-holdup models for two fuel reprocessing operations: voloxidation and dissolution; (9) Progress was made on a pyroprocessing model and the characterization of pyroprocessing emission signatures; (10) A new 1D groundwater waste transport code was delivered to the used fuel disposition (UFD) campaign; (11) Efforts on waste form modeling included empirical simulation of sodium-borosilicate glass compositions; (12) The Waste team developed three prototypes for modeling hydride reorientation in fuel cladding during very long-term fuel storage; (13) A benchmark demonstration problem (fission gas bubble growth) was modeled to evaluate the capabilities of different meso-scale numerical methods; (14) Work continued on a hierarchical up-scaling framework to model structural materials by directly coupling dislocation dynamics and crystal plasticity; (15) New 'importance sampling' methods were developed and demonstrated to reduce the computational cost of rare-event inference; (16) The survey and evaluation of existing data and knowledge bases was updated for NE-KAMS; (17) The NEAMS Early User Program was launched; (18) The Nuclear Regulatory Commission (NRC) Office of Regulatory Research was introduced to the NEAMS program; (19) The NEAMS overall software quality assurance plan (SQAP) was revised to version 1.5; and (20) Work continued on NiCE and its plug-ins and other utilities, such as Cubit and VisIt.
Frederiksen, Anja L; Larsen, Martin J; Brusgaard, Klaus; Novack, Deborah V; Knudsen, Peter Juel Thiis; Schrøder, Henrik Daa; Qiu, Weimin; Eckhardt, Christina; McAlister, William H; Kassem, Moustapha; Mumm, Steven; Frost, Morten; Whyte, Michael P
2016-01-01
Heritable disorders that feature high bone mass (HBM) are rare. The etiology is typically a mutation(s) within a gene that regulates the differentiation and function of osteoblasts (OBs) or osteoclasts (OCs). Nevertheless, the molecular basis is unknown for approximately one-fifth of such entities. NF-κB signaling is a key regulator of bone remodeling and acts by enhancing OC survival while impairing OB maturation and function. The NF-κB transcription complex comprises five subunits. In mice, deletion of the p50 and p52 subunits together causes osteopetrosis (OPT). In humans, however, mutations within the genes that encode the NF-κB complex, including the Rela/p65 subunit, have not been reported. We describe a neonate who died suddenly and unexpectedly and was found at postmortem to have HBM documented radiographically and by skeletal histopathology. Serum was not available for study. Radiographic changes resembled malignant OPT, but histopathological investigation showed morphologically normal OCs and evidence of intact bone resorption excluding OPT. Furthermore, mutation analysis was negative for eight genes associated with OPT or HBM. Instead, accelerated bone formation appeared to account for the HBM. Subsequently, trio-based whole exome sequencing revealed a heterozygous de novo missense mutation (c.1534_1535delinsAG, p.Asp512Ser) in exon 11 of RELA encoding Rela/p65. The mutation was then verified using bidirectional Sanger sequencing. Lipopolysaccharide stimulation of patient fibroblasts elicited impaired NF-κB responses compared with healthy control fibroblasts. Five unrelated patients with unexplained HBM did not show a RELA defect. Ours is apparently the first report of a mutation within the NF-κB complex in humans. The missense change is associated with neonatal osteosclerosis from in utero increased OB function rather than failed OC action. These findings demonstrate the importance of the Rela/p65 subunit within the NF-κB pathway for human skeletal homeostasis and represent a new genetic cause of HBM. © 2015 American Society for Bone and Mineral Research.
Subscale Development of Advanced ABM Graphite/Epoxy Composite Structure
1978-01-01
laminate analysis computer code (Reference 5). The output of this code yields lamina stresses and strains, equivalent elastic and shear moduli for the...was not accounted for. Therefore the net effect was that the analysis tended to yield conservative results. For design purposes, this conservative...extracted using a Soxhlet extraction apparatus, recycling the solvent at least 4 to 10 times every hour for a minimum of 6 hours. (4) All samples are
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code used to compute a number of example cases. Turbulence models, algebraic and differential equations, were added to the basic viscous code. An equilibrium reacting chemistry model and implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C C
The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R and D activities for developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) development of beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National programs.
NASA Astrophysics Data System (ADS)
Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang
2017-02-01
LS3DF, namely the linear scaling three-dimensional fragment method, is an efficient linear scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently on the order of 10 min using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge Leadership Computing Facility (OLCF). Such speedup is achieved by (a) careful redesign of the computationally heavy kernels and (b) redesign of the communication pattern for heterogeneous supercomputers.
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
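A minimal modern sketch of the fully observed part of this problem is given below (the FORTRAN program also handles incomplete, partially classified counts, which is not reproduced here): with a Dirichlet(alpha) prior, the posterior over the cell probabilities is Dirichlet(alpha + counts), and its mean and covariance follow in closed form.

import numpy as np

def dirichlet_posterior(counts, alpha):
    # Posterior mean and covariance of multinomial cell probabilities
    # under a Dirichlet(alpha) prior with fully observed counts.
    a = np.asarray(alpha, dtype=float) + np.asarray(counts, dtype=float)
    a0 = a.sum()
    mean = a / a0
    cov = (np.diag(mean) - np.outer(mean, mean)) / (a0 + 1.0)
    return mean, cov

mean, cov = dirichlet_posterior(counts=[12, 7, 3], alpha=[1.0, 1.0, 1.0])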
Modeling and Simulation of the ITER First Wall/Blanket Primary Heat Transfer System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ying, Alice; Popov, Emilian L
2011-01-01
ITER inductive power operation is modeled and simulated using a thermal-hydraulics system code (RELAP5) integrated with a 3-D CFD code (SC-Tetra). The Primary Heat Transfer System (PHTS) functions are predicted together with the operational ranges of the main parameters. The control algorithm strategy and its derivation are summarized as well. The First Wall and Blanket modules are the primary components of the PHTS, used to remove the major part of the thermal heat from the plasma. The modules represent a set of flow channels in a solid metal structure that serve to absorb the radiation heat and nuclear heating from the fusion reactions and to provide shielding for the vacuum vessel. The blanket modules are water cooled. The cooling is forced convective with constant blanket inlet temperature and mass flow rate. Three independent water loops supply coolant to the three blanket sectors. The main equipment of each loop consists of a pump, a steam pressurizer, and a heat exchanger. A major feature of ITER is its pulsed operation. The plasma does not burn continuously, but in intervals with long periods of no power between them. This specific feature causes design challenges in accommodating the thermal expansion of the coolant during the pulse period and requires active temperature control to maintain a constant blanket inlet temperature.
Geant4 Computing Performance Benchmarking and Monitoring
Dotti, Andrea; Elvira, V. Daniel; Folger, Gunter; ...
2015-12-23
Performance evaluation and analysis of large scale computing applications is essential for optimal use of resources. As detector simulation is one of the most compute intensive tasks and Geant4 is the simulation toolkit most widely used in contemporary high energy physics (HEP) experiments, it is important to monitor Geant4 through its development cycle for changes in computing performance and to identify problems and opportunities for code improvements. All Geant4 development and public releases are being profiled with a set of applications that utilize different input event samples, physics parameters, and detector configurations. Results from multiple benchmarking runs are compared to previous public and development reference releases to monitor CPU and memory usage. Observed changes are evaluated and correlated with code modifications. Besides the full summary of call stack and memory footprint, a detailed call graph analysis is available to Geant4 developers for further analysis. The set of software tools used in the performance evaluation procedure, both in sequential and multi-threaded modes, include FAST, IgProf and Open|Speedshop. In conclusion, the scalability of the CPU time and memory performance in multi-threaded application is evaluated by measuring event throughput and memory gain as a function of the number of threads for selected event samples.
NASA Technical Reports Server (NTRS)
Hamilton, H. Harris, II; Millman, Daniel R.; Greendyke, Robert B.
1992-01-01
A computer code was developed that uses an implicit finite-difference technique to solve nonsimilar, axisymmetric boundary layer equations for both laminar and turbulent flow. The code can treat ideal gases, air in chemical equilibrium, and carbon tetrafluoride (CF4), which is a useful gas for hypersonic blunt-body simulations. This is the only known boundary layer code that can treat CF4. Comparisons with experimental data have demonstrated that accurate solutions are obtained. The method should prove useful as an analysis tool for comparing calculations with wind tunnel experiments and for making calculations about flight vehicles where equilibrium air chemistry assumptions are valid.
NASA Astrophysics Data System (ADS)
Hamilton, H. Harris, II; Millman, Daniel R.; Greendyke, Robert B.
1992-12-01
A computer code was developed that uses an implicit finite-difference technique to solve nonsimilar, axisymmetric boundary layer equations for both laminar and turbulent flow. The code can treat ideal gases, air in chemical equilibrium, and carbon tetrafluoride (CF4), which is a useful gas for hypersonic blunt-body simulations. This is the only known boundary layer code that can treat CF4. Comparisons with experimental data have demonstrated that accurate solutions are obtained. The method should prove useful as an analysis tool for comparing calculations with wind tunnel experiments and for making calculations about flight vehicles where equilibrium air chemistry assumptions are valid.
Fault-tolerance in Two-dimensional Topological Systems
NASA Astrophysics Data System (ADS)
Anderson, Jonas T.
This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical
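To make the syndrome-extraction idea concrete, here is a small hedged sketch (plain NumPy, not the integer-program decoder developed in the thesis) that computes the plaquette syndrome of an X-error pattern on an L×L toric code with periodic boundaries; the edge-labeling convention is an assumption chosen for brevity.

import numpy as np

L = 4
rng = np.random.default_rng(0)
# X-error indicators on the two edge sublattices: h[i, j] for the horizontal edge
# above plaquette (i, j), v[i, j] for the vertical edge to its left (periodic wrap).
h = rng.integers(0, 2, size=(L, L))
v = rng.integers(0, 2, size=(L, L))

def plaquette_syndrome(h, v):
    # Each plaquette checks the parity of the four edges on its boundary;
    # a value of 1 flags a violated stabilizer.
    return (h + np.roll(h, -1, axis=0) + v + np.roll(v, -1, axis=1)) % 2

syndrome = plaquette_syndrome(h, v)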
Regulation of Endothelial Cell Inflammation and Lung PMN Infiltration by Transglutaminase 2
Bijli, Kaiser M.; Kanter, Bryce G.; Minhajuddin, Mohammad; Leonard, Antony; Xu, Lei; Fazal, Fabeha; Rahman, Arshad
2014-01-01
We addressed the role of transglutaminase2 (TG2), a calcium-dependent enzyme that catalyzes crosslinking of proteins, in the mechanism of endothelial cell (EC) inflammation and lung PMN infiltration. Exposure of EC to thrombin, a procoagulant and proinflammatory mediator, resulted in activation of the transcription factor NF-κB and its target genes, VCAM-1, MCP-1, and IL-6. RNAi knockdown of TG2 inhibited these responses. Analysis of NF-κB activation pathway showed that TG2 knockdown was associated with inhibition of thrombin-induced DNA binding as well as serine phosphorylation of RelA/p65, a crucial event that controls transcriptional capacity of the DNA-bound RelA/p65. These results implicate an important role for TG2 in mediating EC inflammation by promoting DNA binding and transcriptional activity of RelA/p65. Because thrombin is released in high amounts during sepsis and its concentration is elevated in plasma and lavage fluids of patients with Acute Respiratory Distress Syndrome (ARDS), we determined the in vivo relevance of TG2 in a mouse model of sepsis-induced lung PMN recruitment. A marked reduction in NF-κB activation, adhesion molecule expression, and lung PMN sequestration was observed in TG2 knockout mice compared to wild type mice exposed to endotoxemia. Together, these results identify TG2 as an important mediator of EC inflammation and lung PMN sequestration associated with intravascular coagulation and sepsis. PMID:25057925
GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2015-01-01
The realized stochastic volatility (RSV) model, which utilizes the realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider the Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a similar speedup to CUDA Fortran.
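The core of the HMC update is easy to sketch; the NumPy version below shows a single leapfrog trajectory with a Metropolis accept/reject step for a generic log-posterior (a toy Gaussian target here, not the RSV posterior or the CUDA Fortran code of the paper).

import numpy as np

def hmc_step(q, log_post, grad_log_post, eps=0.1, n_leapfrog=20, rng=np.random):
    # One Hybrid/Hamiltonian Monte Carlo update for the parameter vector q.
    p = rng.standard_normal(q.shape)                 # sample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(q_new)        # initial half step
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_post(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(q_new)        # final half step
    dH = (log_post(q_new) - 0.5 * p_new @ p_new) - (log_post(q) - 0.5 * p @ p)
    return q_new if np.log(rng.random()) < dH else q # Metropolis accept/reject

log_post = lambda q: -0.5 * q @ q                    # toy target: standard normal
grad_log_post = lambda q: -q
q = np.zeros(3)
for _ in range(100):
    q = hmc_step(q, log_post, grad_log_post)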
NASA Technical Reports Server (NTRS)
Leonardo, M.; Tsuchiya, T.; Murthy, S. N. B.
1982-01-01
A model for predicting the performance of a multi-spool axial-flow compressor with a fan during operation with water ingestion was developed incorporating several two-phase fluid flow effects as follows: (1) ingestion of water, (2) droplet interaction with blades and resulting changes in blade characteristics, (3) redistribution of water and water vapor due to centrifugal action, (4) heat and mass transfer processes, and (5) droplet size adjustment due to mass transfer and mechanical stability considerations. A computer program, called the PURDU-WINCOF code, was generated based on the model utilizing a one-dimensional formulation. An illustrative case serves to show the manner in which the code can be utilized and the nature of the results obtained.
2007-05-01
Actinide product radionuclides... actinides, and fission products in fallout. Doses from low-linear energy transfer (LET) radiation (beta particles and gamma rays) are reported separately...assumptions about the critical parameters used in calculating internal doses – resuspension factor, breathing rate, fractionation, and scenario elements – to
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishii, Mamoru
The NEUP-funded project, NEUP-3496, aims to experimentally investigate two-phase natural circulation flow instability that could occur in Small Modular Reactors (SMRs), especially natural circulation SMRs. The objective has been achieved by systematically performing tests to study the general natural circulation instability characteristics and the natural circulation behavior under start-up or design basis accident conditions. Experimental data sets highlighting the effect of void reactivity feedback as well as the effect of power ramp-up rate and system pressure have been used to develop a comprehensive stability map. The safety analysis code, RELAP5, has been used to evaluate experimental results and models. Improvements to the constitutive relations for flashing have been made in order to develop a reliable analysis tool. This research has focused on two generic SMR designs, i.e., a small modular Simplified Boiling Water Reactor (SBWR)-like design and a small integral Pressurized Water Reactor (PWR)-like design. A BWR-type natural circulation test facility was first built based on the three-level scaling analysis of the Purdue Novel Modular Reactor (NMR) with an electric output of 50 MWe, namely NMR-50, which represents a BWR-type SMR with a significantly reduced reactor pressure vessel (RPV) height. The experimental facility was instrumented to measure thermal-hydraulic parameters such as pressure, temperature, mass flow rate, and void fraction. Characterization tests were performed before the startup transient tests and quasi-steady tests to determine the loop flow resistance. The control system and data acquisition system were programmed with LabVIEW to realize real-time control and data storage. The coupled thermal-hydraulic and nuclear startup transients were performed to investigate the flow instabilities at low pressure and low power conditions for NMR-50. Two different power ramps were chosen to study the effect of startup power density on the flow instability. The experimental startup transient results showed the existence of three different flow instability mechanisms, i.e., flashing instability, condensation-induced flow instability, and density wave oscillations. In addition, the void-reactivity feedback did not have significant effects on the flow instability during the startup transients for NMR-50. Several initial startup procedures with different power ramp rates were experimentally investigated to eliminate the flow instabilities observed in the startup transients. In particular, the very slow startup transient and pressurized startup transient tests were performed and compared. It was found that very slow startup transients, applying a very small power density, can eliminate the flashing oscillations in single-phase natural circulation and stabilize the flow oscillations in the phase of net vapor generation. The initially pressurized startup procedure was tested to eliminate the flashing instability during the startup transients as well. The pressurized startup procedure included the initial pressurization, heat-up, and venting process. The startup transient tests showed that the pressurized startup procedure could eliminate the flow instability during the transition from single-phase flow to two-phase flow at low pressure conditions. The experimental results indicated that both startup procedures were applicable to the initial startup of the NMR.
However, the pressurized startup procedure might be preferred due to the shorter operating hours required. In order to gain a deeper understanding of natural circulation flow instability, quasi-steady tests were performed using the test facility equipped with a preheater and subcooler. The effects of system pressure, core inlet subcooling, core power density, inlet flow resistance coefficient, and void reactivity feedback were investigated in the quasi-steady state tests. The experimental stability boundaries were determined between unstable and stable flow conditions in the dimensionless stability plane of inlet subcooling number and Zuber number. To predict the stability boundary theoretically, a linear stability analysis in the frequency domain was performed for four sections of the natural circulation test loop. The flashing phenomenon in the chimney section was treated as an axially uniform heat source, and the dimensionless characteristic equation of the pressure drop perturbation was obtained by considering the void fraction effect and the outlet flow resistance in the core section. The theoretical flashing boundary showed some discrepancies with previous experimental data from the quasi-steady state tests. Accounting for thermal non-equilibrium was recommended as future work to improve the accuracy of the flashing instability boundary. As another part of the funded research, flow instabilities of a PWR-type SMR under low pressure and low power conditions were investigated experimentally as well. The NuScale reactor design was selected as the prototype for the PWR-type SMR. In order to experimentally study the natural circulation behavior of the NuScale reactor during accident scenarios, detailed scaling analyses are necessary to ensure that the scaled phenomena can be obtained in a laboratory test facility. The three-level scaling method was used to obtain the scaling ratios derived from various non-dimensional numbers. The design of the ideally scaled facility (ISF) was initially accomplished based on these scaling ratios. The engineering scaled facility (ESF) was then designed and constructed based on the ISF by considering engineering limitations including laboratory space, pipe size, and pipe connections. PWR-type SMR experiments were performed in this well-scaled test facility to investigate the potential thermal-hydraulic flow instability during blowdown events, which might occur during the loss of coolant accident (LOCA) and loss of heat sink accident (LOHS) of the prototype PWR-type SMR. Two kinds of experiments, a normal blowdown event and a cold blowdown event, were investigated and compared with code predictions. The normal blowdown event was experimentally simulated from an initial condition in which the pressure was lower than the design pressure of the experimental facility, while the code prediction of the blowdown started from the normal operating condition. Important thermal-hydraulic parameters including reactor pressure vessel (RPV) pressure, containment pressure, local void fraction and temperature, pressure drop, and natural circulation flow rate were measured and analyzed during the blowdown event. The pressure and water level transients are similar to the experimental results published by NuScale [51], which demonstrates the capability of the current loop to simulate the thermal-hydraulic transients of a real PWR-type SMR.
During the 20,000 s blowdown experiment, the water level in the core remained above the active fuel assembly throughout, demonstrating the safety of the natural circulation cooling and water recycling design of the PWR-type SMR. In addition, the pressure, temperature, and water level transients were accurately predicted by the RELAP5 code. However, oscillations of the natural circulation flow rate, water level, and pressure drops were observed during the blowdown transients. These flow oscillations are related to the water level and the location of the upper plenum, which provides a path for coolant flow from the chimney to the steam generator and downcomer. In order to investigate the transients starting from the opening of the ADS valve both experimentally and numerically, a cold blowdown experiment was conducted. For the cold blowdown event, instead of setting both the reactor pressure vessel (RPV) and containment at high temperature and pressure, only the RPV was heated close to the highest design pressure before the ADS valve was opened, and the same process was predicted using the RELAP5 code. With the cold blowdown experiment, the entire transient from the opening of the ADS can be investigated by the code and benchmarked against experimental data. Similar flow instability was observed in the cold blowdown experiment. The comparison between code predictions and experimental data showed that the RELAP5 code can successfully predict the pressure, void fraction, and temperature transients during the cold blowdown event with limited error, but numerical instability exists in predicting the natural circulation flow rate. Moreover, the code lacks the capability to predict the water-level-related flow instability observed in the experiments.
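As an aside on the stability plane mentioned above, the dimensionless coordinates can be sketched as follows; the Ishii-type definitions and the fixed saturation properties (water at roughly 1 atm) are assumptions for illustration, not necessarily the exact non-dimensionalization used in the project.

def stability_coordinates(power_W, mdot_kg_s, h_in_J_kg,
                          h_f=419.1e3, h_fg=2256.5e3, rho_f=958.4, rho_g=0.598):
    # Inlet subcooling number and Zuber (phase change) number for one operating point,
    # using saturated water properties at about 1 atm.
    dr_over_rg = (rho_f - rho_g) / rho_g
    n_sub = (h_f - h_in_J_kg) / h_fg * dr_over_rg
    n_zu = power_W / (mdot_kg_s * h_fg) * dr_over_rg
    return n_sub, n_zu

n_sub, n_zu = stability_coordinates(power_W=30e3, mdot_kg_s=0.35, h_in_J_kg=350e3)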
Exploring Accelerating Science Applications with FPGAs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storaasli, Olaf O; Strenski, Dave
2007-01-01
FPGA hardware and tools (VHDL, Viva, MitrionC and CHiMPS) are described. FPGA performance is evaluated on two Cray XD1 systems (Virtex-II Pro 50 and Virtex-4 LX160) for human genome (DNA and protein) sequence comparisons for a computational biology code (FASTA). Scalable FPGA speedups of 50X (Virtex-II) and 100X (Virtex-4) over a 2.2 GHz Opteron were achieved. Coding and IO issues faced for human genome data are described.
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2010-01-01
Codes for predicting supersonic jet mixing and broadband shock-associated noise were assessed using a database containing noise measurements of a jet issuing from a convergent nozzle. Two types of codes were used to make predictions. Fast running codes containing empirical models were used to compute both the mixing noise component and the shock-associated noise component of the jet noise spectrum. One Reynolds-averaged, Navier-Stokes-based code was used to compute only the shock-associated noise. To enable the comparisons of the predicted component spectra with data, the measured total jet noise spectra were separated into mixing noise and shock-associated noise components. Comparisons were made for 1/3-octave spectra and some power spectral densities using data from jets operating at 24 conditions covering essentially 6 fully expanded Mach numbers with 4 total temperature ratios.
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of the HPF compilers.
MHD code using multi graphical processing units: SMAUG+
NASA Astrophysics Data System (ADS)
Gyenge, N.; Griffiths, M. K.; Erdélyi, R.
2018-01-01
This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64, and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000, and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slowdowns depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
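The granularity/communication trade-off noted above can be illustrated with a back-of-the-envelope estimate: for a fixed global grid, adding GPUs shrinks the interior work per subdomain quadratically while the halo (ghost-cell) traffic shrinks only linearly, so the communication fraction grows. The sketch below is such an estimate with an assumed halo width and variable count, not a SMAUG+ measurement.

def per_gpu_cost(nx, ny, n_gpu_x, n_gpu_y, halo=2, vars_per_cell=8):
    # Rough interior-work and halo-exchange volumes for one subdomain of a 2D MHD
    # grid split across an n_gpu_x x n_gpu_y GPU layout.
    sx, sy = nx // n_gpu_x, ny // n_gpu_y
    interior = sx * sy * vars_per_cell                  # cell updates per step
    halo_cells = 2 * halo * (sx + sy) * vars_per_cell   # values exchanged per step
    return interior, halo_cells

for layout in [(2, 2), (4, 4), (8, 8), (10, 10)]:
    work, comm = per_gpu_cost(8000, 8000, *layout)
    print(layout, work, comm, comm / work)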
NASA Astrophysics Data System (ADS)
Hirata, Akimasa; Masuda, Hiroshi; Kanai, Yuya; Asai, Ryuichi; Fujiwara, Osamu; Arima, Takuji; Kawai, Hiroki; Watanabe, Soichi; Lagroye, Isabelle; Veyret, Bernard
2011-12-01
The dominant effect of human exposures to microwaves is caused by temperature elevation ('thermal effect'). In the safety guidelines/standards, the specific absorption rate averaged over a specific volume is used as a metric for human protection from localized exposure. Further investigation on the use of this metric is required, especially in terms of thermophysiology. The World Health Organization (2006 RF research agenda) has given high priority to research into the extent and consequences of microwave-induced temperature elevation in children. In this study, an electromagnetic-thermal computational code was developed to model electromagnetic power absorption and resulting temperature elevation leading to changes in active blood flow in response to localized 1.457 GHz exposure in rat heads. Both juvenile (4 week old) and young adult (8 week old) rats were considered. The computational code was validated against measurements for 4 and 8 week old rats. Our computational results suggest that the blood flow rate depends on both brain and core temperature elevations. No significant difference was observed between thermophysiological responses in 4 and 8 week old rats under these exposure conditions. The computational model developed herein is thus applicable to set exposure conditions for rats in laboratory investigations, as well as in planning treatment protocols in the thermal therapy.
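The thermal half of such a coupled code is commonly based on the Pennes bioheat equation; the explicit update below is a generic 1D sketch with assumed tissue properties and an assumed SAR source, not the validated rat-head model of the paper.

import numpy as np

def bioheat_step(T, sar, dx, dt, k=0.5, rho=1050.0, c=3600.0,
                 w_b=8.0, c_b=3800.0, T_blood=37.0):
    # Explicit step of rho*c*dT/dt = k*d2T/dx2 - w_b*c_b*(T - T_blood) + rho*SAR,
    # with SAR in W/kg and crude insulated boundaries.
    lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx ** 2
    lap[0] = lap[-1] = 0.0
    dTdt = (k * lap - w_b * c_b * (T - T_blood) + rho * sar) / (rho * c)
    return T + dt * dTdt

T = np.full(100, 37.0)
sar = np.zeros(100)
sar[40:60] = 2.0                     # localized exposure, W/kg (made-up)
for _ in range(1000):
    T = bioheat_step(T, sar, dx=1e-3, dt=0.05)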
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montégiani, Jean-François; Gaudin, Émilie; Després, Philippe
2014-08-15
In peptide receptor radionuclide therapy (PRRT), huge inter-patient variability in absorbed radiation dose per administered activity mandates the use of individualized dosimetry to evaluate therapeutic efficacy and toxicity. We created a reliable GPU-calculated dosimetry code (irtGPUMCD) and assessed ¹⁷⁷Lu-octreotate renal dosimetry in eight patients (4 cycles of approximately 7.4 GBq). irtGPUMCD was derived from a brachytherapy dosimetry code (bGPUMCD), which was adapted to ¹⁷⁷Lu PRRT dosimetry. Serial quantitative single-photon emission computed tomography (SPECT) images were obtained from three SPECT/CT acquisitions performed at 4, 24, and 72 hours after ¹⁷⁷Lu-octreotate administration, and registered with non-rigid deformation of CT volumes, to obtain the 4D quantitative ¹⁷⁷Lu-octreotate biodistribution. Local energy deposition from the β disintegrations was assumed. Using Monte Carlo gamma photon transport, irtGPUMCD computed the dose rate at each time point. The average kidney absorbed dose was obtained from 1-cm³ VOI dose rate samples on each cortex, subjected to a biexponential curve fit. Integration of the resulting time-dose rate curve yielded the renal absorbed dose. The mean renal dose per administered activity was 0.48 ± 0.13 Gy/GBq (range: 0.30–0.71 Gy/GBq). Comparison to another PRRT dosimetry code (VRAK: Voxelized Registration and Kinetics) showed fair accordance with irtGPUMCD (11.4 ± 6.8%, range: 3.3–26.2%). These results suggest the possibility of using the irtGPUMCD code to personalize administered activity in PRRT. This could improve clinical outcomes by maximizing per-cycle tumor doses without exceeding the tolerable renal dose.
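A hedged sketch of the time-integration step described above (biexponential fit to the three dose-rate samples, then analytic integration to infinity) might look as follows; the numbers are invented, and pinning the slow component to the ¹⁷⁷Lu physical decay constant is an assumption made here so that three samples determine the remaining three parameters, not necessarily how irtGPUMCD constrains its fit.

import numpy as np
from scipy.optimize import curve_fit

LAMBDA_PHYS = np.log(2) / (6.65 * 24.0)          # 177Lu physical decay constant (1/h)

def dose_rate(t, a1, l1, a2):
    # Biexponential dose-rate model (Gy/h) with the slow decay constant fixed.
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-LAMBDA_PHYS * t)

t = np.array([4.0, 24.0, 72.0])                  # SPECT/CT time points (h)
r = np.array([0.020, 0.012, 0.006])              # invented cortex dose-rate samples (Gy/h)

(a1, l1, a2), _ = curve_fit(dose_rate, t, r, p0=(0.01, 0.05, 0.01), maxfev=5000)
absorbed_dose = a1 / l1 + a2 / LAMBDA_PHYS       # Gy, analytic integral from 0 to infinity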
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
Aerodynamic Interference Due to MSL Reaction Control System
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Schoenenberger, Mark; Scallion, William I.; VanNorman, John W.; Novak, Luke A.; Tang, Chun Y.
2009-01-01
An investigation of the effectiveness of the reaction control system (RCS) of the Mars Science Laboratory (MSL) entry capsule during atmospheric flight has been conducted. The motivation for the investigation is that MSL is designed to fly a lifting, actively guided entry with hypersonic bank maneuvers; therefore an understanding of RCS effectiveness is required. In the course of the study several jet configurations were evaluated using the Langley Aerothermal Upwind Relaxation Algorithm (LAURA) code, the Data Parallel Line Relaxation (DPLR) code, the Fully Unstructured 3D (FUN3D) code, and the Overset Grid Flowsolver (OVERFLOW) code. Computations indicated that some of the proposed configurations might induce aero-RCS interactions sufficient to impede and even overwhelm the intended control torques. It was found that the maximum potential for aero-RCS interference exists around peak dynamic pressure along the trajectory. The present analysis largely relies on computational methods. Ground testing, flight data, and computational analyses are required to fully understand the problem. At the time of this writing some experimental work spanning the Mach number range 2.5 through 4.5 has been completed and used to establish preliminary levels of confidence for the computations. As a result of the present work a final RCS configuration has been designed so as to minimize aero-interference effects, and it is the design baseline for the MSL entry capsule.
Dupuis, S; Fecci, J-L; Noyer, P; Lecarpentier, E; Chollet-Xémard, C; Margenet, A; Marty, J; Combes, X
2009-01-01
To assess the economic impact of introducing a bar-code pharmacy stock replenishment system in a prehospital emergency medical unit. Observational before-and-after study. A computer system using specific software and bar-code technology was introduced in the prehospital emergency medical unit (Smur). Overall activity and pharmacy-related costs were recorded annually during two periods: the 2-year period before the computer system was introduced and the 4 years following its installation. Overall clinical activity increased by 10% between the two periods, whereas pharmacy-related costs decreased continuously after the pharmacy management computer system entered use. Pharmacy stock management was easier after introduction of the new stock replenishment system. The mean pharmacy-related cost per patient was 13 Euros before and 9 Euros after the introduction of the system. The overall cost savings during the studied period was calculated to reach 134,000 Euros. The introduction of a specific pharmacy management computer system allowed substantial cost savings in a prehospital emergency medical unit.
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2013-01-01
A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central-differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailly's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, 12th- and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
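As a hedged illustration of the dissipation-rate cross-check described above (a minimal sketch, not code from WRLES), the rate can be estimated directly from the decay of the volume-averaged kinetic energy or, for incompressible flow, from the enstrophy via epsilon = nu * <|omega|^2>; the decay history and viscosity below are hypothetical placeholders.

import numpy as np

def dissipation_from_energy(ke, t):
    # Direct estimate: epsilon(t) = -dE_k/dt, with E_k the volume-averaged kinetic energy.
    return -np.gradient(ke, t)

def dissipation_from_enstrophy(omega_sq_mean, nu):
    # Enstrophy route for incompressible flow: epsilon = nu * <|omega|^2>.
    return nu * omega_sq_mean

# Hypothetical decay history and viscosity, for illustration only.
t = np.linspace(0.0, 10.0, 201)
ke = 0.125 * np.exp(-0.4 * t)                  # volume-averaged kinetic energy
omega_sq_mean = 0.4 * ke / 6.25e-4             # mean-square vorticity consistent with the decay
print(dissipation_from_energy(ke, t)[:3])
print(dissipation_from_enstrophy(omega_sq_mean, 6.25e-4)[:3])

In an actual post-processing pass the mean-square vorticity would be computed from the simulated velocity field; agreement between the two estimates is the kind of consistency check reported in the abstract.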
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Eric M.
2004-05-20
The YAP software library computes (1) electromagnetic modes, (2) electrostatic fields, (3) magnetostatic fields and (4) particle trajectories in 2d and 3d models. The code employs finite element methods on unstructured grids of tetrahedral, hexahedral, prism and pyramid elements, with linear through cubic element shapes and basis functions to provide high accuracy. The novel particle tracker is robust, accurate and efficient, even on unstructured grids with discontinuous fields. This software library is a component of the MICHELLE 3d finite element gun code.
Applied Computational Electromagnetics Society Journal, Volume 9, Number 2
1994-07-01
input/output standardization; code or technique optimization and error minimization; innovations in solution technique or in data input/output ... The Applied Computational Electromagnetics Society Journal editors: Editor-in-Chief/ACES, Editor-in-Chief/Journal, and Managing Editor W. Perry Wheless ... Adalbert Konrad and Paul P. Biringer, Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada M5S 1A4.
A prototype Knowledge-Based System to Aid Space System Restoration Management.
1986-12-01
Contents excerpt: Appendix B: Computation of Weights With AHP; Appendix C: ART Code; Appendix D: Test Outputs. Figures include: Earth Coverage With Geosynchronous Satellites; Space System Configurations; AHP Hierarchy; AHP Hierarchy With Weights; TALK Schema Structure; ART Code for TALK Satellite.
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
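A minimal sketch of the tabulate-and-interpolate scheme described above, with a hypothetical fit form and made-up coefficients (the actual fits and their accuracy are documented in NASA RP 1260): a property is evaluated from its temperature-dependent curve fit at the two tabulated pressures that bracket the requested pressure, and the two values are interpolated in log pressure.

import numpy as np

# Hypothetical coefficients for property = exp(a0 + a1*lnT + a2*lnT**2),
# tabulated at discrete pressures (atm); real values come from the 11-species fits.
CURVE_FITS = {1.0e-4: (2.1, 0.35, -0.012),
              1.0e-3: (2.3, 0.33, -0.011),
              1.0e-2: (2.5, 0.31, -0.010)}

def evaluate_fit(coeffs, T):
    a0, a1, a2 = coeffs
    lnT = np.log(T)
    return np.exp(a0 + a1 * lnT + a2 * lnT**2)

def property_at(T, p):
    # Bracket the requested pressure, evaluate both fits, interpolate in log(p).
    pressures = sorted(CURVE_FITS)
    p_lo = max((q for q in pressures if q <= p), default=pressures[0])
    p_hi = min((q for q in pressures if q >= p), default=pressures[-1])
    f_lo, f_hi = evaluate_fit(CURVE_FITS[p_lo], T), evaluate_fit(CURVE_FITS[p_hi], T)
    if p_lo == p_hi:
        return f_lo
    w = (np.log(p) - np.log(p_lo)) / (np.log(p_hi) - np.log(p_lo))
    return (1.0 - w) * f_lo + w * f_hi

print(property_at(5000.0, 3.0e-3))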
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling)more » and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.« less
NASA Technical Reports Server (NTRS)
Daw, Murray S.; Mills, Michael J.
2003-01-01
We report on the progress made during the first year of the project. Most of the progress at this point has been on the theoretical and computational side. Here are the highlights: (1) A new code, tailored for high-end desktop computing, now combines modern Accelerated Dynamics (AD) with the well-tested Embedded Atom Method (EAM); (2) The new Accelerated Dynamics allows the study of relatively slow, thermally-activated processes, such as diffusion, which are much too slow for traditional Molecular Dynamics; (3) We have benchmarked the new AD code on a rather simple and well-known process: vacancy diffusion in copper; and (4) We have begun application of the AD code to the diffusion of vacancies in ordered intermetallics.
Computer Description of Black Hawk Helicopter
1979-06-01
Keywords: Combinatorial Geometry Models; Black Hawk Helicopter; GIFT Computer Code; Geometric Description of Targets. ABSTRACT: ... The description was made using the technique of combinatorial geometry (COM-GEOM) and will be used as input to the GIFT computer code, which generates ... The data used by the COVART computer code was generated by the Geometric Information for Targets (GIFT) computer code. This report documents ...
Multiloop Integral System Test (MIST): MIST Facility Functional Specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, T F; Koksal, C G; Moskal, T E
1991-04-01
The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility --more » the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST Functional Specification documents as-built design features, dimensions, instrumentation, and test approach. It also presents the scaling basis for the facility and serves to define the scope of work for the facility design and construction. 13 refs., 112 figs., 38 tabs.« less
Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break and compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
Pretest analysis document for Test S-NH-2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Streit, J.E.; Owca, W.A.
This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY3601 code for Semiscale MOD-2C Test S-NH-2. The test will simulate the transient that results from the shear in a small diameter penetration of a cold leg, equivalent to 2.1% of the cold leg flow area. The high pressure injection system is assumed to be inoperative throughout the transient. The recovery procedure consists of latching open both steam generator atmospheric dump valves and supplying both steam generators with auxiliary feedwater; the auxiliary feedwater system is assumed to be partially inoperative, so the auxiliary feedwater flow is degraded. Recovery will be initiated upon a peak cladding temperature of 811 K (1000°F). The test will be terminated when primary pressure has been reduced to the low pressure injection system setpoint of 1.38 MPa (200 psia). The calculated results indicate that the test objectives can be achieved and the proposed test scenario poses no threat to personnel or to plant integrity. 7 refs., 16 figs., 2 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae
Reliability testing was performed for the software of the Shutdown (SDS) Computers for Wolsong Nuclear Power Plants Units 2, 3 and 4. Test profiles were applied to the SDS Computers and the outputs were compared with the predicted results generated by the oracle. Test software was written to execute the tests automatically. Random test profiles were generated using an analysis code. 11 refs., 1 fig.
A Computer-Assisted Nutrition Education Unit for Grades 4-6.
ERIC Educational Resources Information Center
Hills, Alvina M.
1983-01-01
A computer-assisted instructional unit (written for 32K Commodore PET microcomputer) was developed to identify four food groups outlined in Canada's Food Guide, place specific foods in correct groups, and identify food not belonging to the four groups. Animated color-coded keys are used to represent the food groups. (JN)
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1975-01-01
Information necessary to use the LOVES computer program in its existing state or to modify the program to include studies not properly handled by the basic model is provided. A user's guide, a programmer's manual, and several supporting appendices are included.
Simulation of the Reflected Blast Wave froma C-4 Charge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howard, W M; Kuhl, A L; Tringe, J W
2011-08-01
The reflection of a blast wave from a C4 charge detonated above a planar surface is simulated with our ALE3D code. We used a finely-resolved, fixed Eulerian 2-D mesh (167 {micro}m per cell) to capture the detonation of the charge, the blast wave propagation in nitrogen, and its reflection from the surface. The thermodynamic properties of the detonation products and nitrogen were specified by the Cheetah code. A programmed-burn model was used to detonate the charge at a rate based on measured detonation velocities. Computed pressure histories are compared with pressures measured by Kistler 603B piezoelectric gauges at 8 rangesmore » (GR = 0, 2, 4, 8, 10, and 12 inches) along the reflecting surface. Computed and measured waveforms and positive-phase impulses were similar, except at close-in ranges (GR < 2 inches), which were dominated by jetting effects.« less
Simulation of the reflected blast wave from a C-4 charge
NASA Astrophysics Data System (ADS)
Howard, W. Michael; Kuhl, Allen L.; Tringe, Joseph
2012-03-01
The reflection of a blast wave from a C4 charge detonated above a planar surface is simulated with our ALE3D code. We used a finely-resolved, fixed Eulerian 2-D mesh (167 μm per cell) to capture the detonation of the charge, the blast wave propagation in nitrogen, and its reflection from the surface. The thermodynamic properties of the detonation products and nitrogen were specified by the Cheetah code. A programmed-burn model was used to detonate the charge at a rate based on measured detonation velocities. Computed pressure histories are compared with pressures measured by Kistler 603B piezoelectric gauges at 7 ranges (GR = 0, 5.08, 10.16, 15.24, 20.32, 25.4, and 30.48 cm) along the reflecting surface. Computed and measured waveforms and positive-phase impulses were similar, except at close-in ranges (GR < 5 cm), which were dominated by jetting effects.
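A hedged sketch of one of the comparison quantities named above: the positive-phase impulse, i.e. the time integral of the gauge overpressure from blast arrival until the overpressure first returns to zero. The Friedlander-like pressure record below is made up for illustration; it is not a measured or computed waveform from these experiments.

import numpy as np

def positive_phase_impulse(t, p_over):
    # Integrate overpressure from arrival (first positive sample) to the first return to zero.
    above = np.where(p_over > 0.0)[0]
    if above.size == 0:
        return 0.0
    start = above[0]
    later = np.where(p_over[start:] <= 0.0)[0]
    end = start + (later[0] if later.size else p_over.size - start)
    return np.trapz(p_over[start:end], t[start:end])

# Hypothetical gauge record (t in ms, overpressure in kPa), for illustration only.
t = np.linspace(0.0, 5.0, 500)
tau, dp0, t_arr = 2.0, 80.0, 1.0
p = np.where(t >= t_arr, dp0 * (1.0 - (t - t_arr) / tau) * np.exp(-(t - t_arr) / tau), 0.0)
print(positive_phase_impulse(t, p))   # kPa*ms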
Atmospheric Transmittance/Radiance: Computer Code LOWTRAN 5
1980-02-21
... while the receiver was a Golay cell mounted at the focus of a 76-cm diameter ... Arnold, D.H., Lake, D.B., and Sanders, R. (1970) Comparative Measu...
Computational simulation of acoustic fatigue for hot composite structures
NASA Technical Reports Server (NTRS)
Singhal, S. N.; Nagpal, V. K.; Murthy, P. L. N.; Chamis, C. C.
1991-01-01
This paper presents predictive methods/codes for computational simulation of acoustic fatigue resistance of hot composite structures subjected to acoustic excitation emanating from an adjacent vibrating component. Select codes developed over the past two decades at the NASA Lewis Research Center are used. The codes include computation of (1) acoustic noise generated from a vibrating component, (2) degradation in material properties of the composite laminate at use temperature, (3) dynamic response of acoustically excited hot multilayered composite structure, (4) degradation in the first-ply strength of the excited structure due to acoustic loading, and (5) acoustic fatigue resistance of the excited structure, including propulsion environment. Effects of the laminate lay-up and environment on the acoustic fatigue life are evaluated. The results show that, by keeping the angled plies on the outer surface of the laminate, a substantial increase in the acoustic fatigue life is obtained. The effect of environment (temperature and moisture) is to relieve the residual stresses, leading to an increase in the acoustic fatigue life of the excited panel.
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Naifeh, Karen; Thrasher, Chet
1988-01-01
This report contains the source code and documentation for a computer program used to process impedance cardiography data. The cardiodynamic measures derived from impedance cardiography are ventricular stroke volume, cardiac output, cardiac index, and Heather index. The program digitizes data collected from the Minnesota Impedance Cardiograph, electrocardiography (ECG), and respiratory cycles and then stores these data on hard disk. It computes the cardiodynamic functions using interactive graphics and stores the means and standard deviations of each 15-sec data epoch on floppy disk. This software was designed on a Digital PRO380 microcomputer and used version 2.0 of P/OS, with (minimally) a 4-channel 16-bit analog/digital (A/D) converter. Applications software is written in FORTRAN 77, and uses Digital's Pro-Tool Kit Real Time Interface Library, CORE Graphic Library, and laboratory routines. Source code can be readily modified to accommodate alternative detection, A/D conversion and interactive graphics. The object code utilizing overlays and multitasking has a maximum of 50 Kbytes.
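The standard relations behind two of the measures listed above can be sketched as follows (a hedged illustration in Python; the program described in the report is written in FORTRAN 77, and the inputs here are hypothetical): cardiac output is stroke volume times heart rate, and cardiac index normalizes cardiac output by body surface area.

def cardiac_output_l_min(stroke_volume_ml, heart_rate_bpm):
    # Cardiac output (L/min) = stroke volume (mL/beat) * heart rate (beats/min) / 1000.
    return stroke_volume_ml * heart_rate_bpm / 1000.0

def cardiac_index(co_l_min, body_surface_area_m2):
    # Cardiac index (L/min/m^2) normalizes cardiac output by body surface area.
    return co_l_min / body_surface_area_m2

# Hypothetical 15-sec epoch means, for illustration only.
co = cardiac_output_l_min(stroke_volume_ml=75.0, heart_rate_bpm=68.0)
print(round(co, 2), round(cardiac_index(co, body_surface_area_m2=1.9), 2))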
Improved Boundary Layer Module (BLM) for the Solid Performance Program (SPP)
NASA Astrophysics Data System (ADS)
Coats, D. E.; Cebeci, T.
1982-03-01
The requirements for a replacement to the Bartz boundary layer code, the standard method of computing the performance loss due to viscous effects by the solid performance program, were discussed by the propulsion community along with four nationally recognized boundary layer experts. A consensus was reached regarding the preferred features for the analysis of the replacement code. The major points that were agreed upon are: (1) finite difference methods are preferred over integral methods; (2) a single equation eddy viscosity model was considered to be adequate for the purpose of computing performance loss; (3) a variable grid capability in both coordinate directions would be required; (4) a proven finite difference algorithm which is not stability restricted should be used, that is, an implicit numerical scheme would be required; and (5) the replacement code should be able to compute both turbulent and laminar flows. The program should treat mass addition at the wall as well as being able to calculate a stagnation point starting line.
User manual for semi-circular compact range reflector code: Version 2
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1987-01-01
A computer code has been developed at the Ohio State University ElectroScience Laboratory to analyze a semi-circular paraboloidal reflector with or without a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the reflector or its individual components at a given distance from the center of the paraboloid. The code computes the fields along a radial, horizontal, vertical or axial cut at that distance. Thus, it is very effective in computing the size of the sweet spot for a semi-circular compact range reflector. This report describes the operation of the code. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.
NASA Technical Reports Server (NTRS)
Martini, W. R.
1981-01-01
A series of computer programs is presented, with full documentation, which simulates the transient behavior of a modern 4-cylinder Siemens-arrangement Stirling engine with burner and air preheater. Cold start, cranking, idling, acceleration through 3 gear changes, and steady-speed operation are simulated. Sample results and complete operating instructions are given. A full source code listing of all programs is included.
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
Inclusion of pressure and flow in a new 3D MHD equilibrium code
NASA Astrophysics Data System (ADS)
Raburn, Daniel; Fukuyama, Atsushi
2012-10-01
Flow and nonsymmetric effects can play a large role in plasma equilibria and energy confinement. A concept for such a 3D equilibrium code was developed and presented in 2011. The code is called the Kyoto ITerative Equilibrium Solver (KITES) [1], and the concept is based largely on the PIES code [2]. More recently, the work-in-progress KITES code was used to calculate force-free equilibria. Here, progress and results on the inclusion of pressure and flow in the code are presented. [4pt] [1] Daniel Raburn and Atsushi Fukuyama, Plasma and Fusion Research: Regular Articles, 7:240381 (2012).[0pt] [2] H. S. Greenside, A. H. Reiman, and A. Salas, J. Comput. Phys, 81(1):102-136 (1989).
Bayesian network representing system dynamics in risk analysis of nuclear systems
NASA Astrophysics Data System (ADS)
Varuttamaseni, Athi
2011-12-01
A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we have calculated the core damage probability as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using the standard techniques.
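A hedged sketch of the final propagation step described above: independent variables are sampled from assumed distributions, a surrogate (standing in for the ACE-derived RELAP5 surrogates) gives the peak clad temperature, and a simple monotone map relates that temperature to a core damage probability. Every coefficient, distribution, and threshold below is a hypothetical placeholder, not a value from the study.

import numpy as np

rng = np.random.default_rng(0)

def clad_temperature_surrogate(scram_delay_s, coolant_temp_k, pressure_mpa):
    # Hypothetical stand-in for an ACE-derived surrogate of the RELAP5 response.
    return (600.0 + 25.0 * scram_delay_s + 0.8 * (coolant_temp_k - 560.0)
            - 15.0 * (pressure_mpa - 15.0))

def core_damage_probability(peak_clad_temp_k, t_low=1000.0, t_high=1500.0):
    # Simple monotone (clipped linear) map from peak clad temperature to damage probability.
    return np.clip((peak_clad_temp_k - t_low) / (t_high - t_low), 0.0, 1.0)

# Sample the independent variables (assumed distributions) and propagate.
n = 10000
delay = rng.normal(5.0, 1.0, n)          # scram delay time, s
t_cool = rng.normal(565.0, 5.0, n)       # coolant temperature, K
p_sys = rng.normal(15.0, 0.5, n)         # system pressure, MPa
pct = clad_temperature_surrogate(delay, t_cool, p_sys)
print(core_damage_probability(pct).mean())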
NASA Technical Reports Server (NTRS)
1973-01-01
Computer specifications for orbital vehicle servicing logistics were developed, and a number of alternatives to improve utilization of the space shuttle and the tug were investigated. Preliminary results indicate that space servicing offers a potential for reducing future operational and program costs over ground refurbishment of satellites. A computer code which could be developed to simulate space servicing is presented.
User's manual for semi-circular compact range reflector code
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1986-01-01
A computer code was developed to analyze a semi-circular paraboloidal reflector antenna with a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the antenna or its individual components at a given distance from the center of the paraboloid. Thus, it is very effective in computing the size of the sweet spot for RCS or antenna measurement. The operation of the code is described. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toussaint, Doug
2014-03-21
The Arizona component of the SciDAC-3 Lattice Gauge Theory program consisted of partial support for a postdoctoral position. In the original budget this covered three fourths of a postdoc, but the University of Arizona changed its ERE rate for postdoctoral positions from 4.3% to 21%, so the support level was closer to two-thirds of a postdoc. The grant covered the work of postdoc Thomas Primer. Dr. Primer's first task was an urgent one, although it was not foreseen in our proposed work. It turned out that on the large lattices used in some of our current computations the gauge fixing code was not working as expected, and this revealed itself in inconsistent results in the correlators needed to compute the semileptonic form factors for K and D decays. Dr. Primer participated in the effort to understand this problem and to modify our codes to deal with the large lattices we are now generating (as large as 144^3 x 288). Corrected code was incorporated in our standard codes, and workarounds that allow us to use the correlators already computed with the unexpected gauge fixing were implemented.
LOFT L2-3 blowdown experiment safety analyses D, E, and G; LOCA analyses H, K, K1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perryman, J.L.; Keeler, C.D.; Saukkoriipi, L.O.
1978-12-01
Three calculations using conservative off-nominal conditions and evaluation model options were made using RELAP4/MOD5 for blowdown-refill and RELAP4/MOD6 for reflood for Loss-of-Fluid Test Experiment L2-3 to support the experiment safety analysis effort. The three analyses are as follows: Analysis D: Loss of commercial power during Experiment L2-3; Analysis E: Hot leg quick-opening blowdown valve (QOBV) does not open during Experiment L2-3; and Analysis G: Cold leg QOBV does not open during Experiment L2-3. In addition, the results of three LOFT loss-of-coolant accident (LOCA) analyses using a power of 56.1 MW and a primary coolant system flow rate of 3.6 million lbm/hr are presented: Analysis H: Intact loop 200% hot leg break; emergency core cooling (ECC) system B unavailable; Analysis K: Pressurizer relief valve stuck in open position; ECC system B unavailable; and Analysis K1: Same as analysis K, but using a primary coolant system flow rate of 1.92 million lbm/hr (L2-4 pre-LOCE flow rate). For analysis D, the maximum cladding temperature reached was 1762°F, 22 sec into reflood. In analyses E and G, the blowdowns were slower due to one of the QOBVs not functioning. The maximum cladding temperature reached in analysis E was 1700°F, 64.7 sec into reflood; for analysis G, it was 1300°F at the start of reflood. For analysis H, the maximum cladding temperature reached was 1825°F, 0.01 sec into reflood. Analysis K was a very slow blowdown, and the cladding temperatures followed the saturation temperature of the system. The results of analysis K1 were nearly identical to those of analysis K; system depressurization was not affected by the primary coolant system flow rate.
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
Data Parallel Line Relaxation (DPLR) Code User Manual: Acadia - Version 4.01.1
NASA Technical Reports Server (NTRS)
Wright, Michael J.; White, Todd; Mangini, Nancy
2009-01-01
Data-Parallel Line Relaxation (DPLR) code is a computational fluid dynamic (CFD) solver that was developed at NASA Ames Research Center to help mission support teams generate high-value predictive solutions for hypersonic flow field problems. The DPLR Code Package is an MPI-based, parallel, full three-dimensional Navier-Stokes CFD solver with generalized models for finite-rate reaction kinetics, thermal and chemical non-equilibrium, accurate high-temperature transport coefficients, and ionized flow physics incorporated into the code. DPLR also includes a large selection of generalized realistic surface boundary conditions and links to enable loose coupling with external thermal protection system (TPS) material response and shock layer radiation codes.
A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.
1989-01-01
A generalized one-dimensional computer code for analyzing the flow and heat transfer in the turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
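A hedged sketch of the kind of per-station correlation evaluation such a one-dimensional code performs (the abstract does not state which correlations the code uses; Dittus-Boelter and Blasius below are common stand-ins, and the passage conditions are hypothetical).

def dittus_boelter_h(re, pr, k_fluid, d_hydraulic, heating=True):
    # Heat transfer coefficient from the Dittus-Boelter correlation: Nu = 0.023 Re^0.8 Pr^n.
    n = 0.4 if heating else 0.3
    nu = 0.023 * re**0.8 * pr**n
    return nu * k_fluid / d_hydraulic

def blasius_friction_factor(re):
    # Darcy friction factor for turbulent flow in a smooth passage: f = 0.316 Re^-0.25.
    return 0.316 * re**-0.25

# Hypothetical coolant-passage station, for illustration only.
re, pr = 4.0e4, 0.7
print(round(dittus_boelter_h(re, pr, k_fluid=0.05, d_hydraulic=0.004), 1),
      round(blasius_friction_factor(re), 4))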
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Murdani, Muhammad Harist; Hong, Bonghee
2018-01-01
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.
Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee
2018-03-24
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes ( Ad-Hoc ) and neighborhood proximity ( Top-K ). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
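A hedged sketch of the combined metric described above: a weighted sum of the centroid distance between two ZIP codes and a road-network term. The haversine centroid distance is standard; the road term, the weights, and the coordinates below are placeholders, not the paper's calibrated values.

import math

def centroid_distance_km(a, b):
    # Great-circle (haversine) distance between two (lat, lon) centroids, in km.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2.0) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(h))

def combined_distance(centroid_a, centroid_b, road_term, w_centroid=0.5, w_road=0.5):
    # Weighted-sum combination of centroid distance and an intersecting-road-network term.
    return w_centroid * centroid_distance_km(centroid_a, centroid_b) + w_road * road_term

# Hypothetical ZIP-code centroids and road-network score, for illustration only.
print(round(combined_distance((35.16, 129.07), (35.23, 129.08), road_term=3.2), 2))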
Verbeke, J. M.; Petit, O.
2016-06-01
From nuclear safeguards to homeland security applications, the need for the better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., themore » neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a full analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.« less
Comprehensive Micromechanics-Analysis Code - Version 4.0
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Bednarcyk, B. A.
2005-01-01
Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.
Nonlinear 3D MHD verification study: SpeCyl and PIXIE3D codes for RFP and Tokamak plasmas
NASA Astrophysics Data System (ADS)
Bonfiglio, D.; Cappello, S.; Chacon, L.
2010-11-01
A strong emphasis is presently placed in the fusion community on reaching predictive capability of computational models. An essential requirement of such an endeavor is the process of assessing the mathematical correctness of computational tools, termed verification [1]. We present here a successful nonlinear cross-benchmark verification study between the 3D nonlinear MHD codes SpeCyl [2] and PIXIE3D [3]. Excellent quantitative agreement is obtained in both 2D and 3D nonlinear visco-resistive dynamics for reversed-field pinch (RFP) and tokamak configurations [4]. RFP dynamics, in particular, lends itself as an ideal nontrivial test-bed for 3D nonlinear verification. Perspectives for future application of the fully-implicit parallel code PIXIE3D to RFP physics, in particular to address open issues on RFP helical self-organization, will be provided. [4pt] [1] M. Greenwald, Phys. Plasmas 17, 058101 (2010) [0pt] [2] S. Cappello and D. Biskamp, Nucl. Fusion 36, 571 (1996) [0pt] [3] L. Chacón, Phys. Plasmas 15, 056103 (2008) [0pt] [4] D. Bonfiglio, L. Chacón and S. Cappello, Phys. Plasmas 17 (2010)
Language-Agnostic Reproducible Data Analysis Using Literate Programming.
Vassilev, Boris; Louhimo, Riku; Ikonen, Elina; Hautaniemi, Sampsa
2016-01-01
A modern biomedical research project can easily contain hundreds of analysis steps and lack of reproducibility of the analyses has been recognized as a severe issue. While thorough documentation enables reproducibility, the number of analysis programs used can be so large that in reality reproducibility cannot be easily achieved. Literate programming is an approach to present computer programs to human readers. The code is rearranged to follow the logic of the program, and to explain that logic in a natural language. The code executed by the computer is extracted from the literate source code. As such, literate programming is an ideal formalism for systematizing analysis steps in biomedical research. We have developed the reproducible computing tool Lir (literate, reproducible computing) that allows a tool-agnostic approach to biomedical data analysis. We demonstrate the utility of Lir by applying it to a case study. Our aim was to investigate the role of endosomal trafficking regulators to the progression of breast cancer. In this analysis, a variety of tools were combined to interpret the available data: a relational database, standard command-line tools, and a statistical computing environment. The analysis revealed that the lipid transport related genes LAPTM4B and NDRG1 are coamplified in breast cancer patients, and identified genes potentially cooperating with LAPTM4B in breast cancer progression. Our case study demonstrates that with Lir, an array of tools can be combined in the same data analysis to improve efficiency, reproducibility, and ease of understanding. Lir is an open-source software available at github.com/borisvassilev/lir.
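The extraction step that literate programming relies on ("tangling" the executable code out of the literate source) can be sketched as below. This is a generic illustration with an assumed noweb-style chunk syntax, not Lir's actual implementation or file format; the chunk bodies are made-up examples and are never executed by this script.

import re

LITERATE_SOURCE = """
We first load the expression matrix.

<<load-data>>=
data = read_matrix("expression.tsv")
@

Then we keep genes co-amplified with LAPTM4B.

<<filter>>=
hits = [g for g in data if coamplified(g, "LAPTM4B")]
@
"""

def tangle(source):
    # Pull out every '<<name>>= ... @' chunk and concatenate the code bodies in order.
    chunks = re.findall(r"<<(.+?)>>=\n(.*?)\n@", source, flags=re.S)
    return "\n".join(body for _, body in chunks)

print(tangle(LITERATE_SOURCE))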
Language-Agnostic Reproducible Data Analysis Using Literate Programming
Vassilev, Boris; Louhimo, Riku; Ikonen, Elina; Hautaniemi, Sampsa
2016-01-01
A modern biomedical research project can easily contain hundreds of analysis steps and lack of reproducibility of the analyses has been recognized as a severe issue. While thorough documentation enables reproducibility, the number of analysis programs used can be so large that in reality reproducibility cannot be easily achieved. Literate programming is an approach to present computer programs to human readers. The code is rearranged to follow the logic of the program, and to explain that logic in a natural language. The code executed by the computer is extracted from the literate source code. As such, literate programming is an ideal formalism for systematizing analysis steps in biomedical research. We have developed the reproducible computing tool Lir (literate, reproducible computing) that allows a tool-agnostic approach to biomedical data analysis. We demonstrate the utility of Lir by applying it to a case study. Our aim was to investigate the role of endosomal trafficking regulators to the progression of breast cancer. In this analysis, a variety of tools were combined to interpret the available data: a relational database, standard command-line tools, and a statistical computing environment. The analysis revealed that the lipid transport related genes LAPTM4B and NDRG1 are coamplified in breast cancer patients, and identified genes potentially cooperating with LAPTM4B in breast cancer progression. Our case study demonstrates that with Lir, an array of tools can be combined in the same data analysis to improve efficiency, reproducibility, and ease of understanding. Lir is an open-source software available at github.com/borisvassilev/lir. PMID:27711123
The NJOY Nuclear Data Processing System, Version 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.
The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for transform block sizes of order 4, 8, 16, and 32 and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stage. For rate and distortion estimation at the CU level, two orthogonal matrices of order 4×4 and 8×8, which are applied to the WHT, are newly designed in a butterfly structure using only addition and shift operations. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization and inverse transform. In addition, a non-texture rate estimation is proposed by using a pseudo-entropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
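A hedged sketch of two of the ingredients named above: a 4-point Walsh-Hadamard butterfly that needs only additions and subtractions, and a texture-rate proxy driven by the count of non-zero quantized coefficients. The transform here is the plain order-4 WHT, not the paper's newly designed orthogonal matrices, and the quantizer and bits-per-coefficient constant are placeholders.

def wht4(x):
    # Order-4 Walsh-Hadamard butterfly: two stages of additions/subtractions only.
    a, b = x[0] + x[3], x[1] + x[2]
    c, d = x[0] - x[3], x[1] - x[2]
    return [a + b, c + d, a - b, c - d]

def quantize(coeffs, qstep):
    # Simplified quantizer stand-in; HEVC itself uses scaled integer arithmetic.
    return [int(c / qstep) for c in coeffs]

def texture_rate_proxy(quantized, bits_per_nonzero=2.0):
    # Estimate texture bits from the number of non-zero quantized coefficients.
    return bits_per_nonzero * sum(1 for c in quantized if c != 0)

residual_row = [12, -3, 5, 0]
q = quantize(wht4(residual_row), qstep=8)
print(q, texture_rate_proxy(q))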
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-01-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
Computational strategies for three-dimensional flow simulations on distributed computer systems
NASA Astrophysics Data System (ADS)
Sankar, Lakshmi N.; Weed, Richard A.
1995-08-01
This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
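A hedged sketch of the manager-worker strategy discussed in both abstracts above, using a process pool as a stand-in for the PVM workstation cluster: the manager hands grid blocks to whichever worker is free, which is essentially the task-queue form of load balancing mentioned in the text. The solver kernel and block count are placeholders.

from multiprocessing import Pool

def solve_block(block_id):
    # Stand-in for one worker's flow-solver sweep over its assigned grid block.
    residual = 1.0 / (block_id + 1)     # hypothetical per-block result
    return block_id, residual

if __name__ == "__main__":
    blocks = range(16)                  # grid blocks distributed by the manager
    with Pool(processes=4) as pool:     # the workers
        for block_id, residual in pool.imap_unordered(solve_block, blocks):
            print(f"block {block_id}: residual {residual:.3f}")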
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kWe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAUs under conditions of possible modes of failure which still permit continued system operation.
"Hour of Code": Can It Change Students' Attitudes toward Programming?
ERIC Educational Resources Information Center
Du, Jie; Wimmer, Hayden; Rada, Roy
2016-01-01
The Hour of Code is a one-hour introduction to computer science organized by Code.org, a non-profit dedicated to expanding participation in computer science. This study investigated the impact of the Hour of Code on students' attitudes towards computer programming and their knowledge of programming. A sample of undergraduate students from two…
DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James; Bayless, Paul; Strydom, Gerhard
2016-11-01
Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas Cooled Reactors (HTGRs), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to the neutron cross-section covariances. Phase II is oriented towards the investigation of uncertainties propagated from the lattice to the coupled neutronics/thermal-hydraulics core calculations. Nominal results for the prismatic single block (Ex.I-2a) and super cell models (Ex.I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross section generation. In this work, the TRITON/NEWT-flux-weighted cross sections obtained for Ex.I-2a and various models of Ex.I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of these variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.
Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.
Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, capturing tritium for fusion fuel is desirable. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this system, the augmented TRANSFORM library includes dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.
Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion
Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.
2018-03-20
Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, capturing tritium for fusion fuel is desirable. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this system, the augmented TRANSFORM library includes dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.
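The verification pattern described above (compare a simulated concentration field against an analytical solution) can be sketched for the simplest relevant case: one-dimensional diffusion into a slab with a fixed surface concentration, where the analytical solution is C(x,t) = C_s erfc(x / (2 sqrt(D t))). The explicit finite-difference march below is a generic stand-in, not the Modelica/TRANSFORM implementation, and the diffusivity and geometry are hypothetical.

import math

D = 1.0e-9            # diffusivity, m^2/s (hypothetical)
C_s = 1.0             # fixed surface concentration (normalized)
L, N = 2.0e-3, 200    # slab depth (m) and number of cells
dx = L / N
dt = 0.4 * dx * dx / D          # explicit stability limit is 0.5*dx^2/D
t_end = 500.0

# Explicit finite differences for dC/dt = D d2C/dx2, with C(0)=C_s, C(L)=0, C(x,0)=0.
c = [0.0] * (N + 1)
c[0] = C_s
steps = int(t_end / dt)
for _ in range(steps):
    new = c[:]
    for i in range(1, N):
        new[i] = c[i] + D * dt / dx**2 * (c[i + 1] - 2.0 * c[i] + c[i - 1])
    c = new

# Compare against the semi-infinite-slab analytical solution at a few depths.
t = steps * dt
for i in (10, 20, 40):
    x = i * dx
    exact = C_s * math.erfc(x / (2.0 * math.sqrt(D * t)))
    print(f"x = {x:.1e} m  numerical = {c[i]:.4f}  analytical = {exact:.4f}")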
HELIOS-R: An Ultrafast, Open-Source Retrieval Code For Exoplanetary Atmosphere Characterization
NASA Astrophysics Data System (ADS)
LAVIE, Baptiste
2015-12-01
Atmospheric retrieval is a growing new approach in the theory of exoplanet atmosphere characterization. Unlike self-consistent modeling, it allows us to fully explore the parameter space, as well as the degeneracies between the parameters, using a Bayesian framework. We present HELIOS-R, a very fast retrieval code written in Python and optimized for GPU computation. Once it is ready, HELIOS-R will be the first open-source atmospheric retrieval code accessible to the exoplanet community. As the new generation of direct imaging instruments (SPHERE, GPI) has started to gather data, the first version of HELIOS-R focuses on emission spectra. We use a 1D two-stream forward model for computing fluxes and couple it to an analytical temperature-pressure profile that is constructed to be in radiative equilibrium. We use our ultra-fast opacity calculator HELIOS-K (also open-source) to compute the opacities of CO2, H2O, CO and CH4 from the HITEMP database. We test both opacity sampling (which is typically used by other workers) and the method of k-distributions. Using this setup, we compute a grid of synthetic spectra and temperature-pressure profiles, which is then explored using a nested sampling algorithm. By focusing on model selection (Occam’s razor) through the explicit computation of the Bayesian evidence, nested sampling allows us to deal with current sparse data as well as upcoming high-resolution observations. Once the best model is selected, HELIOS-R provides posterior distributions of the parameters. As a test of our code we studied the HR 8799 system and compared our results with the previous analysis of Lee, Heng & Irwin (2013), which used the proprietary NEMESIS retrieval code. HELIOS-R and HELIOS-K are part of the set of open-source community codes we named the Exoclimes Simulation Platform (www.exoclime.org).
Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan
2013-01-01
Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…
Monte Carlo MCNP-4B-based absorbed dose distribution estimates for patient-specific dosimetry.
Yoriyaz, H; Stabin, M G; dos Santos, A
2001-04-01
This study was intended to verify the capability of the Monte Carlo MCNP-4B code to evaluate spatial dose distribution based on information gathered from CT or SPECT. A new three-dimensional (3D) dose calculation approach for internal emitter use in radioimmunotherapy (RIT) was developed using the Monte Carlo MCNP-4B code as the photon and electron transport engine. It was shown that the MCNP-4B computer code can be used with voxel-based anatomic and physiologic data to provide 3D dose distributions. This study showed that the MCNP-4B code can be used to develop a treatment planning system that will provide such information in a timely manner, if dose reporting is suitably optimized. If each organ is divided into small regions where the average energy deposition is calculated with a typical volume of 0.4 cm(3), regional dose distributions can be provided with reasonable central processing unit times (on the order of 12-24 h on a 200-MHz personal computer or modest workstation). Further efforts to provide semiautomated region identification (segmentation) and improvement of marrow dose calculations are needed to supply a complete system for RIT. It is envisioned that all such efforts will continue to develop and that internal dose calculations may soon be brought to a similar level of accuracy, detail, and robustness as is commonly expected in external dose treatment planning. For this study we developed a code with a user-friendly interface that works on several nuclear medicine imaging platforms and provides timely patient-specific dose information to the physician and medical physicist. Future therapy with internal emitters should use a 3D dose calculation approach, which represents a significant advance over dose information provided by the standard geometric phantoms used for more than 20 y (which permit reporting of only average organ doses for certain standardized individuals).
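The regional-averaging step described above (collapsing voxel-wise energy deposition to a mean dose per region) reduces to a labelled reduction over two arrays. The sketch below is a minimal illustration with synthetic numbers, not MCNP-4B output or the authors' interface.

```python
# Minimal sketch of mean dose per labelled region from a voxel dose array.
import numpy as np
rng = np.random.default_rng(0)

dose = rng.gamma(2.0, 0.5, size=(32, 32, 32))      # Gy per voxel (synthetic)
labels = rng.integers(0, 4, size=dose.shape)        # 0 = background, 1-3 = regions

sums = np.bincount(labels.ravel(), weights=dose.ravel())
counts = np.bincount(labels.ravel())
mean_dose = sums / counts
for region, d in enumerate(mean_dose):
    print(f"region {region}: mean dose {d:.3f} Gy over {counts[region]} voxels")
```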
HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics
NASA Astrophysics Data System (ADS)
Wiebusch, Martin
2015-10-01
This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
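The core of the guidelines is the transformation of one-element-at-a-time loops into whole-array operations. The sketch below shows that transformation in NumPy rather than ASC FORTRAN (an assumption made only to keep the examples in this document in one language); both forms compute the same first-difference derivative.

```python
# Scalar loop versus vectorized array expression for a first-difference derivative.
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
f = np.sin(2 * np.pi * x)
h = x[1] - x[0]

# scalar (loop) form: one element per trip
df_loop = np.empty(f.size - 1)
for i in range(f.size - 1):
    df_loop[i] = (f[i + 1] - f[i]) / h

# vector (array) form: the whole operation expressed on array sections
df_vec = (f[1:] - f[:-1]) / h

print(np.allclose(df_loop, df_vec))   # same result, but the second form vectorizes
```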
NASA Astrophysics Data System (ADS)
Exby, J.; Busby, R.; Dimitrov, D. A.; Bruhwiler, D.; Cary, J. R.
2003-10-01
We present our design and initial implementation of a web service model for running particle-in-cell (PIC) codes remotely from a web browser interface. PIC codes have grown significantly in complexity and now often require parallel execution on multiprocessor computers, which in turn requires sophisticated post-processing and data analysis. A significant amount of time and effort is required for a physicist to develop all the necessary skills, at the expense of actually doing research. Moreover, parameter studies with a computationally intensive code justify the systematic management of results with an efficient way to communicate them among a group of remotely located collaborators. Our initial implementation uses the OOPIC Pro code [1], Linux, Apache, MySQL, Python, and PHP. The Interactive Data Language is used for visualization. [1] D.L. Bruhwiler et al., Phys. Rev. ST-AB 4, 101302 (2001). * This work is supported by DOE grant # DE-FG02-03ER83857 and by Tech-X Corp. ** Also University of Colorado.
NASA Astrophysics Data System (ADS)
Lei, Ted Chih-Wei; Tseng, Fan-Shuo
2017-07-01
This paper addresses the problem of the high decoding complexity of traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding block-based WZVC not only decreases decoder complexity to approximately one hundredth that of the state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.
The Helicopter Antenna Radiation Prediction Code (HARP)
NASA Technical Reports Server (NTRS)
Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.
1990-01-01
The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user to describe the helicopter geometry, select the method of computation, construct the desired high or low frequency model, and display the results.
Enhanced fault-tolerant quantum computing in d-level systems.
Campbell, Earl T
2014-12-05
Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
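For readers unfamiliar with the multigrid concept the abstract builds on, the sketch below shows a minimal V-cycle for a 1D Poisson problem (weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation). It is a generic textbook illustration, not the scheme implemented in Proteus; the grid size and smoothing counts are arbitrary choices.

```python
# Minimal 1D multigrid V-cycle for -u'' = f with zero Dirichlet boundaries.
import numpy as np

def smooth(u, f, h, sweeps, omega=2.0 / 3.0):
    # weighted-Jacobi relaxation
    for _ in range(sweeps):
        u[1:-1] = ((1.0 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # full weighting onto a grid with half as many intervals
    rc = np.zeros(r.size // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec):
    # linear interpolation back to the fine grid
    ef = np.zeros(2 * (ec.size - 1) + 1)
    ef[::2] = ec
    ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return ef

def v_cycle(u, f, h):
    u = smooth(u, f, h, 2)
    if u.size - 1 > 2:
        ec = v_cycle(np.zeros(u.size // 2 + 1), restrict(residual(u, f, h)), 2.0 * h)
        u = u + prolong(ec)
    return smooth(u, f, h, 2)

n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)          # exact solution u = sin(pi x)
u = np.zeros(n + 1)
for cycle in range(8):
    u = v_cycle(u, f, h)
    print(cycle, np.abs(residual(u, f, h)).max())   # residual drops fast per cycle
```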
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one- and two-dimensional discrete ordinate transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.
Nonuniform code concatenation for universal fault-tolerant quantum computing
NASA Astrophysics Data System (ADS)
Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza
2017-09-01
Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to two 49-qubit and 47-qubit codes, respectively. These codes can correct any arbitrary single physical error with the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.
1986-09-30
[Garbled front matter and table listing; recoverable items: Table I, SA32-40 single event upset test, 1140-MeV krypton, 9/18/84; Table II, CRUP simulation.] The cosmic ray interaction analyses described in the remainder of this report were calculated using the CRUP computer code, modified for funneling. The CRUP code requires, as inputs, the size of a depletion region specified as a rectangular parallelepiped with dimensions a × b × c, and the effective funnel …
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
High Temperature Composite Analyzer (HITCAN) demonstration manual, version 1.0
NASA Technical Reports Server (NTRS)
Singhal, S. N.; Lackney, J. J.; Murthy, P. L. N.
1993-01-01
This manual comprises a variety of demonstration cases for the HITCAN (HIgh Temperature Composite ANalyzer) code. HITCAN is a general purpose computer program for predicting nonlinear global structural and local stress-strain response of arbitrarily oriented, multilayered high temperature metal matrix composite structures. HITCAN is written in FORTRAN 77 computer language and has been configured and executed on the NASA Lewis Research Center CRAY XMP and YMP computers. Detailed description of all program variables and terms used in this manual may be found in the User's Manual. The demonstration includes various cases to illustrate the features and analysis capabilities of the HITCAN computer code. These cases include: (1) static analysis, (2) nonlinear quasi-static (incremental) analysis, (3) modal analysis, (4) buckling analysis, (5) fiber degradation effects, (6) fabrication-induced stresses for a variety of structures; namely, beam, plate, ring, shell, and built-up structures. A brief discussion of each demonstration case with the associated input data file is provided. Sample results taken from the actual computer output are also included.
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.
1982-01-01
A user's manual describing the operation of three computer codes (ADD code, PTRAK code, and VAPDIF code) is presented. The general features of the computer codes, the input/output formats, run streams, and sample input cases are described.
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Section Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of new computer codes that permit more accurate and efficient three-dimensional analysis of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components. This report is presented in two volumes. Volume 1 describes effort performed under Task 4B, Special Finite Element Special Function Models, while Volume 2 concentrates on Task 4C, Advanced Special Functions Models.
f1: a code to compute Appell's F1 hypergeometric function
NASA Astrophysics Data System (ADS)
Colavecchia, F. D.; Gasaneo, G.
2004-02-01
In this work we present the FORTRAN code to compute the hypergeometric function F1( α, β1, β2, γ, x, y) of Appell. The program can compute the F1 function for real values of the variables { x, y}, and complex values of the parameters { α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29]. Program summary. Title of the program: f1. Catalogue identifier: ADSJ. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSJ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: PC compatibles, SGI Origin2. Operating system under which the program has been tested: Linux, IRIX. Programming language used: Fortran 90. Memory required to execute with typical data: 4 kbytes. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 52 325. Distribution format: tar gzip file. External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: Numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function. Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states. Method of solution: The F1 function has a convergent-series definition for | x|<1 and | y|<1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the precedent cases. In the convergence region the program uses the series definition near the origin of coordinates, and a numerical integration of the third-order differential parametric equation for the F1 function. It also detects several special cases according to the values of the parameters. Restrictions on the complexity of the problem: The code is restricted to real values of the variables { x, y}. Also, there are some parameter domains that are not covered. These usually imply differences between integer parameters that lead to negative integer arguments of Gamma functions. Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s on an Athlon 900 MHz processor.
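In the convergence region the function is just its defining double series, F1(α; β1, β2; γ; x, y) = Σ_{m,n} (α)_{m+n} (β1)_m (β2)_n / ((γ)_{m+n} m! n!) x^m y^n. The sketch below evaluates that truncated series in Python (an assumption for illustration; the published code is Fortran 90 and also handles complex parameters and analytic continuations), restricted here to positive real parameters and |x|, |y| < 1.

```python
# Truncated double-series evaluation of Appell's F1 (illustrative sketch only).
import math
from scipy.special import gammaln, hyp2f1

def log_poch(a, k):
    # log of the Pochhammer symbol (a)_k = Gamma(a + k) / Gamma(a), for a > 0
    return gammaln(a + k) - gammaln(a)

def appell_f1(a, b1, b2, c, x, y, nmax=80):
    total = 0.0
    for m in range(nmax):
        for n in range(nmax):
            logt = (log_poch(a, m + n) + log_poch(b1, m) + log_poch(b2, n)
                    - log_poch(c, m + n)
                    - math.lgamma(m + 1) - math.lgamma(n + 1))
            total += math.exp(logt) * x ** m * y ** n
    return total

# sanity check: F1(a, b1, b2, c, x, 0) reduces to the Gauss function 2F1(a, b1; c; x)
print(appell_f1(0.5, 0.3, 0.7, 1.2, 0.2, 0.0), hyp2f1(0.5, 0.3, 1.2, 0.2))
```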
Probabilistic Analysis of Aircraft Gas Turbine Disk Life and Reliability
NASA Technical Reports Server (NTRS)
Melis, Matthew E.; Zaretsky, Erwin V.; August, Richard
1999-01-01
Two series of low cycle fatigue (LCF) test data for two groups of different aircraft gas turbine engine compressor disk geometries were reanalyzed and compared using Weibull statistics. Both groups of disks were manufactured from titanium (Ti-6Al-4V) alloy. A probabilistic computer code, Probable Cause, developed at the NASA Glenn Research Center, was used to predict disk life and reliability. A material-life factor A was determined for titanium (Ti-6Al-4V) alloy based upon fatigue disk data and successfully applied to predict the life of the disks as a function of speed. A comparison was made with the currently used life prediction method based upon crack growth rate. Applying an endurance limit to the computer code did not significantly affect the predicted lives under engine operating conditions. Predicted failure locations correlate with those experimentally observed in the LCF tests. A reasonable correlation was obtained between the predicted disk lives using the Probable Cause code and a modified crack growth method for life prediction. Both methods slightly overpredict life for one disk group and significantly underpredict it for the other.
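As a reminder of what a Weibull reanalysis of LCF lives involves, the sketch below fits a two-parameter Weibull distribution to a small set of hypothetical disk lives and derives an L10 life. The failure data, the SciPy-based fitting route, and the L10 metric are illustrative assumptions, not the Probable Cause methodology or the paper's data.

```python
# Two-parameter Weibull fit to hypothetical LCF lives (illustrative only).
import numpy as np
from scipy.stats import weibull_min

lives = np.array([12_400, 15_800, 18_100, 21_500, 24_300, 29_900, 35_200])  # cycles
shape, loc, scale = weibull_min.fit(lives, floc=0)   # fix location at zero
print(f"Weibull slope (shape) = {shape:.2f}, characteristic life = {scale:.0f} cycles")

# L10 life: cycles at which 10 percent of the population is expected to have failed
print("L10 =", scale * (-np.log(0.9)) ** (1.0 / shape))
```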
Automated apparatus and method of generating native code for a stitching machine
NASA Technical Reports Server (NTRS)
Miller, Jeffrey L. (Inventor)
2000-01-01
A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
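The generation logic described above is essentially a walk over ordered stitch points with a constraint test between consecutive points. The sketch below is a minimal, hypothetical rendering of that loop; the G01/M50 words, the helper names, and the constraint predicate are placeholders, not the patented system's actual output format.

```python
# Minimal sketch of stitch-program generation with a constraint check between points.
def generate_stitch_program(points, constraint_between):
    """points: ordered (x, y) stitch locations; constraint_between(p, q) -> bool."""
    program = []
    for present, nxt in zip(points, points[1:]):
        if constraint_between(present, nxt):
            # a constraint lies between the points: emit a head-condition change first
            program.append(f"; reorient head before ({nxt[0]:.2f}, {nxt[1]:.2f})")
            program.append("M50  ; hypothetical 'change stitching direction' word")
        program.append(f"G01 X{nxt[0]:.2f} Y{nxt[1]:.2f}  ; stitch at the next point")
    return program

pts = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
print("\n".join(generate_stitch_program(pts, lambda p, q: q[0] == p[0])))
```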
1983-09-01
[Garbled subroutine documentation from a GTD antenna code; a COMMON-block variable listing (/AMPZIJ/: PX, REFH, REFV, RHOX, RHOY, RHOZ, SA, SALP) is omitted.] Calling routine: FLDDRV. Name: PLAINT (GTD). Purpose: to determine if a ray traveling from a given source location … to determine if a source ray reflection from plate MP occurs. If a ray traveling from the source image location in the reflected ray direction passes through …
Aerodynamic Analysis of a Canard Missile Configuration using ANSYS-CFX
2011-12-01
Aerodynamic Analysis of a Canard Missile Configuration Using ANSYS-CFX, by Hong Chuan Wee, December 2011; Master's thesis, Thesis Advisor: Maximilian Platzer. [Report documentation page residue omitted.] Abstract (truncated): This study used the Computational Fluid Dynamics code ANSYS-CFX to …
Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.
Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao
2011-12-01
In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in the H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of 16×16. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for the block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden in calculating the BI prediction. As compared to the JSVM 9.11 software, our method saves the encoding time from 48% to 67% for a large variety of test videos over a wide range of coding bit-rates and has only a minor coding performance loss. © 2011 IEEE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru
2010-12-15
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
NASA Astrophysics Data System (ADS)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.
2010-12-01
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
Campbell, J R; Carpenter, P; Sneiderman, C; Cohn, S; Chute, C G; Warren, J
1997-01-01
To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for "parent" and "child" codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p < .00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56, UMLS 3.17; READ 2.14, *p < .005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p < .00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p < .004) associated with a loss of clarity. No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record.
Users manual and modeling improvements for axial turbine design and performance computer code TD2-2
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
Aerodynamic data banks for Clark-Y, NACA 4-digit and NACA 16-series airfoil families
NASA Technical Reports Server (NTRS)
Korkan, K. D.; Camba, J., III; Morris, P. M.
1986-01-01
With the renewed interest in propellers as a means of obtaining thrust and fuel efficiency, in addition to the increased utilization of the computer, significant progress has been made in the development of theoretical models to predict the performance of propeller systems. Inherent in the majority of the theoretical performance models to date is the need for airfoil data banks which provide lift, drag, and moment coefficient values as a function of Mach number, angle-of-attack, maximum thickness to chord ratio, and Reynolds number. Realizing the need for such data, a study was initiated to provide airfoil data banks for three commonly used airfoil families in propeller design and analysis. The families chosen consisted of the Clark-Y, NACA 16 series, and NACA 4 digit series airfoils. The various components of each computer code, the source of the data used to create the airfoil data bank, the limitations of each data bank, the program listing, and a sample case with its associated input-output are described. Each airfoil data bank computer code was written to be used on the Amdahl Computer system, which is IBM compatible and uses Fortran.
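A data bank of this kind is, at its core, a table lookup with interpolation over the independent variables. The sketch below illustrates the idea with a tiny two-variable (Mach number, angle of attack) lift-coefficient table; the numbers are invented for the example and are not taken from the Clark-Y or NACA data banks described above.

```python
# Minimal sketch of an airfoil "data bank" lookup by interpolation (synthetic table).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

mach = np.array([0.2, 0.4, 0.6])
alpha_deg = np.array([0.0, 4.0, 8.0])
cl_table = np.array([[0.25, 0.68, 1.05],     # rows: Mach number, columns: alpha
                     [0.27, 0.72, 1.10],
                     [0.30, 0.78, 1.18]])

cl_lookup = RegularGridInterpolator((mach, alpha_deg), cl_table)
print(cl_lookup([[0.5, 6.0]]))               # interpolated c_l at M = 0.5, alpha = 6 deg
```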
NASA Technical Reports Server (NTRS)
Harper, Warren
1989-01-01
Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The existing codes and certain supplementary software were updated and installed on a computer to be delivered to the customer, providing a capability for graphic display of the computed data and assisting the customer in the solution of specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.
User interfaces for computational science: A domain specific language for OOMMF embedded in Python
NASA Astrophysics Data System (ADS)
Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans
2017-05-01
Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.
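The fourth approach the abstract settles on (embedding the simulation specification in a general-purpose language) can be sketched generically: the user builds the problem from ordinary Python objects and a backend turns them into input for the solver. The class names and parameters below are hypothetical stand-ins, not the published OOMMF interface; the geometry and material numbers merely echo micromagnetic standard problem 4.

```python
# Minimal sketch of an embedded simulation-specification DSL (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class Mesh:
    lengths: tuple      # sample size in metres
    cell: tuple         # discretisation cell in metres

@dataclass
class Material:
    Ms: float           # saturation magnetisation (A/m)
    A: float            # exchange constant (J/m)
    alpha: float        # Gilbert damping

@dataclass
class Simulation:
    mesh: Mesh
    material: Material
    drivers: list = field(default_factory=list)

    def run(self, t, steps):
        # a real backend would write the solver input file, call the solver,
        # and collect the output tables; here we only echo the request
        print(f"would integrate for {t:.1e} s in {steps} steps")

sim = Simulation(Mesh((500e-9, 125e-9, 3e-9), (5e-9, 5e-9, 3e-9)),
                 Material(Ms=8e5, A=1.3e-11, alpha=0.02))
sim.run(t=1e-9, steps=200)
```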
Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob
2003-01-01
The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.
Phase II Evaluation of Clinical Coding Schemes
Campbell, James R.; Carpenter, Paul; Sneiderman, Charles; Cohn, Simon; Chute, Christopher G.; Warren, Judith
1997-01-01
Abstract Objective: To compare three potential sources of controlled clinical terminology (READ codes version 3.1, SNOMED International, and Unified Medical Language System (UMLS) version 1.6) relative to attributes of completeness, clinical taxonomy, administrative mapping, term definitions and clarity (duplicate coding rate). Methods: The authors assembled 1929 source concept records from a variety of clinical information taken from four medical centers across the United States. The source data included medical as well as ample nursing terminology. The source records were coded in each scheme by an investigator and checked by the coding scheme owner. The codings were then scored by an independent panel of clinicians for acceptability. Codes were checked for definitions provided with the scheme. Codes for a random sample of source records were analyzed by an investigator for “parent” and “child” codes within the scheme. Parent and child pairs were scored by an independent panel of medical informatics specialists for clinical acceptability. Administrative and billing code mapping from the published scheme were reviewed for all coded records and analyzed by independent reviewers for accuracy. The investigator for each scheme exhaustively searched a sample of coded records for duplications. Results: SNOMED was judged to be significantly more complete in coding the source material than the other schemes (SNOMED* 70%; READ 57%; UMLS 50%; *p <.00001). SNOMED also had a richer clinical taxonomy judged by the number of acceptable first-degree relatives per coded concept (SNOMED* 4.56; UMLS 3.17; READ 2.14, *p <.005). Only the UMLS provided any definitions; these were found for 49% of records which had a coding assignment. READ and UMLS had better administrative mappings (composite score: READ* 40.6%; UMLS* 36.1%; SNOMED 20.7%, *p <. 00001), and SNOMED had substantially more duplications of coding assignments (duplication rate: READ 0%; UMLS 4.2%; SNOMED* 13.9%, *p <. 004) associated with a loss of clarity. Conclusion: No major terminology source can lay claim to being the ideal resource for a computer-based patient record. However, based upon this analysis of releases for April 1995, SNOMED International is considerably more complete, has a compositional nature and a richer taxonomy. It suffers from less clarity, resulting from a lack of syntax and evolutionary changes in its coding scheme. READ has greater clarity and better mapping to administrative schemes (ICD-10 and OPCS-4), is rapidly changing and is less complete. UMLS is a rich lexical resource, with mappings to many source vocabularies. It provides definitions for many of its terms. However, due to the varying granularities and purposes of its source schemes, it has limitations for representation of clinical concepts within a computer-based patient record. PMID:9147343
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
Computation of the tip vortex flowfield for advanced aircraft propellers
NASA Technical Reports Server (NTRS)
Tsai, Tommy M.; Dejong, Frederick J.; Levy, Ralph
1988-01-01
The tip vortex flowfield plays a significant role in the performance of advanced aircraft propellers. The flowfield in the tip region is complex, three-dimensional and viscous with large secondary velocities. An analysis is presented using an approximate set of equations which contains the physics required by the tip vortex flowfield, but which does not require the resources of the full Navier-Stokes equations. A computer code was developed to predict the tip vortex flowfield of advanced aircraft propellers. A grid generation package was developed to allow specification of a variety of advanced aircraft propeller shapes. Calculations of the tip vortex generation on an SR3 type blade at high Reynolds numbers were made using this code and a parametric study was performed to show the effect of tip thickness on tip vortex intensity. In addition, calculations of the tip vortex generation on a NACA 0012 type blade were made, including the flowfield downstream of the blade trailing edge. Comparison of flowfield calculations with experimental data from an F4 blade was made. A user's manual was also prepared for the computer code (NASA CR-182178).
Research in Parallel Algorithms and Software for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Domel, Neal D.
1996-01-01
Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
The investigation of tethered satellite system dynamics
NASA Technical Reports Server (NTRS)
Lorenzini, E. C.
1986-01-01
The analysis of the rotational dynamics of the satellite was focused on the rotational amplitude increase of the satellite, with respect to the tether, during retrieval. The dependence of the rotational amplitude upon the tether tension variation to the power 1/4 was thoroughly investigated. The damping of rotational oscillations achievable by reel control was also quantified, while an alternative solution that makes use of a lever arm attached with a universal joint to the satellite was proposed. Comparison simulations of retrieval maneuvers between the Smithsonian Astrophysical Observatory and Martin Marietta (MMA) computer codes were also carried out. The agreement between the two completely independent codes was extremely close, demonstrating the reliability of the models. The slack tether dynamics during reel jams was analytically investigated in order to identify the limits of applicability of the SLACK3 computer code to this particular case. Test runs with SLACK3 were also carried out.
VizieR Online Data Catalog: Habitable zone code (Valle+, 2014)
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2014-06-01
A C computation code that provides as output the distance dm (in AU) for which the duration of habitability is longest, the corresponding duration tm (in Gyr), the width W (in AU) of the zone for which the habitability lasts tm/2, and the inner (Ri) and outer (Ro) boundaries of the 4 Gyr continuously habitable zone. The code reads the input file HZ-input.dat, containing in each row the mass of the host star (range: 0.70-1.10 M⊙), its metallicity (either Z (range: 0.005-0.004) or [Fe/H]), the helium-to-metal enrichment ratio (range: 1-3, standard value = 2), the equilibrium temperature for habitable zone outer boundary computation (range: 169-203 K), and the planet Bond albedo (range: 0.0-1.0, Earth = 0.3). The output is printed on-screen. Compilation: just use your favorite C compiler: gcc hz.c -lm -o HZ (2 data files).
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit potential for improvement of serial code even for the so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated while only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
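The bottleneck computation itself is simple: each column of the Jacobian comes from perturbing one input and re-evaluating the residual function, which is why the columns can be distributed across workers. The sketch below shows a serial forward-difference version in Python (not the paper's MATLAB spmd implementation); the test function and step size are arbitrary.

```python
# Forward-difference Jacobian: one independent column per perturbed input.
import numpy as np

def numerical_jacobian(f, x, eps=1e-7):
    """Jacobian of f: R^n -> R^m by forward differences; columns are independent,
    which makes the computation embarrassingly parallel across workers."""
    f0 = f(x)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - f0) / eps
    return J

def g(x):  # a small nonlinear test system
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + np.sin(x[1])])

print(numerical_jacobian(g, np.array([1.0, 2.0])))
```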
1990-02-23
[Report documentation page and data-table residue omitted.] … parameter at grid points, consideration of the error produced by the objective analysis scheme is necessary. The computer code used for the Cressman (1959) …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; the suite performs many functions.
Computational experience with a three-dimensional rotary engine combustion model
NASA Astrophysics Data System (ADS)
Raju, M. S.; Willis, E. A.
1990-04-01
A new computer code was developed to analyze the chemically reactive flow and spray combustion processes occurring inside a stratified-charge rotary engine. Mathematical and numerical details of the new code were recently described by the present authors. The results are presented of limited, initial computational trials as a first step in a long-term assessment/validation process. The engine configuration studied was chosen to approximate existing rotary engine flow visualization and hot firing test rigs. Typical results include: (1) pressure and temperature histories, (2) torque generated by the nonuniform pressure distribution within the chamber, (3) energy release rates, and (4) various flow-related phenomena. These are discussed and compared with other predictions reported in the literature. The adequacy or need for improvement in the spray/combustion models and the need for incorporating an appropriate turbulence model are also discussed.
Computational experience with a three-dimensional rotary engine combustion model
NASA Technical Reports Server (NTRS)
Raju, M. S.; Willis, E. A.
1990-01-01
A new computer code was developed to analyze the chemically reactive flow and spray combustion processes occurring inside a stratified-charge rotary engine. Mathematical and numerical details of the new code were recently described by the present authors. The results are presented of limited, initial computational trials as a first step in a long-term assessment/validation process. The engine configuration studied was chosen to approximate existing rotary engine flow visualization and hot firing test rigs. Typical results include: (1) pressure and temperature histories, (2) torque generated by the nonuniform pressure distribution within the chamber, (3) energy release rates, and (4) various flow-related phenomena. These are discussed and compared with other predictions reported in the literature. The adequacy or need for improvement in the spray/combustion models and the need for incorporating an appropriate turbulence model are also discussed.
Web Services Provide Access to SCEC Scientific Research Application Software
NASA Astrophysics Data System (ADS)
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
2003-12-01
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.
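Conceptually, the "wrapping" step the abstract describes puts a thin network layer around an existing executable: accept parameters over HTTP, run the code, and return its output. The sketch below is a generic, minimal Python/Flask illustration of that pattern (assuming Flask is installed); the endpoint name, the legacy_code executable, and its flags are placeholders, not part of the SCEC framework.

```python
# Minimal sketch of wrapping a command-line scientific code as a web service.
import subprocess
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_code():
    params = request.get_json()             # e.g. {"lat": 34.0, "lon": -118.2}
    cmd = ["legacy_code", "--lat", str(params["lat"]), "--lon", str(params["lon"])]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
    return jsonify(stdout=result.stdout, returncode=result.returncode)

if __name__ == "__main__":
    app.run(port=8080)
```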
Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.
1999-09-01
A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.
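The per-time-step exchange described above (thermal hydraulics supplies temperatures to the neutronics, which returns power to the thermal side) can be caricatured with a point-kinetics and lumped-fuel model advanced by explicit coupling. The sketch below is exactly that caricature; every constant is illustrative and nothing here is drawn from TRAC-PF1/MOD2 or NESTLE.

```python
# Explicitly coupled point-kinetics / lumped-fuel toy model (illustrative constants).
beta, Lam, lam = 0.0065, 1.0e-4, 0.08   # delayed fraction, generation time (s), precursor decay (1/s)
alpha_T = -2.0e-5                        # fuel-temperature reactivity coefficient (1/K)
P0, T0, Tc, mc = 3.0e9, 900.0, 560.0, 5.0e6   # nominal power (W), temperatures (K), heat capacity (J/K)
h = P0 / (T0 - Tc)                       # heat-removal coefficient chosen for initial balance (W/K)

P, C, T = P0, beta * P0 / (Lam * lam), T0
rho_ext = 0.001                          # small external reactivity step
dt, nstep = 1.0e-3, 5000
for _ in range(nstep):
    rho = rho_ext + alpha_T * (T - T0)          # neutronics sees the thermal feedback
    dP = ((rho - beta) / Lam) * P + lam * C     # point-kinetics power equation
    dC = (beta / Lam) * P - lam * C
    dT = (P - h * (T - Tc)) / mc                # thermal side sees the new power
    P, C, T = P + dt * dP, C + dt * dC, T + dt * dT
print(f"after {nstep * dt:.1f} s: P/P0 = {P / P0:.3f}, fuel T = {T:.1f} K")
```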
Goddard Visiting Scientist Program
NASA Technical Reports Server (NTRS)
2000-01-01
Under this Indefinite Delivery Indefinite Quantity (IDIQ) contract, USRA was expected to provide short term (from 1 day up to 1 year) personnel as required to provide a Visiting Scientists Program to support the Earth Sciences Directorate (Code 900) at the Goddard Space Flight Center. The Contractor was to have a pool, or have access to a pool, of scientific talent, both domestic and international, at all levels (graduate student to senior scientist), that would support the technical requirements of the following laboratories and divisions within Code 900: 1) Global Change Data Center (902); 2) Laboratory for Atmospheres (Code 910); 3) Laboratory for Terrestrial Physics (Code 920); 4) Space Data and Computing Division (Code 930); 5) Laboratory for Hydrospheric Processes (Code 970). The research activities described below for each organization within Code 900 were intended to comprise the general scope of effort covered under the Visiting Scientist Program.
Applied Computational Transonic Aerodynamics,
1982-08-01
[Fragments recovered from a report on applied computational transonic aerodynamics.] Considering first the body integral (2.95), we now have the situation that, with the effect of the boundary layer represented, e.g. through … effects, (3) static aeroelastic distortion, (4) up to three interfering bodies of nacelle or store type, and (5) an improved method of treating … tip. To date, no modeling of nacelle or store pylons has been included in this code. In the NLR code [64], the effect of (finite) bodies and wing …
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2001-01-01
The purpose of this report was to analyze the heat-transfer problem posed by the determination of spacecraft temperatures and to incorporate the theoretically derived relationships in the computational code TSCALC. The basis for the code was a theoretical analysis of the thermal radiative equilibrium in space, particularly in the Solar System. Beginning with the solar luminosity, the code takes into account these key variables: (1) the spacecraft-to-Sun distance expressed in astronomical units (AU), where 1 AU represents the average Sun-to-Earth distance of 149.6 million km; (2) the angle (arc degrees) at which solar radiation is incident upon a spacecraft surface (ILUMANG); (3) the spacecraft surface temperature (a radiator or photovoltaic array) in kelvin, the surface absorptivity-to-emissivity ratio alpha/epsilon with respect to the solar radiation and (alpha/epsilon)_2 with respect to planetary radiation; and (4) the surface view factor to space F. Outputs from the code have been used to determine environmental temperatures in various Earth orbits. The code was also utilized as a subprogram in the design of power system radiators for deep-space probes.
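For a single flat surface the variables listed above combine into the familiar radiative balance: absorbed solar flux (alpha/epsilon) S cos(theta) / d^2 equals emitted flux sigma F T^4, so T = [(alpha/epsilon) S cos(theta) / (d^2 sigma F)]^(1/4). The sketch below evaluates that relation; it is a minimal one-surface illustration and an assumption about how the listed inputs enter, not the TSCALC code itself, and it ignores planetary albedo and infrared inputs.

```python
# One-surface radiative-equilibrium temperature (illustrative sketch).
import math

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4
S_1AU = 1361.0            # solar constant at 1 AU, W/m^2

def surface_temperature(d_au, illum_angle_deg, alpha_over_eps, view_factor=1.0):
    """Equilibrium temperature (K) of a flat surface at d_au astronomical units,
    illuminated at illum_angle_deg from the surface normal (assumed convention),
    with solar absorptivity-to-emissivity ratio alpha_over_eps and a given
    view factor to space."""
    q_abs = alpha_over_eps * (S_1AU / d_au ** 2) * math.cos(math.radians(illum_angle_deg))
    return (q_abs / (SIGMA * view_factor)) ** 0.25

print(surface_temperature(1.0, 0.0, 1.0))   # roughly 394 K for a black flat plate at 1 AU
```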
Superconducting quantum circuits at the surface code threshold for fault tolerance.
Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M
2014-04-24
A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.
Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO
NASA Technical Reports Server (NTRS)
Stallworth, R.; Meyers, C. A.; Stinson, H. C.
1989-01-01
Results are presented from the comparison study of two computer codes for crack growth analysis, NASCRAC and NASA/FLAGRO. The two computer codes gave compatible, conservative results when the part-through crack analysis solutions were compared against experimental test data. Results also showed good correlation between the codes for the through-crack-at-a-lug solution, for which NASA/FLAGRO gave the most conservative results.
Computational Predictions of the Performance of Wright 'Bent End' Propellers
NASA Technical Reports Server (NTRS)
Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)
2002-01-01
Computational analysis of two 1911 Wright brothers 'Bent End' wooden propeller reproductions has been performed, and the results have been compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a CFD (Computational Fluid Dynamics) code based on the Navier-Stokes equations. It is mainly used to compute the lift coefficient and the drag coefficient at specified angles of attack at different radii. Those calculated data are intermediate results of the computation and part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient, and the propeller propulsive efficiency.
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model, such as MODFLOW, that represents a system of interest. At times, values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran 90, which perform the numerical calculations efficiently.
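The four-step workflow above maps naturally onto a small file-adjustment routine. The Python sketch below is only a toy re-implementation of that workflow for illustration; the real SIM_ADJUST is written in Fortran 90, and the file layouts, missing-value flag, and function names shown here are assumptions.

# Toy sketch of the SIM_ADJUST workflow described above (steps 1-4), written in
# Python for illustration. The real code is Fortran 90; column layout and the
# alternatives-file syntax below are assumptions.

def read_alternatives(path):
    """Each line: observation name followed by fallbacks (constants or other observation names)."""
    alts = {}
    with open(path) as f:
        for line in f:
            name, *fallbacks = line.split()
            alts[name] = fallbacks
    return alts

def read_simulated(path, default_flag="-999.0"):
    """Whitespace-delimited columns: simulated value, observation name."""
    sim = {}
    with open(path) as f:
        for line in f:
            value, name = line.split()
            if value != default_flag:            # treat the flag as "missing"
                sim[name] = float(value)
    return sim

def adjust(expected_path, model_path, out_path):
    alts = read_alternatives(expected_path)
    sim = read_simulated(model_path)
    with open(out_path, "w") as out:
        for name, fallbacks in alts.items():
            value = sim.get(name)                # None if omitted or flagged by the process model
            for fb in fallbacks:                 # walk the user-supplied sequence of alternatives
                if value is not None:
                    break
                value = sim.get(fb)              # another simulated value (e.g., a neighbouring cell)
                if value is None:
                    try:
                        value = float(fb)        # or a constant fallback
                    except ValueError:
                        pass
            out.write(f"{value} {name}\n")       # adjusted file: simulated value, observation name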
1963-10-04
Fragmentary scan of a declassified report; most of the text is unrecoverable OCR residue. Recoverable information: "Tolerances of Transducer Elements and Preamplifiers on Beam Formation and SSI Performance in the AN/SQS-26 Sonar Equipment (U)", TRACOR Document Number 63..., prepared for the Bureau of Ships (Code 688E).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohr, C.L.; Rausch, W.N.; Hesson, G.M.
The LOCA Simulation Program in the NRU reactor is the first set of experiments to provide data on the behavior of full-length, nuclear-heated PWR fuel bundles during the heatup, reflood, and quench phases of a loss-of-coolant accident (LOCA). This paper compares the temperature time histories of 4 experimental test cases with 4 computer codes: CE-THERM, FRAP-T5, GT3-FLECHT, and TRUMP-FLECHT. The preliminary comparisons between prediction and experiment show that the state-of-the-art fuel codes have large uncertainties and are not necessarily conservative in predicting peak temperatures, turnaround times, and bundle quench times.
Proceduracy: Computer Code Writing in the Continuum of Literacy
ERIC Educational Resources Information Center
Vee, Annette
2010-01-01
This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that includes simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
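In schematic form, the separation described above can be written as follows; this is a generic statement of the splitting, with illustrative symbols, rather than the exact equations of the APC paper.

% Sketch of the field separation (symbols illustrative, not taken from the APC paper)
% I: diffuse Stokes vector; tau: optical depth; mu, phi: viewing direction
\mathbf{I}(\tau,\mu,\varphi) = \mathbf{I}_{\mathrm{an}}(\tau,\mu,\varphi) + \mathbf{I}_{\mathrm{sm}}(\tau,\mu,\varphi),
\qquad \mathbf{I}_{\mathrm{an}}\ \text{computed analytically},
\qquad \mathbf{I}_{\mathrm{sm}}\ \text{computed numerically by discrete ordinates.}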
NASA Astrophysics Data System (ADS)
Wu, Tao; Higashiguchi, Takeshi; Li, Bowen; Arai, Goki; Hara, Hiroyuki; Kondo, Yoshiki; Miyazaki, Takanori; Dinh, Thanh-Hung; O'Reilly, Fergal; Sokell, Emma; O'Sullivan, Gerry
2017-02-01
Soft x-ray and extreme ultraviolet (XUV) spectra from lead (Pb, Z = 82) laser-produced plasmas (LPPs) were measured in the 1.0-7.0 nm wavelength region employing a 150-ps, 1064-nm Nd:YAG laser with focused power densities in the range from 3.1×10^13 W/cm^2 to 1.4×10^14 W/cm^2. The flexible atomic code (FAC) and the Cowan suite of atomic structure codes were applied to compute and explain the radiation properties of the lead spectra observed. The most prominent structure in the spectra is a broad double peak, which is produced by Δn = 0, n = 4-4 and Δn = 1, n = 4-5 transition arrays emitted from highly charged lead ions. The emission characteristics of Δn = 1, n = 4-5 transitions were investigated by the use of the unresolved transition arrays (UTAs) model. Numerous new spectral features generated by Δn = 1, n = 4-5 transitions in ions from Pb^21+ to Pb^45+ are discerned with the aid of the results from present computations as well as consideration of previous theoretical predictions and experimental data.
High-Fidelity Computational Aerodynamics of the Elytron 4S UAV
NASA Technical Reports Server (NTRS)
Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.
2018-01-01
High-fidelity Computational Fluid Dynamics (CFD) have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept, a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in airplane mode and in helicopter mode.
EXFILE: A program for compiling irradiation data on UN and UC fuel pins
NASA Technical Reports Server (NTRS)
Mayer, J. T.; Smith, R. L.; Weinstein, M. B.; Davison, H. W.
1973-01-01
A FORTRAN-4 computer program for handling fuel pin data is described. Its main features include standardized output, easy access for data manipulation, and tabulation of important material property data. An additional feature allows simplified preparation of input decks for a fuel swelling computer code (CYGRO-2). Data from over 300 high temperature nitride and carbide based fuel pin irradiations are listed.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.
1988-01-01
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
An Investigation of the Flow Physics of Acoustic Liners by Direct Numerical Simulation
NASA Technical Reports Server (NTRS)
Watson, Willie R. (Technical Monitor); Tam, Christopher
2004-01-01
This report concentrates on the effort and status of work done on three-dimensional (3-D) simulation of a multi-hole resonator in an impedance tube. This work is coordinated with a parallel experimental effort to be carried out at the NASA Langley Research Center. The outline of this report is as follows: 1. Preliminary consideration. 2. Computation model. 3. Mesh design and parallel computing. 4. Visualization. 5. Status of computer code development.
Cloud4Psi: cloud computing for 3D protein structure similarity searching.
Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur
2014-10-01
Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. © The Author 2014. Published by Oxford University Press.
Cloud4Psi: cloud computing for 3D protein structure similarity searching
Mrozek, Dariusz; Małysiak-Mrozek, Bożena; Kłapciński, Artur
2014-01-01
Summary: Popular methods for 3D protein structure similarity searching, especially those that generate high-quality alignments such as Combinatorial Extension (CE) and Flexible structure Alignment by Chaining Aligned fragment pairs allowing Twists (FATCAT) are still time consuming. As a consequence, performing similarity searching against large repositories of structural data requires increased computational resources that are not always available. Cloud computing provides huge amounts of computational power that can be provisioned on a pay-as-you-go basis. We have developed the cloud-based system that allows scaling of the similarity searching process vertically and horizontally. Cloud4Psi (Cloud for Protein Similarity) was tested in the Microsoft Azure cloud environment and provided good, almost linearly proportional acceleration when scaled out onto many computational units. Availability and implementation: Cloud4Psi is available as Software as a Service for testing purposes at: http://cloud4psi.cloudapp.net/. For source code and software availability, please visit the Cloud4Psi project home page at http://zti.polsl.pl/dmrozek/science/cloud4psi.htm. Contact: dariusz.mrozek@polsl.pl PMID:24930141
Utilizing GPUs to Accelerate Turbomachinery CFD Codes
NASA Technical Reports Server (NTRS)
MacCalla, Weylin; Kulkarni, Sameer
2016-01-01
GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Perry, Boyd, III; Florance, James R.; Sanetrik, Mark D.; Wieseman, Carol D.; Stevens, William L.; Funk, Christie J.; Hur, Jiyoung; Christhilf, David M.; Coulson, David A.
2011-01-01
A summary of computational and experimental aeroelastic and aeroservoelastic (ASE) results for the Semi-Span Super-Sonic Transport (S4T) wind-tunnel model is presented. A broad range of analyses and multiple ASE wind-tunnel tests of the S4T have been performed in support of the ASE element in the Supersonics Program, part of NASA's Fundamental Aeronautics Program. The computational results to be presented include linear aeroelastic and ASE analyses, nonlinear aeroelastic analyses using an aeroelastic CFD code, and rapid aeroelastic analyses using CFD-based reduced-order models (ROMs). Experimental results from two closed-loop wind-tunnel tests performed at NASA Langley's Transonic Dynamics Tunnel (TDT) will be presented as well.
PASCO: Structural panel analysis and sizing code: Users manual - Revised
NASA Technical Reports Server (NTRS)
Anderson, M. S.; Stroud, W. J.; Durling, B. J.; Hennessy, K. W.
1981-01-01
A computer code denoted PASCO is described for analyzing and sizing uniaxially stiffened composite panels. Buckling and vibration analyses are carried out with a linked plate analysis computer code denoted VIPASA, which is included in PASCO. Sizing is based on nonlinear mathematical programming techniques and employs a computer code denoted CONMIN, also included in PASCO. Design requirements considered are initial buckling, material strength, stiffness and vibration frequency. A user's manual for PASCO is presented.
Computation of Reacting Flows in Combustion Processes
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Chen, Kuo-Huey
1997-01-01
The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D. The ALLSPD-3D computer program is developed for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs the preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations, and parallel processors), and the GUI (Graphical User Interface) should provide a user-friendly tool for setting up and running the code.
DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization
NASA Technical Reports Server (NTRS)
Williams, C. H.; Spurlock, O. F.
2014-01-01
From the late 1960's through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960's is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.
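The abstract notes that DUKSUP integrates the variational and motion equations with a classical 4th-order Runge-Kutta scheme. The Python sketch below shows only that textbook integration step, applied to a deliberately simplified point-mass ascent; it is an illustration of the scheme, not code or equations from DUKSUP, and the thrust and step values are invented.

# Generic classical 4th-order Runge-Kutta step of the kind the abstract refers to.
# This is a textbook illustration, not code taken from DUKSUP.
import numpy as np

def rk4_step(f, t, y, h):
    """Advance y' = f(t, y) by one step of size h with classical RK4."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example state: [altitude, velocity] for a point mass with constant thrust acceleration
def dynamics(t, y, thrust_acc=20.0, g=9.81):
    alt, vel = y
    return np.array([vel, thrust_acc - g])

y = np.array([0.0, 0.0])
for i in range(100):                     # 100 steps of 0.1 s = 10 s of flight
    y = rk4_step(dynamics, i * 0.1, y, 0.1)
print("altitude ~", round(y[0], 1), "m, velocity ~", round(y[1], 1), "m/s after 10 s")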
DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization
NASA Technical Reports Server (NTRS)
Spurlock, O. Frank; Williams, Craig H.
2015-01-01
From the late 1960s through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960s is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.
A Supersonic Argon/Air Coaxial Jet Experiment for Computational Fluid Dynamics Code Validation
NASA Technical Reports Server (NTRS)
Clifton, Chandler W.; Cutler, Andrew D.
2007-01-01
A non-reacting experiment is described in which data has been acquired for the validation of CFD codes used to design high-speed air-breathing engines. A coaxial jet-nozzle has been designed to produce pressure-matched exit flows of Mach 1.8 at 1 atm in both a center jet of argon and a coflow jet of air, creating a supersonic, incompressible mixing layer. The flowfield was surveyed using total temperature, gas composition, and Pitot probes. The data set was compared to CFD code predictions made using Vulcan, a structured grid Navier-Stokes code, as well as to data from a previous experiment in which a He-O2 mixture was used instead of argon in the center jet of the same coaxial jet assembly. Comparison of experimental data from the argon flowfield and its computational prediction shows that the CFD produces an accurate solution for most of the measured flowfield. However, the CFD prediction deviates from the experimental data in the region downstream of x/D = 4, underpredicting the mixing-layer growth rate.
Performance Analysis, Modeling and Scaling of HPC Applications and Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav
2016-01-13
Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
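Among the sampling methods listed above, MADS uses advanced Latin-Hypercube techniques (including Improved Distributed Sampling). The Python sketch below implements only the basic Latin hypercube idea, one stratified sample per equal-probability bin along each dimension, so the reader can see what those techniques refine; it is not the MADS or Improved Distributed Sampling algorithm.

# Minimal basic Latin hypercube sampler, included only to illustrate the stratified
# sampling idea mentioned above; MADS's "Improved Distributed Sampling" is a more
# sophisticated variant and is not reproduced here.
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Return an (n_samples, n_dims) array of points in [0, 1)^n_dims,
    with exactly one sample per equal-probability bin along every dimension."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_samples, n_dims))            # jitter inside each bin
    samples = np.empty_like(u)
    for j in range(n_dims):
        perm = rng.permutation(n_samples)          # shuffle the bin order per dimension
        samples[:, j] = (perm + u[:, j]) / n_samples
    return samples

print(latin_hypercube(5, 2, seed=0))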
The path toward HEP High Performance Computing
NASA Astrophysics Data System (ADS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-06-01
High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a "High Performance" implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try making the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
Thompson, Robert; Tanimoto, Steve; Lyman, Ruby Dawn; Geselowitz, Kira; Begay, Kristin Kawena; Nielsen, Kathleen; Nagy, William; Abbott, Robert; Raskind, Marshall; Berninger, Virginia
2018-05-01
Children in grades 4 to 6 (N = 14) who despite early intervention had persisting dyslexia (impaired word reading and spelling) were assessed before and after computerized reading and writing instruction aimed at subword, word, and syntax skills shown in four prior studies to be effective for treating dyslexia. During the 12 two-hour sessions once a week after school they first completed HAWK Letters in Motion© for manuscript and cursive handwriting, HAWK Words in Motion© for phonological, orthographic, and morphological coding for word reading and spelling, and HAWK Minds in Motion© for sentence reading comprehension and written sentence composing. A reading comprehension activity in which sentences were presented one word at a time or one added word at a time was introduced. Next, to instill hope that they could overcome their struggles with reading and spelling, they read and discussed stories about the struggles of Buckminster Fuller, who overcame early disabilities to make important contributions to society. Finally, they engaged in the new Kokopelli's World (KW)©, blocks-based online lessons, to learn computer coding in introductory programming by creating stories in sentence blocks (Tanimoto and Thompson 2016). Participants improved significantly in the hallmark word decoding and spelling deficits of dyslexia, three syntax skills (oral construction, listening comprehension, and written composing), reading comprehension (with decoding as covariate), handwriting, orthographic and morphological coding, orthographic loop, and inhibition (focused attention). They answered more reading comprehension questions correctly when they had read sentences presented one word at a time (eliminating both regressions out and regressions in during saccades) than when presented one added word at a time (eliminating only regressions out during saccades). Indicators of improved self-efficacy that they could learn to read and write were observed. Reminders to pay attention and stay on task needed before adding computer coding were not needed after computer coding was added.
NASA Rotor 37 CFD Code Validation: Glenn-HT Code
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2010-01-01
In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.
Final report for the Tera Computer TTI CRADA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, G.S.; Pavlakos, C.; Silva, C.
1997-01-01
Tera Computer and Sandia National Laboratories have completed a CRADA, which examined the Tera Multi-Threaded Architecture (MTA) for use with large codes of importance to industry and DOE. The MTA is an innovative architecture that uses parallelism to mask latency between memories and processors. The physical implementation is a parallel computer with high cross-section bandwidth and GaAs processors designed by Tera, which support many small computation threads and fast, lightweight context switches between them. When any thread blocks while waiting for memory accesses to complete, another thread immediately begins execution so that high CPU utilization is maintained. The Tera MTA parallel computer has a single, global address space, which is appealing when porting existing applications to a parallel computer. This ease of porting is further enabled by compiler technology that helps break computations into parallel threads. DOE and Sandia National Laboratories were interested in working with Tera to further develop this computing concept. While Tera Computer would continue the hardware development and compiler research, Sandia National Laboratories would work with Tera to ensure that their compilers worked well with important Sandia codes, most particularly CTH, a shock physics code used for weapon safety computations. In addition to that important code, Sandia National Laboratories would complete research on a robotic path planning code, SANDROS, which is important in manufacturing applications, and would evaluate the MTA performance on this code. Finally, Sandia would work directly with Tera to develop 3D visualization codes, which would be appropriate for use with the MTA. Each of these tasks has been completed to the extent possible, given that Tera has just completed the MTA hardware. All of the CRADA work had to be done on simulators.
Operations analysis (study 2.1). Program listing for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1974-01-01
A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
Catalogue identifier: AEOI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: UTK license.
No. of lines in distributed program, including test data, etc.: 167900
No. of bytes in distributed program, including test data, etc.: 1422058
Distribution format: tar.gz
Programming language: C and CUDA.
Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
RAM: 512 MB to 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory)
Classification: 4.13, 6.5.
Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
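GASPRNG's own interface is not reproduced here. As a loose analogy for the core idea, independent and reproducible per-worker streams derived from a single master seed, the sketch below uses NumPy's SeedSequence/Generator machinery; the function name and seed values are illustrative only.

# Not the GASPRNG API: an analogous illustration of handing each worker its own
# independent, reproducible pseudorandom stream, using NumPy.
import numpy as np

def make_streams(master_seed, n_workers):
    """One statistically independent Generator per worker, all derived from one seed."""
    children = np.random.SeedSequence(master_seed).spawn(n_workers)
    return [np.random.default_rng(s) for s in children]

streams = make_streams(master_seed=2013, n_workers=4)
# Each worker draws from its own stream; results are reproducible and non-overlapping.
for rank, rng in enumerate(streams):
    print(rank, rng.random(3))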
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferng, Y.M.; Liao, L.Y.
1996-01-01
During the operating history of the Maanshan nuclear power plant (MNPP), five reactor trips have occurred as a result of the moisture separator reheater (MSR) high-level signal. These MSR high-level reactor trips have been a very serious concern, especially during the startup period of MNPP. Consequently, studying the physical phenomena of this particular event is worthwhile, and analytical work is performed using the RELAP5/MOD3 code to investigate the thermal-hydraulic phenomena of two-phase behaviors occurring within the MSR high-level reactor trips. The analytical model is first assessed against the experimental data obtained from several test loops. The same model can then be applied with confidence to the study of this topic. According to the present calculated results, the phenomena of liquid droplet accumulation and residual liquid blowing in the horizontal section of the cross-under lines can be modeled. In addition, the present model can also predict how different rates of increase of the inlet steam flow rate affect the liquid accumulation within the cross-under lines. The calculated conclusion is confirmed by the revised startup procedure of MNPP.
Loss-of-Flow and Loss-of-Pressure Simulations of the BR2 Research Reactor with HEU and LEU Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Licht, J.; Bergeron, A.; Dionne, B.
2016-01-01
Belgian Reactor 2 (BR2) is a research and test reactor located in Mol, Belgium and is primarily used for radioisotope production and materials testing. The Materials Management and Minimization (M3) Reactor Conversion Program of the National Nuclear Security Administration (NNSA) is supporting the conversion of the BR2 reactor from Highly Enriched Uranium (HEU) fuel to Low Enriched Uranium (LEU) fuel. The reactor core of BR2 is located inside a pressure vessel that contains 79 channels in a hyperboloid configuration. The core configuration is highly variable as each channel can contain a fuel assembly, a control or regulating rod, an experimental device, or a beryllium or aluminum plug. Because of this variability, a representative core configuration, based on current reactor use, has been defined for the fuel conversion analyses. The code RELAP5/Mod 3.3 was used to perform the transient thermal-hydraulic safety analyses of the BR2 reactor to support reactor conversion. The input model has been modernized relative to that historically used at BR2 taking into account the best modeling practices developed by Argonne National Laboratory (ANL) and BR2 engineers.
ERIC Educational Resources Information Center
Knowlton, Marie; Wetzel, Robin
2006-01-01
This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…
1984-06-01
Fragmentary scan; most of the page is unrecoverable OCR residue (numeric table fragments and distribution-list text). Recoverable citation fragments include: "... the ABRES Shape Change Code (ASCC)," Acurex Report TM-80-31/AS, July 1980; and M. J. Abbett, "Finite Difference Solution of the Subsonic/Supersonic ..."
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
Applications of automatic differentiation in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.
1994-01-01
Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or in sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
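ADIFOR works by source-to-source transformation of Fortran, which is hard to show in a few lines. The Python sketch below instead illustrates the chain-rule bookkeeping that any forward-mode AD tool mechanizes, using operator-overloading dual numbers; this is a different implementation strategy from ADIFOR's, and the class is purely illustrative.

# ADIFOR itself transforms Fortran source; the dual-number class below only
# illustrates the chain rule that forward-mode AD mechanizes.
import math

class Dual:
    """Value together with its derivative with respect to one chosen input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)     # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)    # chain rule

# d/dx of f(x) = x*sin(x) + 3x evaluated at x = 2, exact to machine precision
x = Dual(2.0, 1.0)                       # seed derivative of the independent variable
f = x * sin(x) + 3.0 * x
print(f.val, f.der)                      # derivative equals sin(2) + 2*cos(2) + 3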
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermalhydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of development of a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and the phenomena are singled out that require a detailed analysis and development of the models to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermalhydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models to describe the processes that take place during the steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as possibilities of taking advantages of the modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing of and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and practical application of the code will allow carrying out in the nearest future the computations to analyze the safety of potential NPP projects at a qualitatively higher level.
Performance assessment of KORAT-3D on the ANL IBM-SP computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexeyev, A.V.; Zvenigorodskaya, O.A.; Shagaliev, R.M.
1999-09-01
The TENAR code is currently being developed at the Russian Federal Nuclear Center (VNIIEF) as a coupled dynamics code for the simulation of transients in VVER and RBMK systems and other nuclear systems. The neutronic module in this code system is KORAT-3D. This module is also one of the most computationally intensive components of the code system. A parallel version of KORAT-3D has been implemented to achieve the goal of obtaining transient solutions in reasonable computational time, particularly for RBMK calculations that involve the application of >100,000 nodes. An evaluation of the KORAT-3D code performance was recently undertaken on the Argonne National Laboratory (ANL) IBM ScalablePower (SP) parallel computer located in the Mathematics and Computer Science Division of ANL. At the time of the study, the ANL IBM-SP computer had 80 processors. This study was conducted under the auspices of a technical staff exchange program sponsored by the International Nuclear Safety Center (INSC).
High temperature composite analyzer (HITCAN) user's manual, version 1.0
NASA Technical Reports Server (NTRS)
Lackney, J. J.; Singhal, S. N.; Murthy, P. L. N.; Gotsis, P.
1993-01-01
This manual describes how to use the computer code HITCAN (HIgh Temperature Composite ANalyzer). HITCAN is a general purpose computer program for predicting nonlinear global structural and local stress-strain response of arbitrarily oriented, multilayered high temperature metal matrix composite structures. This code combines composite mechanics and laminate theory with an internal data base for material properties of the constituents (matrix, fiber and interphase). The thermo-mechanical properties of the constituents are considered to be nonlinearly dependent on several parameters including temperature, stress and stress rate. The computation procedure for the analysis of the composite structures uses the finite element method. HITCAN is written in FORTRAN 77 computer language and at present has been configured and executed on the NASA Lewis Research Center CRAY XMP and YMP computers. This manual describes HITCAN's capabilities and limitations followed by input/execution/output descriptions and example problems. The input is described in detail including (1) geometry modeling, (2) types of finite elements, (3) types of analysis, (4) material data, (5) types of loading, (6) boundary conditions, (7) output control, (8) program options, and (9) data bank.
NASA Astrophysics Data System (ADS)
Alvanos, Michail; Christoudias, Theodoros
2017-10-01
This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing to the CPU-only code of the application. The median relative difference is found to be less than 0.000000001% when comparing the output of the accelerated kernel with the CPU-only code. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
CO-FIRING COAL: FEEDLOT AND LITTER BIOMASS FUELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Kalyan Annamalai; Dr. John Sweeten; Dr. Sayeed Mukhtar
2000-10-24
The following are proposed activities for quarter 1 (6/15/00-9/14/00): (1) Finalize the allocation of funds within TAMU to co-principal investigators and the final task lists; (2) Acquire 3 D computer code for coal combustion and modify for cofiring Coal:Feedlot biomass and Coal:Litter biomass fuels; (3) Develop a simple one dimensional model for fixed bed gasifier cofired with coal:biomass fuels; and (4) Prepare the boiler burner for reburn tests with feedlot biomass fuels. The following were achieved During Quarter 5 (6/15/00-9/14/00): (1) Funds are being allocated to co-principal investigators; task list from Prof. Mukhtar has been received (Appendix A); (2) Order has been placed to acquire Pulverized Coal gasification and Combustion 3 D (PCGC-3) computer code for coal combustion and modify for cofiring Coal: Feedlot biomass and Coal: Litter biomass fuels. Reason for selecting this code is the availability of source code for modification to include biomass fuels; (3) A simplified one-dimensional model has been developed; however convergence had not yet been achieved; and (4) The length of the boiler burner has been increased to increase the residence time. A premixed propane burner has been installed to simulate coal combustion gases. First coal, as a reburn fuel will be used to generate base line data followed by methane, feedlot and litter biomass fuels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Thomas; Hamilton, Steven; Slattery, Stuart
Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. Exnihilo, however, is a production code with a substantial user base, and it is export controlled, which makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated, open-source code base, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis; its computational kernels are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof of concept for actual production work.
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms, derived from the same cylindrical phantom acquisition, was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.
Waddell, Amanda; Ahrens, Richard; Tsai, Yi-Ting; Sherrill, Joseph D; Denson, Lee A; Steinbrecher, Kris A; Hogan, Simon P
2013-05-01
In inflammatory bowel diseases (IBDs), particularly ulcerative colitis, intestinal macrophages (MΦs), eosinophils, and the eosinophil-selective chemokine CCL11, have been associated with disease pathogenesis. MΦs, a source of CCL11, have been reported to be of a mixed classical (NF-κB-mediated) and alternatively activated (STAT-6-mediated) phenotype. The importance of NF-κB and STAT-6 pathways to the intestinal MΦ/CCL11 response and eosinophilic inflammation in the histopathology of experimental colitis is not yet understood. Our gene array analyses demonstrated elevated STAT-6- and NF-κB-dependent genes in pediatric ulcerative colitis colonic biopsies. Dextran sodium sulfate (DSS) exposure induced STAT-6 and NF-κB activation in mouse intestinal F4/80(+)CD11b(+)Ly6C(hi) (inflammatory) MΦs. DSS-induced CCL11 expression, eosinophilic inflammation, and histopathology were attenuated in RelA/p65(Δmye) mice, but not in the absence of STAT-6. Deletion of p65 in myeloid cells did not affect inflammatory MΦ recruitment or alter apoptosis, but did attenuate LPS-induced cytokine production (IL-6) and Ccl11 expression in purified F4/80(+)CD11b(+)Ly6C(hi) inflammatory MΦs. Molecular and cellular analyses revealed a link between expression of calprotectin (S100a8/S100a9), Ccl11 expression, and eosinophil numbers in the DSS-treated colon. In vitro studies of bone marrow-derived MΦs showed calprotectin-induced CCL11 production via a p65-dependent mechanism. Our results indicate that myeloid cell-specific NF-κB-dependent pathways play an unexpected role in CCL11 expression and maintenance of eosinophilic inflammation in experimental colitis. These data indicate that targeting myeloid cells and NF-κB-dependent pathways may be of therapeutic benefit for the treatment of eosinophilic inflammation and histopathology in IBD.
Waddell, Amanda; Ahrens, Richard; Tsai, Yi Ting; Sherrill, Joseph D.; Denson, Lee A.; Steinbrecher, Kris A.; Hogan, Simon P.
2014-01-01
In inflammatory bowel diseases (IBD), particularly ulcerative colitis (UC), intestinal macrophages (MΦs), eosinophils and the eosinophil-selective chemokine CCL11 have been associated with disease pathogenesis. MΦs, a source of CCL11, have been reported to be of a mixed classical (NF-κB-mediated) and alternatively activated (STAT-6-mediated) phenotype. The importance of NF-κB and STAT-6 pathways to the intestinal MΦ/CCL11 response and eosinophilic inflammation in the histopathology of experimental colitis is not yet understood. Our gene array analyses demonstrated elevated STAT-6- and NF-κB-dependent genes in pediatric UC colonic biopsies. Dextran sodium sulphate (DSS) exposure induced STAT-6 and NF-κB activation in mouse intestinal F4/80+CD11b+Ly6Chi (inflammatory) MΦs. DSS-induced CCL11 expression, eosinophilic inflammation and histopathology were attenuated in RelA/p65Δmye mice but not in the absence of STAT-6. Deletion of p65 in myeloid cells did not affect inflammatory MΦ recruitment or alter apoptosis, but did attenuate lipopolysaccharide-induced cytokine production (IL-6) and Ccl11 expression in purified F4/80+CD11b+Ly6Chi inflammatory MΦs. Molecular and cellular analyses revealed a link between expression of calprotectin (S100a8/S100a9), Ccl11 expression and eosinophil numbers in the DSS-treated colon. In vitro studies of bone marrow-derived MΦs showed calprotectin-induced CCL11 production via a p65-dependent mechanism. Our results indicate that myeloid cell-specific NF-κB-dependent pathways play an unexpected role in CCL11 expression and maintenance of eosinophilic inflammation in experimental colitis. These data indicate that targeting myeloid cells and NF-κB-dependent pathways may be of therapeutic benefit for the treatment of eosinophilic inflammation and histopathology in IBD. PMID:23562811
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1973-01-01
The engineering programming information for the digital computer program for analyzing shell structures is presented. The program is designed to permit small changes such as altering the geometry or a table size to fit the specific requirements. Each major subroutine is discussed and the following subjects are included: (1) subroutine description, (2) pertinent engineering symbols and the FORTRAN coded counterparts, (3) subroutine flow chart, and (4) subroutine FORTRAN listing.
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly surpasses that of previous still-image coding standards such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. This paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
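As a sketch of the mode-pruning idea only (this is not the paper's algorithm; the probability values, the toy 4x4 predictors, and the cost model below are hypothetical), a reduced candidate set can be formed by keeping the most probable intra modes and running the rate-distortion search only over them:

import numpy as np

# Illustrative sketch: prune H.264 intra 4x4 prediction modes to the few most
# probable ones before the expensive rate-distortion search. In the paper the
# per-mode probabilities come from belief propagation; here they are placeholders.

def predict(above, left, mode):
    """Toy stand-ins for three of the nine 4x4 intra predictors."""
    if mode == "vertical":
        return np.tile(above, (4, 1))
    if mode == "horizontal":
        return np.tile(left.reshape(4, 1), (1, 4))
    return np.full((4, 4), (above.mean() + left.mean()) / 2)   # DC

def rd_cost(block, pred, lam=10.0, rate_bits=4):
    """Placeholder Lagrangian cost: SAD distortion + lambda * (constant) rate."""
    return np.abs(block - pred).sum() + lam * rate_bits

def best_intra_mode(block, above, left, mode_probs, keep=2):
    """Evaluate the RD cost only for the 'keep' most probable candidate modes."""
    candidates = sorted(mode_probs, key=mode_probs.get, reverse=True)[:keep]
    return min(candidates, key=lambda m: rd_cost(block, predict(above, left, m)))

block = np.arange(16, dtype=float).reshape(4, 4)
above = np.array([0.0, 4.0, 8.0, 12.0])
left = np.array([0.0, 1.0, 2.0, 3.0])
probs = {"vertical": 0.6, "horizontal": 0.3, "dc": 0.1}   # hypothetical BP output
print(best_intra_mode(block, above, left, probs))

The complexity control the abstract mentions corresponds to varying the number of retained candidates ("keep") instead of always searching the full mode set.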
2,445 Hours of Code: What I Learned from Facilitating Hour of Code Events in High School Libraries
ERIC Educational Resources Information Center
Colby, Jennifer
2015-01-01
This article describes a school librarian's experience with initiating an Hour of Code event for her school's student body. Hadi Partovi of Code.org conceived the Hour of Code "to get ten million students to try one hour of computer science" (Partovi, 2013a), which is implemented during Computer Science Education Week with a goal of…
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
1991-01-01
Computations from two Navier-Stokes codes, NSS and F3D, are presented for a tangent-ogive-cylinder body at high angle of attack. Features of this steady flow include a pair of primary vortices on the leeward side of the body as well as secondary vortices. The topological and physical plausibility of this vortical structure is discussed. The accuracy of these codes is assessed by comparing the numerical solutions with experimental data. The effects of turbulence model, numerical dissipation, and grid refinement are presented. The overall efficiency of the codes is also assessed by examining their convergence rates, computational time per time step, and maximum allowable time step for time-accurate computations. Overall, the numerical results from both codes compared equally well with experimental data; however, the NSS code was found to be significantly more efficient than the F3D code.
User's Manual for FEMOM3DR. Version 1.0
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
FEMOM3DR is a computer code written in FORTRAN 77 to compute the radiation characteristics of antennas on a three-dimensional body using a combined Finite Element Method (FEM)/Method of Moments (MoM) technique. The code is written to handle different feeding structures such as coaxial line, rectangular waveguide, and circular waveguide. It uses tetrahedral elements with vector edge basis functions for the FEM and triangular elements with roof-top basis functions for the MoM. By virtue of the FEM, the code can handle arbitrarily shaped three-dimensional bodies with inhomogeneous lossy materials, and because of the MoM the computational domain can be terminated in any arbitrary shape. The User's Manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computers on which the code is intended to run.
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrail, B.P.; Mahoney, L.A.
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of how well the feature sets implemented in each code match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites.
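The ranking procedure described above is, in essence, a weighted-criteria score. The sketch below shows one generic way such a ranking could be computed; the capability names, weights, and scores are invented for illustration and are not the study's actual criteria or results.

# Hypothetical weighted-criteria ranking sketch (not the Hanford study's data).
CAPABILITIES = {                     # capability: relative weight (illustrative)
    "glass corrosion kinetics": 3,
    "coupled unsaturated flow": 2,
    "radionuclide sorption":    2,
    "QA / documentation":       1,
}

def rank_codes(feature_matrix):
    """feature_matrix: {code_name: {capability: score 0..1}} -> ranking, best first."""
    totals = {
        code: sum(CAPABILITIES[cap] * scores.get(cap, 0.0) for cap in CAPABILITIES)
        for code, scores in feature_matrix.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

example = {
    "CODE-A": {"glass corrosion kinetics": 1.0, "coupled unsaturated flow": 0.5},
    "CODE-B": {"radionuclide sorption": 1.0, "QA / documentation": 1.0},
}
print(rank_codes(example))           # [('CODE-A', 4.0), ('CODE-B', 3.0)]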
User's manual for a material transport code on the Octopus Computer Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naymik, T.G.; Mendez, G.D.
1978-09-15
A code to simulate material transport through porous media was developed at Oak Ridge National Laboratory. This code has been modified and adapted for use at Lawrence Livermore Laboratory. This manual, in conjunction with report ORNL-4928, explains the input, output, and execution of the code on the Octopus Computer Network.
NASA Technical Reports Server (NTRS)
Logan, Terry G.
1994-01-01
The purpose of this study is to investigate the performance of integral equation computations using a numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and the conventional Cray-YMP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Performance results are obtained on the CM-5 with 32, 62, and 128 nodes, along with those on the Cray-YMP with a single processor. The comparison indicates that the parallel CM-FORTRAN code approaches or exceeds the performance of the equivalent serial FORTRAN code in some cases.
Computer Description of the M561 Utility Truck
1984-10-01
The computer description of the M561 utility truck is used as input to the GIFT (Geometric Information for Targets) computer code to generate target vulnerability data; such vulnerability analysis requires input from the GIFT code. This report documents the combinatorial geometry (Com-Geom) description of the vehicle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyler, L L; Trent, D S; Budden, M J
During the course of the TEMPEST computer code development, a concurrent effort was conducted to assess the code's performance and the validity of computed results. The results of this work are presented in this document. The principal objective of this effort was to assure the code's computational correctness for a wide range of hydrothermal phenomena typical of fast breeder reactor applications. 47 refs., 94 figs., 6 tabs.
Validation of CFD Codes for Parawing Geometries in Subsonic to Supersonic Flows
NASA Technical Reports Server (NTRS)
Cruz-Ayoroa, Juan G.; Garcia, Joseph A.; Melton, John E.
2014-01-01
Computational fluid dynamics studies of a rigid parawing at Mach numbers from 0.8 to 4.65 were carried out using three established codes: an inviscid solver, a viscous solver, and an independent panel-method code. Pressure distributions along four chordwise sections of the wing were compared to experimental wind tunnel data gathered from NASA technical reports. Results show good prediction of the overall trends and magnitudes of the pressure distributions for the inviscid and viscous solvers. Pressure results for the panel-method code diverge from the test data at large angles of attack due to shock interaction phenomena. Trends in the flow behavior and their effect on the integrated forces and moments on this type of wing are examined in detail using the inviscid CFD code results.
Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August
2018-07-01
Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
Adiabatic topological quantum computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cesare, Chris; Landahl, Andrew J.; Bacon, Dave
Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev's surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.
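For orientation, the "Hamiltonians based on topological codes" referred to here have, for Kitaev's surface code, the standard textbook form (general background, not taken from the paper itself):

H = -J_e \sum_{v} A_v - J_m \sum_{p} B_p,
\qquad
A_v = \prod_{i \in \mathrm{star}(v)} \sigma^x_i,
\qquad
B_p = \prod_{i \in \partial p} \sigma^z_i,

where qubits sit on the edges of a lattice, the code space is the simultaneous +1 eigenspace of all vertex operators A_v and plaquette operators B_p, and the energy gap protecting against the anyonic excitations mentioned in the abstract is set by the smaller of J_e and J_m.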
Adiabatic topological quantum computing
Cesare, Chris; Landahl, Andrew J.; Bacon, Dave; ...
2015-07-31
Topological quantum computing promises error-resistant quantum computation without active error correction. However, there is a worry that during the process of executing quantum gates by braiding anyons around each other, extra anyonic excitations will be created that will disorder the encoded quantum information. Here, we explore this question in detail by studying adiabatic code deformations on Hamiltonians based on topological codes, notably Kitaev's surface codes and the more recently discovered color codes. We develop protocols that enable universal quantum computing by adiabatic evolution in a way that keeps the energy gap of the system constant with respect to the computation size and introduces only simple local Hamiltonian interactions. This allows one to perform holonomic quantum computing with these topological quantum computing systems. The tools we develop allow one to go beyond numerical simulations and understand these processes analytically.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preece, D.S.; Knudsen, S.D.
The spherical element computer code DMC (Distinct Motion Code) used to model rock motion resulting from blasting has been enhanced to allow routine computer simulations of bench blasting. The enhancements required for bench blast simulation include: (1) modifying the gas flow portion of DMC, (2) adding a new explosive gas equation of state capability, (3) modifying the porosity calculation, and (4) accounting for blastwell spacing parallel to the face. A parametric study performed with DMC shows logical variation of the face velocity as burden, spacing, blastwell diameter and explosive type are varied. These additions represent a significant advance in the capability of DMC, which will not only aid in understanding the physics involved in blasting but will also become a blast design tool. 8 refs., 7 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Papers are presented on local area networks; formal methods for communication protocols; computer simulation of communication systems; spread spectrum and coded communications; tropical radio propagation; VLSI for communications; strategies for increasing software productivity; multiple access communications; advanced communication satellite technologies; and spread spectrum systems. Topics discussed include Space Station communication and tracking development and design; transmission networks; modulation; data communications; computer network protocols and performance; and coding and synchronization. Consideration is given to free space optical communications systems; VSAT communication networks; network topology design; advances in adaptive filtering echo cancellation and adaptive equalization; advanced signal processing for satellite communications; the elements, design, and analysis of fiber-optic networks; and advances in digital microwave systems.
Comprehensive silicon solar cell computer modeling
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1984-01-01
The development of an efficient, comprehensive Si solar cell modeling program that has the capability of simulation accuracy of 5 percent or less is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) analytical method used to represent the physical system; (2) phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) modularized simulation program with respect to structures that may be analyzed, addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.
Fast Computation of the Two-Point Correlation Function in the Age of Big Data
NASA Astrophysics Data System (ADS)
Pellegrino, Andrew; Timlin, John
2018-01-01
We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
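As a concrete illustration of the statistic being computed, the sketch below is a generic brute-force version (not the authors' package) of the Landy-Szalay estimator built from data-data, data-random, and random-random pair counts; the catalogue sizes and separation bins are arbitrary. Production codes replace the O(N^2) pair loops with tree or grid searches and compiled libraries, as the abstract describes.

import numpy as np

def pair_counts(a, b, edges):
    """Histogram of pairwise separations between point sets a and b (brute force)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.histogram(d.ravel(), bins=edges)[0].astype(float)

def landy_szalay(data, randoms, edges):
    """xi(r) = (DD - 2 DR + RR) / RR with pair counts normalized per catalogue."""
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, data, edges) / (nd * (nd - 1))
    rr = pair_counts(randoms, randoms, edges) / (nr * (nr - 1))
    dr = pair_counts(data, randoms, edges) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 100.0, size=(500, 3))     # unclustered toy "galaxies"
randoms = rng.uniform(0.0, 100.0, size=(1000, 3)) # random comparison catalogue
edges = np.linspace(1.0, 20.0, 11)                # separation bins
print(landy_szalay(data, randoms, edges))         # ~0 for an unclustered field

Jackknife errors of the kind mentioned above would then repeat this calculation with one sky subsample removed at a time and take the scatter of the resulting estimates.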
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the performance of three Monte Carlo codes, PENELOPE-1999, MCNP-4C and PITS, for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is an equivalent water cylinder for all three codes, but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because it better represents the actual shape and dimensions of a cell and improves computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy transfers that constitute a radiation track may actually fall in the space between spheres, which would be outside the spherical scoring volume. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important for addressing dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in summer 2002) is still needed, however, to create a user code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
Application of Modular Building Block Databus to Air Force Systems
1988-06-01
Remote monitoring and control of the modules can be implemented, and computer assistance is available for these processes. Cabinets are independent of the shelter. The computer supporting the technical control position (figure 4) is located between the two databuses.
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which is a set of local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code that improves burst-erasure protection by applying the convolution property to the tTN code and reduces computational complexity by eliminating the multi-level structure. Simulation results show that the cTN code provides better packet-loss protection with lower computational complexity than the tTN code.
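The cTN construction itself is not reproduced here. As a much simpler illustration of why spreading parity across time helps against burst erasures, the sketch below interleaves a single XOR parity over every M-th packet (all names and parameters are illustrative), so that a burst of up to M consecutive losses leaves at most one erasure per parity group and is therefore recoverable:

import random

# Not the cTN construction: a minimal interleaved-parity sketch of the
# burst-protection principle. One XOR parity is computed per residue class
# i mod M, so a burst of <= M consecutive erasures hits each group at most once.
M = 4   # interleaving depth: tolerates bursts of up to M packets

def encode(packets):
    """Append one XOR parity packet per residue class i mod M."""
    parities = [0] * M
    for i, p in enumerate(packets):
        parities[i % M] ^= p
    return packets + parities

def decode(received, n_data):
    """received: list with None marking erased packets; repair single erasures per group."""
    data = list(received[:n_data])
    parities = received[n_data:]
    for r in range(M):
        group = [(i, data[i]) for i in range(r, n_data, M)]
        erased = [i for i, v in group if v is None]
        if len(erased) == 1 and parities[r] is not None:
            acc = parities[r]
            for _, v in group:
                if v is not None:
                    acc ^= v
            data[erased[0]] = acc
    return data

packets = [random.randrange(256) for _ in range(12)]
coded = encode(packets)
lost = list(coded)
for i in range(3, 3 + M):          # burst erasure of M consecutive data packets
    lost[i] = None
assert decode(lost, len(packets)) == packets

A block code with no such spreading would lose all the information needed to repair a burst confined to one codeword, which is the weakness of the tTN code that the convolutional structure addresses.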