Sample records for computer code capable

  1. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.

    2004-09-14

    This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; this suite of computer codes performs many functions.

  2. Computer search for binary cyclic UEP codes of odd length up to 65

    NASA Technical Reports Server (NTRS)

    Lin, Mao-Chao; Lin, Chi-Chang; Lin, Shu

    1990-01-01

    Using an exhaustive computation, the unequal error protection capabilities of all binary cyclic codes of odd length up to 65 that have minimum distances at least 3 are found. For those codes that can only have upper bounds on their unequal error protection capabilities computed, an analytic method developed by Dynkin and Togonidze (1976) is used to show that the upper bounds meet the exact unequal error protection capabilities.
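
    As an illustrative aside (not part of the record above), a minimal sketch of the kind of exhaustive computation described: for a small binary cyclic code given by a generator polynomial, the unequal error protection capability of each message position can be found by brute force as the minimum weight over all codewords whose message is nonzero in that position. The (7,4) generator polynomial below is a stand-in example, not one of the length-65 codes from the paper.

    ```python
    from itertools import product

    def cyclic_encode(msg, gen, n):
        """Encode a binary message as c(x) = m(x) * g(x) mod (x^n - 1)."""
        code = [0] * n
        for i, m in enumerate(msg):
            if m:
                for j, g in enumerate(gen):
                    if g:
                        code[(i + j) % n] ^= 1
        return code

    def separation_vector(n, k, gen):
        """Brute-force UEP separation vector for a linear cyclic code:
        s_i = minimum codeword weight over messages with a nonzero i-th bit."""
        sep = [n + 1] * k
        for msg in product([0, 1], repeat=k):
            if not any(msg):
                continue
            w = sum(cyclic_encode(msg, gen, n))
            for i, bit in enumerate(msg):
                if bit:
                    sep[i] = min(sep[i], w)
        return sep

    # (7,4) cyclic Hamming code with generator polynomial g(x) = 1 + x + x^3
    print(separation_vector(7, 4, gen=[1, 1, 0, 1]))   # -> [3, 3, 3, 3] (equal protection)
    ```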

  3. Advances in Computational Capabilities for Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Kumar, Ajay; Gnoffo, Peter A.; Moss, James N.; Drummond, J. Philip

    1997-01-01

    The paper reviews the growth and advances in computational capabilities for hypersonic applications over the period from the mid-1980's to the present day. The current status of the code development issues such as surface and field grid generation, algorithms, physical and chemical modeling, and validation is provided. A brief description of some of the major codes being used at NASA Langley Research Center for hypersonic continuum and rarefied flows is provided, along with their capabilities and deficiencies. A number of application examples are presented, and future areas of research to enhance accuracy, reliability, efficiency, and robustness of computational codes are discussed.

  4. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  5. Computation of Reacting Flows in Combustion Processes

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Chen, Kuo-Huey

    1997-01-01

    The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D. The ALLSPD-3D computer program is developed for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs the preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations and parallel processors), and the GUI (Graphical User Interface) should provide a user-friendly tool for setting up and running the code.

  6. Finite difference time domain electromagnetic scattering from frequency-dependent lossy materials

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Beggs, John H.

    1991-01-01

    Four different FDTD computer codes and companion Radar Cross Section (RCS) conversion codes on magnetic media are submitted. A single three dimensional dispersive FDTD code for both dispersive dielectric and magnetic materials was developed, along with a user's manual. The extension of FDTD to more complicated materials was made. The code is efficient and is capable of modeling interesting radar targets using a modest computer workstation platform. RCS results for two different plate geometries are reported. The FDTD method was also extended to computing far zone time domain results in two dimensions. Also the capability to model nonlinear materials was incorporated into FDTD and validated.

  7. Development, Verification and Validation of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.

    2016-01-01

    With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating is a requirement. This paper presents the recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.

  8. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGrail, B.P.; Mahoney, L.A.

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code developed at PNL for the US Department of Energy for evaluation of arid land disposal sites.

  9. Computational Fluid Dynamics Technology for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2003-01-01

    Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

  10. A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.

    1989-01-01

    A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, as well as the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distributions in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
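
    As an aside not drawn from the report itself, a rough sketch of the kind of one-dimensional, correlation-based computation such a code performs for a single smooth rectangular passage segment; the Blasius and Dittus-Boelter correlations below are generic stand-ins, not the correlations actually used in the NASA code.

    ```python
    def passage_segment(m_dot, width, height, mu, k, cp):
        """Estimate Reynolds number, friction factor, and heat transfer coefficient
        for one smooth rectangular duct segment (illustrative correlations only).

        m_dot         : coolant mass flow rate [kg/s]
        width, height : duct cross-section [m]
        mu, k, cp     : viscosity [Pa*s], conductivity [W/m-K], specific heat [J/kg-K]
        """
        area = width * height
        d_h = 4.0 * area / (2.0 * (width + height))   # hydraulic diameter
        Re = (m_dot / area) * d_h / mu                # Reynolds number
        Pr = cp * mu / k                              # Prandtl number
        f = 0.316 * Re ** -0.25                       # Blasius friction factor (turbulent)
        Nu = 0.023 * Re ** 0.8 * Pr ** 0.4            # Dittus-Boelter, coolant being heated
        h = Nu * k / d_h                              # heat transfer coefficient [W/m^2-K]
        return Re, f, h

    # Example: air-like coolant in a 5 mm x 3 mm passage
    print(passage_segment(m_dot=0.01, width=0.005, height=0.003,
                          mu=3.0e-5, k=0.045, cp=1050.0))
    ```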

  11. Numerical, Analytical, Experimental Study of Fluid Dynamic Forces in Seals Volume 6: Description of Scientific CFD Code SCISEAL

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh; Przekwas, Andrzej

    2004-01-01

    The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allows complex flow domains to be divided for optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC based, using an OS/2 operating system. These codes were designed to treat film seals where a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have been demonstrated to be a valuable resource for seal development for future air-breathing and space propulsion engines.

  12. Development and Verification of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.

    2016-01-01

    With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for surface-to-surface radiation exchange in complex geometries is critical. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute geometric view factors for radiation problems involving multiple surfaces. Verification of the code's radiation capabilities and results of a code-to-code comparison are presented. Finally, a demonstration case of a two-dimensional ablating cavity with enclosure radiation accounting for a changing geometry is shown.

  13. Structural reliability assessment capability in NESSUS

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.

    1992-01-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  14. Structural reliability assessment capability in NESSUS

    NASA Astrophysics Data System (ADS)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  15. Computing NLTE Opacities -- Node Level Parallel Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holladay, Daniel

    Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities; study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.

  16. Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW

    NASA Technical Reports Server (NTRS)

    Olsen, M. E.; Liu, Y.; Vinokur, M.; Olsen, T.

    2003-01-01

    An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.

  17. Implementation of Premixed Equilibrium Chemistry Capability in OVERFLOW

    NASA Technical Reports Server (NTRS)

    Olsen, Mike E.; Liu, Yen; Vinokur, M.; Olsen, Tom

    2004-01-01

    An implementation of premixed equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flowfields. The implementation builds on the computational efficiency and geometric generality of the solver.

  18. User's manual for semi-circular compact range reflector code

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1986-01-01

    A computer code was developed to analyze a semi-circular paraboloidal reflector antenna with a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the antenna or its individual components at a given distance from the center of the paraboloid. Thus, it is very effective in computing the size of the sweet spot for RCS or antenna measurement. The operation of the code is described. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.

  19. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report's contents focus on making use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes to enhance the capability to compute heat transfer and losses in turbomachinery.

  20. Implementation of Finite Rate Chemistry Capability in OVERFLOW

    NASA Technical Reports Server (NTRS)

    Olsen, M. E.; Venkateswaran, S.; Prabhu, D. K.

    2004-01-01

    An implementation of both finite rate and equilibrium chemistry has been completed for the OVERFLOW code, a chimera capable, complex geometry flow code widely used to predict transonic flow fields. The implementation builds on the computational efficiency and geometric generality of the solver.

  1. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1994-01-01

    Aeroelastic tests involve substantial cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of the development of aircraft can be considerably reduced. In order to accurately compute aeroelastic phenomena it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At ARC a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft, and it solves the Euler/Navier-Stokes equations. The purpose of this cooperative agreement was to enhance ENSAERO in both algorithm and geometric capabilities. During the last five years, the algorithms of the code have been enhanced extensively by using high-resolution upwind algorithms and efficient implicit solvers. The zonal capability of the code has been extended from a one-to-one grid interface to a mismatching unsteady zonal interface. The geometric capability of the code has been extended from a single oscillating wing case to a full-span wing-body configuration with oscillating control surfaces. Each time a new capability was added, a proper validation case was simulated, and the capability of the code was demonstrated.

  2. Investigating the impact of the cielo cray XE6 architecture on scientific application codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Mahesh; Barrett, Richard; Pedretti, Kevin Thomas Tauke

    2010-12-01

    Cielo, a Cray XE6, is the Department of Energy NNSA Advanced Simulation and Computing (ASC) campaign's newest capability machine. Rated at 1.37 PFLOPS, it consists of 8,944 dual-socket oct-core AMD Magny-Cours compute nodes, linked using Cray's Gemini interconnect. Its primary mission objective is to enable a suite of the ASC applications implemented using MPI to scale to tens of thousands of cores. Cielo is an evolutionary improvement to a successful architecture previously available to many of our codes, thus providing a basis for understanding the capabilities of this new architecture. Using three codes strategically important to the ASC campaign, and supplemented with some micro-benchmarks that expose the fundamental capabilities of the XE6, we report on the performance characteristics and capabilities of Cielo.

  3. Antenna pattern study, task 2

    NASA Technical Reports Server (NTRS)

    Harper, Warren

    1989-01-01

    Two electromagnetic scattering codes, NEC-BSC and ESP3, were delivered and installed on a NASA VAX computer for use by Marshall Space Flight Center antenna design personnel. The tasks were to update the existing codes and certain supplementary software, to install the codes on a computer to be delivered to the customer, to provide capability for graphic display of the data computed by the codes, and to assist the customer in the solution of specific problems that demonstrate the use of the codes. With the exception of one code revision, all of these tasks were performed.

  4. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  5. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  6. Development of Reduced-Order Models for Aeroelastic and Flutter Prediction Using the CFL3Dv6.0 Code

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Bartels, Robert E.

    2002-01-01

    A reduced-order model (ROM) is developed for aeroelastic analysis using the CFL3D version 6.0 computational fluid dynamics (CFD) code, recently developed at the NASA Langley Research Center. This latest version of the flow solver includes a deforming mesh capability, a modal structural definition for nonlinear aeroelastic analyses, and a parallelization capability that provides a significant increase in computational efficiency. Flutter results for the AGARD 445.6 Wing computed using CFL3D v6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are then computed using the CFL3Dv6 code and transformed into state-space form. Important numerical issues associated with the computation of the impulse responses are presented. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is used to rapidly compute aeroelastic transients including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly.
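
    As an illustrative aside, the final coupling step described above amounts to a feedback interconnection of two linear state-space systems: an aerodynamic ROM driven by generalized structural displacements returns generalized aerodynamic forces that in turn drive the structural model. The matrices below are small placeholders, not values from the paper.

    ```python
    import numpy as np

    def couple_aeroelastic(Aa, Ba, Ca, Da, As, Bs, Cs, q_dyn):
        """Closed-loop state matrix of an aeroelastic feedback interconnection.

        Aerodynamic ROM:  xa' = Aa xa + Ba eta,   f = q_dyn * (Ca xa + Da eta)
        Structural model: xs' = As xs + Bs f,     eta = Cs xs
        (eta = generalized displacements, f = generalized aerodynamic forces)
        """
        top = np.hstack([Aa, Ba @ Cs])
        bottom = np.hstack([q_dyn * Bs @ Ca, As + q_dyn * Bs @ Da @ Cs])
        return np.vstack([top, bottom])

    # Placeholder example: 2-state aero ROM coupled to a 1-DOF structural oscillator
    Aa = np.array([[-2.0, 0.0], [0.0, -5.0]]); Ba = np.array([[1.0], [1.0]])
    Ca = np.array([[0.3, 0.1]]);               Da = np.array([[0.0]])
    wn, zeta = 10.0, 0.02                       # structural frequency and damping
    As = np.array([[0.0, 1.0], [-wn**2, -2*zeta*wn]])
    Bs = np.array([[0.0], [1.0]]); Cs = np.array([[1.0, 0.0]])

    A_cl = couple_aeroelastic(Aa, Ba, Ca, Da, As, Bs, Cs, q_dyn=50.0)
    print(np.linalg.eigvals(A_cl))  # flutter onset corresponds to an eigenvalue crossing Re > 0
    ```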

  7. Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian

    2011-01-01

    Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capabilities of VSP are demonstrated for component-based point definition geometries in a conceptual analysis and design framework.

  8. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earth Sciences Division; Zhang, Keni; Zhang, Keni

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.

  9. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures, and the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation using marching procedures and Green's function techniques are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with a first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.

  10. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet loss protection performance with lower computational complexity than the tTN code.

  11. Presentation of computer code SPIRALI for incompressible, turbulent, plane and spiral grooved cylindrical and face seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.

    1994-01-01

    A viewgraph presentation is made showing the capabilities of the computer code SPIRALI. Overall capabilities of SPIRALI include: computation of rotor dynamic coefficients, flow, and power loss for cylindrical and face seals; treatment of turbulent, laminar, Couette, and Poiseuille dominated flows; inclusion of fluid inertia effects; rotor dynamic coefficients in three (face) or four (cylindrical) degrees of freedom; effects of spiral grooves; user-definable transverse film geometry including circular steps and grooves; independent user-definable friction factor models for rotor and stator; and user-definable loss coefficients for sudden expansions and contractions.

  12. User manual for semi-circular compact range reflector code: Version 2

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1987-01-01

    A computer code has been developed at the Ohio State University ElectroScience Laboratory to analyze a semi-circular paraboloidal reflector with or without a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the reflector or its individual components at a given distance from the center of the paraboloid. The code computes the fields along a radial, horizontal, vertical or axial cut at that distance. Thus, it is very effective in computing the size of the sweet spot for a semi-circular compact range reflector. This report describes the operation of the code. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.

  13. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectoring and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  14. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

    Document available in abstract form only, full text of document follows: Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required. (authors)

  15. Establishment of a Beta Test Center for the NPARC Code at Central State University

    NASA Technical Reports Server (NTRS)

    Okhio, Cyril B.

    1996-01-01

    Central State University has received a supplementary award to purchase computer workstations for the NPARC (National Propulsion Ames Research Center) computational fluid dynamics code BETA Test Center. The computational code has also been acquired for installation on the workstations. The acquisition of this code is an initial step for CSU in joining an alliance composed of NASA, AEDC, the aerospace industry, and academia. A post-doctoral research fellow from a neighboring university will assist the PI in preparing a template for tutorial documents for the BETA test center. The major objective of the alliance is to establish a national applications-oriented CFD capability, centered on the NPARC code. By joining the alliance, the BETA test center at CSU will allow the PI, as well as undergraduate and post-graduate students, to test the capability of the NPARC code in predicting the physics of aerodynamic/geometric configurations that are of interest to the alliance. Currently, CSU is developing a once-a-year, hands-on conference/workshop based upon the experience acquired from running other codes similar to the NPARC code in the first year of this grant.

  16. Crashdynamics with DYNA3D: Capabilities and research directions

    NASA Technical Reports Server (NTRS)

    Whirley, Robert G.; Engelmann, Bruce E.

    1993-01-01

    The application of the explicit nonlinear finite element analysis code DYNA3D to crashworthiness problems is discussed. Emphasized in the first part of this work are the most important capabilities of an explicit code for crashworthiness analyses. The areas with significant research promise for the computational simulation of crash events are then addressed.

  17. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
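
    As a toy illustration of combining structural response and resistance (not the NESSUS algorithms themselves, which use far more efficient probabilistic methods than brute-force sampling), component reliability can be estimated by Monte Carlo as the probability that a randomly sampled resistance exceeds a randomly sampled response; the distributions and parameters below are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def component_reliability(n_samples=1_000_000):
        """Monte Carlo estimate of P(resistance > response) for lognormal variables."""
        response = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=n_samples)    # stress [MPa]
        resistance = rng.lognormal(mean=np.log(450.0), sigma=0.10, size=n_samples)  # strength [MPa]
        p_fail = np.count_nonzero(response >= resistance) / n_samples
        return 1.0 - p_fail, p_fail

    reliability, p_fail = component_reliability()
    print(f"reliability = {reliability:.6f}, probability of failure = {p_fail:.2e}")
    ```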

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  19. Progressive fracture of fiber composites

    NASA Technical Reports Server (NTRS)

    Irvin, T. B.; Ginty, C. A.

    1983-01-01

    Refined models and procedures are described for determining progressive composite fracture in graphite/epoxy angleplied laminates. Lewis Research Center capabilities are utilized including the Real Time Ultrasonic C Scan (RUSCAN) experimental facility and the Composite Durability Structural Analysis (CODSTRAN) computer code. The CODSTRAN computer code is used to predict the fracture progression based on composite mechanics, finite element stress analysis, and fracture criteria modules. The RUSCAN facility, CODSTRAN computer code, and scanning electron microscope are used to determine durability and identify failure mechanisms in graphite/epoxy composites.

  20. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.

  1. PerSEUS: Ultra-Low-Power High Performance Computing for Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Doxas, I.; Andreou, A.; Lyon, J.; Angelopoulos, V.; Lu, S.; Pritchett, P. L.

    2017-12-01

    Peta-op SupErcomputing Unconventional System (PerSEUS) aims to explore the use of ultra-low-power mixed-signal unconventional computational elements developed by Johns Hopkins University (JHU) for High Performance Scientific Computing (HPC), and to demonstrate that capability on both fluid and particle plasma codes. We will describe the JHU Mixed-signal Unconventional Supercomputing Elements (MUSE), and report initial results for the Lyon-Fedder-Mobarry (LFM) global magnetospheric MHD code and a UCLA general purpose relativistic Particle-In-Cell (PIC) code.

  2. Elliptical orbit performance computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program which generates and plots the elliptical orbit performance capability of space boosters for presentation purposes is described. Orbital performance capability of space boosters is typically presented as payload weight as a function of perigee and apogee altitudes. The parameters are derived from a parametric computer simulation of the booster flight which yields the payload weight as a function of velocity and altitude at insertion. The process of converting from velocity and altitude to apogee and perigee altitude and plotting the results as a function of payload weight is mechanized with the ELOPE program. The program theory, user instructions, input/output definitions, subroutine descriptions and detailed FORTRAN coding information are included.
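
    As an aside not taken from the report, the conversion from insertion velocity and altitude to apogee and perigee altitudes that the abstract describes follows from the vis-viva equation and conservation of angular momentum; the sketch below assumes a purely horizontal insertion velocity (zero flight-path angle) and an elliptical (bound) orbit.

    ```python
    import math

    MU_EARTH = 3.986004418e14   # Earth's gravitational parameter [m^3/s^2]
    R_EARTH = 6378.137e3        # Earth's equatorial radius [m]

    def apogee_perigee(v_insert, h_insert):
        """Apogee and perigee altitudes [km] from insertion speed [m/s] and altitude [m],
        assuming the velocity is horizontal at insertion."""
        r = R_EARTH + h_insert
        energy = 0.5 * v_insert**2 - MU_EARTH / r                  # specific orbital energy
        a = -MU_EARTH / (2.0 * energy)                             # semi-major axis (vis-viva)
        h_ang = r * v_insert                                       # specific angular momentum
        e = math.sqrt(max(0.0, 1.0 - h_ang**2 / (MU_EARTH * a)))   # eccentricity
        return (a * (1.0 + e) - R_EARTH) / 1e3, (a * (1.0 - e) - R_EARTH) / 1e3

    # Example: 7800 m/s horizontal insertion at 200 km altitude
    print(apogee_perigee(7800.0, 200e3))
    ```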

  3. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  4. Extensions and improvements on XTRAN3S

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime, has concentrated on the following areas: (1) maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) extension of the vectorization concepts in XTRAN3S to include additional areas of the code for improved execution speed; (3) modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.

  5. NORTICA—a new code for cyclotron analysis

    NASA Astrophysics Data System (ADS)

    Gorelov, D.; Johnson, D.; Marti, F.

    2001-12-01

    The new package NORTICA (Numerical ORbit Tracking In Cyclotrons with Analysis) of computer codes for beam dynamics simulations is under development at NSCL. The package was started as a replacement for the code MONSTER [1] developed in the laboratory in the past. The new codes are capable of beam dynamics simulations in both CCF (Coupled Cyclotron Facility) accelerators, the K500 and K1200 superconducting cyclotrons. The general purpose of this package is to assist in setting up and tuning the cyclotrons, taking into account the main field and extraction channel imperfections. The computer platform for the package is an Alpha Station with the UNIX operating system and an X-Windows graphic interface. A multiple programming language approach was used in order to combine the reliability of the numerical algorithms developed over a long period of time in the laboratory and the friendliness of a modern-style user interface. This paper describes the capability and features of the codes in their present state.

  6. Progress toward the development of an aircraft icing analysis capability

    NASA Technical Reports Server (NTRS)

    Shaw, R. J.

    1984-01-01

    An overview of the NASA efforts to develop an aircraft icing analysis capability is presented. Discussions are included of the overall and long-term objectives of the program as well as current capabilities and limitations of the various computer codes being developed. Descriptions are given of codes being developed to analyze two and three dimensional trajectories of water droplets, airfoil ice accretion, aerodynamic performance degradation of components and complete aircraft configurations, electrothermal deicers, and fluid freezing point depressant deicers. The need for benchmark and verification data to support the code development is also discussed.

  7. Implementation of a tree algorithm in MCNP code for nuclear well logging applications.

    PubMed

    Li, Fusheng; Han, Xiaogang

    2012-07-01

    The goal of this paper is to develop some modeling capabilities that are missing in the current MCNP code. Those missing capabilities can greatly help with certain nuclear tool designs, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities to be developed in this paper include the following: zone tally, neutron interaction tally, gamma ray index tally and enhanced pulse-height tally. The patched MCNP code can also be used to compute neutron slowing-down length and thermal neutron diffusion length. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. An Assessment of Current Fan Noise Prediction Capability

    NASA Technical Reports Server (NTRS)

    Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.

    2008-01-01

    In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model scale fans. The chosen codes were ANOPP, representing an empirical capability, RSI, representing an analytical capability, and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict the fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans except at extreme aft emission angles. The RSI code can predict the fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.

  9. Experimental and computational surface and flow-field results for an all-body hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Lockman, William K.; Lawrence, Scott L.; Cleary, Joseph W.

    1990-01-01

    The objective of the present investigation is to establish a benchmark experimental data base for a generic hypersonic vehicle shape for validation and/or calibration of advanced computational fluid dynamics computer codes. This paper includes results from the comprehensive test program conducted in the NASA/Ames 3.5-foot Hypersonic Wind Tunnel for a generic all-body hypersonic aircraft model. Experimental and computational results on flow visualization, surface pressures, surface convective heat transfer, and pitot-pressure flow-field surveys are presented. Comparisons of the experimental results with computational results from an upwind parabolized Navier-Stokes code developed at Ames demonstrate the capabilities of this code.

  10. Upwind MacCormack Euler solver with non-equilibrium chemistry

    NASA Technical Reports Server (NTRS)

    Sherer, Scott E.; Scott, James N.

    1993-01-01

    A computer code, designated UMPIRE, is currently under development to solve the Euler equations in two dimensions with non-equilibrium chemistry. UMPIRE employs an explicit MacCormack algorithm with dissipation introduced via Roe's flux-difference split upwind method. The code also has the capability to employ a point-implicit methodology for flows where stiffness is introduced through the chemical source term. A technique consisting of diagonal sweeps across the computational domain from each corner is presented, which is used to reduce storage and execution requirements. Results depicting one dimensional shock tube flow for both calorically perfect gas and thermally perfect, dissociating nitrogen are presented to verify current capabilities of the program. Also, computational results from a chemical reactor vessel with no fluid dynamic effects are presented to check the chemistry capability and to verify the point implicit strategy.

  11. ADPAC v1.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.

    1999-01-01

    The overall objective of this study was to evaluate the effects of turbulence models in a 3-D numerical analysis on the wake prediction capability. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes - Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block and discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.

  12. Fast Computation of the Two-Point Correlation Function in the Age of Big Data

    NASA Astrophysics Data System (ADS)

    Pellegrino, Andrew; Timlin, John

    2018-01-01

    We present a new code which quickly computes the two-point correlation function for large sets of astronomical data. This code combines the ease of use of Python with the speed of parallel shared libraries written in C. We include the capability to compute the auto- and cross-correlation statistics, and allow the user to calculate the three-dimensional and angular correlation functions. Additionally, the code automatically divides the user-provided sky masks into contiguous subsamples of similar size, using the HEALPix pixelization scheme, for the purpose of resampling. Errors are computed using jackknife and bootstrap resampling in a way that adds negligible extra runtime, even with many subsamples. We demonstrate comparable speed with other clustering codes, and code accuracy compared to known and analytic results.
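
    As an illustration unrelated to the specific implementation above, a minimal NumPy sketch of an angular correlation measurement with jackknife errors: normalized pair counts between data and random catalogs feed the Landy-Szalay estimator, and the measurement is repeated with one subregion removed at a time. The code described in the entry instead does the pair counting in parallel C shared libraries and builds its subregions with HEALPix.

    ```python
    import numpy as np

    def pair_counts(ra1, dec1, ra2, dec2, bins, auto=False):
        """Histogram of angular separations [deg] between two (small) catalogs."""
        d2r = np.pi / 180.0
        def unit(ra, dec):
            return np.c_[np.cos(dec*d2r)*np.cos(ra*d2r),
                         np.cos(dec*d2r)*np.sin(ra*d2r), np.sin(dec*d2r)]
        theta = np.degrees(np.arccos(np.clip(unit(ra1, dec1) @ unit(ra2, dec2).T, -1.0, 1.0)))
        if auto:                                   # drop self-pairs and double counting
            theta = theta[np.triu_indices_from(theta, k=1)]
        return np.histogram(theta.ravel(), bins=bins)[0].astype(float)

    def landy_szalay(d_ra, d_dec, r_ra, r_dec, bins):
        """w(theta) = (DD - 2DR + RR) / RR with normalized pair counts."""
        nd, nr = len(d_ra), len(r_ra)
        dd = pair_counts(d_ra, d_dec, d_ra, d_dec, bins, auto=True) / (0.5*nd*(nd-1))
        rr = pair_counts(r_ra, r_dec, r_ra, r_dec, bins, auto=True) / (0.5*nr*(nr-1))
        dr = pair_counts(d_ra, d_dec, r_ra, r_dec, bins) / (nd*nr)
        return (dd - 2.0*dr + rr) / rr

    def jackknife_w(d_ra, d_dec, d_lab, r_ra, r_dec, r_lab, bins):
        """Jackknife errors: recompute w(theta) leaving out one labeled subregion at a time."""
        labels = np.unique(d_lab)
        samples = np.array([landy_szalay(d_ra[d_lab != lab], d_dec[d_lab != lab],
                                         r_ra[r_lab != lab], r_dec[r_lab != lab], bins)
                            for lab in labels])
        n = len(labels)
        err = np.sqrt((n - 1) / n * np.sum((samples - samples.mean(0))**2, axis=0))
        return samples.mean(0), err
    ```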

  13. MODFLOW 2.0: A program for predicting moderator flow patterns

    NASA Astrophysics Data System (ADS)

    Peterson, P. F.; Paik, I. K.

    1991-07-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in the operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors.

  14. Heat pipe design handbook, part 2. [digital computer code specifications

    NASA Technical Reports Server (NTRS)

    Skrabek, E. A.

    1972-01-01

    The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described which calculates the steady state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration, the working fluid being a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.
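
    As a rough illustration (not the HPAD formulation, which also performs weight analysis and temperature-drop calculations), the steady-state hydrodynamic transport capability the abstract refers to is commonly estimated from a capillary limit: the largest heat load for which the wick's capillary pressure can overcome the liquid pressure drop and any hydrostatic head. All symbols and values below are illustrative assumptions.

    ```python
    def capillary_limit(sigma, r_eff, h_fg, rho_l, mu_l, K, A_w, L_eff, dp_gravity=0.0):
        """Crude capillary-limited heat transport of a heat pipe [W].

        Balances the maximum capillary pressure 2*sigma/r_eff against the Darcy
        liquid pressure drop through the wick (vapor pressure drop neglected).

        sigma : surface tension [N/m]      r_eff : effective pore radius [m]
        h_fg  : latent heat [J/kg]         rho_l, mu_l : liquid density, viscosity
        K     : wick permeability [m^2]    A_w : wick cross-sectional area [m^2]
        L_eff : effective transport length [m]
        dp_gravity : hydrostatic head rho_l*g*dh [Pa] (0 for horizontal or zero-g)
        """
        dp_avail = 2.0 * sigma / r_eff - dp_gravity
        if dp_avail <= 0.0:
            return 0.0
        # Darcy's law: dp_liquid = mu_l * m_dot * L_eff / (rho_l * K * A_w)
        m_dot = dp_avail * rho_l * K * A_w / (mu_l * L_eff)
        return m_dot * h_fg

    # Example with ammonia-like properties in a small wicked pipe (illustrative numbers)
    print(capillary_limit(sigma=0.02, r_eff=0.5e-3, h_fg=1.2e6, rho_l=600.0,
                          mu_l=2.0e-4, K=1.0e-9, A_w=2.0e-5, L_eff=0.5))
    ```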

  15. Investigation of upwind, multigrid, multiblock numerical schemes for three dimensional flows. Volume 1: Runge-Kutta methods for a thin layer Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Cannizzaro, Frank E.; Ash, Robert L.

    1992-01-01

    A state-of-the-art computer code has been developed that incorporates a modified Runge-Kutta time integration scheme, upwind numerical techniques, multigrid acceleration, and multi-block capabilities (RUMM). A three-dimensional thin-layer formulation of the Navier-Stokes equations is employed. For turbulent flow cases, the Baldwin-Lomax algebraic turbulence model is used. Two different upwind techniques are available: van Leer's flux-vector splitting and Roe's flux-difference splitting. Full approximation multi-grid plus implicit residual and corrector smoothing were implemented to enhance the rate of convergence. Multi-block capabilities were developed to provide geometric flexibility. This feature allows the developed computer code to accommodate any grid topology or grid configuration with multiple topologies. The results shown in this dissertation were chosen to validate the computer code and display its geometric flexibility, which is provided by the multi-block structure.

  16. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Demonstration of proof-of-principle achievement of this goal was carried out in research under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-average Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma edge-oriented Fokker-Planck code which has been constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. More efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes will advance computational modeling of plasma devices important to the USDOE magnetic fusion energy program, in particular the DIII-D tokamak at General Atomics, San Diego, the NSTX spherical tokamak at Princeton, New Jersey, and the MST reversed-field pinch in Madison, Wisconsin. The validation studies of the code against the experiments will improve understanding of physics important for magnetic fusion, and will increase our design capabilities for achieving the goals of the International Tokamak Experimental Reactor (ITER) project, in which the US is a participant and which seeks to demonstrate at least a factor of five in fusion power production divided by input power.

  17. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes the code compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.
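    As a rough illustration of what one pass of such a modular inversion loop does, the sketch below performs a single damped, regularized Gauss-Newton model update with NumPy. The names `forward`, `jacobian`, and `R` are placeholders standing in for the forward-modeling, sensitivity, and regularization modules; they are not AP3DMT's actual interfaces.

```python
import numpy as np

# Illustrative sketch of one regularized Gauss-Newton model update, the kind
# of step a modular inversion loop performs.  `forward`, `jacobian`, and `R`
# are placeholders for the forward-modeling, sensitivity, and regularization
# modules; they are not AP3DMT's actual interfaces.

def gauss_newton_update(m, d_obs, forward, jacobian, R, lam):
    """Return an updated model vector for the linearized objective
    ||d_obs - F(m)||^2 + lam * ||R m||^2."""
    r = d_obs - forward(m)                # data residual
    J = jacobian(m)                       # sensitivity matrix dF/dm
    A = J.T @ J + lam * (R.T @ R)         # Gauss-Newton normal matrix
    b = J.T @ r - lam * (R.T @ R) @ m     # right-hand side (negative gradient)
    return m + np.linalg.solve(A, b)

# usage with a trivial linear "forward problem" G m = d
G = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
d = np.array([1.0, 2.0, 3.0])
m = np.zeros(2)
for _ in range(5):
    m = gauss_newton_update(m, d, lambda m: G @ m, lambda m: G, np.eye(2), lam=1e-3)
```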

  18. Data engineering systems: Computerized modeling and data bank capabilities for engineering analysis

    NASA Technical Reports Server (NTRS)

    Kopp, H.; Trettau, R.; Zolotar, B.

    1984-01-01

    The Data Engineering System (DES) is a computer-based system that organizes technical data and provides automated mechanisms for storage, retrieval, and engineering analysis. The DES combines the benefits of a structured data base system with automated links to large-scale analysis codes. While the DES provides the user with many of the capabilities of a computer-aided design (CAD) system, the systems are actually quite different in several respects. A typical CAD system emphasizes interactive graphics capabilities and organizes data in a manner that optimizes these graphics. On the other hand, the DES is a computer-aided engineering system intended for the engineer who must operationally understand an existing or planned design or who desires to carry out additional technical analysis based on a particular design. The DES emphasizes data retrieval in a form that not only provides the engineer access to search and display the data but also links the data automatically with the computer analysis codes.

  19. An analytical procedure and automated computer code used to design model nozzles which meet MSFC base pressure similarity parameter criteria. [space shuttle

    NASA Technical Reports Server (NTRS)

    Sulyma, P. R.

    1980-01-01

    Fundamental equations and similarity definition and application are described as well as the computational steps of a computer program developed to design model nozzles for wind tunnel tests conducted to define power-on aerodynamic characteristics of the space shuttle over a range of ascent trajectory conditions. The computer code capabilities, a user's guide for the model nozzle design program, and the output format are examined. A program listing is included.

  20. Error threshold for color codes and random three-body Ising models.

    PubMed

    Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A

    2009-08-28

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p(c) = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.
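    The sketch below illustrates, on a deliberately simplified geometry, the kind of single-spin-flip Metropolis sampling such a study relies on: a random three-body Ising chain with couplings flipped to -1 with probability p. The lattice, couplings, and parameters are toy assumptions; the actual work uses the two-dimensional lattice induced by the color code and maps out the full temperature-disorder phase diagram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model, not the paper's: a random three-body Ising chain with Hamiltonian
#   H = -sum_i J_i * s_i * s_{i+1} * s_{i+2}   (periodic boundaries),
# where J_i = -1 with probability p, sampled with single-spin-flip Metropolis.

def metropolis_sweep(s, J, T):
    n = len(s)
    for k in rng.permutation(n):
        # energy change from flipping spin k: sum over the three plaquettes
        # (k-2, k-1, k), (k-1, k, k+1), (k, k+1, k+2) that contain it
        dE = 0.0
        for i in (k - 2, k - 1, k):
            dE += 2.0 * J[i % n] * s[i % n] * s[(i + 1) % n] * s[(i + 2) % n]
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[k] = -s[k]
    return s

n, p, T = 64, 0.109, 1.0                       # toy size, disorder, temperature
J = np.where(rng.random(n) < p, -1.0, 1.0)     # quenched random couplings
s = rng.choice([-1.0, 1.0], size=n)            # random initial spins
for _ in range(200):
    s = metropolis_sweep(s, J, T)
```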

  1. Computer code for analysing three-dimensional viscous flows in impeller passages and other duct geometries

    NASA Technical Reports Server (NTRS)

    Tatchell, D. G.

    1979-01-01

    A code, CATHY3/M, was prepared and demonstrated by application to a sample case. The preparation is reviewed, a summary of the capabilities and main features of the code is given, and the sample case results are discussed. Recommendations for future use and development of the code are provided.

  2. Methods of treating complex space vehicle geometry for charged particle radiation transport

    NASA Technical Reports Server (NTRS)

    Hill, C. W.

    1973-01-01

    Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined. Evaluations of geometric capability and speed are provided for these codes. Although no code development work is included several suggestions for significantly improving complex geometry codes are offered.

  3. Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katzgraber, Helmut G.; Theoretische Physik, ETH Zurich, CH-8093 Zurich; Bombin, H.

    We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code, combined with the inherent robustness of this implementation, show good prospects for future stable quantum computer implementations.

  4. Comprehensive Micromechanics-Analysis Code - Version 4.0

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.; Bednarcyk, B. A.

    2005-01-01

    Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.

  5. Rotordynamics on the PC: Further Capabilities of ARDS

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    1997-01-01

    Rotordynamics codes for personal computers are now becoming available. One of the most capable codes is Analysis of RotorDynamic Systems (ARDS), which uses the component mode synthesis method to analyze a system of up to 5 rotating shafts. ARDS was originally written for a mainframe computer but has been successfully ported to a PC; its basic capabilities for steady-state and transient analysis were reported in an earlier paper. Additional functions have now been added to the PC version of ARDS. These functions include: 1) Estimation of the peak response following blade loss without resorting to a full transient analysis; 2) Calculation of response sensitivity to input parameters; 3) Formulation of optimum rotor and damper designs to place critical speeds in desirable ranges or minimize bearing loads; 4) Production of Poincaré plots so the presence of chaotic motion can be ascertained. ARDS produces printed and plotted output. The executable code uses the full array sizes of the mainframe version and fits on a high-density floppy disc. Examples of all program capabilities are presented and discussed.

  6. Supersonic aerodynamic interference effects of store separation. Part 1: Computational analysis of cavity flowfields

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1986-01-01

    An explicit-implicit and an implicit two-dimensional Navier-Stokes code, along with various grid generation capabilities, were developed. A series of classical benchmark cases was simulated using these codes.

  7. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust, and this result is compared with the theoretical result. The present simulations will also be compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
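    For readers unfamiliar with the gust profiles mentioned above, the following minimal sketch evaluates the standard one-minus-cosine discrete gust; the amplitude and gust length in the usage line are illustrative values, not inputs from the FUN3D cases.

```python
import numpy as np

# Minimal sketch of the standard one-minus-cosine discrete gust profile;
# the peak velocity and gust length below are illustrative, not FUN3D inputs.

def one_minus_cosine_gust(t, w_max, t_g):
    """Vertical gust velocity w(t) = (w_max / 2) * (1 - cos(2*pi*t / t_g))
    for 0 <= t <= t_g, and zero outside that interval."""
    t = np.asarray(t, dtype=float)
    w = 0.5 * w_max * (1.0 - np.cos(2.0 * np.pi * t / t_g))
    return np.where((t >= 0.0) & (t <= t_g), w, 0.0)

t = np.linspace(-0.5, 2.0, 500)
w = one_minus_cosine_gust(t, w_max=10.0, t_g=1.0)   # 10 m/s peak over a 1 s gust
```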

  8. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.

    1988-01-01

    A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).

  9. Computer program for prediction of the deposition of material released from fixed and rotary wing aircraft

    NASA Technical Reports Server (NTRS)

    Teske, M. E.

    1984-01-01

    This is a user manual for the computer code AGDISP (AGricultural DISPersal), which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.

  10. Drekar v.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seefeldt, Ben; Sondak, David; Hensinger, David M.

    Drekar is an application code that solves partial differential equations for fluids that can be optionally coupled to electromagnetics. Drekar solves low-Mach compressible and incompressible computational fluid dynamics (CFD), compressible and incompressible resistive magnetohydrodynamics (MHD), and multiple species plasmas interacting with electromagnetic fields. Drekar discretization technology includes continuous and discontinuous finite element formulations, stabilized finite element formulations, mixed integration finite element bases (nodal, edge, face, volume) and an initial arbitrary Lagrangian Eulerian (ALE) capability. Drekar contains the implementation of the discretized physics and leverages the open source Trilinos project for both parallel solver capabilities and general finite element discretization tools. The code will be released open source under a BSD license. The code is used for fundamental research for simulation of fluids and plasmas on high performance computing environments.

  11. Development and Demonstration of a Computational Tool for the Analysis of Particle Vitiation Effects in Hypersonic Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Perkins, Hugh Douglas

    2010-01-01

    In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.

  12. SCISEAL: A CFD Code for Analysis of Fluid Dynamic Forces in Seals

    NASA Technical Reports Server (NTRS)

    Althavale, Mahesh M.; Ho, Yin-Hsing; Przekwas, Andre J.

    1996-01-01

    A 3D CFD code, SCISEAL, has been developed and validated. Its capabilities include cylindrical seals, and it is employed on labyrinth seals, rim seals, and disc cavities. State-of-the-art numerical methods include colocated grids, high-order differencing, and turbulence models which account for wall roughness. SCISEAL computes efficient solutions for complicated flow geometries and seal-specific capabilities (rotor loads, torques, etc.).

  13. Automatic Data Traffic Control on DSM Architecture

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study data traffic on distributed shared memory machines and conclude that data placement and grouping improve the performance of scientific codes. We present several methods which users can employ to improve data traffic in their codes. We report on the implementation of a tool which detects the code fragments causing data congestion and advises the user on improvements to data routing in these fragments. The capabilities of the tool include deduction of data alignment and affinity from the source code; detection of code constructs having abnormally high cache or TLB misses; and generation of data placement constructs. We demonstrate the capabilities of the tool on experiments with the NAS parallel benchmarks and with a simple computational fluid dynamics application, ARC3D.

  14. Acoustic Prediction State of the Art Assessment

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2007-01-01

    The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state-of-the-art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System level results are shown for both aircraft and engines. Component level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.

  15. CFL3D Version 6.4-General Usage and Aeroelastic Analysis

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Rumsey, Christopher L.; Biedron, Robert T.

    2006-01-01

    This document contains the course notes on the computational fluid dynamics code CFL3D version 6.4. It is intended to provide users, from basic to advanced, the information necessary to successfully use the code for a broad range of cases. Much of the course covers capability that has been a part of previous versions of the code, with material compiled from a CFL3D v5.0 manual and from the CFL3D v6 web site prior to the current release; this part of the material is presented for users of the code who are not familiar with computational fluid dynamics. There is new capability in CFL3D version 6.4 presented here that has not previously been published. There are also outdated features no longer used or recommended in recent releases of the code. The information offered here supersedes earlier manuals and updates outdated usage; where current usage supersedes older versions, notation of that is made. These course notes also provide hints for usage, code installation, and examples not found elsewhere.

  16. Digital Image Correlation Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dan; Crozier, Paul; Reu, Phil

    DICe is an open source digital image correlation (DIC) tool intended for use as a module in an external application or as a standalone analysis code. Its primary capability is computing full-field displacements and strains from sequences of digital images. These images are typically of a material sample undergoing a materials characterization experiment, but DICe is also useful for other applications (for example, trajectory tracking). DICe is machine portable (Windows, Linux and Mac) and can be effectively deployed on a high performance computing platform. Capabilities from DICe can be invoked through a library interface, via source code integration of DICe classes, or through a graphical user interface.

  17. PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas

    The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operations of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de-facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales with more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed memory machines. PARVMEC is a non-linear code, with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high fidelity analyses of magnetically confined plasmas at unprecedented scales.

  18. Development of V/STOL methodology based on a higher order panel method

    NASA Technical Reports Server (NTRS)

    Bhateley, I. C.; Howell, G. A.; Mann, H. W.

    1983-01-01

    The development of a computational technique to predict the complex flowfields of V/STOL aircraft was initiated in which a number of modules and a potential flow aerodynamic code were combined in a comprehensive computer program. The modules were developed in a building-block approach to assist the user in preparing the geometric input and to compute parameters needed to simulate certain flow phenomena that cannot be handled directly within a potential flow code. The PAN AIR aerodynamic code, which is a higher-order panel method, forms the nucleus of this program. PAN AIR's extensive support for generalized boundary conditions allows the modules to interact with the aerodynamic code through the input and output files, thereby requiring no changes to the basic code and allowing easy replacement of updated modules.

  19. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.

  20. Development of advanced structural analysis methodologies for predicting widespread fatigue damage in aircraft structures

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.

    1995-01-01

    NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.

  1. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  2. Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold

    1997-01-01

    The computer codes, AERO2S and WINGDES, are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.

  3. Survey of computer programs for heat transfer analysis

    NASA Astrophysics Data System (ADS)

    Noor, A. K.

    An overview is presented of the current capabilities of thirty-eight computer programs that can be used for solution of heat transfer problems. These programs range from the large, general-purpose codes with a broad spectrum of capabilities, large user community and comprehensive user support (e.g., ANSYS, MARC, MITAS 2 MSC/NASTRAN, SESAM-69/NV-615) to the small, special purpose codes with limited user community such as ANDES, NNTB, SAHARA, SSPTA, TACO, TEPSA AND TRUMP. The capabilities of the programs surveyed are listed in tabular form followed by a summary of the major features of each program. As with any survey of computer programs, the present one has the following limitations: (1) It is useful only in the initial selection of the programs which are most suitable for a particular application. The final selection of the program to be used should, however, be based on a detailed examination of the documentation and the literature about the program; (2) Since computer software continually changes, often at a rapid rate, some means must be found for updating this survey and maintaining some degree of currency.

  4. Survey of computer programs for heat transfer analysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1982-01-01

    An overview is presented of the current capabilities of thirty-eight computer programs that can be used for solution of heat transfer problems. These programs range from the large, general-purpose codes with a broad spectrum of capabilities, large user community and comprehensive user support (e.g., ANSYS, MARC, MITAS 2 MSC/NASTRAN, SESAM-69/NV-615) to the small, special purpose codes with limited user community such as ANDES, NNTB, SAHARA, SSPTA, TACO, TEPSA AND TRUMP. The capabilities of the programs surveyed are listed in tabular form followed by a summary of the major features of each program. As with any survey of computer programs, the present one has the following limitations: (1) It is useful only in the initial selection of the programs which are most suitable for a particular application. The final selection of the program to be used should, however, be based on a detailed examination of the documentation and the literature about the program; (2) Since computer software continually changes, often at a rapid rate, some means must be found for updating this survey and maintaining some degree of currency.

  5. A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and it provides a basis for personal computer software capable of space shield analysis and optimization.

  6. CREME: The 2011 Revision of the Cosmic Ray Effects on Micro-Electronics Code

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Barghouty, Abdulnasser F.; Reed, Robert A.; Sierawski, Brian D.; Watts, John W., Jr.

    2012-01-01

    We describe a tool suite, CREME, which combines existing capabilities of CREME96 and CREME86 with new radiation environment models and new Monte Carlo computational capabilities for single event effects and total ionizing dose.

  7. Three-dimensional turbopump flowfield analysis

    NASA Technical Reports Server (NTRS)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  8. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.

  9. SCISEAL: A CFD code for analysis of fluid dynamic forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh; Przekwas, Andrzej

    1994-01-01

    A viewgraph presentation is made of the objectives, capabilities, and test results of the computer code SCISEAL. Currently, the seal code has: a finite volume, pressure-based integration scheme; colocated variables with strong conservation approach; high-order spatial differencing, up to third-order; up to second-order temporal differencing; a comprehensive set of boundary conditions; a variety of turbulence models and surface roughness treatment; moving grid formulation for arbitrary rotor whirl; rotor dynamic coefficients calculated by the circular whirl and numerical shaker methods; and small perturbation capabilities to handle centered and eccentric seals.

  10. Capabilities of LEWICE 1.6 and Comparison With Experimental Data

    DOT National Transportation Integrated Search

    1996-01-01

    A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. The most recent release of this code is LEWICE 1.6. This paper will demonstr...

  11. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM) - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6-petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.

  12. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
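    The h-refinement study mentioned above boils down to a simple calculation: for errors e_h and e_{h/r} on grids refined by a ratio r, the observed order is p = log(e_h / e_{h/r}) / log(r). The sketch below shows that calculation on placeholder error values, not results from the FR code.

```python
import numpy as np

# Observed order of accuracy from an h-refinement study: for errors e_h and
# e_{h/r} on grids refined by ratio r, p = log(e_h / e_{h/r}) / log(r).
# The error values below are placeholders, not results from the FR code.

def observed_order(errors, refinement_ratio=2.0):
    errors = np.asarray(errors, dtype=float)
    return np.log(errors[:-1] / errors[1:]) / np.log(refinement_ratio)

e = [3.2e-3, 4.1e-4, 5.2e-5]        # hypothetical L2 errors on successively halved grids
print(observed_order(e))            # ~[2.96, 2.98], i.e. roughly third-order behaviour
```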

  13. Dexter - A one-dimensional code for calculating thermionic performance of long converters.

    NASA Technical Reports Server (NTRS)

    Sawyer, C. D.

    1971-01-01

    This paper describes a versatile code for computing the coupled thermionic electric-thermal performance of long thermionic converters in which the temperature and voltage variations cannot be neglected. The code is capable of accounting for a variety of external electrical connection schemes, coolant flow paths and converter failures by partial shorting. Example problem solutions are given.

  14. Performance analysis of a parallel Monte Carlo code for simulating solar radiative transfer in cloudy atmospheres using CUDA-enabled NVIDIA GPU

    NASA Astrophysics Data System (ADS)

    Russkova, Tatiana V.

    2017-11-01

    One tool to improve the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on CUDA-enabled NVIDIA graphics processors is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capability are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
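    The heart of any such photon Monte Carlo is the collision loop sketched below: free paths drawn as -ln(xi) in optical depth, with each collision resolved as scattering or absorption according to the single-scattering albedo. The homogeneous slab, crude isotropic scattering, and every parameter value here are assumptions for illustration and do not reflect the cloud models or the GPU implementation discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch of the sampling kernel inside a photon Monte Carlo: exponential
# free paths in optical depth and a scatter/absorb decision from the single-
# scattering albedo.  The homogeneous slab, isotropic scattering, and all
# parameter values are assumptions, not the paper's cloud models or GPU code.

def trace_photon(tau_top, omega0, max_events=10000):
    """Fate of one photon in a plane-parallel slab of optical depth tau_top."""
    tau, mu = 0.0, 1.0                          # start at the top, heading down
    for _ in range(max_events):
        tau += mu * (-np.log(rng.random()))     # vertical optical path to next event
        if tau >= tau_top:
            return "transmitted"
        if tau <= 0.0:
            return "reflected"
        if rng.random() > omega0:
            return "absorbed"
        mu = 2.0 * rng.random() - 1.0           # isotropic scattering direction
    return "absorbed"

fates = [trace_photon(tau_top=2.0, omega0=0.9) for _ in range(10000)]
print({f: fates.count(f) / len(fates) for f in ("transmitted", "reflected", "absorbed")})
```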

  15. User's manual: Subsonic/supersonic advanced panel pilot code

    NASA Technical Reports Server (NTRS)

    Moran, J.; Tinoco, E. N.; Johnson, F. T.

    1978-01-01

    Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation and it should not be construed to represent a finished production program. The pilot code is based on a higher order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.

  16. Computer program optimizes design of nuclear radiation shields

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1971-01-01

    Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.

  17. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and the exchange of data between computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computation with those from the sequential computation (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance obtained using THC-MP on parallel computing facilities.
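    The domain-decomposition bookkeeping described above can be pictured with the serial sketch below: a 1D field is split into subdomains padded with one-cell ghost layers, and ghost values are refreshed from neighbouring subdomains before each local solve. The real code exchanges these data between distributed-memory nodes (e.g. over MPI); the helper names and chunk count here are illustrative assumptions.

```python
import numpy as np

# Serial illustration of domain-decomposition bookkeeping: split a 1D field
# into subdomains with one-cell ghost layers and refresh the ghosts from the
# neighbours.  The real code does this across distributed-memory nodes (e.g.
# with MPI); helper names and the chunk count are illustrative assumptions.

def decompose(field, nparts):
    """Split a 1D array into nparts chunks, each padded with two ghost cells."""
    return [np.concatenate(([0.0], chunk, [0.0]))
            for chunk in np.array_split(field, nparts)]

def exchange_ghosts(chunks):
    """Copy each subdomain's boundary values into the neighbours' ghost cells."""
    for i, c in enumerate(chunks):
        c[0] = chunks[i - 1][-2] if i > 0 else c[1]                  # left ghost
        c[-1] = chunks[i + 1][1] if i < len(chunks) - 1 else c[-2]   # right ghost
    return chunks

field = np.linspace(0.0, 1.0, 12)
chunks = exchange_ghosts(decompose(field, nparts=3))
```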

  18. Centrifugal and Axial Pump Design and Off-Design Performance Prediction

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1995-01-01

    A meanline pump-flow modeling method has been developed to provide a fast capability for modeling pumps of cryogenic rocket engines. Based on this method, a meanline pump-flow code PUMPA was written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The design-point rotor efficiency and slip factors are obtained from empirical correlations to rotor-specific speed and geometry. The pump code can model axial, inducer, mixed-flow, and centrifugal pumps and can model multistage pumps in series. The rapid input setup and computer run time for this meanline pump flow code make it an effective analysis and conceptual design tool. The map-generation capabilities of the code provide the information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of PUMPA permit the user to do parametric design space exploration of candidate pump configurations and to provide head-flow maps for engine system evaluation.
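    A back-of-the-envelope example of the kind of meanline relation such a code builds on is sketched below: the Euler head of a centrifugal impeller with a slip factor and a lumped hydraulic efficiency. These are textbook forms with assumed input values, not PUMPA's actual correlations or interfaces.

```python
import math

# Textbook meanline relation of the kind such a code builds on (assumed form
# and values, not PUMPA's correlations): Euler head of a centrifugal impeller
# with a slip factor and a lumped hydraulic efficiency, no inlet swirl.

def meanline_head(Q, N, r2, b2, beta2b_deg, slip, eta_h):
    """Head rise [m] for flow Q [m^3/s], shaft speed N [rev/s], tip radius r2 [m],
    exit width b2 [m], blade exit angle beta2b [deg from tangential]."""
    g = 9.80665
    U2 = 2.0 * math.pi * N * r2                                   # blade tip speed
    Cm2 = Q / (2.0 * math.pi * r2 * b2)                           # exit meridional velocity
    Cu2 = slip * U2 - Cm2 / math.tan(math.radians(beta2b_deg))    # exit swirl with slip
    return eta_h * U2 * Cu2 / g                                   # Euler head, Cu1 = 0

H = meanline_head(Q=0.05, N=50.0, r2=0.1, b2=0.01, beta2b_deg=25.0, slip=0.85, eta_h=0.9)
```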

  19. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  20. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  1. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency, and of achieving RCS control with modest structural changes, are becoming of paramount importance in stealth design. A computer code that evaluates the RCS of arbitrarily shaped metallic objects generated with computer aided design (CAD), and its validation against measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.

  2. Testing of Error-Correcting Sparse Permutation Channel Codes

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill, V.; Orlov, Sergei S.

    2008-01-01

    A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
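    A toy Monte Carlo harness in the spirit of that test program is sketched below: words with exactly K "on" bits in a block of length N are passed through a binary symmetric channel and the fraction of corrupted words is estimated. The decoder itself and the sparseness parameter M are not modeled; all names and values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy harness: words with exactly K "on" bits in a block of length N are sent
# through a binary symmetric channel of crossover probability p, and the
# fraction of corrupted words is estimated.  The decoder and the sparseness
# parameter M of the real code are not modelled; all values are assumptions.

def random_sparse_word(N, K):
    w = np.zeros(N, dtype=np.uint8)
    w[rng.choice(N, size=K, replace=False)] = 1
    return w

def channel(word, p):
    flips = (rng.random(word.shape) < p).astype(np.uint8)
    return word ^ flips

def raw_word_error_rate(N, K, p, trials=20000):
    errors = 0
    for _ in range(trials):
        w = random_sparse_word(N, K)
        if np.any(channel(w, p) != w):
            errors += 1
    return errors / trials

print(raw_word_error_rate(N=64, K=4, p=0.001))   # uncoded word error rate of the channel
```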

  3. Increased Rail Transit Vehicle Crashworthiness in Head-On Collisions. Volume IV. TRAIN User's Manual.

    DOT National Transportation Integrated Search

    1980-06-01

    A specific goal of safety is to reduce the number of injuries that may result from the collision of two trains. In Volume IV, a computer code for the simulated crash of two railcar consists is described. The code is capable of simulating the mechanic...

  4. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  5. User's manual for three dimensional FDTD version A code for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Finite Difference Time Domain Electromagnetic Scattering Code Version A is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain technique (FDTD). This manual provides a description of the code and corresponding results for the default scattering problem. In addition to that description, the manual covers the operation of the code, its resource requirements, the capabilities of the Version A code, a description of each subroutine, a brief discussion of the radar cross section computations, and a discussion of the scattering results.
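    As a pocket illustration of the FDTD technique the code is built on, the sketch below advances the 1D Yee scheme in free space with normalized units and a soft Gaussian source; the real Version A code is a full 3D scattering solver with dielectric materials, so the grid size, step count, and source parameters here are purely illustrative.

```python
import numpy as np

# 1D Yee-scheme sketch of the FDTD technique (free space, normalized units,
# soft Gaussian source).  Grid size, step count, and source parameters are
# illustrative; the Version A code is a full 3D scattering solver.

nz, nt = 400, 800
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field on the staggered (Yee) grid
c = 0.5                    # Courant number c0*dt/dz

for n in range(nt):
    hy += c * (ez[1:] - ez[:-1])                  # update H from the curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])            # update E from the curl of H
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)     # soft Gaussian source at one node
```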

  6. Prediction of sound radiated from different practical jet engine inlets

    NASA Technical Reports Server (NTRS)

    Zinn, B. T.; Meyer, W. L.

    1980-01-01

    Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more efficient computationally by a factor of about three and they are now capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated. This data is required as input for the computer programs which calculate the sound fields. This new geometry generating computer program considerably reduces the time required to generate the input data which was one of the most time consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented and comparison of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the results of the computations with simple source solutions.

  7. Assessment of the MHD capability in the ATHENA code using data from the ALEX facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1989-03-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.

  8. Collaborative Software Development in Support of Fast Adaptive AeroSpace Tools (FAAST)

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Nielsen, Eric J.; Gnoffo, Peter A.; Park, Michael A.; Wood, William A.

    2003-01-01

    A collaborative software development approach is described. The software product is an adaptation of proven computational capabilities combined with new capabilities to form the Agency's next generation aerothermodynamic and aerodynamic analysis and design tools. To efficiently produce a cohesive, robust, and extensible software suite, the approach uses agile software development techniques; specifically, project retrospectives, the Scrum status meeting format, and a subset of Extreme Programming's coding practices are employed. Examples are provided which demonstrate the substantial benefits derived from employing these practices. Also included is a discussion of issues encountered when porting legacy Fortran 77 code to Fortran 95 and a Fortran 95 coding standard.

  9. CFD Code Survey for Thrust Chamber Application

    NASA Technical Reports Server (NTRS)

    Gross, Klaus W.

    1990-01-01

    In the quest to find analytical reference codes, responses from a questionnaire are presented which portray the current computational fluid dynamics (CFD) program status and capability at various organizations, characterizing liquid rocket thrust chamber flow fields. Sample cases are identified to examine the ability, operational condition, and accuracy of the codes. To select the best suited programs for accelerated improvements, evaluation criteria are being proposed.

  10. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purposes of the verification project are to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.
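    A minimal sketch of the kind of quantity a solution (calculation) verification study examines: the observed order of convergence estimated from solutions on three successively refined grids. The refinement ratio and functional values below are hypothetical placeholders, not project data.

    ```python
    # Estimate the observed order of convergence from three grid levels with
    # a constant refinement ratio r (standard Richardson-extrapolation practice).
    import math

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed order p from a functional computed on coarse/medium/fine grids."""
        return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

    # Hypothetical values for a functional on grids refined by a factor of 2;
    # a result near the scheme's formal order supports the verification claim.
    p = observed_order(f_coarse=1.120, f_medium=1.030, f_fine=1.008, r=2.0)
    print(f"observed order of convergence ~ {p:.2f}")   # ~2.0 here
    ```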

  11. DEXTER: A one-dimensional code for calculating thermionic performance of long converters

    NASA Technical Reports Server (NTRS)

    Sawyer, C. D.

    1971-01-01

    A versatile code is described for computing the coupled thermionic electric-thermal performance of long thermionic converters in which the temperature and voltage variations cannot be neglected. The code is capable of accounting for a variety of external electrical connection schemes, coolant flow paths and converter failures by partial shorting. Example problem solutions are included along with a user's manual.

  12. Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.

    1999-09-01

    A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.
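    As a rough illustration of that per-time-step exchange, the sketch below shows an operator-split coupling loop in which a thermal-hydraulics step and a spatial-kinetics step hand feedback variables back and forth. The classes, method names, and numbers are placeholders, not the TRAC-PF1/MOD2 or NESTLE interfaces.

    ```python
    # Hedged sketch of coupled thermal-hydraulics / spatial-kinetics time stepping.
    class ThermalHydraulics:
        def advance(self, power, dt):
            # Placeholder feedback variables derived from the current power level.
            fuel_temp = 900.0 + 0.05 * power
            coolant_density = 740.0 - 0.01 * power
            return fuel_temp, coolant_density

    class SpatialKinetics:
        def advance(self, fuel_temp, coolant_density, dt):
            # Placeholder: a real kinetics code updates cross sections from the
            # feedback variables and re-solves the 3-D flux/power distribution.
            return 3000.0 - 0.5 * (fuel_temp - 900.0)

    def run_coupled_transient(th, nk, power0, t_end, dt):
        t, power = 0.0, power0
        while t < t_end:
            fuel_temp, coolant_density = th.advance(power, dt)  # TH step using current power
            power = nk.advance(fuel_temp, coolant_density, dt)  # kinetics step using TH feedback
            t += dt
        return power

    print(run_coupled_transient(ThermalHydraulics(), SpatialKinetics(), 3000.0, 1.0, 0.05))
    ```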

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preece, D.S.; Knudsen, S.D.

    The spherical element computer code DMC (Distinct Motion Code) used to model rock motion resulting from blasting has been enhanced to allow routine computer simulations of bench blasting. The enhancements required for bench blast simulation include: (1) modifying the gas flow portion of DMC, (2) adding a new explosive gas equation of state capability, (3) modifying the porosity calculation, and (4) accounting for blastwell spacing parallel to the face. A parametric study performed with DMC shows logical variation of the face velocity as burden, spacing, blastwell diameter and explosive type are varied. These additions represent a significant advance in the capability of DMC which will not only aid in understanding the physics involved in blasting but will also become a blast design tool. 8 refs., 7 figs., 1 tab.

  14. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particle or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.
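    To make two of the ideas above concrete (exponential free-path sampling and a weight-based variance-reduction device), here is a toy one-group transport sketch for a 1-D rod with implicit capture. It is purely didactic and does not represent MCNP's physics, geometry, or input.

    ```python
    # Toy 1-D, one-group Monte Carlo transmission estimate with implicit capture.
    import math
    import random

    def transmission(sigma_t, sigma_s, thickness, n_particles=50_000, w_cut=1e-3):
        """Weight fraction transmitted through a rod of the given thickness."""
        transmitted = 0.0
        for _ in range(n_particles):
            x, mu, w = 0.0, 1.0, 1.0                # position, direction (+/-1), statistical weight
            while w > w_cut:
                x += mu * (-math.log(random.random()) / sigma_t)   # sample a free path
                if x >= thickness:
                    transmitted += w
                    break
                if x <= 0.0:                         # leaked back out the entrance face
                    break
                w *= sigma_s / sigma_t               # implicit capture: survive with reduced weight
                mu = random.choice((1.0, -1.0))      # isotropic scattering in a rod model
        return transmitted / n_particles

    print(transmission(sigma_t=1.0, sigma_s=0.7, thickness=5.0))
    ```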

  15. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
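    To give a feel for the deposit/solve/push cycle whose particle and grid loops such a code distributes across ranks, threads, and vector lanes, here is a minimal 1-D electrostatic PIC step written with whole-array operations. It is only an illustrative analogue; the gyrokinetic equations and GTC-P's actual decompositions are not represented.

    ```python
    # Minimal 1-D electrostatic, periodic PIC step (nearest-grid-point weighting).
    import numpy as np

    def pic_step(x, v, q_over_m, grid_n, length, dt):
        dx = length / grid_n
        # 1) Deposit particle charge onto the grid.
        idx = np.floor(x / dx).astype(int) % grid_n
        rho = np.bincount(idx, minlength=grid_n) / dx
        # 2) Solve the periodic Poisson equation with an FFT and form E = -dphi/dx.
        k = 2.0 * np.pi * np.fft.fftfreq(grid_n, d=dx)
        k[0] = 1.0                                   # avoid division by zero for the mean mode
        phi_hat = np.fft.fft(rho - rho.mean()) / k**2
        E = np.real(np.fft.ifft(-1j * k * phi_hat))
        # 3) Gather the field at particle positions and push (leapfrog-style update).
        v += q_over_m * E[idx] * dt
        x = (x + v * dt) % length
        return x, v

    x = np.random.rand(10_000)
    v = 0.01 * np.random.randn(10_000)
    x, v = pic_step(x, v, q_over_m=-1.0, grid_n=64, length=1.0, dt=0.1)
    ```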

  17. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. The NASCAP/LEO, a three dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  18. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
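    As an illustration of the kind of solver kernel such equation-solving libraries provide, the sketch below is a standard (unpreconditioned) conjugate-gradient iteration for symmetric positive-definite systems; its dot products and vector updates are exactly the operations that map well onto vector and multiprocessor hardware. It is a generic example, not the FORTRAN library described above.

    ```python
    # Plain conjugate-gradient iteration for a symmetric positive-definite system.
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))          # ~[0.0909, 0.6364]
    ```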

  19. Global Magnetohydrodynamic Simulation Using High Performance FORTRAN on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Ogino, T.

    High Performance Fortran (HPF) is one of the modern and common techniques used to achieve high-performance parallel computation. We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer; the MHD code was fully vectorized and fully parallelized in VPP Fortran. The overall performance and capability of the HPF MHD code were shown to be almost comparable to those of the VPP Fortran version. A 3-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops with an efficiency of 76.5% on the VPP5000/56 in vector and parallel computation, permitting comparison with catalog values. We have concluded that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.

  20. Enhancement of the CAVE computer code

    NASA Astrophysics Data System (ADS)

    Rathjen, K. A.; Burk, H. O.

    1983-12-01

    The computer code CAVE (Conduction Analysis via Eigenvalues) is a convenient and efficient computer code for predicting two dimensional temperature histories within thermal protection systems for hypersonic vehicles. The capabilities of CAVE were enhanced by incorporation of the following features into the code: real gas effects in the aerodynamic heating predictions, geometry and aerodynamic heating package for analyses of cone shaped bodies, input option to change from laminar to turbulent heating predictions on leading edges, modification to account for reduction in adiabatic wall temperature with increase in leading sweep, geometry package for two dimensional scramjet engine sidewall, with an option for heat transfer to external and internal surfaces, print out modification to provide tables of select temperatures for plotting and storage, and modifications to the radiation calculation procedure to eliminate temperature oscillations induced by high heating rates. These new features are described.

  1. FY17 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Jung, Y. S.; Smith, M. A.

    2017-09-30

    Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house meshing generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattsson, Ann E.

    Density Functional Theory (DFT) based Equation of State (EOS) construction is a prominent part of Sandia’s capabilities to support engineering sciences. This capability is based on augmenting experimental data with information gained from computational investigations, especially in those parts of the phase space where experimental data is hard, dangerous, or expensive to obtain. A key part of the success of the Sandia approach is the fundamental science work supporting the computational capability. Not only does this work enhance the capability to perform highly accurate calculations but it also provides crucial insight into the limitations of the computational tools, providing high confidence in the results even where results cannot be, or have not yet been, validated by experimental data. This report concerns the key ingredient of projector augmented-wave (PAW) potentials for use in pseudo-potential computational codes. Using the tools discussed in SAND2012-7389 we assess the standard Vienna Ab-initio Simulation Package (VASP) PAWs for Molybdenum.

  3. Introduction of the ASP3D Computer Program for Unsteady Aerodynamic and Aeroelastic Analyses

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    2005-01-01

    A new computer program has been developed called ASP3D (Advanced Small Perturbation 3D), which solves the small perturbation potential flow equation in an advanced form including mass-consistent surface and trailing wake boundary conditions, and entropy, vorticity, and viscous effects. The purpose of the program is for unsteady aerodynamic and aeroelastic analyses, especially in the nonlinear transonic flight regime. The program exploits the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The new ASP3D code is the result of a decade of developmental work on improvements to the small perturbation formulation, performed while the author was employed as a Senior Research Scientist in the Configuration Aerodynamics Branch at the NASA Langley Research Center. The ASP3D code is a significant improvement to the state-of-the-art for transonic aeroelastic analyses over the CAP-TSD code (Computational Aeroelasticity Program Transonic Small Disturbance), which was developed principally by the author in the mid-1980s. The author is in a unique position as the developer of both computer programs to compare, contrast, and ultimately make conclusions regarding the underlying formulations and utility of each code. The paper describes the salient features of the ASP3D code including the rationale for improvements in comparison with CAP-TSD. Numerous results are presented to demonstrate the ASP3D capability. The general conclusion is that the new ASP3D capability is superior to the older CAP-TSD code because of the myriad improvements developed and incorporated.

  4. Modeling and Simulation of Explosively Driven Electromechanical Devices

    NASA Astrophysics Data System (ADS)

    Demmie, Paul N.

    2002-07-01

    Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.

  5. Updated Panel-Method Computer Program

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1995-01-01

    Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. Contains several advanced features, including internal mathematical modeling of flow, time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, capability for computation of off-body and on-body streamlines, and capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support scientific-visualization needs of PMARC_12. GVS available separately from COSMIC. PMARC_12 written in standard FORTRAN 77, with exception of NAMELIST extension used for input.

  6. Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies

    NASA Technical Reports Server (NTRS)

    Reyhner, T. A.

    1982-01-01

    An analysis was developed and a computer code, P465 Version A, written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.
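    Line relaxation sweeps the grid one line at a time, solving a small tridiagonal system along each line while the neighboring lines are held at their latest values. The core kernel is the Thomas algorithm sketched below; this is a generic illustration, not the P465 implementation.

    ```python
    # Thomas algorithm for a tridiagonal system:
    #   a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],  with a[0] and c[n-1] unused.
    def thomas_solve(a, b, c, d):
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Example: one implicit line of a discretized Laplacian with a unit right-hand side.
    n = 5
    print(thomas_solve([0.0] + [-1.0] * (n - 1), [2.0] * n, [-1.0] * (n - 1) + [0.0], [1.0] * n))
    ```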

  7. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability applicable to solving very large highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
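    For readers unfamiliar with the basic idea, the hedged sketch below estimates the unreliability of a simple series system at a mission time by sampling Weibull-distributed (non-constant failure rate) component lifetimes. A real MC-HARP analysis layers variance reduction and fault/error-handling models on top of this; none of that is represented here, and the parameters are made up.

    ```python
    # Crude Monte Carlo estimate of series-system unreliability with Weibull lifetimes.
    import random

    def system_unreliability(mission_time, components, n_trials=100_000):
        """components: list of (scale, shape) Weibull parameters; the system is
        treated as a series system that fails when its first component fails."""
        failures = 0
        for _ in range(n_trials):
            t_first_failure = min(random.weibullvariate(scale, shape)
                                  for scale, shape in components)
            if t_first_failure <= mission_time:
                failures += 1
        return failures / n_trials

    # Hypothetical two-component system over a 1000-hour mission.
    print(system_unreliability(1000.0, [(5.0e4, 1.2), (8.0e4, 0.9)]))
    ```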

  8. SPAR improved structural-fluid dynamic analysis capability

    NASA Technical Reports Server (NTRS)

    Pearson, M. L.

    1985-01-01

    The results of a study whose objective was to improve the operation of the SPAR computer code through improved efficiency, user features, and documentation are presented. Additional capability was added to the SPAR arithmetic utility system, including trigonometric functions, numerical integration, interpolation, and matrix combinations. Improvements were made in the EIG processor. A processor was created to compute and store principal stresses in table-format data sets. An additional capability was developed and incorporated into the plot processor which permits plotting directly from table-format data sets. Documentation of all these features is provided in the form of updates to the SPAR users manual.

  9. Seals Flow Code Development

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.

  10. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  11. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-11-01

    The finite element method has proven to be an invaluable tool for analysis and design of complex, high performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily usable by researchers at NASA Lewis Research Center.

  12. MPACT Standard Input User s Manual, Version 2.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.; Downar, Thomas; Fitzgerald, Andrew

    The MPACT (Michigan PArallel Characteristics based Transport) code is designed to perform high-fidelity light water reactor (LWR) analysis using whole-core pin-resolved neutron transport calculations on modern parallel-computing hardware. The code consists of several libraries which provide the functionality necessary to solve steady-state eigenvalue problems. Several transport capabilities are available within MPACT, including both 2-D and 3-D Method of Characteristics (MOC). A three-dimensional whole-core solution based on the 2D-1D solution method provides the capability for full-core depletion calculations.

  13. Extreme Scale Computing to Secure the Nation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, D L; McGraw, J R; Johnson, J R

    2009-11-10

    Since the dawn of modern electronic computing in the mid 1940's, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction, as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S., and in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the absence of nuclear testing, a program to: (1) Support a focused, multifaceted program to increase the understanding of the enduring stockpile; (2) Predict, detect, and evaluate potential problems of the aging of the stockpile; (3) Refurbish and re-manufacture weapons and components, as required; and (4) Maintain the science and engineering institutions needed to support the nation's nuclear deterrent, now and in the future'. This program continues to fulfill its national security mission by adding significant new capabilities for producing scientific results through large-scale computational simulation coupled with careful experimentation, including sub-critical nuclear experiments permitted under the CTBT. To develop the computational science and the computational horsepower needed to support its mission, SBSS initiated the Accelerated Strategic Computing Initiative, later renamed the Advanced Simulation & Computing (ASC) program (sidebar: 'History of ASC Computing Program Computing Capability'). The modern 3D computational simulation capability of the ASC program supports the assessment and certification of the current nuclear stockpile through calibration with past underground test (UGT) data. While an impressive accomplishment, continued evolution of national security mission requirements will demand computing resources at a significantly greater scale than we have today.
In particular, continued observance and potential Senate confirmation of the Comprehensive Test Ban Treaty (CTBT) together with the U.S. administration's promise for a significant reduction in the size of the stockpile and the inexorable aging and consequent refurbishment of the stockpile all demand increasing refinement of our computational simulation capabilities. Assessment of the present and future stockpile with increased confidence of the safety and reliability without reliance upon calibration with past or future test data is a long-term goal of the ASC program. This will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies as well as a rapid forensic capability for determining a nuclear weapons design from post-detonation evidence (nuclear counterterrorism).

  14. Development and application of structural dynamics analysis capabilities

    NASA Technical Reports Server (NTRS)

    Heinemann, Klaus W.; Hozaki, Shig

    1994-01-01

    Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed for rendering the code truly multidisciplinary and rather automated in nature. Also, work was performed in pre- and post-processing of engineering analysis data.

  15. Simulation of short period Lg, expansion of three-dimensional source simulation capabilities and simulation of near-field ground motion from the 1971 San Fernando, California, earthquake. Final report 1 Oct 79-30 Nov 80

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, T.C.; Swanger, H.J.; Shkoller, B.

    1981-07-01

    This report summarizes three efforts performed during the past fiscal year. The first of these efforts is a study of the theoretical behavior of the regional seismic phase Lg in various tectonic provinces. Synthetic seismograms are used to determine the sensitivity of Lg to source and medium properties. The primary issues addressed concern the relationship of regional Lg characteristics to the crustal attenuation properties, the comparison of the Lg in many crustal structures and the source depth dependence of Lg. The second effort described is an expansion of the capabilities of the three-dimensional finite difference code TRES. The present capabilities are outlined with comparisons of the performance of the code on three computer systems. The last effort described is the development of an algorithm for simulation of the near-field ground motions from the 1971 San Fernando, California, earthquake. A computer code implementing this algorithm has been provided to the Mission Research Corporation for simulation of the acoustic disturbances from such an earthquake.

  16. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1991-01-01

    The development of a comprehensive fluid-structure interaction capability within a boundary element computer code is described. This new capability is implemented in a completely general manner, so that quite arbitrary geometry, material properties and boundary conditions may be specified. Thus, a single analysis code can be used to run structures-only problems, fluids-only problems, or the combined fluid-structure problem. In all three cases, steady or transient conditions can be selected, with or without thermal effects. Nonlinear analyses can be solved via direct iteration or by employing a modified Newton-Raphson approach. A number of detailed numerical examples are included at the end of these two sections to validate the formulations and to emphasize both the accuracy and generality of the computer code. A brief review of the recent applicable boundary element literature is included for completeness. The fluid-structure interaction facility is discussed. Once again, several examples are provided to highlight this unique capability. A collection of potential boundary element applications that have been uncovered as a result of work related to the present grant is given. For most of those problems, satisfactory analysis techniques do not currently exist.

  17. Rotor cascade shape optimization with unsteady passing wakes using implicit dual time stepping method

    NASA Astrophysics Data System (ADS)

    Lee, Eun Seok

    2000-10-01

    Improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow-field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code having a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry satisfying the user specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving the low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate the unsteady flow-fields. For the unsteady code validation, Stokes' second problem and Poiseuille flow were chosen, and the computed results were compared with analytic solutions. To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with experiments and other research results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as an optimizer and the penalty method was introduced. Each individual's objective function was computed simultaneously by using a 32-processor distributed memory computer. One optimization took about four days.
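    A hedged sketch of the optimizer side of such a study is shown below: a small genetic algorithm whose fitness adds a penalty term for violating the fixed mass-flow constraint. The `evaluate_cascade` function here is a pure placeholder; in the actual work each evaluation would be a full unsteady Navier-Stokes solution distributed over the processors of a parallel machine.

    ```python
    # Genetic algorithm with a penalty method over a handful of shape parameters.
    import random

    def evaluate_cascade(shape_params):
        """Placeholder returning (total_pressure_loss, mass_flow_error)."""
        loss = sum((p - 0.3) ** 2 for p in shape_params)
        mass_flow_error = abs(sum(shape_params) - 1.0)
        return loss, mass_flow_error

    def genetic_optimize(n_params=4, pop_size=20, generations=50, penalty=10.0):
        pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]

        def fitness(ind):
            loss, mdot_err = evaluate_cascade(ind)
            return loss + penalty * mdot_err          # penalty enforces the fixed mass flow

        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]            # keep the better half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_params)
                child = a[:cut] + b[cut:]             # one-point crossover
                i = random.randrange(n_params)
                child[i] += random.gauss(0.0, 0.05)   # small mutation
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    print(genetic_optimize())
    ```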

  18. A Computational Method for Determining the Equilibrium Composition and Product Temperature in a LH2/LOX Combustor

    NASA Technical Reports Server (NTRS)

    Sozen, Mehmet

    2003-01-01

    In what follows, the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) under the chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics, are described. The modular FORTRAN code, developed as a subroutine that can be incorporated into any flow network code with little effort, has been successfully implemented in GFSSP, as preliminary runs indicate. The code provides the capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
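    The underlying idea can be illustrated by minimizing the mixture Gibbs free energy subject to elemental balance. The species set, standard-state Gibbs energies, and fixed temperature below are illustrative placeholders rather than the actual LH2/LOX data or the first/second-law iteration used in the subroutine; a general-purpose optimizer stands in for that scheme.

    ```python
    # Equilibrium composition by constrained Gibbs free energy minimization (sketch).
    import numpy as np
    from scipy.optimize import minimize

    R, T = 8.314, 3000.0                        # J/(mol K); placeholder temperature in K
    species = ["H2", "O2", "H2O"]
    g0 = np.array([-1.2e5, -1.5e5, -6.0e5])     # placeholder standard-state Gibbs energies, J/mol
    A = np.array([[2.0, 0.0, 2.0],              # element balance rows: H, O
                  [0.0, 2.0, 1.0]])
    b = np.array([4.0, 2.0])                    # elemental moles from, e.g., 2 mol H2 + 1 mol O2

    def gibbs(n):
        n = np.maximum(n, 1e-12)                # keep the logarithms defined
        return np.sum(n * (g0 + R * T * np.log(n / n.sum())))

    result = minimize(gibbs, x0=np.array([1.0, 0.5, 1.0]), method="SLSQP",
                      bounds=[(1e-12, None)] * len(species),
                      constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
    print(dict(zip(species, result.x)))
    ```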

  19. Additions and improvements to the high energy density physics capabilities in the FLASH code

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Flocke, N.; Graziani, C.; Tzeferacos, P.; Weide, K.

    2016-10-01

    FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures, and has a broad user base. Extensive high energy density physics (HEDP) capabilities have been added to FLASH to make it an open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. In particular, we showcase the ability of FLASH to simulate the Faraday Rotation Measure produced by the presence of magnetic fields; and proton radiography, proton self-emission, and Thomson scattering diagnostics with and without the presence of magnetic fields. We also describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under Grant PHY-0903997.

  20. Computational methods for coupling microstructural and micromechanical materials response simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  1. Application of CFD codes to the design and development of propulsion systems

    NASA Technical Reports Server (NTRS)

    Lord, W. K.; Pickett, G. F.; Sturgess, G. J.; Weingold, H. D.

    1987-01-01

    The internal flows of aerospace propulsion engines have certain common features that are amenable to analysis through Computational Fluid Dynamics (CFD) computer codes. Although the application of CFD to engineering problems in engines was delayed by the complexities associated with internal flows, many codes with different capabilities are now being used as routine design tools. This is illustrated by examples, taken from the aircraft gas turbine engine, of flows calculated with potential flow, Euler flow, parabolized Navier-Stokes, and Navier-Stokes codes. Likely future directions of CFD applied to engine flows are described, and current barriers to continued progress are highlighted. The potential importance of the Numerical Aerodynamic Simulator (NAS) to resolution of these difficulties is suggested.

  2. Enhancement of the Probabilistic CEramic Matrix Composite ANalyzer (PCEMCAN) Computer Code

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin

    2000-01-01

    This report represents the final technical report for Order No. C-78019-J entitled "Enhancement of the Probabilistic Ceramic Matrix Composite Analyzer (PCEMCAN) Computer Code." The scope of the enhancement relates to including the probabilistic evaluation of the D-Matrix terms in the MAT2 and MAT9 material property cards (available in the CEMCAN code) for MSC/NASTRAN. Technical activities performed during the time period of June 1, 1999 through September 3, 1999 have been summarized, and the final version of the enhanced PCEMCAN code and revisions to the User's Manual are delivered along with it. Discussions related to the performed activities were held with the NASA Project Manager during the performance period. The enhanced capabilities have been demonstrated using sample problems.

  3. Computational fluid dynamics - The coming revolution

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1982-01-01

    The development of aerodynamic theory is traced from the days of Aristotle to the present, with the next stage in computational fluid dynamics dependent on superspeed computers for flow calculations. Additional attention is given to the history of numerical methods inherent in writing computer codes applicable to viscous and inviscid analyses for complex configurations. The advent of the superconducting Josephson junction is noted to place configurational demands on computer design to avoid limitations imposed by the speed of light, and a Japanese projection of a computer capable of several hundred billion operations/sec is mentioned. The NASA Numerical Aerodynamic Simulator is described, showing capabilities of a billion operations/sec with a memory of 240 million words using existing technology. Near-term advances in fluid dynamics are discussed.

  4. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-01-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code which determines the quasistatic response to external loads or displacements of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for and features of the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  5. Enhancement of the CAVE computer code. [aerodynamic heating package for nose cones and scramjet engine sidewalls

    NASA Technical Reports Server (NTRS)

    Rathjen, K. A.; Burk, H. O.

    1983-01-01

    The computer code CAVE (Conduction Analysis via Eigenvalues) is a convenient and efficient computer code for predicting two dimensional temperature histories within thermal protection systems for hypersonic vehicles. The capabilities of CAVE were enhanced by incorporation of the following features into the code: real gas effects in the aerodynamic heating predictions, geometry and aerodynamic heating package for analyses of cone shaped bodies, input option to change from laminar to turbulent heating predictions on leading edges, modification to account for reduction in adiabatic wall temperature with increase in leading sweep, geometry package for two dimensional scramjet engine sidewall, with an option for heat transfer to external and internal surfaces, print out modification to provide tables of select temperatures for plotting and storage, and modifications to the radiation calculation procedure to eliminate temperature oscillations induced by high heating rates. These new features are described.

  6. Bayesian Methods and Confidence Intervals for Automatic Target Recognition of SAR Canonical Shapes

    DTIC Science & Technology

    2014-03-27

    and DirectX [22]. The CUDA platform was developed by the NVIDIA Corporation to allow programmers access to the computational capabilities of the...were used for the intense repetitive computations. Developing CUDA software requires writing code for specialized compilers provided by NVIDIA and

  7. End-to-end plasma bubble PIC simulations on GPUs

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava

    2017-10-01

    Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.

  8. Improvements to a method for the geometrically nonlinear analysis of compressively loaded stiffened composite panels

    NASA Technical Reports Server (NTRS)

    Stoll, Frederick

    1993-01-01

    The NLPAN computer code uses a finite-strip approach to the analysis of thin-walled prismatic composite structures such as stiffened panels. The code can model in-plane axial loading, transverse pressure loading, and constant through-the-thickness thermal loading, and can account for shape imperfections. The NLPAN code represents an attempt to extend the buckling analysis of the VIPASA computer code into the geometrically nonlinear regime. Buckling mode shapes generated using VIPASA are used in NLPAN as global functions for representing displacements in the nonlinear regime. While the NLPAN analysis is approximate in nature, it is computationally economical in comparison with finite-element analysis, and is thus suitable for use in preliminary design and design optimization. A comprehensive description of the theoretical approach of NLPAN is provided. A discussion of some operational considerations for the NLPAN code is included. NLPAN is applied to several test problems in order to demonstrate new program capabilities, and to assess the accuracy of the code in modeling various types of loading and response. User instructions for the NLPAN computer program are provided, including a detailed description of the input requirements and example input files for two stiffened-panel configurations.

  9. BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package

    NASA Astrophysics Data System (ADS)

    Mitran, Sorin

    2011-04-01

    The BEARCLAW package is a multidimensional, Eulerian AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR, a hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates.

  10. Code White: A Signed Code Protection Mechanism for Smartphones

    DTIC Science & Technology

    2010-09-01

    analogous to computer security is the use of antivirus (AV) software. AV software is a brute force approach to security. The software ...these users, numerous malicious programs have also surfaced. And while smartphones have desktop-like capabilities to execute software, they do not...

  11. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  12. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  13. RELAP-7 Closure Correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Berry, R. A.; Martineau, R. C.

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on the INL’s modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5’s and TRACE’s capabilities and extends their analysis capabilities for all reactor system simulation scenarios. The RELAP-7 code utilizes the well-posed 7-equation two-phase flow model for compressible two-phase flow. Closure models used in the TRACE code have been reviewed and selected to reflect the progress made during the past decades and to provide a basis for the closure correlations implemented in the RELAP-7 code. This document provides a summary of the closure correlations that are currently implemented in the RELAP-7 code. The closure correlations include sub-grid models that describe interactions between the fluids and the flow channel, and interactions between the two phases.

  14. HART-II Acoustic Predictions using a Coupled CFD/CSD Method

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.

    2009-01-01

    This paper documents results to date from the Rotorcraft Acoustic Characterization and Mitigation activity under the NASA Subsonic Rotary Wing Project. The primary goal of this activity is to develop a NASA rotorcraft impulsive noise prediction capability which uses first-principles fluid dynamics and structural dynamics. During this effort, elastic blade motion and co-processing capabilities have been included in a recent version of the computational fluid dynamics (CFD) code. The CFD code is loosely coupled to a computational structural dynamics (CSD) code using new interface codes. The CFD/CSD coupled solution is then used to compute impulsive noise on a plane under the rotor using the Ffowcs Williams-Hawkings solver. This code system is then applied to a range of cases from the Higher Harmonic Aeroacoustic Rotor Test II (HART-II) experiment. For all cases presented, the full experimental configuration (i.e., rotor and wind tunnel sting mount) is used in the coupled CFD/CSD solutions. Results show good correlation between measured and predicted loading and loading time derivative at the only measured radial station. A contributing factor to a typically seen loading mean-value offset between measured and predicted data is examined. Impulsive noise predictions on the measured microphone plane under the rotor compare favorably with measured mid-frequency noise for all cases. Flow visualization of the BL and MN cases shows that vortex structures generated in the prediction method are consistent with measurements. Future application of the prediction method is discussed.

  15. CHEETAH: A fast thermochemical code for detonation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.E.

    1993-11-01

    For more than 20 years, TIGER has been the benchmark thermochemical code in the energetic materials community. TIGER has been widely used because it gives good detonation parameters in a very short period of time. Despite its success, TIGER is beginning to show its age. The program's chemical equilibrium solver frequently crashes, especially when dealing with many chemical species. It often fails to find the C-J point. Finally, there are many inconveniences for the user stemming from the program's roots in pre-modern FORTRAN. These inconveniences often lead to mistakes in preparing input files and thus erroneous results. We are producing a modern version of TIGER, which combines the best features of the old program with new capabilities, better computational algorithms, and improved packaging. The new code, which will evolve out of TIGER in the next few years, will be called "CHEETAH." Many of the capabilities that will be put into CHEETAH are inspired by the thermochemical code CHEQ. The new capabilities of CHEETAH are: calculation of trace levels of chemical compounds for environmental analysis; a kinetics capability, with which CHEETAH will predict chemical compositions as a function of time given individual chemical reaction rates (initial application: carbon condensation); incorporation of partial reactions; and a basis in computer-optimized JCZ3 and BKW parameters. These parameters will be fit to over 20 years of data collected at LLNL. We will run CHEETAH thousands of times to determine the best possible parameter sets. CHEETAH will fit C-J data to JWLs, and also predict full-wall and half-wall cylinder velocities.

  16. An evaluation of TRAC-PF1/MOD1 computer code performance during posttest simulations of Semiscale MOD-2C feedwater line break transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.; Watkins, J.C.

    This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.

  17. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport codes it is used with and is connected to them through a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA, and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
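
    The scheduling and load-balancing idea described above can be sketched as a master/worker loop. The example below uses Python and mpi4py for brevity rather than the C++/MPI module the abstract describes; task payloads are dummies and the checkpoint facility is omitted.

```python
from mpi4py import MPI

TAG_WORK, TAG_STOP = 1, 2

def master(comm, tasks):
    status, pending, results = MPI.Status(), list(tasks), []
    outstanding = 0
    for dest in range(1, comm.Get_size()):            # seed or dismiss each worker
        if pending:
            comm.send(pending.pop(), dest=dest, tag=TAG_WORK)
            outstanding += 1
        else:
            comm.send(None, dest=dest, tag=TAG_STOP)
    while outstanding:
        results.append(comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status))
        outstanding -= 1
        if pending:                                   # dynamic load balancing
            comm.send(pending.pop(), dest=status.Get_source(), tag=TAG_WORK)
            outstanding += 1
        else:
            comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
    return results

def worker(comm):
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            return
        comm.send(task * task, dest=0)                # stand-in for a transport batch

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        print(master(comm, list(range(16))))
    else:
        worker(comm)
```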

  18. Numerical simulation of turbulent jet noise, part 2

    NASA Technical Reports Server (NTRS)

    Metcalfe, R. W.; Orszag, S. A.

    1976-01-01

    Results on the numerical simulation of jet flow fields were used to study the radiated sound field, and in addition, to extend and test the capabilities of the turbulent jet simulation codes. The principal result of the investigation was the computation of the radiated sound field from a turbulent jet. In addition, the computer codes were extended to account for the effects of compressibility and eddy viscosity, and the treatment of the nonlinear terms of the Navier-Stokes equations was modified so that they can be computed in a semi-implicit way. A summary of the flow model and a description of the numerical methods used for its solution are presented. Calculations of the radiated sound field are reported. In addition, the extensions that were made to the fundamental dynamical codes are described. Finally, the current state-of-the-art for computer simulation of turbulent jet noise is summarized.

  19. RELAP-7 Software Verification and Validation Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Curtis L.; Choi, Yong-Joon; Zou, Ling

    This INL plan comprehensively describes the software for RELAP-7 and documents the software, interface, and software design requirements for the application. The plan also describes the testing-based software verification and validation (SV&V) process: a set of specially designed software models used to test RELAP-7. The RELAP-7 (Reactor Excursion and Leak Analysis Program) code is a nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object-Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability to all reactor system simulation scenarios.

  20. Enhancing the ABAQUS Thermomechanics Code to Simulate Steady and Transient Fuel Rod Behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. Williamson; D. A. Knoll

    2009-09-01

    A powerful multidimensional fuels performance capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. The various modeling capabilities are demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multi-pellet fuel rod, during both steady and transient operation. Computational results demonstrate the importance of a multidimensional, fully coupled thermomechanics treatment. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermo-mechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.
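
    As an illustration of the temperature- and burnup-dependent property models listed above, the sketch below evaluates a UO2 thermal conductivity of the familiar 1/(A + B*T) form with a crude burnup penalty; the coefficients are representative round numbers and are not the correlations implemented in the enhanced ABAQUS capability.

```python
def uo2_thermal_conductivity(temp_k: float, burnup_mwd_kgu: float = 0.0) -> float:
    """Illustrative UO2 thermal conductivity (W/m-K).

    Phonon term of the common 1/(A + B*T) form plus a crude linear
    degradation with burnup; all coefficients are representative only.
    """
    a = 0.0452                 # m-K/W, illustrative
    b = 2.46e-4                # m/W, illustrative
    k_fresh = 1.0 / (a + b * temp_k)
    degradation = 1.0 / (1.0 + 0.02 * burnup_mwd_kgu)   # assumed burnup penalty
    return k_fresh * degradation
```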

  1. User's Guide for ENSAERO_FE Parallel Finite Element Solver

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.; Guruswamy, Guru P.

    1999-01-01

    A high fidelity parallel static structural analysis capability is created and interfaced to the multidisciplinary analysis package ENSAERO-MPI of Ames Research Center. This new module replaces ENSAERO's lower fidelity simple finite element and modal modules. Full aircraft structures may be more accurately modeled using the new finite element capability. Parallel computation is performed by breaking the full structure into multiple substructures. This approach is conceptually similar to ENSAERO's multizonal fluid analysis capability. The new substructure code is used to solve the structural finite element equations for each substructure in parallel. NASTRAN/COSMIC is utilized as a front end for this code. Its full library of elements can be used to create an accurate and realistic aircraft model. It is used to create the stiffness matrices for each substructure. The new parallel code then uses an iterative preconditioned conjugate gradient method to solve the global structural equations for the substructure boundary nodes.
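
    The boundary-node solve mentioned above can be illustrated with a generic Jacobi-preconditioned conjugate gradient iteration; this is the textbook algorithm, not the ENSAERO_FE implementation itself.

```python
import numpy as np

def pcg(a_mat, b, tol=1e-8, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for an SPD system A x = b."""
    m_inv = 1.0 / np.diag(a_mat)            # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - a_mat @ x
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        ap = a_mat @ p
        alpha = rz / (p @ ap)
        x += alpha * p
        r -= alpha * ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```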

  2. Analysis of internal flows relative to the space shuttle main engine

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Cooperative efforts between the Lockheed-Huntsville Computational Mechanics Group and the NASA-MSFC Computational Fluid Dynamics staff have resulted in improved capabilities for numerically simulating incompressible flows generic to the Space Shuttle Main Engine (SSME). A well established and documented CFD code was obtained, modified, and applied to laminar and turbulent flows of the type occurring in the SSME Hot Gas Manifold. The INS3D code was installed on the NASA-MSFC CRAY-XMP computer system and is currently being used by NASA engineers. Studies to perform a transient analysis of the FPB were conducted. The COBRA/TRAC code is recommended for simulating the transient flow of oxygen into the LOX manifold. Property data for modifying the code to represent LOX/GOX flow were collected. The ALFA code was developed and recommended for representing the transient combustion in the preburner. These two codes will couple through the transient boundary conditions to simulate the startup and/or shutdown of the fuel preburner. A study, NAS8-37461, is currently being conducted to implement this modeling effort.

  3. Lewis Structures Technology, 1988. Volume 2: Structural Mechanics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Lewis Structures Div. performs and disseminates results of research conducted in support of aerospace engine structures. These results have a wide range of applicability to practitioners of structural engineering mechanics beyond the aerospace arena. The engineering community was familiarized with the depth and range of research performed by the division and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive evaluation, constitutive models and experimental capabilities, dynamic systems, fatigue and damage, wind turbines, hot section technology (HOST), aeroelasticity, structural mechanics codes, computational methods for dynamics, structural optimization, applications of structural dynamics, and structural mechanics computer codes.

  4. Probabilistic evaluation of fuselage-type composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1992-01-01

    A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stress/strain etc., which are the consequences of the random variation (scatter) of the primitive (independent random) variables in the constituent, ply, laminate and structural levels. This methodology is implemented in the IPACS (Integrated Probabilistic Assessment of Composite Structures) computer code. A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequency, displacement, strain and stress are computed. The sensitivity of each primitive (independent random) variable to a given structural response is also identified from the analyses.

  5. Survey of computer programs for heat transfer analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1986-01-01

    An overview is given of the current capabilities of thirty-three computer programs that are used to solve heat transfer problems. The programs considered range from large general-purpose codes with a broad spectrum of capabilities, a large user community, and comprehensive user support (e.g., ABAQUS, ANSYS, EAL, MARC, MITAS II, MSC/NASTRAN, and SAMCEF) to small, special-purpose codes with limited user communities such as ANDES, NTEMP, TAC2D, TAC3D, TEPSA and TRUMP. The majority of the programs use either finite elements or finite differences for the spatial discretization. The capabilities of the programs are listed in tabular form, followed by a summary of the major features of each program. The information presented herein is based on a questionnaire sent to the developers of each program. This information is preceded by brief background material needed for effective evaluation and use of computer programs for heat transfer analysis. The present survey is useful in the initial selection of the programs which are most suitable for a particular application. The final selection of the program to be used should, however, be based on a detailed examination of the documentation and the literature about the program.

  6. Assessment of Spacecraft Systems Integration Using the Electric Propulsion Interactions Code (EPIC)

    NASA Technical Reports Server (NTRS)

    Mikellides, Ioannis G.; Kuharski, Robert A.; Mandell, Myron J.; Gardner, Barbara M.; Kauffman, William J. (Technical Monitor)

    2002-01-01

    SAIC is currently developing the Electric Propulsion Interactions Code 'EPIC', an interactive computer tool that allows the construction of a 3-D spacecraft model, and the assessment of interactions between its subsystems and the plume from an electric thruster. EPIC unites different computer tools to address the complexity associated with the interaction processes. This paper describes the overall architecture and capability of EPIC including the physics and algorithms that comprise its various components. Results from selected modeling efforts of different spacecraft-thruster systems are also presented.

  7. Turbofan Duct Propagation Model

    NASA Technical Reports Server (NTRS)

    Lan, Justin H.; Posey, Joe W. (Technical Monitor)

    2001-01-01

    The CDUCT code utilizes a parabolic approximation to the convected Helmholtz equation in order to efficiently model acoustic propagation in acoustically treated, complex shaped ducts. The parabolic approximation solves one-way wave propagation with a marching method which neglects backward-reflected waves. The derivation of the parabolic approximation is presented. Several code validation cases are given. An acoustic lining design process for an example aft fan duct is discussed. It is noted that the method can efficiently model realistic three-dimensional effects, acoustic lining, and flow within the computational capabilities of a typical computer workstation.
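
    For reference, a textbook form of the convected Helmholtz equation for a uniform axial mean flow of Mach number $M$ (written here for illustration, not quoted from the CDUCT documentation, with the sign of the convective term depending on the assumed time factor) is

    $$(1 - M^2)\,\frac{\partial^2 p}{\partial x^2} + \nabla_\perp^2 p - 2\,\mathrm{i}\,k M\,\frac{\partial p}{\partial x} + k^2 p = 0,$$

    where $k = \omega/c$ and $\nabla_\perp^2$ is the transverse Laplacian. The parabolic approximation factors the axial operator into downstream- and upstream-propagating parts, $(\partial_x - \mathrm{i}\lambda^{+})(\partial_x - \mathrm{i}\lambda^{-})\,p \approx 0$, and retains only the one-way relation $\partial_x p = \mathrm{i}\lambda^{+} p$, which can be marched in $x$.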

  8. TEMPEST: A computer code for three-dimensional analysis of transient fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fort, J.A.

    TEMPEST (Transient Energy Momentum and Pressure Equations Solutions in Three dimensions) is a powerful tool for solving engineering problems in nuclear energy, waste processing, chemical processing, and environmental restoration because it performs and visualizes 3-D, time-dependent computational fluid dynamics and heat transfer analysis. It is a family of codes with two primary versions, an N-Version (available to the public) and a T-Version (not currently available to the public). This handout discusses its capabilities, applications, numerical algorithms, development status, availability, and assistance.

  9. Validation of MCNP6 Version 1.0 with the ENDF/B-VII.1 Cross Section Library for Uranium Metal, Oxide, and Solution Systems on the High Performance Computing Platform Moonlight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Bryan Scott; MacQuigg, Michael Robert; Wysong, Andrew Russell

    In this document, the code MCNP is validated with ENDF/B-VII.1 cross section data under the purview of ANSI/ANS-8.24-2007, for use with uranium systems. MCNP is a computer code based on Monte Carlo transport methods. While MCNP has wide-ranging capability in nuclear transport simulation, this validation is limited to the functionality related to neutron transport and the calculation of criticality parameters such as k-eff.

  10. Thermodynamic equilibrium-air correlations for flowfield applications

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.

    1981-01-01

    Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. A comparison between the postshock results computed by the correlation equations and detailed chemistry calculations is very good. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Comparisons of heating rates computed by the approximate code and a viscous-shock-layer method are good. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.

  11. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The fourth year of technical developments on the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) system for Probabilistic Structural Analysis Methods is summarized. The effort focused on the continued expansion of the Probabilistic Finite Element Method (PFEM) code, the implementation of the Probabilistic Boundary Element Method (PBEM), and the implementation of the Probabilistic Approximate Methods (PAppM) code. The principal focus for the PFEM code is the addition of a multilevel structural dynamics capability. The strategy includes probabilistic loads, treatment of material, geometry uncertainty, and full probabilistic variables. Enhancements are included for the Fast Probability Integration (FPI) algorithms and the addition of Monte Carlo simulation as an alternate. Work on the expert system and boundary element developments continues. The enhanced capability in the computer codes is validated by applications to a turbine blade and to an oxidizer duct.
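
    The Monte Carlo alternative mentioned above can be sketched in a few lines: sample the primitive (independent random) variables, propagate them through a response function, and examine the response distribution and sensitivities. The response function and the distribution parameters below are toy placeholders, not NESSUS models.

```python
import numpy as np

rng = np.random.default_rng(1)

def structural_response(e_mod, thickness, load):
    """Toy response standing in for a finite element solve (illustrative only)."""
    return load / (e_mod * thickness ** 3)

n = 100_000
e_mod = rng.normal(70e9, 0.05 * 70e9, n)         # modulus scatter (made-up CoV)
thick = rng.normal(2e-3, 0.03 * 2e-3, n)         # thickness scatter
load = rng.lognormal(np.log(500.0), 0.10, n)     # load scatter

resp = structural_response(e_mod, thick, load)
print("mean response:", resp.mean())
print("99th percentile:", np.quantile(resp, 0.99))
for name, var in [("E", e_mod), ("t", thick), ("P", load)]:
    # crude sensitivity ranking: correlation of each primitive with the response
    print(name, np.corrcoef(var, resp)[0, 1])
```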

  12. A Numerical Study of the Effects of Curvature and Convergence on Dilution Jet Mixing

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.; Reynolds, R.; White, C.

    1987-01-01

    An analytical program was conducted to assemble and assess a three-dimensional turbulent viscous flow computer code capable of analyzing the flow field in the transition liners of small gas turbine engines. This code is of the TEACH type with hybrid numerics, and uses the power law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. The assessments performed in this study, consistent with results in the literature, showed that in its present form this code is capable of predicting trends and qualitative results. The assembled code was used to perform a numerical experiment to investigate the effects of curvature and convergence in the transition liner on the mixing of single and opposed rows of cool dilution jets injected into a hot mainstream flow.
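
    For context, the power-law differencing named above weights convection against diffusion through the cell Peclet number $P$; in the standard Patankar form (quoted here for illustration, not from the report itself) the east-face coefficient is

    $$a_E = D_e\,A(|P_e|) + \max(-F_e,\,0), \qquad A(|P|) = \max\!\bigl[0,\;(1 - 0.1\,|P|)^5\bigr],$$

    where $F$ is the convective mass flux and $D$ the diffusion conductance across the cell face.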

  14. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirement for real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.

  15. Fluid Flow Investigations within a 37 Element CANDU Fuel Bundle Supported by Magnetic Resonance Velocimetry and Computational Fluid Dynamics

    DOE PAGES

    Piro, M.H.A; Wassermann, F.; Grundmann, S.; ...

    2017-05-23

    The current work presents experimental and computational investigations of fluid flow through a 37 element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution; the measurements are non-intrusive and do not require tracer particles or optical access to the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the Hydra-TH code using implicit large eddy simulation and were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained of the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the flow field, and various turbulent effects, such as recirculation, swirl, and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.

  16. Fluid Flow Investigations within a 37 Element CANDU Fuel Bundle Supported by Magnetic Resonance Velocimetry and Computational Fluid Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piro, M.H.A; Wassermann, F.; Grundmann, S.

    The current work presents experimental and computational investigations of fluid flow through a 37 element CANDU nuclear fuel bundle. Experiments based on Magnetic Resonance Velocimetry (MRV) permit three-dimensional, three-component fluid velocity measurements to be made within the bundle with sub-millimeter resolution; the measurements are non-intrusive and do not require tracer particles or optical access to the flow field. Computational fluid dynamic (CFD) simulations of the foregoing experiments were performed with the Hydra-TH code using implicit large eddy simulation and were in good agreement with experimental measurements of the fluid velocity. Greater understanding has been gained of the evolution of geometry-induced inter-subchannel mixing, the local effects of obstructed debris on the flow field, and various turbulent effects, such as recirculation, swirl, and separation. These capabilities are not available with conventional experimental techniques or thermal-hydraulic codes. Finally, the overall goal of this work is to continue developing experimental and computational capabilities for further investigations that reliably support nuclear reactor performance and safety.

  17. Micromechanics Analysis Code Post-Processing (MACPOST) User Guide. 1.0

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Comiskey, Michele D.; Bednarcyk, Brett A.

    1999-01-01

    As advanced composite materials have gained wider usage, the need for analytical models and computer codes to predict the thermomechanical deformation response of these materials has increased significantly. Recently, a micromechanics technique called the generalized method of cells (GMC) has been developed, which has the capability to fulfill this goal. To provide a framework for GMC, the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) has been developed. As MAC/GMC has been updated, significant improvements have been made to the post-processing capabilities of the code. Through the MACPOST program, which operates directly within the MSC/PATRAN graphical pre- and post-processing package, a direct link between the analysis capabilities of MAC/GMC and the post-processing capabilities of MSC/PATRAN has been established. MACPOST has simplified the production, printing, and exportation of results for unit cells analyzed by MAC/GMC. MACPOST allows different micro-level quantities to be plotted quickly and easily in contour plots. In addition, meaningful data for X-Y plots can be examined. MACPOST thus serves as an important analysis and visualization tool for the macro- and micro-level data generated by MAC/GMC. This report serves as the user's manual for the MACPOST program.

  18. Development and application of numerical techniques for general-relativistic magnetohydrodynamics simulations of black hole accretion

    NASA Astrophysics Data System (ADS)

    White, Christopher Joseph

    We describe the implementation of sophisticated numerical techniques for general-relativistic magnetohydrodynamics simulations in the Athena++ code framework. Improvements over many existing codes include the use of advanced Riemann solvers and of staggered-mesh constrained transport. Combined with considerations for computational performance and parallel scalability, these allow us to investigate black hole accretion flows with unprecedented accuracy. The capability of the code is demonstrated by exploring magnetically arrested disks.
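
    As a reference point for the Riemann-solver machinery mentioned above, the sketch below implements the basic HLL interface flux; Athena++ itself provides more elaborate solvers (e.g., HLLC and HLLD), so this is only the simplest member of the family.

```python
import numpy as np

def hll_flux(u_l, u_r, f_l, f_r, s_l, s_r):
    """Basic HLL approximate Riemann flux at a cell interface.

    u_*: conserved states, f_*: physical fluxes, s_*: fastest left/right
    signal-speed estimates.
    """
    if s_l >= 0.0:
        return f_l                       # entire wave fan moves right
    if s_r <= 0.0:
        return f_r                       # entire wave fan moves left
    return (s_r * f_l - s_l * f_r + s_l * s_r * (u_r - u_l)) / (s_r - s_l)
```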

  19. Assessment of the MHD capability in the ATHENA code using data from the ALEX (Argonne Liquid Metal Experiment) facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1988-10-28

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility. 13 refs., 4 figs., 2 tabs.

  20. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It can provide a list of ASCII codes representing the recognized characters from the monochrome visual field, and it can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be best suited to the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
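
    The outer-totalistic update referred to above depends only on a cell's own state and the sum of its neighbours' states. The sketch below shows a generic binary outer-totalistic step; the specific rule used for character cropping in the paper is not reproduced here, so the example rule is a stand-in.

```python
import numpy as np

def outer_totalistic_step(grid: np.ndarray, rule) -> np.ndarray:
    """One synchronous update of a binary outer-totalistic cellular automaton:
    each new state depends only on the cell's own state and the sum of its
    eight neighbours (periodic boundaries for simplicity)."""
    s = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))                    # neighbourhood sum
    return rule(grid, s).astype(grid.dtype)

# example rule in outer-totalistic form (Conway-style; a stand-in, not the paper's rule)
life = lambda c, s: (s == 3) | ((c == 1) & (s == 2))
grid = (np.random.default_rng(0).random((32, 32)) < 0.3).astype(np.uint8)
grid = outer_totalistic_step(grid, life)
```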

  1. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. The accessibility of the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled, high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and (3) build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.

  2. Extending the imaging volume for biometric iris recognition.

    PubMed

    Narayanswamy, Ramkumar; Johnson, Gregory E; Silveira, Paulo E X; Wach, Hans B

    2005-02-10

    The use of the human iris as a biometric has recently attracted significant interest in the area of security applications. The need to capture an iris without active user cooperation places demands on the optical system. Unlike a traditional optical design, in which a large imaging volume is traded off for diminished imaging resolution and capacity for collecting light, Wavefront Coded imaging is a computational imaging technology capable of expanding the imaging volume while maintaining an accurate and robust iris identification capability. We apply Wavefront Coded imaging to extend the imaging volume of the iris recognition application.

  3. Software Development Processes Applied to Computational Icing Simulation

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Potapczuk, Mark G.; Mellor, Pamela A.

    1999-01-01

    The development of computational icing simulation methods is making the transition from research to commonplace use in design and certification efforts. As such, standards of code management, design validation, and documentation must be adjusted to accommodate the increased expectations of the user community with respect to accuracy, reliability, capability, and usability. This paper discusses these concepts with regard to current and future icing simulation code development efforts as implemented by the Icing Branch of the NASA Lewis Research Center in collaboration with the NASA Lewis Engineering Design and Analysis Division. With the application of the techniques outlined in this paper, the LEWICE ice accretion code has become a more stable and reliable software product.

  4. Upgrades of Two Computer Codes for Analysis of Turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Liou, Meng-Sing

    2005-01-01

    Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation. Both codes were also modified for ease of use in both UNIX and Windows operating systems.

  5. Analytical determination of propeller performance degradation due to ice accretion

    NASA Technical Reports Server (NTRS)

    Miller, T. L.

    1986-01-01

    A computer code has been developed which is capable of computing propeller performance for clean, glaze, or rime iced propeller configurations, thereby providing a mechanism for determining the degree of performance degradation which results from a given icing encounter. The inviscid, incompressible flow field at each specified propeller radial location is first computed using the Theodorsen transformation method of conformal mapping. A droplet trajectory computation then calculates droplet impingement points and airfoil collection efficiency for each radial location, at which point several user-selectable empirical correlations are available for determining the aerodynamic penalties which arise due to the ice accretion. Propeller performance is finally computed using strip analysis for either the clean or iced propeller. In the iced mode, the differential thrust and torque coefficient equations are modified by the drag and lift coefficient increments due to ice to obtain the appropriate iced values. Comparison with available experimental propeller icing data shows good agreement in several cases. The code's capability to properly predict iced thrust coefficient, power coefficient, and propeller efficiency is shown to be dependent on the choice of empirical correlation employed as well as proper specification of radial icing extent.
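
    The strip-analysis step described above can be sketched as a blade-element integration in which iced performance differs from clean performance only through lift and drag increments. The section polars and ice increments below are placeholder inputs, and induced-flow and compressibility corrections are omitted.

```python
import numpy as np

def strip_thrust_torque(r, chord, cl, cd, d_cl_ice, d_cd_ice,
                        rho, v_axial, omega, n_blades):
    """Blade-element (strip) integration of propeller thrust and torque.

    Iced values differ from clean ones only through the lift/drag
    increments; all section data are caller-supplied arrays over radius r.
    """
    u_t = omega * r                                # tangential velocity at each station
    w = np.hypot(v_axial, u_t)                     # local resultant velocity
    phi = np.arctan2(v_axial, u_t)                 # inflow angle
    cl_i, cd_i = cl - d_cl_ice, cd + d_cd_ice      # iced section coefficients
    dT = 0.5 * rho * w**2 * n_blades * chord * (cl_i * np.cos(phi) - cd_i * np.sin(phi))
    dQ = 0.5 * rho * w**2 * n_blades * chord * r * (cl_i * np.sin(phi) + cd_i * np.cos(phi))
    return np.trapz(dT, r), np.trapz(dQ, r)
```

    Passing zero increments recovers the clean propeller; the difference between the clean and iced calls gives the performance degradation.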

  6. Xyce parallel electronic simulator : users' guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  7. An evaluation of a computer code based on linear acoustic theory for predicting helicopter main rotor noise

    NASA Astrophysics Data System (ADS)

    Davis, S. J.; Egolf, T. A.

    1980-07-01

    Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free field conditions. Results of the correlation show that the Farassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.
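
    For reference, the governing relation is the Ffowcs Williams-Hawkings equation, which in its standard form (quoted here for context, not from the report itself) reads

    $$\Box^2 p'(\mathbf{x},t) = \frac{\partial}{\partial t}\bigl[\rho_0 v_n\,\delta(f)\bigr] - \frac{\partial}{\partial x_i}\bigl[\ell_i\,\delta(f)\bigr] + \frac{\partial^2}{\partial x_i\,\partial x_j}\bigl[T_{ij}\,H(f)\bigr],$$

    where $f = 0$ defines the moving blade surface, $v_n$ is its normal velocity, $\ell_i$ the local surface loading, $T_{ij}$ the Lighthill stress tensor, and $\delta$ and $H$ the Dirac and Heaviside functions. The first two source terms give the thickness and loading noise components discussed in the abstract; the quadrupole term is commonly neglected in this class of prediction.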

  8. Code for Multiblock CFD and Heat-Transfer Computations

    NASA Technical Reports Server (NTRS)

    Fabian, John C.; Heidmann, James D.; Lucci, Barbara L.; Ameri, Ali A.; Rigby, David L.; Steinthorsson, Erlendur

    2006-01-01

    The NASA Glenn Research Center General Multi-Block Navier-Stokes Convective Heat Transfer Code, Glenn-HT, has been used extensively to predict heat transfer and fluid flow for a variety of steady gas turbine engine problems. Recently, the Glenn-HT code has been completely rewritten in Fortran 90/95, a more object-oriented language that allows programmers to create code that is more modular and makes more efficient use of data structures. The new implementation takes full advantage of the capabilities of the Fortran 90/95 programming language. As a result, the Glenn-HT code now provides dynamic memory allocation, modular design, and unsteady flow capability. This allows for the heat-transfer analysis of a full turbine stage. The code has been demonstrated for an unsteady inflow condition, and gridding efforts have been initiated for a full turbine stage unsteady calculation. This analysis will be the first to simultaneously include the effects of rotation, blade interaction, film cooling, and tip clearance with recessed tip on turbine heat transfer and cooling performance. Future plans call for the application of the new Glenn-HT code to a range of gas turbine engine problems of current interest to the heat-transfer community. The new unsteady flow capability will allow researchers to predict the effect of unsteady flow phenomena upon the convective heat transfer of turbine blades and vanes. Work will also continue on the development of conjugate heat-transfer capability in the code, where simultaneous solution of convective and conductive heat-transfer domains is accomplished. Finally, advanced turbulence and fluid flow models and automatic gridding techniques are being developed that will be applied to the Glenn-HT code and solution process.

  9. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system components at each iterative step, and the objective is to minimize the residuals of the mass balance equations. Several numerical advantages are achieved through this simplification. In particular, computational expense is reduced and the rate of convergence is enhanced. Furthermore, the software has demonstrated the ability to solve systems involving as many as 118 component elements. An early version of the code has already been integrated into the Advanced Multi-Physics (AMP) code under development by the Oak Ridge National Laboratory, Los Alamos National Laboratory, Idaho National Laboratory and Argonne National Laboratory. Keywords: Engineering, Nuclear -- 0552, Engineering, Material Science -- 0794, Chemistry, Mathematics -- 0405, Computer Science -- 0984
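
    The constrained minimization the abstract refers to can be written schematically (standard Gibbs energy minimization, not the solver's exact formulation) as

    $$\min_{n_i \ge 0}\; G = \sum_i n_i\,\mu_i, \qquad \mu_i = \mu_i^{\circ} + R T \ln a_i, \qquad \text{subject to}\;\; \sum_i a_{ij}\, n_i = b_j \;\;\text{for every element } j,$$

    where $n_i$ are the species mole numbers, $a_{ij}$ the number of atoms of element $j$ in species $i$, and $b_j$ the total amount of element $j$ in the system. Estimating the element chemical potentials at each iteration, as described above, reduces the problem to driving the mass-balance residuals to zero.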

  10. Additions and improvements to the high energy density physics capabilities in the FLASH code

    NASA Astrophysics Data System (ADS)

    Lamb, D.; Bogale, A.; Feister, S.; Flocke, N.; Graziani, C.; Khiar, B.; Laune, J.; Tzeferacos, P.; Walker, C.; Weide, K.

    2017-10-01

    FLASH is an open-source, finite-volume Eulerian, spatially-adaptive radiation magnetohydrodynamics code that has the capability to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures and has a broad user base. Extensive high energy density physics (HEDP) capabilities exist in FLASH, which make it a powerful open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. We describe several non-ideal MHD capabilities that are being added to FLASH, including the Hall and Nernst effects, implicit resistivity, and a circuit model, which will allow modeling of Z-pinch experiments. We showcase the ability of FLASH to simulate Thomson scattering polarimetry, which measures Faraday rotation due to the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics. Finally, we describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at U. Chicago by DOE NNSA ASC through the Argonne Institute for Computing in Science under FWP 57789; DOE NNSA under NLUF Grant DE-NA0002724; DOE SC OFES Grant DE-SC0016566; and NSF Grant PHY-1619573.

  11. Science and technology review, March 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, R.

    The articles in this month's issue are entitled Site 300's New Contained Firing Facility, Computational Electromagnetics: Codes and Capabilities, Ergonomics Research: Impact on Injuries, and The Linear Electric Motor: Instability at 1,000 g's.

  12. Internet Protocol Over Telemetry Testing for Earth Science Capability Demo Summary

    NASA Technical Reports Server (NTRS)

    Franz, Russ; Pestana, Mark; Bessent, Shedrick; Hang, Richard; Ng, Howard

    2006-01-01

    The development and flight tests described here focused on utilizing existing pulse code modulation (PCM) telemetry equipment to enable on-vehicle networks of instruments and computers to be a simple extension of the ground station network. This capability is envisioned as a necessary component of a global range that supports test and development of manned and unmanned airborne vehicles.

  13. A Comprehensive Validation Approach Using The RAVEN Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J

    2015-06-01

    The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. The state-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these gaps. In the following sections, the methodology employed and its application to the newly developed thermal-hydraulic code RELAP-7 are reported. The validation approach has been applied to an integral effect experiment representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brochard, J.; Charras, T.; Ghoudi, M.

    Modifications to a computer code for ductile fracture assessment of piping systems with postulated circumferential through-wall cracks under static or dynamic loading are very briefly described. The modifications extend the capabilities of the CASTEM2000 code to the determination of fracture parameters under creep conditions. The main advantage of the approach is that thermal loads can be evaluated as secondary stresses. The code is applicable to piping systems for which crack propagation predictions differ significantly depending on whether thermal stresses are considered as primary or secondary stresses.

  15. TEMPEST code simulations of hydrogen distribution in reactor containment structures. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trent, D.S.; Eyler, L.L.

    The mass transport version of the TEMPEST computer code was used to simulate hydrogen distribution in geometric configurations relevant to reactor containment structures. Predicted results of Battelle-Frankfurt hydrogen distribution tests 1 to 6, and 12 are presented. Agreement between predictions and experimental data is good. Best agreement is obtained using the k-epsilon turbulence model in TEMPEST in flow cases where turbulent diffusion and stable stratification are dominant mechanisms affecting transport. The code's general analysis capabilities are summarized.

  16. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal deicing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.

  17. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slips. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter in simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by running a sensitivity analysis that shows that an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
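
    The sequential coupling described above can be sketched as a loop in which each flow step hands a pressure field to the geomechanics step, which converts it to effective stress and fault slip. The callables below are hypothetical stand-ins for the TOUGH2 and PyLith calls made through the actual interface code, and the Biot coefficient appears only to make the effective-stress update explicit.

```python
def sequential_coupling(flow_step, mech_step, p0, sigma0, t_end, dt, biot_alpha=1.0):
    """Sequentially coupled hydro-geomechanics sketch.

    Each flow step advances the pore pressure, the effective stress is
    updated as sigma - alpha * p, and the geomechanics step returns the
    new stress state and the fault slip for that interval.
    """
    t, p, sigma = 0.0, p0, sigma0
    history = []
    while t < t_end:
        p = flow_step(p, dt)                       # advance pore pressure
        sigma_eff = sigma - biot_alpha * p         # effective stress update
        sigma, slip = mech_step(sigma_eff, dt)     # quasi-static mechanics and fault slip
        history.append((t, slip))
        t += dt
    return history
```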

  18. Session on High Speed Civil Transport Design Capability Using MDO and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Rehder, Joe

    2000-01-01

    Since the inception of CAS in 1992, NASA Langley has been conducting research into applying multidisciplinary optimization (MDO) and high performance computing toward reducing aircraft design cycle time. The focus of this research has been the development of a series of computational frameworks and associated applications that increased in capability, complexity, and performance over time. The culmination of this effort is an automated high-fidelity analysis capability for a high speed civil transport (HSCT) vehicle installed on a network of heterogeneous computers with a computational framework built using Common Object Request Broker Architecture (CORBA) and Java. The main focus of the research in the early years was the development of the Framework for Interdisciplinary Design Optimization (FIDO) and associated HSCT applications. While the FIDO effort was eventually halted, work continued on HSCT applications of ever increasing complexity. The current application, HSCT4.0, employs high-fidelity CFD and FEM analysis codes. For each analysis cycle, the vehicle geometry and computational grids are updated using new values for design variables. Processes for aeroelastic trim, loads convergence, displacement transfer, stress and buckling, and performance have been developed. In all, a total of 70 processes are integrated in the analysis framework. Many of the key processes include automatic differentiation capabilities to provide sensitivity information that can be used in optimization. A software engineering process was developed to manage this large project. Defining the interactions among 70 processes turned out to be an enormous, but essential, task. A formal requirements document was prepared that defined data flow among processes and subprocesses. A design document was then developed that translated the requirements into actual software design. A validation program was defined and implemented to ensure that codes integrated into the framework produced the same results as their standalone counterparts. Finally, a Commercial Off the Shelf (COTS) configuration management system was used to organize the software development. A computational environment, CJOpt, based on the Common Object Request Broker Architecture, CORBA, and the Java programming language has been developed as a framework for multidisciplinary analysis and optimization. The environment exploits the parallelism inherent in the application and distributes the constituent disciplines on machines best suited to their needs. In CJOpt, a discipline code is "wrapped" as an object. An interface to the object identifies the functionality (services) provided by the discipline, defined in Interface Definition Language (IDL) and implemented using Java. The results of using the HSCT4.0 capability are described. A summary of lessons learned is also presented. The use of some of the processes, codes, and techniques by industry is highlighted. The application of the methodology developed in this research to other aircraft is described. Finally, we show how the experience gained is being applied to entirely new vehicles, such as the Reusable Space Transportation System. Additional information is contained in the original.
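
    The "wrap each discipline behind a common interface, then distribute it" idea can be suggested with a loose, language-shifted analogy. The sketch below is not the CJOpt CORBA/IDL implementation; the discipline names, placeholder models, and the use of a Python process pool to stand in for distribution across machines are all hypothetical.

```python
# Loose analogy (not the CJOpt/CORBA implementation): each discipline code is
# "wrapped" behind a common interface, and an executive dispatches analyses
# to workers. All names and models here are hypothetical.

from concurrent.futures import ProcessPoolExecutor

class Discipline:
    """Common interface that every wrapped discipline exposes."""
    name = "base"
    def analyze(self, design_variables):
        raise NotImplementedError

class Aerodynamics(Discipline):
    name = "aerodynamics"
    def analyze(self, dv):
        return {"lift_to_drag": 8.0 + 0.1 * dv["wing_sweep"]}    # placeholder model

class Structures(Discipline):
    name = "structures"
    def analyze(self, dv):
        return {"wing_mass": 40000.0 - 50.0 * dv["wing_sweep"]}  # placeholder model

def run(discipline, dv):
    return discipline.name, discipline.analyze(dv)

if __name__ == "__main__":
    dv = {"wing_sweep": 45.0}
    with ProcessPoolExecutor() as pool:  # stands in for distributing codes across machines
        results = dict(pool.map(run, [Aerodynamics(), Structures()], [dv, dv]))
    print(results)
```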

  19. Impact design methods for ceramic components in gas turbine engines

    NASA Technical Reports Server (NTRS)

    Song, J.; Cuccio, J.; Kington, H.

    1991-01-01

    Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.

  20. THREED: A computer program for three dimensional transformation of coordinates. [in lunar photo triangulation mapping

    NASA Technical Reports Server (NTRS)

    Wong, K. W.

    1974-01-01

    Program THREED was developed for the purpose of a research study on the treatment of control data in lunar phototriangulation. THREED is the code name of a computer program for performing absolute orientation by the method of three-dimensional projective transformation. It has the capability of performing complete error analysis on the computed transformation parameters as well as the transformed coordinates.
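
    THREED performs absolute orientation by a three-dimensional projective transformation with a full error analysis; as a present-day point of comparison only, a seven-parameter similarity transformation (scale, rotation, translation) between two sets of control points is commonly estimated with an SVD-based, Horn/Umeyama-style solution. The sketch below shows that related calculation, not THREED's algorithm.

```python
import numpy as np

def absolute_orientation(src, dst):
    """Estimate scale s, rotation R, translation t such that dst ~ s * R @ src + t.
    src, dst: (N, 3) arrays of corresponding control-point coordinates."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance of centered points
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t

# Quick self-check with a synthetic transform (scale 2, rotation about z, translation).
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = absolute_orientation(src, dst)
residual = dst - (s * src @ R.T + t)
print(s, np.abs(residual).max())
```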

  1. Evaluation of the constant pressure panel method (CPM) for unsteady air loads prediction

    NASA Technical Reports Server (NTRS)

    Appa, Kari; Smith, Michael J. C.

    1988-01-01

    This paper evaluates the capability of the constant pressure panel method (CPM) code to predict unsteady aerodynamic pressures, lift and moment distributions, and generalized forces for general wing-body configurations in supersonic flow. Stability derivatives are computed and correlated for the X-29 and an Oblique Wing Research Aircraft, and a flutter analysis is carried out for a wing wind tunnel test example. Most results are shown to correlate well with test or published data. Although the emphasis of this paper is on evaluation, an improvement in the CPM code's handling of intersecting lifting surfaces is briefly discussed. An attractive feature of the CPM code is that it shares the basic data requirements and computational arrangements of the doublet lattice method. A unified code to predict unsteady subsonic or supersonic airloads is therefore possible.

  2. The use of the SRIM code for calculation of radiation damage induced by neutrons

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Hamidi, S.; Asadabad, Mohsen Asadi

    2017-12-01

    Materials subjected to neutron irradiation undergo structural changes driven by displacement cascades initiated by nuclear reactions. This study discusses a methodology to compute the primary knock-on atom (PKA) information that leads to radiation damage. A program, AMTRACK, has been developed for assessing the PKA information. This software determines the specifications of recoil atoms (using the PTRAC card of the MCNPX code) as well as the kinematics of the interactions. A deterministic method was used to verify the results of MCNPX+AMTRACK. The SRIM (formerly TRIM) code is capable of computing neutron radiation damage. The PKA information extracted by the AMTRACK program can be used as input to the SRIM code for systematic analysis of primary radiation damage. Radiation damage to the reactor pressure vessel of the Bushehr Nuclear Power Plant (BNPP) is then calculated.
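
    Once PKA damage energies are available, a common (and here merely illustrative) way to turn them into displacement counts is the Norgett-Robinson-Torrens (NRT) model. The sketch below applies that standard formula with an assumed displacement threshold energy and made-up PKA energies; it is not the AMTRACK or SRIM implementation.

```python
def nrt_displacements(damage_energy_ev, threshold_ev=40.0):
    """Norgett-Robinson-Torrens estimate of Frenkel pairs from a PKA damage energy (eV).
    threshold_ev is the displacement threshold energy (40 eV is a typical assumed value)."""
    if damage_energy_ev < threshold_ev:
        return 0.0
    if damage_energy_ev < 2.0 * threshold_ev / 0.8:
        return 1.0
    return 0.8 * damage_energy_ev / (2.0 * threshold_ev)

# Total displacements from a list of PKA damage energies (eV); the values below
# are illustrative only, standing in for energies extracted from track output.
pka_damage_energies = [25.0, 300.0, 5.0e3, 2.0e5]
total = sum(nrt_displacements(t) for t in pka_damage_energies)
print(f"NRT displacements: {total:.1f}")
```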

  3. Xyce parallel electronic simulator users guide, version 6.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
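
    The DAE formulation mentioned above separates what a device contributes (residual and Jacobian terms) from how the solver integrates in time. The toy sketch below illustrates that separation for a single linear RC node equation with backward Euler and a Newton loop; the function names and values are hypothetical and are not the Xyce device or solver interfaces.

```python
# Toy illustration of a DAE-style separation between a "device" that supplies
# residual/Jacobian contributions and a time-integration "solver". This mirrors
# the idea only; it is not the Xyce device or solver interface.

def rc_residual(v_new, v_old, dt, R=1e3, C=1e-6, i_src=1e-3):
    """Residual of C*dv/dt + v/R - i_src = 0, discretized with backward Euler."""
    f = C * (v_new - v_old) / dt + v_new / R - i_src
    dfdv = C / dt + 1.0 / R        # Jacobian entry supplied by the device model
    return f, dfdv

def backward_euler(residual, v0=0.0, dt=1e-4, steps=50):
    v = v0
    history = [v]
    for _ in range(steps):
        v_new = v
        for _ in range(20):        # Newton iterations (converges in one for a linear device)
            f, dfdv = residual(v_new, v, dt)
            v_new -= f / dfdv
            if abs(f) < 1e-12:
                break
        v = v_new
        history.append(v)
    return history

trace = backward_euler(rc_residual)
print(f"node voltage after {len(trace) - 1} steps: {trace[-1]:.4f} V (steady state 1.0 V)")
```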

  4. Xyce parallel electronic simulator users' guide, Version 6.0.1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  5. Xyce parallel electronic simulator users guide, version 6.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  6. CFD Activity at Aerojet Related to Seals and Fluid Film Bearing

    NASA Technical Reports Server (NTRS)

    Bache, George E.

    1991-01-01

    Computational Fluid Dynamics (CFD) activities related to seals and fluid film bearings are presented. Among the topics addressed are the following: Aerovisc Numeric and its capabilities; Recent Seal Applications; and Future Code Developments.

  7. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. Methods for constructing robustly good trellis codes for use with sequential decoding were developed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate 1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
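
    The report refers to a formula for the free distance of rate-1/n convolutional codes; as a generic illustration only (not that formula), the free distance can also be obtained by a minimum-weight path search over the encoder trellis. The sketch below assumes a non-catastrophic encoder and uses the well-known constraint-length-7 (171, 133) code, whose free distance is 10, as a check case.

```python
import heapq

def free_distance(generators, memory):
    """Free distance of a rate-1/n convolutional code via a Dijkstra search on the trellis.
    generators: tap masks as ints over the (memory+1)-bit register, with bit i the tap on
    the input delayed by i cycles. Assumes a non-catastrophic encoder."""
    mask = (1 << memory) - 1

    def branch(state, bit):
        reg = (state << 1) | bit                       # bit 0 = current input, bit i = delay-i input
        weight = sum(bin(reg & g).count("1") % 2 for g in generators)
        return reg & mask, weight                      # next state drops the oldest bit

    # Diverge from the all-zero state with input 1, then find the minimum-weight
    # path that first remerges with the all-zero state.
    start, w0 = branch(0, 1)
    best = {start: w0}
    heap = [(w0, start)]
    while heap:
        dist, state = heapq.heappop(heap)
        if state == 0:
            return dist
        if dist > best.get(state, float("inf")):
            continue
        for bit in (0, 1):
            nxt, w = branch(state, bit)
            if dist + w < best.get(nxt, float("inf")):
                best[nxt] = dist + w
                heapq.heappush(heap, (dist + w, nxt))
    return None

# Rate-1/2, constraint length 7 code with generators (171, 133) octal: expects 10.
print(free_distance([0o171, 0o133], memory=6))
```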

  8. Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William

    2000-01-01

    This paper reports the progress being made towards a complete turbopump simulation capability for liquid rocket engines. The Space Shuttle Main Engine (SSME) turbopump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/OpenMP versions of the INS3D code. A computational model of a turbopump has then been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for the SSME turbopump, which contains 136 zones with 35 million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code, will be presented in the final paper.

  9. Nonlinear analysis for high-temperature multilayered fiber composite structures. M.S. Thesis; [turbine blades

    NASA Technical Reports Server (NTRS)

    Hopkins, D. A.

    1984-01-01

    A unique upward-integrated, top-down-structured approach is presented for nonlinear analysis of high-temperature multilayered fiber composite structures. Based on this approach, a special-purpose computer code (nonlinear COBSTRAN) was developed which is specifically tailored for the nonlinear analysis of tungsten-fiber-reinforced superalloy (TFRS) composite turbine blade/vane components of gas turbine engines. Special features of this computational capability include accounting for micro- and macro-heterogeneity, nonlinear (stress-temperature-time dependent) and anisotropic material behavior, and fiber degradation. A demonstration problem is presented to manifest the utility of the upward-integrated top-down-structured approach in general and to illustrate the present capability represented by the nonlinear COBSTRAN code. Preliminary results indicate that nonlinear COBSTRAN provides the means for relating the local nonlinear and anisotropic material behavior of the composite constituents to the global response of the turbine blade/vane structure.

  10. Application of CFD to the analysis and design of high-speed inlets

    NASA Technical Reports Server (NTRS)

    Rose, William C.

    1995-01-01

    Over the past seven years, efforts under the present Grant have been aimed at being able to apply modern Computational Fluid Dynamics to the design of high-speed engine inlets. In this report, a review of previous design capabilities (prior to the advent of functioning CFD) was presented and the example of the NASA 'Mach 5 inlet' design was given as the premier example of the historical approach to inlet design. The philosophy used in the Mach 5 inlet design was carried forward in the present study, in which CFD was used to design a new Mach 10 inlet. An example of an inlet redesign was also shown. These latter efforts were carried out using today's state-of-the-art, full computational fluid dynamics codes applied in an iterative man-in-the-loop technique. The potential usefulness of an automated machine design capability using an optimizer code was also discussed.

  11. Comparison of COBRA III-C and SABRE-1 (wire-wrap version) computational results with steady-state data from a 19-pin internally guard heated sodium-cooled bundle with a six-channel central blockage (THORS Bundle 3C). [LMFBR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dearing, J F; Nelson, W R; Rose, S D

    Computational thermal-hydraulic models of a 19-pin, electrically heated, wire-wrap liquid-metal fast breeder reactor test bundle were developed using two well-known subchannel analysis codes, COBRA III-C and SABRE-1 (wire-wrap version). These two codes use similar subchannel control volumes for the finite difference conservation equations but vary markedly in solution strategy and modeling capability. In particular, the empirical wire-wrap-forced diversion crossflow models are different. Surprisingly, however, crossflow velocity predictions of the two codes are very similar. Both codes show generally good agreement with experimental temperature data from a test in which a large radial temperature gradient was imposed. Differences between data and code results are probably caused by experimental pin bowing, which is presently the limiting factor in validating coded empirical models.

  12. LeRC-HT: NASA Lewis Research Center General Multiblock Navier-Stokes Heat Transfer Code Developed

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Gaugler, Raymond E.

    1999-01-01

    For the last several years, LeRC-HT, a three-dimensional computational fluid dynamics (CFD) computer code for analyzing gas turbine flow and convective heat transfer, has been evolving at the NASA Lewis Research Center. The code is unique in its ability to give a highly detailed representation of the flow field very close to solid surfaces. This is necessary for an accurate representation of fluid heat transfer and viscous shear stresses. The code has been used extensively for both internal cooling passage flows and hot gas path flows--including detailed film cooling calculations, complex tip-clearance gap flows, and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool (at least 35 technical papers have been published relative to the code and its application), but it should be useful for detailed design analysis. We now plan to make this code available to selected users for further evaluation.

  13. Wasatch: An architecture-proof multiphysics development environment using a Domain Specific Language and graph theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Tony; Sutherland, James C.

    To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment – an effort that spans an interdisciplinary team of engineers and computer scientists.
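
    The DAG idea (declare what each quantity depends on, then let the framework order and execute the graph at run time) can be suggested with a small, language-shifted analogy. The sketch below is plain Python with made-up expression names; it is not the Wasatch/ExprLib interface.

```python
# Minimal analogy of DAG-driven algorithm assembly: each "expression" declares
# what it computes and what it depends on; the graph is sorted and executed at
# run time. Names are illustrative, not the Wasatch interface.

from graphlib import TopologicalSorter

expressions = {
    "density":        {"deps": [],                      "fn": lambda v: 1.2},
    "velocity":       {"deps": [],                      "fn": lambda v: 3.0},
    "momentum":       {"deps": ["density", "velocity"], "fn": lambda v: v["density"] * v["velocity"]},
    "kinetic_energy": {"deps": ["momentum", "velocity"],
                       "fn": lambda v: 0.5 * v["momentum"] * v["velocity"]},
}

graph = {name: set(meta["deps"]) for name, meta in expressions.items()}
values = {}
for name in TopologicalSorter(graph).static_order():   # dependency-respecting order
    values[name] = expressions[name]["fn"](values)
print(values)
```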

  14. Wasatch: An architecture-proof multiphysics development environment using a Domain Specific Language and graph theory

    DOE PAGES

    Saad, Tony; Sutherland, James C.

    2016-05-04

    To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment – an effort that spans an interdisciplinary team of engineers and computer scientists.

  15. Numerical studies of the deposition of material released from fixed and rotary wing aircraft

    NASA Technical Reports Server (NTRS)

    Bilanin, A. J.; Teske, M. E.

    1984-01-01

    The computer code AGDISP (AGricultural DISPersal) has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy, and terrain on the deposition pattern. In this report, the equations governing the motion of aerially released particles are developed, including a description of the evaporation model used. A series of case studies using AGDISP is included.
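
    As a deliberately oversimplified illustration of carrying a mean trajectory and a variance forward together, the toy below uses a constant settling speed, a uniform crosswind, and a constant eddy diffusivity (so the lateral variance grows at 2K per unit time). It is not the AGDISP formulation, and every number is illustrative.

```python
# Toy release model: integrate the mean downwind position and height together
# with a crosswind position variance. Not the AGDISP equations; values are made up.

def release(height=3.0, wind=2.0, settling=1.0, diffusivity=0.5, dt=0.01):
    x, z = 0.0, height      # mean downwind position and height (m)
    var = 0.0               # crosswind position variance (m^2)
    t = 0.0
    while z > 0.0:
        x += wind * dt                  # mean advection by the wind
        z -= settling * dt              # mean settling
        var += 2.0 * diffusivity * dt   # variance growth from turbulent diffusion
        t += dt
    return x, var ** 0.5, t

mean_x, sigma_y, t_fall = release()
print(f"mean deposition distance {mean_x:.2f} m, lateral sigma {sigma_y:.2f} m after {t_fall:.1f} s")
```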

  16. Advanced turboprop noise prediction based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Padula, S. L.; Dunn, M. H.

    1987-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code, and considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  17. Current and anticipated uses of thermal-hydraulic codes in Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-07-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.

  18. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code, and considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  19. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE PAGES

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
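
    The core idea (write the physics once, and let overloaded operators propagate extra quantities such as derivatives through it) can be shown without C++ templates. The sketch below uses plain Python operator overloading to carry a forward-mode derivative through an example kernel; it is an analogy only, not the Trilinos implementation.

```python
class Dual:
    """Forward-mode AD value: carries a value and a derivative through overloaded operators."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def residual(x):
    """Example 'simulation' kernel: written once, it yields both values and sensitivities."""
    return 3.0 * x * x + 2.0 * x + 1.0

x = Dual(2.0, 1.0)           # seed dx/dx = 1
r = residual(x)
print(r.val, r.der)          # 17.0 and d/dx = 6x + 2 = 14.0
```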

  20. Programmable Pulse-Position-Modulation Encoder

    NASA Technical Reports Server (NTRS)

    Zhu, David; Farr, William

    2006-01-01

    A programmable pulse-position-modulation (PPM) encoder has been designed for use in testing an optical communication link. The encoder includes a programmable state machine and an electronic code book that can be updated to accommodate different PPM coding schemes. The encoder includes a field-programmable gate array (FPGA) that is programmed to step through the stored state machine and code book and that drives a custom high-speed serializer circuit board that is capable of generating subnanosecond pulses. The stored state machine and code book can be updated by means of a simple text interface through the serial port of a personal computer.
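
    For context, the basic M-ary PPM mapping that such an encoder implements assigns each symbol to one pulse position within an M-slot frame, with a table (here a small dict standing in for the code book) driving the mapping. The sketch below is a software illustration only, not the FPGA/serializer design.

```python
# Basic M-ary pulse-position modulation: each log2(M)-bit symbol selects which of
# M slots in a frame carries the pulse. A dict stands in for the "code book".

M = 16  # 16-ary PPM: 4 bits per symbol
codebook = {symbol: [1 if slot == symbol else 0 for slot in range(M)] for symbol in range(M)}

def ppm_encode(bits):
    """Map a bit string (length a multiple of 4) to a flat list of PPM slot values."""
    frames = []
    for i in range(0, len(bits), 4):
        symbol = int(bits[i:i + 4], 2)
        frames.extend(codebook[symbol])
    return frames

slots = ppm_encode("10110001")
print(slots)   # two 16-slot frames: pulses in slots 11 and 1
```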

  1. A simple code for use in shielding and radiation dosage analyses

    NASA Technical Reports Server (NTRS)

    Wan, C. C.

    1972-01-01

    A simple code for use in analyses of gamma radiation effects in laminated materials is described. Simple, good geometry is assumed so that all multiple collision and scattering events are excluded from consideration. The code is capable of handling laminates of up to six layers; for laminates of more than six layers, the same code may be used to incorporate two additional layers at a time, making use of punched-tape outputs from previous computations on all preceding layers. Spectra of the attenuated radiation are obtained as both printed output and punched-tape output, as desired.
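
    In good geometry the uncollided transmission through a laminate is simply the product of per-layer exponential attenuation factors. The sketch below evaluates exp(-sum(mu_i * t_i)) for a stack of layers; the attenuation coefficients and thicknesses are illustrative and are not taken from the report.

```python
import math

def uncollided_fraction(layers):
    """Good-geometry attenuation through a laminate: exp(-sum(mu * t)) over the layers.
    layers: list of (linear attenuation coefficient in 1/cm, thickness in cm)."""
    return math.exp(-sum(mu * t for mu, t in layers))

# Illustrative values only (not from the report): three layers at a single gamma energy.
laminate = [(0.77, 1.0), (0.33, 2.0), (0.10, 3.0)]
print(f"transmitted (uncollided) fraction: {uncollided_fraction(laminate):.4f}")
```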

  2. An evaluation of a computer code based on linear acoustic theory for predicting helicopter main rotor noise. [CH-53A and S-76 helicopters

    NASA Technical Reports Server (NTRS)

    Davis, S. J.; Egolf, T. A.

    1980-01-01

    Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free-field conditions. Results of the correlation show that the Farassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first six harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.

  3. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  4. Construction and Utilization of a Beowulf Computing Cluster: A User's Perspective

    NASA Technical Reports Server (NTRS)

    Woods, Judy L.; West, Jeff S.; Sulyma, Peter R.

    2000-01-01

    Lockheed Martin Space Operations - Stennis Programs (LMSO) at the John C Stennis Space Center (NASA/SSC) has designed and built a Beowulf computer cluster which is owned by NASA/SSC and operated by LMSO. The design and construction of the cluster are detailed in this paper. The cluster is currently used for Computational Fluid Dynamics (CFD) simulations. The CFD codes in use and their applications are discussed. Examples of some of the work are also presented. Performance benchmark studies have been conducted for the CFD codes being run on the cluster. The results of two of the studies are presented and discussed. The cluster is not currently being utilized to its full potential; therefore, plans are underway to add more capabilities. These include the addition of structural, thermal, fluid, and acoustic Finite Element Analysis codes as well as real-time data acquisition and processing during test operations at NASA/SSC. These plans are discussed as well.

  5. LTCP 2D Graphical User Interface. Application Description and User's Guide

    NASA Technical Reports Server (NTRS)

    Ball, Robert; Navaz, Homayun K.

    1996-01-01

    A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) two-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.

  6. Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)

    2002-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  7. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  8. A Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.

  9. A rocket engine design expert system

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the hydrogen-oxygen coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was employed in the energy release analysis of the combustion chamber and three-dimensional finite-difference analysis of the regenerative cooling channels was used to calculate the pressure drop along the channels and the coolant temperature as it exits the coolant circuit. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.

  10. Exploring Asynchronous Many-Task Runtime Systems toward Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knight, Samuel; Baker, Gavin Matthew; Gamell, Marc

    2015-10-01

    Major exascale computing reports indicate a number of software challenges to meet the dramatic change of system architectures in the near future. While a several-orders-of-magnitude increase in parallelism is the most commonly cited of those, hurdles also include performance heterogeneity of compute nodes across the system, increased imbalance between computational capacity and I/O capabilities, frequent system interrupts, and complex hardware architectures. Asynchronous task-parallel programming models show great promise in addressing these issues, but are not yet fully understood nor developed sufficiently for computational science and engineering application codes. We address these knowledge gaps through quantitative and qualitative exploration of leading candidate solutions in the context of engineering applications at Sandia. In this poster, we evaluate the MiniAero code ported to three leading candidate programming models (Charm++, Legion, and UINTAH) to examine the feasibility of inserting new programming model elements into an existing code base.

  11. Enhancing the ABAQUS thermomechanics code to simulate multipellet steady and transient LWR fuel rod behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. L. Williamson

    A powerful multidimensional fuels performance analysis capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature- and burnup-dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. This new capability is demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multipellet fuel rod, during both steady and transient operation. Comparisons are made between discrete and smeared-pellet simulations. Computational results demonstrate the importance of a multidimensional, multipellet, fully coupled thermomechanical approach. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermomechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.

  12. SPAR improved structure/fluid dynamic analysis capability

    NASA Technical Reports Server (NTRS)

    Oden, J. T.; Pearson, M. L.

    1983-01-01

    The capability of analyzing a coupled dynamic system of flowing fluid and elastic structure was added to the SPAR computer code. A method developed and adopted for use in SPAR utilizes the existing assumed-stress hybrid plane element in SPAR. An operational mode was incorporated in SPAR which provides the capability for analyzing the flow of a two-dimensional, incompressible, viscous fluid within rigid boundaries. Equations were developed to provide for the eventual analysis of the interaction of such fluids with an elastic solid.

  13. Xyce Parallel Electronic Simulator : users' guide, version 2.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont

    2004-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices; a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible range of computing platforms, including serial, shared-memory, and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.

  14. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  15. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BDPSNR is observed for 18 sequences when the target complexity is around 40%.
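
    A minimal illustration of the budget idea (richer candidate mode sets cost more encoding time, so each block picks the largest set that still fits its fair share of the remaining budget) is sketched below. The mode-set names and time costs are hypothetical, and this is not the algorithm of the paper.

```python
# Minimal illustration of budget-driven mode selection (hypothetical cost numbers,
# not the paper's algorithm): each block chooses the most thorough candidate mode
# set whose cost still fits its fair share of the remaining time budget.

MODE_SETS = [                      # ordered cheapest -> most thorough
    ("merge/skip only",  1.0),
    ("+ 2Nx2N inter",    2.5),
    ("+ rectangular PU", 4.0),
    ("full RDO",         6.0),
]

def encode_frame(num_blocks, time_budget):
    remaining = time_budget
    choices = []
    for blocks_left in range(num_blocks, 0, -1):
        per_block = remaining / blocks_left              # fair share for this block
        name, cost = max((m for m in MODE_SETS if m[1] <= per_block),
                         key=lambda m: m[1], default=MODE_SETS[0])
        choices.append(name)
        remaining -= cost
    return choices

print(encode_frame(num_blocks=8, time_budget=20.0))
```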

  16. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

    SOL is computer language geared to solution of design problems. Includes mathematical modeling and logical capabilities of a computer language like FORTRAN; also includes additional power of nonlinear mathematical programming methods at language level. SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. Provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. Implemented on VAX/VMS computer systems. Requires VAX FORTRAN compiler to produce executable program.

  17. Duct flow nonuniformities for Space Shuttle Main Engine (SSME)

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Analytical capabilities for modeling hot gas flow on the fuel side of the Space Shuttle Main Engines are developed. Emphasis is placed on construction and documentation of a computational grid code for modeling an elliptical two-duct version of the fuel side hot gas manifold. Computational results for flow past a support strut in an annular channel are also presented.

  18. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Appendix B: ROBSIM programmer's guide

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

    The purpose of the Robotic Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. The programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With the manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  19. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, appendix B

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.

    1984-01-01

    The purpose of the Robotics Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. This programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With this manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  20. Computational models for the analysis of three-dimensional internal and exhaust plume flowfields

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Delguidice, P. D.

    1977-01-01

    This paper describes computational procedures developed for the analysis of three-dimensional supersonic ducted flows and multinozzle exhaust plume flowfields. The models/codes embodying these procedures cater to a broad spectrum of geometric situations via the use of multiple reference plane grid networks in several coordinate systems. Shock capturing techniques are employed to trace the propagation and interaction of multiple shock surfaces while the plume interface, separating the exhaust and external flows, and the plume external shock are discretely analyzed. The computational grid within the reference planes follows the trace of streamlines to facilitate the incorporation of finite-rate chemistry and viscous computational capabilities. Exhaust gas properties consist of combustion products in chemical equilibrium. The computational accuracy of the models/codes is assessed via comparisons with exact solutions, results of other codes and experimental data. Results are presented for the flows in two-dimensional convergent and divergent ducts, expansive and compressive corner flows, flow in a rectangular nozzle and the plume flowfields for exhausts issuing out of single and multiple rectangular nozzles.

  1. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  2. Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences

    NASA Astrophysics Data System (ADS)

    Schissel, D. P.

    2004-11-01

    The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy to use network available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure so that stakeholders can control their own resources and helps ensure fair use of resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP and included tools for run preparation, submission, monitoring and management. This approach saves user sites from the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.

  3. Development of the 3DHZETRN code for space radiation protection

    NASA Astrophysics Data System (ADS)

    Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert

    Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism capable of evaluation in general geometry is described. Benchmarking will help quantify uncertainty with MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes. Connection of the 3DHZETRN to general geometry will be discussed.

  4. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.; Lee, Chi-Ming (Technical Monitor)

    2001-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this paper, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery for space launch vehicle propulsion systems.

  5. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  6. Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier Stokes Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    2002-01-01

    For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.

  7. Archiving Software Systems: Approaches to Preserve Computational Capabilities

    NASA Astrophysics Data System (ADS)

    King, T. A.

    2014-12-01

    A great deal of effort is made to preserve scientific data, not only because data is knowledge, but because it is often costly to acquire and is sometimes collected under unique circumstances. Another part of the science enterprise is the development of software to process and analyze the data. Developed software is also a large investment and worthy of preservation. However, the long-term preservation of software presents some challenges. Software often requires a specific technology stack to operate. This can include software, operating system, and hardware dependencies. One past approach to preserving computational capabilities is to maintain ancient hardware long past its typical viability. On an archive horizon of 100 years, this is not feasible. Another approach is to archive source code. While this can preserve details of the implementation and algorithms, it may not be possible to reproduce the technology stack needed to compile and run the resulting applications. This future-forward dilemma has a solution. Technology used to create clouds and process big data can also be used to archive and preserve computational capabilities. We explore how basic hardware, virtual machines, containers, and appropriate metadata can be used to preserve computational capabilities and to archive functional software systems. In conjunction with data archives, this provides scientists with both the data and the capability to reproduce the processing and analysis used to generate past scientific results.
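
    As a concrete illustration of the metadata side of this approach, the short Python sketch below records a minimal snapshot of the technology stack (interpreter, operating system, and installed packages) that could be archived alongside a software system. It is a generic example and not part of the work described above; the output file name is arbitrary.

```python
import json
import platform
import sys
from importlib import metadata

# Illustrative sketch: capture a minimal snapshot of the technology stack so an
# archived analysis code can later be rebuilt in a virtual machine or container.
snapshot = {
    "python": sys.version,
    "platform": platform.platform(),
    "machine": platform.machine(),
    "packages": {d.metadata["Name"]: d.version for d in metadata.distributions()},
}

# The output file name is arbitrary; any archive-friendly location would do.
with open("environment_snapshot.json", "w") as f:
    json.dump(snapshot, f, indent=2, sort_keys=True)
print(f"recorded {len(snapshot['packages'])} package versions")
```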

  8. Users manual for updated computer code for axial-flow compressor conceptual design

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.
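
    To make the stage-by-stage sizing idea concrete, the sketch below applies the ideal-gas relation between stage total-temperature rise, an assumed isentropic stage efficiency, and stage pressure ratio, and counts how many stages are needed to reach a target overall pressure ratio. It is a simplified illustration of the kind of relation a conceptual spanline code applies, not an excerpt from CSPAN; all numbers and function names are assumptions.

```python
# Minimal sketch (not CSPAN itself): relate stage energy addition to stage
# pressure ratio for an ideal gas, the kind of relation a spanline
# conceptual-design code applies stage by stage.
GAMMA = 1.4       # ratio of specific heats for air (assumed)

def stage_pressure_ratio(t_in, delta_t, eta_isentropic):
    """Stage total-pressure ratio from inlet total temperature, total-temperature
    rise, and an assumed isentropic stage efficiency."""
    return (1.0 + eta_isentropic * delta_t / t_in) ** (GAMMA / (GAMMA - 1.0))

def stages_for_overall_pr(t_in, overall_pr, max_delta_t, eta_isentropic):
    """Count stages needed to reach an overall pressure ratio when energy
    addition per stage is capped by a maximum total-temperature rise."""
    n, pr, t = 0, 1.0, t_in
    while pr < overall_pr:
        pr *= stage_pressure_ratio(t, max_delta_t, eta_isentropic)
        t += max_delta_t
        n += 1
    return n, pr

if __name__ == "__main__":
    n, pr = stages_for_overall_pr(t_in=288.15, overall_pr=20.0,
                                  max_delta_t=45.0, eta_isentropic=0.88)
    print(f"{n} stages give an overall pressure ratio of {pr:.2f}")
```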

  9. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.
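
    The coupling described above integrates the flow equations together with the modal structural equations of motion. The sketch below time-marches a single modal equation m*q'' + c*q' + k*q = Q(t), with the generalized aerodynamic force Q supplied by a placeholder function standing in for the flow solver. It illustrates only the coupling structure, is not ENSAERO code, and all names and values are assumptions.

```python
import numpy as np

# Illustrative sketch only: time-march one structural mode
#   m*q'' + c*q' + k*q = Q(t)
# where Q stands in for the generalized aerodynamic force a CFD solver
# would supply each time step.
def march_mode(m, c, k, q0, qdot0, dt, nsteps, gaf):
    """Semi-implicit (symplectic Euler) integration of one modal equation.
    `gaf(t, q, qdot)` is a stand-in for the aerodynamic force from the flow solver."""
    q, qdot = q0, qdot0
    history = [q]
    for n in range(nsteps):
        t = n * dt
        qddot = (gaf(t, q, qdot) - c * qdot - k * q) / m
        qdot += dt * qddot
        q += dt * qdot
        history.append(q)
    return np.array(history)

# Example: response of a lightly damped mode to a simple oscillatory forcing.
resp = march_mode(m=1.0, c=0.05, k=40.0, q0=0.0, qdot0=0.0, dt=1e-3,
                  nsteps=5000, gaf=lambda t, q, qdot: np.sin(6.0 * t))
print(f"final modal amplitude: {resp[-1]:.4e}")
```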

  10. Lewis Structures Technology, 1988. Volume 3: Structural Integrity Fatigue and Fracture Wind Turbines HOST

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The charter of the Structures Division is to perform and disseminate results of research conducted in support of aerospace engine structures. These results have a wide range of applicability to practitioners of structural engineering mechanics beyond the aerospace arena. The specific purpose of the symposium was to familiarize the engineering structures community with the depth and range of research performed by the division and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive evaluation, constitutive models and experimental capabilities, dynamic systems, fatigue and damage, wind turbines, hot section technology (HOST), aeroelasticity, structural mechanics codes, computational methods for dynamics, structural optimization, applications of structural dynamics, and structural mechanics computer codes.

  11. Validation of High-Speed Turbulent Boundary Layer and Shock-Boundary Layer Interaction Computations with the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Oliver, A. B.; Lillard, R. P.; Blaisdell, G. A.; Lyrintizis, A. S.

    2006-01-01

    The capability of the OVERFLOW code to accurately compute high-speed turbulent boundary layers and turbulent shock-boundary layer interactions is being evaluated. Configurations being investigated include a Mach 2.87 flat plate to compare experimental velocity profiles and boundary layer growth, a Mach 6 flat plate to compare experimental surface heat transfer, a direct numerical simulation (DNS) at Mach 2.25 for turbulent quantities, and several Mach 3 compression ramps to compare computations of shock-boundary layer interactions to experimental laser Doppler velocimetry (LDV) data and hot-wire data. The present paper outlines the study and presents preliminary results for two of the flat plate cases and two small-angle compression corner test cases.

  12. Modeling of the EAST ICRF antenna with ICANT Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin Chengming; Zhao Yanping; Colas, L.

    2007-09-28

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near-fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  13. Modeling of the EAST ICRF antenna with ICANT Code

    NASA Astrophysics Data System (ADS)

    Qin, Chengming; Zhao, Yanping; Colas, L.; Heuraux, S.

    2007-09-01

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near-fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  14. Using Microcomputers to Manage Grants.

    ERIC Educational Resources Information Center

    Joseph, Jonathan L.; And Others

    1982-01-01

    Features of microcomputer systems and software that can be useful in administration of research grants are outlined, including immediacy of reporting, flexibility, accurate balance availability, useful coding, accurate payroll control, and forecasting capabilities. These are contrasted with the less flexible centralized computer operation. (MSE)

  15. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  16. Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields

    NASA Technical Reports Server (NTRS)

    Tannehill, J. C.; Wadawadigi, G.

    1992-01-01

    Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically-reacting (M(sub infinity) = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely-coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed and comparisons were made between the perfect gas, equilibrium air, and the nonequilibrium air results.
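
    The UPS solver described above uses a finite-volume, upwind TVD scheme built on Roe's approximate Riemann solver. As a much-simplified illustration of the flux-limiting idea only (a scalar advection analogue, not the real-gas Roe scheme), the following sketch advances a square pulse with a minmod-limited upwind flux and preserves its sharp fronts without oscillation; all names and parameters are assumptions.

```python
import numpy as np

# Much-simplified scalar analogue of an upwind TVD scheme (illustration of the
# flux-limiting idea only; the UPS code applies Roe's approximate Riemann solver
# to the full gas-dynamic equations with real-gas effects).
def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, cfl):
    """One step of upwind advection (positive wave speed) with a minmod-limited
    second-order correction and periodic boundaries."""
    du = np.roll(u, -1) - u                    # forward differences u_{i+1} - u_i
    slope = minmod(du, np.roll(du, 1))         # limited slope in cell i
    flux = u + 0.5 * (1.0 - cfl) * slope       # flux leaving cell i to the right
    return u - cfl * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)  # square pulse with sharp fronts
for _ in range(100):
    u = tvd_step(u, cfl=0.5)
print(f"min = {u.min():.3f}, max = {u.max():.3f}  (no new extrema expected)")
```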

  17. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes into a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are contributing to make such an ambitious project, of including a state-of-the-art flow analysis code into an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation, and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
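
    The automated differentiation mentioned above produces derivative code mechanically rather than by hand. A minimal forward-mode illustration of the underlying idea, using dual numbers in Python rather than the source-transformation tools the paper relies on, is sketched below; the objective function is a stand-in for a flow-solver output that depends on a design variable.

```python
# Minimal forward-mode automatic differentiation with dual numbers, illustrating
# the idea behind automated-differentiation tools (not the tool used in the paper).
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def objective(x):
    # Stand-in for a flow-solver output that depends on a shape parameter x.
    return 3.0 * x * x + 2.0 * x + 1.0

x = Dual(2.0, 1.0)          # seed the design variable with derivative 1
f = objective(x)
print(f"f = {f.value}, df/dx = {f.deriv}")   # exact values: f = 17, df/dx = 14
```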

  18. University of Washington/ Northwest National Marine Renewable Energy Center Tidal Current Technology Test Protocol, Instrumentation, Design Code, and Oceanographic Modeling Collaboration: Cooperative Research and Development Final Report, CRADA Number CRD-11-452

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driscoll, Frederick R.

    The University of Washington (UW) - Northwest National Marine Renewable Energy Center (UW-NNMREC) and the National Renewable Energy Laboratory (NREL) will collaborate to advance research and development (R&D) of Marine Hydrokinetic (MHK) renewable energy technology, specifically renewable energy captured from ocean tidal currents. UW-NNMREC is endeavoring to establish infrastructure, capabilities and tools to support in-water testing of marine energy technology. NREL is leveraging its experience and capabilities in field testing of wind systems to develop protocols and instrumentation to advance field testing of MHK systems. Under this work, UW-NNMREC and NREL will work together to develop a common instrumentation system and testing methodologies, standards and protocols. UW-NNMREC is also establishing simulation capabilities for MHK turbine and turbine arrays. NREL has extensive experience in wind turbine array modeling and is developing several computer based numerical simulation capabilities for MHK systems. Under this CRADA, UW-NNMREC and NREL will work together to augment single device and array modeling codes. As part of this effort, UW-NNMREC will also work with NREL to run simulations on NREL's high performance computer system.

  19. Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2000-01-01

    An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three dimensional computer code. Simulations of three dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.

  20. HSR combustion analytical research

    NASA Technical Reports Server (NTRS)

    Nguyen, H. Lee

    1992-01-01

    Increasing the pressure and temperature of the engines of a new generation of supersonic airliners increases the emissions of nitrogen oxides (NO(x)) to a level that would have an adverse impact on the Earth's protective ozone layer. In the process of evolving and implementing low emissions combustor technologies, NASA LeRC has pursued a combustion analysis code program to guide combustor design processes, to identify potential concepts of the greatest promise, and to optimize them at low cost, with short turnaround time. The computational analyses are evaluated at actual engine operating conditions. The approach is to upgrade and apply advanced computer programs for gas turbine applications. Efforts were made in further improving the code capabilities for modeling the physics and the numerical methods of solution. Then test cases and measurements from experiments are used for code validation.

  1. Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
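
    The (255,223) Reed-Solomon code corrects up to t = 16 symbol errors, and the word-sync analysis hinges on what happens when that capability is exceeded. The sketch below estimates the probability of exceeding the correction capability under an assumed independent-symbol-error channel; splitting that event into decoding failure versus undetected error requires the code's weight structure, as studied in the paper, and is not attempted here.

```python
from math import comb

# Sketch: probability that a (255,223) Reed-Solomon code (t = 16 correctable
# symbol errors) receives more errors than it can correct, assuming independent
# symbol errors with probability p. This is the event in which the decoder
# either fails or decodes to a wrong codeword; the split between those two
# outcomes depends on the code's weight structure and is not modeled here.
N, T = 255, 16

def prob_exceeds_capability(p):
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(T + 1, N + 1))

for p in (1e-2, 2e-2, 5e-2):
    print(f"p = {p:5.3f}  P(more than {T} symbol errors) = "
          f"{prob_exceeds_capability(p):.3e}")
```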

  2. Task 7: ADPAC User's Manual

    NASA Technical Reports Server (NTRS)

    Hall, E. J.; Topp, D. A.; Delaney, R. A.

    1996-01-01

    The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
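
    The four-stage Runge-Kutta time-marching approach mentioned above advances a semi-discrete finite-volume system dU/dt = -R(U) through several weighted stages per time step. The sketch below shows that update structure applied to a simple upwind-discretized linear advection residual; the coefficients and residual are illustrative stand-ins, not ADPAC's actual scheme.

```python
import numpy as np

# Illustrative four-stage Runge-Kutta time marching of a semi-discrete
# finite-volume system dU/dt = -R(U); coefficients and residual are stand-ins.
RK_COEFFS = (0.25, 1.0 / 3.0, 0.5, 1.0)

def rk4_stage_march(u, residual, dt, nsteps):
    for _ in range(nsteps):
        u0 = u.copy()
        for alpha in RK_COEFFS:
            u = u0 - alpha * dt * residual(u)
    return u

# Example residual: linear advection with first-order upwind differencing and
# periodic boundaries, just to exercise the multi-stage update loop.
def upwind_residual(u, a=1.0, dx=0.01):
    return a * (u - np.roll(u, 1)) / dx

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)
u_new = rk4_stage_march(u, upwind_residual, dt=0.004, nsteps=50)
print(f"peak after marching: {u_new.max():.3f}")
```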

  3. Reacting Multi-Species Gas Capability for USM3D Flow Solver

    NASA Technical Reports Server (NTRS)

    Frink, Neal T.; Schuster, David M.

    2012-01-01

    The USM3D Navier-Stokes flow solver contributed heavily to the NASA Constellation Project (CxP) as a highly productive computational tool for generating the aerodynamic databases for the Ares I and V launch vehicles and Orion launch abort vehicle (LAV). USM3D is currently limited to ideal-gas flows, which are not adequate for modeling the chemistry or temperature effects of hot-gas jet flows. This task was initiated to create an efficient implementation of multi-species gas and equilibrium chemistry into the USM3D code to improve its predictive capabilities for hot jet impingement effects. The goal of this NASA Engineering and Safety Center (NESC) assessment was to implement and validate a simulation capability to handle real-gas effects in the USM3D code. This document contains the outcome of the NESC assessment.

  4. High Performance Object-Oriented Scientific Programming in Fortran 90

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modeling capabilities used for scientific programming are comparably powerful.

  5. Weather Research and Forecasting Model with Vertical Nesting Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-08-01

    The Weather Research and Forecasting (WRF) model with vertical nesting capability is an extension of the WRF model, which is available in the public domain, from www.wrf-model.org. The new code modifies the nesting procedure, which passes lateral boundary conditions between computational domains in the WRF model. Previously, the same vertical grid was required on all domains, while the new code allows different vertical grids to be used on concurrently run domains. This new functionality improves WRF's ability to produce high-resolution simulations of the atmosphere by allowing a wider range of scales to be efficiently resolved and more accurate lateral boundary conditions to be provided through the nesting procedure.
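
    The essential new operation in vertical nesting is mapping a parent-domain column onto the nest's different vertical levels before it is used as a lateral boundary condition. The sketch below shows that interpolation step on an idealized potential-temperature profile; it is an illustration only, not the actual WRF implementation, and all values are assumptions.

```python
import numpy as np

# Minimal sketch of the operation vertical nesting adds: interpolate a
# parent-domain column of a prognostic variable onto the nest's different
# vertical levels before it feeds the lateral boundary condition.
# (Illustration only; the actual WRF procedure is more involved.)
parent_levels = np.linspace(0.0, 12000.0, 40)    # parent vertical grid [m]
nest_levels = np.linspace(0.0, 12000.0, 90)      # finer nest vertical grid [m]

parent_theta = 300.0 + 0.004 * parent_levels     # idealized potential temperature [K]

# np.interp requires ascending coordinates, which both level arrays are here.
nest_theta = np.interp(nest_levels, parent_levels, parent_theta)
print(f"nest boundary column: {nest_theta[0]:.1f} K at the surface, "
      f"{nest_theta[-1]:.1f} K at the model top")
```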

  6. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
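
    Block truncation coding, one of the spatial-domain techniques evaluated, replaces each block with a one-bit-per-pixel mask plus two reconstruction levels chosen to preserve the block mean and standard deviation. A minimal sketch of the encode/decode step for a single block follows; the block contents are arbitrary example values, not SAR data.

```python
import numpy as np

# Minimal block truncation coding (BTC) sketch: each block is replaced by a
# 1-bit mask plus two reconstruction levels chosen to preserve the block mean
# and standard deviation (block size and values are illustrative).
def btc_encode_block(block):
    mean, std = block.mean(), block.std()
    mask = block >= mean                      # 1 bit per pixel
    q = mask.sum()                            # number of "high" pixels
    n = block.size
    if q in (0, n):                           # flat block: one level suffices
        return mask, mean, mean
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return mask, low, high

def btc_decode_block(mask, low, high):
    return np.where(mask, high, low)

block = np.array([[12, 15, 200, 210],
                  [10, 14, 205, 215],
                  [11, 16, 198, 220],
                  [13, 18, 202, 208]], dtype=float)
mask, low, high = btc_encode_block(block)
print(np.round(btc_decode_block(mask, low, high), 1))
```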

  7. Self-assembled software and method of overriding software execution

    DOEpatents

    Bouchard, Ann M.; Osbourn, Gordon C.

    2013-01-08

    A computer-implemented software self-assembled system and method for providing an external override and monitoring capability to dynamically self-assembling software containing machines that self-assemble execution sequences and data structures. The method provides an external override machine that can be introduced into a system of self-assembling machines while the machines are executing such that the functionality of the executing software can be changed or paused without stopping the code execution and modifying the existing code. Additionally, a monitoring machine can be introduced without stopping code execution that can monitor specified code execution functions by designated machines and communicate the status to an output device.

  8. Complete analysis of steady and transient missile aerodynamic/propulsive/plume flowfield interactions

    NASA Astrophysics Data System (ADS)

    York, B. J.; Sinha, N.; Dash, S. M.; Hosangadi, A.; Kenzakowski, D. C.; Lee, R. A.

    1992-07-01

    The analysis of steady and transient aerodynamic/propulsive/plume flowfield interactions utilizing several state-of-the-art computer codes (PARCH, CRAFT, and SCHAFT) is discussed. These codes have been extended to include advanced turbulence models, generalized thermochemistry, and multiphase nonequilibrium capabilities. Several specialized versions of these codes have been developed for specific applications. This paper presents a brief overview of these codes followed by selected cases demonstrating steady and transient analyses of conventional as well as advanced missile systems. Areas requiring upgrades include turbulence modeling in a highly compressible environment and the treatment of particulates in general. Recent progress in these areas is highlighted.

  9. User's manual for three dimensional FDTD version B code for scattering from frequency-dependent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Code Version B is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version B code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file, a discussion of radar cross section computations, a discussion of some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
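
    As a minimal illustration of the FDTD update structure that the code implements in three dimensions, the sketch below runs a one-dimensional Yee leapfrog loop in vacuum with a simple additive Gaussian source. It omits the absorbing boundaries and the frequency-dependent material treatment of the actual code, and all parameters are assumptions.

```python
import numpy as np

# Minimal 1-D FDTD (Yee leapfrog) sketch in vacuum with an additive source,
# illustrating the update structure of the method; the actual code adds 3-D
# geometry, boundary treatments, and frequency-dependent materials.
nz, nsteps = 400, 600
ez = np.zeros(nz)          # electric field at integer grid points
hy = np.zeros(nz - 1)      # magnetic field at half grid points
courant = 0.5              # normalized update coefficient (c*dt/dz)

for n in range(nsteps):
    hy += courant * np.diff(ez)                 # H update (half time step)
    ez[1:-1] += courant * np.diff(hy)           # E update (half time step)
    ez[50] += np.exp(-((n - 60) / 15.0) ** 2)   # additive Gaussian pulse source

print(f"peak |Ez| after {nsteps} steps: {np.abs(ez).max():.3f}")
```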

  10. Computations of unsteady multistage compressor flows in a workstation environment

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen L.

    1992-01-01

    High-end graphics workstations are becoming a necessary tool in the computational fluid dynamics environment. In addition to their graphic capabilities, workstations of the latest generation have powerful floating-point-operation capabilities. As workstations become common, they could provide valuable computing time for such applications as turbomachinery flow calculations. This report discusses the issues involved in implementing an unsteady, viscous multistage-turbomachinery code (STAGE-2) on workstations. It then describes work in which the workstation version of STAGE-2 was used to study the effects of axial-gap spacing on the time-averaged and unsteady flow within a 2 1/2-stage compressor. The results included time-averaged surface pressures, time-averaged pressure contours, standard deviation of pressure contours, pressure amplitudes, and force polar plots.

  11. An evaluation of three two-dimensional computational fluid dynamics codes including low Reynolds numbers and transonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Hicks, Raymond M.; Cliff, Susan E.

    1991-01-01

    Full-potential, Euler, and Navier-Stokes computational fluid dynamics (CFD) codes were evaluated for use in analyzing the flow field about airfoil sections operating at Mach numbers from 0.20 to 0.60 and Reynolds numbers from 500,000 to 2,000,000. The potential code (LBAUER) includes weakly coupled integral boundary layer equations for laminar and turbulent flow with simple transition and separation models. The Navier-Stokes code (ARC2D) uses the thin-layer formulation of the Reynolds-averaged equations with an algebraic turbulence model. The Euler code (ISES) includes strongly coupled integral boundary layer equations and advanced transition and separation calculations with the capability to model laminar separation bubbles and limited zones of turbulent separation. The best experiment/CFD correlation was obtained with the Euler code because its boundary layer equations model the physics of the flow better than the other two codes. An unusual reversal of boundary layer separation with increasing angle of attack, following initial shock formation on the upper surface of the airfoil, was found in the experimental data. This phenomenon was not predicted by the CFD codes evaluated.

  12. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.
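
    The postselection idea, stripped of its quantum setting, amounts to keeping only those blocks whose syndrome under a classical error-detecting code is trivial. The toy sketch below applies that filter with the [7,4] Hamming parity-check matrix over GF(2) to noisy bit strings; it illustrates only the classical postselection step, not the stabilizer-state protocol of the paper, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of postselection with a classical error-detecting code:
# keep only blocks whose syndrome is zero. The columns of H are all distinct
# nonzero length-3 vectors, i.e. a [7,4] Hamming parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(word):
    return (H @ word) % 2

codeword = np.zeros(7, dtype=int)            # all-zeros codeword for simplicity
n_trials, p_err = 10_000, 0.02
accepted, accepted_with_error = 0, 0
for _ in range(n_trials):
    noise = (rng.random(7) < p_err).astype(int)
    word = (codeword + noise) % 2
    if not syndrome(word).any():             # postselect on the trivial syndrome
        accepted += 1
        accepted_with_error += int(noise.any())

print(f"accepted {accepted}/{n_trials}; residual-error fraction "
      f"{accepted_with_error / max(accepted, 1):.2e}")
```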

  13. Support for User Interfaces for Distributed Systems

    NASA Technical Reports Server (NTRS)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.

  14. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.

  15. Users manual for program NYQUIST: Liquid rocket nyquist plots developed for use on a PC computer

    NASA Astrophysics Data System (ADS)

    Armstrong, Wilbur C.

    1992-06-01

    The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the NYQUIST code. The capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. This program will handle the following types of elements: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines. The code is too large to compile as one program using Microsoft FORTRAN 5; therefore, the code was broken into two segments: NYQUIST1.FOR and NYQUIST2.FOR. These are compiled separately and then linked together. The final run code is not too large (approximately 344,000 bytes).

  16. Users manual for program NYQUIST: Liquid rocket nyquist plots developed for use on a PC computer

    NASA Technical Reports Server (NTRS)

    Armstrong, Wilbur C.

    1992-01-01

    The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the NYQUIST code. The capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. This program will handle the following types of elements: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines. The code is too large to compile as one program using Microsoft FORTRAN 5; therefore, the code was broken into two segments: NYQUIST1.FOR and NYQUIST2.FOR. These are compiled separately and then linked together. The final run code is not too large (approximately 344,000 bytes).

  17. Off-design computer code for calculating the aerodynamic performance of axial-flow fans and compressors

    NASA Technical Reports Server (NTRS)

    Schmidt, James F.

    1995-01-01

    An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. This code allows for bleed flows and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for the flow loss and deviation are used to model the real flow effects and the off-design code will compute through small reverse flow regions. The input to this off-design code is fully described and a user's example case for a two-stage fan is included with complete input and output data sets. Also, a comparison of the off-design code predictions with experimental data is included, which generally shows good agreement.

  18. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  19. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  20. RAMONA-4B a computer code with three-dimensional neutron kinetics for BWR and SBWR system transient - user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.

    This document is the User's Manual for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR) systems transient code RAMONA-4B. The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, phase-flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code, auxiliary codes, and instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for the steady-state and transient calculations. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.

  1. Pre- and postprocessing techniques for determining goodness of computational meshes

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Westermann, T.; Bass, J. M.

    1993-01-01

    Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.

  2. Computer aided system for parametric design of combination die

    NASA Astrophysics Data System (ADS)

    Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.

    2017-09-01

    In this paper, a computer aided system for parametric design of combination dies is presented. The system is developed using the knowledge based system technique of artificial intelligence. The system is capable of designing combination dies for the production of sheet metal parts having punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD software. The low cost of the proposed system will help die designers in small and medium scale sheet metal industries design combination dies for similar types of products. The proposed system is capable of reducing the design time and effort of die designers for the design of combination dies.

  3. Navier-Stokes computations for circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Pulliam, Thomas H.; Jespersen, Dennis C.; Barth, Timothy J.

    1987-01-01

    Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.

  4. Navier-Stokes computations for circulation controlled airfoils

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Jespersen, D. C.; Barth, T. J.

    1986-01-01

    Navier-Stokes computations of subsonic to transonic flow past airfoils with augmented lift due to rearward jet blowing over a curved trailing edge are presented. The approach uses a spiral grid topology. Solutions are obtained using a Navier-Stokes code which employs an implicit finite difference method, an algebraic turbulence model, and developments which improve stability, convergence, and accuracy. Results are compared against experiments for no jet blowing and moderate jet pressures and demonstrate the capability to compute these complicated flows.

  5. Computational Age Dating of Special Nuclear Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2012-06-30

    This slide show presented an overview of the Constrained Progressive Reversal (CPR) method for computing decays, age dating, and spoof detection. The CPR method is capable of temporal profiling of an SNM sample; precise (compared with a known decay code such as ORIGEN); and easy to implement and analyze on a computer. The use of CPR for age dating and spoof detection was illustrated with real SNM data. If the SNM is pure, CPR may be used to derive its age; if it is mixed, CPR will indicate that it is mixed or spoofed.
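
    As a generic illustration of radiometric age dating (not the CPR method itself), the sketch below solves the standard parent-daughter ingrowth relation for the elapsed time given a measured daughter-to-parent atom ratio, using assumed Pu-241/Am-241 half-lives as example values.

```python
import numpy as np
from scipy.optimize import brentq

# Generic parent-daughter age-dating sketch (illustrative only; not the CPR
# method described above). Given a measured daughter/parent atom ratio and the
# two decay constants, solve the ingrowth relation for the elapsed time.
LN2 = np.log(2.0)
LAM_PARENT = LN2 / 14.329      # Pu-241 decay constant, 1/years (assumed half-life)
LAM_DAUGHTER = LN2 / 432.6     # Am-241 decay constant, 1/years (assumed half-life)

def daughter_parent_ratio(t):
    """Atom ratio N_daughter / N_parent after t years, daughter-free at t = 0."""
    lp, ld = LAM_PARENT, LAM_DAUGHTER
    return lp / (ld - lp) * (1.0 - np.exp((lp - ld) * t))

def age_from_ratio(measured_ratio):
    # The ratio grows monotonically with time, so a bracketing root solve works.
    return brentq(lambda t: daughter_parent_ratio(t) - measured_ratio, 1e-6, 200.0)

print(f"inferred age for a measured ratio of 0.5: {age_from_ratio(0.5):.1f} years")
```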

  6. IMAT (Integrated Multidisciplinary Analysis Tool) user's guide for the VAX/VMS computer

    NASA Technical Reports Server (NTRS)

    Meissner, Frances T. (Editor)

    1988-01-01

    The Integrated Multidisciplinary Analysis Tool (IMAT) is a computer software system for the VAX/VMS computer developed at the Langley Research Center. IMAT provides researchers and analysts with an efficient capability to analyze satellite control systems influenced by structural dynamics. Using a menu-driven executive system, IMAT leads the user through the program options. IMAT links a relational database manager to commercial and in-house structural and controls analysis codes. This paper describes the IMAT software system and how to use it.

  7. Role of IAC in large space systems thermal analysis

    NASA Technical Reports Server (NTRS)

    Jones, G. K.; Skladany, J. T.; Young, J. P.

    1982-01-01

    Computer analysis programs to evaluate critical coupling effects that can significantly influence spacecraft system performance are described. These coupling effects arise from the varied parameters of the spacecraft systems, environments, and forcing functions associated with disciplines such as thermal, structures, and controls. Adverse effects can be expected to significantly impact system design aspects such as structural integrity, controllability, and mission performance. One such needed design analysis capability is a software system that can integrate individual discipline computer codes into a highly user-oriented/interactive-graphics-based analysis capability. The integrated analysis capability (IAC) system can be viewed as: a core framework system which serves as an integrating base whereby users can readily add desired analysis modules and as a self-contained interdisciplinary system analysis capability having a specific set of fully integrated multidisciplinary analysis programs that deal with the coupling of thermal, structures, controls, antenna radiation performance, and instrument optical performance disciplines.

  8. Manned systems utilization analysis (study 2.1). Volume 3: LOVES computer simulations, results, and analyses

    NASA Technical Reports Server (NTRS)

    Stricker, L. T.

    1975-01-01

    The LOVES computer program was employed to analyze the geosynchronous portion of NASA's 1973 automated satellite mission model from 1980 to 1990. The objectives of the analyses were: (1) to demonstrate the capability of the LOVES code to provide the depth and accuracy of data required to support the analyses; and (2) to trade off the concept of space servicing automated satellites composed of replaceable modules against the concept of replacing expendable satellites upon failure. The computer code proved to be an invaluable tool in analyzing the logistic requirements of the various test cases required in the tradeoff. It is indicated that the concept of space servicing offers the potential for substantial savings in the cost of operating automated satellite systems.

  9. Recent developments of the NESSUS probabilistic structural analysis computer program

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.; Torng, T.; Thacker, B.; Riha, D.; Leung, C. P.

    1992-01-01

    The NESSUS probabilistic structural analysis computer program combines state-of-the-art probabilistic algorithms with general purpose structural analysis methods to compute the probabilistic response and the reliability of engineering structures. Uncertainty in loading, material properties, geometry, boundary conditions and initial conditions can be simulated. The structural analysis methods include nonlinear finite element and boundary element methods. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. The scope of the code has recently been expanded to include probabilistic life and fatigue prediction of structures in terms of component and system reliability and risk analysis of structures considering cost of failure. The code is currently being extended to structural reliability considering progressive crack propagation. Several examples are presented to demonstrate the new capabilities.
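
    As a minimal illustration of the probabilistic algorithms named above, the sketch below estimates a failure probability for a simple resistance-minus-load limit state by plain Monte Carlo and by a crude importance-sampling variant that shifts the load distribution toward the failure region. It is not the NESSUS implementation, and all statistics are assumed example values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reliability sketch (not NESSUS): estimate P[g(X) < 0] for a
# simple limit state g = R - S, with resistance R and load S both normal.
def g(r, s):
    return r - s

MU_R, SIG_R = 50.0, 5.0     # resistance statistics (assumed)
MU_S, SIG_S = 30.0, 8.0     # load statistics (assumed)
N = 200_000

# Plain Monte Carlo.
r = rng.normal(MU_R, SIG_R, N)
s = rng.normal(MU_S, SIG_S, N)
pf_mc = np.mean(g(r, s) < 0.0)

# Simple importance sampling: draw the load from a distribution shifted toward
# the failure region and reweight each sample by the density ratio f(s)/q(s).
shift = 15.0
s_is = rng.normal(MU_S + shift, SIG_S, N)
weights = np.exp((-(s_is - MU_S)**2 + (s_is - MU_S - shift)**2) / (2 * SIG_S**2))
pf_is = np.mean((g(r, s_is) < 0.0) * weights)

print(f"plain Monte Carlo     Pf = {pf_mc:.4e}")
print(f"importance sampling   Pf = {pf_is:.4e}")
```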

  10. A CCIR-based prediction model for Earth-Space propagation

    NASA Technical Reports Server (NTRS)

    Zhang, Zengjun; Smith, Ernest K.

    1991-01-01

    At present there is no single 'best way' to predict propagation impairments to an Earth-Space path. However, there is an internationally accepted way, namely that given in the most recent version of CCIR Report 564 of Study Group 5. This paper treats a computer code conforming as far as possible to Report 564. It was prepared for an IBM PS/2 using a 386 chip and for the Macintosh SE or Macintosh II. The code is designed to be easy to write and read, easy to modify, and fast, with strong graphics capability, an adequate set of functions, and dialog and window support. Computer languages considered included the following: (1) Turbo BASIC, (2) Turbo PASCAL, (3) FORTRAN, (4) SMALL TALK, (5) C++, (6) MS SPREADSHEET, (7) MS Excel-Macro, (8) SIMSCRIPT II.5, and (9) WINGZ.
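
    For orientation only, the sketch below shows the general power-law rain-attenuation form that CCIR/ITU-R Earth-space models are built around: a specific attenuation proportional to a power of the rain rate, scaled by an effective (reduced) slant-path length. The coefficient and path values are placeholders, not the values tabulated in Report 564:

      # Illustrative power-law rain attenuation; coefficients are placeholders,
      # NOT the regression values from CCIR Report 564.
      def rain_attenuation_db(rain_rate_mm_h, k, alpha, slant_path_km, reduction):
          """Path attenuation in dB: specific attenuation times effective path length."""
          gamma = k * rain_rate_mm_h ** alpha          # specific attenuation, dB/km
          return gamma * slant_path_km * reduction     # total path attenuation, dB

      # Hypothetical single-frequency, single-elevation case.
      A = rain_attenuation_db(rain_rate_mm_h=42.0,     # rate exceeded 0.01% of the time
                              k=0.03, alpha=1.1,       # placeholder coefficients
                              slant_path_km=5.0, reduction=0.8)
      print(f"predicted rain attenuation ~ {A:.1f} dB (placeholder coefficients)")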

  11. Application of Interface Technology in Progressive Failure Analysis of Composite Panels

    NASA Technical Reports Server (NTRS)

    Sleight, D. W.; Lotts, C. G.

    2002-01-01

    A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.
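
    A minimal sketch of a generic progressive-failure loop of the type described, assuming a simple maximum-stress criterion and stiffness degradation (this is not the COMET-AR implementation; ply properties, load increments, and the knockdown factor are invented):

      import numpy as np

      n_plies = 8
      stiffness = np.full(n_plies, 50.0e3)            # illustrative ply stiffnesses
      strength = np.linspace(400.0, 700.0, n_plies)   # illustrative ply strengths
      failed = np.zeros(n_plies, dtype=bool)
      knockdown = 1.0e-3                              # stiffness reduction on failure

      load = 0.0
      while not failed.all():
          load += 10.0                                # load increment (illustrative units)
          strain = load / stiffness.sum()             # plies share a common axial strain
          stress = stiffness * strain
          newly_failed = (stress > strength) & ~failed
          if newly_failed.any():
              failed |= newly_failed
              stiffness[newly_failed] *= knockdown    # degrade failed plies and continue
      print(f"predicted final-failure load ~ {load:.0f} (illustrative units)")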

  12. Sierra/Solid Mechanics 4.48 User's Guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose

    Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.

  13. Numerical Prediction of Non-Reacting and Reacting Flow in a Model Gas Turbine Combustor

    NASA Technical Reports Server (NTRS)

    Davoudzadeh, Farhad; Liu, Nan-Suey

    2005-01-01

    The three-dimensional, viscous, turbulent, reacting and non-reacting flow characteristics of a model gas turbine combustor operating on air/methane are simulated via an unstructured and massively parallel Reynolds-Averaged Navier-Stokes (RANS) code. This serves to demonstrate the capabilities of the code for design and analysis of real combustor engines. The effects of some design features of combustors are examined. In addition, the computed results are validated against experimental data.

  14. Study of the modifications needed for efficient operation of NASTRAN on the Control Data Corporation STAR-100 computer

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The NASA structural analysis (NASTRAN) computer program is operational on three series of third-generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth-generation computer, namely the Control Data STAR-100, are discussed. The salient features which distinguish the Control Data STAR-100 from third-generation computers are hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are identified as candidates for vector-processing optimization.

  15. Shell stability analysis in a computer aided engineering (CAE) environment

    NASA Technical Reports Server (NTRS)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of 'DISDECO', the Delft Interactive Shell DEsign COde is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling sensitive structures. With this open ended, hierarchical, interactive computer code the user can access from his workstation successively programs of increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from behind the workstation with one of the current generation 2-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  16. Improved Boundary Layer Module (BLM) for the Solid Performance Program (SPP)

    NASA Astrophysics Data System (ADS)

    Coats, D. E.; Cebeci, T.

    1982-03-01

    The requirements for a replacement to the Bartz boundary layer code, the standard method of computing the performance loss due to viscous effects by the solid performance program, were discussed by the propulsion community along with four nationally recognized boundary layer experts. A consensus was reached regarding the preferred features for the analysis of the replacement code. The major points that were agreed upon are: (1) finite difference methods are preferred over integral methods; (2) a single equation eddy viscosity model was considered to be adequate for the purpose of computing performance loss; (3) a variable grid capability in both coordinate directions would be required; (4) a proven finite difference algorithm which is not stability restricted should be used, that is, an implicit numerical scheme would be required; and (5) the replacement code should be able to compute both turbulent and laminar flows. The program should treat mass addition at the wall as well as being able to calculate a stagnation point starting line.
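
    As a sketch of the agreed-upon ingredients, an implicit (stability-unrestricted) finite-difference step on a variable wall-normal grid reduces to a tridiagonal solve. The fragment below is illustrative only and is not the SPP boundary-layer module; the grid stretching, viscosity, and time step are assumptions, and a constant viscosity stands in for the eddy-viscosity model:

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with sub/main/super diagonals a, b, c."""
          n = len(d)
          cp, dp = np.zeros(n), np.zeros(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.zeros(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # Stretched grid clustered near the wall (variable spacing), one implicit
      # backward-Euler diffusion step for the velocity profile.
      n = 41
      eta = np.linspace(0.0, 1.0, n) ** 2           # wall-normal coordinates
      u = eta.copy()                                # initial velocity profile
      nu, dt = 1.0e-3, 0.05                         # constant viscosity, time step

      a = np.zeros(n); b = np.ones(n); c = np.zeros(n); d = u.copy()
      for i in range(1, n - 1):
          hm, hp = eta[i] - eta[i - 1], eta[i + 1] - eta[i]
          a[i] = -nu * dt * 2.0 / (hm * (hm + hp))
          c[i] = -nu * dt * 2.0 / (hp * (hm + hp))
          b[i] = 1.0 - a[i] - c[i]
      d[0], d[-1] = 0.0, 1.0                        # no-slip wall, boundary-layer edge
      u_new = thomas(a, b, c, d)
      print(u_new[:5])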

  17. User's guide for the computer code COLTS for calculating the coupled laminar and turbulent flow over a Jovian entry probe

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graeves, R. A.

    1980-01-01

    A user's guide for a computer code 'COLTS' (Coupled Laminar and Turbulent Solutions) is provided which calculates the laminar and turbulent hypersonic flows with radiation and coupled ablation injection past a Jovian entry probe. Time-dependent viscous-shock-layer equations are used to describe the flow field. These equations are solved by an explicit, two-step, time-asymptotic finite-difference method. Eddy viscosity in the turbulent flow is approximated by a two-layer model. In all, 19 chemical species are used to describe the injection of carbon-phenolic ablator in the hydrogen-helium gas mixture. The equilibrium composition of the mixture is determined by a free-energy minimization technique. A detailed frequency dependence of the absorption coefficient for various species is considered to obtain the radiative flux. The code is written for a CDC-CYBER-203 computer and is capable of providing solutions for ablated probe shapes also.

  18. Cogeneration computer model assessment: Advanced cogeneration research study

    NASA Technical Reports Server (NTRS)

    Rosenberg, L.

    1983-01-01

    Cogeneration computer simulation models were assessed in order to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code which will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

  19. A Three-Dimensional Parallel Time-Accurate Turbopump Simulation Procedure Using Overset Grid System

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2002-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and nonuniform inflows, which will eventually influence system vibration and structural design. In this paper, progress toward the capability of completely simulating the turbopump of a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbo-pump is used as a test case for evaluation of the hybrid MPI/Open-MP and MLP versions of the INS3D code. CAD to solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbo-pump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability are presented along with the performance of parallel versions of the code.

  20. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
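
    The k-eigenvalue calculation MC++ performs can be illustrated, in a highly simplified form, by an analog Monte Carlo power iteration over fission generations. The one-group slab problem below is a toy stand-in, not MC++ itself; the slab width, cross sections, and generation sizes are made-up numbers:

      import numpy as np

      rng = np.random.default_rng(1)
      L = 20.0                                       # slab width (in mean-free-path units)
      sig_t, sig_s, sig_f, nu = 1.0, 0.6, 0.15, 2.4  # total, scatter, fission, yield
      # capture cross section is sig_t - sig_s - sig_f

      n_per_gen, n_gen, n_skip = 2000, 40, 10
      sites = rng.uniform(0.0, L, n_per_gen)         # initial fission-source guess
      k_hist = []

      for gen in range(n_gen):
          bank = []
          for x in sites:
              alive = True
              while alive:
                  mu = rng.uniform(-1.0, 1.0)                   # isotropic direction cosine
                  x = x - np.log(rng.random()) / sig_t * mu     # flight to next collision
                  if x < 0.0 or x > L:                          # leakage ends the history
                      break
                  xi = rng.random() * sig_t
                  if xi < sig_s:                                # scatter: keep tracking
                      continue
                  if xi < sig_s + sig_f:                        # fission: bank progeny
                      n_new = int(nu) + (rng.random() < nu - int(nu))
                      bank.extend([x] * n_new)
                  alive = False                                 # fission or capture ends history
          k_hist.append(len(bank) / len(sites))
          # Resample the fission bank to keep the generation size constant.
          sites = rng.choice(bank, size=n_per_gen, replace=True)

      print(f"k-effective ~ {np.mean(k_hist[n_skip:]):.3f}")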

  1. Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager

    Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009, Cycle 145A through Cycle 151B, was successfully completed during 2012. This major effort supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR Core Safety Analysis Package (CSAP) preparation process, in parallel with the established PDQ-based methodology, beginning late in Fiscal Year 2012. Acquisition of the advanced SERPENT (VTT-Finland) and MC21 (DOE-NR) Monte Carlo stochastic neutronics simulation codes was also initiated during the year and some initial applications of SERPENT to ATRC experiment analysis were demonstrated. These two new codes will offer significant additional capability, including the possibility of full-3D Monte Carlo fuel management support capabilities for the ATR at some point in the future. Finally, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system has been implemented and initial computational results have been obtained. This capability will have many applications as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation.

  2. Light curves for bump Cepheids computed with a dynamically zoned pulsation code

    NASA Technical Reports Server (NTRS)

    Adams, T. F.; Castor, J. I.; Davis, C. G.

    1980-01-01

    The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.

  3. A New Inversion Routine to Produce Vertical Electron-Density Profiles from Ionospheric Topside-Sounder Data

    NASA Technical Reports Server (NTRS)

    Wang, Yongli; Benson, Robert F.

    2011-01-01

    Two software applications have been produced specifically for the analysis of some million digital topside ionograms produced by a recent analog-to-digital conversion effort of selected analog telemetry tapes from the Alouette-2, ISIS-1 and ISIS-2 satellites. One, TOPIST (TOPside Ionogram Scaler with True-height algorithm) from the University of Massachusetts Lowell, is designed for the automatic identification of the topside-ionogram ionospheric-reflection traces and their inversion into vertical electron-density profiles Ne(h). TOPIST also has the capability of manual intervention. The other application, from the Goddard Space Flight Center based on the FORTRAN code of John E. Jackson from the 1960s, is designed as an IDL-based interactive program for the scaling of selected digital topside-sounder ionograms. The Jackson code has also been modified, with some effort, so as to run on modern computers. This modification was motivated by the need to scale selected ionograms from the millions of Alouette/ISIS topside-sounder ionograms that only exist on 35-mm film. During this modification, it became evident that it would be more efficient to design a new code, based on the capabilities of present-day computers, than to continue to modify the old code. Such a new code has been produced and here we will describe its capabilities and compare Ne(h) profiles produced from it with those produced by the Jackson code. The concept of the new code is to assume an initial Ne(h) and derive a final Ne(h) through an iteration process that makes the resulting apparent-height profile fit the scaled values within a certain error range. The new code can be used on the X-, O-, and Z-mode traces. It does not assume any predefined profile shape between two contiguous points, like the exponential rule used in Jackson's program. Instead, Monotone Piecewise Cubic Interpolation is applied in the global profile to keep the monotone nature of the profile, which also ensures better smoothness in the final profile than in Jackson's program. The new code uses the complete refractive index expression for a cold collisionless plasma and can accommodate the IGRF, T96, and other geomagnetic field models.
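
    The interpolation choice described above can be illustrated with a short sketch: a monotone piecewise cubic (PCHIP) interpolant through scaled profile points preserves the monotone character of the topside profile, in contrast to a fixed exponential shape between points. The altitudes and densities below are illustrative values, not real ionogram data:

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      alt_km = np.array([400.0, 600.0, 800.0, 1000.0, 1400.0, 2000.0])   # altitude
      ne_cm3 = np.array([8.0e5, 3.0e5, 1.2e5, 6.0e4, 2.0e4, 7.0e3])      # electron density

      # PCHIP is monotone wherever the data are monotone, so the interpolated
      # topside profile cannot develop spurious wiggles between scaled points.
      profile = PchipInterpolator(alt_km, np.log10(ne_cm3))              # interpolate log-density
      h = np.linspace(400.0, 2000.0, 9)
      print(np.round(10.0 ** profile(h), -2))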

  4. Development of Modern Performance Assessment Tools and Capabilities for Underground Disposal of Transuranic Waste at WIPP

    NASA Astrophysics Data System (ADS)

    Zeitler, T.; Kirchner, T. B.; Hammond, G. E.; Park, H.

    2014-12-01

    The Waste Isolation Pilot Plant (WIPP) has been developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. Containment of TRU waste at the WIPP is regulated by the U.S. Environmental Protection Agency (EPA). The DOE demonstrates compliance with the containment requirements by means of performance assessment (PA) calculations. WIPP PA calculations estimate the probability and consequence of potential radionuclide releases from the repository to the accessible environment for a regulatory period of 10,000 years after facility closure. The long-term performance of the repository is assessed using a suite of sophisticated computational codes. In a broad modernization effort, the DOE has overseen the transfer of these codes to modern hardware and software platforms. Additionally, there is a current effort to establish new performance assessment capabilities through the further development of the PFLOTRAN software, a state-of-the-art massively parallel subsurface flow and reactive transport code. Improvements to the current computational environment will result in greater detail in the final models due to the parallelization afforded by the modern code. Parallelization will allow for relatively faster calculations, as well as a move from a two-dimensional calculation grid to a three-dimensional grid. The result of the modernization effort will be a state-of-the-art subsurface flow and transport capability that will serve WIPP PA into the future. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. This research is funded by WIPP programs administered by the Office of Environmental Management (EM) of the U.S Department of Energy.

  5. FORTRAN Automated Code Evaluation System (faces) system documentation, version 2, mod 0. [error detection codes/user manuals (computer programs)]

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A system is presented which processes FORTRAN-based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
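
    The flavor of this kind of source scanning can be conveyed with a toy scanner (the actual FACES rule set is not reproduced here; the patterns and sample source are invented). It simply flags two constructs that commonly call for extra documentation:

      import re

      SOURCE = """\
            REAL X, Y(10)
            EQUIVALENCE (X, Y(1))
            IF (X .EQ. 0.1) GO TO 20
         20 CONTINUE
      """

      checks = [
          (re.compile(r"\.EQ\.\s*\d*\.\d+", re.IGNORECASE), "floating-point equality test"),
          (re.compile(r"^\s*EQUIVALENCE\b", re.IGNORECASE), "EQUIVALENCE storage overlay"),
      ]

      for lineno, line in enumerate(SOURCE.splitlines(), start=1):
          for pattern, message in checks:
              if pattern.search(line):
                  print(f"line {lineno}: {message}: {line.strip()}")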

  6. Interactive-graphic flowpath plotting for turbine engines

    NASA Technical Reports Server (NTRS)

    Corban, R. R.

    1981-01-01

    An engine cycle program capable of simulating the design and off-design performance of arbitrary turbine engines, and a computer code which, when used in conjunction with the cycle code, can predict the weight of the engines, are described. A graphics subroutine was added to the code to enable the engineer to visualize the designed engine with more clarity by producing an overall view of the designed engine for output on a graphics device using IBM-370 graphics subroutines. In addition, with the engine drawn on a graphics screen, the program allows the user to interactively change the code inputs so that the engine can be redrawn and reweighed. These improvements allow better use of the code in conjunction with the engine cycle program.

  7. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  8. User's manual for two dimensional FDTD version TEA and TMA codes for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Versions TEA and TMA are two dimensional numerical electromagnetic scattering codes based upon the Finite Difference Time Domain Technique (FDTD) first proposed by Yee in 1966. The supplied codes are two versions of our current two dimensional FDTD code set. This manual provides a description of the codes and corresponding results for the default scattering problem. The manual is organized into eleven sections: introduction, Version TEA and TMA code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include files (TEACOM.FOR TMACOM.FOR), a section briefly discussing scattering width computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references and figure titles.
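
    For readers unfamiliar with the underlying method, a minimal free-space 2-D TM (Ez, Hx, Hy) Yee/FDTD update loop is sketched below. It illustrates the technique these manuals document but is not the Penn State code; the grid size, soft Gaussian source, and simple PEC outer boundary are assumptions:

      import numpy as np

      nx, ny, nsteps = 120, 120, 150
      c, dx = 3.0e8, 1.0e-3
      dt = dx / (c * np.sqrt(2.0))                 # 2-D Courant-stable time step
      eps0, mu0 = 8.854e-12, 4.0e-7 * np.pi

      ez = np.zeros((nx, ny))
      hx = np.zeros((nx, ny - 1))
      hy = np.zeros((nx - 1, ny))

      for n in range(nsteps):
          # Update magnetic fields from the curl of Ez.
          hx -= dt / (mu0 * dx) * (ez[:, 1:] - ez[:, :-1])
          hy += dt / (mu0 * dx) * (ez[1:, :] - ez[:-1, :])
          # Update Ez at interior points from the curl of H (PEC on the outer edge).
          ez[1:-1, 1:-1] += dt / (eps0 * dx) * (
              (hy[1:, 1:-1] - hy[:-1, 1:-1]) - (hx[1:-1, 1:] - hx[1:-1, :-1])
          )
          # Soft Gaussian-pulse source at the grid center.
          ez[nx // 2, ny // 2] += np.exp(-((n - 40) / 12.0) ** 2)

      print(f"peak |Ez| after {nsteps} steps: {np.abs(ez).max():.3e}")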

  9. User's manual for two dimensional FDTD version TEA and TMA codes for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Versions TEA and TMA are two dimensional electromagnetic scattering codes based on the Finite Difference Time Domain Technique (FDTD) first proposed by Yee in 1966. The supplied codes are two versions of our current FDTD code set. This manual provides a description of the codes and corresponding results for the default scattering problem. The manual is organized into eleven sections: introduction, Version TEA and TMA code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include files (TEACOM.FOR TMACOM.FOR), a section briefly discussing scattering width computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references, and figure titles.

  10. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation is discussed, and comparison is made with simplified analytic solutions to test numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
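
    A toy sketch of straight-ahead transport marching with constant removal cross sections, in the spirit of the simplifying assumptions noted above (the species list, cross sections, fragmentation yields, and step size are invented, not HZETRN data): primaries attenuate with depth while fragmentation feeds lighter secondaries.

      import numpy as np

      species = ["Fe", "O", "He", "H"]
      sigma = np.array([0.05, 0.03, 0.01, 0.005])        # removal cross sections, 1/(g/cm^2)
      # frag[k, j]: production of species j per removal of species k (illustrative).
      frag = np.array([[0.0, 0.4, 0.8, 1.5],
                       [0.0, 0.0, 0.6, 1.2],
                       [0.0, 0.0, 0.0, 0.9],
                       [0.0, 0.0, 0.0, 0.0]])

      phi = np.array([1.0, 0.0, 0.0, 0.0])               # unit Fe fluence at the boundary
      dx, depth = 0.1, 30.0                              # step and total shield depth, g/cm^2
      for _ in range(int(depth / dx)):
          removal = sigma * phi
          phi = phi - removal * dx + frag.T @ removal * dx
      print(dict(zip(species, np.round(phi, 4))))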

  11. User's manual for three dimensional FDTD version C code for scattering from frequency-independent dielectric and magnetic materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.

  12. User's manual for three dimensional FDTD version D code for scattering from frequency-dependent dielectric and magnetic materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version D is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version D code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMOND.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.

  13. User's manual for three dimensional FDTD version A code for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The Penn State Finite Difference Time Domain (FDTD) Electromagnetic Scattering Code Version A is a three dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain technique. The supplied version of the code is one version of our current three dimensional FDTD code set. The manual provides a description of the code and the corresponding results for the default scattering problem. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version A code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONA.FOR), a section briefly discussing radar cross section (RCS) computations, a section discussing the scattering results, a sample problem setup section, a new problem checklist, references, and figure titles.

  14. User's manual for three dimensional FDTD version C code for scattering from frequency-independent dielectric and magnetic materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version C is a three-dimensional numerical electromagnetic scattering code based on the Finite Difference Time Domain (FDTD) technique. The supplied version of the code is one version of our current three-dimensional FDTD code set. The manual given here provides a description of the code and corresponding results for several scattering problems. The manual is organized into 14 sections: introduction, description of the FDTD method, operation, resource requirements, Version C code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONC.FOR), a section briefly discussing radar cross section computations, a section discussing some scattering results, a new problem checklist, references, and figure titles.

  15. User's manual for three dimensional FDTD version B code for scattering from frequency-dependent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Version B is a three dimensional numerical electromagnetic scattering code based upon the Finite Difference Time Domain Technique (FDTD). The supplied version of the code is one version of our current three dimensional FDTD code set. This manual provides a description of the code and corresponding results for several scattering problems. The manual is organized into fourteen sections: introduction, description of the FDTD method, operation, resource requirements, Version B code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include file (COMMONB.FOR), a section briefly discussing Radar Cross Section (RCS) computations, a section discussing some scattering results, a sample problem setup section, a new problem checklist, references and figure titles.

  16. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux Cluster, as well as allocations on NPACI Supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them so it is not necessary to use the CME system to access these models. However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access these models. In some cases, the CME system also provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes. These codes include seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group. Also, the CME system hosts anelastic wave propagation codes including Kim Olsen's Finite Difference code and Carnegie Mellon's Hercules Finite Element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations using the output of one processing step as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.

  17. Numerical aerodynamic simulation facility preliminary study: Executive study

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A computing system was designed with the capability of providing an effective throughput of one billion floating point operations per second for three dimensional Navier-Stokes codes. The methodology used in defining the baseline design, and the major elements of the numerical aerodynamic simulation facility are described.

  18. Time-Dependent Simulation of Incompressible Flow in a Turbopump Using Overset Grid Approach

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    This paper reports the progress being made towards complete unsteady turbopump simulation capability by using overset grid systems. A computational model of a turbo-pump impeller is used as a test case for the performance evaluation of the MPI, hybrid MPI/Open-MP, and MLP versions of the INS3D code. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Unsteady computations for a turbo-pump, which contains 114 zones with 34.3 million grid points, are performed on Origin 2000 systems at NASA Ames Research Center. The approach taken for these simulations and the performance of the parallel versions of the code are presented.

  19. Off-design performance analysis of MHD generator channels

    NASA Technical Reports Server (NTRS)

    Wilson, D. R.; Williams, T. S.

    1980-01-01

    A computer code for performing parametric design point calculations, and evaluating the off-design performance of MHD generators has been developed. The program is capable of analyzing Faraday, Hall, and DCW channels, including the effect of electrical shorting in the gas boundary layers and coal slag layers. Direct integration of the electrode voltage drops is included. The program can be run in either the design or off-design mode. Details of the computer code, together with results of a study of the design and off-design performance of the proposed ETF MHD generator are presented. Design point variations of pre-heat and stoichiometry were analyzed. The off-design study included variations in mass flow rate and oxygen enrichment.

  20. GEOS. User Tutorials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Pengchen; Settgast, Randolph R.; Johnson, Scott M.

    2014-12-17

    GEOS is a massively parallel, multi-physics simulation application utilizing high performance computing (HPC) to address subsurface reservoir stimulation activities with the goal of optimizing current operations and evaluating innovative stimulation methods. GEOS enables coupling of different solvers associated with the various physical processes occurring during reservoir stimulation in unique and sophisticated ways, adapted to various geologic settings, materials and stimulation methods. Developed at the Lawrence Livermore National Laboratory (LLNL) as a part of a Laboratory-Directed Research and Development (LDRD) Strategic Initiative (SI) project, GEOS represents the culmination of a multi-year ongoing code development and improvement effort that has leveraged existing code capabilities and staff expertise to design new computational geosciences software.

  1. Design of hat-stiffened composite panels loaded in axial compression

    NASA Astrophysics Data System (ADS)

    Paul, T. K.; Sinha, P. K.

    An integrated step-by-step analysis procedure for the design of axially compressed stiffened composite panels is outlined. The analysis makes use of the effective width concept. A computer code, BUSTCOP, is developed incorporating various aspects of buckling such as skin buckling, stiffener crippling and column buckling. Other salient features of the computer code include capabilities for generation of data based on micromechanics theories and hygrothermal analysis, and for prediction of strength failure. Parametric studies carried out on a hat-stiffened structural element indicate that, for all practical purposes, composite panels exhibit higher structural efficiency. Some hybrid laminates with outer layers made of aluminum alloy also show great promise for flight vehicle structural applications.
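
    The buckling checks folded into such a design procedure can be sketched in a deliberately simplified isotropic form (the real analysis addresses composite laminates and hygrothermal effects, and this is not the BUSTCOP code; the section properties and plate factor are assumed): the design allowable is the smaller of the column and local skin buckling stresses.

      import math

      E, nu = 70.0e9, 0.3                 # illustrative modulus (Pa) and Poisson ratio
      L, I, A = 0.8, 2.0e-8, 4.0e-4       # column length (m), inertia (m^4), area (m^2)
      t, b, k = 0.002, 0.12, 4.0          # skin thickness, stiffener pitch, plate factor

      # Euler column buckling stress (simply supported, effective length factor K = 1).
      sigma_column = math.pi**2 * E * I / (L**2 * A)

      # Local buckling stress of a simply supported skin panel between stiffeners.
      sigma_skin = k * math.pi**2 * E / (12.0 * (1.0 - nu**2)) * (t / b) ** 2

      print(f"column buckling stress ~ {sigma_column / 1e6:.1f} MPa")
      print(f"skin buckling stress   ~ {sigma_skin / 1e6:.1f} MPa")
      print(f"design allowable       ~ {min(sigma_column, sigma_skin) / 1e6:.1f} MPa")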

  2. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  3. Automatically generated code for relativistic inhomogeneous cosmologies

    NASA Astrophysics Data System (ADS)

    Bentivegna, Eloisa

    2017-02-01

    The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.

  4. A transonic wind tunnel wall interference prediction code

    NASA Technical Reports Server (NTRS)

    Phillips, Pamela S.; Waggoner, Edgar G.

    1988-01-01

    A small disturbance transonic wall interference prediction code has been developed that is capable of modeling solid, open, perforated, and slotted walls as well as slotted and solid walls with viscous effects. This code was developed by modifying the outer boundary conditions of an existing aerodynamic wing-body-pod-pylon-winglet analysis code. The boundary conditions are presented in the form of equations which simulate the flow at the wall, as well as finite difference approximations to the equations. Comparisons are presented at transonic flow conditions between computational results and experimental data for a wing alone in a solid wall wind tunnel and wing-body configurations in both slotted and solid wind tunnels.

  5. Multidisciplinary analysis of actively controlled large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Cooper, Paul A.; Young, John W.; Sutter, Thomas R.

    1986-01-01

    The Control of Flexible Structures (COFS) program has supported the development of an analysis capability at the Langley Research Center called the Integrated Multidisciplinary Analysis Tool (IMAT) which provides an efficient data storage and transfer capability among commercial computer codes to aid in the dynamic analysis of actively controlled structures. IMAT is a system of computer programs which transfers Computer-Aided-Design (CAD) configurations, structural finite element models, material property and stress information, structural and rigid-body dynamic model information, and linear system matrices for control law formulation among various commercial applications programs through a common database. Although general in its formulation, IMAT was developed specifically to aid in the evaluation of actively controlled large flexible spacecraft structures. A description of the IMAT system and results of an application of the system are given.

  6. RETRAN analysis of multiple steam generator blow down caused by an auxiliary feedwater steam-line break

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.

    1987-01-01

    Analysis results for multiple steam generator blow down caused by an auxiliary feedwater steam-line break performed with the RETRAN-02 MOD 003 computer code are presented to demonstrate the capabilities of the RETRAN code to predict system transient response for verifying changes in operational procedures and supporting plant equipment modifications. A typical four-loop Westinghouse pressurized water reactor was modeled using best-estimate versus worst case licensing assumptions. This paper presents analyses performed to evaluate the necessity of implementing an auxiliary feedwater steam-line isolation modification. RETRAN transient analysis can be used to determine core cooling capability response, departure from nucleate boiling ratio (DNBR) status, and reactor trip signal actuation times.

  7. Simulation of cryogenic turbopump annular seals

    NASA Astrophysics Data System (ADS)

    Palazzolo, Alan B.

    1992-12-01

    The goal of the current work is to develop software that can accurately predict the dynamic coefficients, forces, leakage and horsepower loss for annular seals which have a potential for affecting the rotordynamic behavior of the pumps. The fruit of last year's research was the computer code SEALPAL, which included capabilities for linear tapered geometry, Moody friction factor and inlet pre-swirl. This code produced results which in most cases compared very well with check cases presented in the literature. The TAMUSEAL I code, which was written to improve SEALPAL by correcting a bug and by adding more accurate integration algorithms and additional capabilities, was then used to predict dynamic coefficients and leakage for the NASA/Pratt and Whitney Alternate Turbopump Development (ATD) LOX Pump's seal.

  8. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  9. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    NASA Astrophysics Data System (ADS)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  10. Posttest analysis of the 1:6-scale reinforced concrete containment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, P.A.; Kennedy, J.M.; Marchertas, A.H.

    A prediction of the response of the Sandia National Laboratories 1:6-scale reinforced concrete containment model test was made by Argonne National Laboratory. ANL along with nine other organizations performed a detailed nonlinear response analysis of the 1:6-scale model containment subjected to overpressurization in the fall of 1986. The two-dimensional code TEMP-STRESS and the three-dimensional NEPTUNE code were utilized (1) to predict the global response of the structure, (2) to identify global failure sites and the corresponding failure pressures and (3) to identify some local failure sites and pressure levels. A series of axisymmetric models was studied with the two-dimensional computer program TEMP-STRESS. The comparison of these pretest computations with test data from the containment model has provided a test for the capability of the respective finite element codes to predict global failure modes, and hence serves as a validation of these codes. Only the two-dimensional analyses will be discussed in this paper. 3 refs., 10 figs.

  11. Global Coordinates and Exact Aberration Calculations Applied to Physical Optics Modeling of Complex Optical Systems

    NASA Astrophysics Data System (ADS)

    Lawrence, G.; Barnard, C.; Viswanathan, V.

    1986-11-01

    Historically, wave optics computer codes have been paraxial in nature. Folded systems could be modeled by "unfolding" the optical system. Calculation of optical aberrations is, in general, left for the analyst to do with off-line codes. While such paraxial codes were adequate for the simpler systems being studied 10 years ago, current problems such as phased arrays, ring resonators, coupled resonators, and grazing incidence optics require a major advance in analytical capability. This paper describes extension of the physical optics codes GLAD and GLAD V to include a global coordinate system and exact ray aberration calculations. The global coordinate system allows components to be positioned and rotated arbitrarily. Exact aberrations are calculated for components in aligned or misaligned configurations by using ray tracing to compute optical path differences and diffraction propagation. Optical path lengths between components and beam rotations in complex mirror systems are calculated accurately so that coherent interactions in phased arrays and coupled devices may be treated correctly.

  12. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc. that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
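
    A minimal sketch of the object-oriented structure described above, with invented class names and weight correlations (they are not the WATE/WATE++ correlations): each component sizes itself from cycle data handed to it, and the engine weight is the sum over components.

      from dataclasses import dataclass

      @dataclass
      class Component:
          name: str
          def weight_kg(self, cycle: dict) -> float:
              raise NotImplementedError

      @dataclass
      class Fan(Component):
          specific_weight: float = 9.0          # kg per (kg/s) of airflow, made-up value
          def weight_kg(self, cycle):
              return self.specific_weight * cycle["fan_airflow_kg_s"]

      @dataclass
      class Turbine(Component):
          specific_weight: float = 0.015        # kg per kW of shaft power, made-up value
          def weight_kg(self, cycle):
              return self.specific_weight * cycle["turbine_power_kw"]

      class Engine:
          def __init__(self, components):
              self.components = components
          def total_weight_kg(self, cycle):
              return sum(c.weight_kg(cycle) for c in self.components)

      # Hypothetical cycle data passed in from a thermodynamic cycle model.
      cycle_data = {"fan_airflow_kg_s": 1200.0, "turbine_power_kw": 60000.0}
      engine = Engine([Fan("fan"), Turbine("hp_turbine")])
      print(f"estimated engine weight ~ {engine.total_weight_kg(cycle_data):.0f} kg")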

  13. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  14. Assessment of the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing

    2007-01-01

    The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platforms have greatly facilitated the use of CFD-based tools in the development of combustion technology. Further development of verification, validation, and uncertainty quantification will have a profound impact on the reliability and utility of these CFD-based tools. The objectives of the present effort are to establish a baseline for the National Combustion Code (NCC) against experimental data, as well as to document current capabilities and identify gaps for further improvements.

  15. Visual-area coding technique (VACT): optical parallel implementation of fuzzy logic and its visualization with the digital-halftoning process

    NASA Astrophysics Data System (ADS)

    Konishi, Tsuyoshi; Tanida, Jun; Ichioka, Yoshiki

    1995-06-01

    A novel technique, the visual-area coding technique (VACT), for the optical implementation of fuzzy logic with the capability of visualization of the results is presented. This technique is based on the microfont method and is considered to be an instance of digitized analog optical computing. Huge amounts of data can be processed in fuzzy logic with the VACT. In addition, real-time visualization of the processed result can be accomplished.

  16. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
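
    As a rough illustration of the idea above (a minimal sketch, not the method of the paper), the snippet below uses the first-order Taylor expansion of a response function, built from cheap sensitivity derivatives, as a control variate in a plain Monte Carlo mean estimate. The response f, its gradient, and the Gaussian input distribution are hypothetical stand-ins.

      import numpy as np

      def cv_mean_estimate(f, grad_f, x0, mu, cov, n=20000, seed=0):
          # Monte Carlo estimate of E[f(X)], X ~ N(mu, cov), with and without
          # a control variate built from the gradient of f at x0 (the kind of
          # sensitivity derivative an analysis code can return cheaply)
          rng = np.random.default_rng(seed)
          X = rng.multivariate_normal(mu, cov, size=n)
          fx = np.apply_along_axis(f, 1, X)
          g = grad_f(x0)
          lin = f(x0) + (X - x0) @ g          # control-variate samples
          lin_mean = f(x0) + (mu - x0) @ g    # its exact expectation
          return fx.mean(), (fx - lin).mean() + lin_mean

      # hypothetical mildly nonlinear response of two uncertain inputs
      f = lambda x: np.exp(0.1 * x[0]) + 0.5 * x[1] ** 2
      grad_f = lambda x: np.array([0.1 * np.exp(0.1 * x[0]), x[1]])
      mu, cov = np.array([1.0, 2.0]), np.diag([0.04, 0.09])
      print(cv_mean_estimate(f, grad_f, mu, mu, cov))

    The controlled estimate has the same expected value as the plain one but, because the linearization tracks the response closely over the sampled region, a much smaller variance; the same mechanism can be applied within each stratum of a stratified sampling scheme.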

  17. 1DTempPro V2: new features for inferring groundwater/surface-water exchange

    USGS Publications Warehouse

    Koch, Franklin W.; Voytek, Emily B.; Day-Lewis, Frederick D.; Healy, Richard W.; Briggs, Martin A.; Lane, John W.; Werkema, Dale D.

    2016-01-01

    A new version of the computer program 1DTempPro extends the original code to include new capabilities for (1) automated parameter estimation, (2) layer heterogeneity, and (3) time-varying specific discharge. The code serves as an interface to the U.S. Geological Survey model VS2DH and supports analysis of vertical one-dimensional temperature profiles under saturated flow conditions to assess groundwater/surface-water exchange and estimate hydraulic conductivity for cases where hydraulic head is known.

  18. Particle bed reactor modeling

    NASA Technical Reports Server (NTRS)

    Sapyta, Joe; Reid, Hank; Walton, Lew

    1993-01-01

    The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.

  19. Method to predict external store carriage characteristics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1988-01-01

    Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.

  20. Numerical Stability and Control Analysis Towards Falling-Leaf Prediction Capabilities of Splitflow for Two Generic High-Performance Aircraft Models

    NASA Technical Reports Server (NTRS)

    Charlton, Eric F.

    1998-01-01

    Aerodynamic analyses are performed using the Lockheed-Martin Tactical Aircraft Systems (LMTAS) Splitflow computational fluid dynamics code to investigate the computational prediction capabilities for vortex-dominated flow fields of two different tailless aircraft models at large angles of attack and sideslip. These computations are performed with the goal of providing useful stability and control data to designers of high-performance aircraft. Appropriate metrics for accuracy, time, and ease of use are determined in consultations with both the LMTAS Advanced Design and Stability and Control groups. Results are obtained and compared to wind-tunnel data for all six components of forces and moments. Moment data are combined to form a "falling leaf" stability analysis. Finally, a handful of viscous simulations were also performed to further investigate nonlinearities and possible viscous effects in the differences between the accumulated inviscid computational and experimental data.

  1. The Computer Aided Aircraft-design Package (CAAP)

    NASA Technical Reports Server (NTRS)

    Yalif, Guy U.

    1994-01-01

    The preliminary design of an aircraft is a complex, labor-intensive, and creative process. Since the 1970's, many computer programs have been written to help automate preliminary airplane design. Time and resource analyses have identified 'a substantial decrease in project duration with the introduction of an automated design capability'. Proof-of-concept studies have been completed which establish 'a foundation for a computer-based airframe design capability'. Unfortunately, today's design codes exist in many different languages on many, often expensive, hardware platforms. Through the use of a module-based system architecture, the Computer Aided Aircraft-design Package (CAAP) will eventually bring together many of the most useful features of existing programs. Through the use of an expert system, it will add an additional feature that could be described as indispensable to entry-level engineers and students: the incorporation of 'expert' knowledge into the automated design process.

  2. INHYD: Computer code for intraply hybrid composite design. A users manual

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1983-01-01

    A computer program (INHYD) was developed for intraply hybrid composite design. A users manual for INHYD is presented. INHYD embodies several composite micromechanics theories, intraply hybrid composite theories, and an integrated hygrothermomechanical theory. The program can be run in both interactive and batch modes. It has considerable flexibility and capability, which the user can exercise through several options. These options are demonstrated through appropriate INHYD runs in the manual.

  3. Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to develop new types of analysis without requiring the implementation of analysis-specific device models. Device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.

  4. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
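
    As a small, hypothetical illustration of combining individual isotope density sensitivities (not the specific formulation of the paper or of MCNP6/SCALE), the sketch below sums relative sensitivities to represent a uniform scaling of all densities, and forms one possible constrained sensitivity in which an increase of one isotope is compensated proportionally by the others so the total density is preserved. All numerical values are made up.

      import numpy as np

      def total_density_sensitivity(S):
          # relative sensitivity of the response to a uniform scaling of all
          # isotope densities: the individual adjoint sensitivities simply add
          return float(np.sum(S))

      def constrained_sensitivity(S, f, i):
          # one illustrative constraint: isotope i is increased while the other
          # isotopes are reduced in proportion to their fractions, keeping the
          # total density fixed (other constraint choices are possible)
          S = np.asarray(S, float)
          f = np.asarray(f, float)
          others = np.delete(np.arange(S.size), i)
          return float(S[i] - f[i] / (1.0 - f[i]) * S[others].sum())

      # hypothetical relative sensitivities d(ln k)/d(ln N_i) and atom fractions
      S = np.array([0.30, 0.05, -0.12])
      f = np.array([0.20, 0.70, 0.10])
      print(total_density_sensitivity(S), constrained_sensitivity(S, f, i=0))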

  5. High Temperature Composite Analyzer (HITCAN) demonstration manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Singhal, S. N.; Lackney, J. J.; Murthy, P. L. N.

    1993-01-01

    This manual comprises a variety of demonstration cases for the HITCAN (HIgh Temperature Composite ANalyzer) code. HITCAN is a general purpose computer program for predicting nonlinear global structural and local stress-strain response of arbitrarily oriented, multilayered high temperature metal matrix composite structures. HITCAN is written in FORTRAN 77 computer language and has been configured and executed on the NASA Lewis Research Center CRAY XMP and YMP computers. Detailed description of all program variables and terms used in this manual may be found in the User's Manual. The demonstration includes various cases to illustrate the features and analysis capabilities of the HITCAN computer code. These cases include: (1) static analysis, (2) nonlinear quasi-static (incremental) analysis, (3) modal analysis, (4) buckling analysis, (5) fiber degradation effects, (6) fabrication-induced stresses for a variety of structures; namely, beam, plate, ring, shell, and built-up structures. A brief discussion of each demonstration case with the associated input data file is provided. Sample results taken from the actual computer output are also included.

  6. Demonstration of the Application of Composite Load Spectra (CLS) and Probabilistic Structural Analysis (PSAM) Codes to SSME Heat Exchanger Turnaround Vane

    NASA Technical Reports Server (NTRS)

    Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George

    2000-01-01

    This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics, even under random pressure loads, are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternate stress response and drive the fatigue damage for the new design. Since the alternate stress for the new redesign is less than the endurance limit for the material, the damage due to high-cycle fatigue is negligible.

  7. Improvements to the FASTEX flutter analysis computer code

    NASA Technical Reports Server (NTRS)

    Taylor, Ronald F.

    1987-01-01

    Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem size capacity of FASTEX, reduce run times by modification of the modal interpolation procedure, and to add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VAX operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, display and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.

  8. Development of an Aerothermoelastic-Acoustics Simulation Capability of Flight Vehicles

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.; Choi, S. B.; Ibrahim, A.

    2010-01-01

    A novel numerical, finite element based analysis methodology is presented in this paper suitable for accurate and efficient simulation of practical, complex flight vehicles. An associated computer code, developed in this connection, is also described in some detail. Thermal effects of high speed flow obtained from a heat conduction analysis are incorporated in the modal analysis which in turn affects the unsteady flow arising out of interaction of elastic structures with the air. Numerical examples pertaining to representative problems are given in much detail testifying to the efficacy of the advocated techniques. This is a unique implementation of temperature effects in a finite element CFD based multidisciplinary simulation analysis capability involving large scale computations.

  9. Computer Simulation Performed for Columbia Project Cooling System

    NASA Technical Reports Server (NTRS)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,024 Intel Itanium processors) system. The simulation assessed the performance of the cooling system, identified deficiencies, and recommended modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  10. Molecular Dynamics of Hot Dense Plasmas: New Horizons

    NASA Astrophysics Data System (ADS)

    Graziani, Frank

    2011-10-01

    We describe the status of a new time-dependent simulation capability for hot dense plasmas. The backbone of this multi-institutional computational and experimental effort--the Cimarron Project--is the massively parallel molecular dynamics (MD) code ``ddcMD''. The project's focus is material conditions such as exist in inertial confinement fusion experiments, and in many stellar interiors: high temperatures, high densities, significant electromagnetic fields, mixtures of high- and low-Z elements, and non-Maxwellian particle distributions. Of particular importance is our ability to incorporate into this classical MD code key atomic, radiative, and nuclear processes, so that their interacting effects under non-ideal plasma conditions can be investigated. This talk summarizes progress in computational methodology, discusses strengths and weaknesses of quantum statistical potentials as effective interactions for MD, explains the model used for quantum events possibly occurring in a collision and highlights some significant results obtained to date. This work is performed under the auspices of the U. S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.

  12. Computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raboin, P J

    1998-01-01

    The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

  13. Computer Tensor Codes to Design the Warp Drive

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.

  14. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
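
    A first-order (gain and offset) brightness correction of the kind mentioned above can be fit, per band, from the pixels where two scenes overlap; the least-squares sketch below is a generic illustration of that step and is not taken from the program itself. The synthetic data and function names are hypothetical.

      import numpy as np

      def fit_first_order_correction(scene_overlap, reference_overlap):
          # least-squares gain/offset so gain*scene + offset best matches the
          # reference over the shared (overlap) pixels of one band
          s = scene_overlap.ravel().astype(float)
          r = reference_overlap.ravel().astype(float)
          A = np.column_stack([s, np.ones_like(s)])
          (gain, offset), *_ = np.linalg.lstsq(A, r, rcond=None)
          return gain, offset

      # synthetic overlap: the incoming scene is darker than the reference
      rng = np.random.default_rng(0)
      ref = rng.uniform(50.0, 200.0, size=(64, 64))
      scene = 0.8 * ref - 5.0 + rng.normal(0.0, 2.0, size=ref.shape)
      gain, offset = fit_first_order_correction(scene, ref)
      corrected = gain * scene + offset   # applied to the full band in practice
      print(round(gain, 2), round(offset, 1))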

  15. User's guide for NASCRIN: A vectorized code for calculating two-dimensional supersonic internal flow fields

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1984-01-01

    A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.

  16. A Study of Neutron Leakage in Finite Objects

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    A computationally efficient 3DHZETRN code capable of simulating high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide means of verifying 3D effects and to provide guidance for further code development.

  17. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  18. Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations

    NASA Astrophysics Data System (ADS)

    Tritsis, A.; Yorke, H.; Tassis, K.

    2018-05-01

    We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from the outputs of simulations performed with all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and present case studies using hydrochemical simulations. The code will be released for public use.
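
    As a schematic of the escape probability idea mentioned above (a generic two-level sketch under simplifying assumptions, not PyRaTE's implementation), the code below combines a Sobolev/LVG-style escape probability with collisional rates to obtain an upper-to-lower population ratio in the absence of a background radiation field; all rate values are made up for illustration.

      import numpy as np

      K_B = 1.380649e-16  # Boltzmann constant, erg/K

      def beta_escape(tau):
          # Sobolev/LVG-style escape probability, (1 - exp(-tau)) / tau,
          # with a series expansion near tau = 0 to avoid division by zero
          tau = np.asarray(tau, float)
          small = np.abs(tau) < 1e-6
          safe = np.where(small, 1.0, tau)
          return np.where(small, 1.0 - 0.5 * tau, (1.0 - np.exp(-safe)) / safe)

      def two_level_ratio(tau, A_ul, C_ul, g_u, g_l, E_ul, T_kin):
          # n_u/n_l from statistical equilibrium with collisions and photon
          # escape only (no external radiation field assumed); E_ul in erg
          C_lu = C_ul * (g_u / g_l) * np.exp(-E_ul / (K_B * T_kin))
          return C_lu / (beta_escape(tau) * A_ul + C_ul)

      # illustrative numbers loosely resembling a low-J rotational line
      print(two_level_ratio(tau=1.0, A_ul=7e-8, C_ul=3e-11 * 1e4,
                            g_u=3, g_l=1, E_ul=7.6e-16, T_kin=20.0))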

  19. Liquid rocket combustion computer model with distributed energy release. DER computer program documentation and user's guide, volume 1

    NASA Technical Reports Server (NTRS)

    Combs, L. P.

    1974-01-01

    A computer program for analyzing rocket engine performance was developed. The program is concerned with the formation, distribution, flow, and combustion of liquid sprays and combustion product gases in conventional rocket combustion chambers. The capabilities of the program to determine the combustion characteristics of the rocket engine are described. Sample data code sheets show the correct sequence and formats for variable values and include notes concerning options to bypass the input of certain data. A separate list defines the variables and indicates their required dimensions.

  20. A language comparison for scientific computing on MIMD architectures

    NASA Technical Reports Server (NTRS)

    Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.

    1989-01-01

    Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages, the Force, PISCES and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.
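
    For reference, a serial sketch of the underlying banded Cholesky factorization is shown below (plain Python/NumPy, not the Force, PISCES, or Concurrent FORTRAN implementations compared in the paper); the inner loops only touch entries within the half-bandwidth, which is what makes the method attractive for banded symmetric, positive definite systems. The test matrix is a made-up example.

      import numpy as np

      def banded_cholesky(A, p):
          # factor A = L @ L.T for a symmetric positive definite matrix with
          # half-bandwidth p; only in-band entries enter the inner products
          n = A.shape[0]
          L = np.zeros_like(A, dtype=float)
          for j in range(n):
              lo = max(0, j - p)
              L[j, j] = np.sqrt(A[j, j] - L[j, lo:j] @ L[j, lo:j])
              for i in range(j + 1, min(n, j + p + 1)):
                  L[i, j] = (A[i, j] - L[i, lo:j] @ L[j, lo:j]) / L[j, j]
          return L

      # build a small banded, diagonally dominant (hence SPD) test matrix
      n, p = 8, 2
      A = np.zeros((n, n))
      for i in range(n):
          for j in range(max(0, i - p), min(n, i + p + 1)):
              A[i, j] = 1.0 / (1 + abs(i - j))
      A += n * np.eye(n)
      L = banded_cholesky(A, p)
      print(np.allclose(L @ L.T, A))   # True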

  1. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced airfoil aerodynamics is examined.

  2. Unsteady thermal blooming of intense laser beams

    NASA Astrophysics Data System (ADS)

    Ulrich, J. T.; Ulrich, P. B.

    1980-01-01

    A four-dimensional (three space plus time) computer program has been written to compute the nonlinear heating of a gas by an intense laser beam. Unsteady, transient cases can be solved, and no assumption of a steady state need be made. The transient results are shown to asymptotically approach the steady-state results calculated by the standard three-dimensional thermal blooming computer codes. The report discusses the physics of the laser-absorber interaction, the numerical approximation used, and comparisons with experimental data. A flowchart is supplied in the appendix to the report.

  3. Icing simulation: A survey of computer models and experimental facilities

    NASA Technical Reports Server (NTRS)

    Potapczuk, M. G.; Reinmann, J. J.

    1991-01-01

    A survey of the current methods for simulation of the response of an aircraft or aircraft subsystem to an icing encounter is presented. The topics discussed include computer code modeling of aircraft icing and performance degradation, an evaluation of experimental facility simulation capabilities, and ice protection system evaluation tests in simulated icing conditions. Current research focused on upgrading the simulation fidelity of both experimental and computational methods is discussed. The need for increased understanding of the physical processes governing ice accretion, ice shedding, and iced aerodynamics is examined.

  4. Analysis of film cooling in rocket nozzles

    NASA Technical Reports Server (NTRS)

    Woodbury, Keith A.

    1992-01-01

    Computational Fluid Dynamics (CFD) programs are customarily used to compute details of a flow field, such as velocity fields or species concentrations. Generally they are not used to determine the resulting conditions at a solid boundary such as wall shear stress or heat flux. However, determination of this information should be within the capability of a CFD code, as the code supposedly contains appropriate models for these wall conditions. Before such predictions from CFD analyses can be accepted, the credibility of the CFD codes upon which they are based must be established. This report details the progress made in constructing a CFD model to predict the heat transfer to the wall in a film-cooled rocket nozzle. Specifically, the objective of this work is to use the NASA code FDNS to predict the heat transfer which will occur during the upcoming hot-firing of the Pratt & Whitney 40K subscale nozzle (1Q93). Toward this end, an M = 3 wall jet is considered, and the resulting heat transfer to the wall is computed. The values are compared against experimental data available in Reference 1. Also, FDNS's ability to compute heat flux in a reacting flow will be determined by comparing the code's predictions against calorimeter data from the hot firing of a 40K combustor. The process of modeling the flow of combusting gases through the Pratt & Whitney 40K subscale combustor and nozzle is outlined. What follows in this report is a brief description of the FDNS code, with special emphasis on how it handles solid wall boundary conditions. The test cases and some FDNS solutions are presented next, along with comparisons to experimental data. The process of modeling the flow through a chamber and a nozzle using the FDNS code will also be outlined.

  5. PlasmaPy: initial development of a Python package for plasma physics

    NASA Astrophysics Data System (ADS)

    Murphy, Nicholas; Leonard, Andrew J.; Stańczak, Dominik; Haggerty, Colby C.; Parashar, Tulasi N.; Huang, Yu-Min; PlasmaPy Community

    2017-10-01

    We report on initial development of PlasmaPy: an open source community-driven Python package for plasma physics. PlasmaPy seeks to provide core functionality that is needed for the formation of a fully open source Python ecosystem for plasma physics. PlasmaPy prioritizes code readability, consistency, and maintainability while using best practices for scientific computing such as version control, continuous integration testing, embedding documentation in code, and code review. We discuss our current and planned capabilities, including features presently under development. The development roadmap includes features such as fluid and particle simulation capabilities, a Grad-Shafranov solver, a dispersion relation solver, atomic data retrieval methods, and tools to analyze simulations and experiments. We describe several ways to contribute to PlasmaPy. PlasmaPy has a code of conduct and is being developed under a BSD license, with a version 0.1 release planned for 2018. The success of PlasmaPy depends on active community involvement, so anyone interested in contributing to this project should contact the authors. This work was partially supported by the U.S. Department of Energy.

  6. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  7. A Three Dimensional Parallel Time Accurate Turbopump Simulation Procedure Using Overset Grid Systems

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Chan, William; Kwak, Dochan

    2001-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort provides developers with information such as transient flow phenomena at start-up and non-uniform inflows, and will eventually impact system vibration and structures. In the proposed paper, the progress toward the capability of complete simulation of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. CAD-to-solution auto-scripting capability is being developed for turbopump applications. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 3000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.

  8. New decoding methods of interleaved burst error-correcting codes

    NASA Astrophysics Data System (ADS)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
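
    The burst-spreading effect that interleaving relies on can be seen in the generic depth-m block interleaver sketched below (a plain illustration of interleaving itself, not of the syndrome-correlation decoding proposed in the paper): a channel burst of length b is split across the m subcode words so that each sees at most ceil(b/m) symbol errors. The code parameters and data are made up.

      import numpy as np

      def interleave(codewords):
          # write m codewords as rows, transmit column by column
          return np.asarray(codewords).T.ravel()

      def deinterleave(stream, m, n):
          # inverse mapping back to m codewords of length n
          return np.asarray(stream).reshape(n, m).T

      m, n = 4, 7                      # depth-4 interleaving of length-7 subcodes
      cw = np.arange(m * n).reshape(m, n)
      tx = interleave(cw)
      rx_stream = tx.copy()
      rx_stream[10:14] += 100          # burst of length 4 on the channel
      rx = deinterleave(rx_stream, m, n)
      print((rx != cw).sum(axis=1))    # each subcode word sees at most one error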

  9. SPECT3D - A multi-dimensional collisional-radiative code for generating diagnostic signatures based on hydrodynamics and PIC simulation output

    NASA Astrophysics Data System (ADS)

    MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

    2007-05-01

    SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.

  10. Three-Dimensional Nacelle Aeroacoustics Code With Application to Impedance Eduction

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.

    2000-01-01

    A three-dimensional nacelle acoustics code that accounts for uniform mean flow and variable surface impedance liners is developed. The code is linked to a commercial version of the NASA-developed General Purpose Solver (for solution of linear systems of equations) in order to obtain the capability to study high-frequency waves that may require millions of grid points for resolution. Detailed, single-processor statistics for the performance of the solver in rigid- and soft-wall ducts are presented. Over the range of frequencies of current interest in nacelle liner research, noise attenuation levels predicted from the code were in excellent agreement with those predicted from mode theory. The equation solver is memory efficient, requiring only a small fraction of the memory available on modern computers. As an application, the code is combined with an optimization algorithm and used to educe the impedance spectrum of a ceramic liner. The primary problem with using the code to perform optimization studies at frequencies above 1 kHz is the excessive CPU time (a major portion of which is matrix assembly). The research recommends that effort be directed toward development of a rapid sparse assembler and exploitation of the multiprocessor capability of the solver to further reduce CPU time.

  11. Application of CFX-10 to the Investigation of RPV Coolant Mixing in VVER Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moretti, Fabio; Melideo, Daniele; Terzuoli, Fulvio

    2006-07-01

    Coolant mixing phenomena occurring in the pressure vessel of a nuclear reactor constitute one of the main objectives of investigation by researchers concerned with nuclear reactor safety. For instance, mixing plays a relevant role in reactivity-induced accidents initiated by de-boration or boron dilution events, followed by transport of a de-borated slug into the vessel of a pressurized water reactor. Another example is constituted by temperature mixing, which may sensitively affect the consequences of a pressurized thermal shock scenario. Predictive analysis of mixing phenomena is strongly improved by the availability of computational tools able to cope with the inherent three-dimensionality of such problems, like system codes with three-dimensional capabilities, and Computational Fluid Dynamics (CFD) codes. The present paper deals with numerical analyses of coolant mixing in the reactor pressure vessel of a VVER-1000 reactor, performed by the ANSYS CFX-10 CFD code. In particular, the 'swirl' effect that has been observed to take place in the downcomer of such kind of reactor has been addressed, with the aim of assessing the capability of the codes to predict that effect, and to understand the reasons for its occurrence. Results have been compared against experimental data from the V1000CT-2 Benchmark. Moreover, a boron mixing problem has been investigated, in the hypothesis that a de-borated slug, transported by natural circulation, enters the vessel. Sensitivity analyses have been conducted on some geometrical features, model parameters and boundary conditions. (authors)

  12. The r-Java 2.0 code: nuclear physics

    NASA Astrophysics Data System (ADS)

    Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.

    2014-08-01

    Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0 that includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission), along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield are studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise the users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.

  13. Human perceptual deficits as factors in computer interface test and evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowser, S.E.

    1992-06-01

    Issues related to testing and evaluating human computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some perceptual deficits tend to be found at higher-than-overall-population rates in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Secondly, interface designs should use multimode information coding.

  14. Computational and Experimental Study of Supersonic Nozzle Flow and Shock Interactions

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Elmiligui, Alaa A.; Nayani, Sudheer N.; Castner, Ray; Bruce, Walter E., IV; Inskeep, Jacob

    2015-01-01

    This study focused on the capability of the NASA Tetrahedral Unstructured Software System CFD code USM3D to predict the interaction between a shock and supersonic plume flow. Previous studies, published in 2004, 2009, and 2013, investigated USM3D's supersonic plume flow results versus historical experimental data. This current study builds on that research by utilizing the best practices from the earlier papers for properly capturing the plume flow and then adding a wedge acting as a shock generator. This computational study was conducted in conjunction with experimental tests at the Glenn Research Center 1'x1' Supersonic Wind Tunnel. The comparison of the computational and experimental data shows good agreement for the location and strength of the shocks, although there are vertical shifts between the data sets that may be due to the measurement technique.

  15. Comprehensive silicon solar cell computer modeling

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1984-01-01

    The development of an efficient, comprehensive Si solar cell modeling program that has the capability of simulation accuracy of 5 percent or less is examined. A general investigation of computerized simulation is provided. Computer simulation programs are subdivided into a number of major tasks: (1) analytical method used to represent the physical system; (2) phenomena submodels that comprise the simulation of the system; (3) coding of the analysis and the phenomena submodels; (4) coding scheme that results in efficient use of the CPU so that CPU costs are low; and (5) modularized simulation program with respect to structures that may be analyzed, addition and/or modification of phenomena submodels as new experimental data become available, and the addition of other photovoltaic materials.

  16. Advanced capabilities for materials modelling with Quantum ESPRESSO

    NASA Astrophysics Data System (ADS)

    Giannozzi, P.; Andreussi, O.; Brumme, T.; Bunau, O.; Buongiorno Nardelli, M.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Cococcioni, M.; Colonna, N.; Carnimeo, I.; Dal Corso, A.; de Gironcoli, S.; Delugas, P.; DiStasio, R. A., Jr.; Ferretti, A.; Floris, A.; Fratesi, G.; Fugallo, G.; Gebauer, R.; Gerstmann, U.; Giustino, F.; Gorni, T.; Jia, J.; Kawamura, M.; Ko, H.-Y.; Kokalj, A.; Küçükbenli, E.; Lazzeri, M.; Marsili, M.; Marzari, N.; Mauri, F.; Nguyen, N. L.; Nguyen, H.-V.; Otero-de-la-Roza, A.; Paulatto, L.; Poncé, S.; Rocca, D.; Sabatini, R.; Santra, B.; Schlipf, M.; Seitsonen, A. P.; Smogunov, A.; Timrov, I.; Thonhauser, T.; Umari, P.; Vast, N.; Wu, X.; Baroni, S.

    2017-11-01

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows one to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  17. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Giannozzi, P; Andreussi, O; Brumme, T; Bunau, O; Buongiorno Nardelli, M; Calandra, M; Car, R; Cavazzoni, C; Ceresoli, D; Cococcioni, M; Colonna, N; Carnimeo, I; Dal Corso, A; de Gironcoli, S; Delugas, P; DiStasio, R A; Ferretti, A; Floris, A; Fratesi, G; Fugallo, G; Gebauer, R; Gerstmann, U; Giustino, F; Gorni, T; Jia, J; Kawamura, M; Ko, H-Y; Kokalj, A; Küçükbenli, E; Lazzeri, M; Marsili, M; Marzari, N; Mauri, F; Nguyen, N L; Nguyen, H-V; Otero-de-la-Roza, A; Paulatto, L; Poncé, S; Rocca, D; Sabatini, R; Santra, B; Schlipf, M; Seitsonen, A P; Smogunov, A; Timrov, I; Thonhauser, T; Umari, P; Vast, N; Wu, X; Baroni, S

    2017-10-24

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.

  18. Advanced capabilities for materials modelling with Quantum ESPRESSO.

    PubMed

    Andreussi, Oliviero; Brumme, Thomas; Bunau, Oana; Buongiorno Nardelli, Marco; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Cococcioni, Matteo; Colonna, Nicola; Carnimeo, Ivan; Dal Corso, Andrea; de Gironcoli, Stefano; Delugas, Pietro; DiStasio, Robert; Ferretti, Andrea; Floris, Andrea; Fratesi, Guido; Fugallo, Giorgia; Gebauer, Ralph; Gerstmann, Uwe; Giustino, Feliciano; Gorni, Tommaso; Jia, Junteng; Kawamura, Mitsuaki; Ko, Hsin-Yu; Kokalj, Anton; Küçükbenli, Emine; Lazzeri, Michele; Marsili, Margherita; Marzari, Nicola; Mauri, Francesco; Nguyen, Ngoc Linh; Nguyen, Huy-Viet; Otero-de-la-Roza, Alberto; Paulatto, Lorenzo; Poncé, Samuel; Giannozzi, Paolo; Rocca, Dario; Sabatini, Riccardo; Santra, Biswajit; Schlipf, Martin; Seitsonen, Ari Paavo; Smogunov, Alexander; Timrov, Iurii; Thonhauser, Timo; Umari, Paolo; Vast, Nathalie; Wu, Xifan; Baroni, Stefano

    2017-09-27

    Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudo-potential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it allows users to simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software. © 2017 IOP Publishing Ltd.

  19. The Italian experience on T/H best estimate codes: Achievements and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alemberti, A.; D`Auria, F.; Fiorino, E.

    1997-07-01

    Thermalhydraulic system codes are complex tools developed to simulate power plant behavior during off-normal conditions. Among the objectives of the code calculations, the evaluation of safety margins, operator training, and the optimization of plant design and emergency operating procedures are the ones most commonly considered in the field of nuclear safety. The first generation of codes was developed in the United States at the end of the 1960s. Since that time, different research groups all over the world have started the development of their own codes. At the beginning of the 1980s, second-generation codes were proposed; these differ from the first-generation codes in the number of balance equations solved (six instead of three) and in the sophistication of the constitutive models and the adopted numerics. The capabilities of available computers have been fully exploited over the years. The authors then summarize some of the major steps in the process of developing, modifying, and advancing the capabilities of the codes. They touch on the fact that Italian, and for that matter non-American, researchers have not been intimately involved in much of this work. They then describe the application of these codes in Italy, even though there are no nuclear power plants operating or under construction there at this time. Much of this effort is directed at the general question of plant safety in the face of transient-type events.

  20. Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob

    2003-01-01

    The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.

  1. Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, Brian James; Yin, Lin; Stark, David James

    One of the long-standing problems in the community is the question of how we can model “next-generation” laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this long-standing problem.

  2. Ice Engineering - study of Related Properties of Floating Sea-Ice Sheets and Summary of Elastic and Viscoelastic Analyses

    DTIC Science & Technology

    1977-12-01

    Ice Plate Example. To demonstrate the capability of the viscoelastic finite-element computer code (5), the structural response of an infinite ... sea-ice plate on a fluid foundation is investigated for a simulated aircraft loading condition and, using relaxation functions, is determined

  3. Design and Analysis of Turbomachinery for Space Applications

    NASA Technical Reports Server (NTRS)

    Dorney, D.; Garcia, Roberto (Technical Monitor)

    2002-01-01

    This presentation provides an overview of CORSAIR, a three dimensional computational fluid dynamics software code for the analysis of turbomachinery components available from NASA, and discusses its potential use in the design of these parts. Topics covered include: time-dependent equations of motion, grid topology, turbulence models, boundary conditions, parallel simulations and miscellaneous capabilities.

  4. Modeling high-temperature superconductors and metallic alloys on the Intel IPSC/860

    NASA Astrophysics Data System (ADS)

    Geist, G. A.; Peyton, B. W.; Shelton, W. A.; Stocks, G. M.

    Oak Ridge National Laboratory has embarked on several computational Grand Challenges, which require the close cooperation of physicists, mathematicians, and computer scientists. One of these projects is the determination of the material properties of alloys from first principles and, in particular, the electronic structure of high-temperature superconductors. While the present focus of the project is on superconductivity, the approach is general enough to permit study of other properties of metallic alloys such as strength and magnetic properties. This paper describes the progress to date on this project. We include a description of a self-consistent KKR-CPA method, parallelization of the model, and the incorporation of a dynamic load balancing scheme into the algorithm. We also describe the development and performance of a consolidated KKR-CPA code capable of running on CRAYs, workstations, and several parallel computers without source code modification. Performance of this code on the Intel iPSC/860 is also compared to a CRAY 2, CRAY YMP, and several workstations. Finally, some density of state calculations of two perovskite superconductors are given.

  5. An installed nacelle design code using a multiblock Euler solver. Volume 1: Theory document

    NASA Technical Reports Server (NTRS)

    Chen, H. C.

    1992-01-01

    An efficient multiblock Euler design code was developed for designing a nacelle installed on geometrically complex airplane configurations. This approach employed a design driver based on a direct iterative surface curvature method developed at LaRC. A general multiblock Euler flow solver was used for computing flow around complex geometries. The flow solver used a finite-volume formulation with explicit time-stepping to solve the Euler equations. It used a multiblock version of the multigrid method to accelerate the convergence of the calculations. The design driver successively updated the surface geometry to reduce the difference between the computed and target pressure distributions. In the flow solver, the change in surface geometry was simulated by applying surface transpiration boundary conditions to avoid repeated grid generation during design iterations. Smoothness of the designed surface was ensured by alternate application of streamwise and circumferential smoothings. The capability and efficiency of the code were demonstrated through the design of both an isolated nacelle and an installed nacelle at various flow conditions. Information on the execution of the computer program is provided in volume 2.

  6. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    PubMed

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.
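
    The dynamic-weight construction summarized above can be written compactly. As a hedged reading of the abstract (the notation below is ours, not necessarily the paper's), the signal carried along a path replaces the usual multiplicative weight by a rectified difference:

      $$ w_{ij} \;=\; \bigl[\, y_i - \tau_{ij} \,\bigr]^{+} \;\equiv\; \max\bigl(y_i - \tau_{ij},\, 0\bigr), $$

    where $y_i$ is the activation of coding node $i$ and $\tau_{ij}$ is the adaptive threshold on the path from node $i$ to node $j$; as stated in the abstract, the thresholds increase monotonically during learning, so a single monotonic synaptic change can appear as either potentiation or depression at the dynamic-weight level.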

  7. Validation of hydrogen gas stratification and mixing models

    DOE PAGES

    Wu, Hsingtzu; Zhao, Haihua

    2015-05-26

    Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreement is observed. Entrainment coefficients of 0.09 and 0.08 are found to best fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.
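
    For orientation, scaling-based one-dimensional jet models of the kind described here typically close the governing equations with an entrainment hypothesis; a minimal sketch of that closure (our notation, not necessarily the exact BMIX++ formulation) is

      $$ u_e = \alpha\, \bar{u}(z), \qquad \frac{dQ}{dz} = 2\pi\, b(z)\, u_e, $$

    where $\bar{u}(z)$ is the local mean jet velocity, $b(z)$ the local jet radius, $Q$ the volume flux, and $\alpha$ the entrainment coefficient, the quantity for which the abstract reports best-fit values of 0.09 and 0.08 at Froude numbers of 99 and 268.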

  8. Electro-Thermal-Mechanical Simulation Capability Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D

    This is the Final Report for LDRD 04-ERD-086, 'Electro-Thermal-Mechanical Simulation Capability'. The accomplishments are well documented in five peer-reviewed publications and six conference presentations and hence will not be detailed here. The purpose of this LDRD was to research and develop numerical algorithms for three-dimensional (3D) Electro-Thermal-Mechanical simulations. LLNL has long been a world leader in the area of computational mechanics, and recently several mechanics codes have become 'multiphysics' codes with the addition of fluid dynamics, heat transfer, and chemistry. However, these multiphysics codes do not incorporate the electromagnetics that is required for a coupled Electro-Thermal-Mechanical (ETM) simulation. There are numerous applications for an ETM simulation capability, such as explosively-driven magnetic flux compressors, electromagnetic launchers, inductive heating and mixing of metals, and MEMS. A robust ETM simulation capability will enable LLNL physicists and engineers to better support current DOE programs, and will prepare LLNL for some very exciting long-term DoD opportunities. We define a coupled Electro-Thermal-Mechanical (ETM) simulation as a simulation that solves, in a self-consistent manner, the equations of electromagnetics (primarily statics and diffusion), heat transfer (primarily conduction), and non-linear mechanics (elastic-plastic deformation, and contact with friction). There is no existing parallel 3D code for simulating ETM systems at LLNL or elsewhere. While there are numerous magnetohydrodynamic codes, these codes are designed for astrophysics, magnetic fusion energy, laser-plasma interaction, etc., and do not attempt to accurately model electromagnetically driven solid mechanics. This project responds to the Engineering R&D Focus Areas of Simulation and Energy Manipulation, and addresses the specific problem of Electro-Thermal-Mechanical simulation for design and analysis of energy manipulation systems such as magnetic flux compression generators and railguns. This project complements ongoing DNT projects that have an experimental emphasis. Our research efforts have been encapsulated in the Diablo and ALE3D simulation codes. This new ETM capability already has both internal and external users, and has spawned additional research in plasma railgun technology. By developing this capability Engineering has become a world leader in ETM design, analysis, and simulation. This research has positioned LLNL to be able to compete for new business opportunities with the DoD in the area of railgun design. We currently have a three-year $1.5M project with the Office of Naval Research to apply our ETM simulation capability to railgun bore life issues and we expect to be a key player in the railgun community.

  9. Application of CHAD hydrodynamics to shock-wave problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, H.E.; O`Rourke, P.J.; Sahota, M.S.

    1997-12-31

    CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling and passenger compartment heating, ventilation, and air conditioning to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library written in standard C and MPI for unstructured grids to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large velocity Courant number variations due to velocity or mesh size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors discuss CHAD capabilities and show several sample calculations illustrating the strengths and weaknesses of CHAD.

  10. Unsteady Full Annulus Simulations of a Transonic Axial Compressor Stage

    NASA Technical Reports Server (NTRS)

    Herrick, Gregory P.; Hathaway, Michael D.; Chen, Jen-Ping

    2009-01-01

    Two recent research endeavors in turbomachinery at NASA Glenn Research Center have focused on compression system stall inception and compression system aerothermodynamic performance. Physical experiment and computational research are ongoing in support of these research objectives. TURBO, an unsteady, three-dimensional, Navier-Stokes computational fluid dynamics code commissioned and developed by NASA, has been utilized, enhanced, and validated in support of these endeavors. In the research which follows, TURBO is shown to accurately capture compression system flow range, from choke to stall inception, and also to accurately calculate fundamental aerothermodynamic performance parameters. Rigorous full-annulus calculations are performed to validate TURBO's ability to simulate the unstable, unsteady, chaotic stall inception process; as part of these efforts, full-annulus calculations are also performed at a condition approaching choke to further document TURBO's capabilities to compute aerothermodynamic performance data and support a NASA code assessment effort.

  11. Probabilistic Structural Analysis Theory Development

    NASA Technical Reports Server (NTRS)

    Burnside, O. H.

    1985-01-01

    The objective of the Probabilistic Structural Analysis Methods (PSAM) project is to develop analysis techniques and computer programs for predicting the probabilistic response of critical structural components for current and future space propulsion systems. This technology will play a central role in establishing system performance and durability. The first year's technical activity is concentrating on probabilistic finite element formulation strategy and code development. Work is also in progress to survey critical materials and space shuttle main engine components. The probabilistic finite element computer program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) is being developed. The final probabilistic code will have, in the general case, the capability of performing nonlinear dynamic analysis of stochastic structures. It is the goal of the approximate methods effort to increase problem-solving efficiency relative to finite element methods by using energy methods to generate trial solutions which satisfy the structural boundary conditions. These approximate methods will be less computer intensive relative to the finite element approach.
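
    To make the notion of a probabilistic structural response concrete, the short sketch below (a generic Python illustration only, not NESSUS or its algorithms) propagates random load and stiffness inputs through a closed-form structural model by plain Monte Carlo sampling; the variable names, distributions, and deflection limit are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(42)

      # Illustrative cantilever tip deflection: delta = P * L**3 / (3 * E * I)
      L = 1.0          # beam length, m (treated as deterministic here)
      I = 1.0e-6       # second moment of area, m^4 (deterministic)

      n = 100_000
      P = rng.normal(1.0e3, 1.5e2, n)      # random tip load, N
      E = rng.normal(70.0e9, 5.0e9, n)     # random Young's modulus, Pa

      delta = P * L**3 / (3.0 * E * I)     # response for each sampled realization

      limit = 6.0e-3                       # assumed allowable deflection, m
      print(f"mean deflection : {delta.mean():.3e} m")
      print(f"std  deflection : {delta.std():.3e} m")
      print(f"P(delta > limit): {(delta > limit).mean():.4f}")

    A production probabilistic finite element code replaces the closed-form response with a finite element solution and may use approximate or fast-probability-integration methods rather than brute-force sampling, but the mapping from input uncertainty to response statistics is the same idea.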

  12. Computational Support for Technology- Investment Decisions

    NASA Technical Reports Server (NTRS)

    Adumitroaie, Virgil; Hua, Hook; Lincoln, William; Block, Gary; Mrozinski, Joseph; Shelton, Kacie; Weisbin, Charles; Elfes, Alberto; Smith, Jeffrey

    2007-01-01

    Strategic Assessment of Risk and Technology (START) is a user-friendly computer program that assists human managers in making decisions regarding research-and-development investment portfolios in the presence of uncertainties and of non-technological constraints that include budgetary and time limits, restrictions related to infrastructure, and programmatic and institutional priorities. START facilitates quantitative analysis of technologies, capabilities, missions, scenarios, and programs, and thereby enables the selection and scheduling of value-optimal development efforts. START incorporates features that, variously, perform or support a unique combination of functions, most of which are not systematically performed or supported by prior decision-support software. These functions include the following: optimal portfolio selection using an expected-utility-based assessment of capabilities and technologies; temporal investment recommendations; distinctions between enhancing and enabling capabilities; analysis of partial funding for enhancing capabilities; and sensitivity and uncertainty analysis. START can run on almost any computing hardware, within Linux and related operating systems that include Mac OS X versions 10.3 and later, and can run in Windows under the Cygwin environment. START can be distributed in binary code form. START calls, as external libraries, several open-source software packages. Output is in Excel (.xls) file format.

  13. Reed Solomon codes for error control in byte organized computer memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
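
    As a rough illustration of decoding directly from the syndrome (a minimal Python sketch over GF(2^8) for a single symbol error; the paper's extended single- and double-error-correcting constructions are more elaborate), the error location and value can be read off from two syndromes without building an error-locator polynomial:

      # GF(2^8) arithmetic via exp/log tables (primitive polynomial x^8+x^4+x^3+x^2+1 -> 0x11d)
      EXP, LOG = [0] * 512, [0] * 256
      x = 1
      for i in range(255):
          EXP[i], LOG[x] = x, i
          x <<= 1
          if x & 0x100:
              x ^= 0x11d
      for i in range(255, 512):
          EXP[i] = EXP[i - 255]

      def gmul(a, b):
          return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

      def gdiv(a, b):  # b must be nonzero
          return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 255]

      def poly_mul(p, q):  # coefficients low order first; addition in GF(2^m) is XOR
          out = [0] * (len(p) + len(q) - 1)
          for i, a in enumerate(p):
              for j, b in enumerate(q):
                  out[i + j] ^= gmul(a, b)
          return out

      def poly_eval(p, pt):  # Horner evaluation of p at pt
          acc = 0
          for c in reversed(p):
              acc = gmul(acc, pt) ^ c
          return acc

      g = poly_mul([EXP[1], 1], [EXP[2], 1])        # g(x) = (x - a)(x - a^2); '-' is '+' in GF(2^m)
      code = poly_mul([0x12, 0x34, 0x56, 0x78], g)  # any multiple of g(x) is a valid codeword

      recv = list(code)
      recv[3] ^= 0x5a                               # inject a single symbol error at position 3

      S1, S2 = poly_eval(recv, EXP[1]), poly_eval(recv, EXP[2])
      loc = LOG[gdiv(S2, S1)]                       # S2/S1 = alpha^loc  -> error position
      val = gdiv(gmul(S1, S1), S2)                  # S1^2/S2            -> error value
      recv[loc] ^= val                              # correct in place
      assert loc == 3 and val == 0x5a and recv == code

    The point of such direct formulas is that the position and value come from a few Galois-field table lookups, which is what makes high-speed operation feasible; the details of the codes treated in the paper differ.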

  14. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

    Accurate predictions of equation of state (EOS), ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they can have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  15. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  16. Global MHD simulation of magnetosphere using HPF

    NASA Astrophysics Data System (ADS)

    Ogino, T.

    We have translated a three-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere, fully vectorized and fully parallelized in VPP Fortran, from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer. The overall performance and capability of the HPF MHD code were shown to be almost comparable to those of the VPP Fortran version. A three-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops, an efficiency of 76.5% relative to catalog peak values, using 56 PEs of the Fujitsu VPP5000/56 in vector and parallel computation. We conclude that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and that a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.
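
    For reference, the efficiency figure quoted here is conventionally the ratio of sustained performance to the aggregate catalog (peak) performance of the processors used, which we assume is the definition intended:

      $$ E \;=\; \frac{R_{\mathrm{sustained}}}{N_{\mathrm{PE}}\; R_{\mathrm{peak,PE}}}, $$

    so a sustained rate of over 400 Gflops on 56 PEs corresponds to the reported 76.5% of the machine's catalog peak.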

  17. Towards industrial-strength Navier-Stokes codes

    NASA Technical Reports Server (NTRS)

    Jou, Wen-Huei; Wigton, Laurence B.; Allmaras, Steven R.

    1992-01-01

    In this paper we discuss our experiences with Navier-Stokes (NS) codes using central differencing (CD) and scalar artificial dissipation (SAD). The NS-CDSAD codes have been developed by several researchers. Our results confirm that for typical commercial transport wing and wing/body configurations flying at transonic conditions with all turbulent boundary layers, NS-CDSAD codes, when used with the Johnson-King turbulence model, are capable of computing pressure distributions in excellent agreement with experimental data. However, results are not as good when laminar boundary layers are present. Exhaustive 2-D grid refinement studies supported by detailed analysis suggest that the numerical errors associated with SAD severely contaminate the solution in the laminar portion of the boundary layer. It is left as a challenge to the CFD community to find and fix the problems with Navier-Stokes codes and to produce a NS code which converges reliably and properly captures the laminar portion of the boundary layer on a reasonable grid.

  18. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines

    USGS Publications Warehouse

    Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.

    1988-01-01

    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions to high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities, and individual-ion activity coefficients. A data base of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
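
    As a schematic of the Pitzer virial-coefficient approach that the code implements (the standard single-electrolyte form in our notation; the program itself handles mixed electrolytes through the full interaction-parameter database), the mean activity coefficient of a salt MX at molality $m$ has the structure

      $$ \ln \gamma_{\pm} \;=\; |z_M z_X|\, f^{\gamma}(I) \;+\; m\,\frac{2\nu_M \nu_X}{\nu}\, B^{\gamma}_{MX}(I) \;+\; m^2\,\frac{2(\nu_M \nu_X)^{3/2}}{\nu}\, C^{\gamma}_{MX}, $$

      $$ f^{\gamma}(I) \;=\; -A_{\phi}\left[\frac{\sqrt{I}}{1 + b\sqrt{I}} \;+\; \frac{2}{b}\,\ln\!\left(1 + b\sqrt{I}\right)\right], \qquad b = 1.2\ \mathrm{kg^{1/2}\,mol^{-1/2}}, $$

    where $I$ is the ionic strength, $A_{\phi}$ the Debye-Hückel slope for the osmotic coefficient, and $B^{\gamma}_{MX}$ and $C^{\gamma}_{MX}$ the ionic-strength-dependent second and third virial coefficients built from the tabulated interaction parameters $\beta^{(0)}$, $\beta^{(1)}$, and $C^{\phi}$.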

  19. Assessment of Near-Field Sonic Boom Simulation Tools

    NASA Technical Reports Server (NTRS)

    Casper, J. H.; Cliff, S. E.; Thomas, S. D.; Park, M. A.; McMullen, M. S.; Melton, J. E.; Durston, D. A.

    2008-01-01

    A recent study for the Supersonics Project, within the National Aeronautics and Space Administration, has been conducted to assess current in-house capabilities for the prediction of near-field sonic boom. Such capabilities are required to simulate the highly nonlinear flow near an aircraft, wherein a sonic-boom signature is generated. There are many available computational fluid dynamics codes that could be used to provide the near-field flow for a sonic boom calculation. However, such codes have typically been developed for aerodynamic configuration applications, for which an efficiently generated computational mesh is usually not optimal for sonic boom prediction. Preliminary guidelines are suggested to characterize a state-of-the-art sonic boom prediction methodology. The available simulation tools that are best suited to incorporate into that methodology are identified; preliminary test cases are presented in support of the selection. During this phase of process definition and tool selection, parallel research was conducted in an attempt to establish criteria that link the properties of a computational mesh to the accuracy of a sonic boom prediction. Such properties include sufficient grid density near shocks and within the zone of influence, which are achieved by adaptation and mesh refinement strategies. Prediction accuracy is validated by comparison with wind tunnel data.

  20. OpenACC acceleration of an unstructured CFD solver based on a reconstructed discontinuous Galerkin method for compressible flows

    DOE PAGES

    Xia, Yidong; Lou, Jialin; Luo, Hong; ...

    2015-02-09

    Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires minimal code intrusion and algorithm alteration to upgrade a legacy solver with GPU computing capability at very little extra programming effort, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention caused by the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
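
    The face-coloring step mentioned above groups the faces so that no two faces in one group touch a common cell; the face integrals within a group can then be scattered to cell residuals concurrently without write conflicts or atomics. A language-neutral sketch of the idea (hypothetical data layout, written in Python; the solver's actual algorithm and data structures may differ):

      def color_faces(face_cells):
          """Greedy face coloring: faces sharing a cell never share a color.

          face_cells: list of (left_cell, right_cell) pairs; right_cell is None
          for a boundary face. Returns a list of face-index groups.
          """
          cell_colors = {}          # cell id -> set of colors already touching that cell
          groups = []               # groups[c] = list of face indices with color c
          for f, pair in enumerate(face_cells):
              cells = [c for c in pair if c is not None]
              used = set()
              for c in cells:
                  used |= cell_colors.setdefault(c, set())
              color = 0
              while color in used:  # smallest color unused by any adjacent cell
                  color += 1
              if color == len(groups):
                  groups.append([])
              groups[color].append(f)
              for c in cells:
                  cell_colors[c].add(color)
          return groups

      # Toy mesh: 4 cells in a row, 5 faces (two of them boundary faces)
      print(color_faces([(None, 0), (0, 1), (1, 2), (2, 3), (3, None)]))
      # -> [[0, 2, 4], [1, 3]]: each group can be processed as one conflict-free parallel loop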

  1. CRAC2 model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.

  2. Validation of light water reactor calculation methods and JEF-1-based data libraries by TRX and BAPL critical experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paratte, J.M.; Pelloni, S.; Grimm, P.

    1991-04-01

    This paper analyzes the capability of various code systems and JEF-1-based nuclear data libraries to compute light water reactor lattices by comparing calculations with results from thermal reactor benchmark experiments TRX and BAPL and with previously published values. With the JEF-1 evaluation, eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and all methods give reasonable results for the measured reaction rate ratios within, or not too far from, the experimental uncertainty.

  3. Recent advances in hypersonic technology

    NASA Technical Reports Server (NTRS)

    Dwoyer, Douglas L.

    1990-01-01

    This paper will focus on recent advances in hypersonic aerodynamic prediction techniques. Current capabilities of existing numerical methods for predicting high Mach number flows will be discussed and shortcomings will be identified. Physical models available for inclusion into modern codes for predicting the effects of transition and turbulence will also be outlined and their limitations identified. Chemical reaction models appropriate to high-speed flows will be addressed, and the impact of their inclusion in computational fluid dynamics codes will be discussed. Finally, the problem of validating predictive techniques for high Mach number flows will be addressed.

  4. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  5. TRAC-P1: an advanced best estimate computer program for PWR LOCA analysis. I. Methods, models, user information, and programming details

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1978-05-01

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos Scientific Laboratory (LASL) to provide an advanced "best estimate" predictive capability for the analysis of postulated accidents in light water reactors (LWRs). TRAC-P1 provides this analysis capability for pressurized water reactors (PWRs) and for a wide variety of thermal-hydraulic experimental facilities. It features a three-dimensional treatment of the pressure vessel and associated internals; two-phase nonequilibrium hydrodynamics models; flow-regime-dependent constitutive equation treatment; reflood tracking capability for both bottom flood and falling film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The TRAC-P1 User's Manual is composed of two separate volumes. Volume I gives a description of the thermal-hydraulic models and numerical solution methods used in the code. Detailed programming and user information is also provided. Volume II presents the results of the developmental verification calculations.

  6. Unit cell geometry of multiaxial preforms for structural composites

    NASA Technical Reports Server (NTRS)

    Ko, Frank; Lei, Charles; Rahman, Anisur; Du, G. W.; Cai, Yun-Jia

    1993-01-01

    The objective of this study is to investigate the yarn geometry of multiaxial preforms. The importance of multiaxial preforms for structural composites is well recognized by the industry but, to exploit their full potential, engineering design rules must be established. This study is a step in that direction. In this work the preform geometry for knitted and braided preforms was studied by making a range of well-designed samples and examining them by photomicroscopy. The structural geometry of the preforms is related to the processing parameters. Based on solid modeling and B-spline methodology, a software package was developed. This computer code enables real-time structural representations of complex fiber architecture based on the rules of preform manufacturing. The code has the capability of zooming and section plotting. These capabilities provide a powerful means to study the effect of processing variables on the preform geometry. The code can also be extended to an automatic mesh generator for downstream structural analysis using the finite element method. This report is organized into six sections. In the first section the scope and background of this work are elaborated. In section two the unit cell geometries of braided and multi-axial warp-knitted preforms are discussed. The theoretical framework of yarn path modeling and solid modeling is presented in section three. The thin-section microscopy carried out to observe the structural geometry of the preforms is the subject of section four. The structural geometry is related to the processing parameters in section five. Section six documents the implementation of the modeling techniques into the computer code MP-CAD. A user manual for the software is also presented here. The source codes and published papers are listed in the Appendices.

  7. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  8. Distributed multitasking ITS with PVM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, W.C.; Halbleib, J.A. Sr.

    1995-12-31

    Advances in computer hardware and communication software have made it possible to perform parallel-processing computing on a collection of desktop workstations. For many applications, multitasking on a cluster of high-performance workstations has achieved performance comparable to or better than that on a traditional supercomputer. From the point of view of cost-effectiveness, it also allows users to exploit available but unused computational resources and thus achieve a higher performance-to-cost ratio. Monte Carlo calculations are inherently parallelizable because the individual particle trajectories can be generated independently with minimum need for interprocessor communication. Furthermore, the number of particle histories that can be generated in a given amount of wall-clock time is nearly proportional to the number of processors in the cluster. This is an important fact because the inherent statistical uncertainty in any Monte Carlo result decreases as the number of histories increases. For these reasons, researchers have expended considerable effort to take advantage of different parallel architectures for a variety of Monte Carlo radiation transport codes, often with excellent results. The initial interest in this work was sparked by the multitasking capability of the MCNP code on a cluster of workstations using the Parallel Virtual Machine (PVM) software. On a 16-machine IBM RS/6000 cluster, it has been demonstrated that MCNP runs ten times as fast as on a single-processor CRAY YMP. In this paper, we summarize the implementation of a similar multitasking capability for the coupled electron-photon transport code system, the Integrated TIGER Series (ITS), and the evaluation of two load-balancing schemes for homogeneous and heterogeneous networks.
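
    A minimal illustration of why Monte Carlo transport parallelizes so naturally (a generic Python sketch, not the ITS/PVM implementation): each worker runs independent histories with its own random stream, the partial tallies are summed at the end, and the statistical uncertainty of the combined estimate falls roughly as one over the square root of the total number of histories.

      import math
      from concurrent.futures import ProcessPoolExecutor

      def run_histories(args):
          """Toy batch: count particles crossing a purely absorbing slab 5 mean free paths thick."""
          seed, n_histories = args
          import random
          rng = random.Random(seed)            # independent stream per worker
          tally = 0
          for _ in range(n_histories):
              path = -math.log(rng.random())   # sampled free path, in mean-free-path units
              if path > 5.0:
                  tally += 1
          return tally, n_histories

      if __name__ == "__main__":
          jobs = [(seed, 250_000) for seed in range(8)]          # 8 workers, 2e6 histories total
          with ProcessPoolExecutor(max_workers=8) as pool:
              results = list(pool.map(run_histories, jobs))
          hits = sum(t for t, _ in results)
          total = sum(n for _, n in results)
          p = hits / total
          sigma = math.sqrt(p * (1.0 - p) / total)               # uncertainty ~ 1/sqrt(total)
          print(f"penetration fraction {p:.5f} +/- {sigma:.5f} (analytic exp(-5) = {math.exp(-5):.5f})")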

  9. Computational methods for fracture analysis of heavy-section steel technology (HSST) pressure vessel experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, B.R.; Bryan, R.H.; Bryson, J.W.

    This paper summarizes the capabilities and applications of the general-purpose and special-purpose computer programs that have been developed for use in fracture mechanics analyses of HSST pressure vessel experiments. Emphasis is placed on the OCA/USA code, which is designed for analysis of pressurized-thermal-shock (PTS) conditions, and on the ORMGEN/ADINA/ORVIRT system which is used for more general analysis. Fundamental features of these programs are discussed, along with applications to pressure vessel experiments.

  10. A project based on multi-configuration Dirac-Fock calculations for plasma spectroscopy

    NASA Astrophysics Data System (ADS)

    Comet, M.; Pain, J.-C.; Gilleron, F.; Piron, R.

    2017-09-01

    We present a project dedicated to hot plasma spectroscopy based on a Multi-Configuration Dirac-Fock (MCDF) code, initially developed by J. Bruneau. The code is briefly described and the use of the transition state method for plasma spectroscopy is detailed. Then an opacity code for local-thermodynamic-equilibrium plasmas using MCDF data, named OPAMCDF, is presented. Transition arrays for which the number of lines is too large to be handled in a Detailed Line Accounting (DLA) calculation can be modeled within the Partially Resolved Transition Array method or using the Unresolved Transition Arrays formalism in jj-coupling. An improvement of the original Partially Resolved Transition Array method is presented which gives better agreement with DLA computations. Comparisons with some absorption and emission experimental spectra are shown. Finally, the capability of the MCDF code to compute atomic data required for collisional-radiative modeling of plasmas at non-local thermodynamic equilibrium is illustrated. In addition to photoexcitation, this code can be used to calculate photoionization, electron impact excitation and ionization cross-sections, as well as autoionization rates in the Distorted-Wave or Close Coupling approximations. Comparisons with cross-sections and rates available in the literature are discussed.

  11. Prediction of effects of wing contour modifications on low-speed maximum lift and transonic performance for the EA-6B aircraft

    NASA Technical Reports Server (NTRS)

    Allison, Dennis O.; Waggoner, E. G.

    1990-01-01

    Computational predictions of the effects of wing contour modifications on maximum lift and transonic performance were made and verified against low-speed and transonic wind tunnel data. This effort was part of a program to improve the maneuvering capability of the EA-6B electronics countermeasures aircraft, which evolved from the A-6 attack aircraft. The predictions were based on results from three computer codes which all include viscous effects: MCARF, a 2-D subsonic panel code; TAWFIVE, a transonic full potential code; and WBPPW, a transonic small disturbance potential flow code. The modifications were previously designed with the aid of these and other codes. The wing modifications consist of contour changes to the leading edge slats and trailing edge flaps and were designed for increased maximum lift with minimum effect on transonic performance. The predictions of the effects of the modifications are presented, with emphasis on verification through comparisons with wind tunnel data from the National Transonic Facility. Attention is focused on increments in low-speed maximum lift and increments in transonic lift, pitching moment, and drag resulting from the contour modifications.

  12. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    Computational Fluid Dynamics (CFD) has evolved considerably in the last decade. There are many computer programs that can perform computations on viscous internal or external flows with chemical reactions. CFD has become a commonly used tool in the design and analysis of gas turbines, ramjet combustors, turbo-machinery, inlet ducts, rocket engines, jet interaction, missiles, and ramjet nozzles. One of the problems of interest to NASA has always been the performance prediction for rocket and air-breathing engines. Due to the complexity of flow in these engines it is necessary to resolve the flowfield into a fine mesh to capture quantities like turbulence and heat transfer. However, calculation on a high-resolution grid is associated with a prohibitively increasing computational time that can downgrade the value of CFD for practical engineering calculations. The Liquid Thrust Chamber Performance (LTCP) code was developed for NASA/MSFC (Marshall Space Flight Center) to perform liquid rocket engine performance calculations. This code is a 2D/axisymmetric full Navier-Stokes (NS) solver with fully coupled finite rate chemistry and Eulerian treatment of liquid fuel and/or oxidizer droplets. One of the advantages of this code has been the resemblance of its input file to the JANNAF (Joint Army Navy NASA Air Force Interagency Propulsion Committee) standard TDK code, and its automatic grid generation for JANNAF-defined combustion chamber wall geometry. These options minimize the learning effort for TDK users, and make the code a good candidate for performing engineering calculations. Although the LTCP code was developed for liquid rocket engines, it is a general-purpose code and has been used for solving many engineering problems. However, the single-zone formulation of the LTCP has limited the code's applicability to problems with complex geometry. Furthermore, the computational time becomes prohibitively large for high-resolution problems with chemistry, a two-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code was rewritten to include a multi-zone capability with domain decomposition, making it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
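
    The multi-zone strategy amounts to splitting the computational domain into sub-domains that are advanced on separate processors, with interface data exchanged between neighboring zones each iteration. A toy sketch of the bookkeeping (illustrative Python only; the zones in the actual code follow the chamber geometry rather than a 1-D index range):

      def partition_zones(n_points, n_zones):
          """Split grid indices 0..n_points-1 into n_zones contiguous sub-domains,
          handing any remainder out one extra point per leading zone."""
          base, extra = divmod(n_points, n_zones)
          zones, start = [], 0
          for z in range(n_zones):
              size = base + (1 if z < extra else 0)
              zones.append((start, start + size))   # half-open index range owned by zone z
              start += size
          return zones

      print(partition_zones(1003, 8))
      # -> [(0, 126), (126, 252), (252, 378), (378, 503), ...]; each zone runs on its own
      #    processor and exchanges ghost-point data with its neighbors every iteration.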

  13. Quick reproduction of blast-wave flow-field properties of nuclear, TNT, and ANFO explosions

    NASA Astrophysics Data System (ADS)

    Groth, C. P. T.

    1986-04-01

    In many instances, extensive blast-wave flow-field properties are required in gasdynamics research studies of blast-wave loading and structure response, and in evaluating the effects of explosions on their environment. This report provides a very useful computer code, which can be used in conjunction with the DNA Nuclear Blast Standard subroutines and code, to quickly reconstruct complete and fairly accurate blast-wave data for almost any free-air (spherical) and surface-burst (hemispherical) nuclear, trinitrotoluene (TNT), or ammonium nitrate-fuel oil (ANFO) explosion. This code is capable of computing all of the main flow properties as functions of radius and time, as well as providing additional information regarding air viscosity, reflected shock-wave properties, and the initial decay of the flow properties just behind the shock front. Both spatial and temporal distributions of the major blast-wave flow properties are also made readily available. Finally, provisions are also included in the code to provide additional information regarding the peak or shock-front flow properties over a range of radii, for a specific explosion of interest.

  14. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and discusses adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
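
    The code-profiling step mentioned here, i.e., finding which subroutines consume most of the run time, follows the same workflow in any language; the sketch below is a generic illustration using Python's built-in profiler (EnergyPlus itself, being a compiled code, would be profiled with native tools), with stand-in routine names:

      import cProfile
      import io
      import pstats

      def heat_balance_stub(n):
          """Stand-in for an expensive simulation subroutine."""
          return sum(i * i for i in range(n))

      def simulation_stub(timesteps=2000):
          total = 0.0
          for _ in range(timesteps):
              total += heat_balance_stub(5000)
          return total

      profiler = cProfile.Profile()
      profiler.enable()
      simulation_stub()
      profiler.disable()

      # Rank routines by cumulative time to find the hot spots worth optimizing.
      stream = io.StringIO()
      pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
      print(stream.getvalue())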

  15. On fully three-dimensional resistive wall mode and feedback stabilization computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strumberger, E.; Merkel, P.; Sempf, M.

    2008-05-15

    Resistive walls, located close to the plasma boundary, reduce the growth rates of external kink modes to resistive time scales. For such slowly growing resistive wall modes, stabilization by an active feedback system becomes feasible. The fully three-dimensional stability code STARWALL and the feedback optimization code OPTIM have been developed [P. Merkel and M. Sempf, 21st IAEA Fusion Energy Conference 2006, Chengdu, China (International Atomic Energy Agency, Vienna, 2006), paper TH/P3-8] to compute the growth rates of resistive wall modes in the presence of nonaxisymmetric, multiply connected wall structures and to model the active feedback stabilization of these modes. In order to demonstrate the capabilities of the codes and to study the effect of the toroidal mode coupling caused by multiply connected wall structures, the codes are applied to test equilibria using the resistive wall structures currently under debate for ITER [M. Shimada et al., Nucl. Fusion 47, S1 (2007)] and ASDEX Upgrade [W. Koeppendoerfer et al., Proceedings of the 16th Symposium on Fusion Technology, London, 1990 (Elsevier, Amsterdam, 1991), Vol. 1, p. 208].

  16. Code Modernization of VPIC

    NASA Astrophysics Data System (ADS)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  17. Three-dimensional multigrid Navier-Stokes computations for turbomachinery applications

    NASA Astrophysics Data System (ADS)

    Subramanian, S. V.

    1989-07-01

    The fully three-dimensional, time-dependent compressible Navier-Stokes equations in cylindrical coordinates are used, in conjunction with a multistage Runge-Kutta numerical integration scheme for the solution of the governing flow equations, to simulate complex flowfields within turbomachinery components, capturing the pertinent effects of viscosity, compressibility, blade rotation, and tip clearance. Computed results are presented for selected cascades, emphasizing the code's capability to accurately predict features such as airfoil loadings, exit flow angles, shocks, and secondary flows. Computations for several test cases have been performed on a Cray Y-MP, using nearly 90,000 grid points.
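
    For reference, a common form of the explicit multistage Runge-Kutta update used in many compressible-flow solvers is sketched below; the number of stages m and the stage coefficients α_k are illustrative placeholders, since the abstract does not give the exact scheme used in this work.

    ```latex
    q^{(0)} = q^{n}, \qquad
    q^{(k)} = q^{(0)} - \alpha_k \,\Delta t\, R\!\left(q^{(k-1)}\right), \quad k = 1,\dots,m, \qquad
    q^{n+1} = q^{(m)}
    ```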

  18. The investigation of tethered satellite system dynamics

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1984-01-01

    Tethered satellite system (TSS) dynamics were studied. The dynamic response of the TSS during the entire stationkeeping phase of the first electrodynamic mission was investigated, and an out-of-plane swing amplitude and bowing of the tether were observed. The dynamics of the slack tether were studied, and the computer code SLACK2 was improved in both capabilities and computational speed. The hazard related to tether breakage or plasma contactor failure was examined, and preliminary values of the potential difference after the failure and of the drop of the electric field along the tether axis were computed. An update of the satellite rotational dynamics model was initiated.

  19. A Perspective on Computational Aerothermodynamics at NASA

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    The evolving role of computational aerothermodynamics (CA) within NASA over the past 20 years is reviewed. The presentation highlights contributions to understanding the Space Shuttle pitching moment anomaly observed in the first shuttle flight, prediction of a static instability for Mars Pathfinder, and the use of CA for damage assessment in post-Columbia mission support. In the view forward, several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented to illustrate capabilities and limitations. Opportunities to advance the state-of-art in algorithms, grid generation and adaptation, and code validation are identified.

  20. Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code

    NASA Technical Reports Server (NTRS)

    Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.

    2003-01-01

    Numerical modeling of the Pulsed Inductive Thruster with the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and of the code's capability to capture the pertinent physical processes. Computed impulse values for helium and argon propellants show excellent correlation with the experimental data over a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and the mass-injection scheme were investigated and found to produce only trivial changes in overall performance. An idealized model for these energy levels and propellants indicates that the energy expended in internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.

  1. An Idealized, Single Radial Swirler, Lean-Direct-Injection (LDI) Concept Meshing Script

    NASA Technical Reports Server (NTRS)

    Iannetti, Anthony C.; Thompson, Daniel

    2008-01-01

    To easily study combustor design parameters using computational fluid dynamics codes (CFD), a Gridgen Glyph-based macro (based on the Tcl scripting language) dubbed BladeMaker has been developed for the meshing of an idealized, single radial swirler, lean-direct-injection (LDI) combustor. BladeMaker is capable of taking in a number of parameters, such as blade width, blade tilt with respect to the perpendicular, swirler cup radius, and grid densities, and producing a three-dimensional meshed radial swirler with a can-annular (canned) combustor. This complex script produces a data format suitable for but not specific to the National Combustion Code (NCC), a state-of-the-art CFD code developed for reacting flow processes.

  2. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  3. Unsteady transonic potential flow over a flexible fuselage

    NASA Technical Reports Server (NTRS)

    Gibbons, Michael D.

    1993-01-01

    A flexible fuselage capability has been developed and implemented within version 1.2 of the CAP-TSD code. The capability required adding time dependent terms to the fuselage surface boundary conditions and the fuselage surface pressure coefficient. The new capability will allow modeling the effect of a flexible fuselage on the aeroelastic stability of complex configurations. To assess the flexible fuselage capability several steady and unsteady calculations have been performed for slender fuselages with circular cross-sections. Steady surface pressures are compared with experiment at transonic flight conditions. Unsteady cross-sectional lift is compared with other analytical results at a low subsonic speed and a transonic case has been computed. The comparisons demonstrate the accuracy of the flexible fuselage modifications.

  4. Time-Dependent Simulations of Turbopump Flows

    NASA Technical Reports Server (NTRS)

    Kris, Cetin C.; Kwak, Dochan

    2001-01-01

    The objective of the current effort is to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine, including high-fidelity unsteady turbopump flow analysis. This capability is needed to support the design of pump sub-systems for advanced space transportation vehicles that are likely to involve liquid propulsion systems. To date, computational tools for design/analysis of turbopump flows have been based on relatively lower-fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available for real-world engineering applications. The present effort will provide developers with information such as transient flow phenomena at start-up, the impact of non-uniform inflows, system vibration, and the impact on the structure. In this paper, the progress toward the capability of complete simulation of the turbopump for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. The relative motion of the grid systems for the rotor-stator interaction was obtained using overset grid techniques. The time-accuracy of the scheme has been evaluated with simple test cases. Unsteady computations for the SSME turbopump, which contains 114 zones with 34.5 million grid points, are carried out on Origin 2000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability will be presented along with the performance of parallel versions of the code.

  5. High temperature composite analyzer (HITCAN) user's manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Lackney, J. J.; Singhal, S. N.; Murthy, P. L. N.; Gotsis, P.

    1993-01-01

    This manual describes how to use the computer code HITCAN (HIgh Temperature Composite ANalyzer). HITCAN is a general purpose computer program for predicting nonlinear global structural and local stress-strain response of arbitrarily oriented, multilayered high temperature metal matrix composite structures. This code combines composite mechanics and laminate theory with an internal data base for material properties of the constituents (matrix, fiber and interphase). The thermo-mechanical properties of the constituents are considered to be nonlinearly dependent on several parameters including temperature, stress and stress rate. The computation procedure for the analysis of the composite structures uses the finite element method. HITCAN is written in FORTRAN 77 and at present has been configured and executed on the NASA Lewis Research Center CRAY XMP and YMP computers. This manual describes HITCAN's capabilities and limitations followed by input/execution/output descriptions and example problems. The input is described in detail including (1) geometry modeling, (2) types of finite elements, (3) types of analysis, (4) material data, (5) types of loading, (6) boundary conditions, (7) output control, (8) program options, and (9) data bank.

  6. A programming environment for distributed complex computing. An overview of the Framework for Interdisciplinary Design Optimization (FIDO) project. NASA Langley TOPS exhibit H120b

    NASA Technical Reports Server (NTRS)

    Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.

    1993-01-01

    The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.

  7. Seals Code Development Workshop

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)

    1996-01-01

    The industrial code (INDSEAL) release from the 1995 Seals Workshop includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain capability with rotordynamics. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and their results compared. Current computational and experimental emphasis includes multiple connected cavity flows with the goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean-sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.

  8. WIND Flow Solver Released

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.

    1999-01-01

    The WIND code is a general-purpose, structured, multizone, compressible flow solver that can be used to analyze steady or unsteady flow for a wide range of geometric configurations and over a wide range of flow conditions. WIND is the latest product of the NPARC Alliance, a formal partnership between the NASA Lewis Research Center and the Air Force Arnold Engineering Development Center (AEDC). WIND Version 1.0 was released in February 1998, and Version 2.0 will be released in February 1999. The WIND code represents a merger of the capabilities of three existing computational fluid dynamics codes--NPARC (the original NPARC Alliance flow solver), NXAIR (an Air Force code used primarily for unsteady store separation problems), and NASTD (the primary flow solver at McDonnell Douglas, now part of Boeing).

  9. TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow

    NASA Technical Reports Server (NTRS)

    Chang, J. F.; Lan, C. Edward

    1987-01-01

    The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. The theoretical method, program capabilities, input format, output variables, and program execution are summarized. Input data for sample test cases and the corresponding output are given.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorson, L.D.

    A description is given of a new version of the TRUMP (UCRL-14754) computer code, NOTRUMP, which runs on both the CDC-7600 and CRAY-1. There are slight differences in the input and major changes in output capability. A postprocessor, AFTER, is available to manipulate some of the new output features. Old data decks for TRUMP will normally run with only minor changes.

  11. Coupled multi-disciplinary composites behavior simulation

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.; Murthy, Pappu L. N.; Chamis, Christos C.

    1993-01-01

    The capabilities of the computer code CSTEM (Coupled Structural/Thermal/Electro-Magnetic Analysis) are discussed and demonstrated. CSTEM computationally simulates the coupled response of layered multi-material composite structures subjected to simultaneous thermal, structural, vibration, acoustic, and electromagnetic loads and includes the effect of aggressive environments. The composite material behavior and structural response are determined at their various inherent scales: constituents (fiber/matrix), ply, laminate, and structural component. The thermal and mechanical properties of the constituents are considered to be nonlinearly dependent on various parameters such as temperature and moisture. The acoustic and electromagnetic properties also include dependence on vibration and electromagnetic wave frequencies, respectively. The simulation is based on a three-dimensional finite element analysis in conjunction with composite mechanics, structural tailoring codes, and acoustic and electromagnetic analysis methods. An aircraft engine composite fan blade is selected as a typical structural component to demonstrate the CSTEM capabilities. Results of various coupled multi-disciplinary heat transfer, structural, vibration, acoustic, and electromagnetic analyses for temperature distribution, stress and displacement response, deformed shape, vibration frequencies, mode shapes, acoustic noise, and electromagnetic reflection from the fan blade are discussed for their coupled effects in hot and humid environments. Collectively, these results demonstrate the effectiveness of the CSTEM code in capturing the coupled effects on the various responses of composite structures subjected to simultaneous multiple real-life loads.

  12. Xyce Parallel Electronic Simulator Users' Guide Version 6.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
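
    For context, the differential-algebraic-equation formulation mentioned above can be written generically as F(ẋ, x, t) = 0; in circuit simulation it commonly takes the charge/flux form sketched below. This is the standard textbook form, not a statement of Xyce's internal equations, and the symbols q (charges/fluxes), f (static currents), and b (sources) are generic.

    ```latex
    \frac{d}{dt}\, q\big(x(t)\big) + f\big(x(t)\big) + b(t) = 0
    ```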

  13. Applications of Computational Methods for Dynamic Stability and Control Derivatives

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Spence, Angela M.

    2004-01-01

    Initial steps in the application of a low-order panel-method computational fluid dynamics (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques; they include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.

  14. Aeras: A next generation global atmosphere model

    DOE PAGES

    Spotz, William F.; Smith, Thomas M.; Demeshko, Irina P.; ...

    2015-06-01

    Sandia National Laboratories is developing a new global atmosphere model named Aeras that is performance portable and supports the quantification of uncertainties. These next-generation capabilities are enabled by building Aeras on top of Albany, a code base that supports the rapid development of scientific application codes while leveraging Sandia's foundational mathematics and computer science packages in Trilinos and Dakota. Embedded uncertainty quantification (UQ) is an original design capability of Albany, and performance portability is a recent upgrade. Other required features, such as shell-type elements, spectral elements, efficient explicit and semi-implicit time-stepping, transient sensitivity analysis, and concurrent ensembles, were not components of Albany as the project began, and have been (or are being) added by the Aeras team. We present early UQ and performance portability results for the shallow water equations.

  15. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, disaster response requires rapid access to large data volumes, substantial storage space, and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation describes work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes incorporating sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the cloud. This project outlines the steps involved in creating and testing open-source process code developed on a local prototype platform, and then transitioning this code, with its associated environment requirements, into an analogous but memory- and processor-enhanced cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
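
    As a minimal sketch of the band math mentioned above (assuming co-registered bands supplied as NumPy arrays; the function names and the small epsilon guard are illustrative and not taken from the cited workflow):

    ```python
    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        return (nir - red) / (nir + red + eps)

    def ndmi(nir: np.ndarray, swir: np.ndarray, eps: float = 1e-9) -> np.ndarray:
        """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)."""
        return (nir - swir) / (nir + swir + eps)
    ```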

  16. FAVOR: A new fracture mechanics code for reactor pressure vessels subjected to pressurized thermal shock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, T.L.

    1993-01-01

    This report discusses probabilistic fracture mechanics (PFM) analysis, which is a major element of the comprehensive probabilistic methodology endorsed by the NRC for evaluation of the integrity of Pressurized Water Reactor (PWR) pressure vessels subjected to pressurized-thermal-shock (PTS) transients. It is anticipated that there will be an increasing need for an improved and validated PTS PFM code which is accepted by the NRC and utilities, as more plants approach the PTS screening criteria and are required to perform plant-specific analyses. The NRC-funded Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratory is currently developing the FAVOR (Fracture Analysis of Vessels: Oak Ridge) PTS PFM code, which is intended to meet this need. The FAVOR code incorporates the most important features of both OCA-P and VISA-II and contains some new capabilities, such as a PFM global modeling methodology, the capability to approximate the effects of thermal streaming on circumferential flaws located inside a plume region created by fluid and thermal stratification, a library of stress intensity factor influence coefficients, generated by the NQA-1 certified ABAQUS computer code, for an adequate range of two- and three-dimensional inside surface flaws, the flexibility to generate a variety of output reports, and user friendliness.

  18. Neoclassical transport in toroidal plasmas with nonaxisymmetric flux surfaces

    DOE PAGES

    Belli, Emily A.; Candy, Jefferey M.

    2015-04-15

    The capability to treat nonaxisymmetric flux surface geometry has been added to the drift-kinetic code NEO. Geometric quantities (i.e., metric elements) are supplied by a recently developed local 3D equilibrium solver, allowing neoclassical transport coefficients to be systematically computed while varying the 3D plasma shape in a simple and intuitive manner. Code verification is accomplished via detailed comparison with 3D Pfirsch–Schlüter theory. A discussion of the various collisionality regimes associated with 3D transport is given, with an emphasis on non-ambipolar particle flux, neoclassical toroidal viscosity, energy flux and bootstrap current. Finally, we compute the transport in the presence of ripple-type perturbations in a DIII-D-like H-mode edge plasma.

  19. Manned systems utilization analysis (study 2.1). Volume 5: Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1975-01-01

    The LOVES computer code developed to investigate the concept of space servicing operational satellites as an alternative to replacing expendable satellites or returning satellites to earth for ground refurbishment is presented. In addition to having the capability to simulate the expendable satellite operation and the ground refurbished satellite operation, the program is designed to simulate the logistics of space servicing satellites using an upper stage vehicle and/or the earth to orbit shuttle. The program not only provides for the initial deployment of the satellite but also simulates the random failure and subsequent replacement of various equipment modules comprising the satellite. The program has been used primarily to conduct trade studies and/or parametric studies of various space program operational philosophies.

  20. The Electric Propulsion Interactions Code (EPIC)

    NASA Technical Reports Server (NTRS)

    Mikellides, I. G.; Mandell, M. J.; Kuharski, R. A.; Davis, V. A.; Gardner, B. M.; Minor, J.

    2004-01-01

    Science Applications International Corporation is currently developing the Electric Propulsion Interactions Code, EPIC, as part of a project sponsored by the Space Environments and Effects Program at the NASA Marshall Space Flight Center. Now in its second year of development, EPIC is an interactive computer tool that allows the construction of a 3-D spacecraft model, and the assessment of a variety of interactions between its subsystems and the plume from an electric thruster. These interactions may include erosion of surfaces due to sputtering and re-deposition of sputtered materials, surface heating, torque on the spacecraft, and changes in surface properties due to erosion and deposition. This paper describes the overall capability of EPIC and provides an outline of the physics and algorithms that comprise many of its computational modules.

  1. Simulating the Thermal Response of High Explosives on Time Scales of Days to Microseconds

    NASA Astrophysics Data System (ADS)

    Yoh, Jack J.; McClelland, Matthew A.

    2004-07-01

    We present an overview of computational techniques for simulating the thermal cookoff of high explosives using a multi-physics hydrodynamics code, ALE3D. Recent improvements to the code have aided our computational capability in modeling the response of energetic materials systems exposed to extreme thermal environments, such as fires. We consider an idealized model process for a confined explosive involving the transition from slow heating to rapid deflagration in which the time scale changes from days to hundreds of microseconds. The heating stage involves thermal expansion and decomposition according to an Arrhenius kinetics model while a pressure-dependent burn model is employed during the explosive phase. We describe and demonstrate the numerical strategies employed to make the transition from slow to fast dynamics.
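
    As a minimal sketch of the Arrhenius decomposition kinetics described above (a single-step, first-order form with illustrative parameters; the actual ALE3D cookoff model uses multi-step kinetics and coupled thermal transport not shown here):

    ```python
    import numpy as np

    R_UNIVERSAL = 8.314  # gas constant, J/(mol*K)

    def arrhenius_rate(temperature_K: float, A: float, Ea: float) -> float:
        """Rate constant k = A * exp(-Ea / (R*T)) for a first-order reaction."""
        return A * np.exp(-Ea / (R_UNIVERSAL * temperature_K))

    def decomposition_step(fraction_remaining: float, k: float, dt: float) -> float:
        """Advance the undecomposed mass fraction over one time step dt."""
        return fraction_remaining * np.exp(-k * dt)
    ```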

  2. NSR&D FY17 Report: CartaBlanca Capability Enhancements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Christopher Curtis; Dhakal, Tilak Raj; Zhang, Duan Zhong

    Over the last several years, particle technology in the CartaBlanca code has matured and has been successfully applied to a wide variety of physical problems. It has been shown that particle methods, especially Los Alamos's dual domain material point method, are capable of computing many problems involving complex physics and chemistry accompanied by large material deformations, where traditional finite element or Eulerian methods encounter significant difficulties. In FY17, the CartaBlanca code was enhanced with new physical models and numerical algorithms. We started out to compute penetration and HE safety problems. For most of the year we focused on improving the TEPLA model and testing it against the sweeping-wave experiment by Gray et al., because it was found that pore growth and material failure are essential to our tasks and need to be understood in order to model the penetration and can experiments efficiently. We extended the TEPLA model from the point of view of ensemble phase averaging to include the effects of finite deformation. It is shown that the assumed pore growth model in TEPLA is actually an exact result from the theory. Along this line, we then generalized the model to include finite deformations in order to account for the nonlinear dynamics of large deformation. The interaction between the HE product gas and the solid metal is based on the multi-velocity formulation. Our preliminary numerical results suggest good agreement between the experiment and the simulations, pending further verification. To improve the parallel processing capabilities of the CartaBlanca code, we are actively working with the Next Generation Code (NGC) project to rewrite selected packages in C++. This work is expected to continue in the following years. This effort also makes the particle technology developed in the CartaBlanca project available to other parts of the Laboratory. Working with the NGC project and rewriting parts of the code has also given us an opportunity to improve our numerical implementation of the method and to take advantage of recent advances in numerical methods, such as multiscale algorithms.

  3. THR-TH: a high-temperature gas-cooled nuclear reactor core thermal hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1984-07-01

    The ORNL version of PEBBLE, the (RZ) pebble bed thermal hydraulics code, has been extended for application to a prismatic gas-cooled reactor core. The supplemental treatment is of one-dimensional coolant flow in up to a three-dimensional core description. Power density data from a neutronics and exposure calculation are used as the basic information for the thermal hydraulics calculation of heat removal. Two-dimensional neutronics results may be expanded for a three-dimensional hydraulics calculation. The geometric description for the hydraulics problem is the same as that used by the neutronics code. A two-dimensional thermal cell model is used to predict temperatures in the fuel channel. The capability is available in the local BOLD VENTURE computation system for reactor core analysis, with the ability to account for the effect of temperature feedback by nuclear cross section correlation. Some enhancements have also been added to the original code to add pebble bed modeling flexibility and to generate useful auxiliary results. For example, an estimate is made of the distribution of fuel temperatures based on average and extreme conditions regularly calculated at a number of locations.

  4. Radar transponder antenna pattern analysis for the space shuttle

    NASA Technical Reports Server (NTRS)

    Radcliff, Roger

    1989-01-01

    In order to improve tracking capability, radar transponder antennas will soon be mounted on the Shuttle solid rocket boosters (SRB). These four antennas, each an identical cavity-backed helix operating at 5.765 GHz, will be mounted near the top of the SRBs, adjacent to the intertank portion of the external tank. The purpose is to calculate the roll-plane pattern (the plane perpendicular to the SRB axes and containing the antennas) in the presence of this complex electromagnetic environment. The large electrical size of this problem mandates an optical (asymptotic) approach. Development of a specific code for this application is beyond the scope of a summer fellowship; thus a general purpose code, the Numerical Electromagnetics Code - Basic Scattering Code, was chosen as the computational tool. This code is based on the modern Geometrical Theory of Diffraction and allows computation of scattering from bodies composed of canonical shapes such as plates and elliptic cylinders. Apertures mounted on a curved surface (the SRB) cannot be handled by the code, so an antenna model consisting of wires excited by a method-of-moments current input was devised to approximate the actual performance of the antennas. The improvised antenna model matched well with measurements taken at the MSFC range. The SRBs, the external tank, and the shuttle nose were modeled as circular cylinders, and the code was able to produce what is thought to be a reasonable roll-plane pattern.

  5. ATHENA 3D: A finite element code for ultrasonic wave propagation

    NASA Astrophysics Data System (ADS)

    Rose, C.; Rupin, F.; Fouquet, T.; Chassignole, B.

    2014-04-01

    Understanding wave propagation phenomena requires robust numerical models. 3D finite element (FE) models are generally prohibitively time consuming, but advances in processor speed and memory are making them increasingly competitive. In this context, EDF R&D developed the 3D version of the well-validated FE code ATHENA2D. The code is dedicated to the simulation of wave propagation in all kinds of elastic media, in particular heterogeneous and anisotropic materials such as welds. It is based on solving the elastodynamic equations in the calculation zone expressed in terms of stresses and particle velocities. A distinctive feature of the code is that the calculation domain is discretized on a regular Cartesian 3D mesh, while a defect of complex geometry is described on a separate (2D) mesh using the fictitious domain method. This combines the speed of regular-mesh computation with the ability to model arbitrarily shaped defects. Furthermore, the time evolution uses a quasi-explicit scheme, so only small local linear systems have to be solved. The final step in reducing computation time is that ATHENA3D has been parallelized and adapted to the use of HPC resources. In this paper, the validation of the 3D FE model is discussed. A cross-validation of ATHENA 3D and CIVA is proposed for several inspection configurations. Performance in terms of calculation time is also presented for both local computer and computation cluster use.
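
    For reference, the first-order velocity-stress form of the elastodynamic equations referred to above is sketched below in index notation; this is the generic formulation, not necessarily the exact discretized form used in ATHENA3D.

    ```latex
    \rho\,\frac{\partial v_i}{\partial t} = \frac{\partial \sigma_{ij}}{\partial x_j} + f_i,
    \qquad
    \frac{\partial \sigma_{ij}}{\partial t} = c_{ijkl}\,\frac{\partial v_k}{\partial x_l}
    ```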

  6. High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy.

    PubMed

    Schwiedrzik, Caspar M; Freiwald, Winrich A

    2017-09-27

    Theories like predictive coding propose that lower-order brain areas compare their inputs to predictions derived from higher-order representations and signal their deviation as a prediction error. Here, we investigate whether the macaque face-processing system, a three-level hierarchy in the ventral stream, employs such a coding strategy. We show that after statistical learning of specific face sequences, the lower-level face area ML computes the deviation of actual from predicted stimuli. But these signals do not reflect the tuning characteristic of ML. Rather, they exhibit identity specificity and view invariance, the tuning properties of higher-level face areas AL and AM. Thus, learning appears to endow lower-level areas with the capability to test predictions at a higher level of abstraction than what is afforded by the feedforward sweep. These results provide evidence for computational architectures like predictive coding and suggest a new quality of functional organization of information-processing hierarchies beyond pure feedforward schemes.

  7. High Performance Radiation Transport Simulations on TITAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M

    2012-01-01

    In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10-20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.

  8. A Computer Code for Gas Turbine Engine Weight And Disk Life Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Ghosn, Louis J.; Halliwell, Ian; Wickenheiser, Tim (Technical Monitor)

    2002-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. In this paper, the major enhancements to NASA's engine-weight estimate computer code (WATE) are described. These enhancements include the incorporation of improved weight-calculation routines for the compressor and turbine disks using the finite-difference technique. Furthermore, the stress distribution for various disk geometries was also incorporated, for a life-prediction module to calculate disk life. A material database, consisting of the material data of most of the commonly-used aerospace materials, has also been incorporated into WATE. Collectively, these enhancements provide a more realistic and systematic way to calculate the engine weight. They also provide additional insight into the design trade-off between engine life and engine weight. To demonstrate the new capabilities, the enhanced WATE code is used to perform an engine weight/life trade-off assessment on a production aircraft engine.

  9. Development of the NASA/FLAGRO computer program for analysis of airframe structures

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Shivakumar, V.; Newman, J. C., Jr.

    1994-01-01

    The NASA/FLAGRO (NASGRO) computer program was developed for fracture control analysis of space hardware and is currently the standard computer code in NASA, the U.S. Air Force, and the European Space Agency (ESA) for this purpose. The significant attributes of the NASGRO program are the numerous crack case solutions, the large materials file, the improved growth rate equation based on crack closure theory, and the user-friendly promptive input features. In support of the National Aging Aircraft Research Program (NAARP), NASGRO is being further developed to provide advanced state-of-the-art capability for damage tolerance and crack growth analysis of aircraft structural problems, including mechanical systems and engines. The project currently involves a cooperative development effort by NASA, FAA, and ESA. The primary tasks underway are the incorporation of advanced methodology for crack growth rate retardation resulting from spectrum loading and improved analysis for determining crack instability. Also, the current weight function solutions in NASGRO for nonlinear stress gradient problems are being extended to more crack cases, and the 2-D boundary integral routine for stress analysis and stress-intensity factor solutions is being extended to 3-D problems. Lastly, effort is underway to enhance the program to operate on personal computers and workstations in a Windows environment. Because of the increasing and already wide usage of NASGRO, the code offers an excellent mechanism for technology transfer of new fatigue and fracture mechanics capabilities developed within NAARP.
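
    For context, the crack-growth relation most widely associated with NASGRO in the literature has the form below, where f is the crack-opening function, R the stress ratio, ΔK_th the threshold stress-intensity range, and K_crit the critical stress intensity; the exact form implemented in the code version described here may differ.

    ```latex
    \frac{da}{dN} \;=\; C\left[\left(\frac{1-f}{1-R}\right)\Delta K\right]^{n}
    \frac{\left(1 - \dfrac{\Delta K_{th}}{\Delta K}\right)^{p}}{\left(1 - \dfrac{K_{\max}}{K_{crit}}\right)^{q}}
    ```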

  10. RELAP-7 Development Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Zhao, Haihua; Gleicher, Frederick Nathan

    RELAP-7 is a nuclear systems safety analysis code being developed at the Idaho National Laboratory, and is the next generation tool in the RELAP reactor safety/systems analysis application series. RELAP-7 development began in 2011 to support the Risk Informed Safety Margins Characterization (RISMC) Pathway of the Light Water Reactor Sustainability (LWRS) program. The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical methods, and physical models in order to provide the capabilities needed for the RISMC methodology and to support nuclear power safety analysis. The code is being developed based on Idaho National Laboratory's modern scientific software development framework, MOOSE (the Multi-Physics Object-Oriented Simulation Environment). The initial development goal of the RELAP-7 approach focused primarily on the development of an implicit algorithm capable of strong (nonlinear) coupling of the dependent hydrodynamic variables contained in the 1-D/2-D flow models with the various 0-D system reactor components that compose boiling water reactor (BWR) and pressurized water reactor nuclear power plants (NPPs). During Fiscal Year (FY) 2015, the RELAP-7 code was further improved with expanded capability to support BWR and pressurized water reactor NPP analysis. The accumulator model has been developed. The code has also been coupled with other MOOSE-based applications, such as the neutronics code RattleSnake and the fuel performance code BISON, to perform multiphysics analysis. A major design requirement for the implicit algorithm in RELAP-7 is that it be capable of second-order discretization accuracy in both space and time, which eliminates the traditional first-order approximation errors. Second-order temporal accuracy is achieved by a second-order backward temporal difference, and second-order accurate one-dimensional spatial discretization is achieved with the Galerkin approximation of Lagrange finite elements. During FY 2015, we performed numerical verification to confirm that the RELAP-7 code indeed achieves second-order accuracy in both time and space for single-phase models at the system level.
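
    For reference, the constant-step form of the second-order backward difference (BDF2) mentioned above is sketched below; RELAP-7's actual variable-step implementation may differ in detail.

    ```latex
    \frac{3\,u^{n+1} - 4\,u^{n} + u^{n-1}}{2\,\Delta t} \;=\; f\!\left(u^{n+1},\, t^{n+1}\right)
    ```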

  11. Addition of equilibrium air to an upwind Navier-Stokes code and other first steps toward a more generalized flow solver

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1991-01-01

    An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.

  12. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, recently released graphics products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors with very fast computing capability and much higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that require high-volume computation, and this interest has given rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet inexpensive and affordable hardware, and so they have become an alternative to conventional processors: graphics chips that were once fixed-function hardware have been transformed into modern, powerful, programmable processors that meet general-purpose needs. The biggest problem is that graphics processing units use programming models unlike current programming methods; efficient GPU programming therefore requires re-coding the existing algorithm with the limitations and structure of the graphics hardware in mind, because multi-core accelerators of this kind cannot be programmed with traditional, event-driven procedural methods. GPUs are especially effective when the same computing steps must be repeated for many data elements and high accuracy is needed, making the computation both faster and more accurate, whereas CPUs, which perform one computation at a time according to flow control, are slower for such workloads. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with those of a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA, and sample images of various sizes were processed and the results evaluated. The GPGPU method is especially useful for repeating the same computations on highly dense data, thus finding the solution quickly and accurately.
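
    As a minimal illustration of the "one data element per thread" mapping described above (a sketch using the numba CUDA bindings for Python; the kernel, array sizes, and launch configuration are illustrative and are not taken from the cited study):

    ```python
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_elements(values, factor, out):
        """Each thread processes exactly one data element."""
        i = cuda.grid(1)               # global thread index
        if i < values.shape[0]:        # guard against surplus threads
            out[i] = values[i] * factor

    values = np.arange(1_000_000, dtype=np.float32)
    d_values = cuda.to_device(values)          # copy input to the GPU
    d_out = cuda.device_array_like(d_values)   # allocate output on the GPU

    threads_per_block = 256
    blocks = (values.size + threads_per_block - 1) // threads_per_block
    scale_elements[blocks, threads_per_block](d_values, np.float32(2.0), d_out)

    out = d_out.copy_to_host()                 # copy the result back to the CPU
    ```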

  13. Successful Completion of FY18/Q1 ASC L2 Milestone 6355: Electrical Analysis Calibration Workflow Capability Demonstration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copps, Kevin D.

    The Sandia Analysis Workbench (SAW) project has developed and deployed a production capability for SIERRA computational mechanics analysis workflows. However, the electrical analysis workflow capability requirements have only been demonstrated in early prototype states, with no real capability deployed for analysts' use. This milestone aims to improve the electrical analysis workflow capability (via SAW and related tools) and deploy it for ongoing use. We propose to focus on a QASPR electrical analysis calibration workflow use case. We will include a number of new capabilities (versus today's SAW), such as: 1) support for the XYCE code workflow component, 2) data management coupled to the electrical workflow, 3) human-in-the-loop workflow capability, and 4) electrical analysis workflow capability deployed on the restricted (and possibly classified) network at Sandia. While far from the complete set of capabilities required for electrical analysis workflow over the long term, this is a substantial first step toward full production support for the electrical analysts.

  14. Equivalent plate modeling for conceptual design of aircraft wing structures

    NASA Technical Reports Server (NTRS)

    Giles, Gary L.

    1995-01-01

    This paper describes an analysis method that generates conceptual-level design data for aircraft wing structures. A key requirement is that this data must be produced in a timely manner so that it can be used effectively by multidisciplinary synthesis codes for performing systems studies. Such a capability is being developed by enhancing an equivalent plate structural analysis computer code to provide a more comprehensive, robust and user-friendly analysis tool. The paper focuses on recent enhancements to the Equivalent Laminated Plate Solution (ELAPS) analysis code that significantly expand the modeling capability and improve the accuracy of results. Modeling additions include use of out-of-plane plate segments for representing winglets and advanced wing concepts such as C-wings along with a new capability for modeling the internal rib and spar structure. The accuracy of calculated results is improved by including transverse shear effects in the formulation and by using multiple sets of assumed displacement functions in the analysis. Typical results are presented to demonstrate these new features. Example configurations include a C-wing transport aircraft, a representative fighter wing and a blended-wing-body transport. These applications are intended to demonstrate and quantify the benefits of using equivalent plate modeling of wing structures during conceptual design.

  15. Advanced Aerospace Materials by Design

    NASA Technical Reports Server (NTRS)

    Srivastava, Deepak; Djomehri, Jahed; Wei, Chen-Yu

    2004-01-01

    The advances in the emerging field of nanophase thermal and structural composite materials; materials with embedded sensors and actuators for morphing structures; light-weight composite materials for energy and power storage; and large-surface-area materials for in-situ resource generation and waste recycling are expected to revolutionize the capabilities of virtually every system comprising future robotic and human Moon and Mars exploration missions. A high-performance multiscale simulation platform, including the computational capabilities and resources of Columbia, the new supercomputer, is being developed to discover, validate, and prototype the next generation of such advanced materials. This exhibit describes the porting and scaling of multiscale physics-based core computer simulation codes for discovering and designing carbon nanotube-polymer composite materials for light-weight load-bearing structural and thermal protection applications.

  16. Numerical modeling of separated flows at moderate Reynolds numbers appropriate for turbine blades and unmanned aero vehicles

    NASA Astrophysics Data System (ADS)

    Castiglioni, Giacomo

    Flows over airfoils and blades in rotating machinery, for unmanned and micro-aerial vehicles, wind turbines, and propellers consist of a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable, but is also the most computationally expensive alternative. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows and still provide high quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA-0012 airfoil at Re_c = 5×10^4 and at 5° of incidence have been performed with an immersed boundary code and a commercial code using body fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when using dissipative numerical schemes. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations the results show a good prediction of the separation point, but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used. The commercial code shows good agreement with the direct numerical simulation benchmark data in both two and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual prediction capabilities of very coarse large eddy simulations with low order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. The numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude or larger than the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows for the computation of the numerical dissipation rate and numerical viscosity in the physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested for a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate, incompressible spectral solver. Afterwards the same procedure is applied for the first time to a realistic flow configuration, specifically to the above discussed laminar separation bubble flow over a NACA 0012 airfoil.
The method appears to be quite robust and its application reveals that for the code and the flow in question the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the previously qualitative finding.
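    A schematic form of such a kinetic-energy budget (notation assumed here for illustration, not transcribed from the cited work) attributes the residual of the resolved-scale balance over a sub-domain Ω to the numerics:

    $$
    \varepsilon_{\mathrm{num}} \;=\; -\,\frac{\mathrm{d}E_k}{\mathrm{d}t}\bigg|_{\Omega} \;+\; \Phi_{\partial\Omega} \;-\; \varepsilon_{\mathrm{visc}},
    \qquad
    \nu_{\mathrm{num}} \;\approx\; \frac{\varepsilon_{\mathrm{num}}}{2\,\langle S_{ij} S_{ij} \rangle_{\Omega}},
    $$

    where E_k is the resolved kinetic energy in Ω, Φ_{∂Ω} the net transport of kinetic energy (including pressure work) into Ω through its boundary, ε_visc the resolved viscous dissipation, and S_ij the resolved strain-rate tensor; the numerical viscosity is then defined by analogy with an eddy-viscosity closure.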

  17. Chemical vapor deposition fluid flow simulation modelling tool

    NASA Technical Reports Server (NTRS)

    Bullister, Edward T.

    1992-01-01

    Accurate numerical simulation of chemical vapor deposition (CVD) processes requires a general purpose computational fluid dynamics package combined with specialized capabilities for high temperature chemistry. In this report, we describe the implementation of these specialized capabilities in the spectral element code NEKTON. The thermal expansion of the gases involved is shown to be accurately approximated by the low Mach number perturbation expansion of the incompressible Navier-Stokes equations. The radiative heat transfer between multiple interacting radiating surfaces is shown to be tractable using the method of Gebhart. The disparate rates of reaction and diffusion in CVD processes are calculated via a point-implicit time integration scheme. We demonstrate the use of these capabilities on prototypical CVD applications.
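    As a minimal sketch of a point-implicit (linearized backward Euler) treatment of stiff source terms, using an assumed two-species reaction rather than NEKTON's actual implementation, the update at a single grid point can look like:

    ```python
    import numpy as np

    def point_implicit_step(Y, dt, source, jacobian):
        """One point-implicit update of the species equations dY/dt = S(Y):
        solve (I - dt*dS/dY) dY = dt*S(Y), so stiff reaction terms are treated
        implicitly while convection/diffusion can remain explicit elsewhere."""
        S = source(Y)                      # reaction source vector
        J = jacobian(Y)                    # dS/dY at the current state
        A = np.eye(len(Y)) - dt * J
        return Y + np.linalg.solve(A, dt * S)

    # Toy reversible reaction A <-> B with disparate (stiff) rates
    kf, kb = 1.0e4, 1.0e2
    source = lambda Y: np.array([-kf * Y[0] + kb * Y[1], kf * Y[0] - kb * Y[1]])
    jacobian = lambda Y: np.array([[-kf, kb], [kf, -kb]])

    Y = np.array([1.0, 0.0])
    for _ in range(100):
        Y = point_implicit_step(Y, dt=1.0e-3, source=source, jacobian=jacobian)
    print(Y)   # approaches equilibrium [kb, kf]/(kf + kb) despite dt*kf >> 1
    ```

    The linearized implicit step remains stable at time steps far larger than the fastest reaction time scale, which is the motivation for point-implicit treatment of chemistry in otherwise explicit flow solvers.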

  18. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  19. Common computational properties found in natural sensory systems

    NASA Astrophysics Data System (ADS)

    Brooks, Geoffrey

    2009-05-01

    Throughout the animal kingdom there are many existing sensory systems with capabilities desired by the human designers of new sensory and computational systems. There are a few basic design principles constantly observed among these natural mechano-, chemo-, and photo-sensory systems, principles that have been proven by the test of time. Such principles include non-uniform sampling and processing, topological computing, contrast enhancement by localized signal inhibition, graded localized signal processing, spiked signal transmission, and coarse coding, which is the computational transformation of raw data using broadly overlapping filters. These principles are outlined here with references to natural biological sensory systems as well as successful biomimetic sensory systems exploiting these natural design concepts.
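    The coarse-coding principle can be illustrated with a minimal sketch (all parameters below are hypothetical and chosen only for illustration): a scalar stimulus is encoded by the responses of a few broadly overlapping Gaussian filters and then recovered from the population response.

    ```python
    import numpy as np

    def coarse_code(x, centers, width):
        """Responses of broadly overlapping Gaussian 'filters' (tuning curves)
        to a scalar stimulus x: a toy illustration of coarse coding."""
        return np.exp(-0.5 * ((x - centers) / width) ** 2)

    centers = np.linspace(0.0, 1.0, 8)          # 8 broadly tuned channels
    responses = coarse_code(0.37, centers, width=0.25)

    # The stimulus can be decoded (approximately) from the whole population,
    # e.g. by a response-weighted average of the channel centers.
    estimate = np.sum(responses * centers) / np.sum(responses)
    print(np.round(responses, 2), round(estimate, 2))
    ```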

  20. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
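    Schematically (with notation assumed here for illustration rather than quoted from the paper), the DMR idea is to let each equation k of the system carry its own correction weights, chosen to minimize the updated residual norm:

    $$
    Q_k^{\,n+1} \;=\; Q_k^{\,n} \;+\; \sum_{m=1}^{M} \omega_{k,m}\, \Delta Q_{k,m},
    \qquad
    \{\omega_{k,m}\} \;=\; \arg\min_{\omega}\, \big\lVert R\!\left(Q^{\,n+1}\right) \big\rVert_{2},
    $$

    where the corrections \Delta Q_{k,m} come from M previous iterates or stages; using a single shared set of weights \omega_m for all equations recovers the conventional MRM.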

  1. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Astrophysics Data System (ADS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-11-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  2. SPLASH program for three dimensional fluid dynamics with free surface boundaries

    NASA Astrophysics Data System (ADS)

    Yamaguchi, A.

    1996-05-01

    This paper describes a three dimensional computer program, SPLASH, that solves the Navier-Stokes equations based on the Arbitrary Lagrangian Eulerian (ALE) finite element method. SPLASH has been developed for application to fluid dynamics problems including the moving boundary of a liquid metal cooled Fast Breeder Reactor (FBR). To apply the SPLASH code to free surface behavior analysis, a capillary model using a cubic spline function has been developed. Several sample problems, e.g., free surface oscillation, vortex shedding development, and capillary tube phenomena, are solved to verify the computer program. In the analyses, the numerical results are in good agreement with theoretical values or experimental observations. The SPLASH code has also been applied to an analysis of a free surface sloshing experiment coupled with forced circulation flow in a rectangular tank. This is a simplified representation of the flow field in a reactor vessel of the FBR. The computational simulation predicts well the general behavior of the fluid flow inside the tank and of the free surface. The analytical capability of the SPLASH code has been verified in this study, and its application to more practical problems such as FBR design and safety analysis is under way.
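    As a minimal sketch of how a cubic-spline surface representation can feed a capillary model (the marker values, surface tension, and helper names below are hypothetical and not taken from SPLASH), the spline supplies the slope and curvature that set the surface-tension pressure jump:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Free-surface marker heights h_i at positions x_i (illustrative values)
    x = np.linspace(0.0, 1.0, 9)
    h = 0.5 + 0.02 * np.cos(np.pi * x)       # a gently sloshing profile

    surface = CubicSpline(x, h)

    xs = np.linspace(0.0, 1.0, 101)
    dh = surface(xs, 1)                      # slope from the spline
    d2h = surface(xs, 2)                     # second derivative
    kappa = d2h / (1.0 + dh ** 2) ** 1.5     # curvature of y = h(x)

    sigma = 0.07                             # surface tension [N/m], assumed
    p_capillary = sigma * kappa              # capillary pressure jump
    ```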

  3. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with a 26.3x speedup over the original CPU code.
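    The block-parallel strategy can be sketched in a few lines (a host-side illustration only, with assumed image dimensions; the actual encoder runs the per-tile work as CUDA kernels):

    ```python
    import numpy as np

    def split_into_blocks(band, block=64):
        """Split one band of a hyperspectral image into block x block tiles so
        that each tile can be encoded independently (e.g. one tile per CUDA
        thread block), sidestepping the sequential dependencies within a tile."""
        rows, cols = band.shape
        return [band[r:r + block, c:c + block]
                for r in range(0, rows, block)
                for c in range(0, cols, block)]

    # Illustrative 512 x 614 AVIRIS-like band of 16-bit samples
    band = np.random.randint(0, 2 ** 16, size=(512, 614), dtype=np.uint16)
    tiles = split_into_blocks(band, block=64)
    print(len(tiles))   # number of independently encodable tiles
    ```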

  4. TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.

    1994-01-01

    The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and by offering many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60 bit words. The TAIR program was developed in 1981.
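    For reference, the steady conservative full-potential system solved by codes of this type can be written in a standard nondimensional form (a textbook statement of the equation, not a transcription of the TAIR documentation):

    $$
    \nabla \cdot \left( \rho\, \nabla \Phi \right) = 0,
    \qquad
    \rho = \left[ 1 + \frac{\gamma - 1}{2}\, M_\infty^{2} \left( 1 - \lvert \nabla \Phi \rvert^{2} \right) \right]^{1/(\gamma - 1)},
    $$

    with the velocity potential \Phi scaled so that \lvert\nabla\Phi\rvert = 1 in the free stream and the density \rho by its freestream value.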

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for the Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  6. Slow Crack Growth and Fatigue Life Prediction of Ceramic Components Subjected to Variable Load History

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama

    2001-01-01

    Present capabilities of the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code has the capability to compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth (SCG) type failure conditions CARES/Life can handle the cases of sustained and linearly increasing time-dependent loads, while for cyclic fatigue applications various types of repetitive constant amplitude loads can be accounted for. In real applications, applied loads are rarely that simple, but rather vary with time in more complex ways, reflecting, for example, engine start-up, shut-down, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. The objective of this paper is to demonstrate a methodology capable of predicting the time-dependent reliability of components subjected to transient thermomechanical loads that takes into account the change in material response with time. In this paper, the dominant delayed failure mechanism is assumed to be SCG. This capability has been added to the NASA CARES/Life code, which has also been modified to have the ability of interfacing with commercially available FEA codes executed for transient load histories. An example involving a ceramic exhaust valve subjected to combustion cycle loads is presented to demonstrate the viability of this methodology and the CARES/Life program.
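    A commonly used power-law idealization of the SCG failure mode discussed above (a schematic statement, not necessarily the exact formulation implemented in CARES/Life) relates the crack velocity to the stress intensity factor and reduces a transient stress history to an equivalent static stress:

    $$
    \frac{\mathrm{d}a}{\mathrm{d}t} = A \left( \frac{K_I}{K_{IC}} \right)^{N},
    \qquad
    \sigma_{I,\mathrm{eq}}(t_f) = \left[ \frac{1}{t_f} \int_{0}^{t_f} \sigma_I(t)^{N}\, \mathrm{d}t \right]^{1/N},
    $$

    where A and N are the material's slow-crack-growth parameters, K_I the mode-I stress intensity factor, K_{IC} the fracture toughness, and \sigma_{I,\mathrm{eq}} the static stress producing the same damage over the period t_f as the transient history \sigma_I(t).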

  7. Development of a CRAY 1 version of the SINDA program. [thermo-structural analyzer program

    NASA Technical Reports Server (NTRS)

    Juba, S. M.; Fogerson, P. E.

    1982-01-01

    The SINDA thermal analyzer program was transferred from the UNIVAC 1110 computer to a CYBER and then to a CRAY 1. Significant changes to the code of the program were required in order to execute efficiently on the CYBER and CRAY. The program was tested on the CRAY using a thermal math model of the shuttle which was too large to run on either the UNIVAC or the CYBER. An effort was then begun to further modify the code of SINDA in order to make effective use of the vector capabilities of the CRAY.

  8. Application of numerical methods to heat transfer and thermal stress analysis of aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Wieting, A. R.

    1979-01-01

    The paper describes a thermal-structural design analysis study of a fuel-injection strut for a hydrogen-cooled scramjet engine for a supersonic transport, utilizing finite-element methodology. Applications of finite-element and finite-difference codes to the thermal-structural design-analysis of space transports and structures are discussed. The interaction between the thermal and structural analyses has led to development of finite-element thermal methodology to improve the integration between these two disciplines. The integrated thermal-structural analysis capability developed within the framework of a computer code is outlined.

  9. Steady-State Solution of a Flexible Wing

    NASA Technical Reports Server (NTRS)

    Karkehabadi, Reza; Chandra, Suresh; Krishnamurthy, Ramesh

    1997-01-01

    A fluid-structure interaction code, ENSAERO, has been used to compute the aerodynamic loads on a swept-tapered wing. The code has the capability of using either the Euler or the Navier-Stokes equations. Both options have been used and compared in the present paper. In the calculation of the steady-state solution, we are interested in knowing how the flexibility of the wing influences the lift coefficients. If the results for a flexible wing are not significantly affected by its flexibility, one could consider the wing to be rigid and reduce the problem from a fluid-structure interaction problem to a purely fluid problem.

  10. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.

  11. Developments in electron gun simulation

    NASA Astrophysics Data System (ADS)

    Herrmannsfeldt, W. B.

    1997-01-01

    This paper will discuss the developments in the electron gun simulation programs that are based on EGUN and its derivatives and supporting programs. Much of the code development has been inspired by technology changes in computer hardware; the implications on EGN2 of this evolution will be discussed. Some examples and a review of the capabilities of the EGUN family will be described.

  12. Developments in the electron gun simulation program, EGUN

    NASA Astrophysics Data System (ADS)

    Herrmannsfeldt, W. B.

    1994-11-01

    This paper discusses the developments in the electron gun simulation programs that are based on EGUN with its derivatives and supporting programs. Much of the code development has been inspired by technology changes in computer hardware; the implications of this evolution on EGN2 are discussed. Some examples and a review of the capabilities of the EGUN family are described.

  13. Developments in the electron gun simulation program, EGUN

    NASA Astrophysics Data System (ADS)

    Herrmannsfeldt, W. B.

    1995-07-01

    This paper discusses the developments in the electron gun simulation programs that are based on EGUN with its derivatives and supporting programs. Much of the code development has been inspired by technology changes in computer hardware; the implications of this evolution on EGN2 are discussed. Some examples and a review of the capabilities of the EGUN family are described.

  14. The Marshall Engineering Thermosphere (MET) Model. Volume 1; Technical Description

    NASA Technical Reports Server (NTRS)

    Smith, R. E.

    1998-01-01

    Volume 1 presents a technical description of the Marshall Engineering Thermosphere (MET) model atmosphere and a summary of its historical development. Various programs developed to augment the original capability of the model are discussed in detail. The report also describes each of the individual subroutines developed to enhance the model. Computer codes for these subroutines are contained in four appendices.

  15. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell W.

    2016-10-04

    In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers. More specifically, the CASL VUQ Strategy [33] prescribes the use of Predictive Capability Maturity Model (PCMM) assessments [37]. PCMM is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. Exercising a computational model with the methods in Dakota will yield, in part, evidence for a predictive capability maturity model (PCMM) assessment. Table 1.1 summarizes some key predictive maturity related activities (see details in [33]), with examples of how Dakota fits in. This manual offers CASL partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.

  16. Advanced Applications of Adifor 3.0 for Efficient Calculation of First-and Second-Order CFD Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III

    2004-01-01

    This final report documents the accomplishments of the work of this project. 1) The incremental-iterative (II) form of the reverse-mode (adjoint) method for computing first-order (FO) aerodynamic sensitivity derivatives (SDs) has been successfully implemented and tested in a 2D CFD code (called ANSERS) using the reverse-mode capability of ADIFOR 3.0. These results compared very well with similar SDs computed via a black-box (BB) application of the reverse-mode capability of ADIFOR 3.0, and also with similar SDs calculated via the method of finite differences. 2) Second-order (SO) SDs have been implemented in the 2D ANSERS code using the very efficient strategy that was originally proposed (but not previously tested) in Reference 3, Appendix A. Furthermore, these SO SDs have been validated for accuracy and computational efficiency. 3) Studies were conducted in quasi-1D and 2D concerning the smoothness (or lack of smoothness) of the FO and SO SDs for flows with shock waves. The phenomenon is documented in the publications of this study (listed subsequently); however, the specific numerical mechanism responsible for this unsmoothness was not discovered. 4) The FO and SO derivatives for quasi-1D and 2D flows were applied to predict aerodynamic design uncertainties, and were also applied in robust design optimization studies.
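    As a generic illustration of the adjoint route to sensitivity derivatives (a toy linear model with assumed matrices, not the ANSERS code or ADIFOR output), the same derivative is obtained from one state solve plus one adjoint solve and checked against finite differences:

    ```python
    import numpy as np

    def djdp_adjoint(A, dA_dp, b, c):
        """Sensitivity dJ/dp for residual R(Q, p) = A(p) Q - b = 0 and
        objective J = c^T Q, via the discrete adjoint: (dR/dQ)^T lam = dJ/dQ."""
        Q = np.linalg.solve(A, b)            # state solution
        lam = np.linalg.solve(A.T, c)        # adjoint solution
        return -lam @ (dA_dp @ Q)            # dJ/dp = -lam^T (dR/dp)

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])   # assumed dA/dp
    b = np.array([1.0, 2.0])
    c = np.array([1.0, 1.0])

    adjoint_value = djdp_adjoint(A, dA_dp, b, c)

    # Central finite-difference check of the same derivative about p = 0
    eps = 1.0e-6
    J = lambda p: c @ np.linalg.solve(A + p * dA_dp, b)
    fd_value = (J(eps) - J(-eps)) / (2 * eps)
    print(adjoint_value, fd_value)   # the two estimates should agree closely
    ```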

  17. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSC's are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. These new breeds of simulation codes will include rigorous verification, validation and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of Nuclear Reactors and also contribute to a more speedy process in the acquisition of licenses from the NRC for new Reactor designs. Due to the high resolution of the models, the complexity of the physics and the added computational resources to quantify the accuracy/quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.

  18. CBP PHASE I CODE INTEGRATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, F.; Brown, K.; Flach, G.

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown & Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown & Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. As part of the CBP project, a general Dynamic Link Library (DLL) interface was developed to link GoldSim with external codes (Smith III et al. 2010). The DLL uses a list of code inputs provided by GoldSim to create an input file for the external application, runs the external code, and returns a list of outputs (read from files created by the external application) back to GoldSim. In this way GoldSim provides: (1) a unified user interface to the applications, (2) the capability of coupling selected codes in a synergistic manner, and (3) the capability of performing probabilistic uncertainty analysis with the codes. GoldSim is made available by the GoldSim Technology Group as a free 'Player' version that allows running but not editing GoldSim models. The player version makes the software readily available to a wider community of users that would wish to use the CBP application but do not have a license for GoldSim.
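    The write-input/run/read-output cycle performed by the DLL can be sketched as follows (a schematic Python analogue with hypothetical file names and executable, not the actual GoldSim DLL interface):

    ```python
    import subprocess

    def run_external_code(inputs, exe="partner_code", infile="run.inp", outfile="run.out"):
        """Write the inputs supplied by the driver to a file, run the external
        code, and read its outputs back: the same coupling pattern the CBP DLL
        interface follows between GoldSim and a partner code."""
        with open(infile, "w") as f:
            for name, value in inputs.items():
                f.write(f"{name} = {value}\n")

        subprocess.run([exe, infile], check=True)      # execute the external code

        outputs = {}
        with open(outfile) as f:
            for line in f:
                name, value = line.split("=")
                outputs[name.strip()] = float(value)
        return outputs
    ```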

  19. Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.

    2008-01-01

    This effort provides an initial glimpse at NASA capabilities available in predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher fidelity scattering code to predict radiated sound pressure levels for full scale configurations at relevant frequencies. In addition, the ability to more accurately represent complex shielding surfaces in current lower fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next generation aircraft system noise prediction tools.

  20. An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2007-01-01

    An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.

  1. Additional extensions to the NASCAP computer code, volume 2

    NASA Technical Reports Server (NTRS)

    Stannard, P. R.; Katz, I.; Mandell, M. J.

    1982-01-01

    Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as to other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model is presented including charge emission. A simple code based upon the model is described along with code test results.

  2. Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation

    NASA Astrophysics Data System (ADS)

    Ragni, Matteo

    There are Computer Algebra Systems (CAS) on the market with complete solutions for the manipulation of analytical models. But exporting a model that implements specific algorithms on specific platforms, for target languages or for a particular numerical library, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code generation rules, where best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
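    An analogous workflow can be sketched in Python with SymPy (used here only to illustrate keeping the symbolic expression separate from the code-generation step; Mr.CAS itself is a pure Ruby library with its own API):

    ```python
    import sympy as sp
    from sympy.utilities.codegen import codegen

    # Build and simplify a symbolic expression first...
    x, y = sp.symbols("x y")
    expr = sp.simplify(sp.sin(x) ** 2 + sp.cos(x) ** 2 + x * y)   # -> x*y + 1

    # ...then hand it to a separate code-generation step for a target language.
    [(c_name, c_code), (h_name, h_header)] = codegen(
        ("model_rhs", expr), language="C", header=False, empty=False)
    print(c_code)   # C source for a function double model_rhs(double x, double y)
    ```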

  3. OVERFLOW-Interaction with Industry

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; George, Michael W. (Technical Monitor)

    1996-01-01

    A Navier-Stokes flow solver, OVERFLOW, has been developed by researchers at NASA Ames Research Center to use overset (Chimera) grids to simulate the flow about complex aerodynamic shapes. Primary customers of the OVERFLOW flow solver and related software include McDonnell Douglas and Boeing, as well as the NASA Focused Programs for Advanced Subsonic Technology (AST) and High Speed Research (HSR). Code development has focused on customer issues, including improving code performance, ability to run on workstation clusters and the NAS SP2, and direct interaction with industry on accuracy assessment and validation. Significant interaction with NAS has produced a capability tailored to the Ames computing environment, and code contributions have come from a wide range of sources, both within and outside Ames.

  4. Monte Carlo treatment planning for molecular targeted radiotherapy within the MINERVA system

    NASA Astrophysics Data System (ADS)

    Lehmann, Joerg; Hartmann Siantar, Christine; Wessol, Daniel E.; Wemple, Charles A.; Nigg, David; Cogliati, Josh; Daly, Tom; Descalle, Marie-Anne; Flickinger, Terry; Pletcher, David; DeNardo, Gerald

    2005-03-01

    The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU) and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (modality inclusive environment for radiotherapeutic variable analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plugin architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (EGS4—2%, MCNP—10%) (Descalle et al 2003 Cancer Biother. Radiopharm. 18 71-9). The code is currently being benchmarked against experimental data. The interpatient variability of the drug pharmacokinetics in MTR can only be properly accounted for by image-based, patient-specific treatment planning, as has been common in external beam radiation therapy for many years. MINERVA offers 3D Monte Carlo-based MTR treatment planning as its first integrated operational capability. The new MINERVA system will ultimately incorporate capabilities for a comprehensive list of radiation therapies. In progress are modules for external beam photon-electron therapy and boron neutron capture therapy (BNCT). Brachytherapy and proton therapy are planned. Through the open application programming interface (API), other groups can add their own modules and share them with the community.

  5. Finite element modelling of crash response of composite aerospace sub-floor structures

    NASA Astrophysics Data System (ADS)

    McCarthy, M. A.; Harte, C. G.; Wiggenraad, J. F. M.; Michielsen, A. L. P. J.; Kohlgrüber, D.; Kamoulakos, A.

    Composite energy-absorbing structures for use in aircraft are being studied within a European Commission research programme (CRASURV - Design for Crash Survivability). One of the aims of the project is to evaluate the current capabilities of crashworthiness simulation codes for composites modelling. This paper focuses on the computational analysis using explicit finite element analysis, of a number of quasi-static and dynamic tests carried out within the programme. It describes the design of the structures, the analysis techniques used, and the results of the analyses in comparison to the experimental test results. It has been found that current multi-ply shell models are capable of modelling the main energy-absorbing processes at work in such structures. However some deficiencies exist, particularly in modelling fabric composites. Developments within the finite element code are taking place as a result of this work which will enable better representation of composite fabrics.

  6. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
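    The commonly used relations behind this approach (presented schematically in CADIS-like form; the exact normalization in the MCNP5/PARTISN implementation may differ) bias the source with the adjoint flux and set the weight-window targets inversely proportional to it:

    $$
    \hat{q}(\vec{r}, E) \;=\; \frac{\phi^{\dagger}(\vec{r}, E)\, q(\vec{r}, E)}{R},
    \qquad
    \bar{w}(\vec{r}, E) \;=\; \frac{R}{\phi^{\dagger}(\vec{r}, E)},
    \qquad
    R \;=\; \int \phi^{\dagger}\, q \;\mathrm{d}V\, \mathrm{d}E,
    $$

    where \phi^{\dagger} is the PARTISN adjoint flux, q the physical source, \hat{q} the biased source, \bar{w} the weight-window target weight, and R the estimated detector response used for normalization.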

  7. Parametric study on laminar flow for finite wings at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Garcia, Joseph Avila

    1994-01-01

    Laminar flow control has been identified as a key element in the development of the next generation of High Speed Transports. Extending the amount of laminar flow over an aircraft will increase range, payload, and altitude capabilities as well as lower fuel requirements, skin temperature, and therefore the overall cost. A parametric study to predict the extent of laminar flow for finite wings at supersonic speeds was conducted using a computational fluid dynamics (CFD) code coupled with a boundary layer stability code. The parameters investigated in this study were Reynolds number, angle of attack, and sweep. The results showed that an increase in angle of attack for specific Reynolds numbers can actually delay transition. Therefore, higher lift capability, caused by the increased angle of attack, as well as a reduction in viscous drag, due to the delay in transition, can be expected simultaneously. This results in larger payload and range.

  8. JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning

    NASA Astrophysics Data System (ADS)

    Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro

    2015-12-01

    We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.

  9. Design for progressive fracture in composite shell structures

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon; Murthy, Pappu L. N.

    1992-01-01

    The load carrying capability and structural behavior of composite shell structures and stiffened curved panels are investigated to provide accurate early design loads. An integrated computer code is utilized for the computational simulation of composite structural degradation under practical loading for realistic design. Damage initiation, growth, accumulation, and propagation to structural fracture are included in the simulation. Progressive fracture investigations providing design insight for several classes of composite shells are presented. Results demonstrate the significance of local defects, interfacial regions, and stress concentrations on the structural durability of composite shells.

  10. The Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Littman, M. G.

    1986-01-01

    The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.

  11. Geospace simulations using modern accelerator processor technology

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Raeder, J.; Larson, D. J.

    2009-12-01

    OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computationally intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 Gflops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory-limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.

  12. Navier-Stokes analysis of cold scramjet-afterbody flows

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.

    1989-01-01

    The progress of two efforts in coding solutions of Navier-Stokes equations is summarized. The first effort concerns a 3-D space marching parabolized Navier-Stokes (PNS) code being modified to compute the supersonic mixing flow through an internal/external expansion nozzle with multicomponent gases. The 3-D PNS equations, coupled with a set of species continuity equations, are solved using an implicit finite difference scheme. The completed work is summarized and includes code modifications for four chemical species, computing the flow upstream of the upper cowl for a theoretical air mixture, developing an initial plane solution for the inner nozzle region, and computing the flow inside the nozzle for both a N2/O2 mixture and a Freon-12/Ar mixture, and plotting density-pressure contours for the inner nozzle region. The second effort concerns a full Navier-Stokes code. The species continuity equations account for the diffusion of multiple gases. This 3-D explicit afterbody code has the ability to use high order numerical integration schemes such as the 4th order MacCormack, and the Gottlieb-MacCormack schemes. Changes to the work are listed and include, but are not limited to: (1) internal/external flow capability; (2) new treatments of the cowl wall boundary conditions and relaxed computations around the cowl region and cowl tip; (3) the entering of the thermodynamic and transport properties of Freon-12, Ar, O, and N; (4) modification to the Baldwin-Lomax turbulence model to account for turbulent eddies generated by cowl walls inside and external to the nozzle; and (5) adopting a relaxation formula to account for the turbulence in the mixing shear layer.

  13. Interaction of external conditions with the internal flowfield in liquid rocket engines - A computational study

    NASA Technical Reports Server (NTRS)

    Trinh, H. P.; Gross, K. W.

    1989-01-01

    Computational studies have been conducted to examine the capability of a CFD code by simulating the steady-state thrust chamber internal flow. The SSME served as the sample case, and significant parameter profiles are presented and discussed. Performance predictions from TDK, the recommended JANNAF reference computer program, are compared with those from PHOENICS to establish the credibility of its results. The investigation of an overexpanded nozzle flow is particularly addressed since it plays an important role in the area ratio selection of future rocket engines. Experience gained during this as-yet-incomplete flow separation study and future steps are outlined.

  14. Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport

    NASA Technical Reports Server (NTRS)

    Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K.

    2008-01-01

    To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions with either laboratory or space boundary conditions is currently under development. The computational model consists of combinations of physical perturbation expansions based on the scales of atomic interaction, multiple scattering, and nuclear reactive processes with use of the Neumann-asymptotic expansions with non-perturbative corrections. The code contains energy loss due to straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and downshifts. Previous reports show that the new code accurately models the transport of ion beams through a single slab of material. Current research efforts are focused on enabling the code to handle multiple layers of material and the present paper reports on progress made towards that end.

  15. The Space Telescope SI C&DH system. [Scientific Instrument Control and Data Handling Subsystem

    NASA Technical Reports Server (NTRS)

    Gadwal, Govind R.; Barasch, Ronald S.

    1990-01-01

    The Hubble Space Telescope Scientific Instrument Control and Data Handling Subsystem (SI C&DH) is designed to interface with five scientific instruments of the Space Telescope to provide ground and autonomous control and to collect health and status information using the Standard Telemetry and Command Components (STACC) multiplex data bus. It also formats high-throughput science data into packets. The packetized data are interleaved, Reed-Solomon encoded for error correction, and pseudo-random encoded. An inner convolutional code combined with the outer Reed-Solomon code provides excellent error correction capability. The subsystem is designed with the capacity for orbital replacement in order to meet a mission life of fifteen years. The spacecraft computer and the SI C&DH computer coordinate the activities of the spacecraft and the scientific instruments to achieve the mission objectives.

  16. Enhancement/upgrade of Engine Structures Technology Best Estimator (EST/BEST) Software System

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin

    2003-01-01

    This report describes the work performed during the contract period and the capabilities included in the EST/BEST software system. The developed EST/BEST software system includes the integrated NESSUS, IPACS, COBSTRAN, and ALCCA computer codes required to perform the engine cycle mission and component structural analysis. Also, the interactive input generator for the NESSUS, IPACS, and COBSTRAN computer codes has been developed and integrated with the EST/BEST software system. The input generator allows the user to create input from scratch as well as edit existing input files interactively. Since it has been integrated with the EST/BEST software system, it enables the user to modify EST/BEST-generated files and perform the analysis to evaluate the benefits. Appendix A gives details of how to use the newly added features in the EST/BEST software system.

  17. Unsteady Turbopump Flow Simulations

    NASA Technical Reports Server (NTRS)

    Centin, Kiris C.; Kwak, Dochan

    2001-01-01

    The objective of the current effort is two-fold: 1) to provide a computational framework for design and analysis of the entire fuel supply system of a liquid rocket engine; and 2) to provide high-fidelity unsteady turbopump flow analysis capability to support the design of pump sub-systems for advanced space transportation vehicles. Since the space launch systems in the near future are likely to involve liquid propulsion systems, increasing the efficiency and reliability of the turbopump components is an important task. To date, computational tools for design/analysis of turbopump flow are based on relatively lower fidelity methods. An unsteady, three-dimensional viscous flow analysis tool involving stationary and rotational components for the entire turbopump assembly has not been available, at least for real-world engineering applications. The present effort is an attempt to provide this capability so that developers of the vehicle will be able to extract such information as transient flow phenomena at start up, the impact of non-uniform inflow, and system vibration and its impact on the structure. Those quantities are not readily available from simplified design tools. In this presentation, the progress being made toward complete turbopump simulation capability for a liquid rocket engine is reported. The Space Shuttle Main Engine (SSME) turbopump is used as a test case for the performance evaluation of the hybrid MPI/OpenMP and MLP versions of the INS3D code. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for the SSME turbopump, which contains 106 zones with 34.5 million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from these time-accurate simulations with moving boundary capability and the performance of the parallel versions of the code will be presented.

  18. DSMC computations of hypersonic flow separation and re-attachment in the transition to continuum regime

    NASA Astrophysics Data System (ADS)

    Prakash, Ram; Gai, Sudhir L.; O'Byrne, Sean; Brown, Melrose

    2016-11-01

    The flow over a 'tick'-shaped configuration is computed using two Direct Simulation Monte Carlo codes: the DS2V code of Bird and the SPARTA code from Sandia National Laboratories. The configuration creates a flow field in which the flow initially expands but is then affected by the adverse pressure gradient induced by a compression surface. The flow field is challenging in the sense that the full flow domain comprises localized regions spanning the continuum and transitional regimes. The present work focuses on the capability of SPARTA to model such flow conditions and on a comparative evaluation against results from DS2V. An extensive grid adaptation study is performed with both codes on a model with a sharp leading edge, and the converged results are then compared. The computational predictions are evaluated in terms of surface parameters such as heat flux, shear stress, pressure, and velocity slip. SPARTA consistently predicts higher values for these surface properties. The skin friction predictions of both codes do not indicate separation, but the velocity slip plots indicate incipient separation behavior at the corner. The differences in the results are attributed to the flow resolution at the leading edge, which dictates the downstream flow characteristics.
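
    A comparative evaluation of surface distributions from two codes typically requires interpolating one data set onto the other's surface coordinates before differencing. The short Python sketch below shows one such comparison for a heat-flux profile; the data are synthetic stand-ins and the offset shown is only an illustrative placeholder, not a result from SPARTA or DS2V.

        import numpy as np

        def compare_surface_profiles(x1, q1, x2, q2):
            """Interpolate the second code's surface distribution onto the first
            code's surface coordinates and return the pointwise relative difference."""
            q2_on_x1 = np.interp(x1, x2, q2)
            return (q2_on_x1 - q1) / np.maximum(np.abs(q1), 1e-30)

        if __name__ == "__main__":
            # Synthetic stand-in data; in practice x and q would come from code output files.
            x_a = np.linspace(0.0, 1.0, 200)
            q_a = np.exp(-5.0 * x_a)
            x_b = np.linspace(0.0, 1.0, 150)
            q_b = np.exp(-5.0 * x_b) * 1.08        # hypothetical 8% offset between codes
            diff = compare_surface_profiles(x_a, q_a, x_b, q_b)
            print(f"mean relative difference: {diff.mean():.3f}")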

  19. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER [Book Chapter]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Eduardo; Abbott, Stephen; Koskela, Tuomas

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in the ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations had not been possible before because the time-to-solution fell short by more than a factor of 10 for completing one physics case within 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism; adaptive parallel I/O; staging I/O and data reduction using dynamic and asynchronous application interactions; dynamic repartitioning for balancing the computational work in pushing particles and in grid-related work; scalable and accurate discretization algorithms for nonlinear Coulomb collisions; and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are utilized to dramatically improve the scalability and time-to-solution, hence enabling the difficult kinetic ITER edge simulation on a present-day leadership-class computer.
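
    The communication-avoiding subcycling idea, i.e., amortizing one expensive field gather (and its associated communication) over several cheap local particle-push sub-steps, can be sketched in a few lines. The Python fragment below is a schematic illustration of that idea only, under the simplifying assumption of a one-dimensional electrostatic push with a frozen field during the sub-steps; it is not XGC code, and all names are hypothetical.

        import numpy as np

        def advance(x, v, efield, dt, nsub):
            """Advance particles with nsub cheap push sub-steps per (expensive)
            field evaluation: the particle pushes are decoupled from the
            field-gather/communication cadence."""
            e = efield(x)                      # expensive step: field gather / communication
            for _ in range(nsub):              # cheap steps: local particle pushes
                v = v + e * (dt / nsub)
                x = x + v * (dt / nsub)
            return x, v

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            x = rng.uniform(0.0, 1.0, 100_000)
            v = rng.normal(0.0, 1.0, 100_000)
            for step in range(10):
                x, v = advance(x, v, lambda xx: -np.sin(2.0 * np.pi * xx), dt=1e-3, nsub=5)
            print(x.mean(), v.std())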

  20. Is It Code Imperfection or 'Garbage In, Garbage Out'? Outline of Experiences from a Comprehensive ADR Code Verification

    NASA Astrophysics Data System (ADS)

    Zamani, K.; Bombardelli, F. A.

    2013-12-01

    The ADR (advection-diffusion-reaction) equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice for solving these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and to check for rigorous discretization of the PDEs and correct implementation of initial/boundary conditions. In the context of computational PDEs, verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of the numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even the test results need to be analyzed mathematically to distinguish between an inherent limitation of an algorithm and a coding error. Therefore, code verification remains something of an art, in which innovative methods and case-based tricks are very common. This study presents the full verification of a general transport code. To that end, a complete test suite is designed to probe the ADR solver comprehensively and discover all possible imperfections. We convey our experiences in finding several errors that were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, starting from simple tests and building up to the most sophisticated cases. Appropriate verification metrics are defined for the required capabilities of the solver, as follows: mass conservation, convergence order, capability in handling stiff problems, nonnegative concentration, shape preservation, and suppression of spurious wiggles. Thereby, we provide objective, quantitative values as opposed to subjective qualitative descriptions such as 'weak' or 'satisfactory' agreement with those metrics. We start testing from a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. For all of the mentioned cases we conduct mesh convergence tests, comparing the observed order of accuracy of the results against the formal order of accuracy of the discretization. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh convergence study, and appropriate remedies, are also discussed. For cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize symmetry, complete Richardson extrapolation, and the method of false injection to uncover bugs. Detailed discussions of the capabilities of the mentioned code verification techniques are given. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the ADR solver's capabilities. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport.
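
    The mesh-convergence check described above reduces to comparing the observed order of accuracy, obtained from errors on successively refined grids, against the formal order of the discretization. The Python sketch below computes the observed order for a constant refinement ratio; the error values are hypothetical placeholders, not results from the verified solver.

        import numpy as np

        def observed_order(err_coarse: float, err_fine: float, refinement: float = 2.0) -> float:
            """Observed order of accuracy p from errors on two grids related by a
            constant refinement ratio r:  p = log(e_coarse / e_fine) / log(r)."""
            return np.log(err_coarse / err_fine) / np.log(refinement)

        if __name__ == "__main__":
            # Hypothetical L2 errors against an exact benchmark on grids with spacing h, h/2, h/4.
            errors = [4.0e-3, 1.0e-3, 2.5e-4]
            for e_coarse, e_fine in zip(errors, errors[1:]):
                # A second-order scheme should give values near 2 in the asymptotic range.
                print(f"observed order ~ {observed_order(e_coarse, e_fine):.2f}")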
