A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.
1989-01-01
A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches, and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
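As a hedged illustration of the specified-pressure operating mode described above, the sketch below backs out a coolant mass flow from a prescribed inlet-to-exit pressure difference by root-finding on a simple one-dimensional friction model. The geometry, fluid properties, and the Blasius friction correlation are illustrative assumptions, not the correlations in the NASA code.

```python
# Hypothetical sketch: back out coolant mass flow from a specified pressure drop
# using a simple 1-D incompressible friction model (not the NASA code itself).
from scipy.optimize import brentq

rho = 5.0          # coolant density, kg/m^3 (assumed)
mu = 3.0e-5        # dynamic viscosity, Pa*s (assumed)
D = 0.004          # hydraulic diameter, m (assumed)
L = 0.15           # passage length, m (assumed)
A = 3.1416 * D**2 / 4.0   # flow area, m^2

def pressure_drop(mdot):
    """Darcy-Weisbach pressure drop for a smooth passage (Blasius friction)."""
    V = mdot / (rho * A)
    Re = rho * V * D / mu
    f = 0.316 * Re**-0.25          # Blasius correlation, turbulent smooth passage
    return f * (L / D) * 0.5 * rho * V**2

dp_specified = 50.0e3   # inlet-to-exit pressure difference, Pa (assumed)

# Solve pressure_drop(mdot) = dp_specified for the mass flow rate.
mdot = brentq(lambda m: pressure_drop(m) - dp_specified, 1e-5, 1.0)
print(f"coolant mass flow ~ {mdot*1000:.2f} g/s")
```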
Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Ameri, Ali
2005-01-01
This report focuses on the use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.
NASA Technical Reports Server (NTRS)
Wang, C. R.; Towne, C. E.; Hippensteele, S. A.; Poinsatte, P. E.
1997-01-01
This study investigated the Navier-Stokes computation of the surface heat transfer coefficients of a transition duct flow. A transition duct from an axisymmetric cross section to a non-axisymmetric cross section is usually used to connect the turbine exit to the nozzle. As the gas turbine inlet temperature increases, the transition duct is subjected to the high temperature at the gas turbine exit. The transition duct flow has combined development of hydraulic and thermal entry length. The design of the transition duct requires accurate surface heat transfer coefficients. The Navier-Stokes computational method could be used to predict the surface heat transfer coefficients of a transition duct flow. The Proteus three-dimensional Navier-Stokes numerical computational code was used in this study. The code was first studied for the computations of the turbulent developing flow properties within a circular duct and a square duct. The code was then used to compute the turbulent flow properties of a transition duct flow. The computational results of the surface pressure, the skin friction factor, and the surface heat transfer coefficient were described and compared with their values obtained from theoretical analyses or experiments. The comparison showed that the Navier-Stokes computation could predict approximately the surface heat transfer coefficients of a transition duct flow.
LeRC-HT: NASA Lewis Research Center General Multiblock Navier-Stokes Heat Transfer Code Developed
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Gaugler, Raymond E.
1999-01-01
For the last several years, LeRC-HT, a three-dimensional computational fluid dynamics (CFD) computer code for analyzing gas turbine flow and convective heat transfer, has been evolving at the NASA Lewis Research Center. The code is unique in its ability to give a highly detailed representation of the flow field very close to solid surfaces. This is necessary for an accurate representation of fluid heat transfer and viscous shear stresses. The code has been used extensively for both internal cooling passage flows and hot gas path flows--including detailed film cooling calculations, complex tip-clearance gap flows, and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool (at least 35 technical papers have been published relative to the code and its application), but it should be useful for detailed design analysis. We now plan to make this code available to selected users for further evaluation.
A survey to identify the clinical coding and classification systems currently in use across Europe.
de Lusignan, S; Minmagh, C; Kennedy, J; Zeimet, M; Bommezijn, H; Bryant, J
2001-01-01
This is a survey to identify what clinical coding systems are currently in use across the European Union and the states seeking membership to it. We sought to identify what systems are currently used and to what extent they were subject to local adaptation. Clinical coding should facilitate identifying key medical events in a computerised medical record and aggregating information across groups of records. The emerging new driver is its role as the enabler of the life-long computerised medical record. A prerequisite for this level of functionality is the transfer of information between different computer systems. This transfer can be facilitated either by working on the interoperability problems between disparate systems or by harmonising the underlying data. This paper examines the extent to which the latter has occurred across Europe. Literature and Internet searches were used, along with requests for information via electronic mail to pan-European mailing lists of health informatics professionals. Coding systems are now a de facto part of health information systems across Europe. There are relatively few coding systems in existence across Europe; ICD-9 and ICD-10, ICPC, and Read were the most established. However, the local adaptation of these classification systems, either on a by-country or by-software-manufacturer basis, significantly reduces the ability for the meaning coded within patients' computer records to be easily transferred from one medical record system to another. There is no longer any debate as to whether a coding or classification system should be used. Convergence of different classification systems should be encouraged. Countries and computer manufacturers within the EU should be encouraged to stop making local modifications to coding and classification systems, as this practice risks significantly slowing progress towards easy transfer of records between computer systems.
Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.; Lee, Chi-Ming (Technical Monitor)
2001-01-01
For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this paper, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery for space launch vehicle propulsion systems.
Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier-Stokes Heat Transfer Code
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
2002-01-01
For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.
Glenn-HT: The NASA Glenn Research Center General Multi-Block Navier Stokes Heat Transfer Code
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
2002-01-01
For the last several years, Glenn-HT, a three-dimensional (3D) Computational Fluid Dynamics (CFD) computer code for the analysis of gas turbine flow and convective heat transfer has been evolving at the NASA Glenn Research Center. The code is unique in the ability to give a highly detailed representation of the flow field very close to solid surfaces in order to get accurate representation of fluid heat transfer and viscous shear stresses. The code has been validated and used extensively for both internal cooling passage flow and for hot gas path flows, including detailed film cooling calculations and complex tip clearance gap flow and heat transfer. In its current form, this code has a multiblock grid capability and has been validated for a number of turbine configurations. The code has been developed and used primarily as a research tool, but it can be useful for detailed design analysis. In this presentation, the code is described and examples of its validation and use for complex flow calculations are presented, emphasizing the applicability to turbomachinery.
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi three-dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code for the momentum and energy equations. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.
NASA Technical Reports Server (NTRS)
Chambers, Lin Hartung
1994-01-01
The theory for radiation emission, absorption, and transfer in a thermochemical nonequilibrium flow is presented. The expressions developed reduce correctly to the limit at equilibrium. To implement the theory in a practical computer code, some approximations are used, particularly the smearing of molecular radiation. Details of these approximations are presented and helpful information is included concerning the use of the computer code. This user's manual should benefit both occasional users of the Langley Optimized Radiative Nonequilibrium (LORAN) code and those who wish to use it to experiment with improved models or properties.
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
Py4CAtS - Python tools for line-by-line modelling of infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; García, Sebastián Gimeno
2013-05-01
Py4CAtS — Python scripts for Computational ATmospheric Spectroscopy is a Python re-implementation of the Fortran infrared radiative transfer code GARLIC, where compute-intensive code sections utilize the Numeric/Scientific Python modules for highly optimized array-processing. The individual steps of an infrared or microwave radiative transfer computation are implemented in separate scripts to extract lines of relevant molecules in the spectral range of interest, to compute line-by-line cross sections for given pressure(s) and temperature(s), to combine cross sections to absorption coefficients and optical depths, and to integrate along the line-of-sight to transmission and radiance/intensity. The basic design of the package, numerical and computational aspects relevant for optimization, and a sketch of the typical workflow are presented.
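The last two steps of that workflow (optical depths to transmission, then to radiance) reduce to a few array operations. The sketch below illustrates them with made-up absorption data; it does not call the actual Py4CAtS functions.

```python
# Illustrative sketch of the final steps of a line-by-line calculation
# (optical depth -> transmission -> radiance); layer data are made up,
# and none of the actual Py4CAtS functions are used here.
import numpy as np

nu = np.linspace(2000.0, 2100.0, 2001)        # wavenumber grid, cm^-1
n_layers = 20
# Assumed absorption coefficients per layer [cm^-1] and layer thicknesses [cm]
k_abs = 1e-6 * (1.0 + np.random.rand(n_layers, nu.size))
dz = np.full(n_layers, 5.0e4)                  # 0.5 km layers

# Optical depth: integrate the absorption coefficient along the line of sight
tau = np.sum(k_abs * dz[:, None], axis=0)

# Transmission (Beer-Lambert) and a simple emission-only radiance estimate
transmission = np.exp(-tau)

def planck(nu_cm, T):
    """Planck radiance per wavenumber [erg/s/cm^2/sr/cm^-1], cgs constants."""
    c1, c2 = 1.191042e-5, 1.4387769
    return c1 * nu_cm**3 / np.expm1(c2 * nu_cm / T)

radiance = planck(nu, 280.0) * (1.0 - transmission)   # isothermal-layer toy model
print(transmission.min(), radiance.max())
```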
Computer code for predicting coolant flow and heat transfer in turbomachinery
NASA Technical Reports Server (NTRS)
Meitner, Peter L.
1990-01-01
A computer code was developed to analyze any turbomachinery coolant flow path geometry that consists of a single flow passage with a unique inlet and exit. Flow can be bled off for tip-cap impingement cooling, and a flow bypass can be specified in which coolant flow is taken off at one point in the flow channel and reintroduced at a point farther downstream in the same channel. The user may either choose the coolant flow rate or let the program determine the flow rate from specified inlet and exit conditions. The computer code integrates the 1-D momentum and energy equations along a defined flow path and calculates the coolant's flow rate, temperature, pressure, and velocity and the heat transfer coefficients along the passage. The equations account for area change, mass addition or subtraction, pumping, friction, and heat transfer.
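A minimal sketch of the marching integration described above, assuming a constant wall temperature and the textbook Dittus-Boelter correlation (not Meitner's correlations or geometry), is:

```python
# Minimal 1-D marching sketch of coolant heating along a passage.
# Geometry, wall temperature, and the Dittus-Boelter correlation used here
# are illustrative assumptions, not the model in the NASA code.
import numpy as np

mdot, cp = 0.02, 1100.0          # mass flow [kg/s], specific heat [J/kg/K] (assumed)
D, L, N = 0.004, 0.2, 200        # hydraulic diameter, passage length, stations
k, mu, Pr = 0.05, 3.0e-5, 0.7    # coolant properties (assumed)
T_wall = 900.0                   # wall temperature [K] (assumed constant)

dx = L / N
perimeter = np.pi * D
area = np.pi * D**2 / 4.0

T = 500.0                        # inlet coolant temperature [K]
for i in range(N):
    Re = mdot * D / (area * mu)
    Nu = 0.023 * Re**0.8 * Pr**0.4        # Dittus-Boelter (heating)
    h = Nu * k / D
    dQ = h * perimeter * dx * (T_wall - T)   # heat picked up over this step
    T += dQ / (mdot * cp)                    # energy balance: mdot*cp*dT = dQ

print(f"exit coolant temperature ~ {T:.1f} K")
```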
FILM-30: A Heat Transfer Properties Code for Water Coolant
DOE Office of Scientific and Technical Information (OSTI.GOV)
MARSHALL, THERON D.
2001-02-01
A FORTRAN computer code has been written to calculate the heat transfer properties at the wetted perimeter of a coolant channel when provided the bulk water conditions. This computer code is titled FILM-30 and the code calculates its heat transfer properties by using the following correlations: (1) Sieder-Tate: forced convection, (2) Bergles-Rohsenow: onset to nucleate boiling, (3) Bergles-Rohsenow: partially developed nucleate boiling, (4) Araki: fully developed nucleate boiling, (5) Tong-75: critical heat flux (CHF), and (6) Marshall-98: transition boiling. FILM-30 produces output files that provide the heat flux and heat transfer coefficient at the wetted perimeter as a function of temperature. To validate FILM-30, the calculated heat transfer properties were used in finite element analyses to predict internal temperatures for a water-cooled copper mockup under one-sided heating from a rastered electron beam. These predicted temperatures were compared with the measured temperatures from the author's 1994 and 1998 heat transfer experiments. There was excellent agreement between the predicted and experimentally measured temperatures, which confirmed the accuracy of FILM-30 within the experimental range of the tests. FILM-30 can accurately predict the CHF and transition boiling regimes, which is an important advantage over current heat transfer codes. Consequently, FILM-30 is ideal for predicting heat transfer properties for applications that feature high heat fluxes produced by one-sided heating.
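Of the correlations listed, the Sieder-Tate forced-convection relation has the standard closed form Nu = 0.027 Re^0.8 Pr^(1/3) (mu_bulk/mu_wall)^0.14. A minimal sketch, with illustrative water-like property values rather than FILM-30 inputs:

```python
# Sketch of the Sieder-Tate forced-convection correlation (standard textbook form);
# property values below are illustrative, not FILM-30 inputs.
import math

def sieder_tate_h(mdot, D, k, mu_bulk, mu_wall, cp):
    """Heat transfer coefficient [W/m^2/K] for turbulent flow in a circular channel."""
    area = math.pi * D**2 / 4.0
    Re = mdot * D / (area * mu_bulk)
    Pr = cp * mu_bulk / k
    Nu = 0.027 * Re**0.8 * Pr**(1.0 / 3.0) * (mu_bulk / mu_wall)**0.14
    return Nu * k / D

# Water-like properties at moderate temperature (assumed)
h = sieder_tate_h(mdot=0.5, D=0.01, k=0.6, mu_bulk=4.0e-4, mu_wall=2.5e-4, cp=4180.0)
print(f"h ~ {h:.0f} W/m^2/K")
```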
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one- and two-dimensional discrete ordinate transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
Terrestrial solar spectral modeling. [SOLTRAN, BRITE, and FLASH codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bird, R.E.
The utility of accurate computer codes for calculating the solar spectral irradiance under various atmospheric conditions was recognized. New absorption and extraterrestrial spectral data are introduced. Progress is made in radiative transfer modeling outside of the solar community, especially for space and military applications. Three rigorous radiative transfer codes SOLTRAN, BRITE, and FLASH are employed. The SOLTRAN and BRITE codes are described and results from their use are presented.
MINIVER: Miniature version of real/ideal gas aero-heating and ablation computer program
NASA Technical Reports Server (NTRS)
Hendler, D. R.
1976-01-01
The computer code incorporates heat transfer multiplication factors, special flow field simulation techniques, different heat transfer methods, different transition criteria, crossflow simulation, and a more efficient thin-skin thickness optimization procedure.
Validation of CFD/Heat Transfer Software for Turbine Blade Analysis
NASA Technical Reports Server (NTRS)
Kiefer, Walter D.
2004-01-01
I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases, for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. To design and create these test cases, there are many steps and programs that must be used. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also, various files required by the GlennHT code must be created, including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. A good deal of this internship will be spent becoming familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.
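A validation run of this kind ultimately reduces to comparing a computed quantity against an accepted benchmark value; a minimal sketch with placeholder numbers (not GlennHT output):

```python
# Sketch of a benchmark comparison of the sort used for code validation:
# compare a computed quantity against an accepted reference value.
# The numbers here are placeholders, not GlennHT results.
reference_Nu = 3.66          # fully developed laminar Nusselt number, constant wall T
computed_Nu = 3.71           # value a code run might return (placeholder)

rel_error = abs(computed_Nu - reference_Nu) / reference_Nu
print(f"relative error = {rel_error:.2%}")
assert rel_error < 0.05, "outside the assumed validation tolerance"
```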
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analysis. The inviscid, quasi three dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous three dimensional internal flow code for the momentum and energy equations. These boundary conditions are input to a three dimensional heat conduction code for the calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results are given.
Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1994-01-01
Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
A proposed study of multiple scattering through clouds up to 1 THz
NASA Technical Reports Server (NTRS)
Gerace, G. C.; Smith, E. K.
1992-01-01
A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.
Transferring ecosystem simulation codes to supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1995-01-01
Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.
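The loop-to-vector restructuring described above is the familiar pattern of expressing per-element arithmetic as whole-array operations; a language-neutral analogy in NumPy (the original code was Fortran on the Cray):

```python
# Analogy (in NumPy, not the original Cray Fortran) for the loop-vs-vector
# restructuring the abstract describes: the same arithmetic expressed as an
# array operation lets the hardware or library process many elements at once.
import numpy as np

biomass = np.random.rand(100_000)
growth_rate = np.random.rand(100_000)

# Scalar-style loop (conceptually how the original sequential code looked)
updated_loop = np.empty_like(biomass)
for i in range(biomass.size):
    updated_loop[i] = biomass[i] * (1.0 + growth_rate[i] * 0.01)

# Vectorized equivalent (the form a vectorizing compiler or NumPy can exploit)
updated_vec = biomass * (1.0 + growth_rate * 0.01)

assert np.allclose(updated_loop, updated_vec)
```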
NASA Technical Reports Server (NTRS)
Russell, Louis M.; Thurman, Douglas R.; Simonyi, Patricia S.; Hippensteele, Steven A.; Poinsatte, Philip E.
1993-01-01
Visual and quantitative information was obtained on heat transfer and flow in a branched-duct test section that had several significant features of an internal cooling passage of a turbine blade. The objective of this study was to generate a set of experimental data that could be used to validate computer codes for internal cooling systems. Surface heat transfer coefficients and entrance flow conditions were measured at entrance Reynolds numbers of 45,000, 335,000, and 726,000. The heat transfer data were obtained using an Inconel heater sheet attached to the surface and coated with liquid crystals. Visual and quantitative flow field results using particle image velocimetry were also obtained for a plane at mid channel height for a Reynolds number of 45,000. The flow was seeded with polystyrene particles and illuminated by a laser light sheet. Computational results were determined for the same configurations and at matching Reynolds numbers; these surface heat transfer coefficients and flow velocities were computed with a commercially available code. The experimental and computational results were compared. Although some general trends did agree, there were inconsistencies in the temperature patterns as well as in the numerical results. These inconsistencies strongly suggest the need for further computational studies on complicated geometries such as the one studied.
A code for optically thick and hot photoionized media
NASA Astrophysics Data System (ADS)
Dumont, A.-M.; Abrassart, A.; Collin, S.
2000-05-01
We describe a code designed for hot media (T >= a few 10^4 K), optically thick to Compton scattering. It computes the structure of a plane-parallel slab of gas in thermal and ionization equilibrium, illuminated on one or on both sides by a given spectrum. Contrary to the other photoionization codes, it solves the transfer of the continuum and of the lines in a two-stream approximation, without using the local escape probability formalism to approximate the line transfer. We stress the importance of taking into account the returning flux even for small column densities (10^22 cm^-2), and we show that the escape probability approximation can lead to strong errors in the thermal and ionization structure, as well as in the emitted spectrum, for a Thomson thickness larger than a few tenths. The transfer code is coupled with a Monte Carlo code which makes it possible to take into account Compton and inverse Compton scattering, and to compute the spectrum emitted up to MeV energies, in any geometry. Comparisons with Cloudy show that it gives similar results for small column densities. Several applications are mentioned.
Application of the TEMPEST computer code to canister-filling heat transfer problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farnsworth, R.K.; Faletti, D.W.; Budden, M.J.
Pacific Northwest Laboratory (PNL) researchers used the TEMPEST computer code to simulate thermal cooldown behavior of nuclear waste glass after it was poured into steel canisters for long-term storage. The objective of this work was to determine the accuracy and applicability of the TEMPEST code when used to compute canister thermal histories. First, experimental data were obtained to provide the basis for comparing TEMPEST-generated predictions. Five canisters were instrumented with appropriately located radial and axial thermocouples. The canisters were filled using the pilot-scale ceramic melter (PSCM) at PNL. Each canister was filled in either a continuous or a batch filling mode. One of the canisters was also filled within a turntable simulant (a group of cylindrical shells with heat transfer resistances similar to those in an actual melter turntable). This was necessary to provide a basis for assessing the ability of the TEMPEST code to also model the transient cooling of canisters in a melter turntable. The continuous-fill model, Version M, was found to predict temperatures with more accuracy. The turntable simulant experiment demonstrated that TEMPEST can adequately model the asymmetric temperature field caused by the turntable geometry. Further, TEMPEST can acceptably predict the canister cooling history within a turntable, despite code limitations in computing simultaneous radiation and convection heat transfer between shells, along with uncertainty in stainless-steel surface emissivities. Based on the successful performance of TEMPEST Version M, development was initiated to incorporate 1) full viscous glass convection, 2) a dynamically adaptive grid that automatically follows the glass/air interface throughout the transient, and 3) a full enclosure radiation model to allow radiation heat transfer to non-nearest neighbor cells. 5 refs., 47 figs., 17 tabs.
Analysis of film cooling in rocket nozzles
NASA Technical Reports Server (NTRS)
Woodbury, Keith A.
1992-01-01
Computational Fluid Dynamics (CFD) programs are customarily used to compute details of a flow field, such as velocity fields or species concentrations. Generally they are not used to determine the resulting conditions at a solid boundary such as wall shear stress or heat flux. However, determination of this information should be within the capability of a CFD code, as the code supposedly contains appropriate models for these wall conditions. Before such predictions from CFD analyses can be accepted, the credibility of the CFD codes upon which they are based must be established. This report details the progress made in constructing a CFD model to predict the heat transfer to the wall in a film cooled rocket nozzle. Specifically, the objective of this work is to use the NASA code FDNS to predict the heat transfer which will occur during the upcoming hot-firing of the Pratt & Whitney 40K subscale nozzle (1Q93). Toward this end, an M = 3 wall jet is considered, and the resulting heat transfer to the wall is computed. The values are compared against experimental data available in Reference 1. Also, FDNS's ability to compute heat flux in a reacting flow will be determined by comparing the code's predictions against calorimeter data from the hot firing of a 40K combustor. The process of modeling the flow of combusting gases through the Pratt & Whitney 40K subscale combustor and nozzle is outlined. What follows in this report is a brief description of the FDNS code, with special emphasis on how it handles solid wall boundary conditions. The test cases and some FDNS solution are presented next, along with comparison to experimental data. The process of modeling the flow through a chamber and a nozzle using the FDNS code will also be outlined.
NASA Rotor 37 CFD Code Validation: Glenn-HT Code
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2010-01-01
In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Osery, I.A.
1983-12-01
Modelling studies of metal hydride hydrogen storage beds are part of an extensive R and D program conducted in Egypt on hydrogen energy. In this context two computer programs, namely RET and RET1, have been developed. In the RET computer program, a cylindrical conduction bed model is considered and an approximate analytical solution is used for solution of the associated mass and heat transfer problem. This problem is solved in the RET1 computer program numerically, allowing more flexibility in operating conditions but still limited to a cylindrical configuration with only two alternatives for heat exchange: either fluid is passing through tubes imbedded in the solid alloy matrix or solid rods are surrounded by annular fluid tubes. The present computer code TOBA is more flexible and realistic. It performs the mass and heat transfer dynamic analysis of metal hydride storage beds using a variety of geometrical and operating alternatives.
Computer Code For Turbocompounded Adiabatic Diesel Engine
NASA Technical Reports Server (NTRS)
Assanis, D. N.; Heywood, J. B.
1988-01-01
Computer simulation developed to study advantages of increased exhaust enthalpy in adiabatic turbocompounded diesel engine. Subsystems of conceptual engine include compressor, reciprocator, turbocharger turbine, compounded turbine, ducting, and heat exchangers. Focus of simulation of the total system is to define transfers of mass and energy, including release and transfer of heat and transfer of work in each subsystem, and relationships among subsystems. Written in FORTRAN IV.
Probabilistic Structural Analysis Methods (PSAM) for select space propulsion systems components
NASA Technical Reports Server (NTRS)
1991-01-01
Summarized here is the technical effort and computer code developed during the five year duration of the program for probabilistic structural analysis methods. The summary includes a brief description of the computer code manuals and a detailed description of code validation demonstration cases for random vibrations of a discharge duct, probabilistic material nonlinearities of a liquid oxygen post, and probabilistic buckling of a transfer tube liner.
Experimental and computational surface and flow-field results for an all-body hypersonic aircraft
NASA Technical Reports Server (NTRS)
Lockman, William K.; Lawrence, Scott L.; Cleary, Joseph W.
1990-01-01
The objective of the present investigation is to establish a benchmark experimental data base for a generic hypersonic vehicle shape for validation and/or calibration of advanced computational fluid dynamics computer codes. This paper includes results from the comprehensive test program conducted in the NASA/Ames 3.5-foot Hypersonic Wind Tunnel for a generic all-body hypersonic aircraft model. Experimental and computational results on flow visualization, surface pressures, surface convective heat transfer, and pitot-pressure flow-field surveys are presented. Comparisons of the experimental results with computational results from an upwind parabolized Navier-Stokes code developed at Ames demonstrate the capabilities of this code.
Davidson, R W
1985-01-01
The increasing need to communicate and exchange data can be handled by personal microcomputers. The need to transfer information stored in one type of personal computer to another type of personal computer is often encountered when integrating multiple sources of information stored in different and incompatible computers in medical research and practice. A practical example is demonstrated with two relatively inexpensive, commonly used computers, the IBM PC jr. and the Apple IIe. The basic input/output (I/O) serial communication interface chips of the two computers are joined together using a null connector and cable to form a communications link. Using BASIC (Beginner's All-purpose Symbolic Instruction Code) and the Disk Operating System (DOS), the communications handshaking protocol and file transfer are established between the two computers. The BASIC programming languages used are Applesoft (Apple personal computer) and PC BASIC (IBM personal computer).
NASA Technical Reports Server (NTRS)
Leonardo, M.; Tsuchiya, T.; Murthy, S. N. B.
1982-01-01
A model for predicting the performance of a multi-spool axial-flow compressor with a fan during operation with water ingestion was developed incorporating several two-phase fluid flow effects as follows: (1) ingestion of water, (2) droplet interaction with blades and resulting changes in blade characteristics, (3) redistribution of water and water vapor due to centrifugal action, (4) heat and mass transfer processes, and (5) droplet size adjustment due to mass transfer and mechanical stability considerations. A computer program, called the PURDU-WINCOF code, was generated based on the model utilizing a one-dimensional formulation. An illustrative case serves to show the manner in which the code can be utilized and the nature of the results obtained.
A Rocket Engine Design Expert System
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1989-01-01
The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state of the art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.
NASA Astrophysics Data System (ADS)
Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.
2016-10-01
Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
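The accelerated iteration built around the approximate operator Λ* can be illustrated on a toy two-level-atom problem, where the source function is S = (1-ε)J + εB and J = ΛS. In the sketch below the Λ matrix, ε, and B are made up, and Λ* is simply the diagonal of Λ; this shows only the iteration idea, not the PHOENIX/3D operator or its parallelization.

```python
# Toy accelerated-Lambda-iteration sketch for J = Lambda(S) with an approximate
# operator Lambda* (here the diagonal of Lambda). The Lambda matrix, epsilon,
# and Planck term are made up; this is not the PHOENIX/3D operator.
import numpy as np

n = 50
eps = 1e-3                       # photon destruction probability (assumed)
B = np.ones(n)                   # Planck source term (assumed constant)

# Made-up, row-normalized Lambda operator standing in for the true one
x = np.arange(n)
Lam = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
Lam /= Lam.sum(axis=1, keepdims=True)
Lam_star = np.diag(np.diag(Lam))            # narrow-banded (diagonal) approximation

S = eps * B                       # deliberately poor starting guess
for it in range(200):
    J = Lam @ S                               # formal solution J = Lambda(S)
    # Accelerated step: solve [I - (1-eps) Lambda*] dS = (1-eps) J + eps B - S
    rhs = (1.0 - eps) * J + eps * B - S
    dS = np.linalg.solve(np.eye(n) - (1.0 - eps) * Lam_star, rhs)
    S += dS
    if np.max(np.abs(dS)) < 1e-10:
        break

print(f"converged after {it + 1} iterations")
```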
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
Nonperturbative methods in HZE ion transport
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Costen, Robert C.; Shinn, Judy L.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport. The code is established to operate on the Langley Research Center nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code is highly efficient and compares well with the perturbation approximations.
Code for Multiblock CFD and Heat-Transfer Computations
NASA Technical Reports Server (NTRS)
Fabian, John C.; Heidmann, James D.; Lucci, Barbara L.; Ameri, Ali A.; Rigby, David L.; Steinthorsson, Erlendur
2006-01-01
The NASA Glenn Research Center General Multi-Block Navier-Stokes Convective Heat Transfer Code, Glenn-HT, has been used extensively to predict heat transfer and fluid flow for a variety of steady gas turbine engine problems. Recently, the Glenn-HT code has been completely rewritten in Fortran 90/95, a more object-oriented language that allows programmers to create code that is more modular and makes more efficient use of data structures. The new implementation takes full advantage of the capabilities of the Fortran 90/95 programming language. As a result, the Glenn-HT code now provides dynamic memory allocation, modular design, and unsteady flow capability. This allows for the heat-transfer analysis of a full turbine stage. The code has been demonstrated for an unsteady inflow condition, and gridding efforts have been initiated for a full turbine stage unsteady calculation. This analysis will be the first to simultaneously include the effects of rotation, blade interaction, film cooling, and tip clearance with recessed tip on turbine heat transfer and cooling performance. Future plans call for the application of the new Glenn-HT code to a range of gas turbine engine problems of current interest to the heat-transfer community. The new unsteady flow capability will allow researchers to predict the effect of unsteady flow phenomena upon the convective heat transfer of turbine blades and vanes. Work will also continue on the development of conjugate heat-transfer capability in the code, where simultaneous solution of convective and conductive heat-transfer domains is accomplished. Finally, advanced turbulence and fluid flow models and automatic gridding techniques are being developed that will be applied to the Glenn-HT code and solution process.
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers
NASA Astrophysics Data System (ADS)
Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.
1992-01-01
Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
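A toy constrained minimization of the kind an SQP method handles, solved here with SciPy's SLSQP implementation (not the OTIS/NZSOL software, and with a made-up objective and constraints):

```python
# Toy constrained minimization solved with an SQP-type method (SciPy's SLSQP),
# to illustrate the class of algorithm discussed; the objective and constraints
# below are made up and have nothing to do with aeroassisted orbital transfer.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 2.0 * x[1] + 2.0},   # x0 - 2*x1 + 2 >= 0
    {"type": "ineq", "fun": lambda x: -x[0] - 2.0 * x[1] + 6.0},
    {"type": "ineq", "fun": lambda x: -x[0] + 2.0 * x[1] + 2.0},
]
bounds = [(0.0, None), (0.0, None)]

result = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```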
Heat transfer in rocket engine combustion chambers and regeneratively cooled nozzles
NASA Technical Reports Server (NTRS)
1993-01-01
A conjugate heat transfer computational fluid dynamics (CFD) model to describe regenerative cooling in the main combustion chamber and nozzle and in the injector faceplate region for a launch vehicle class liquid rocket engine was developed. An injector model for sprays, which treats the fluid as a variable density, single-phase medium, was formulated, incorporated into a version of the FDNS code, and used to simulate the injector flow typical of that in the Space Shuttle Main Engine (SSME). Various chamber-related heat transfer analyses were made to verify the predictive capability of the conjugate heat transfer analysis provided by the FDNS code. The density based version of the FDNS code with the real fluid property models developed was successful in predicting the streamtube combustion of individual injector elements.
Computing NLTE Opacities -- Node Level Parallel Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Daniel
Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability, compute opacities, and study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1979-01-01
A computer program which can distinguish between different receiver designs and predict transient performance under variable solar flux, ambient temperature, etc., has a basic structure that fits a general heat transfer problem, with specific features that are custom-made for solar receivers. The code is written in the MBASIC computer language. The methodology followed in solving the heat transfer problem is explained. A program flow chart, an explanation of input and output tables, and an example of the simulation of a cavity-type solar receiver are included.
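A single-node, lumped-capacitance sketch of the transient receiver energy balance, with assumed thermal mass, loss conductances, and flux profile (not the MBASIC receiver model described above):

```python
# Lumped-capacitance sketch of a receiver transient under time-varying solar flux.
# The single-node model, property values, and flux profile are illustrative
# assumptions, not the MBASIC receiver model described in the abstract.
from scipy.integrate import solve_ivp

m_cp = 5.0e4          # thermal mass [J/K] (assumed)
h_A = 15.0            # convective loss conductance [W/K] (assumed)
eps_sigma_A = 2.0e-8  # emissivity * Stefan-Boltzmann constant * area [W/K^4] (assumed)
T_amb = 300.0         # ambient temperature [K]

def solar_power(t):
    """Time-varying absorbed solar power [W]; a simple ramp-and-hold profile."""
    return 4000.0 * min(t / 600.0, 1.0)

def dTdt(t, T):
    q_in = solar_power(t)
    q_conv = h_A * (T[0] - T_amb)
    q_rad = eps_sigma_A * (T[0]**4 - T_amb**4)
    return [(q_in - q_conv - q_rad) / m_cp]

sol = solve_ivp(dTdt, (0.0, 7200.0), [T_amb], max_step=10.0)
print(f"receiver temperature after 2 h ~ {sol.y[0, -1]:.0f} K")
```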
1974-12-01
as a series of sections, each representing one pressure and each preceding the corresponding pressure group of the surface thermochemistry deck ... groups together make up the surface thermochemistry deck. Within each pressure group the transfer coefficient values will be ordered. Within each transfer ... values in each pressure group may not exceed 5 but may be only 1. If no kinetics effects are to be considered, a transfer coefficient of zero is acceptable
WINCLR: a Computer Code for Heat Transfer and Clearance Calculation in a Compressor
NASA Technical Reports Server (NTRS)
Bose, T. K.; Murthy, S. N. B.
1994-01-01
One of the concerns during inclement weather operation of aircraft in rain and hail storm conditions is the nature and extent of changes in compressor casing clearance. An increase in clearance affects efficiency while a decrease may cause blade rubbing with the casing. The change in clearance is the result of geometrical dimensional changes in the blades, the casing and the rotor due to heat transfer between those parts and the two-phase working fluid. The heat transfer interacts nonlinearly with the performance of the compressor, and, therefore, the determination of clearance changes necessitates a simultaneous determination of the change in performance of the compressor. A computer code, WINCLR, has been designed for the determination of casing clearance; it is operated interactively with the PURDU-WINCOF I code designed previously for determining the performance of a compressor. A detailed description of the WINCLR code is provided in a companion report. The current report provides details of the code with an illustrative example of application to the case of a multistage compressor. It is found in the example case that under given ingestion and operational conditions, it is possible for a compressor to undergo changes in performance in the front stages and rubbing in the back stages.
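The clearance bookkeeping amounts to differencing the thermally grown casing radius and the thermally grown rotor-plus-blade radius; a back-of-the-envelope sketch with assumed dimensions and temperature rises (centrifugal growth and the coupling to compressor performance, which the code handles, are omitted here):

```python
# Back-of-the-envelope sketch of the clearance bookkeeping the abstract describes:
# tip clearance = heated casing radius minus heated (rotor + blade) tip radius.
# Dimensions, expansion coefficients, and temperature rises are assumed values;
# centrifugal growth and the compressor performance coupling are omitted.
def hot_clearance(r_casing, r_rotor, blade_len,
                  alpha_casing, dT_casing, alpha_metal, dT_metal):
    casing_hot = r_casing * (1.0 + alpha_casing * dT_casing)
    rotor_tip_hot = (r_rotor + blade_len) * (1.0 + alpha_metal * dT_metal)
    return casing_hot - rotor_tip_hot

cold = 0.3005 - (0.25 + 0.05)    # cold build clearance [m] (0.5 mm, assumed)
hot = hot_clearance(r_casing=0.3005, r_rotor=0.25, blade_len=0.05,
                    alpha_casing=12e-6, dT_casing=40.0,
                    alpha_metal=12e-6, dT_metal=80.0)
print(f"cold clearance {cold*1000:.3f} mm -> hot clearance {hot*1000:.3f} mm")
```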
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru
We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM) which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
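In one dimension the closure mentioned above reduces to the Eddington factor f = P/E formed from angular moments of the intensity; a sketch on a Gauss-Legendre ordinate grid with a made-up intensity profile:

```python
# Sketch of the moment/closure bookkeeping in 1-D: compute energy-density,
# flux, and pressure moments of a specific intensity on a discrete-ordinate
# grid, and form the Eddington factor f = P/E. The intensity profile is made up.
import numpy as np

mu, w = np.polynomial.legendre.leggauss(16)     # ordinates and weights on [-1, 1]
I = 1.0 + 0.8 * mu + 0.3 * mu**2                # made-up, forward-peaked intensity

# Moments (up to the 1/c and 2*pi factors, which cancel in the ratio)
E = np.sum(w * I)            # zeroth moment ~ energy density
F = np.sum(w * mu * I)       # first moment  ~ flux
P = np.sum(w * mu**2 * I)    # second moment ~ radiation pressure

print(f"Eddington factor f = P/E = {P / E:.3f}")   # 1/3 for isotropic radiation
```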
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
Sensor Authentication: Embedded Processor Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svoboda, John
2012-09-25
Described is the C code running on the embedded Microchip 32-bit PIC32MX575F256H located on the INL-developed noise analysis circuit board. The code performs the following functions: controls the noise analysis circuit board preamplifier voltage gains of 1, 10, 100, and 1000; initializes the analog-to-digital conversion hardware, input channel selection, Fast Fourier Transform (FFT) function, USB communications interface, and internal memory allocations; initiates high-resolution 4096-point, 200 kHz data acquisition; computes the complex 2048-point FFT and FFT magnitude; services the Host command set; transfers raw data to the Host; transfers the FFT result to the Host; and performs communication error checking.
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Scullin, V. J.
1984-01-01
A general chemical kinetics code is described for complex, homogeneous ideal gas reactions in any chemical system. The main features of the GCKP84 code are flexibility, convenience, and speed of computation for many different reaction conditions. The code, which replaces the GCKP code published previously, solves numerically the differential equations for complex reaction in a batch system or one dimensional inviscid flow. It also solves numerically the nonlinear algebraic equations describing the well stirred reactor. A new state of the art numerical integration method is used for greatly increased speed in handling systems of stiff differential equations. The theory and the computer program, including details of input preparation and a guide to using the code are given.
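Because the abstract emphasizes stiff differential equations, the following minimal sketch shows how a classic stiff kinetics system (the Robertson test problem) can be integrated with an implicit BDF solver. It uses SciPy rather than the GCKP84 FORTRAN integrator and is illustrative of the numerical issue only, not of the code's actual algorithm.

```python
import numpy as np
from scipy.integrate import solve_ivp

def robertson(t, y):
    # Classic stiff chemical-kinetics test problem (three species, rate
    # constants spanning many orders of magnitude).
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
             0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
             3.0e7 * y2**2]

# An implicit, stiff-capable method (BDF) handles this system efficiently.
sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # species fractions at the final time
```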
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
Forward Monte Carlo Computations of Polarized Microwave Radiation
NASA Technical Reports Server (NTRS)
Battaglia, A.; Kummerow, C.
2000-01-01
Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards understanding the microphysical properties of clouds and, with these, the nonlinear relation between rainfall rates and satellite-observed radiances. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the 3-dimensional geometry generated by ever more sophisticated cloud-resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are, indeed, efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. In order to solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, consider the atmosphere as horizontally homogeneous with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently, a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.
15 CFR 740.7 - Computers (APP).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 4A003. (2) Technology and software. License Exception APP authorizes exports of technology and software... programmability. (ii) Technology and source code. Technology and source code eligible for License Exception APP..., reexports and transfers (in-country) for nuclear, chemical, biological, or missile end-users and end-uses...
Thermal finite-element analysis of space shuttle main engine turbine blade
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Tong, Michael T.; Kaufman, Albert
1987-01-01
Finite-element, transient heat transfer analyses were performed for the first-stage blades of the space shuttle main engine (SSME) high-pressure fuel turbopump. The analyses were based on test engine data provided by Rocketdyne. Heat transfer coefficients were predicted by performing a boundary-layer analysis at steady-state conditions with the STAN5 boundary-layer code. Two different peak-temperature overshoots were evaluated for the startup transient. Cutoff transient conditions were also analyzed. A reduced gas temperature profile based on actual thermocouple data was also considered. Transient heat transfer analyses were conducted with the MARC finite-element computer code.
Mass transfer effects in a gasification riser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breault, Ronald W.; Li, Tingwen; Nicoletti, Phillip
2013-07-01
In the development of multiphase reacting computational fluid dynamics (CFD) codes, a number of simplifications were incorporated into the codes and models. One of these simplifications was the use of a simplistic mass transfer correlation for the faster reactions and the complete omission of mass transfer effects on the moderate-speed and slow-speed reactions such as those in a fluidized bed gasifier. Another problem that has propagated is that the mass transfer correlation used in the codes is not universal and is being used far from its developed bubbling fluidized bed regime when applied to circulating fluidized bed (CFB) riser reactors. These problems are true for the major CFD codes. To alleviate this problem, a mechanistically based mass transfer coefficient algorithm has been developed based upon an earlier work by Breault et al. This fundamental approach uses the local hydrodynamics to predict a local, time-varying mass transfer coefficient. The predicted mass transfer coefficients and the corresponding Sherwood numbers agree well with literature data and are typically about an order of magnitude lower than the correlation noted above. The incorporation of the new mass transfer model gives the expected behavior for all the gasification reactions evaluated in the paper. At the expected and typical design values for the solid flow rate in a CFB riser gasifier, an ANOVA analysis has shown the predictions from the new code to be significantly different from the original code predictions. The new algorithm should be used such that the conversions are not overpredicted. Additionally, its behavior with changes in solid flow rate is consistent with the changes in the hydrodynamics.
Computing Temperatures in Optically Thick Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Capuder, Lawrence F.. Jr.
2011-01-01
We worked with a Monte Carlo radiative transfer code to simulate the transfer of energy through protoplanetary disks, where planet formation occurs. The code tracks photons from the star into the disk, through scattering, absorption and re-emission, until they escape to infinity. High optical depths in the disk interior dominate the computation time because it takes the photon packet many interactions to get out of the region. High optical depths also receive few photons and therefore do not have well-estimated temperatures. We applied a modified random walk (MRW) approximation for treating high optical depths and to speed up the Monte Carlo calculations. The MRW is implemented by calculating the average number of interactions the photon packet will undergo in diffusing within a single cell of the spatial grid and then updating the packet position, packet frequencies, and local radiation absorption rate appropriately. The MRW approximation was then tested for accuracy and speed compared to the original code. We determined that MRW provides accurate answers to Monte Carlo Radiative transfer simulations. The speed gained from using MRW is shown to be proportional to the disk mass.
Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations
NASA Astrophysics Data System (ADS)
Tritsis, A.; Yorke, H.; Tassis, K.
2018-05-01
We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated along all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from the outputs of simulations performed with all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions and present case studies using hydrochemical simulations. The code will be released for public use.
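For readers unfamiliar with the escape probability method, the minimal sketch below evaluates one commonly used form of the escape probability, beta(tau) = (1 - exp(-tau))/tau. PyRaTE's exact expression and its direction-dependent optical-depth treatment may differ, so this is only an assumption-laden illustration.

```python
import numpy as np

def escape_probability(tau):
    """Mean escape probability beta(tau) = (1 - exp(-tau)) / tau, with the
    tau -> 0 limit handled explicitly (beta -> 1)."""
    tau = np.asarray(tau, dtype=float)
    small = tau < 1.0e-6
    beta = np.empty_like(tau)
    beta[small] = 1.0 - 0.5 * tau[small]                      # series expansion for small tau
    beta[~small] = (1.0 - np.exp(-tau[~small])) / tau[~small]
    return beta

print(escape_probability([1.0e-8, 0.1, 1.0, 10.0]))
```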
GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heymann, Frank; Siebenmorgen, Ralf, E-mail: fheymann@pa.uky.edu
2012-05-20
A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at any frequency and arbitrary viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.
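The Henyey-Greenstein phase function mentioned above has a standard inverse-CDF sampling formula; the sketch below shows that sampling step in Python. The production code is a vectorized GPU implementation, so this fragment is purely illustrative.

```python
import numpy as np

def sample_hg_costheta(g, rng, n=1):
    """Draw scattering-angle cosines from the Henyey-Greenstein phase function
    using the standard inverse-CDF formula; g is the asymmetry parameter."""
    xi = rng.random(n)
    if abs(g) < 1.0e-6:
        return 2.0 * xi - 1.0                            # isotropic limit
    frac = (1.0 - g**2) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g**2 - frac**2) / (2.0 * g)

rng = np.random.default_rng(42)
print(sample_hg_costheta(0.6, rng, 5))
```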
CFD Based Computations of Flexible Helicopter Blades for Stability Analysis
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2011-01-01
As a collaborative effort among government aerospace research laboratories, an advanced version of a widely used computational fluid dynamics code, OVERFLOW, was recently released. This latest version includes additions to model multiple flexible rotating blades. In this paper, the OVERFLOW code is applied to improve the accuracy of airload computations over the linear lifting-line theory that uses displacements from a beam model. Data transfers required at every revolution are managed through a Unix-based script that runs jobs on large super-cluster computers. Results are demonstrated for the 4-bladed UH-60A helicopter. Deviations of computed data from flight data are evaluated. Fourier analysis post-processing that is suitable for aeroelastic stability computations is performed.
FREQ: A computational package for multivariable system loop-shaping procedures
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Armstrong, Ernest S.
1989-01-01
Many approaches in the field of linear, multivariable, time-invariant systems analysis and controller synthesis employ loop-shaping procedures wherein design parameters are chosen to shape frequency-response singular-value plots of selected transfer matrices. A software package, FREQ, is documented for computing within one unified framework many of the most used multivariable transfer matrices for both continuous and discrete systems. The matrices are evaluated at user-selected frequency values, and their singular values are computed as functions of frequency. Example computations are presented to demonstrate the use of the FREQ code.
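A minimal sketch of the kind of computation FREQ automates is shown below: evaluating a transfer matrix G(jw) = C(jwI - A)^-1 B + D on a frequency grid and taking its singular values. The state-space matrices here are hypothetical, and this is not the FREQ implementation.

```python
import numpy as np

# Hypothetical 2-input/2-output state-space model (A, B, C, D are illustrative).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.eye(2)
C = np.array([[1.0, 0.0], [1.0, 1.0]])
D = np.zeros((2, 2))

def singular_values(omega):
    """Singular values of G(j*omega) = C (j*omega*I - A)^-1 B + D."""
    G = C @ np.linalg.solve(1j * omega * np.eye(2) - A, B) + D
    return np.linalg.svd(G, compute_uv=False)

for w in np.logspace(-2, 2, 5):
    print(w, singular_values(w))
```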
Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres
NASA Astrophysics Data System (ADS)
Liu, Quanhua; Weng, Fuzhong
2006-12-01
The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to tremendous demand on computational resources from the model. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.
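For orientation, the sketch below shows the standard adding equations for combining two homogeneous, symmetric scattering layers described by reflection and transmission operators; the thermal source terms, which ADA replaces with an analytical expression, are omitted here. Doubling simply combines a thin layer with itself repeatedly. The matrices are illustrative placeholders, not output of the DA or ADA codes.

```python
import numpy as np

def add_layers(R1, T1, R2, T2):
    """Combine two homogeneous, symmetric layers (reflection/transmission
    operators discretized over streams) with the adding equations; the Q
    factor accounts for repeated inter-layer reflections."""
    n = R1.shape[0]
    Q = np.linalg.inv(np.eye(n) - R1 @ R2)
    R = R1 + T1 @ R2 @ Q @ T1
    T = T2 @ Q @ T1
    return R, T

# Doubling: combine a thin layer with itself to build up optical thickness.
R_thin = 0.05 * np.eye(2)
T_thin = 0.90 * np.eye(2)
R, T = add_layers(R_thin, T_thin, R_thin, T_thin)
print(R, T, sep="\n")
```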
PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions
NASA Technical Reports Server (NTRS)
Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark
1990-01-01
Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect gas versions of the code were upgraded and tested against benchmark wind tunnel cases of hemisphere-cylinder, three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to effects of grid density, boundary conditions such as singular stagnation line axis and artificial dissipation terms are presented together with subsequent improvements made to the code. The experience gained with the perfect gas code is being currently utilized in applications of an equilibrium air real gas PARC version developed at REMTECH.
Enhancement of the CAVE computer code
NASA Astrophysics Data System (ADS)
Rathjen, K. A.; Burk, H. O.
1983-12-01
The computer code CAVE (Conduction Analysis via Eigenvalues) is a convenient and efficient computer code for predicting two dimensional temperature histories within thermal protection systems for hypersonic vehicles. The capabilities of CAVE were enhanced by incorporation of the following features into the code: real gas effects in the aerodynamic heating predictions, geometry and aerodynamic heating package for analyses of cone shaped bodies, input option to change from laminar to turbulent heating predictions on leading edges, modification to account for reduction in adiabatic wall temperature with increase in leading sweep, geometry package for two dimensional scramjet engine sidewall, with an option for heat transfer to external and internal surfaces, print out modification to provide tables of select temperatures for plotting and storage, and modifications to the radiation calculation procedure to eliminate temperature oscillations induced by high heating rates. These new features are described.
ERIC Educational Resources Information Center
Mayer, Richard E.; Sims, Valerie K.
1994-01-01
In 2 experiments, 162 high- and low-spatial ability students viewed a computer-generated animation and heard a concurrent or successive explanation. The concurrent group generated more creative solutions to transfer problems and demonstrated a contiguity effect consistent with dual-coding theory. (SLD)
A generic archive protocol and an implementation
NASA Technical Reports Server (NTRS)
Jordan, J. M.; Jennings, D. G.; Mcglynn, T. A.; Ruggiero, N. G.; Serlemitsos, T. A.
1992-01-01
Archiving vast amounts of data has become a major part of every scientific space mission today. The Generic Archive/Retrieval Services Protocol (GRASP) addresses the question of how to archive the data collected in an environment where the underlying hardware archives may be rapidly changing. GRASP is a device independent specification defining a set of functions for storing and retrieving data from an archive, as well as other support functions. GRASP is divided into two levels: the Transfer Interface and the Action Interface. The Transfer Interface is computer/archive independent code while the Action Interface contains code which is dedicated to each archive/computer addressed. Implementations of the GRASP specification are currently available for DECstations running Ultrix, Sparcstations running SunOS, and microVAX/VAXstation 3100's. The underlying archive is assumed to function as a standard Unix or VMS file system. The code, written in C, is a single suite of files. Preprocessing commands define the machine unique code sections in the device interface. The implementation was written, to the greatest extent possible, using only ANSI standard C functions.
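A minimal sketch of the two-layer idea described above, a device-independent transfer layer delegating to an archive-specific action layer, is given below in Python. The class and method names are hypothetical and do not correspond to the actual GRASP function set, which is written in C.

```python
from abc import ABC, abstractmethod

class ActionInterface(ABC):
    """Archive/computer-specific layer (analogous in spirit to GRASP's Action
    Interface); names here are hypothetical, not the GRASP function names."""
    @abstractmethod
    def put(self, name: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, name: str) -> bytes: ...

class LocalFileAction(ActionInterface):
    """One concrete backend: a plain file-system archive rooted at a directory."""
    def __init__(self, root: str):
        self.root = root
    def put(self, name, data):
        with open(f"{self.root}/{name}", "wb") as f:
            f.write(data)
    def get(self, name):
        with open(f"{self.root}/{name}", "rb") as f:
            return f.read()

class TransferInterface:
    """Device-independent layer that client code calls; it simply delegates
    to whichever Action implementation is configured."""
    def __init__(self, action: ActionInterface):
        self.action = action
    def store(self, name, data):
        self.action.put(name, data)
    def retrieve(self, name):
        return self.action.get(name)

archive = TransferInterface(LocalFileAction("."))
archive.store("dataset.bin", b"example payload")
print(archive.retrieve("dataset.bin"))
```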
Influence of temperature fluctuations on infrared limb radiance: a new simulation code
NASA Astrophysics Data System (ADS)
Rialland, Valérie; Chervet, Patrick
2006-08-01
Airborne infrared limb-viewing detectors may be used as surveillance sensors in order to detect dim military targets. The performance of these systems is limited by the inhomogeneous background in the sensor field of view, which strongly affects the target detection probability. This background clutter, which results from small-scale fluctuations of temperature, density, or pressure, must therefore be analyzed and modeled. Few existing codes are able to model atmospheric structures and their impact on limb-observed radiance. SAMM-2 (SHARC-4 and MODTRAN4 Merged), the Air Force Research Laboratory (AFRL) background radiance code, can be used to predict the radiance fluctuation resulting from a normalized temperature fluctuation, as a function of the line-of-sight. Various realizations of cluttered backgrounds can then be computed, based on these transfer functions and on a stochastic temperature field. The existing SIG (SHARC Image Generator) code was designed to compute the cluttered background which would be observed from a space-based sensor. Unfortunately, this code was not able to compute accurate scenes as seen by an airborne sensor, especially for lines-of-sight close to the horizon. Recently, we developed a new code, called BRUTE3D, adapted to our configuration. This approach is based on a method originally developed in the SIG model. The BRUTE3D code makes use of a three-dimensional grid of temperature fluctuations and of the SAMM-2 transfer functions to synthesize an image of radiance fluctuations according to sensor characteristics. This paper details the working principles of the code and presents some output results. The effects of the small-scale temperature fluctuations on infrared limb radiance as seen by an airborne sensor are highlighted.
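Assuming the linearized picture described above (a radiance fluctuation built up as a weighted superposition of temperature fluctuations along each line of sight), the toy sketch below illustrates the synthesis step with invented arrays; it is not the BRUTE3D or SAMM-2 implementation, and the weighting function is purely illustrative.

```python
import numpy as np

# Hypothetical inputs: delta_T holds temperature fluctuations sampled at points
# along each line of sight, and weight holds a precomputed radiance response
# (transfer function) to a unit temperature fluctuation at those points.
rng = np.random.default_rng(0)
n_pixels, n_path = 64, 200
delta_T = rng.normal(0.0, 1.0, size=(n_pixels, n_path))   # stochastic temperature field, K
weight = np.exp(-np.linspace(0.0, 5.0, n_path))           # illustrative transfer function

# Linearized radiance fluctuation for each pixel: weighted sum along the path.
delta_L = delta_T @ weight
print(delta_L[:5])
```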
HIFiRE-1 Turbulent Shock Boundary Layer Interaction - Flight Data and Computations
NASA Technical Reports Server (NTRS)
Kimmel, Roger L.; Prabhu, Dinesh
2015-01-01
The Hypersonic International Flight Research Experimentation (HIFiRE) program is a hypersonic flight test program executed by the Air Force Research Laboratory (AFRL) and Australian Defence Science and Technology Organisation (DSTO). This flight contained a cylinder-flare induced shock boundary layer interaction (SBLI). Computations of the interaction were conducted for a number of times during the ascent. The DPLR code used for predictions was calibrated against ground test data prior to exercising the code at flight conditions. Generally, the computations predicted the upstream influence and interaction pressures very well. Plateau pressures on the cylinder were predicted well at all conditions. Although the experimental heat transfer showed a large amount of scatter, especially at low heating levels, the measured heat transfer agreed well with computations. The primary discrepancy between the experiment and computation occurred in the pressures measured on the flare during second stage burn. Measured pressures exhibited large overshoots late in the second stage burn, the mechanism of which is unknown. The good agreement between flight measurements and CFD helps validate the philosophy of calibrating CFD against ground test, prior to exercising it at flight conditions.
Simplified diagnostic coding sheet for computerized data storage and analysis in ophthalmology.
Tauber, J; Lahav, M
1987-11-01
A review of currently available diagnostic coding systems revealed that most are either too abbreviated or too detailed. We have compiled a simplified diagnostic coding sheet based on the International Classification of Diseases, 9th Revision (ICD-9), which is both complete and easy to use in a general practice. The information is transferred to a computer, which uses the relevant ICD-9 diagnoses as a database that can be retrieved later for display of patients' problems or analysis of clinical data.
HO-CHUNK: Radiation Transfer code
NASA Astrophysics Data System (ADS)
Whitney, Barbara A.; Wood, Kenneth; Bjorkman, J. E.; Cohen, Martin; Wolff, Michael J.
2017-11-01
HO-CHUNK calculates radiative equilibrium temperature solution, thermal and PAH/vsg emission, scattering and polarization in protostellar geometries. It is useful for computing spectral energy distributions (SEDs), polarization spectra, and images.
The physics of volume rendering
NASA Astrophysics Data System (ADS)
Peters, Thomas
2014-11-01
Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics. Computer scientists use radiation transfer, among other things, for the visualization of complex data sets with direct volume rendering. In this article, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D. I show examples for the use of this module on analytical models and simulation data.
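The link described here is the emission-absorption formal solution accumulated along a ray, which is the same integral that direct volume rendering evaluates. A minimal front-to-back integration sketch is shown below; it is illustrative only and not the RADMC-3D module.

```python
import numpy as np

def integrate_ray(source, dtau):
    """Formal solution of the emission-absorption transfer equation along one
    ray, front to back: each cell contributes S*(1 - exp(-dtau)) attenuated by
    the transparency accumulated in front of it."""
    intensity = 0.0
    transparency = 1.0
    for S, dt in zip(source, dtau):
        intensity += transparency * S * (1.0 - np.exp(-dt))
        transparency *= np.exp(-dt)
    return intensity

source = np.array([1.0, 2.0, 0.5])   # source function per cell (arbitrary units)
dtau = np.array([0.1, 0.5, 2.0])     # optical depth of each cell
print(integrate_ray(source, dtau))
```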
NASA Technical Reports Server (NTRS)
Sozen, Mehmet
2003-01-01
In what follows, the model used for combustion of liquid hydrogen (LH2) with liquid oxygen (LOX) using chemical equilibrium assumption, and the novel computational method developed for determining the equilibrium composition and temperature of the combustion products by application of the first and second laws of thermodynamics will be described. The modular FORTRAN code developed as a subroutine that can be incorporated into any flow network code with little effort has been successfully implemented in GFSSP as the preliminary runs indicate. The code provides capability of modeling the heat transfer rate to the coolants for parametric analysis in system design.
Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes
NASA Astrophysics Data System (ADS)
Schreier, Franz; Milz, Mathias; Buehler, Stefan A.; von Clarmann, Thomas
2018-05-01
An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric radiative transfer and remote sensing - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the 19 HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. The mutual differences of the equivalent brightness temperatures are presented and possible causes of disagreement are discussed. In particular, the impact of path integration schemes and atmospheric layer discretization is assessed. When the continuum absorption contribution is ignored because of the different implementations, residuals are generally in the sub-Kelvin range and smaller than 0.1 K for some window channels (and all atmospheric models and lbl codes). None of the three codes turned out to be perfect for all channels and atmospheres. Remaining discrepancies are attributed to different lbl optimization techniques. Lbl codes seem to have reached such maturity in the implementation of radiative transfer that the choice of the underlying physical models (line shape models, continua, etc.) becomes increasingly relevant.
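Since the comparison is expressed in equivalent brightness temperatures, the small sketch below shows the standard inversion of the Planck function used to convert a monochromatic radiance to a brightness temperature. It is a generic formula, not code from ARTS, GARLIC, or KOPRA.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def brightness_temperature(radiance, nu):
    """Equivalent brightness temperature from the inverse Planck function.
    radiance in W m^-2 sr^-1 Hz^-1, nu in Hz."""
    return (H * nu / K) / np.log(1.0 + 2.0 * H * nu**3 / (C**2 * radiance))

# Example: radiance of a 288 K blackbody at 2e13 Hz (15 um), recovered as T_b.
nu = 2.0e13
B = 2.0 * H * nu**3 / C**2 / (np.exp(H * nu / (K * 288.0)) - 1.0)
print(brightness_temperature(B, nu))   # ~288 K
```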
NASA Technical Reports Server (NTRS)
Hollis, Brian R.
1995-01-01
A FORTRAN computer code for the reduction and analysis of experimental heat transfer data has been developed. This code can be utilized to determine heat transfer rates from surface temperature measurements made using either thin-film resistance gages or coaxial surface thermocouples. Both an analytical and a numerical finite-volume heat transfer model are implemented in this code. The analytical solution is based on a one-dimensional, semi-infinite wall thickness model with the approximation of constant substrate thermal properties, which is empirically corrected for the effects of variable thermal properties. The finite-volume solution is based on a one-dimensional, implicit discretization. The finite-volume model directly incorporates the effects of variable substrate thermal properties and does not require the semi-infinite wall thickness approximation used in the analytical model. This model also includes the option of a multiple-layer substrate. Fast, accurate results can be obtained using either method. This code has been used to reduce several sets of aerodynamic heating data, of which samples are included in this report.
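A minimal sketch of the analytical (constant-property, semi-infinite substrate) data-reduction approach is shown below, using a common discretization of the surface heat-flux integral often attributed to Cook and Felderman. The property values and temperature history are illustrative, and the report's FORTRAN code may use a different, empirically corrected form.

```python
import numpy as np

def heat_flux_semi_infinite(t, T, rho, c, k):
    """Surface heat flux from a surface-temperature history, assuming a 1-D
    semi-infinite substrate with constant properties (Cook-Felderman-type
    discretization of the analytical solution)."""
    q = np.zeros_like(T)
    coef = 2.0 * np.sqrt(rho * c * k / np.pi)
    for n in range(1, len(t)):
        s = 0.0
        for j in range(1, n + 1):
            s += (T[j] - T[j - 1]) / (np.sqrt(t[n] - t[j]) + np.sqrt(t[n] - t[j - 1]))
        q[n] = coef * s
    return q

t = np.linspace(0.0, 0.1, 101)          # s
T = 300.0 + 50.0 * np.sqrt(t)           # K, illustrative surface temperature rise
# Illustrative substrate properties (density, specific heat, conductivity)
print(heat_flux_semi_infinite(t, T, rho=2200.0, c=700.0, k=1.4)[-1])
```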
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
PIES free boundary stellarator equilibria with improved initial conditions
NASA Astrophysics Data System (ADS)
Drevlak, M.; Monticello, D.; Reiman, A.
2005-07-01
The MFBE procedure developed by Strumberger (1997 Nucl. Fusion 37 19) is used to provide an improved starting point for free boundary equilibrium computations in the case of W7-X (Nührenberg and Zille 1986 Phys. Lett. A 114 129) using the Princeton iterative equilibrium solver (PIES) code (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157). Transferring the consistent field found by the variational moments equilibrium code (VMEC) (Hirshman and Whitson 1983 Phys. Fluids 26 3553) to an extended coordinate system using the VMORPH code, a safe margin between the plasma boundary and the PIES domain is established. The new EXTENDER_P code implements a generalization of the virtual casing principle, which allows field extension both for VMEC and PIES equilibria. This facilitates analysis of the 5/5 islands of the W7-X standard case without including them in the original PIES computation.
SPAMCART: a code for smoothed particle Monte Carlo radiative transfer
NASA Astrophysics Data System (ADS)
Lomax, O.; Whitworth, A. P.
2016-10-01
We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
2007-05-01
Actinide product radionuclides... actinides, and fission products in fallout. Doses from low-linear energy transfer (LET) radiation (beta particles and gamma rays) are reported separately...assumptions about the critical parameters used in calculating internal doses – resuspension factor, breathing rate, fractionation, and scenario elements – to
Advanced Computational Methods for Thermal Radiative Heat Transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.
2016-10-01
Participating media radiation (PMR) calculations in weapon safety analyses for abnormal thermal environments are too costly to do routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
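As a generic illustration of the reduced order modeling idea (not the authors' specific PMR formulation), the sketch below builds a proper-orthogonal-decomposition basis from a snapshot matrix with NumPy; the snapshot data are synthetic.

```python
import numpy as np

# Synthetic snapshot matrix: each column is a full-order thermal/radiative state
# (e.g., nodal temperatures) saved at one time instant. A low-rank structure is
# built in so the example has something for the ROM to find.
rng = np.random.default_rng(1)
modes = rng.normal(size=(5000, 3))                 # three "true" spatial modes
coeffs = rng.normal(size=(3, 40))                  # their time histories
snapshots = modes @ coeffs + 0.01 * rng.normal(size=(5000, 40))

# Proper orthogonal decomposition: leading left singular vectors form a basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1         # modes needed for 99% energy
basis = U[:, :r]

# A reduced state has r unknowns instead of 5000; full states are recovered as
# x_full ~= basis @ x_reduced.
print(r, basis.shape)
```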
A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere
NASA Astrophysics Data System (ADS)
Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael
2017-05-01
We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.
Survey of computer programs for heat transfer analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1986-01-01
An overview is given of the current capabilities of thirty-three computer programs that are used to solve heat transfer problems. The programs considered range from large general-purpose codes with broad spectrum of capabilities, large user community, and comprehensive user support (e.g., ABAQUS, ANSYS, EAL, MARC, MITAS II, MSC/NASTRAN, and SAMCEF) to the small, special-purpose codes with limited user community such as ANDES, NTEMP, TAC2D, TAC3D, TEPSA and TRUMP. The majority of the programs use either finite elements or finite differences for the spatial discretization. The capabilities of the programs are listed in tabular form followed by a summary of the major features of each program. The information presented herein is based on a questionnaire sent to the developers of each program. This information is preceded by a brief background material needed for effective evaluation and use of computer programs for heat transfer analysis. The present survey is useful in the initial selection of the programs which are most suitable for a particular application. The final selection of the program to be used should, however, be based on a detailed examination of the documentation and the literature about the program.
NASA Astrophysics Data System (ADS)
Ishii, Ayako; Ohnishi, Naofumi; Nagakura, Hiroki; Ito, Hirotaka; Yamada, Shoichi
2017-11-01
We developed a three-dimensional radiative transfer code for an ultra-relativistic background flow-field by using the Monte Carlo (MC) method in the context of gamma-ray burst (GRB) emission. For obtaining reliable simulation results in the coupled computation of MC radiation transport with relativistic hydrodynamics which can reproduce GRB emission, we validated radiative transfer computation in the ultra-relativistic regime and assessed the appropriate simulation conditions. The radiative transfer code was validated through two test calculations: (1) computing in different inertial frames and (2) computing in flow-fields with discontinuous and smeared shock fronts. The simulation results of the angular distribution and spectrum were compared among three different inertial frames and were in good agreement with each other. If the time duration for updating the flow-field was sufficiently small to resolve a mean free path of a photon into ten steps, the results were fully converged. The spectrum computed in the flow-field with a discontinuous shock front obeyed a power-law in frequency whose index was positive in the range from 1 to 10 MeV. The number of photons on the high-energy side decreased with the smeared shock front because the photons were less scattered immediately behind the shock wave due to the small electron number density. The large optical depth near the shock front was needed for obtaining high-energy photons through bulk Compton scattering. Even the one-dimensional structure of the shock wave could affect the results of the radiation transport computation. Although we examined the effect of the shock structure on the emitted spectrum with a large number of cells, it is hard to employ so many computational cells per dimension in multi-dimensional simulations. Therefore, a further investigation with a smaller number of cells is required for obtaining realistic high-energy photons with multi-dimensional computations.
The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer
NASA Astrophysics Data System (ADS)
Kochoska, Angela; Prša, Andrej; Horvat, Martin
2018-01-01
Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and can be used with any existing or new model of the structure of contact binaries. We present results on several test objects and future prospects of the implementation in state-of-the-art binary star modeling software.
TEMPEST: A computer code for three-dimensional analysis of transient fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fort, J.A.
TEMPEST (Transient Energy Momentum and Pressure Equations Solutions in Three dimensions) is a powerful tool for solving engineering problems in nuclear energy, waste processing, chemical processing, and environmental restoration because it performs and illustrates 3-D, time-dependent computational fluid dynamics and heat transfer analysis. It is a family of codes with two primary versions, an N-Version (available to the public) and a T-Version (not currently available to the public). This handout discusses its capabilities, applications, numerical algorithms, development status, and availability and assistance.
Thermodynamic equilibrium-air correlations for flowfield applications
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.
1981-01-01
Equilibrium-air thermodynamic correlations have been developed for flowfield calculation procedures. A comparison between the postshock results computed by the correlation equations and detailed chemistry calculations is very good. The thermodynamic correlations are incorporated in an approximate inviscid flowfield code with a convective heating capability for the purpose of defining the thermodynamic environment through the shock layer. Comparisons of heating rates computed by the approximate code and a viscous-shock-layer method are good. In addition to presenting the thermodynamic correlations, the impact of several viscosity models on the convective heat transfer is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, T.H.; Domanus, H.M.; Sha, W.T.
1993-02-01
The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.
Survey of computer programs for heat transfer analysis
NASA Technical Reports Server (NTRS)
Noor, A. K.
1982-01-01
An overview is presented of the current capabilities of thirty-eight computer programs that can be used for solution of heat transfer problems. These programs range from the large, general-purpose codes with a broad spectrum of capabilities, large user community and comprehensive user support (e.g., ANSYS, MARC, MITAS 2 MSC/NASTRAN, SESAM-69/NV-615) to the small, special purpose codes with limited user community such as ANDES, NNTB, SAHARA, SSPTA, TACO, TEPSA AND TRUMP. The capabilities of the programs surveyed are listed in tabular form followed by a summary of the major features of each program. As with any survey of computer programs, the present one has the following limitations: (1) It is useful only in the initial selection of the programs which are most suitable for a particular application. The final selection of the program to be used should, however, be based on a detailed examination of the documentation and the literature about the program; (2) Since computer software continually changes, often at a rapid rate, some means must be found for updating this survey and maintaining some degree of currency.
Thermal Transfer Compared To The Fourteen Other Imaging Technologies
NASA Astrophysics Data System (ADS)
O'Leary, John W.
1989-07-01
A quiet revolution in the world of imaging has been underway for the past few years. The older technologies of dot matrix, daisy wheel, thermal paper and pen plotters have been increasingly displaced by laser, ink jet and thermal transfer. The net result of this revolution is improved technologies that afford superior imaging, quiet operation, plain paper usage, instant operation, and solid state components. Thermal transfer is one of the processes that incorporates these benefits. Among the imaging application for thermal transfer are: 1. Bar code labeling and scanning. 2. New systems for airline ticketing, boarding passes, reservations, etc. 3. Color computer graphics and imaging. 4. Copying machines that copy in color. 5. Fast growing communications media such as facsimile. 6. Low cost word processors and computer printers. 7. New devices that print pictures from video cameras or television sets. 8. Cameras utilizing computer chips in place of film.
DataPlus - a revolutionary applications generator for DOS hand-held computers
David Dean; Linda Dean
2000-01-01
DataPlus allows the user to easily design data collection templates for DOS-based hand-held computers that mimic clipboard data sheets. The user designs and tests the application on the desktop PC and then transfers it to a DOS field computer. Other features include: error checking, missing data checks, and sensor input from RS-232 devices such as bar code wands,...
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
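A toy sketch of the two ingredients named above, a Newton-Raphson solve of A(x)x = c followed by direct-differentiation sensitivities dx/db = -(dR/dx)^-1 dR/db, is given below. The small system, the parameter b, and all functional forms are hypothetical and unrelated to the report's finite element model.

```python
import numpy as np

# Illustrative nonlinear "state equation" A(x)x = c with a design parameter b.
def A(x, b):
    return np.diag(1.0 + b * x**2) + 0.1 * np.ones((2, 2))

def residual(x, b, c):
    return A(x, b) @ x - c

def jacobian_x(x, b, c, eps=1.0e-7):
    """Finite-difference Jacobian dR/dx (includes the dA/dx contribution)."""
    R0 = residual(x, b, c)
    J = np.zeros((len(x), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (residual(x + dx, b, c) - R0) / eps
    return J

def solve_state(b, c, x0, tol=1.0e-12):
    """Newton-Raphson iteration on R(x) = A(x)x - c = 0."""
    x = x0.astype(float).copy()
    for _ in range(50):
        R = residual(x, b, c)
        if np.linalg.norm(R) < tol:
            break
        x -= np.linalg.solve(jacobian_x(x, b, c), R)
    return x

b, c = 0.5, np.array([1.0, 2.0])
x = solve_state(b, c, np.ones(2))

# Direct differentiation: dR/dx * dx/db + dR/db = 0  =>  dx/db = -(dR/dx)^-1 dR/db
eps = 1.0e-7
dRdb = (residual(x, b + eps, c) - residual(x, b, c)) / eps
dxdb = -np.linalg.solve(jacobian_x(x, b, c), dRdb)
print("state:", x, "sensitivity dx/db:", dxdb)
```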
Development of a GPU Compatible Version of the Fast Radiation Code RRTMG
NASA Astrophysics Data System (ADS)
Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.
2012-12-01
The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
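For contrast with the fast probability integration used in NESSUS, the sketch below estimates a multidisciplinary series-system failure probability by plain Monte Carlo sampling over three hypothetical limit states; all distributions and thresholds are invented for illustration.

```python
import numpy as np

# Hypothetical limit states from three disciplines; g < 0 means failure.
rng = np.random.default_rng(7)
n = 200_000
stress   = rng.normal(300.0, 30.0, n)    # MPa
strength = rng.normal(400.0, 25.0, n)    # MPa
wall_T   = rng.normal(800.0, 40.0, n)    # K
flow     = rng.normal(2.0, 0.2, n)       # kg/s

g_struct  = strength - stress            # structural margin
g_thermal = 900.0 - wall_T               # heat-transfer margin
g_flow    = flow - 1.5                   # fluid-flow margin

# Series system: the system fails if any single limit state is violated.
fail = (g_struct < 0) | (g_thermal < 0) | (g_flow < 0)
print("estimated system failure probability:", fail.mean())
```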
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) model connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
Comparative Dosimetric Estimates of a 25 keV Electron Micro-beam with three Monte Carlo Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mainardi, Enrico; Donahue, Richard J.; Blakely, Eleanor A.
2002-09-11
The calculations presented compare the different performances of the three Monte Carlo codes PENELOPE-1999, MCNP-4C and PITS for the evaluation of dose profiles from a 25 keV electron micro-beam traversing individual cells. The overall model of a cell is a water cylinder equivalent for the three codes but with a different internal scoring geometry: hollow cylinders for PENELOPE and MCNP, whereas spheres are used for the PITS code. A cylindrical cell geometry with scoring volumes in the shape of hollow cylinders was initially selected for PENELOPE and MCNP because of its superior simulation of the actual shape and dimensions of a cell and for its improved computer-time efficiency compared to spherical internal volumes. Some of the transfer points and energy deposits that constitute a radiation track may actually fall in the space between spheres and would therefore lie outside the spherical scoring volumes. This internal geometry, along with the PENELOPE algorithm, drastically reduced the computer time when using this code compared with event-by-event Monte Carlo codes like PITS. This preliminary work has been important to address dosimetric estimates at low electron energies. It demonstrates that codes like PENELOPE can be used for dose evaluation, even with such small geometries and energies involved, which are far below the normal use for which the code was created. Further work (initiated in Summer 2002) is still needed, however, to create a user-code for PENELOPE that allows uniform comparison of exact cell geometries, integral volumes, and also microdosimetric scoring quantities, a field where track-structure codes like PITS, written for this purpose, are believed to be superior.
3D-radiative transfer in terrestrial atmosphere: An efficient parallel numerical procedure
NASA Astrophysics Data System (ADS)
Bass, L. P.; Germogenova, T. A.; Nikolaeva, O. V.; Kokhanovsky, A. A.; Kuznetsov, V. S.
2003-04-01
Light propagation and scattering in the terrestrial atmosphere is usually studied in the framework of the 1D radiative transfer theory [1]. However, in reality particles (e.g., ice crystals, solid and liquid aerosols, cloud droplets) are randomly distributed in 3D space. In particular, their concentrations vary both in vertical and horizontal directions. Therefore, 3D effects influence modern cloud and aerosol retrieval procedures, which are currently based on the 1D radiative transfer theory. It should be pointed out that the standard radiative transfer equation makes it possible to study these more complex situations as well [2]. In recent years the parallel version of the 2D and 3D RADUGA code has been developed. This version is successfully used in gamma and neutron transport problems [3]. Applications of this code to radiative transfer problems in the atmosphere are contained in [4]. The capabilities of the RADUGA code are presented in [5]. The RADUGA code system is a universal solver of radiative transfer problems for complicated models, including 2D and 3D aerosol and cloud fields with arbitrary scattering anisotropy, light absorption, inhomogeneous underlying surface and topography. Both delta-type and distributed light sources can be accounted for in the framework of the algorithm developed. The accurate numerical procedure is based on the new discrete ordinate SWDD scheme [6]. The algorithm is specifically designed for parallel supercomputers. The version RADUGA 5.1(P) can run on MBC1000M [7] (768 processors with 10 Gb of hard disc memory per processor); the peak performance is 1 Tflops. The corresponding scalar version RADUGA 5.1 runs on a PC. As a first example of application of the algorithm developed, we have studied the shadowing effects of clouds on the neighboring cloudless atmosphere, depending on the cloud optical thickness, surface albedo, and illumination conditions. This is of importance for the development of modern satellite aerosol retrieval algorithms. [1] Sobolev, V. V., 1972: Light scattering in planetary atmospheres, M.: Nauka. [2] Evans, K. F., 1998: The spherical harmonic discrete ordinate method for three-dimensional atmospheric radiative transfer, J. Atmos. Sci., 55, 429-446. [3] L.P. Bass, T.A. Germogenova, V.S. Kuznetsov, O.V. Nikolaeva. RADUGA 5.1 and RADUGA 5.1(P) codes for stationary transport equation solution in 2D and 3D geometries on single- and multiprocessor computers. Report at the seminar “Algorithms and Codes for Neutron-Physical Calculations of Nuclear Reactors” (Neutronica 2001), Obninsk, Russia, 30 October - 2 November 2001. [4] T.A. Germogenova, L.P. Bass, V.S. Kuznetsov, O.V. Nikolaeva. Mathematical modeling on parallel computers of solar and laser radiation transport in the 3D atmosphere. Report at the International Symposium of CIS countries “Atmospheric Radiation”, 18-21 June 2002, St. Petersburg, Russia, p. 15-16. [5] L.P. Bass, T.A. Germogenova, O.V. Nikolaeva, V.S. Kuznetsov. Radiative Transfer Universal 2D-3D Code RADUGA 5.1(P) for Multiprocessor Computers. Abstract, poster report at this meeting. [6] L.P. Bass, O.V. Nikolaeva. Correct Calculation of Angular Flux Distribution in Strongly Heterogeneous Media and Voids. Proc. of the Joint International Conference on Mathematical Methods and Supercomputing for Nuclear Applications, Saratoga Springs, New York, October 5-9, 1997, p. 995-1004. [7] http://www/jscc.ru
Computer codes for thermal analysis of a solid rocket motor nozzle
NASA Technical Reports Server (NTRS)
Chauhan, Rajinder Singh
1988-01-01
A number of computer codes are available for performing thermal analysis of solid rocket motor nozzles. The Aerotherm Chemical Equilibrium (ACE) computer program can be used to perform a one-dimensional gas expansion to determine the state of the gas at each location of a nozzle. The ACE outputs can be used as input to a computer program called Momentum/Energy Integral Technique (MEIT) for predicting boundary layer development, shear, and heating on the surface of the nozzle. The output from MEIT can be used as input to another computer program called the Aerotherm Charring Material Thermal Response and Ablation Program (CMA). This program is used to calculate the ablation or decomposition response of the nozzle material. A code called Failure Analysis Nonlinear Thermal and Structural Integrated Code (FANTASTIC) is also likely to be used for performing thermal analysis of solid rocket motor nozzles after the program is duly verified. A part of the verification work on FANTASTIC was done by using one- and two-dimensional heat transfer examples with known answers. An attempt was made to prepare input for performing thermal analysis of the CCT nozzle using the FANTASTIC computer code. The CCT nozzle problem will first be solved by using ACE, MEIT, and CMA. The same problem will then be solved using FANTASTIC. These results will then be compared for verification of FANTASTIC.
A Sequential Fluid-mechanic Chemical-kinetic Model of Propane HCCI Combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aceves, S M; Flowers, D L; Martinez-Frias, J
2000-11-29
We have developed a methodology for predicting combustion and emissions in a Homogeneous Charge Compression Ignition (HCCI) Engine. This methodology combines a detailed fluid mechanics code with a detailed chemical kinetics code. Instead of directly linking the two codes, which would require an extremely long computational time, the methodology consists of first running the fluid mechanics code to obtain temperature profiles as a function of time. These temperature profiles are then used as input to a multi-zone chemical kinetics code. The advantage of this procedure is that a small number of zones (10) is enough to obtain accurate results. This procedure achieves the benefits of linking the fluid mechanics and the chemical kinetics codes with a great reduction in the computational effort, to a level that can be handled with current computers. The success of this procedure is in large part a consequence of the fact that for much of the compression stroke the chemistry is inactive and thus has little influence on fluid mechanics and heat transfer. Then, when chemistry is active, combustion is rather sudden, leaving little time for interaction between chemistry and fluid mixing and heat transfer. This sequential methodology has been capable of explaining the main characteristics of HCCI combustion that have been observed in experiments. In this paper, we use our model to explore an HCCI engine running on propane. The paper compares experimental and numerical pressure traces, heat release rates, and hydrocarbon and carbon monoxide emissions. The results show an excellent agreement, even in parameters that are difficult to predict, such as chemical heat release rates. Carbon monoxide emissions are reasonably well predicted, even though it is intrinsically difficult to make good predictions of CO emissions in HCCI engines. The paper includes a sensitivity study on the effect of the heat transfer correlation on the results of the analysis. Importantly, the paper also shows a numerical study on how parameters such as swirl rate, crevices and ceramic walls could help in reducing HC and CO emissions from HCCI engines.
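The sequential coupling described above (CFD temperature histories binned into about ten zones, then detailed chemistry advanced per zone) can be sketched as follows. This is a toy illustration under stated assumptions: the binning function, the one-step Arrhenius "kinetics", and all numbers are hypothetical stand-ins, not the actual fluid mechanics or kinetics codes used in the study.

```python
import numpy as np

def bin_into_zones(cell_T, cell_mass, n_zones=10):
    """Group CFD cells into mass zones by temperature, as in a multi-zone HCCI model."""
    order = np.argsort(cell_T)
    zone_idx = np.array_split(order, n_zones)
    zone_T = np.array([np.average(cell_T[idx], weights=cell_mass[idx]) for idx in zone_idx])
    zone_m = np.array([cell_mass[idx].sum() for idx in zone_idx])
    return zone_T, zone_m

def advance_zone_chemistry(T, fuel, dt, A=1e9, Ea_over_R=2.0e4, q=2.5e6, cv=900.0):
    """Toy single-step Arrhenius 'kinetics' for one zone (stand-in for a detailed mechanism)."""
    rate = A * fuel * np.exp(-Ea_over_R / T)           # 1/s
    burned = min(fuel, rate * dt)
    return T + q * burned / cv, fuel - burned

# Hypothetical CFD output near top dead center: per-cell temperature (K) and mass (kg).
cell_T = np.random.default_rng(0).normal(1000.0, 60.0, 5000)
cell_mass = np.full(5000, 1.0e-7)

zone_T, zone_m = bin_into_zones(cell_T, cell_mass)
fuel = np.full_like(zone_T, 0.05)                      # fuel mass fraction per zone
for _ in range(200):                                   # march chemistry over 2 ms in 10 us steps
    for z in range(zone_T.size):
        zone_T[z], fuel[z] = advance_zone_chemistry(zone_T[z], fuel[z], dt=1.0e-5)

print("zone temperatures after chemistry:", zone_T.round(1))
```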
A method of non-contact reading code based on computer vision
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan
2018-03-01
To guarantee the security of computer information exchange between internal and external networks (a trusted network and an untrusted network), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network-isolation methods. Using a computer monitor, a camera, and other equipment, the information to be exchanged is processed through image coding, generation of a standard image, display and capture of the actual image, computation of the homography matrix, image distortion correction, and decoding against the calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s is achieved. The experiments show that the algorithm offers high security, fast speed, and little loss of information. It can meet the daily needs of confidentiality departments to update data effectively and reliably, and it solves the difficulty of exchanging computer information between secret and non-secret networks, with distinctive originality, practicability, and practical research value.
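A minimal sketch of the screen-to-camera pipeline outlined above, using OpenCV: estimate the homography from the four detected corners of the displayed frame, warp the captured photograph back to the standard image, then read the payload. The corner detection and the per-cell bit decoder are placeholders (assumptions), since the paper's exact coding scheme is not given here.

```python
import cv2
import numpy as np

def rectify_captured_frame(captured, corners_px, frame_size=(800, 800)):
    """Undo the perspective distortion of a photographed monitor image.
    corners_px: four detected corners of the displayed frame in the camera image,
    ordered top-left, top-right, bottom-right, bottom-left (detection not shown)."""
    w, h = frame_size
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    src = np.asarray(corners_px, dtype=np.float32)
    H, _ = cv2.findHomography(src, dst)               # homography: camera plane -> standard image
    return cv2.warpPerspective(captured, H, (w, h))   # distortion-corrected standard image

def decode_cells(rectified, grid=(40, 40), thresh=128):
    """Placeholder decoder: read one bit per grid cell from the rectified image."""
    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY)
    gy, gx = grid
    h, w = gray.shape
    bits = []
    for r in range(gy):
        for c in range(gx):
            cell = gray[r * h // gy:(r + 1) * h // gy, c * w // gx:(c + 1) * w // gx]
            bits.append(1 if cell.mean() > thresh else 0)
    return bits

# Usage (hypothetical inputs): a camera frame and the detected corners of the on-screen code.
# captured = cv2.imread("camera_frame.png")
# bits = decode_cells(rectify_captured_frame(captured, [(102, 88), (690, 95), (684, 702), (96, 696)]))
```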
28-Bit serial word simulator/monitor
NASA Technical Reports Server (NTRS)
Durbin, J. W.
1979-01-01
Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.
Computational analysis of Variable Thrust Engine (VTE) performance
NASA Technical Reports Server (NTRS)
Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.
1993-01-01
The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.
Numerical investigation of heat transfer on film-cooled turbine blades.
Ginibre, P; Lefebvre, M; Liamis, N
2001-05-01
The accurate heat transfer prediction of film-cooled blades is a key issue for aerothermal turbine design. For this purpose, advanced numerical methods have been developed at Snecma Moteurs. The goal of this paper is the assessment of a three-dimensional Navier-Stokes solver, based on the ONERA CANARI-COMET code, devoted to steady aerothermal computations of film-cooled blades. The code uses a multidomain approach to discretize the blade-to-blade channel, with overlapping structured meshes for the injection holes. The turbulence closure is done by means of either the Michel mixing-length model or the Spalart-Allmaras one-equation transport model. Computations of thin 3D slices of three film-cooled nozzle guide vane blades with multiple injections are performed. Aerothermal predictions are compared to experiments carried out by the von Karman Institute. The behavior of the turbulence models is discussed, and velocity and temperature injection profiles are investigated.
NASA Technical Reports Server (NTRS)
Tsuchiya, T.; Murthy, S. N. B.
1982-01-01
A computer code is presented for the prediction of off-design axial flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer processes between the gaseous and the liquid phases, and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage stacking procedure using representative velocity diagrams at the rotor inlet and outlet mean radii. The code has options for performance estimation with (1) gas mixtures and (2) gas-water droplet mixtures, and therefore can take into account the humidity present in ambient conditions. A test case illustrates the method of using the code. The code follows closely the methodology and architecture of the NASA STGSTK code for the estimation of axial-flow compressor performance with air flow.
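A minimal sketch of a stage-stacking loop for a dry compressor, to show the structure the abstract refers to: stage totals at mean radius are updated and multiplied through the machine stage by stage. The stage map, geometry, and constants below are hypothetical, and the water-droplet heat/mass-transfer and centrifuging corrections of the actual code would modify each stage's inlet state inside this loop.

```python
import numpy as np

def stage_performance(phi, psi_design=0.35, eta=0.88):
    """Hypothetical stage map: work coefficient and efficiency vs. flow coefficient phi."""
    psi = psi_design * (2.0 - phi / 0.5)            # simple linear off-design characteristic
    return max(psi, 0.0), eta

def stack_stages(mdot, T01, P01, U=300.0, area=0.3, n_stages=8,
                 cp=1005.0, gamma=1.4, R=287.0):
    """March stage by stage at mean radius, updating total temperature and pressure."""
    T0, P0 = T01, P01
    for _ in range(n_stages):
        rho = P0 / (R * T0)                         # crude density from totals (illustration only)
        cx = mdot / (rho * area)                    # axial velocity
        psi, eta = stage_performance(cx / U)
        dT0 = psi * U**2 / cp                       # stage total-temperature rise
        pr = (1.0 + eta * dT0 / T0) ** (gamma / (gamma - 1.0))
        T0, P0 = T0 + dT0, P0 * pr
    return T0, P0

T0_exit, P0_exit = stack_stages(mdot=50.0, T01=288.15, P01=101325.0)
print(f"exit T0 = {T0_exit:.1f} K, overall pressure ratio = {P0_exit / 101325.0:.2f}")
```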
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate have in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods, have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
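For orientation, here is a minimal unpolarized forward Monte Carlo photon loop for a plane-parallel scattering layer, assuming isotropic scattering; it shows only the basic forward scheme the abstract builds on. The polarization treatment (Stokes vectors and phase-matrix rotations) and the aliasing techniques of the actual code are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmit_fraction(tau_total=2.0, omega0=0.9, n_photons=20000):
    """Forward Monte Carlo through a plane-parallel layer with isotropic scattering.
    Returns the fraction of photons leaving the bottom of the layer."""
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                               # optical-depth position, direction cosine (down = +1)
        while True:
            tau += mu * -np.log(1.0 - rng.random())      # free path to next interaction, in optical depth
            if tau >= tau_total:
                transmitted += 1
                break
            if tau <= 0.0:
                break                                    # escaped back out of the top
            if rng.random() > omega0:
                break                                    # absorbed (single-scattering albedo omega0)
            mu = 2.0 * rng.random() - 1.0                # isotropic scattering: draw a new direction cosine
    return transmitted / n_photons

print("diffuse transmittance ~", transmit_fraction())
```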
TOPAZ2D heat transfer code users manual and thermal property data base
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.B.; Edwards, A.L.
1990-05-01
TOPAZ2D is a two dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented with the TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady state or transient temperature field on two dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time and temperature dependent boundary conditions can be specified including temperature, flux, convection, and radiation. Time or temperature dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D which reads the temperature state data calculated by TOPAZ2D. A three dimensional version of the code, TOPAZ3D, is available. The material thermal property data base, Chapter 4, included in this manual was originally published in 1969 by Art Edwards for use with his TRUMP finite difference heat transfer code. The format of the data has been altered to be compatible with TOPAZ2D. Bob Bailey is responsible for adding the high explosive thermal property data.
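To make the "implicit finite element heat transfer" idea concrete, here is a minimal 1D sketch: linear elements assembled into capacitance and conductance matrices, advanced with backward Euler, with one prescribed-temperature boundary. It is only an illustration of the solution type; the mesh, material data, and boundary values are invented, and this is not TOPAZ2D's formulation or input format.

```python
import numpy as np

def fe_heat_1d(n_el=20, L=0.1, k=40.0, rho_c=3.6e6, dt=1.0, n_steps=60,
               T0=300.0, T_left=500.0):
    """Backward-Euler FE solution of rho*c*dT/dt = d/dx(k dT/dx) on a bar,
    left end held at T_left, right end insulated (natural boundary condition)."""
    n = n_el + 1
    h = L / n_el
    K = np.zeros((n, n))                                  # conductance (stiffness) matrix
    M = np.zeros((n, n))                                  # capacitance (mass) matrix
    for e in range(n_el):
        ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        me = (rho_c * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])   # consistent mass matrix
        K[e:e + 2, e:e + 2] += ke
        M[e:e + 2, e:e + 2] += me
    T = np.full(n, T0)
    A = M + dt * K                                        # backward Euler system matrix
    for _ in range(n_steps):
        A_bc, b_bc = A.copy(), M @ T
        A_bc[0, :] = 0.0
        A_bc[0, 0] = 1.0
        b_bc[0] = T_left                                  # impose the prescribed temperature at node 0
        T = np.linalg.solve(A_bc, b_bc)
    return T

print(fe_heat_1d().round(1))                              # nodal temperatures after 60 s
```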
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, T.H.; Domanus, H.M.; Sha, W.T.
1993-02-01
The COMMIX-PPC computer program is an extended and improved version of earlier COMMIX codes and is specifically designed for evaluating the thermal performance of power plant condensers. The COMMIX codes are general-purpose computer programs for the analysis of fluid flow and heat transfer in complex industrial systems. In COMMIX-PPC, two major features have been added to previously published COMMIX codes. One feature is the incorporation of one-dimensional equations of conservation of mass, momentum, and energy on the tube side and the proper accounting for the thermal interaction between shell and tube side through the porous-medium approach. The other added feature is the extension of the three-dimensional conservation equations for shell-side flow to treat the flow of a multicomponent medium. COMMIX-PPC is designed to perform steady-state and transient, three-dimensional analysis of fluid flow with heat transfer in a power plant condenser. However, the code is designed in a generalized fashion so that, with some modification, it can be used to analyze processes in any heat exchanger or other single-phase engineering applications. Volume I (Equations and Numerics) of this report describes in detail the basic equations, formulation, solution procedures, and models for the physical phenomena. Volume II (User's Guide and Manual) contains the input instructions, flow charts, sample problems, and descriptions of available options and boundary conditions.
1984-06-01
preceding the corresponding pressure group of the surface thermochemistry deck as described below. The temperature entries within each section must be ... pressure group the transfer coefficient values will be ordered. Within each transfer coefficient section, ablation rate entries need not be ordered in any ... may not exceed 5 (and may be only 1); the number of transfer coefficient values in each pressure group may not exceed 5 but may be only 1. If no
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Roelke, R. J.; Steinthorsson, E.
1991-01-01
A numerical code is developed for computing three-dimensional, turbulent, compressible flow within coolant passages of turbine blades. The code is based on a formulation of the compressible Navier-Stokes equations in a rotating frame of reference in which the velocity dependent variable is specified with respect to the rotating frame instead of the inertial frame. The algorithm employed to obtain solutions to the governing equations is a finite-volume LU algorithm that allows convection, source, as well as diffusion terms to be treated implicitly. In this study, all convection terms are upwind differenced by using flux-vector splitting, and all diffusion terms are centrally differenced. This paper describes the formulation and algorithm employed in the code. Some computed solutions for the flow within a coolant passage of a radial turbine are also presented.
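The flux-vector-splitting idea mentioned above can be illustrated on a constant-coefficient 1D hyperbolic system u_t + A u_x = 0: split A u by eigenvalue sign into forward- and backward-running parts and difference each part upwind. The sketch below is a generic Steger-Warming-style illustration on a toy linear system; it is not the rotating-frame Navier-Stokes implementation or the implicit LU algorithm of the paper.

```python
import numpy as np

def split_matrices(A):
    """Split A = A_plus + A_minus by the sign of its eigenvalues (Steger-Warming idea)."""
    lam, R = np.linalg.eig(A)
    Rinv = np.linalg.inv(R)
    A_plus = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv
    A_minus = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv
    return A_plus, A_minus

def advance(u, A, dx, dt, n_steps):
    """First-order upwind flux-vector-splitting update for u_t + A u_x = 0, periodic BCs.
    u has shape (n_cells, n_vars)."""
    Ap, Am = split_matrices(A)
    for _ in range(n_steps):
        # backward difference for right-running waves, forward difference for left-running waves
        dudx_back = (u - np.roll(u, 1, axis=0)) / dx
        dudx_fwd = (np.roll(u, -1, axis=0) - u) / dx
        u = u - dt * (dudx_back @ Ap.T + dudx_fwd @ Am.T)
    return u

# Toy linear "acoustics" system with waves running both ways (eigenvalues +1 and -1).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.stack([np.exp(-200.0 * (x - 0.5) ** 2), np.zeros_like(x)], axis=1)
u = advance(u0, A, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), n_steps=250)
print(u.shape)
```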
A rocket engine design expert system
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1989-01-01
The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the hydrogen-oxygen coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was employed in the energy release analysis of the combustion chamber and three-dimensional finite-difference analysis of the regenerative cooling channels was used to calculate the pressure drop along the channels and the coolant temperature as it exits the coolant circuit. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.
Prediction of Unshrouded Rotor Blade Tip Heat Transfer
NASA Technical Reports Server (NTRS)
Ameri, A. A.; Steinthorsson, E.
1994-01-01
The rate of heat transfer on the tip of a turbine rotor blade, and on the blade surface in the vicinity of the tip, was successfully predicted. The computations were performed with a multiblock computer code which solves the Reynolds Averaged Navier-Stokes equations using an efficient multigrid method. The case considered for the present calculations was the Space Shuttle Main Engine (SSME) high pressure fuel side turbine. The predictions of the blade tip heat transfer agreed reasonably well with the experimental measurements at the present level of grid refinement. On the tip surface, regions with a high rate of heat transfer were found to exist close to the pressure side and suction side edges. Enhancement of the heat transfer was also observed on the blade surface near the tip. Further comparison of the predictions was performed with results obtained from correlations based on fully developed channel flow.
Numerical investigation of roughness effects in aircraft icing calculations
NASA Astrophysics Data System (ADS)
Matheis, Brian Daniel
2008-10-01
Icing codes are playing a role of increasing significance in the design and certification of ice-protected aircraft surfaces. However, in the interest of computational efficiency, certain small scale physics of the icing problem are grossly approximated by the codes. One such small-scale phenomenon is the effect of ice roughness on the development of the surface water film and on the convective heat transfer. This study uses computational methods to study the potential effect of ice roughness on both of these small scale phenomena. First, a two-dimensional condensed layer code is used to examine the effect of roughness on surface water development. It is found that the Couette approximation within the film breaks down as the wall shear goes to zero, depending on the film thickness. Roughness elements with initial flow separation in the air induce flow separation in the water layer at steady state, causing a trapping of the film. The amount of trapping for different roughness configurations is examined. Second, a three-dimensional incompressible Navier-Stokes code is developed to examine large scale ice roughness on the leading edge. The effect on the convective heat transfer and the potential effect on the surface water dynamics are examined for a number of distributed roughness parameters including Reynolds number, roughness height, streamwise extent, roughness spacing and roughness shape. In most cases the roughness field increases the net average convective heat transfer on the leading edge while narrowing surface shear lines, indicating a choking of the surface water flow. Both effects show significant variation on the scale of the ice roughness. Both the change in heat transfer as well as the potential change in surface water dynamics are presented in terms of the development of singularities in the surface shear pattern. Of particular interest is the effect of the smooth zone upstream of the roughness, which shows both a relatively large increase in convective heat transfer as well as excessive choking of the surface shear lines at the upstream end of the roughness field. A summary of the heat transfer results is presented for both the averaged heat transfer as well as the maximum heat transfer over each roughness element, indicating that the roughness Reynolds number is the primary parameter which characterizes the behavior of the roughness for the problem of interest.
AEROELASTIC SIMULATION TOOL FOR INFLATABLE BALLUTE AEROCAPTURE
NASA Technical Reports Server (NTRS)
Liever, P. A.; Sheta, E. F.; Habchi, S. D.
2006-01-01
A multidisciplinary analysis tool is under development for predicting the impact of aeroelastic effects on the functionality of inflatable ballute aeroassist vehicles in both the continuum and rarefied flow regimes. High-fidelity modules for continuum and rarefied aerodynamics, structural dynamics, heat transfer, and computational grid deformation are coupled in an integrated multi-physics, multi-disciplinary computing environment. This flexible and extensible approach allows the integration of state-of-the-art, stand-alone NASA and industry leading continuum and rarefied flow solvers and structural analysis codes into a computing environment in which the modules can run concurrently with synchronized data transfer. Coupled fluid-structure continuum flow demonstrations were conducted on a clamped ballute configuration. The feasibility of implementing a DSMC flow solver in the simulation framework was demonstrated, and loosely coupled rarefied flow aeroelastic demonstrations were performed. A NASA and industry technology survey identified CFD, DSMC and structural analysis codes capable of modeling non-linear shape and material response of thin-film inflated aeroshells. The simulation technology will find direct and immediate applications with NASA and industry in ongoing aerocapture technology development programs.
NASA Technical Reports Server (NTRS)
Oliver, A. B.; Lillard, R. P.; Blaisdell, G. A.; Lyrintizis, A. S.
2006-01-01
The capability of the OVERFLOW code to accurately compute high-speed turbulent boundary layers and turbulent shock-boundary layer interactions is being evaluated. Configurations being investigated include a Mach 2.87 flat plate to compare experimental velocity profiles and boundary layer growth, a Mach 6 flat plate to compare experimental surface heat transfer, a direct numerical simulation (DNS) at Mach 2.25 for turbulent quantities, and several Mach 3 compression ramps to compare computations of shock-boundary layer interactions to experimental laser doppler velocimetry (LDV) data and hot-wire data. The present paper outlines the study and presents preliminary results for two of the flat plate cases and two small-angle compression corner test cases.
Analysis of Material Sample Heated by Impinging Hot Hydrogen Jet in a Non-Nuclear Tester
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Foote, John; Litchford, Ron
2006-01-01
A computational conjugate heat transfer methodology was developed and anchored with data obtained from a hot-hydrogen-jet heated, non-nuclear materials tester, as a first step towards developing an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for a hypothetical solid-core nuclear thermal engine thrust chamber. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, and convective, thermal radiative, and conjugate heat transfer. Predicted hot-hydrogen jet and material surface temperatures were compared with measurements. Predicted solid temperatures were compared with those obtained with a standard heat transfer code. The interrogation of the physics revealed that reactions of hydrogen dissociation and recombination are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.
Turbine Blade and Endwall Heat Transfer Measured in NASA Glenn's Transonic Turbine Blade Cascade
NASA Technical Reports Server (NTRS)
Giel, Paul W.
2000-01-01
Higher operating temperatures increase the efficiency of aircraft gas turbine engines, but can also degrade internal components. High-pressure turbine blades just downstream of the combustor are particularly susceptible to overheating. Computational fluid dynamics (CFD) computer programs can predict the flow around the blades so that potential hot spots can be identified and appropriate cooling schemes can be designed. Various blade and cooling schemes can be examined computationally before any hardware is built, thus saving time and effort. Often though, the accuracy of these programs has been found to be inadequate for predicting heat transfer. Code and model developers need highly detailed aerodynamic and heat transfer data to validate and improve their analyses. The Transonic Turbine Blade Cascade was built at the NASA Glenn Research Center at Lewis Field to help satisfy the need for this type of data.
Heat Transfer on a Flat Plate with Uniform and Step Temperature Distributions
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
2005-01-01
Heat transfer associated with turbulent flow on a step-heated or cooled section of a flat plate at zero angle of attack with an insulated starting section was computationally modeled using the GASP Navier-Stokes code. The algebraic eddy viscosity model of Baldwin-Lomax and two-equation turbulence models, the K-omega model and the Shear Stress Transport (SST) model, were employed. The variations from uniformity of the imposed experimental temperature profile were incorporated in the computations. The computations yielded satisfactory agreement with the experimental results for all three models. The Baldwin-Lomax model showed the closest agreement in heat transfer, whereas the SST model was higher and the K-omega model was yet higher than the experiments. In addition to the step temperature distribution case, computations were also carried out for a uniformly heated or cooled plate. The SST model showed the closest agreement with the Von Karman analogy, whereas the K-omega model was higher and the Baldwin-Lomax model was lower.
Maestro and Castro: Simulation Codes for Astrophysical Flows
NASA Astrophysics Data System (ADS)
Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun
2017-01-01
Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions.Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advance Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Peiyuan; Brown, Timothy; Fullmer, William D.
Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.
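In a weak scaling analysis the work per core is held fixed as the core count grows, so the ideal runtime stays flat and the usual metric is the efficiency E(p) = t(1)/t(p). The short sketch below computes that metric from made-up timings (the numbers are illustrative placeholders, not MFiX benchmark data).

```python
# Weak-scaling efficiency: problem size grows with core count, so ideal runtime stays flat.
cores = [1, 8, 64, 512, 1024]
runtime = [100.0, 104.0, 111.0, 128.0, 163.0]   # seconds, for fixed work per core (illustrative)

for p, t in zip(cores, runtime):
    efficiency = runtime[0] / t
    print(f"{p:5d} cores: weak-scaling efficiency = {efficiency:.2f}")
```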
A Radiation Solver for the National Combustion Code
NASA Technical Reports Server (NTRS)
Sockol, Peter M.
2015-01-01
A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The Full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g . The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.
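The table-lookup step described above (absorption coefficients precomputed at discrete states for each g, interpolated to every cell, then integrated over g with quadrature weights) can be sketched as follows. The table values, the single temperature-only interpolation, and the toy per-g emission estimate are invented placeholders; the real implementation also interpolates in species mole fractions and couples to the NCC flow solution.

```python
import numpy as np

# Hypothetical precomputed table: absorption coefficient k(T, g) at n_g quadrature points.
T_table = np.array([300.0, 800.0, 1300.0, 1800.0, 2300.0])           # K
n_g = 16
_, g_weights = np.polynomial.legendre.leggauss(n_g)                   # quadrature weights on [-1, 1]
g_weights = g_weights / 2.0                                            # rescale so the weights sum to 1
k_table = np.exp(np.linspace(-4.0, 2.0, n_g))[None, :] * (T_table[:, None] / 1000.0)  # made-up values

def k_at_cell(T_cell):
    """Interpolate the tabulated k(g) vector to a cell temperature."""
    return np.array([np.interp(T_cell, T_table, k_table[:, j]) for j in range(n_g)])

def cell_emission(T_cell, path=0.01, sigma=5.67e-8):
    """Toy per-g emitted flux over a short path, summed over g with the quadrature weights."""
    k_g = k_at_cell(T_cell)
    emissivity_g = 1.0 - np.exp(-k_g * path)            # per-g emissivity of the path
    return np.sum(g_weights * emissivity_g) * sigma * T_cell**4

print(f"emitted flux estimate: {cell_emission(1500.0):.1f} W/m^2")
```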
A new 3D maser code applied to flaring events
NASA Astrophysics Data System (ADS)
Gray, M. D.; Mason, L.; Etoka, S.
2018-06-01
We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2010-01-01
In order to improve the understanding of particle vitiation effects in hypersonic propulsion test facilities, a quasi-one dimensional numerical tool was developed to efficiently model reacting particle-gas flows over a wide range of conditions. Features of this code include gas-phase finite-rate kinetics, a global porous-particle combustion model, mass, momentum and energy interactions between phases, and subsonic and supersonic particle drag and heat transfer models. The basic capabilities of this tool were validated against available data or other validated codes. To demonstrate the capabilities of the code a series of computations were performed for a model hypersonic propulsion test facility and scramjet. Parameters studied were simulated flight Mach number, particle size, particle mass fraction and particle material.
Light scattering by planetary-regolith analog samples: computational results
NASA Astrophysics Data System (ADS)
Väisänen, Timo; Markkanen, Johannes; Hadamcik, Edith; Levasseur-Regourd, Anny-Chantal; Lasue, Jeremie; Blum, Jürgen; Penttilä, Antti; Muinonen, Karri
2017-04-01
We compute light scattering by a planetary-regolith analog surface. The corresponding experimental work is from Hadamcik et al. [1] with the PROGRA2-surf [2] device measuring the polarization of dust particles. The analog samples are low density (volume fraction 0.15 ± 0.03) agglomerates produced by random ballistic deposition of almost equisized silica spheres (refractive index n=1.5 and diameter 1.45 ± 0.06 µm). Computations are carried out with the recently developed codes entitled Radiative Transfer with Reciprocal Transactions (R2T2) and Radiative Transfer Coherent Backscattering with incoherent interactions (RT-CB-ic). Both codes incorporate the so-called incoherent treatment which enhances the applicability of the radiative transfer as shown by Muinonen et al. [3]. As a preliminary result, we have computed scattering from a large spherical medium with the RT-CB-ic using equal-sized particles with diameters of 1.45 microns. The preliminary results have shown that the qualitative characteristics are similar for the computed and measured intensity and polarization curves but that there are still deviations between the characteristics. We plan to remove the deviations by incorporating a size distribution of particles (1.45 ± 0.02 microns) and detailed information about the volume density profile within the analog surface. Acknowledgments: We acknowledge the ERC Advanced Grant no. 320773 entitled Scattering and Absorption of Electromagnetic Waves in Particulate Media (SAEMPL). Computational resources were provided by CSC - IT Centre for Science Ltd, Finland. References: [1] Hadamcik E. et al. (2007), JQSRT, 106, 74-89 [2] Levasseur-Regourd A.C. et al. (2015), Polarimetry of stars and planetary systems, CUP, 61-80 [3] Muinonen K. et al. (2016), extended abstract for EMTS.
1985-03-01
Interferometry and computer-assisted tomography (CAT) are used to determine the transonic velocity field of a model rotor... After extracting fringe-order functions, the data are transferred to a CAT code. The CAT code then calculates the perturbation velocity in several planes above the blade surface. The values from the holography-CAT method...
Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A; Kabel, A.; Lee, L.
In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.
NASA Technical Reports Server (NTRS)
Agarwal, R.; Rakich, J. V.
1978-01-01
Computational results obtained with a parabolic Navier-Stokes marching code are presented for supersonic viscous flow past a pointed cone at angle of attack undergoing a combined spinning and coning motion. The code takes into account the asymmetries in the flow field resulting from the motion and computes the asymmetric shock shape, crossflow and streamwise shear, heat transfer, crossflow separation and vortex structure. The side force and moment are also computed. Reasonably good agreement is obtained with the side force measurements of Schiff and Tobak. Comparison is also made with the only available numerical inviscid analysis. It is found that the asymmetric pressure loads due to coning motion are much larger than all other viscous forces due to spin and coning, making viscous forces negligible in the combined motion.
"SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres
NASA Astrophysics Data System (ADS)
Sapar, A.; Poolamäe, R.
2003-01-01
A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing the stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions both in WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal to elaborate a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use the physical evolutionary changes, in other words the relaxation of quantum state populations from LTE to NLTE, which has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme makes it possible to use, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates the changing Menzel coefficients for the NLTE quantum state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consequent processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters the code SMART enables computing the radiative acceleration of the stellar atmosphere matter in turbulence clumps. This also makes it possible to connect the model atmosphere in more detail with the problem of stellar wind triggering. Another problem which has been incorporated into the computer code SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift. As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the code of SMART. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided two-column A4 sheets in landscape format. In addition, if well commented, it is quite easily readable and understandable. We have used the tactics of writing the comments on the right-side margin (columns starting from 73). Such a short code has been composed widely using unified input physics (for example the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored. Thus, it can be used only for lukewarm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods, primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM), a stellar spectrum with spectral resolution λ/dλ = 100,000 for the spectral interval 700 -- 30,000 Å is computed.
The model input data and the line data used by us are both the ones computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states and to enumerate them for NLTE studies, a C++ code transforming the needed data to a LaTeX version has been compiled. Thus we have composed a quantum state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'. The list enables the concept of super-states, including partly correlating super-states, to be composed more adequately. We are grateful to R. Kurucz for making available, on CD-ROMs and via the Internet, his computer codes ATLAS and SYNTHE, which we used as a starting point in composing the new computer code. We are also grateful to the Estonian Science Foundation for grant ESF-4701.
Transversal Clifford gates on folded surface codes
Moussa, Jonathan E.
2016-10-12
Surface and color codes are two forms of topological quantum error correction in two spatial dimensions with complementary properties. Surface codes have lower-depth error detection circuits and well-developed decoders to interpret and correct errors, while color codes have transversal Clifford gates and better code efficiency in the number of physical qubits needed to achieve a given code distance. A formal equivalence exists between color codes and folded surface codes, but it does not guarantee the transferability of any of these favorable properties. However, the equivalence does imply the existence of constant-depth circuit implementations of logical Clifford gates on folded surface codes. We achieve and improve this result by constructing two families of folded surface codes with transversal Clifford gates. This construction is presented generally for qudits of any dimension. Lastly, the specific application of these codes to universal quantum computation based on qubit fusion is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
Conjugate Compressible Fluid Flow and Heat Transfer in Ducts
NASA Technical Reports Server (NTRS)
Cross, M. F.
2011-01-01
A computational approach to modeling transient, compressible fluid flow with heat transfer in long, narrow ducts is presented. The primary application of the model is for analyzing fluid flow and heat transfer in solid propellant rocket motor nozzle joints during motor start-up, but the approach is relevant to a wide range of analyses involving rapid pressurization and filling of ducts. Fluid flow is modeled through solution of the spatially one-dimensional, transient Euler equations. Source terms are included in the governing equations to account for the effects of wall friction and heat transfer. The equation solver is fully-implicit, thus providing greater flexibility than an explicit solver. This approach allows for resolution of pressure wave effects on the flow as well as for fast calculation of the steady-state solution when a quasi-steady approach is sufficient. Solution of the one-dimensional Euler equations with source terms significantly reduces computational run times compared to general purpose computational fluid dynamics packages solving the Navier-Stokes equations with resolved boundary layers. In addition, conjugate heat transfer is more readily implemented using the approach described in this paper than with most general purpose computational fluid dynamics packages. The compressible flow code has been integrated with a transient heat transfer solver to analyze heat transfer between the fluid and surrounding structure. Conjugate fluid flow and heat transfer solutions are presented. The author is unaware of any previous work available in the open literature which uses the same approach described in this paper.
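The governing idea above (the 1D transient Euler equations with wall-friction and heat-transfer source terms) is sketched below as a conservative finite-volume update with a simple Rusanov (local Lax-Friedrichs) interface flux and explicit time stepping on a periodic grid. This is only an illustration of the equation set: the paper's solver is fully implicit, handles real inflow/outflow boundaries, and couples to a structural heat transfer solver, none of which is attempted here; the friction factor, heat transfer coefficient, and duct data are invented.

```python
import numpy as np

gamma, R = 1.4, 287.0

def flux(U):
    """Euler fluxes for U = [rho, rho*u, rho*E], shape (n, 3)."""
    rho, mom, ener = U[:, 0], U[:, 1], U[:, 2]
    u = mom / rho
    p = (gamma - 1.0) * (ener - 0.5 * rho * u**2)
    return np.stack([mom, mom * u + p, (ener + p) * u], axis=1), u, p

def rhs(U, dx, D=0.02, f=0.005, Tw=600.0, h=500.0):
    """Rusanov flux divergence plus wall-friction and wall heat-transfer source terms."""
    F, u, p = flux(U)
    a = np.sqrt(gamma * p / U[:, 0])
    smax = np.maximum(np.abs(u) + a, np.abs(np.roll(u, -1)) + np.roll(a, -1))
    # interface flux between cells i and i+1 (periodic wrap used purely for brevity)
    F_half = 0.5 * (F + np.roll(F, -1, axis=0)) - 0.5 * smax[:, None] * (np.roll(U, -1, axis=0) - U)
    dFdx = (F_half - np.roll(F_half, 1, axis=0)) / dx
    T = p / (U[:, 0] * R)
    S = np.zeros_like(U)
    S[:, 1] = -2.0 * f * U[:, 0] * u * np.abs(u) / D       # Fanning-type wall friction, per unit volume
    S[:, 2] = 4.0 * h * (Tw - T) / D                        # wall heat addition, per unit volume
    return -dFdx + S

# Initial state: air near 300 K and 1 bar with a small velocity, in a narrow hot duct.
n = 200
rho0 = 101325.0 / (R * 300.0)
U = np.zeros((n, 3))
U[:, 0] = rho0
U[:, 1] = rho0 * 10.0
U[:, 2] = 101325.0 / (gamma - 1.0) + 0.5 * rho0 * 10.0**2

dx, dt = 0.01, 5.0e-6
for _ in range(400):
    U = U + dt * rhs(U, dx)                                 # explicit here; the actual code is implicit
p_final = (gamma - 1.0) * (U[:, 2] - 0.5 * U[:, 1]**2 / U[:, 0])
print("mean pressure after heating:", p_final.mean())
```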
NASA Astrophysics Data System (ADS)
Ahamad, N. Ameer; Khan, T. M. Yunus
2018-05-01
The present study investigates the effect of radius ratio and Rayleigh number on beat transfer characteristics of an annular cone subjected to two side heating and one side cooling. Finite element method is used to convert the partial differential equations into algebraic equations. The resulting equations are solved with the help of in-house computer code developed for specific purpose of heat transfer in conical porous medium. The results are discussed with respect to the radius ratio and Rayleigh number.
Verification and benchmark testing of the NUFT computer code
NASA Astrophysics Data System (ADS)
Lee, K. H.; Nitao, J. J.; Kulshrestha, A.
1993-10-01
This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
A unified radiative magnetohydrodynamics code for lightning-like discharge simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qiang, E-mail: cq0405@126.com; Chen, Bin, E-mail: emcchen@163.com; Xiong, Run
2014-03-15
A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in the cylindrical geometry and kept in conservative form, which enables high-accuracy shock capturing schemes to be applied to the lightning channel configuration naturally. In this code, the fifth-order weighted essentially non-oscillatory scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. The third-order total variation diminishing Runge-Kutta integral operator is also employed to keep the time and space accuracy consistent. The numerical algorithms for non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. This code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and the flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. This code is systematically validated via the Sedov blast solutions and then used for lightning return stroke simulations with the peak current being 20 kA, 30 kA, and 40 kA, respectively. The results show that this numerical model is consistent with observations and previous numerical results. The population distribution evolution and energy conservation problems are also discussed.
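A minimal scalar sketch of the spatial/temporal discretization named above: fifth-order WENO reconstruction of globally Lax-Friedrichs split fluxes, advanced with the third-order TVD (SSP) Runge-Kutta scheme, here applied to the inviscid Burgers equation on a periodic grid. The MHD system, non-ideal terms, and radiation of the actual code are not represented.

```python
import numpy as np

def weno5(f):
    """Left-biased 5th-order WENO reconstruction of f at the i+1/2 interfaces (periodic)."""
    fm2, fm1, f0, fp1, fp2 = (np.roll(f, 2), np.roll(f, 1), f, np.roll(f, -1), np.roll(f, -2))
    eps = 1e-6
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    p0 = (2*fm2 - 7*fm1 + 11*f0)/6
    p1 = (-fm1 + 5*f0 + 2*fp1)/6
    p2 = (2*f0 + 5*fp1 - fp2)/6
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

def rhs(u, dx):
    """Spatial operator for u_t + (u^2/2)_x = 0 with global Lax-Friedrichs flux splitting."""
    f = 0.5*u**2
    alpha = np.abs(u).max()
    fp, fm = 0.5*(f + alpha*u), 0.5*(f - alpha*u)              # f = f+ + f-, df+/du >= 0 >= df-/du
    fhat = weno5(fp) + np.roll(weno5(fm[::-1])[::-1], -1)      # upwind-biased interface fluxes
    return -(fhat - np.roll(fhat, 1)) / dx

def tvd_rk3_step(u, dx, dt):
    """Third-order TVD (SSP) Runge-Kutta step of Shu and Osher."""
    u1 = u + dt*rhs(u, dx)
    u2 = 0.75*u + 0.25*(u1 + dt*rhs(u1, dx))
    return u/3.0 + 2.0/3.0*(u2 + dt*rhs(u2, dx))

# Smooth periodic initial data steepening under Burgers dynamics.
n = 400
x = np.linspace(0.0, 2*np.pi, n, endpoint=False)
u = 1.0 + 0.5*np.sin(x)
dx = x[1] - x[0]
for _ in range(300):
    u = tvd_rk3_step(u, dx, dt=0.4*dx/np.abs(u).max())
print("min/max u:", u.min(), u.max())
```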
Efficient Radiative Transfer for Dynamically Evolving Stratified Atmospheres
NASA Astrophysics Data System (ADS)
Judge, Philip G.
2017-12-01
We present a fast multi-level and multi-atom non-local thermodynamic equilibrium (non-LTE) radiative transfer method for dynamically evolving stratified atmospheres, such as the solar atmosphere. The preconditioning method of Rybicki & Hummer (RH92) is adopted. To meet the need for speed and stability, however, a “second-order escape probability” scheme is implemented within the framework of the RH92 method, in which the frequency and angle integrals are carried out analytically. This minimizes the computational work needed, at the expense of some numerical accuracy. The iteration scheme is local; the formal solutions for the intensities are the only non-local component. At present the methods have been coded for vertical transport, applicable to atmospheres that are highly stratified. The probabilistic method appears adequately fast, stable, and sufficiently accurate for exploring dynamical interactions between the evolving MHD atmosphere and radiation using current computer hardware. Current 2D and 3D dynamics codes do not include this interaction as consistently as the present method does. The solutions generated may ultimately serve as initial conditions for dynamical calculations including full 3D radiative transfer. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
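As background to the escape-probability idea, the sketch below shows the classical first-order escape-probability source function for a two-level atom in a static column; the paper's second-order scheme within the RH92 preconditioning is more elaborate, and the optical-depth grid, photon destruction probability, and Planck function below are purely illustrative.

```python
import numpy as np

def escape_probability(tau):
    """First-order escape probability for a static line: beta = (1 - exp(-tau)) / tau."""
    tau = np.maximum(tau, 1e-12)
    return (1.0 - np.exp(-tau)) / tau

def two_level_source(tau, epsilon, B):
    """Escape-probability estimate of the two-level-atom source function.
    With J ~ (1 - beta) * S, statistical equilibrium S = (1 - eps)*J + eps*B gives
    S = eps * B / (eps + beta - eps*beta)."""
    beta = escape_probability(tau)
    return epsilon * B / (epsilon + beta - epsilon * beta)

# illustrative stratified column: line optical depth measured from the surface
tau = np.logspace(-3, 4, 50)   # optical depth grid (hypothetical)
epsilon = 1e-3                 # photon destruction probability
B = 1.0                        # Planck function in line units (normalized)
S = two_level_source(tau, epsilon, B)
# S -> B at depth (beta -> 0) and S -> eps*B at the surface (beta -> 1)
print(S[0], S[-1])
```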
NASA Technical Reports Server (NTRS)
Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.
1974-01-01
This manual is a guide for using a computer program which calculates time-optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed, and a one- or two-impulse, fixed total delta-V, initial high-thrust phase may be included. A single impulse of specified delta-V may also be included after the low-thrust phase. The low-thrust phase utilizes equinoctial orbital elements to avoid the classical singularities, and Kryloff-Boguliuboff averaging to help ensure more rapid computation. The program is written in FORTRAN IV in double precision for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.
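The equinoctial elements mentioned above remove the singularities of the classical elements at zero eccentricity and zero inclination. A minimal Python sketch of one common classical-to-equinoctial conversion (not the program's FORTRAN implementation; the retrograde variant is omitted):

```python
import math

def classical_to_equinoctial(a, e, inc, raan, argp, M):
    """Convert classical elements (angles in radians) to one common set of
    equinoctial elements (a, h, k, p, q, lambda), nonsingular for e = 0 and i = 0."""
    h = e * math.sin(argp + raan)
    k = e * math.cos(argp + raan)
    p = math.tan(inc / 2.0) * math.sin(raan)
    q = math.tan(inc / 2.0) * math.cos(raan)
    lam = M + argp + raan          # mean longitude
    return a, h, k, p, q, lam

# example: a near-circular, low-inclination orbit that is awkward in classical elements
print(classical_to_equinoctial(a=7000.0e3, e=1e-4, inc=math.radians(0.05),
                               raan=math.radians(40.0), argp=math.radians(10.0),
                               M=math.radians(120.0)))
```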
Human operator identification model and related computer programs
NASA Technical Reports Server (NTRS)
Kessler, K. M.; Mohr, J. N.
1978-01-01
Four computer programs which provide computational assistance in the analysis of man/machine systems are reported. The programs are: (1) Modified Transfer Function Program (TF); (2) Time Varying Response Program (TVSR); (3) Optimal Simulation Program (TVOPT); and (4) Linear Identification Program (SCIDNT). The TF program converts the time-domain state-variable system representation to a frequency-domain transfer function system representation. The TVSR program computes time histories of the input/output responses of the human operator model. The TVOPT program is an optimal simulation program and is similar to TVSR in that it produces time histories of system states associated with an operator-in-the-loop system. The differences between the two programs are presented. The SCIDNT program is an open-loop identification code which operates on the simulated data from TVOPT (or TVSR) or on real operator data from motion simulators.
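The TF conversion described above is the standard state-space to transfer-function relation H(s) = C (sI - A)^{-1} B + D. A minimal sketch using SciPy's ss2tf on a hypothetical second-order plant (not the FORTRAN TF program itself):

```python
import numpy as np
from scipy import signal

# hypothetical second-order plant: x' = A x + B u, y = C x + D u
A = np.array([[0.0, 1.0],
              [-4.0, -0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# numerator/denominator polynomial coefficients of H(s) = C (sI - A)^-1 B + D
num, den = signal.ss2tf(A, B, C, D)
print("numerator:", num)    # -> [[0., 0., 1.]]
print("denominator:", den)  # -> [1., 0.8, 4.]
```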
NASA Technical Reports Server (NTRS)
Gabrielson, V. K.
1975-01-01
The computer program DVMESH and the use of the Tektronix DVST graphics terminal are described for preparing mesh data for use in various two-dimensional axisymmetric finite element stress analysis and heat transfer codes.
Multidimensional Modeling of Atmospheric Effects and Surface Heterogeneities on Remote Sensing
NASA Technical Reports Server (NTRS)
Gerstl, S. A. W.; Simmer, C.; Zardecki, A. (Principal Investigator)
1985-01-01
The overall goal of this project is to establish a modeling capability that allows a quantitative determination of atmospheric effects on remote sensing including the effects of surface heterogeneities. This includes an improved understanding of aerosol and haze effects in connection with structural, angular, and spatial surface heterogeneities. One important objective of the research is the possible identification of intrinsic surface or canopy characteristics that might be invariant to atmospheric perturbations so that they could be used for scene identification. Conversely, an equally important objective is to find a correction algorithm for atmospheric effects in satellite-sensed surface reflectances. The technical approach is centered around a systematic model and code development effort based on existing, highly advanced computer codes that were originally developed for nuclear radiation shielding applications. Computational techniques for the numerical solution of the radiative transfer equation are adapted on the basis of the discrete-ordinates finite-element method which proved highly successful for one and two-dimensional radiative transfer problems with fully resolved angular representation of the radiation field.
Progress towards understanding and predicting convection heat transfer in the turbine gas path
NASA Technical Reports Server (NTRS)
Simoneau, Robert J.; Simon, Frederick F.
1992-01-01
A new era is dawning in the ability to predict convection heat transfer in the turbine gas path. We feel that the technical community now has the capability to mount a major assault on this problem, which has eluded significant progress for a long time. We hope to make a case for this bold statement by reviewing the state of the art in three major, configuration-specific heat transfer experiments, whose data have provided the big picture and guided both the fundamental modeling research and the code development. Following that, we review progress and directions in the development of computer codes to predict turbine gas path heat transfer. Finally, we cite examples and make observations on the more recent efforts to do all this work in a simultaneous, interactive, and more synergistic manner. We conclude with an assessment of progress, suggestions for how to use the current state of the art, and recommendations for the future.
NASA Technical Reports Server (NTRS)
Albert, Mary R.
2012-01-01
Dr. Albert's current research is centered on transfer processes in porous media, including air-snow exchange in the Polar Regions and in soils in temperate areas. Her research includes field measurements, laboratory experiments, and theoretical modeling. Mary conducts field and laboratory measurements of the physical properties of natural terrain surfaces, including permeability, microstructure, and thermal conductivity. Mary uses the measurements to examine the processes of diffusion and advection of heat, mass, and chemical transport through snow and other porous media. She has developed numerical models for investigation of a variety of problems, from interstitial transport to freezing of flowing liquids. These models include a two-dimensional finite element code for air flow with heat, water vapor, and chemical transport in porous media, several multidimensional codes for diffusive transfer, as well as a computational fluid dynamics code for analysis of turbulent water flow in moving-boundary phase change problems.
Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes
NASA Astrophysics Data System (ADS)
Schreier, F.; Garcia, S. Gimeno; Milz, M.; Kottayil, A.; Höpfner, M.; von Clarmann, T.; Stiller, G.
2013-05-01
An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric sounding - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. Results of this intercomparison and a discussion of reasons of the observed differences are presented.
NASA Technical Reports Server (NTRS)
Steele, Gynelle C.
1999-01-01
The NASA Lewis Research Center and Flow Parametrics will enter into an agreement to commercialize the National Combustion Code (NCC). This multidisciplinary combustor design system utilizes computer-aided design (CAD) tools for geometry creation, advanced mesh generators for creating solid model representations, a common framework for fluid flow and structural analyses, modern postprocessing tools, and parallel processing. This integrated system can facilitate and enhance various phases of the design and analysis process.
NASA Technical Reports Server (NTRS)
1975-01-01
The NASA structural analysis (NASTRAN) computer program is operational on three series of third-generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth-generation computer, namely the Control Data STAR-100, are discussed. The salient features which distinguish the Control Data STAR-100 from third-generation computers are its hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are noted for optimization for vector processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKeown, J.; Labrie, J.P.
1983-08-01
A general purpose finite element computer code called MARC is used to calculate the temperature distribution and dimensional changes in linear accelerator rf structures. Both steady state and transient behaviour are examined with the computer model. Combining results from MARC with the cavity evaluation computer code SUPERFISH, the static and dynamic behaviour of a structure under power is investigated. Structure cooling is studied to minimize loss in shunt impedance and frequency shifts during high power operation. Results are compared with an experimental test carried out on a cw 805 MHz on-axis coupled structure at an energy gradient of 1.8 MeV/m. The model has also been used to compare the performance of on-axis and coaxial structures and has guided the mechanical design of structures suitable for average gradients in excess of 2.0 MeV/m at 2.45 GHz.
A review of high-speed, convective, heat-transfer computation methods
NASA Technical Reports Server (NTRS)
Tauber, Michael E.
1989-01-01
The objective of this report is to provide useful engineering formulations and to instill a modest degree of physical understanding of the phenomena governing convective aerodynamic heating at high flight speeds. Some physical insight is not only essential to the application of the information presented here, but also to the effective use of computer codes which may be available to the reader. A discussion is given of cold-wall, laminar boundary layer heating. A brief presentation of the complex boundary layer transition phenomenon follows. Next, cold-wall turbulent boundary layer heating is discussed. This topic is followed by a brief coverage of separated flow-region and shock-interaction heating. A review of heat protection methods follows, including the influence of mass addition on laminar and turbulent boundary layers. Also given are a discussion of finite-difference computer codes and a comparison of some results from these codes. An extensive list of references is also provided from sources such as the various AIAA journals and NASA reports which are available in the open literature.
Heat Transfer Measurements for a Film Cooled Turbine Vane Cascade
NASA Technical Reports Server (NTRS)
Poinsatte, Philip E.; Heidmann, James D.; Thurman, Douglas R.
2008-01-01
Experimental heat transfer and pressure measurements were obtained on a large-scale film-cooled turbine vane cascade. The objective was to investigate heat transfer on a commercial high-pressure first-stage turbine vane at near-engine Mach and Reynolds number conditions. Additionally, blowing ratios and coolant density were matched. Numerical computations of the same geometry were made with the Glenn-HT code and compared with the experimental results. A transient thermochromic liquid crystal technique was used to obtain steady-state heat transfer data on the mid-span geometry of an instrumented vane with 12 rows of circular and shaped film cooling holes. A mixture of SF6 and argon gases was used for the film coolant to match the coolant-to-gas density ratio of a real engine. The exit Mach number and Reynolds number were 0.725 and 2.7 million, respectively. Trends from the experimental heat transfer data matched well with the computational prediction, particularly for the film-cooled case.
A Thermal Management Systems Model for the NASA GTX RBCC Concept
NASA Technical Reports Server (NTRS)
Traci, Richard M.; Farr, John L., Jr.; Laganelli, Tony; Walker, James (Technical Monitor)
2002-01-01
The Vehicle Integrated Thermal Management Analysis Code (VITMAC) was further developed to aid the analysis, design, and optimization of propellant and thermal management concepts for advanced propulsion systems. The computational tool is based on engineering level principles and models. A graphical user interface (GUI) provides a simple and straightforward method to assess and evaluate multiple concepts before undertaking more rigorous analysis of candidate systems. The tool incorporates the Chemical Equilibrium and Applications (CEA) program and the RJPA code to permit heat transfer analysis of both rocket and air breathing propulsion systems. Key parts of the code have been validated with experimental data. The tool was specifically tailored to analyze rocket-based combined-cycle (RBCC) propulsion systems being considered for space transportation applications. This report describes the computational tool and its development and verification for NASA GTX RBCC propulsion system applications.
SPRAI: coupling of radiative feedback and primordial chemistry in moving mesh hydrodynamics
NASA Astrophysics Data System (ADS)
Jaura, O.; Glover, S. C. O.; Klessen, R. S.; Paardekooper, J.-P.
2018-04-01
In this paper, we introduce a new radiative transfer code SPRAI (Simplex Photon Radiation in the Arepo Implementation) based on the SIMPLEX radiation transfer method. This method, originally used only for post-processing, is now directly integrated into the AREPO code and takes advantage of its adaptive unstructured mesh. Radiated photons are transferred from the sources through the series of Voronoi gas cells within a specific solid angle. From the photon attenuation, we derive corresponding photon fluxes and ionization rates and feed them to a primordial chemistry module. This gives us a self-consistent method for studying dynamical and chemical processes caused by ionizing sources in primordial gas. Since the computational cost of the SIMPLEX method does not scale directly with the number of sources, it is convenient for studying systems such as primordial star-forming haloes that may form multiple ionizing sources.
Data Transfer Study HPSS Archiving
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, James; Parete-Koon, Suzanne T; Mitchell, Quinn
2015-01-01
The movement of the large amounts of data produced by codes run in a High Performance Computing (HPC) environment can be a bottleneck for project workflows. To balance filesystem capacity and performance requirements, HPC centers enforce data management policies to purge old files to make room for new computation and analysis results. Users at Oak Ridge Leadership Computing Facility (OLCF) and many other HPC user facilities must archive data to avoid data loss during purges, therefore the time associated with data movement for archiving is something that all users must consider. This study observed the difference in transfer speed from the originating location on the Lustre filesystem to the more permanent High Performance Storage System (HPSS). The tests were done with a number of different transfer methods for files that spanned a variety of sizes and compositions that reflect OLCF user data. This data will be used to help users of Titan and other Cray supercomputers plan their workflow and data transfers so that they are most efficient for their project. We will also discuss best practice for maintaining data at shared user facilities.
Quantum Engineering of Dynamical Gauge Fields on Optical Lattices
2016-07-08
exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained...a fact that can be explained by an approximate particle-hole symmetry. We have also developed a computer code suite for simulating the Abelian
49 CFR 395.16 - Electronic on-board recording devices.
Code of Federal Regulations, 2010 CFR
2010-10-01
... transfer through wired and wireless methods to portable computers used by roadside safety assurance... the results of power-on self-tests and diagnostic error codes. (e) Date and time. (1) The date and... part. Wireless communication information interchange methods must comply with the requirements of the...
Assessment of polarization effect on aerosol retrievals from MODIS
NASA Astrophysics Data System (ADS)
Korkin, S.; Lyapustin, A.
2010-12-01
Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on the look-up tables (LUT). For this work, MAIAC was run using two different LUTs, the first one generated using the scalar code SHARM [2], and the second one generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam where z-axis lies along the solar beam direction. In this case, the MSH solution for anisotropic part is nearly symmetric in azimuth, and is computed analytically. In scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for an analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin A., Wang Y., Laszlo I., Kahn R., Korkin S., Remer L., Levy R., and Reid J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin A., Muldashev T., Wang Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205 - 247 (2010). 3. Budak, V.P., Korkin S.V. On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak V.P., Sarmin S.E. Solution of radiative transfer equation by the method of spherical harmonics in the small angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit S., Saunderson J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak V.P, Klyuykov D.A., Korkin S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147 - 204 (2010).
NASA Astrophysics Data System (ADS)
Harijishnu, R.; Jayakumar, J. S.
2017-09-01
The main objective of this paper is to study the heat transfer rate of thermal radiation in participating media. For that, a generated collimated beam has been passed through a two-dimensional slab model of flint glass with a refractive index of 2. Both the polar and azimuthal angles have been varied to generate such a beam. The temperature of the slab and Snell's law have been validated using the Radiation Transfer Equation (RTE) in OpenFOAM (Open Field Operation and Manipulation), a CFD package that is a major computational tool in industry and research applications; its source code is modified so that the radiation heat transfer equation is added to the case, and different radiation heat transfer models are utilized. This work concentrates on the numerical strategies involving both transparent and participating media. Since the RTE is difficult to solve, the existing solver buoyantSimpleFoam is extended, by modifying and compiling its source code, to solve the radiation model in the participating medium and obtain the heat transfer rate inside the slab as the intensity of radiation is varied. The Finite Volume Method (FVM) is applied to solve the RTE governing these physical phenomena.
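As a language-neutral illustration of solving the RTE in a participating medium by sweeping discrete directions (the paper itself works within OpenFOAM's FVM radiation models), here is a minimal Python sketch for a gray, absorbing-emitting, non-scattering slab between cold black walls. The slab thickness, absorption coefficient, and gas temperature are illustrative, not the paper's flint-glass case.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def gray_slab_flux(L=0.1, n_cells=200, n_mu=8, kappa=50.0, T_gas=1000.0):
    """First-order upwind sweep of mu dI/dx = kappa*(Ib - I) for a gray,
    absorbing-emitting, non-scattering slab between cold black walls.
    Returns the net radiative flux q(x) [W/m^2] at the cell faces."""
    dx = L / n_cells
    Ib = SIGMA * T_gas**4 / np.pi                  # blackbody intensity of the gas
    mu, w = np.polynomial.legendre.leggauss(n_mu)  # ordinates and weights on [-1, 1]
    I = np.zeros((n_mu, n_cells + 1))              # intensity at cell faces
    for m in range(n_mu):
        if mu[m] > 0.0:     # sweep left -> right; cold black wall: I = 0 at x = 0
            for j in range(n_cells):
                c = kappa * dx / mu[m]
                I[m, j + 1] = (I[m, j] + c * Ib) / (1.0 + c)
        else:               # sweep right -> left; cold black wall: I = 0 at x = L
            for j in range(n_cells, 0, -1):
                c = kappa * dx / (-mu[m])
                I[m, j - 1] = (I[m, j] + c * Ib) / (1.0 + c)
    q = 2.0 * np.pi * np.einsum("m,m,mj->j", w, mu, I)  # net flux at faces
    return q

q = gray_slab_flux()
print("flux at slab boundaries: %.1f, %.1f W/m^2" % (q[0], q[-1]))
```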
Multidisciplinary analysis of actively controlled large flexible spacecraft
NASA Technical Reports Server (NTRS)
Cooper, Paul A.; Young, John W.; Sutter, Thomas R.
1986-01-01
The control of Flexible Structures (COFS) program has supported the development of an analysis capability at the Langley Research Center called the Integrated Multidisciplinary Analysis Tool (IMAT) which provides an efficient data storage and transfer capability among commercial computer codes to aid in the dynamic analysis of actively controlled structures. IMAT is a system of computer programs which transfers Computer-Aided-Design (CAD) configurations, structural finite element models, material property and stress information, structural and rigid-body dynamic model information, and linear system matrices for control law formulation among various commercial applications programs through a common database. Although general in its formulation, IMAT was developed specifically to aid in the evaluation of the structures. A description of the IMAT system and results of an application of the system are given.
The VLBA correlator: Real-time in the distributed era
NASA Technical Reports Server (NTRS)
Wells, D. C.
1992-01-01
The correlator is the signal processing engine of the Very Long Baseline Array (VLBA). Radio signals are recorded on special wideband (128 Mb/s) digital recorders at the 10 telescopes, with sampling times controlled by hydrogen maser clocks. The magnetic tapes are shipped to the Array Operations Center in Socorro, New Mexico, where they are played back simultaneously into the correlator. Real-time software and firmware controls the playback drives to achieve synchronization, compute models of the wavefront delay, control the numerous modules of the correlator, and record FITS files of the fringe visibilities at the back-end of the correlator. In addition to the more than 3000 custom VLSI chips which handle the massive data flow of the signal processing, the correlator contains a total of more than 100 programmable computers, 8-, 16- and 32-bit CPUs. Code is downloaded into front-end CPU's dependent on operating mode. Low-level code is assembly language, high-level code is C running under a RT OS. We use VxWorks on Motorola MVME147 CPU's. Code development is on a complex of SPARC workstations connected to the RT CPU's by Ethernet. The overall management of the correlation process is dependent on a database management system. We use Ingres running on a Sparcstation-2. We transfer logging information from the database of the VLBA Monitor and Control System to our database using Ingres/NET. Job scripts are computed and are transferred to the real-time computers using NFS, and correlation job execution logs and status flow back by the route. Operator status and control displays use windows on workstations, interfaced to the real-time processes by network protocols. The extensive network protocol support provided by VxWorks is invaluable. The VLBA Correlator's dependence on network protocols is an example of the radical transformation of the real-time world over the past five years. Real-time is becoming more like conventional computing. Paradoxically, 'conventional' computing is also adopting practices from the real-time world: semaphores, shared memory, light-weight threads, and concurrency. This appears to be a convergence of thinking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournier, J.; El-Genk, M.S.; Huang, L.
1999-01-01
The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.
DRA/NASA/ONERA Collaboration on Icing Research. Part 2; Prediction of Airfoil Ice Accretion
NASA Technical Reports Server (NTRS)
Wright, William B.; Gent, R. W.; Guffond, Didier
1997-01-01
This report presents results from a joint study by DRA, NASA, and ONERA for the purpose of comparing, improving, and validating the aircraft icing computer codes developed by each agency. These codes are of three kinds: (1) water droplet trajectory prediction, (2) ice accretion modeling, and (3) transient electrothermal deicer analysis. In this joint study, the agencies compared their code predictions with each other and with experimental results. These comparison exercises were published in three technical reports, each with joint authorship. DRA published and had first authorship of Part 1 - Droplet Trajectory Calculations, NASA of Part 2 - Ice Accretion Prediction, and ONERA of Part 3 - Electrothermal Deicer Analysis. The results cover work done during the period from August 1986 to late 1991. As a result, all of the information in this report is dated. Where necessary, current information is provided to show the direction of current research. In the present report on ice accretion, each agency predicted ice shapes on two-dimensional airfoils under icing conditions for which experimental ice shapes were available. In general, all three codes did a reasonable job of predicting the measured ice shapes. For any given experimental condition, one of the three codes predicted the general ice features (i.e., shape, impingement limits, mass of ice) somewhat better than did the other two. However, no single code consistently did better than the other two over the full range of conditions examined, which included rime, mixed, and glaze ice conditions. In several of the cases, DRA showed that the user's knowledge of icing can significantly improve the accuracy of the code prediction. Rime ice predictions were reasonably accurate and consistent among the codes, because droplets freeze on impact and the freezing model is simple. Glaze ice predictions were less accurate and less consistent among the codes, because the freezing model is more complex and is critically dependent upon unsubstantiated heat transfer and surface roughness models. Thus, heat transfer prediction methods used in the codes became the subject for a separate study in this report to compare predicted heat transfer coefficients with a limited experimental database of heat transfer coefficients for cylinders with simulated glaze and rime ice shapes. The codes did a good job of predicting heat transfer coefficients near the stagnation region of the ice shapes. But in the region of the ice horns, all three codes predicted heat transfer coefficients considerably higher than the measured values. An important conclusion of this study is that further research is needed to understand the finer details of the glaze ice accretion process and to develop improved glaze ice accretion models.
Transient Heat Transfer in Coated Superconductors.
1982-10-29
An Analysis of Elliptic Grid Generation Techniques Using an Implicit Euler Solver.
1986-06-09
automatic determination of the control function, elements of the covariant metric tensor in the elliptic grid generation system, from the Cm = 1,2,3...computational fluid dynamics code. The code includes a three-dimensional current research is aimed primarily at algebraic generation system based on transfinite...start the iterative solution of the flow, heat transfer, and combustion problems. elliptic generation system. This feature also...be made
Irradiation-driven Mass Transfer Cycles in Compact Binaries
NASA Astrophysics Data System (ADS)
Büning, A.; Ritter, H.
2005-08-01
We elaborate on the analytical model of Ritter, Zhang, & Kolb (2000) which describes the basic physics of irradiation-driven mass transfer cycles in semi-detached compact binary systems. In particular, we take into account a contribution to the thermal relaxation of the donor star which is unrelated to irradiation and which was neglected in previous studies. We present results of simulations of the evolution of compact binaries undergoing mass transfer cycles, in particular also of systems with a nuclear evolved donor star. These computations have been carried out with a stellar evolution code which computes mass transfer implicitly and models irradiation of the donor star in a point source approximation, thereby allowing for much more realistic simulations than were hitherto possible. We find that low-mass X-ray binaries (LMXBs) and cataclysmic variables (CVs) with orbital periods ⪉ 6hr can undergo mass transfer cycles only for low angular momentum loss rates. CVs containing a giant donor or one near the terminal age main sequence are more stable than previously thought, but can possibly also undergo mass transfer cycles.
Optimization of lightweight structure and supporting bipod flexure for a space mirror.
Chen, Yi-Cheng; Huang, Bo-Kai; You, Zhen-Ting; Chan, Chia-Yen; Huang, Ting-Ming
2016-12-20
This article presents an optimization process for integrated optomechanical design. The proposed optimization process for integrated optomechanical design comprises computer-aided drafting, finite element analysis (FEA), optomechanical transfer codes, and an optimization solver. The FEA was conducted to determine mirror surface deformation; then, deformed surface nodal data were transferred into Zernike polynomials through MATLAB optomechanical transfer codes to calculate the resulting optical path difference (OPD) and optical aberrations. To achieve an optimum design, the optimization iterations of the FEA, optomechanical transfer codes, and optimization solver were automatically connected through a self-developed Tcl script. Two examples of optimization design were illustrated in this research, namely, an optimum lightweight design of a Zerodur primary mirror with an outer diameter of 566 mm that is used in a spaceborne telescope and an optimum bipod flexure design that supports the optimum lightweight primary mirror. Finally, optimum designs were successfully accomplished in both examples, achieving a minimum peak-to-valley (PV) value for the OPD of the deformed optical surface. The simulated optimization results showed that (1) the lightweight ratio of the primary mirror increased from 56% to 66%; and (2) the PV value of the mirror supported by optimum bipod flexures in the horizontal position effectively decreased from 228 to 61 nm.
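The optimization loop described above scores each FEA result by the optical quality of the deformed surface. A minimal sketch of that scoring step (not the authors' MATLAB transfer codes): remove the best-fit piston/tip/tilt plane from nodal surface displacements and report peak-to-valley (PV) and RMS of the residual surface error; for a mirror, the OPD is roughly twice the surface error, and the nodal data below are synthetic.

```python
import numpy as np

def pv_rms_after_ptt(x, y, dz):
    """Remove the least-squares piston/tip/tilt plane from nodal surface
    displacements dz(x, y) and return peak-to-valley and RMS of the residual."""
    A = np.column_stack([np.ones_like(x), x, y])      # piston, tilt-x, tilt-y
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)
    resid = dz - A @ coeffs
    return resid.max() - resid.min(), np.sqrt(np.mean(resid**2))

# hypothetical nodal data: a gravity-sag-like quadratic bowl plus tilt, in metres
rng = np.random.default_rng(0)
x = rng.uniform(-0.283, 0.283, 4000)                  # ~566 mm diameter region
y = rng.uniform(-0.283, 0.283, 4000)
mask = x**2 + y**2 <= 0.283**2                        # keep nodes inside the aperture
x, y = x[mask], y[mask]
dz = 1e-7*(x**2 + y**2) + 5e-8*x
pv, rms = pv_rms_after_ptt(x, y, dz)
print("surface PV = %.1f nm, RMS = %.1f nm" % (pv*1e9, rms*1e9))
```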
Porting ONETEP to graphical processing unit-based coprocessors. 1. FFT box operations.
Wilkinson, Karl; Skylaris, Chris-Kriton
2013-10-30
We present the first graphical processing unit (GPU) coprocessor-enabled version of the Order-N Electronic Total Energy Package (ONETEP) code for linear-scaling first principles quantum mechanical calculations on materials. This work focuses on porting to the GPU the parts of the code that involve atom-localized fast Fourier transform (FFT) operations. These are among the most computationally intensive parts of the code and are used in core algorithms such as the calculation of the charge density, the local potential integrals, the kinetic energy integrals, and the nonorthogonal generalized Wannier function gradient. We have found that direct porting of the isolated FFT operations did not provide any benefit. Instead, it was necessary to tailor the port to each of the aforementioned algorithms to optimize data transfer to and from the GPU. A detailed discussion of the methods used and tests of the resulting performance are presented, which show that individual steps in the relevant algorithms are accelerated by a significant amount. However, the transfer of data between the GPU and host machine is a significant bottleneck in the reported version of the code. In addition, an initial investigation into a dynamic precision scheme for the ONETEP energy calculation has been performed to take advantage of the enhanced single precision capabilities of GPUs. The methods used here result in no disruption to the existing code base. Furthermore, as the developments reported here concern the core algorithms, they will benefit the full range of ONETEP functionality. Our use of a directive-based programming model ensures portability to other forms of coprocessors and will allow this work to form the basis of future developments to the code designed to support emerging high-performance computing platforms. Copyright © 2013 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Holden, Michael S.; Harvey, John K.; Boyd, Iain D.; George, Jyothish; Horvath, Thomas J.
1997-01-01
This paper summarizes the results of a series of experimental studies in the LENS shock tunnel and computations with DSMC and Navier Stokes codes which have been made to examine the aerothermal and flowfield characteristics of the flow over a sting-supported planetary probe configuration in hypervelocity air and nitrogen flows. The experimental program was conducted in the LENS hypervelocity shock tunnel at total enthalpies of 5 and 10 MJ/kg for a range of reservoir pressure conditions from 70 to 500 bars. Heat transfer and pressure measurements were made on the front and rear face of the probe and along the supporting sting. High-speed and single shot schlieren photography were also employed to examine the flow over the model and the time to establish the flow in the base recirculation region. Predictions of the flowfield characteristics and the distributions of heat transfer and pressure were made with DSMC codes for rarefied flow conditions and with the Navier-Stokes solvers for the higher pressure conditions where the flows were assumed to be laminar. Analysis of the time history records from the heat transfer and pressure instrumentation on the face of the probe and in the base region indicated that the base flow was fully established in under 4 milliseconds from flow initiation or between 35 and 50 flow lengths based on base height. The measurements made in three different tunnel entries with two models of identical geometries but with different instrumentation packages, one prepared by NASA Langley and the second prepared by CUBRC, demonstrated good agreement between heat transfer measurements made with two different types of thin film and coaxial gage instrumentation. The measurements of heat transfer and pressure to the front face of the probe were in good agreement with theoretical predictions from both the DSMC and Navier Stokes codes. For the measurements made in low density flows, computations with the DSMC code were found to compare well with the pressure and heat transfer measurements on the sting, although the computed heat transfer rates in the recirculation region did not exhibit the same characteristics as the measurements. For the 10 MJ/kg and 500 bar reservoir match point condition, the measurements of heat transfer along the sting from the first group of studies were in agreement with the Navier Stokes solutions for laminar conditions. A similar set of measurements made in later tests where the model was moved to a slightly different position in the test section indicated that the boundary layer in the reattachment compression region was close to transition or transitional where small changes in the test environment can result in larger than laminar heating rates. The maximum heating coefficients on the sting observed in the present studies were a small fraction of similar measurements obtained at nominally the same conditions in the HEG shock tunnel, where it is possible for transition to occur in the base flow, and in the low enthalpy studies conducted in the NASA Langley high Reynolds number Mach 10 tunnel where the base flow was shown to be turbulent. While the hybrid Navier-Stokes/DSMC calculations by Gochberg et al. (Reference 1) suggested that employing the Navier-Stokes calculations for the entire flowfield could be seriously in error in the base region for the 10 MJ/kg, 500 bar test case, similar calculations performed by Cornell, presented here, do not.
Stagnation-point heat-transfer rate predictions at aeroassist flight conditions
NASA Technical Reports Server (NTRS)
Gupta, Roop N.; Jones, Jim J.; Rochelle, William C.
1992-01-01
The results are presented for the stagnation-point heat-transfer rates used in the design process of the Aeroassist Flight Experiment (AFE) vehicle over its entire aeropass trajectory. The prediction methods used in this investigation demonstrate the application of computational fluid dynamics (CFD) techniques to a wide range of flight conditions and their usefulness in a design process. The heating rates were computed by a viscous-shock-layer (VSL) code at the lower altitudes and by a Navier-Stokes (N-S) code for the higher altitude cases. For both methods, finite-rate chemically reacting gas was considered, and a temperature-dependent wall-catalysis model was used. The wall temperature for each case was assumed to be the radiative equilibrium temperature, based on total heating. The radiative heating was estimated by using a correlation equation. Wall slip was included in the N-S calculation method, and this method implicitly accounts for shock slip. The N-S/VSL combination of prediction methods was established by comparison with the published benchmark flow-field code LAURA results at lower altitudes, and the direct simulation Monte Carlo results for the higher altitude cases. To obtain the design heating rate over the entire forward face of the vehicle, a boundary-layer method (BLIMP code) that employs reacting chemistry and surface catalysis was used. The ratio of the VSL or N-S method prediction to that obtained from the boundary-layer method code at the stagnation point is used to define an adjustment factor, which accounts for the errors involved in using the boundary-layer method.
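For a rough point of comparison with the VSL/N-S results (and not the method used in the report), the Sutton-Graves engineering correlation is commonly quoted for convective stagnation-point heating in Earth air. The trajectory point in the sketch below is illustrative, not AFE data.

```python
import math

def sutton_graves_heating(rho, velocity, nose_radius, k=1.7415e-4):
    """Convective stagnation-point heat flux [W/m^2] from the Sutton-Graves
    correlation for Earth air: q = k * sqrt(rho / R_n) * V^3 (SI units).
    k is the commonly quoted constant; rho in kg/m^3, V in m/s, R_n in m."""
    return k * math.sqrt(rho / nose_radius) * velocity**3

# illustrative aeropass-like point (hypothetical values, not AFE trajectory data)
rho = 2.0e-4        # freestream density, kg/m^3
velocity = 9.0e3    # freestream velocity, m/s
nose_radius = 2.3   # effective nose radius, m
q = sutton_graves_heating(rho, velocity, nose_radius)
print("q_stag ~ %.1f W/cm^2" % (q / 1.0e4))
```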
Turbulence modeling of free shear layers for high performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas
1993-01-01
In many flowfield computations, accuracy of the turbulence model employed is frequently a limiting factor in the overall accuracy of the computation. This is particularly true for complex flowfields such as those around full aircraft configurations. Free shear layers such as wakes, impinging jets (in V/STOL applications), and mixing layers over cavities are often part of these flowfields. Although flowfields have been computed for full aircraft, the memory and CPU requirements for these computations are often excessive. Additional computer power is required for multidisciplinary computations such as coupled fluid dynamics and conduction heat transfer analysis. Massively parallel computers show promise in alleviating this situation, and the purpose of this effort was to adapt and optimize CFD codes to these new machines. The objective of this research effort was to compute the flowfield and heat transfer for a two-dimensional jet impinging normally on a cool plate. The results of this research effort were summarized in an AIAA paper titled 'Parallel Implementation of the k-epsilon Turbulence Model'. Appendix A contains the full paper.
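The k-epsilon closure being parallelized in the cited paper rests on the eddy-viscosity relation and the k and epsilon source terms. A minimal sketch with the standard Launder-Spalding constants (illustrative inputs, not the paper's solver):

```python
# standard k-epsilon model constants (Launder-Spalding values)
C_MU, C_E1, C_E2 = 0.09, 1.44, 1.92

def eddy_viscosity(rho, k, eps):
    """Turbulent (eddy) viscosity mu_t = rho * C_mu * k^2 / eps."""
    return rho * C_MU * k**2 / eps

def k_eps_sources(rho, k, eps, strain_rate_sq):
    """Source terms of the k and epsilon transport equations for incompressible
    flow: production P_k = mu_t * S^2, where S^2 = 2 S_ij S_ij."""
    mu_t = eddy_viscosity(rho, k, eps)
    P_k = mu_t * strain_rate_sq
    S_k = P_k - rho * eps
    S_eps = (C_E1 * P_k - C_E2 * rho * eps) * eps / k
    return S_k, S_eps

# example: a point in the shear layer of an impinging jet (illustrative numbers)
rho, k, eps, S2 = 1.2, 0.5, 20.0, 4.0e3
print(eddy_viscosity(rho, k, eps), k_eps_sources(rho, k, eps, S2))
```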
NASA Technical Reports Server (NTRS)
Wang, C. R.; Hingst, W. R.; Porro, A. R.
1991-01-01
The properties of 2-D shock wave/turbulent boundary layer interaction flows were calculated by using a compressible turbulent Navier-Stokes numerical computational code. Interaction flows caused by oblique shock wave impingement on the turbulent boundary layer flow were considered. The oblique shock waves were induced with shock generators at angles of attack less than 10 degs in supersonic flows. The surface temperatures were kept at near-adiabatic (ratio of wall static temperature to free stream total temperature) and cold wall (ratio of wall static temperature to free stream total temperature) conditions. The computational results were studied for the surface heat transfer, velocity temperature correlation, and turbulent shear stress in the interaction flow fields. Comparisons of the computational results with existing measurements indicated that (1) the surface heat transfer rates and surface pressures could be correlated with Holden's relationship, (2) the mean flow streamwise velocity components and static temperatures could be correlated with Crocco's relationship if flow separation did not occur, and (3) the Baldwin-Lomax turbulence model should be modified for turbulent shear stress computations in the interaction flows.
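The Crocco correlation referred to above relates mean static temperature to mean velocity across the boundary layer. A minimal sketch of the Crocco-Busemann form with an assumed recovery factor of 0.89 (the edge and wall conditions below are illustrative, not the experiment's):

```python
import numpy as np

def crocco_busemann(u_over_ue, T_e, T_w, M_e, gamma=1.4, r=0.89):
    """Crocco-Busemann estimate of static temperature across a compressible
    boundary layer, given the local velocity ratio u/u_e.
    T_aw is the adiabatic wall temperature with recovery factor r."""
    T_aw = T_e * (1.0 + r * 0.5 * (gamma - 1.0) * M_e**2)
    return (T_w
            + (T_aw - T_w) * u_over_ue
            - r * 0.5 * (gamma - 1.0) * M_e**2 * T_e * u_over_ue**2)

# cold-wall example at Mach 3 (illustrative values)
u_ratio = np.linspace(0.0, 1.0, 6)
print(crocco_busemann(u_ratio, T_e=110.0, T_w=165.0, M_e=3.0))
```

At u/u_e = 1 the relation returns the edge temperature T_e, as it should.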
User's manual for the BNW-I optimization code for dry-cooled power plants. Volume I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, D.J.; Daniel, D.J.; De Mier, W.V.
1977-01-01
This User's Manual provides information on the use and operation of three versions of BNW-I, a computer code developed by Battelle, Pacific Northwest Laboratory (PNL) as a part of its activities under the ERDA Dry Cooling Tower Program. These three versions of BNW-I were used as reported elsewhere to obtain comparative incremental costs of electrical power production by two advanced concepts (one using plastic heat exchangers and one using ammonia as an intermediate heat transfer fluid) and a state-of-the-art system. The computer program offers a comprehensive method of evaluating the cost savings potential of dry-cooled heat rejection systems and components for power plants. This method goes beyond simple "figure-of-merit" optimization of the cooling tower and includes such items as the cost of replacement capacity needed on an annual basis and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence, the BNW-I code is a useful tool for determining potential cost savings of new heat transfer surfaces, new piping or other components as part of an optimized system for a dry-cooled power plant.
NASA Astrophysics Data System (ADS)
Russkova, Tatiana V.
2017-11-01
One tool to improve the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on CUDA-enabled NVIDIA graphics processors is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capabilities are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
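As a minimal CPU reference for the kind of calculation being ported to CUDA (not the authors' code), here is a Monte Carlo estimate of reflectance and transmittance for a single homogeneous layer with isotropic scattering; the optical depth and single-scattering albedo are illustrative.

```python
import numpy as np

def mc_slab(tau_total=1.0, omega0=0.9, n_photons=50_000, seed=1):
    """Monte Carlo transport through a plane-parallel layer of total optical depth
    tau_total with single-scattering albedo omega0 and isotropic scattering.
    Photons enter vertically at the top; returns (reflectance, transmittance)."""
    rng = np.random.default_rng(seed)
    reflected = transmitted = 0.0
    for _ in range(n_photons):
        tau, mu, weight = 0.0, 1.0, 1.0              # tau measured downward, mu = cos(zenith)
        while True:
            tau += mu * (-np.log(rng.random()))      # free path to the next interaction
            if tau < 0.0:
                reflected += weight                  # escaped through the top
                break
            if tau > tau_total:
                transmitted += weight                # escaped through the bottom
                break
            weight *= omega0                         # absorption handled by weighting
            mu = 2.0 * rng.random() - 1.0            # isotropic scattering direction
            if weight < 1e-4:                        # crude termination (no Russian roulette)
                break
    return reflected / n_photons, transmitted / n_photons

print(mc_slab())
```

On a GPU, each photon history is independent, which is what makes this class of algorithm map well onto CUDA threads.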
Heat-transfer optimization of a high-spin thermal battery
NASA Astrophysics Data System (ADS)
Krieger, Frank C.
Recent advancements in thermal battery technology have produced batteries incorporating a fusible material heat reservoir for operating temperature control that operate reliably under the high spin rates often encountered in ordnance applications. Attention is presently given to the heat-transfer optimization of a high-spin thermal battery employing a nonfusible steel heat reservoir, on the basis of a computer code that simulated the effect of an actual fusible material heat reservoir on battery performance. Both heat paper and heat pellet employing thermal battery configurations were considered.
Initial verification and validation of RAZORBACK - A research reactor transient analysis code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2015-09-01
This report describes the work and results of the initial verification and validation (V&V) of the beta release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This initial V&V effort was intended to confirm that the code work to date shows good agreement between simulation and actual ACRR operations, indicating that the subsequent V&V effort for the official release of the code will be successful.
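The coupling Razorback solves is, at its core, point reactor kinetics feeding a fuel-element energy balance with temperature reactivity feedback. A minimal sketch of that coupling with one delayed-neutron group, a single lumped fuel node, a fixed coolant temperature, and entirely illustrative constants (not ACRR data and not the Razorback solution scheme):

```python
from scipy.integrate import solve_ivp

# illustrative constants (hypothetical, not ACRR data)
BETA, LAMBDA_GEN, LAM = 0.0073, 4.0e-5, 0.08  # delayed fraction, generation time [s], decay const [1/s]
ALPHA_T = -1.0e-5        # fuel temperature reactivity coefficient [dk/k per K]
P0 = 1.0e5               # power at unit normalized neutron level [W]
MCP, HA = 50.0e3, 2.0e3  # fuel heat capacity [J/K], fuel-to-coolant conductance [W/K]
T_COOL = 300.0
T_EQ = T_COOL + P0 / HA  # steady-state fuel temperature at the initial power level

def rhs(t, y, rho_ext):
    n, c, T = y
    rho = rho_ext + ALPHA_T * (T - T_EQ)           # net reactivity with feedback
    dn = (rho - BETA) / LAMBDA_GEN * n + LAM * c   # point kinetics, one delayed group
    dc = BETA / LAMBDA_GEN * n - LAM * c
    dT = (P0 * n - HA * (T - T_COOL)) / MCP        # lumped fuel-element energy balance
    return [dn, dc, dT]

# start critical at equilibrium, then hold a +0.2 dollar step reactivity insertion
y0 = [1.0, BETA / (LAMBDA_GEN * LAM), T_EQ]
sol = solve_ivp(rhs, (0.0, 20.0), y0, args=(0.2 * BETA,), method="LSODA", max_step=0.01)
print("relative power at t = 20 s: %.2f" % sol.y[0, -1])
```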
Prediction of Film Cooling on Gas Turbine Airfoils
NASA Technical Reports Server (NTRS)
Garg, Vijay K.; Gaugler, Raymond E.
1994-01-01
A three-dimensional Navier-Stokes analysis tool has been developed in order to study the effect of film cooling on the flow and heat transfer characteristics of actual turbine airfoils. An existing code (Arnone et al., 1991) has been modified for the purpose. The code is an explicit, multigrid, cell-centered, finite volume code with an algebraic turbulence model. Eigenvalue scaled artificial dissipation and variable-coefficient implicit residual smoothing are used with a full-multigrid technique. Moreover, Mayle's transition criterion (Mayle, 1991) is used. The effects of film cooling have been incorporated into the code in the form of appropriate boundary conditions at the hole locations on the airfoil surface. Each hole exit is represented by several control volumes, thus providing an ability to study the effect of hole shape on the film-cooling characteristics. Comparison is fair with near mid-span experimental data for four and nine rows of cooling holes, five on the shower head, and two rows each on the pressure and suction surfaces. The computations, however, show a strong spanwise variation of the heat transfer coefficient on the airfoil surface, especially with shower-head cooling.
Numerical Simulations of Dynamical Mass Transfer in Binaries
NASA Astrophysics Data System (ADS)
Motl, P. M.; Frank, J.; Tohline, J. E.
1999-05-01
We will present results from our ongoing research project to simulate dynamically unstable mass transfer in near contact binaries with mass ratios different from one. We employ a fully three-dimensional self-consistent field technique to generate synchronously rotating polytropic binaries. With our self-consistent field code we can create equilibrium binaries where one component is, by radius, within about 99% of filling its Roche lobe, for example. These initial configurations are evolved using a three-dimensional, Eulerian hydrodynamics code. We make no assumptions about the symmetry of the subsequent flow and the entire binary system is evolved self-consistently under the influence of its own gravitational potential. For a given mass ratio and polytropic index for the binary components, mass transfer via Roche lobe overflow can be predicted to be stable or unstable through simple theoretical arguments. The validity of the approximations made in the stability calculations is tested against our numerical simulations. We acknowledge support from the U.S. National Science Foundation through grants AST-9720771, AST-9528424, and DGE-9355007. This research has been supported, in part, by grants of high-performance computing time on NPACI facilities at the San Diego Supercomputer Center, the Texas Advanced Computing Center and through the PET program of the NAVOCEANO DoD Major Shared Resource Center in Stennis, MS.
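The "simple theoretical arguments" for stability compare how the donor radius and the Roche-lobe radius respond to mass loss. A minimal sketch using the Eggleton (1983) approximation for R_L/a and the conservative-transfer orbital response, with an assumed adiabatic mass-radius exponent for a deeply convective donor:

```python
import numpy as np

def roche_lobe_ratio(q):
    """Eggleton (1983) approximation for R_L/a as a function of q = M_donor/M_accretor."""
    q13 = q**(1.0/3.0)
    return 0.49*q13**2 / (0.6*q13**2 + np.log(1.0 + q13))

def zeta_lobe(q, dlnq=1e-4):
    """d ln R_L / d ln M_donor for fully conservative mass transfer."""
    dln_a = 2.0*(q - 1.0)                          # d ln a / d ln M_d, conservative case
    dln_rla = (np.log(roche_lobe_ratio(q*np.exp(dlnq))) -
               np.log(roche_lobe_ratio(q*np.exp(-dlnq)))) / (2.0*dlnq)
    dlnq_dlnMd = 1.0 + q                           # d ln q / d ln M_d, conservative case
    return dln_a + dln_rla*dlnq_dlnMd

# crude check: dynamically stable if the donor's adiabatic mass-radius exponent
# zeta_donor exceeds zeta_lobe (zeta_donor ~ -1/3 for a deeply convective donor)
for q in (0.2, 0.5, 1.0, 1.5):
    print("q = %.1f  zeta_L = %+.2f  stable for convective donor: %s"
          % (q, zeta_lobe(q), zeta_lobe(q) < -1.0/3.0))
```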
Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer
NASA Astrophysics Data System (ADS)
Meliani, Zakaria; Mizuno, Yosuke; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri
2017-02-01
Context. In many astrophysical phenomena, and especially in those that involve the high-energy regimes that always accompany the astronomical phenomenology of black holes and neutron stars, physical conditions that are achieved are extreme in terms of speeds, temperatures, and gravitational fields. In such relativistic regimes, numerical calculations are the only tool to accurately model the dynamics of the flows and the transport of radiation in the accreting matter. Aims: We here continue our effort of modelling the behaviour of matter when it orbits or is accreted onto a generic black hole by developing a new numerical code that employs advanced techniques geared towards solving the equations of general-relativistic hydrodynamics. Methods: More specifically, the new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of adaptive mesh-refinement (AMR) techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to accurately compute the electromagnetic emissions from such accretion flows. Results: We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry that are performed either in two or three spatial dimensions. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black-hole binary interacting with the surrounding circumbinary disc. In this way, we can present for the first time ray-traced images of the shocked fluid and the light curve resulting from consistent general-relativistic radiation-transport calculations from this process. Conclusions: The work presented here lays the ground for the development of a generic computational infrastructure employing AMR techniques to accurately and self-consistently calculate general-relativistic accretion flows onto compact objects. In addition to the accurate handling of the matter, we provide a self-consistent electromagnetic emission from these scenarios by solving the associated radiative-transfer problem. While magnetic fields are currently excluded from our analysis, the tools presented here can have a number of applications to study accretion flows onto black holes or neutron stars.
A new computational method for the detection of horizontal gene transfer events.
Tsirigos, Aristotelis; Rigoutsos, Isidore
2005-01-01
In recent years, the increase in the amounts of available genomic data has made it easier to appreciate the extent by which organisms increase their genetic diversity through horizontally transferred genetic material. Such transfers have the potential to give rise to extremely dynamic genomes where a significant proportion of their coding DNA has been contributed by external sources. Because of the impact of these horizontal transfers on the ecological and pathogenic character of the recipient organisms, methods are continuously sought that are able to computationally determine which of the genes of a given genome are products of transfer events. In this paper, we introduce and discuss a novel computational method for identifying horizontal transfers that relies on a gene's nucleotide composition and obviates the need for knowledge of codon boundaries. In addition to being applicable to individual genes, the method can be easily extended to the case of clusters of horizontally transferred genes. With the help of an extensive and carefully designed set of experiments on 123 archaeal and bacterial genomes, we demonstrate that the new method exhibits significant improvement in sensitivity when compared to previously published approaches. In fact, it achieves an average relative improvement across genomes of between 11 and 41% compared to the Codon Adaptation Index method in distinguishing native from foreign genes. Our method's horizontal gene transfer predictions for 123 microbial genomes are available online at http://cbcsrv.watson.ibm.com/HGT/.
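The composition-based idea can be illustrated with a crude k-mer typicality test: compare each gene's tetranucleotide frequencies with the genome-wide frequencies and flag the largest deviations. This sketch is illustrative only and is not the authors' algorithm or scoring.

```python
from collections import Counter
import numpy as np

def kmer_freqs(seq, k=4):
    """Normalized k-mer frequency vector (dict) of a DNA sequence."""
    counts = Counter(seq[i:i+k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n/total for kmer, n in counts.items()}

def composition_score(gene_seq, genome_freqs, k=4):
    """Manhattan distance between a gene's k-mer composition and the genome's."""
    gf = kmer_freqs(gene_seq, k)
    kmers = set(gf) | set(genome_freqs)
    return sum(abs(gf.get(km, 0.0) - genome_freqs.get(km, 0.0)) for km in kmers)

def flag_atypical_genes(genes, top_fraction=0.05, k=4):
    """Rank genes by compositional deviation and flag the most atypical fraction
    as candidate horizontal transfers (a crude typicality test, not a proof of HGT)."""
    genome_freqs = kmer_freqs("".join(genes.values()), k)
    scores = {name: composition_score(seq, genome_freqs, k) for name, seq in genes.items()}
    cutoff = np.quantile(list(scores.values()), 1.0 - top_fraction)
    return {name: s for name, s in scores.items() if s >= cutoff}

# usage with a hypothetical dict {gene_id: nucleotide sequence}
# candidates = flag_atypical_genes(genes)
```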
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talley, Darren G.
2017-04-01
This report describes the work and results of the verification and validation (V&V) of the version 1.0 release of the Razorback code. Razorback is a computer code designed to simulate the operation of a research reactor (such as the Annular Core Research Reactor (ACRR)) by a coupled numerical solution of the point reactor kinetics equations, the energy conservation equation for fuel element heat transfer, the equation of motion for fuel element thermal expansion, and the mass, momentum, and energy conservation equations for the water cooling of the fuel elements. This V&V effort was intended to confirm that the code shows good agreement between simulation and actual ACRR operations.
EPA Remote Sensing Information Gateway
NASA Astrophysics Data System (ADS)
Paulsen, H. K.; Szykman, J. J.; Plessel, T.; Freeman, M.; Dimmick, F.
2009-12-01
The Remote Sensing Information Gateway (RSIG) was developed by the U.S. Environmental Protection Agency (EPA) to assist researchers in easily obtaining and combining a variety of environmental datasets related to air quality research. Current datasets available include, but are not limited to, surface PM2.5 and O3 data, satellite-derived aerosol optical depth, and 3-dimensional output from U.S. EPA's Models-3/Community Multi-scale Air Quality (CMAQ) modeling system. The presentation will include a demonstration that illustrates several scenarios of how researchers use the tool to help them visualize and obtain data for their work, with a particular focus on episode analysis related to biomass burning impacts on air quality. The presentation will provide an overview of how RSIG works and how the code has been—and can be—adapted for other projects. One example is the Virtual Estuary, which focuses on automating the retrieval and pre-processing of a variety of data needed for estuarine research. RSIG's source codes are freely available to researchers with permission from the EPA principal investigator, Dr. Jim Szykman. RSIG is available to the community and can be accessed online at http://www.epa.gov/rsig. Once the Java policy file is configured, you can run the RSIG applet on your computer and connect to the RSIG server to visualize and retrieve available data sets. The applet allows the user to specify the temporal/spatial areas of interest and the types of data to retrieve. The applet then communicates with RSIG subsetter codes located on the data owners' remote servers; the subsetter codes assemble and transfer via ordinary Internet protocols only the specified data to the researcher's computer. This is much faster than the usual method of transferring large files via FTP and greatly reduces network traffic. The RSIG applet then visualizes the transferred data on a latitude-longitude map, automatically locating the data in the correct geographic position. Images, animations, and aggregated data can be saved or exported in a variety of data formats: Binary External Data Representation (XDR) format (simple, efficient), NetCDF-COARDS format, NetCDF-IOAPI format (regridding the data to a CMAQ grid), HDF (unsubsetted satellite files), ASCII tab-delimited spreadsheet, MCMC (used for input into the HB model), PNG images, MPG movies, KMZ movies (for display in Google Earth and similar applications), GeoTIFF RGB format, and 32-bit float format. Contacts for obtaining the RSIG code are available at the RSIG website.
NASA Technical Reports Server (NTRS)
Harloff, G. J.; Lai, H. T.; Nelson, E. S.
1988-01-01
The PARC2D code has been selected to analyze the flowfields of a representative hypersonic scramjet nozzle over a range of flight conditions from Mach 3 to 20. The flowfields, wall pressures, wall skin friction values, heat transfer values and overall nozzle performance are presented.
Transient PVT measurements and model predictions for vessel heat transfer. Part II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felver, Todd G.; Paradiso, Nicholas Joseph; Winters, William S., Jr.
2010-07-01
Part I of this report focused on the acquisition and presentation of transient PVT data sets that can be used to validate gas transfer models. Here in Part II we focus primarily on describing models and validating these models using the data sets. Our models are intended to describe the high speed transport of compressible gases in arbitrary arrangements of vessels, tubing, valving and flow branches. Our models fall into three categories: (1) network flow models in which flow paths are modeled as one-dimensional flow and vessels are modeled as single control volumes, (2) CFD (Computational Fluid Dynamics) models in which flow in and between vessels is modeled in three dimensions and (3) coupled network/CFD models in which vessels are modeled using CFD and flows between vessels are modeled using a network flow code. In our work we utilized NETFLOW as our network flow code and FUEGO for our CFD code. Since network flow models lack three-dimensional resolution, correlations for heat transfer and tube frictional pressure drop are required to resolve important physics not being captured by the model. Here we describe how vessel heat transfer correlations were improved using the data and present direct model-data comparisons for all tests documented in Part I. Our results show that our network flow models have been substantially improved. The CFD modeling presented here describes the complex nature of vessel heat transfer and for the first time demonstrates that flow and heat transfer in vessels can be modeled directly without the need for correlations.
Software for Collaborative Engineering of Launch Rockets
NASA Technical Reports Server (NTRS)
Stanley, Thomas Troy
2003-01-01
The Rocket Evaluation and Cost Integration for Propulsion and Engineering software enables collaborative computing with automated exchange of information in the design and analysis of launch rockets and other complex systems. RECIPE can interact with and incorporate a variety of programs, including legacy codes, that model aspects of a system from the perspectives of different technological disciplines (e.g., aerodynamics, structures, propulsion, trajectory, aeroheating, controls, and operations) and that are used by different engineers on different computers running different operating systems. RECIPE consists mainly of (1) ISCRM a file-transfer subprogram that makes it possible for legacy codes executed in their original operating systems on their original computers to exchange data and (2) CONES an easy-to-use filewrapper subprogram that enables the integration of legacy codes. RECIPE provides a tightly integrated conceptual framework that emphasizes connectivity among the programs used by the collaborators, linking these programs in a manner that provides some configuration control while facilitating collaborative engineering tradeoff studies, including design to cost studies. In comparison with prior collaborative-engineering schemes, one based on the use of RECIPE enables fewer engineers to do more in less time.
Modeling of Non-Isothermal Cryogenic Fluid Sloshing
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Moder, Jeffrey P.
2015-01-01
A computational fluid dynamic model was used to simulate the thermal destratification in an upright self-pressurized cryostat approximately half-filled with liquid nitrogen and subjected to forced sinusoidal lateral shaking. A full three-dimensional computational grid was used to model the tank dynamics, fluid flow and thermodynamics using the ANSYS Fluent code. A non-inertial grid was used which required the addition of momentum and energy source terms to account for the inertial forces, energy transfer and wall reaction forces produced by the shaken tank. The kinetics-based Schrage mass transfer model provided the interfacial mass transfer due to evaporation and condensation at the sloshing interface. The dynamic behavior of the sloshing interface, its amplitude and transition to different wave modes, provided insight into the fluid process at the interface. The tank pressure evolution and temperature profiles compared relatively well with the shaken cryostat experimental test data provided by the Centre National D'Etudes Spatiales.
Validation of hydrogen gas stratification and mixing models
Wu, Hsingtzu; Zhao, Haihua
2015-05-26
Two validation benchmarks confirm that the BMIX++ code is capable of simulating unintended hydrogen release scenarios efficiently. The BMIX++ (UC Berkeley mechanistic MIXing code in C++) code has been developed to accurately and efficiently predict the fluid mixture distribution and heat transfer in large stratified enclosures for accident analyses and design optimizations. The BMIX++ code uses a scaling-based one-dimensional method to achieve a large reduction in computational effort compared to a 3-D computational fluid dynamics (CFD) simulation. Two BMIX++ benchmark models have been developed. One is for a single buoyant jet in an open space and another is for a large sealed enclosure with both a jet source and a vent near the floor. Both of them have been validated by comparisons with experimental data. Excellent agreements are observed. The entrainment coefficients of 0.09 and 0.08 are found to fit the experimental data for hydrogen leaks with Froude numbers of 99 and 268 best, respectively. In addition, the BMIX++ simulation results of the average helium concentration for an enclosure with a vent and a single jet agree with the experimental data within a margin of about 10% for jet flow rates ranging from 1.21 × 10⁻⁴ to 3.29 × 10⁻⁴ m³/s. In conclusion, computing time for each BMIX++ model with a normal desktop computer is less than 5 min.
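For orientation only, the hypothetical sketch below computes a densimetric Froude number for a hydrogen release and picks whichever of the two fitted entrainment coefficients reported above (0.09 at Fr ≈ 99, 0.08 at Fr ≈ 268) lies closer; the Froude definition, function names, and example values are assumptions, not taken from BMIX++.

    import math

    def densimetric_froude(u_exit, d_exit, rho_jet, rho_amb, g=9.81):
        """One common densimetric Froude definition: source inertia versus buoyancy."""
        g_prime = g * (rho_amb - rho_jet) / rho_jet  # reduced gravity
        return u_exit / math.sqrt(g_prime * d_exit)

    def entrainment_coefficient(froude, fits=((99.0, 0.09), (268.0, 0.08))):
        """Select the fitted entrainment coefficient whose Froude number is closest."""
        return min(fits, key=lambda f: abs(f[0] - froude))[1]

    fr = densimetric_froude(u_exit=30.0, d_exit=0.005, rho_jet=0.084, rho_amb=1.2)
    alpha = entrainment_coefficient(fr)
    print(f"Fr = {fr:.0f}, entrainment coefficient = {alpha}")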
NASA Astrophysics Data System (ADS)
Özel, Tuğrul; Arısoy, Yiğit M.; Criales, Luis E.
Computational modelling of Laser Powder Bed Fusion (L-PBF) processes such as Selective Laser Melting (SLM) can reveal information that is hard to obtain or unobtainable by in-situ experimental measurements. A 3D thermal field that is not visible to the thermal camera can be obtained by solving the 3D heat transfer problem. Furthermore, microstructural modelling can be used to predict the quality and mechanical properties of the product. In this paper, a nonlinear 3D Finite Element Method-based computational code is developed to simulate the SLM process with different process parameters such as laser power and scan velocity. The code is further improved by utilizing an in-situ thermal camera recording to predict spattering, which is in turn included as a stochastic heat loss. Then, thermal gradients extracted from the simulations are applied to predict growth directions in the resulting microstructure.
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are only done for those few optical states corresponding to the most important principal components, and correction factors are applied to approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed, for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work on which is under way.
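The following sketch illustrates, in heavily simplified form, the PCA speed-up idea summarized above: empirical orthogonal functions are computed for a set of optical-property profiles, the costly solver is called only for the mean state and a few EOF-perturbed states, and all remaining states are reconstructed linearly. The expensive_rt stand-in, the first-order reconstruction, and all sizes are assumptions for illustration and are not the UPCART implementation.

    import numpy as np

    def expensive_rt(profile):
        """Placeholder for a costly multiple-scattering RT call returning a radiance."""
        return np.log1p(np.sum(np.exp(-profile)))

    def pca_rt(profiles, n_components=3):
        profiles = np.asarray(profiles)                 # (n_states, n_layers)
        mean = profiles.mean(axis=0)
        anomalies = profiles - mean
        _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
        eofs = vt[:n_components]                        # leading EOFs
        scores = anomalies @ eofs.T                     # PC scores of every state

        # Costly calls: mean state plus +/- perturbations along each EOF.
        base = expensive_rt(mean)
        grads = np.array([(expensive_rt(mean + e) - expensive_rt(mean - e)) / 2.0
                          for e in eofs])

        # First-order reconstruction for all states from 2*n_components + 1 calls.
        return base + scores @ grads

    radiances = pca_rt(np.random.rand(500, 40))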
Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.; Flores, Jolen
1989-01-01
Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.
Optimal low thrust geocentric transfer. [mission analysis computer program
NASA Technical Reports Server (NTRS)
Edelbaum, T. N.; Sackett, L. L.; Malchow, H. L.
1973-01-01
A computer code which will rapidly calculate time-optimal low thrust transfers is being developed as a mission analysis tool. The final program will apply to NEP or SEP missions and will include a variety of environmental effects. The current program assumes constant acceleration. The oblateness effect and shadowing may be included. Detailed state and costate equations are given for the thrust effect, oblateness effect, and shadowing. A simple but adequate model yields analytical formulas for power degradation due to the Van Allen radiation belts for SEP missions. The program avoids the classical singularities by the use of equinoctial orbital elements. Kryloff-Bogoliuboff averaging is used to facilitate rapid calculation. Results for selected cases using the current program are given.
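As a rough analytical cross-check consistent with the constant-acceleration assumption mentioned above, the classic Edelbaum circle-to-circle estimate (with plane change) gives a quick delta-V and transfer-time figure; this textbook approximation is shown only for orientation and is not the averaged equinoctial formulation used in the program.

    import math

    MU_EARTH = 3.986004418e14  # m^3/s^2

    def edelbaum_delta_v(r0, r1, delta_inclination_rad):
        """Constant-acceleration circle-to-circle delta-V with plane change."""
        v0, v1 = math.sqrt(MU_EARTH / r0), math.sqrt(MU_EARTH / r1)
        return math.sqrt(v0**2 + v1**2 -
                         2.0 * v0 * v1 * math.cos(math.pi / 2.0 * delta_inclination_rad))

    dv = edelbaum_delta_v(7000e3, 42164e3, math.radians(28.5))
    time_of_flight = dv / 3.0e-4          # seconds, for an assumed 3e-4 m/s^2 acceleration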
A keyboard control method for loop measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Z.W.
1994-12-31
This paper describes a keyboard control mode based on the DEC VAX computer, in which a keyboard code can be read while a program is running. During loop measurement or multitask operation, a keyboard code can be recognized and used either to stop the current operation or to transfer to another operation while the previous information is retained. Using this mode, the author successfully applied one-key control of loop measurements to test the Dual Input Memory module used in a rearranged Energy Trigger system for LEP 8-bunch operation.
A simple method for computing the relativistic Compton scattering kernel for radiative transfer
NASA Technical Reports Server (NTRS)
Prasad, M. K.; Kershaw, D. S.; Beason, J. D.
1986-01-01
Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.
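Schematically, and assuming an isotropic relativistic Maxwellian (Maxwell-Jüttner) electron distribution at temperature T_e, the averaged quantity can be written as below; the paper's reduction to a single integral and its series and rational approximations are not reproduced here.

    \sigma_s(\nu \to \nu',\, \mathbf{\Omega}\cdot\mathbf{\Omega}')
      = \int \mathrm{d}^3 p \; f_{\mathrm{MJ}}(\mathbf{p}; T_e)\,
        \frac{\mathrm{d}\sigma_{\mathrm{KN}}}{\mathrm{d}\nu'\, \mathrm{d}\Omega'}
        \bigl(\nu \to \nu',\, \mathbf{\Omega}\cdot\mathbf{\Omega}';\, \mathbf{p}\bigr),
    \qquad
    f_{\mathrm{MJ}}(\mathbf{p}; T_e) \propto
      \exp\!\left[-\frac{\gamma(p)\, m_e c^2}{k_B T_e}\right].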
Wakefield Simulation of CLIC PETS Structure Using Parallel 3D Finite Element Time-Domain Solver T3P
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, A.; Kabel, A.; Lee, L.
In recent years, SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic time-domain code T3P. Higher-order Finite Element methods on conformal unstructured meshes and massively parallel processing allow unprecedented simulation accuracy for wakefield computations and simulations of transient effects in realistic accelerator structures. Applications include simulation of wakefield damping in the Compact Linear Collider (CLIC) power extraction and transfer structure (PETS).
HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.; Lessor, D.L.
1987-09-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
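A much-simplified sketch of the final stage of such a method (case calculations followed by statistical processing) is given below; the run_case stand-in, tolerance, and trial range are assumptions for illustration and do not represent the KORSAR correlation or the authors' selection criteria.

    import numpy as np

    def run_case(parameter):
        """Placeholder for a thermal-hydraulic case calculation with the code."""
        return 1.0 + 0.8 * parameter      # e.g. a predicted post-burnout temperature ratio

    def estimate_uncertainty(measurements, trial_values, tolerance=0.05):
        # Keep parameter values whose predictions match every experiment within tolerance.
        accepted = np.array([p for p in trial_values
                             if all(abs(run_case(p) - m) / m < tolerance for m in measurements)])
        # Report the empirical range and a Gaussian fit, as the abstract suggests.
        return accepted.min(), accepted.max(), accepted.mean(), accepted.std()

    lo, hi, mu, sigma = estimate_uncertainty(
        measurements=[1.35, 1.42, 1.38],
        trial_values=np.linspace(0.0, 1.0, 201))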
NASA Astrophysics Data System (ADS)
Urquiza, Eugenio
This work presents a comprehensive thermal hydraulic analysis of a compact heat exchanger using offset strip fins. The thermal hydraulics analysis in this work is followed by a finite element analysis (FEA) to predict the mechanical stresses experienced by an intermediate heat exchanger (IHX) during steady-state operation and selected flow transients. In particular, the scenario analyzed involves a gas-to-liquid IHX operating between high pressure helium and liquid or molten salt. In order to estimate the stresses in compact heat exchangers a comprehensive thermal and hydraulic analysis is needed. Compact heat exchangers require very small flow channels and fins to achieve high heat transfer rates and thermal effectiveness. However, studying such small features computationally contributes little to the understanding of component level phenomena and requires prohibitive computational effort using computational fluid dynamics (CFD). To address this issue, the analysis developed here uses an effective porous media (EPM) approach; this greatly reduces the computation time and produces results with the appropriate resolution [1]. This EPM fluid dynamics and heat transfer computational code has been named the Compact Heat Exchanger Explicit Thermal and Hydraulics (CHEETAH) code. CHEETAH solves for the two-dimensional steady-state and transient temperature and flow distributions in the IHX including the complicating effects of temperature-dependent fluid thermo-physical properties. Temperature- and pressure-dependent fluid properties are evaluated by CHEETAH and the thermal effectiveness of the IHX is also calculated. Furthermore, the temperature distribution can then be imported into a finite element analysis (FEA) code for mechanical stress analysis using the EPM methods developed earlier by the University of California, Berkeley, for global and local stress analysis [2]. These simulation tools will also allow the heat exchanger design to be improved through an iterative design process which will lead to a design with a reduced pressure drop, increased thermal effectiveness, and improved mechanical performance as it relates to creep deformation and transient thermal stresses.
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2001-01-01
The purpose of this report was to analyze the heat-transfer problem posed by the determination of spacecraft temperatures and to incorporate the theoretically derived relationships in the computational code TSCALC. The basis for the code was a theoretical analysis of the thermal radiative equilibrium in space, particularly in the Solar System. Beginning with the solar luminosity, the code takes into account these key variables: (1) the spacecraft-to-Sun distance expressed in astronomical units (AU), where 1 AU represents the average Sun-to-Earth distance of 149.6 million km; (2) the angle (arc degrees) at which solar radiation is incident upon a spacecraft surface (ILUMANG); (3) the spacecraft surface temperature (a radiator or photovoltaic array) in kelvin, the surface absorptivity-to-emissivity ratio alpha/epsilon with respect to the solar radiation and (alpha/epsilon)_2 with respect to planetary radiation; and (4) the surface view factor to space F. Outputs from the code have been used to determine environmental temperatures in various Earth orbits. The code was also utilized as a subprogram in the design of power system radiators for deep-space probes.
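In the spirit of the variables listed above, a minimal radiative-equilibrium estimate (solar term only, no planetary contribution, cosine incidence assumed for ILUMANG) might look like the sketch below; the constants, names, and example values are assumptions, and this is not the TSCALC source.

    import math

    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
    S0_1AU = 1361.0       # assumed solar flux at 1 AU, W m^-2

    def equilibrium_temperature(au, illum_angle_deg, alpha_over_eps, view_factor):
        """Surface temperature for absorbed solar flux balanced by emission to space."""
        absorbed = alpha_over_eps * (S0_1AU / au**2) * math.cos(math.radians(illum_angle_deg))
        return (max(absorbed, 0.0) / (SIGMA * view_factor)) ** 0.25

    # Example: a radiator near Earth, sunlight 30 degrees off normal, full view to space.
    t_kelvin = equilibrium_temperature(au=1.0, illum_angle_deg=30.0,
                                       alpha_over_eps=0.3, view_factor=1.0)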
DOE Office of Scientific and Technical Information (OSTI.GOV)
MAGEE,GLEN I.
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
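One representative optimization of the kind alluded to above is replacing GF(2^8) multiplications, the inner loop of Reed-Solomon encoding, with log/antilog table lookups; the sketch below assumes the common generator polynomial 0x11d and is only an illustration, not the AURA implementation.

    EXP = [0] * 512
    LOG = [0] * 256
    x = 1
    for i in range(255):
        EXP[i] = x
        LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d            # reduce modulo the assumed generator polynomial
    for i in range(255, 512):      # duplicate the table so gf_mul needs no modulo
        EXP[i] = EXP[i - 255]

    def gf_mul(a, b):
        """Multiply in GF(2^8) with two table lookups instead of a shift-xor loop."""
        if a == 0 or b == 0:
            return 0
        return EXP[LOG[a] + LOG[b]]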
NASA Technical Reports Server (NTRS)
Garg, Vijay K.
2001-01-01
The turbine gas path is a very complex flow field. This is due to a variety of flow and heat transfer phenomena encountered in turbine passages. This manuscript provides an overview of the current work in this field at the NASA Glenn Research Center. Also, based on the author's preference, more emphasis is on the computational work. There is much more experimental work in progress at GRC than that reported here. While much has been achieved, more needs to be done in terms of validating the predictions against experimental data. More experimental data, especially on film cooled and rough turbine blades, are required for code validation. Also, the combined film cooling and internal cooling flow computation for a real blade is yet to be performed. While most computational work to date has assumed steady state conditions, the flow is clearly unsteady due to the presence of wakes. All this points to a long road ahead. However, we are well on course.
Gkigkitzis, Ioannis; Austerlitz, Carlos; Haranas, Ioannis; Campos, Diana
2015-01-01
The aim of this report is to propose a new methodology to treat prostate cancer with macro-rod-shaped gold seeds irradiated with ultrasound and develop a new computational method for temperature and thermal dose control of hyperthermia therapy induced by the proposed procedure. A computer code representation, based on the bio-heat diffusion equation, was developed to calculate the heat deposition and temperature elevation patterns in a gold rod and in the tissue surrounding it as a result of different therapy durations and ultrasound power simulations. The numerical results computed provide quantitative information on the interaction between high-energy ultrasound, gold seeds and biological tissues and can replicate the pattern observed in experimental studies. The effect of differences in shapes and sizes of gold rod targets irradiated with ultrasound is calculated and the heat enhancement and the bio-heat transfer in tissue are analyzed.
Viscoelastic Finite Difference Modeling Using Graphics Processing Units
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.
2014-12-01
Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of the GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible software of Bohlen (2002) in 2D provided under the GNU General Public License (GPL). This implementation is based on a second-order centred-difference scheme to approximate time differences and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size and the slow memory transfers are the limiting factors of our GPU implementation. Those results show the benefits of using GPUs instead of CPUs for time-based finite-difference seismic simulations. The reductions in computation time and in hardware costs are significant and open the door for new approaches in seismic inversion.
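For readers unfamiliar with the scheme, the sketch below shows only the 1D elastic skeleton of a second-order staggered-grid velocity-stress update; the viscoelastic memory variables, higher-order stencils, MPI halo exchanges, and OpenCL kernels of the actual code are all omitted, and the material and source parameters are illustrative.

    import numpy as np

    nx, dx, dt, nt = 2000, 1.0, 2.0e-4, 1000
    rho = np.full(nx, 2000.0)          # density, kg/m^3
    mu  = np.full(nx, 9.0e9)           # shear modulus, Pa
    v   = np.zeros(nx)                 # particle velocity (staggered with stress)
    s   = np.zeros(nx)                 # shear stress

    for it in range(nt):
        s[nx // 2] += np.exp(-((it * dt - 0.02) / 0.005) ** 2)   # source wavelet
        v[1:] += dt / rho[1:] * (s[1:] - s[:-1]) / dx            # update velocities
        s[:-1] += dt * mu[:-1] * (v[1:] - v[:-1]) / dx           # update stresses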
NASA Astrophysics Data System (ADS)
Bird, Robert; Nystrom, David; Albright, Brian
2017-10-01
The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsic-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
Computer Modeling of Direct Metal Laser Sintering
NASA Technical Reports Server (NTRS)
Cross, Matthew
2014-01-01
A computational approach to modeling direct metal laser sintering (DMLS) additive manufacturing process is presented. The primary application of the model is for determining the temperature history of parts fabricated using DMLS to evaluate residual stresses found in finished pieces and to assess manufacturing process strategies to reduce part slumping. The model utilizes MSC SINDA as a heat transfer solver with imbedded FORTRAN computer code to direct laser motion, apply laser heating as a boundary condition, and simulate the addition of metal powder layers during part fabrication. Model results are compared to available data collected during in situ DMLS part manufacture.
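A toy 2D analogue of the modeling approach described above might look like the sketch below: explicit finite-difference conduction with a moving Gaussian laser source applied as a heat input. It is not the SINDA/FORTRAN model, and the grid, material, and laser parameters (as well as the periodic boundaries implied by np.roll) are illustrative assumptions.

    import numpy as np

    nx = ny = 100
    dx = 20e-6                    # grid spacing, m
    alpha = 5e-6                  # thermal diffusivity, m^2/s
    dt = 0.2 * dx**2 / alpha      # stable explicit time step, s
    q0, r_beam, speed = 2.0e6, 60e-6, 0.25   # source strength (K/s), beam radius (m), scan speed (m/s)

    T = np.full((ny, nx), 300.0)  # initial powder-bed temperature, K
    x = np.arange(nx) * dx
    y = np.arange(ny) * dx
    X, Y = np.meshgrid(x, y)

    for step in range(400):
        x_laser = speed * step * dt                       # laser scans along +x
        source = q0 * np.exp(-((X - x_laser)**2 + (Y - y[ny // 2])**2) / r_beam**2)
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
        T += dt * (alpha * lap + source)                  # explicit update with laser heating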
Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing
NASA Technical Reports Server (NTRS)
Navaz, Homayun K.
2002-01-01
Computational Fluid Dynamics (CFD) has considerably evolved in the last decade. There are many computer programs that can perform computations on viscous internal or external flows with chemical reactions. CFD has become a commonly used tool in the design and analysis of gas turbines, ramjet combustors, turbo-machinery, inlet ducts, rocket engines, jet interaction, missile, and ramjet nozzles. One of the problems of interest to NASA has always been the performance prediction for rocket and air-breathing engines. Due to the complexity of flow in these engines it is necessary to resolve the flowfield into a fine mesh to capture quantities like turbulence and heat transfer. However, calculation on a high-resolution grid is associated with a prohibitively increasing computational time that can downgrade the value of the CFD for practical engineering calculations. The Liquid Thrust Chamber Performance (LTCP) code was developed for NASA/MSFC (Marshall Space Flight Center) to perform liquid rocket engine performance calculations. This code is a 2D/axisymmetric full Navier-Stokes (NS) solver with fully coupled finite rate chemistry and Eulerian treatment of liquid fuel and/or oxidizer droplets. One of the advantages of this code has been the resemblance of its input file to the JANNAF (Joint Army Navy NASA Air Force Interagency Propulsion Committee) standard TDK code, and its automatic grid generation for JANNAF defined combustion chamber wall geometry. These options minimize the learning effort for TDK users, and make the code a good candidate for performing engineering calculations. Although the LTCP code was developed for liquid rocket engines, it is a general-purpose code and has been used for solving many engineering problems. However, the single zone formulation of the LTCP has limited the code to be applicable to problems with complex geometry. Furthermore, the computational time becomes prohibitively large for high-resolution problems with chemistry, two-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.
Validation Data and Model Development for Fuel Assembly Response to Seismic Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardet, Philippe; Ricciardi, Guillaume
2016-01-31
Vibrations are inherently present in nuclear reactors, especially in cores and steam generators of pressurized water reactors (PWR). They can have significant effects on local heat transfer and wear and tear in the reactor and often set safety margins. The simulation of these multiphysics phenomena from first principles requires the coupling of several codes, which is one of the most challenging tasks in modern computer simulation. Here an ambitious multiphysics multidisciplinary validation campaign is conducted. It relied on an integrated team of experimentalists and code developers to acquire benchmark and validation data for fluid-structure interaction codes. Data are focused on PWR fuel bundle behavior during seismic transients.
NASA Technical Reports Server (NTRS)
Gray, Carl E., Jr.
1988-01-01
Using the Newtonian method, the equations of motion are developed for the coupled bending-torsion steady-state response of beams rotating at constant angular velocity in a fixed plane. The resulting equations are valid to first-order strain-displacement relationships for a long beam with all other nonlinear terms retained. In addition, the equations are valid for beams with the mass centroidal axis offset (eccentric) from the elastic axis, nonuniform mass and section properties, and variable twist. The solution of these coupled, nonlinear, nonhomogeneous differential equations is obtained by modifying Hunter's linear second-order transfer-matrix solution procedure to solve the nonlinear differential equations and programming the solution for a desktop personal computer. The modified transfer-matrix method was verified by comparing the solution for a rotating beam with a geometric, nonlinear, finite-element computer code solution; and for a simple rotating beam problem, the modified method demonstrated a significant advantage over the finite-element solution in accuracy, ease of solution, and actual computer processing time required to effect a solution.
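For context, the linear transfer-matrix idea that the modified procedure builds on can be sketched as below for a static, unloaded Euler-Bernoulli beam (one common sign convention assumed); the paper's method extends this to the rotating, nonlinear coupled bending-torsion problem, which is not reproduced here.

    import numpy as np

    def field_matrix(L, EI):
        """Transfer matrix of a uniform, unloaded beam segment of length L."""
        return np.array([
            [1.0, L,   L**2 / (2 * EI), L**3 / (6 * EI)],
            [0.0, 1.0, L / EI,          L**2 / (2 * EI)],
            [0.0, 0.0, 1.0,             L],
            [0.0, 0.0, 0.0,             1.0],
        ])

    def propagate(state0, segments):
        """Carry the state vector [deflection, slope, moment, shear] across (L, EI) segments."""
        state = np.asarray(state0, dtype=float)
        for L, EI in segments:
            state = field_matrix(L, EI) @ state
        return state

    tip_state = propagate([0.0, 0.0, 12.0, -3.0], [(0.25, 2.1e5)] * 8)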
RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H
2010-06-01
ITER inductive power operation is modeled and simulated using a system-level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are summarized in this report as well. A major feature of ITER is pulsed operation. The plasma does not burn continuously, but the power is pulsed with large periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history for nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time-dependent power forcing functions, which are used as input in the RELAP5 calculations.
SKIRT: Hybrid parallelization of radiative transfer simulations
NASA Astrophysics Data System (ADS)
Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.
2017-07-01
We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
NASA Technical Reports Server (NTRS)
Arnold, J. O.
1987-01-01
With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.
Seals Code Development Workshop
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)
1996-01-01
The 1995 Seals Workshop industrial code (INDSEAL) release includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and results compared. Current computational and experimental emphasis includes multiple connected cavity flows with goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.
NASA Astrophysics Data System (ADS)
Martin, William G. K.; Hasekamp, Otto P.
2018-01-01
In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. This work paves the way for the application of similar methods to 3D remote sensing problems.
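The optimization loop described above can be sketched as follows, with a cheap linear stand-in playing the role of the forward and adjoint radiative-transfer solves; the Jacobian, measurements, and use of L-BFGS are illustrative assumptions rather than the FSDOM implementation.

    import numpy as np
    from scipy.optimize import minimize

    measurements = np.array([0.8, 1.3, 0.5, 0.9])
    jacobian = np.array([[1.0, 0.2, 0.0],
                         [0.0, 1.1, 0.3],
                         [0.4, 0.0, 0.9],
                         [0.2, 0.5, 0.5]])   # stand-in linear forward model

    def misfit_and_gradient(x):
        residual = jacobian @ x - measurements       # "forward solve"
        gradient = jacobian.T @ residual             # "adjoint solve"
        return 0.5 * residual @ residual, gradient

    x0 = np.zeros(3)                                 # initial extinction/albedo guess
    result = minimize(misfit_and_gradient, x0, jac=True, method="L-BFGS-B")
    retrieved = result.x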
The Initial Atmospheric Transport (IAT) Code: Description and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, Charles W.; Bartel, Timothy James
The Initial Atmospheric Transport (IAT) computer code was developed at Sandia National Laboratories as part of their nuclear launch accident consequences analysis suite of computer codes. The purpose of IAT is to predict the initial puff/plume rise resulting from either a solid rocket propellant or liquid rocket fuel fire. The code generates initial conditions for subsequent atmospheric transport calculations. The Initial Atmospheric Transfer (IAT) code has been compared to two data sets which are appropriate to the design space of space launch accident analyses. The primary model uncertainties are the entrainment coefficients for the extended Taylor model. The Titan 34Dmore » accident (1986) was used to calibrate these entrainment settings for a prototypic liquid propellant accident while the recent Johns Hopkins University Applied Physics Laboratory (JHU/APL, or simply APL) large propellant block tests (2012) were used to calibrate the entrainment settings for prototypic solid propellant accidents. North American Meteorology (NAM )formatted weather data profiles are used by IAT to determine the local buoyancy force balance. The IAT comparisons for the APL solid propellant tests illustrate the sensitivity of the plume elevation to the weather profiles; that is, the weather profile is a dominant factor in determining the plume elevation. The IAT code performed remarkably well and is considered validated for neutral weather conditions.« less
Smart photodetector arrays for error control in page-oriented optical memory
NASA Astrophysics Data System (ADS)
Schaffer, Maureen Elizabeth
1998-12-01
Page-oriented optical memories (POMs) have been proposed to meet high-speed, high-capacity storage requirements for input/output-intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data modulation and error correction coding for the purpose of error control in the POM system. These techniques are adapted, where possible, for 2D data and evaluated as to their suitability for a SPA implementation in terms of BER, code rate, decoder time and pixel complexity. Our analysis shows that differential data modulation combined with relatively simple block codes known as array codes provide a powerful means to achieve the desired data transfer rates while reducing error rates to industry requirements. Finally, we demonstrate the first smart photodetector array designed to perform parallel error correction on an entire page of data and satisfy the sustained data rates of page-oriented optical memories. Our implementation integrates a monolithic PN photodiode array and differential input receiver for optoelectronic signal conversion with a cluster error correction code using 0.35-μm CMOS. This approach provides high sensitivity, low electrical power dissipation, and fast parallel correction of 2 × 2-bit cluster errors in an 8 × 8-bit code block to achieve corrected output data rates scalable to 102 Gbps in the current technology, increasing to 1.88 Tbps in 0.1-μm CMOS.
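A deliberately simplified relative of the array codes mentioned above is sketched below: one parity bit per row and column of an 8 × 8 data block locates and corrects a single flipped bit at the intersection of the failing row and column checks. The actual design corrects 2 × 2-bit clusters, so this sketch only conveys the parallel-decoding idea, not the dissertation's code.

    import numpy as np

    def encode(block):                       # block: 8x8 array of 0/1 data bits
        coded = np.zeros((9, 9), dtype=int)
        coded[:8, :8] = block
        coded[:8, 8] = block.sum(axis=1) % 2   # row parity bits
        coded[8, :8] = block.sum(axis=0) % 2   # column parity bits
        return coded

    def correct_single_error(coded):
        rows = coded[:8, :9].sum(axis=1) % 2   # row parity checks
        cols = coded[:9, :8].sum(axis=0) % 2   # column parity checks
        bad_rows, bad_cols = np.flatnonzero(rows), np.flatnonzero(cols)
        if len(bad_rows) == 1 and len(bad_cols) == 1:
            coded[bad_rows[0], bad_cols[0]] ^= 1   # flip the located bit back
        return coded[:8, :8]

    data = np.random.randint(0, 2, (8, 8))
    coded = encode(data)
    coded[3, 5] ^= 1                           # inject a single-bit error
    assert np.array_equal(correct_single_error(coded), data)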
Crashworthiness: Planes, trains, and automobiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logan, R.W.; Tokarz, F.J.; Whirley, R.G.
The powerful DYNA3D computer code simulates the dynamic effects of stress traveling through structures. It is the most advanced modeling tool available to study crashworthiness problems and to analyze impacts. Now used by some 1000 companies, government research laboratories, and universities in the U.S. and abroad, DYNA3D is also a preeminent example of successful technology transfer. The initial interest in such a code was to simulate the structural response of weapons systems. The need was to model not the explosive or nuclear events themselves but rather the impacts of weapons systems with the ground, tracking the stress waves as they move through the object. This type of computer simulation augmented or, in certain cases, reduced the need for expensive and time-consuming crash testing.
Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha
2012-11-01
Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.
History of one family of atmospheric radiative transfer codes
NASA Astrophysics Data System (ADS)
Anderson, Gail P.; Wang, Jinxue; Hoke, Michael L.; Kneizys, F. X.; Chetwynd, James H., Jr.; Rothman, Laurence S.; Kimball, L. M.; McClatchey, Robert A.; Shettle, Eric P.; Clough, Shepard A.; Gallery, William O.; Abreu, Leonard W.; Selby, John E. A.
1994-12-01
Beginning in the early 1970's, the then Air Force Cambridge Research Laboratory initiated a program to develop computer-based atmospheric radiative transfer algorithms. The first attempts were translations of graphical procedures described in a 1970 report on The Optical Properties of the Atmosphere, based on empirical transmission functions and effective absorption coefficients derived primarily from controlled laboratory transmittance measurements. The fact that spectrally-averaged atmospheric transmittance (T) does not obey the Beer-Lambert Law (T = exp(−σ·η), where σ is a species absorption cross section independent of η, the species column amount along the path) at any but the finest spectral resolution was already well known. Band models to describe this gross behavior were developed in the 1950's and 60's. Thus began LOWTRAN, the Low Resolution Transmittance Code, first released in 1972. This limited initial effort has now progressed to a set of codes and related algorithms (including line-of-sight spectral geometry, direct and scattered radiance and irradiance, non-local thermodynamic equilibrium, etc.) that contain thousands of coding lines, hundreds of subroutines, and improved accuracy, efficiency, and, ultimately, accessibility. This review will include LOWTRAN, HITRAN (atlas of high-resolution molecular spectroscopic data), FASCODE (Fast Atmospheric Signature Code), and MODTRAN (Moderate Resolution Transmittance Code), their permutations, validations, and applications, particularly as related to passive remote sensing and energy deposition.
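A small numerical illustration of the non-Beer-Lambert behaviour noted above: averaging monochromatic transmittances over a band whose absorption coefficient varies strongly yields a band-mean transmittance that no single effective exponential reproduces. The coefficients and absorber amounts below are arbitrary illustrative values, not LOWTRAN band-model parameters.

    import numpy as np

    k = np.concatenate([np.full(90, 0.1), np.full(10, 50.0)])  # absorption coefficients across the band
    amounts = np.array([0.01, 0.1, 1.0, 10.0])                 # absorber column amounts

    band_mean = np.array([np.mean(np.exp(-k * u)) for u in amounts])
    effective_sigma = -np.log(band_mean) / amounts             # would be constant if Beer-Lambert held

    for u, t, s in zip(amounts, band_mean, effective_sigma):
        print(f"u = {u:6.2f}  T_band = {t:.4f}  effective sigma = {s:.3f}")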
Creating and Testing Simulation Software
NASA Technical Reports Server (NTRS)
Heinich, Christina M.
2013-01-01
The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provides a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
Antenna pattern control using impedance surfaces
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Liu, Kefeng
1992-01-01
During this research period, we have effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserved the accuracy of the numerical computations while giving a much better turn-around time than the CRAY supercomputer. This task relieved us of the heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After a further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.
Optimal high- and low-thrust geocentric transfer
NASA Technical Reports Server (NTRS)
Sackett, L. L.; Edelbaum, T. N.
1974-01-01
A computer code which rapidly calculates time optimal combined high- and low-thrust transfers between two geocentric orbits in the presence of a strong gravitational field has been developed as a mission analysis tool. The low-thrust portion of the transfer can be between any two arbitrary ellipses. There is an option for including the effect of two initial high-thrust impulses which would raise the spacecraft from a low, initially circular orbit to the initial orbit for the low-thrust portion of the transfer. In addition, the effect of a single final impulse after the low-thrust portion of the transfer may be included. The total Delta V for the initial two impulses must be specified as well as the Delta V for the final impulse. Either solar electric or nuclear electric propulsion can be assumed for the low-thrust phase of the transfer.
Simultaneous Heat and Mass Transfer Model for Convective Drying of Building Material
NASA Astrophysics Data System (ADS)
Upadhyay, Ashwani; Chandramohan, V. P.
2018-04-01
A mathematical model of simultaneous heat and moisture transfer is developed for convective drying of building material. A rectangular brick is considered as the sample object. A finite-difference method with a semi-implicit scheme is used to solve the transient governing heat and mass transfer equations. Convective boundary conditions are used, as the product is exposed to hot air. The heat and mass transfer equations are coupled through the diffusion coefficient, which is assumed to be a function of the product temperature. A set of algebraic equations is generated through space and time discretization. The discretized algebraic equations are solved iteratively by the Gauss-Seidel method. Grid- and time-independence studies are performed to find the optimum number of nodal points and time steps, respectively. A MATLAB computer code is developed to solve the heat and mass transfer equations simultaneously. Transient heat and mass transfer simulations are performed to find the temperature and moisture distribution inside the brick.
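A minimal sketch of the kind of iteration described above is given below, written in Python rather than the authors' MATLAB and using arbitrary property values: one implicit time step of 1-D diffusion is solved by Gauss-Seidel sweeps until the nodal values stop changing.

```python
import numpy as np

# Illustrative 1-D transient diffusion solve: implicit time step, Gauss-Seidel iteration.
# Grid, time step, and diffusivity are arbitrary and not taken from the cited drying model.
nx, dx, dt, alpha = 21, 0.005, 60.0, 5.0e-7    # nodes, spacing [m], time step [s], diffusivity [m^2/s]
T = np.full(nx, 30.0)                          # initial temperature [deg C]
T[0] = T[-1] = 60.0                            # boundary nodes held at the hot-air temperature
r = alpha * dt / dx**2

for step in range(200):                        # march in time
    T_old = T.copy()
    for sweep in range(500):                   # Gauss-Seidel sweeps on the implicit system
        max_change = 0.0
        for i in range(1, nx - 1):
            new = (T_old[i] + r * (T[i - 1] + T[i + 1])) / (1.0 + 2.0 * r)
            max_change = max(max_change, abs(new - T[i]))
            T[i] = new
        if max_change < 1e-8:                  # converged for this time step
            break

print(T.round(2))                              # temperature profile after 200 time steps
```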
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Starnes, James H., Jr.; Newman, James C., Jr.
1995-01-01
NASA is developing a 'tool box' that includes a number of advanced structural analysis computer codes which, taken together, represent the comprehensive fracture mechanics capability required to predict the onset of widespread fatigue damage. These structural analysis tools have complementary and specialized capabilities ranging from a finite-element-based stress-analysis code for two- and three-dimensional built-up structures with cracks to a fatigue and fracture analysis code that uses stress-intensity factors and material-property data found in 'look-up' tables or from equations. NASA is conducting critical experiments necessary to verify the predictive capabilities of the codes, and these tests represent a first step in the technology-validation and industry-acceptance processes. NASA has established cooperative programs with aircraft manufacturers to facilitate the comprehensive transfer of this technology by making these advanced structural analysis codes available to industry.
Testing the Kerr Black Hole Hypothesis Using X-Ray Reflection Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bambi, Cosimo; Nampalliwar, Sourabh; Cárdenas-Avendaño, Alejandro
We present the first X-ray reflection model for testing the assumption that the metric of astrophysical black holes is described by the Kerr solution. We employ the formalism of the transfer function proposed by Cunningham. The calculations of the reflection spectrum of a thin accretion disk are split into two parts: the calculation of the transfer function and the calculation of the local spectrum at any emission point in the disk. The transfer function only depends on the background metric and takes into account all the relativistic effects (gravitational redshift, Doppler boosting, and light bending). Our code computes the transfer function for a spacetime described by the Johannsen metric and can easily be extended to any stationary, axisymmetric, and asymptotically flat spacetime. Transfer functions and single line shapes in the Kerr metric are compared to those calculated from existing codes to check that we reach the necessary accuracy. We also simulate some observations with NuSTAR and LAD/eXTP and fit the data with our new model to show the potential capabilities of current and future observations to constrain possible deviations from the Kerr metric.
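For orientation, the disk integration via Cunningham's transfer function has the schematic form below (notation and normalization are illustrative and may differ from the conventions used in the authors' code):

```latex
F_{o}(\nu_{o}) \;=\; \frac{1}{D^{2}}
  \int_{r_{\rm in}}^{r_{\rm out}}\!\!\int_{0}^{1}
  \pi r_{e}\,\frac{g^{2}}{\sqrt{g^{*}\,(1-g^{*})}}\,
  f(g^{*},r_{e},\iota)\;
  I_{e}(\nu_{e},r_{e})\; dg^{*}\, dr_{e},
  \qquad g \equiv \frac{\nu_{o}}{\nu_{e}},
```

where D is the distance to the source, r_e the emission radius, ι the inclination, g* the relative redshift parameter, I_e the local reflection spectrum, and f the transfer function, which carries all the dependence on the background metric.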
NASA Astrophysics Data System (ADS)
Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.
2006-12-01
Sandia National Laboratories (Sandia), a U.S. Department of Energy National Laboratory, has over 30 years of experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling that facilitates the technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission and codes developed and maintained by the United States Department of Energy are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software, which meets the rigors of both domestic regulatory requirements and international peer review. Therefore, re-vitalization of deterministic legacy codes, as well as an adaptation of contemporary deterministic codes, provides a creditable and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in GoldSim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility, used in a cooperative technology transfer project between Sandia National Laboratories and Taiwan's Institute of Nuclear Energy Research (INER) for the preliminary assessment of several candidate low-level waste repository sites. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1997-01-01
A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.
Theory and Computation of Optimal Low- and Medium- Thrust Orbit Transfers
NASA Technical Reports Server (NTRS)
Goodson, Troy D.; Chuang, Jason C. H.; Ledsinger, Laura A.
1996-01-01
This report presents new theoretical results which lead to new algorithms for the computation of fuel-optimal multiple-burn orbit transfers of low and medium thrust. Theoretical results introduced herein show how to add burns to an optimal trajectory and show that the traditional set of necessary conditions may be replaced with a much simpler set of equations. Numerical results are presented to demonstrate the utility of the theoretical results and the new algorithms. Two indirect methods from the literature are shown to be effective for the optimal orbit transfer problem with relatively small numbers of burns. These methods are the Minimizing Boundary Condition Method (MBCM) and BOUNDSCO. Both of these methods make use of the first-order necessary conditions exactly as derived by optimal control theory. Perturbations due to Earth's oblateness and atmospheric drag are considered. These perturbations are of greatest interest for transfers that take place between low Earth orbit altitudes and geosynchronous orbit altitudes. Example extremal solutions including these effects and computed by the aforementioned methods are presented. An investigation is also made into a suboptimal multiple-burn guidance scheme. The FORTRAN code developed for this study has been collected together in a package named ORBPACK. ORBPACK's user manual is provided as an appendix to this report.
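For context, the first-order necessary conditions referred to above take, in the familiar primer-vector form, roughly the shape sketched below; this is a generic statement from optimal control theory (with the mass costate suppressed for brevity), not the simplified replacement set derived in the report.

```latex
H \;=\; \lambda_{r}^{\mathsf T} v
   \;+\; \lambda_{v}^{\mathsf T}\!\Bigl(g(r) + \tfrac{T}{m}\,\hat{u}\Bigr),
\qquad
\dot{\lambda}_{r} \;=\; -\Bigl(\tfrac{\partial g}{\partial r}\Bigr)^{\!\mathsf T}\lambda_{v},
\qquad
\dot{\lambda}_{v} \;=\; -\lambda_{r},
\qquad
\hat{u}^{*} \;=\; -\frac{\lambda_{v}}{\lVert\lambda_{v}\rVert}.
```

Here g(r) is the gravitational acceleration, T/m the thrust acceleration, and the optimal thrust direction û* opposes the velocity costate λ_v (i.e., it lies along the primer vector).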
HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCann, R.A.; Lowery, P.S.
1987-10-01
HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.
Nonequilibrium air radiation (Nequair) program: User's manual
NASA Technical Reports Server (NTRS)
Park, C.
1985-01-01
A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.
Transitional flow in thin tubes for space station freedom radiator
NASA Technical Reports Server (NTRS)
Loney, Patrick; Ibrahim, Mounir
1995-01-01
A two dimensional finite volume method is used to predict the film coefficients in the transitional flow region (laminar or turbulent) for the radiator panel tubes. The code used to perform this analysis is CAST (Computer Aided Simulation of Turbulent Flows). The information gathered from this code is then used to augment a Sinda85 model that predicts overall performance of the radiator. A final comparison is drawn between the results generated with a Sinda85 model using the Sinda85 provided transition region heat transfer correlations and the Sinda85 model using the CAST generated data.
Development of a CRAY 1 version of the SINDA program. [thermo-structural analyzer program
NASA Technical Reports Server (NTRS)
Juba, S. M.; Fogerson, P. E.
1982-01-01
The SINDA thermal analyzer program was transferred from the UNIVAC 1110 computer to a CYBER and then to a CRAY 1. Significant changes to the code of the program were required in order to execute efficiently on the CYBER and CRAY. The program was tested on the CRAY using a thermal math model of the shuttle which was too large to run on either the UNIVAC or CYBER. An effort was then begun to further modify the code of SINDA in order to make effective use of the vector capabilities of the CRAY.
Application of numerical methods to heat transfer and thermal stress analysis of aerospace vehicles
NASA Technical Reports Server (NTRS)
Wieting, A. R.
1979-01-01
The paper describes a thermal-structural design analysis study of a fuel-injection strut for a hydrogen-cooled scramjet engine for a supersonic transport, utilizing finite-element methodology. Applications of finite-element and finite-difference codes to the thermal-structural design-analysis of space transports and structures are discussed. The interaction between the thermal and structural analyses has led to development of finite-element thermal methodology to improve the integration between these two disciplines. The integrated thermal-structural analysis capability developed within the framework of a computer code is outlined.
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphic processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over its original CPU code.
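One of the CUDA techniques named above, the parallel prefix sum, is easy to show in miniature. The sketch below is a NumPy rendering of the Hillis-Steele scan data flow (illustrative only; it is not the paper's CUDA kernel, where one thread per element performs each pass within a block):

```python
import numpy as np

def hillis_steele_scan(a):
    """Inclusive prefix sum via the Hillis-Steele scheme.

    Each pass doubles the reach of the partial sums; log2(n) passes suffice.
    On a GPU every element would be updated by its own thread per pass.
    """
    a = np.asarray(a, dtype=np.int64).copy()
    offset = 1
    while offset < len(a):
        shifted = np.concatenate([np.zeros(offset, dtype=a.dtype), a[:-offset]])
        a = a + shifted              # every element adds the value 'offset' places back
        offset *= 2
    return a

x = np.array([3, 1, 4, 1, 5, 9, 2, 6])
print(hillis_steele_scan(x))         # -> [ 3  4  8  9 14 23 25 31]
assert np.array_equal(hillis_steele_scan(x), np.cumsum(x))
```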
NASA Technical Reports Server (NTRS)
Ameri, Ali A.; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur
2014-01-01
Computational fluid dynamics (CFD) analysis using Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations that are related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar/turbulent transition, turbulent mixing due to mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30 deg holes with a pitch of 3.0 diameters emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0 are shown. Flow features under those conditions are also described.
Glenn-HT/BEM Conjugate Heat Transfer Solver for Large-scale Turbomachinery Models
NASA Technical Reports Server (NTRS)
Divo, E.; Steinthorsson, E.; Rodriquez, F.; Kassab, A. J.; Kapat, J. S.; Heidmann, James D. (Technical Monitor)
2003-01-01
A coupled Boundary Element/Finite Volume Method temperature-forward/flux-back algorithm is developed for conjugate heat transfer (CHT) applications. A loosely coupled strategy is adopted with each field solution providing boundary conditions for the other in an iteration seeking continuity of temperature and heat flux at the fluid-solid interface. The NASA Glenn Navier-Stokes code Glenn-HT is coupled to a 3-D BEM steady state heat conduction code developed at the University of Central Florida. Results from CHT simulation of a 3-D film-cooled blade section are presented and compared with those computed by a two-temperature approach. Also presented are current developments of an iterative domain decomposition strategy accommodating large numbers of unknowns in the BEM. The blade is artificially sub-sectioned in the span-wise direction, 3-D BEM solutions are obtained in the subdomains, and interface temperatures are averaged symmetrically when the flux is updated while the fluxes are averaged anti-symmetrically to maintain continuity of heat flux when the temperatures are updated. An initial guess for interface temperatures uses a physically-based 1-D conduction argument to provide an effective starting point and significantly reduce iteration. 2-D and 3-D results show the process converges efficiently and offers substantial computational and storage savings. Future developments include a parallel multi-grid implementation of the approach under MPI for computation on PC clusters.
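The temperature-forward/flux-back exchange can be illustrated on a single 1-D interface (a minimal sketch with made-up property values, not the Glenn-HT/BEM coupling itself): the "fluid" side turns the current wall temperature into a heat flux, the "solid" side turns that flux back into a wall temperature, and the loop is under-relaxed until both sides agree.

```python
# Loosely coupled conjugate heat transfer iteration on a 1-D slab (illustrative values).
h, T_inf = 100.0, 1200.0            # fluid-side heat transfer coefficient [W/m^2-K], gas temperature [K]
k, L, T_base = 20.0, 0.01, 600.0    # solid conductivity [W/m-K], thickness [m], fixed back-face temperature [K]
omega = 0.8                         # under-relaxation factor on the interface temperature

T_s = T_base                        # initial guess for the fluid-solid interface temperature
for it in range(100):
    q = h * (T_inf - T_s)           # "fluid" solve: wall heat flux from the current wall temperature
    T_s_new = T_base + q * L / k    # "solid" solve: conduction with that flux imposed at the interface
    if abs(T_s_new - T_s) < 1e-8:   # continuity of temperature and flux reached
        break
    T_s = (1 - omega) * T_s + omega * T_s_new

q_exact = (T_inf - T_base) / (1.0 / h + L / k)   # analytic series-resistance flux for comparison
print(it, round(T_s, 2), round(q, 1), round(q_exact, 1))
```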
Aerodynamic and heat transfer analysis of the low aspect ratio turbine using a 3D Navier-Stokes code
NASA Astrophysics Data System (ADS)
Choi, D.; Knight, C. J.
1991-06-01
The single-stage, high-pressure ratio Garrett Low Aspect Ratio Turbine (LART) test data obtained in a shock tunnel are employed as a basis for evaluating a new three-dimensional Navier-Stokes code based on the O-H grid system. It uses Coakley's two-equation turbulence modeling with viscous sublayer resolution. For the nozzle guide vanes, calculations were made based on two grid zones: an O-grid zone wrapping around the airfoil and an H-grid zone outside of the O-grid zone, including the regions upstream of the leading edge and downstream of the trailing edge. For the rotor blade row, a third O-grid zone was added for the tip-gap region leakage flow. The computational results compare well with experiment. These comparisons include heat transfer distributions on the airfoils and end-walls. The leakage flow through the tip-gap clearance is well resolved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikotekar, Anand A; Vallee, Geoffroy R; Naughton III, Thomas J
2008-01-01
The topic of system-level virtualization has recently begun to receive interest for high performance computing (HPC). This is in part due to the isolation and encapsulation offered by the virtual machine. These traits enable applications to customize their environments and maintain consistent software configurations in their virtual domains. Additionally, there are mechanisms that can be used for fault tolerance like live virtual machine migration. Given these attractive benefits to virtualization, a fundamental question arises: how does this affect my scientific application? We use this as the premise for our paper and observe a real-world scientific code running on a Xen virtual machine. We studied the effects of running a radiative transfer simulation, Hydrolight, on a virtual machine. We discuss our methodology and report observations regarding the usage of virtualization with this application.
Hydrodynamic models of a cepheid atmosphere. Ph.D. Thesis - Maryland Univ., College Park
NASA Technical Reports Server (NTRS)
Karp, A. H.
1974-01-01
A method for including the solution of the transfer equation in a standard Henyey type hydrodynamic code was developed. This modified Henyey method was used in an implicit hydrodynamic code to compute deep envelope models of a classical Cepheid with a period of 12 days, including radiative transfer effects in the optically thin zones. It was found that the velocity gradients in the atmosphere are not responsible for the large microturbulent velocities observed in Cepheids but may be responsible for the occurrence of supersonic microturbulence. It was found that the splitting of the cores of the strong lines is due to shock induced temperature inversions in the line forming region. The adopted light, color, and velocity curves were used to study three methods frequently used to determine the mean radii of Cepheids. It is concluded that an accuracy of 10% is possible only if high quality observations are used.
Gravitational tree-code on graphics processing units: implementation in CUDA
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon
2010-05-01
We present a new very fast tree-code which runs on massively parallel Graphical Processing Units (GPU) with NVIDIA CUDA architecture. The tree-construction and calculation of multipole moments is carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way we achieve a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. The code has a convenient user interface and is freely available for use. http://castle.strw.leidenuniv.nl/software/octgrav.html
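The division of labor rests on the usual Barnes-Hut multipole acceptance test performed during the tree walk; a minimal sketch of that criterion is shown below (illustrative only, and not necessarily the exact criterion used in the released code).

```python
import numpy as np

def accept_cell(cell_size, cell_com, point, theta=0.5):
    """Barnes-Hut multipole acceptance test used during a tree walk.

    A cell's multipole expansion is used when it subtends a small enough angle,
    size / distance < theta; otherwise the walk descends into its children.
    """
    distance = np.linalg.norm(point - cell_com)
    return cell_size / distance < theta

# A cell of size 1 whose centre of mass lies 10 length units away is accepted at theta = 0.5.
print(accept_cell(1.0, np.array([10.0, 0.0, 0.0]), np.zeros(3)))   # True
```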
NASA Technical Reports Server (NTRS)
Rarig, P. L.
1980-01-01
A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions are encouraging.
A practical VEP-based brain-computer interface.
Wang, Yijun; Wang, Ruiping; Gao, Xiaorong; Hong, Bo; Gao, Shangkai
2006-06-01
This paper introduces the development of a practical brain-computer interface at Tsinghua University. The system uses frequency-coded steady-state visual evoked potentials to determine the gaze direction of the user. To ensure more universal applicability of the system, approaches for reducing the effect of user variation on system performance have been proposed. The information transfer rate (ITR) has been evaluated both in the laboratory and at the Rehabilitation Center of China. The system has been shown to be applicable to more than 90% of people with a high ITR in living environments.
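The ITR quoted for such systems is conventionally computed with the Wolpaw formula, reproduced below as the standard definition (the paper's exact evaluation settings are not restated here):

```latex
\mathrm{ITR} \;=\; \frac{60}{T}\left[\log_{2}N \;+\; P\log_{2}P \;+\; (1-P)\log_{2}\!\frac{1-P}{N-1}\right]\ \text{bits/min},
```

where N is the number of selectable targets, P the classification accuracy, and T the time in seconds needed for one selection.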
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, C.I.; Sha, W.T.; Kasza, K.E.
As a result of the uncertainties in the understanding of the influence of thermal-buoyancy effects on the flow and heat transfer in Liquid Metal Fast Breeder Reactor heat exchangers and steam generators under off-normal operating conditions, an extensive experimental program is being conducted at Argonne National Laboratory to eliminate these uncertainties. Concurrently, a parallel analytical effort is also being pursued to develop a three-dimensional transient computer code (COMMIX-IHX) to study and predict heat exchanger performance under mixed, forced, and free convection conditions. This paper presents computational results from a heat exchanger simulation and compares them with the results from a test case exhibiting strong thermal buoyancy effects. Favorable agreement between experiment and code prediction is obtained.
Continuum Absorption Coefficient of Atoms and Ions
NASA Technical Reports Server (NTRS)
Armaly, B. F.
1979-01-01
The rate of heat transfer to the heat shield of a Jupiter probe has been estimated to be one order of magnitude higher than any previously experienced in an outer space exploration program. More than one-third of this heat load is due to an emission of continuum radiation from atoms and ions. The existing computer code for calculating the continuum contribution to the total load utilizes a modified version of Biberman's approximate method. The continuum radiation absorption cross sections of a C - H - O - N ablation system were examined in detail. The present computer code was evaluated and updated by being compared with available exact and approximate calculations and correlations of experimental data. A detailed calculation procedure, which can be applied to other atomic species, is presented. The approximate correlations can be made to agree with the available exact and experimental data.
Parallelising a molecular dynamics algorithm on a multi-processor workstation
NASA Astrophysics Data System (ADS)
Müller-Plathe, Florian
1990-12-01
The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transferred in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
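The neighbour-list idea itself is compact enough to sketch (illustrative Python rather than the paper's Fortran 77, brute-force construction without periodic boundaries): all pairs within the cutoff plus a skin distance are listed once, and the list is reused by the nonbonded force loop for several steps before being rebuilt.

```python
import numpy as np

def build_neighbour_list(pos, r_cut, skin=0.3):
    """Build a Verlet neighbour list: all pairs closer than r_cut + skin.

    The skin lets the list be reused for several time steps before rebuilding.
    O(N^2) construction for clarity; production codes use link cells.
    """
    r_list = r_cut + skin
    pairs = []
    for i in range(len(pos) - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        for j in np.nonzero(d < r_list)[0]:
            pairs.append((i, i + 1 + j))
    return pairs

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 5.0, size=(200, 3))          # 200 particles in a 5 x 5 x 5 box
pairs = build_neighbour_list(pos, r_cut=1.0)
print(len(pairs), "candidate pairs handed to the nonbonded force loop")
```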
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components, representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine the deterministic models for composite load dynamic, acoustic, high-pressure and high rotational speed, etc., load simulation using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods with and without strategically selected experimental data. The entire simulation process is included in a CLS computer code. Applications of the computer code to various components in conjunction with the PSAM (Probabilistic Structural Analysis Method) to perform probabilistic load evaluation and life prediction evaluations are also described to illustrate the effectiveness of the coupled model approach.
EOSPEC: a complementary toolbox for MODTRAN calculations
NASA Astrophysics Data System (ADS)
Dion, Denis
2016-09-01
For more than a decade, Defence Research and Development Canada (DRDC) has been developing a library of computer models for the calculation of atmospheric effects on EO-IR sensor performance. The library, called EOSPEC-LIB (EO-IR Sensor PErformance Computation LIBrary), has been designed as a complement to MODTRAN, the radiative transfer code developed by the Air Force Research Laboratory and Spectral Science Inc. in the USA. The library comprises modules for the definition of the atmospheric conditions, including aerosols, and provides modules for the calculation of turbulence and fine refraction effects. SMART (Suite for Multi-resolution Atmospheric Radiative Transfer), a key component of EOSPEC, allows one to perform fast computations of transmittances and radiances using MODTRAN through a wide-band correlated-k computational approach. In its most recent version, EOSPEC includes a MODTRAN toolbox whose functions help generate, in a format compatible with MODTRAN 5 and 6, atmospheric and aerosol profiles, user-defined refracted optical paths, and inputs for configuring the MODTRAN sea radiance (BRDF) model. The paper gives an overall description of the EOSPEC features and capacities. EOSPEC provides augmented capabilities for computations in the lower atmosphere, and for computations in maritime environments.
Nonlinear heat transfer and structural analyses of SSME turbine blades
NASA Technical Reports Server (NTRS)
Abdul-Aziz, A.; Kaufman, A.
1987-01-01
Three-dimensional nonlinear finite-element heat transfer and structural analyses were performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine (SSME). Directionally solidified (DS) MAR-M 246 material properties were considered for the analyses. Analytical conditions were based on a typical test stand engine cycle. Blade temperature and stress-strain histories were calculated using MARC finite-element computer code. The study was undertaken to assess the structural response of an SSME turbine blade and to gain greater understanding of blade damage mechanisms, convective cooling effects, and the thermal-mechanical effects.
MAVIS III -- A Windows 95/NT Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardwick, M.F.
1997-12-01
MAVIS (Modeling and Analysis of Explosive Valve Interactions) is a computer program that simulates the operation of explosively actuated valves. MAVIS was originally written in Fortran in the mid-1970's and was primarily run on the Sandia VAX computers in use through the early 1990's. During the mid-to-late 1980's MAVIS was upgraded to include the effects of plastic deformation, and it became MAVIS II. When the VAX computers were retired, the Gas Transfer System (GTS) Development Department ported the code to the Macintosh and PC platforms, where it ran as a simple console application. All graphical output was lost during these ports. GTS code developers recently completed an upgrade that provides a Windows 95/NT MAVIS application and restores all of the original graphical output. This upgrade is called MAVIS III version 1.0. This report serves both as a user's manual for MAVIS III v 1.0 and as a general software development reference.
Nonlinear Transient Problems Using Structure Compatible Heat Transfer Code
NASA Technical Reports Server (NTRS)
Hou, Gene
2000-01-01
The report documents the recent effort to enhance a transient linear heat transfer code so as to solve nonlinear problems. The linear heat transfer code was originally developed by Dr. Kim Bey of NASA Langley and called the Structure-Compatible Heat Transfer (SCHT) code. The report includes four parts. The first part outlines the formulation of the heat transfer problem of concern. The second and the third parts give detailed procedures to construct the nonlinear finite element equations and the required Jacobian matrices for the nonlinear iterative method, the Newton-Raphson method. The final part summarizes the results of the numerical experiments on the newly enhanced SCHT code.
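The Newton-Raphson loop the report builds up can be sketched on a small model problem (illustrative Python with a made-up temperature-dependent conductivity and a numerical Jacobian; the SCHT code assembles its Jacobian from the finite element formulation described in the report):

```python
import numpy as np

# Newton-Raphson on a nonlinear 1-D steady conduction problem, k(T) = k0*(1 + beta*T).
k0, beta = 1.0, 0.005
T_left, T_right, n_int = 100.0, 500.0, 9             # boundary temperatures and interior unknowns

def conductivity(T):
    return k0 * (1.0 + beta * T)

def residual(T_int):
    T = np.concatenate(([T_left], T_int, [T_right]))
    R = np.empty(n_int)
    for i in range(1, n_int + 1):
        k_e = conductivity(0.5 * (T[i] + T[i + 1]))   # east and west face conductivities
        k_w = conductivity(0.5 * (T[i] + T[i - 1]))
        R[i - 1] = k_e * (T[i + 1] - T[i]) - k_w * (T[i] - T[i - 1])
    return R

T_int = np.linspace(T_left, T_right, n_int + 2)[1:-1]  # initial guess: linear profile
for it in range(20):
    R = residual(T_int)
    if np.max(np.abs(R)) < 1e-10:                      # converged
        break
    J = np.empty((n_int, n_int))                       # Jacobian by forward differences
    for j in range(n_int):
        Tp = T_int.copy()
        Tp[j] += 1e-6
        J[:, j] = (residual(Tp) - R) / 1e-6
    T_int = T_int + np.linalg.solve(J, -R)             # Newton update

print(it, T_int.round(2))
```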
The feasibility of QR-code prescription in Taiwan.
Lin, C-H; Tsai, F-Y; Tsai, W-L; Wen, H-W; Hu, M-L
2012-12-01
An ideal health care service is a service system that focuses on patients. Patients in Taiwan have the freedom to fill their prescriptions at any pharmacy contracted with National Health Insurance. Each of these pharmacies uses its own computer system; so far, there are at least ten different systems on the market in Taiwan. Transmitting the prescription information from the hospital to the pharmacy accurately and efficiently therefore presents a great challenge. This study used a two-dimensional QR-code to capture the patient's identification and prescription information from the hospitals, and a webcam to read the QR-code and transfer all data to the pharmacy computer system. Two hospitals and 85 community pharmacies participated in the study. During the trial, all participating pharmacies praised the accurate transmission of the prescription information. The contents of QR-code prescriptions from the Taipei area were picked up efficiently and accurately in pharmacies in the Taichung area (central Taiwan) without software system or geographic limitations. The QR-code device received a patent (No. M376844, March 2010) from the Intellectual Property Office, Ministry of Economic Affairs, China. Our trial has proven that QR-code prescriptions can provide community pharmacists an efficient, accurate and inexpensive means to digitalize prescription contents. Consequently, pharmacists can offer a better quality of pharmacy service to patients.
Fundamental modeling of pulverized coal and coal-water slurry combustion in a gas turbine combustor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatwani, A.; Turan, A.; Hals, F.
1988-01-01
This work describes the essential features of a coal combustion model which is incorporated into a three-dimensional, steady-state, two-phase, turbulent, reactive flow code. The code is a modified and advanced version of the INTERN code originally developed at Imperial College, which has gone through many stages of development and validation. Swithenbank et al. have reported spray combustion model results for an experimental can combustor. The code has since been modified and made public under a US Army program, and a number of code modifications and improvements have been made at ARL. The earlier version of the code was written for a small CDC machine and relied on frequent disk/memory transfers and overlay features to carry out the computations, resulting in a loss of computational speed. These limitations have now been removed. For spray applications, the fuel droplet vaporization generates gaseous fuel of uniform composition; hence the earlier formulation relied upon the conserved scalar approximation to reduce the number of species equations to be solved. In applications related to coal fuel, coal pyrolysis leads to the formation of at least two different gaseous fuels and a solid fuel of different composition. The authors have therefore removed the conserved scalar formulation for the sake of generality and easy adaptability to complex fuel situations.
Self-Taught Low-Rank Coding for Visual Learning.
Li, Sheng; Li, Kang; Fu, Yun
2018-03-01
The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choice of auxiliary data. It has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in data. In this paper, we focus on building a self-taught coding framework, which can effectively utilize the rich low-level pattern information abstracted from the auxiliary domain, in order to characterize the high-level structural information in the target domain. By leveraging a high quality dictionary learned across auxiliary and target domains, the proposed approach learns expressive codings for the samples in the target domain. Since many types of visual data have been proven to contain subspace structures, a low-rank constraint is introduced into the coding objective to better characterize the structure of the given target set. The proposed representation learning framework is called self-taught low-rank (S-Low) coding, which can be formulated as a nonconvex rank-minimization and dictionary learning problem. We devise an efficient majorization-minimization augmented Lagrange multiplier algorithm to solve it. Based on the proposed S-Low coding mechanism, both unsupervised and supervised visual learning algorithms are derived. Extensive experiments on five benchmark data sets demonstrate the effectiveness of our approach.
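As a rough schematic of what an objective of this kind looks like (a generic low-rank representation form with a learned dictionary; the actual S-Low objective, its constraints, and its supervised variant differ in the details given in the paper):

```latex
\min_{D,\,Z,\,E}\;\; \lVert Z \rVert_{*} \;+\; \lambda\,\lVert E \rVert_{2,1}
\qquad \text{s.t.} \qquad X_{t} = D\,Z + E,
```

where X_t collects the target-domain samples, D is a dictionary learned across the auxiliary and target domains, Z holds the low-rank codes (the nuclear norm being the usual convex surrogate for rank), and E absorbs sample-specific error; jointly learning D makes the problem nonconvex, which is why a majorization-minimization augmented Lagrange multiplier scheme is used.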
Burner liner thermal/structural load modeling: TRANCITS program user's manual
NASA Technical Reports Server (NTRS)
Maffeo, R.
1985-01-01
Transfer Analysis Code to Interface Thermal/Structural Problems (TRANCITS) is discussed. The TRANCITS code satisfies all the objectives for transferring thermal data between heat transfer and structural models of combustor liners and it can be used as a generic thermal translator between heat transfer and stress models of any component, regardless of the geometry. The TRANCITS can accurately and efficiently convert the temperature distributions predicted by the heat transfer programs to those required by the stress codes. It can be used for both linear and nonlinear structural codes and can produce nodal temperatures, elemental centroid temperatures, or elemental Gauss point temperatures. The thermal output of both the MARC and SINDA heat transfer codes can be interfaced directly with TRANCITS, and it will automatically produce stress model codes formatted for NASTRAN and MARC. Any thermal program and structural program can be interfaced by using the neutral input and output forms supported by TRANCITS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, J.E.; Roussin, R.W.; Gilpin, H.
A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
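The group averaging that "beclouds" the multigroup results is the standard flux-weighted collapse sketched below (stated here in its conventional form for orientation; the pseudo-point transfer matrices are constructed to avoid committing to such a collapse on the output side):

```latex
\sigma_{g} \;=\;
\frac{\displaystyle\int_{E_{g}}^{E_{g-1}} \sigma(E)\,\phi(E)\,dE}
     {\displaystyle\int_{E_{g}}^{E_{g-1}} \phi(E)\,dE},
```

so that when σ(E) fluctuates strongly within a group, the result depends sensitively on the weighting flux φ(E) and on the placement of the group boundaries E_g.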
Discrete Data Transfer Technique for Fluid-Structure Interaction
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2007-01-01
This paper presents a general three-dimensional algorithm for data transfer between dissimilar meshes. The algorithm is suitable for applications of fluid-structure interaction and other high-fidelity multidisciplinary analysis and optimization. Because the algorithm is independent of the mesh topology, we can treat structured and unstructured meshes in the same manner. The algorithm is fast and accurate for transfer of scalar or vector fields between dissimilar surface meshes. The algorithm is also applicable for the integration of a scalar field (e.g., coefficients of pressure) on one mesh and injection of the resulting vectors (e.g., force vectors) onto another mesh. The author has implemented the algorithm in a C++ computer code. This paper contains a complete formulation of the algorithm with a few selected results.
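To make the interface concrete, the toy sketch below transfers a nodal scalar between two dissimilar surface meshes by nearest-source-node lookup (a deliberately crude stand-in: the paper's algorithm is interpolation-based and topology-independent, which this does not reproduce):

```python
import numpy as np

def transfer_scalar(src_nodes, src_values, dst_nodes):
    """Map a nodal scalar field from a source surface mesh to a destination mesh.

    Nearest-source-node lookup only, for illustration of the data-transfer interface.
    """
    out = np.empty(len(dst_nodes))
    for i, p in enumerate(dst_nodes):
        j = np.argmin(np.linalg.norm(src_nodes - p, axis=1))
        out[i] = src_values[j]
    return out

# Example: hand pressure coefficients from a coarse CFD surface mesh to a finer structural mesh.
cfd_nodes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cp = np.array([-0.2, -0.8, -0.5])
fem_nodes = np.array([[0.1, 0.1, 0.0], [0.9, 0.05, 0.0]])
print(transfer_scalar(cfd_nodes, cp, fem_nodes))   # -> [-0.2 -0.8]
```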
NASA Astrophysics Data System (ADS)
Reynolds, J. C.; Schroeder, J. A.
1993-03-01
The FORTRAN library that the NOAA Wave Propagation Laboratory (WPL) developed to perform radiative transfer calculations for an upward-looking microwave radiometer is described. Although the theory and algorithms have been used for many years in WPL radiometer research, the Radiative Transfer Equation (RTE) software has combined them into a toolbox that is portable, readable, application independent, and easy to update. RTE has been optimized for the UNIX environment. However, the FORTRAN source code can be compiled on any platform that provides a Standard FORTRAN 77 compiler. RTE allows a user to do cloud modeling, calibrate radiometers, simulate hypothetical radiometer systems, develop retrieval techniques, and compute weighting functions. The radiative transfer model used is valid for channel frequencies below 1000 GHz in clear conditions and for frequencies below 100 GHz when clouds are present.
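The core calculation behind such a library is the non-scattering radiative transfer equation for the zenith brightness temperature seen by a ground-based radiometer, written schematically below in the Rayleigh-Jeans limit (a textbook form for orientation, not a restatement of the exact expressions coded in RTE):

```latex
T_{b}(\nu) \;=\; T_{\mathrm{cos}}\, e^{-\tau(0,\infty)}
  \;+\; \int_{0}^{\infty} T(s)\,\alpha(\nu,s)\,e^{-\tau(0,s)}\,ds,
\qquad
\tau(0,s) \;=\; \int_{0}^{s} \alpha(\nu,s')\,ds',
```

with s the distance along the viewing path, α the absorption coefficient, T(s) the air temperature, and T_cos the cosmic background term attenuated by the total opacity.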
FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations
NASA Astrophysics Data System (ADS)
Ding, Jianmin; Lyczkowski, R. W.; Burge, S. W.
1993-02-01
A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B & W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient and steady-state simulations. This Cartesian-coordinate computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates, and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description of the quality control to assess the accuracy of the new code and the validation using some of the measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC) are given. It is envisioned that one day FORCE2, with additional modules such as radiation heat transfer, combustion kinetics, and multi-solids, together with user-friendly pre- and post-processor software and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to help reduce or eliminate the environmental and economic barriers which limit full consideration of coal, shale, and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.
Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.
NASA Astrophysics Data System (ADS)
Stossel, Bryan Joseph
1995-01-01
Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.
TRAC-PF1 code verification with data from the OTIS test facility. [Once-Through Integral System]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childerson, M.T.; Fujita, R.K.
1985-01-01
A computer code (TRAC-PF1/MOD1) developed for predicting transient thermal and hydraulic integral nuclear steam supply system (NSSS) response was benchmarked. Post-small break loss-of-coolant accident (LOCA) data from a scaled, experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small break LOCA data set for TRAC verification. The major phases of a small break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool and auxiliary-feedwater initiated boiler-condenser mode heat transfer.
Internal fluid mechanics research on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.
1988-01-01
The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.
Theoretical research program to study chemical reactions in AOTV bow shock tubes
NASA Technical Reports Server (NTRS)
Taylor, P.
1986-01-01
Progress in the development of computational methods for the characterization of chemical reactions in aerobraking orbit transfer vehicle (AOTV) propulsive flows is reported. Two main areas of code development were undertaken: (1) the implementation of CASSCF (complete active space self-consistent field) and SCF (self-consistent field) analytical first derivatives on the CRAY X-MP; and (2) the installation of the complete set of electronic structure codes on the CRAY 2. In the area of application calculations the main effort was devoted to performing full configuration-interaction calculations and using these results to benchmark other methods. Preprints describing some of the systems studied are included.
Postflight aerothermodynamic analysis of Pegasus(tm) using computational fluid dynamic techniques
NASA Technical Reports Server (NTRS)
Kuhn, Gary D.
1992-01-01
The objective was to validate the computational capability of the NASA Ames Navier-Stokes code, F3D, for flows at high Mach numbers, using flight test data from the Pegasus (tm) air-launched, winged space booster for comparison. Comparisons were made with temperatures and heat fluxes estimated from measurements on the wing surfaces and wing-fuselage fairings. Tests were conducted for solution convergence, sensitivity to grid density, and effects of distributing grid points to provide high density near temperature and heat flux sensors. The measured temperatures were from sensors embedded in the ablating thermal protection system. Surface heat fluxes were from plugs fabricated of highly insulative, nonablating material, and mounted level with the surface of the surrounding ablative material. As a preflight design tool, the F3D code produces accurate predictions of heat transfer and other aerodynamic properties, and it can provide detailed data for assessment of boundary layer separation, shock waves, and vortex formation. As a postflight analysis tool, the code provides a way to clarify and interpret the measured results.
NASA Technical Reports Server (NTRS)
Tomsik, Thomas M.
1994-01-01
The design of coolant passages in regeneratively cooled thrust chambers is critical to the operation and safety of a rocket engine system. Designing a coolant passage is a complex thermal and hydraulic problem requiring an accurate understanding of the heat transfer between the combustion gas and the coolant. Every major rocket engine company has invested in the development of thrust chamber computer design and analysis tools; two examples are Rocketdyne's REGEN code and Aerojet's ELES program. In an effort to augment current design capabilities for government and industry, the NASA Lewis Research Center is developing a computer model to design coolant passages for advanced regeneratively cooled thrust chambers. The RECOP code incorporates state-of-the-art correlations, numerical techniques and design methods, certainly minimum requirements for generating optimum designs of future space chemical engines. A preliminary version of the RECOP model was recently completed, and code validation work is in progress. This paper introduces major features of RECOP and compares the analysis to design points for the first test case engine, the Pratt & Whitney RL10A-3-3A thrust chamber.
NASA Technical Reports Server (NTRS)
Bartlett, E. P.; Morse, H. L.; Tong, H.
1971-01-01
Procedures and methods for predicting aerothermodynamic heating to delta orbiter shuttle vehicles were reviewed. A number of approximate methods were found to be adequate for large scale parameter studies, but are considered inadequate for final design calculations. It is recommended that final design calculations be based on a computer code which accounts for nonequilibrium chemistry, streamline spreading, entropy swallowing, and turbulence. It is further recommended that this code be developed with the intent that it can be directly coupled with an exact inviscid flow field calculation when the latter becomes available. A nonsimilar, equilibrium chemistry computer code (BLIMP) was used to evaluate the effects of entropy swallowing, turbulence, and various three dimensional approximations. These solutions were compared with available wind tunnel data. It was found that, for wind tunnel conditions, the effects of entropy swallowing and three-dimensionality are small for laminar boundary layers but entropy swallowing causes a significant increase in turbulent heat transfer. However, it is noted that even small effects (say, 10-20%) may be important for the shuttle reusability concept.
NASA Astrophysics Data System (ADS)
Burnett, W.
2016-12-01
The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and passing data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the Navy's Hybrid Coordinate Ocean Model (HYCOM) scalability - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT), the HPCMP Applications Software Initiative (HASI), and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will provide the DoD with future computing assets to initially operate the N-ESPC in 2019. This talk will further describe how DoD's HPCMP will ensure N-ESPC becomes operational, efficiently and effectively, using next-generation high performance computing.
NASA Technical Reports Server (NTRS)
Lin, S. J.; Yang, R. J.; Chang, James L. C.; Kwak, D.
1987-01-01
The purpose of this study is to examine in detail incompressible laminar and turbulent flows inside the oxidizer side Hot Gas Manifold of the Space Shuttle Main Engine. To perform this study, an implicit finite difference code cast in general curvilinear coordinates is further developed. The code is based on the method of pseudo-compressibility and utilizes an ADI (implicit approximate factorization) algorithm to achieve computational efficiency. A multiple-zone method is developed to overcome the complexity of the geometry. In the present study, the laminar and turbulent flows in the oxidizer side Hot Gas Manifold have been computed. The study reveals that: (1) there exist large recirculation zones inside the bowl if no vanes are present; (2) strong secondary flows are observed in the transfer tube; and (3) properly shaped and positioned guide vanes are effective in eliminating flow separation.
Thermal Analysis on Plume Heating of the Main Engine on the Crew Exploration Vehicle Service Module
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen J.; Yuko, James R.
2007-01-01
The crew exploration vehicle (CEV) service module (SM) main engine plume heating is analyzed using multiple numerical tools. The chemical equilibrium compositions and applications (CEA) code is used to compute the flow field inside the engine nozzle. The plume expansion into ambient atmosphere is simulated using an axisymmetric space-time conservation element and solution element (CE/SE) Euler code, a computational fluid dynamics (CFD) software. The thermal analysis including both convection and radiation heat transfers from the hot gas inside the engine nozzle and gas radiation from the plume is performed using Thermal Desktop. Three SM configurations, Lockheed Martin (LM) designed 604, 605, and 606 configurations, are considered. Designs of the multilayer insulation (MLI) for the stowed solar arrays, which are part of the passive thermal control system (PTCS) and are subject to plume heating from the main engine, are proposed and validated.
NASA Astrophysics Data System (ADS)
Ambarita, H.; Ronowikarto, A. D.; Siregar, R. E. T.; Setyawan, E. Y.
2018-03-01
To reduce heat losses in a flat plate solar collector, a double-glass cover is employed. Several studies show that the heat loss from the glass cover is still very significant in comparison with other losses. Here, a double-glass cover with attached fins is proposed. In the present work, the fluid flow and heat transfer characteristics of the enclosure between the double-glass cover are investigated numerically. The objective is to examine the effect of the fin on the heat transfer rate of the cover. Two-dimensional governing equations are developed. The governing equations and the boundary conditions are solved using a commercial Computational Fluid Dynamics code. The fluid flow and heat transfer characteristics are plotted, and the numerical results are compared with an empirical correlation. The results show that the presence of the fin strongly affects the fluid flow and heat transfer characteristics. The fin can reduce the heat transfer rate by up to 22.42% in comparison with a double-glass cover without fins.
Intrasystem Analysis Program (IAP) code summaries
NASA Astrophysics Data System (ADS)
Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.
1983-05-01
This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The WIRE programs apply multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.
RTE: A computer code for Rocket Thermal Evaluation
NASA Technical Reports Server (NTRS)
Naraghi, Mohammad H. N.
1995-01-01
The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive thermal analysis code for thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in axial, radial and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, and hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas-side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
Experimental and Computational Analysis of Unidirectional Flow Through Stirling Engine Heater Head
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Dyson, Rodger W.; Tew, Roy C.; Demko, Rikako
2006-01-01
A high efficiency Stirling Radioisotope Generator (SRG) is being developed for possible use in long-duration space science missions. NASA's advanced technology goals for next generation Stirling convertors include increasing the Carnot efficiency and percent of Carnot efficiency. To help achieve these goals, a multi-dimensional Computational Fluid Dynamics (CFD) code is being developed to numerically model unsteady fluid flow and heat transfer phenomena of the oscillating working gas inside Stirling convertors. In the absence of transient pressure drop data for the zero mean oscillating multi-dimensional flows present in the Technology Demonstration Convertors on test at NASA Glenn Research Center, unidirectional flow pressure drop test data is used to compare against 2D and 3D computational solutions. This study focuses on tracking pressure drop and mass flow rate data for unidirectional flow through a Stirling heater head using a commercial CFD code (CFD-ACE). The commercial CFD code uses a porous-media model which is dependent on permeability and the inertial coefficient present in the linear and nonlinear terms of the Darcy-Forchheimer equation. Permeability and inertial coefficient were calculated from unidirectional flow test data. CFD simulations of the unidirectional flow test were validated using the porous-media model input parameters, which increased simulation accuracy by 14 percent on average.
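The porous-media model mentioned above relates the pressure gradient to velocity through the Darcy-Forchheimer equation. As a minimal sketch of that relation (not the CFD-ACE implementation), the Python snippet below evaluates the pressure drop across a porous slug from an assumed permeability K and inertial coefficient c_F; all numerical values are illustrative and are not taken from the test data:

    import numpy as np

    def darcy_forchheimer_dp(u, mu, rho, K, c_F, L):
        """Pressure drop [Pa] across a porous slug of length L [m].

        Linear (Darcy) term:          mu / K * u
        Nonlinear (Forchheimer) term: c_F * rho / sqrt(K) * u**2
        """
        dpdx = mu / K * u + c_F * rho / np.sqrt(K) * u**2
        return dpdx * L

    # Illustrative values only: 2 m/s superficial velocity of a helium-like gas
    print(darcy_forchheimer_dp(u=2.0, mu=2.0e-5, rho=1.5, K=1.0e-9, c_F=0.1, L=0.05))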
Monochromatic, Rosseland mean, and Planck mean opacity routine
NASA Astrophysics Data System (ADS)
Semenov, D.
2006-11-01
Several FORTRAN77 codes were developed to compute frequency-dependent, Rosseland and Planck mean opacities of gas and dust in protoplanetary disks. The opacities can be computed for an ensemble of dust grains having various compositions (ices, silicates, organics, etc), sizes, topologies (homogeneous/composite aggregates, homogeneous/layered/composite spheres, etc.), porosities, and dust-to-gas ratio. Several examples are available. In addition, a very fast opacity routine to be used in modeling of the radiative transfer in hydro simulations of disks is available upon request (10^8 routine calls require about 30s on Pentium 4 3.0GHz).
Quantum Mechanical Modeling of Ballistic MOSFETs
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan (Technical Monitor)
2001-01-01
The objective of this project was to develop theory, approximations, and computer code to model quasi 1D structures such as nanotubes, DNA, and MOSFETs: (1) Nanotubes: Influence of defects on ballistic transport, electro-mechanical properties, and metal-nanotube coupling; (2) DNA: Model electron transfer (biochemistry) and transport experiments, and sequence dependence of conductance; and (3) MOSFETs: 2D doping profiles, polysilicon depletion, source to drain and gate tunneling, understand ballistic limit.
Quantum Engineering of Dynamical Gauge Fields on Optical Lattices
2016-07-08
opens the door for exciting new research directions, such as quantum simulation of the Schwinger model and of non-Abelian models. (a) Papers...exact blocking formulas from the TRG formulation of the transfer matrix. The second is a worm algorithm. The particle number distributions obtained...a fact that can be explained by an approximate particle-hole symmetry. We have also developed a computer code suite for simulating the Abelian
Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications
NASA Technical Reports Server (NTRS)
Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.
2016-01-01
Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which leads these simulations to require a substantial computational burden. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that moves energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes.
Magnetic Control of Hypersonic Flow
NASA Astrophysics Data System (ADS)
Poggie, Jonathan; Gaitonde, Datta
2000-11-01
Electromagnetic control is an appealing possibility for mitigating the thermal loads that occur in hypersonic flight, in particular for the case of atmospheric entry. There was extensive research on this problem between about 1955 and 1970 (M. F. Romig, "The Influence of Electric and Magnetic Fields on Heat Transfer to Electrically Conducting Fluids," Advances in Heat Transfer, Vol. 1, Academic Press, NY, 1964), and renewed interest has arisen due to developments in the technology of superconducting magnets and the understanding of the physics of weakly-ionized, non-equilibrium plasmas. In order to examine the physics of this problem, and to evaluate the practicality of electromagnetic control in hypersonic flight, we have developed a computer code to solve the three-dimensional, non-ideal magnetogasdynamics equations. We have applied the code to the problem of magnetically-decelerated hypersonic flow over a sphere, and observed a reduction, with an applied dipole field, in heat flux and skin friction near the nose of the body, as well as an increase in shock standoff distance. The computational results compare favorably with the analytical predictions of Bush (W. B. Bush, "Magnetohydrodynamic-Hypersonic Flow Past a Blunt Body", Journal of the Aero/Space Sciences, Vol. 25, No. 11, 1958; "The Stagnation-Point Boundary Layer in the Presence of an Applied Magnetic Field", Vol. 28, No. 8, 1961).
Determination of Thermal State of Charge in Solar Heat Receivers
NASA Technical Reports Server (NTRS)
Glakpe, E. K.; Cannon, J. N.; Hall, C. A., III; Grimmett, I. W.
1996-01-01
The research project at Howard University seeks to develop analytical and numerical capabilities to study heat transfer and fluid flow characteristics, and the prediction of the performance of solar heat receivers for space applications. Specifically, the study seeks to elucidate the effects of internal and external thermal radiation, geometrical and applicable dimensionless parameters on the overall heat transfer in space solar heat receivers. Over the last year, a procedure for the characterization of the state-of-charge (SOC) in solar heat receivers for space applications has been developed. By identifying the various factors that affect the SOC, a dimensional analysis is performed resulting in a number of dimensionless groups of parameters. Although not accomplished during the first phase of the research, data generated from a thermal simulation program can be used to determine values of the dimensionless parameters and the state-of-charge and thereby obtain a correlation for the SOC. The simulation program selected for the purpose is HOTTube, a thermal numerical computer code based on a transient time-explicit, axisymmetric model of the total solar heat receiver. Simulation results obtained with the computer program are presented for the minimum and maximum insolation orbits. In the absence of any validation of the code with experimental data, results from HOTTube appear reasonable qualitatively in representing the physical situations modeled.
NASA Astrophysics Data System (ADS)
Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.; Mlawer, E.; Morcrette, J.-J.; O'Hirok, W.; Räisänen, P.; Ramaswamy, V.; Ritter, B.; Rozanov, E.; Schlesinger, M.; Shibata, K.; Sporyshev, P.; Sun, Z.; Wendisch, M.; Wood, N.; Yang, F.
2003-08-01
The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m-2 at overhead sun for the standard tropical atmosphere regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for the clear and overcast cases. The majority of 1D codes fall into the extreme category of maximum/random overlap of PPH clouds and thus generally disagree with full 3D benchmark values. The fairly limited scope of these tests, and the inability of any one code to perform extremely well for all cases, suggest that a paradigm shift is due for modeling 1D solar fluxes for cloudy atmospheres.
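To illustrate the overlap genres listed above, the following minimal Python sketch (not taken from any of the submitted codes) converts a profile of layer cloud fractions into a total cloud cover under the random overlap and maximum/random overlap assumptions; the example profile is invented:

    import numpy as np

    def total_cover_random(cf):
        """Random overlap: layers are treated as statistically independent."""
        return 1.0 - np.prod(1.0 - np.asarray(cf))

    def total_cover_max_random(cf):
        """Maximum/random overlap: vertically adjacent cloudy layers overlap
        maximally; layers separated by clear air overlap randomly."""
        clear, prev = 1.0, 0.0
        for c in cf:
            if prev >= 1.0:
                return 1.0
            clear *= (1.0 - max(c, prev)) / (1.0 - prev)
            prev = c
        return 1.0 - clear

    profile = [0.2, 0.5, 0.0, 0.3]           # layer cloud fractions, top to bottom
    print(total_cover_random(profile))        # 0.72
    print(total_cover_max_random(profile))    # 0.65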
Staging memory for massively parallel processor
NASA Technical Reports Server (NTRS)
Batcher, Kenneth E. (Inventor)
1988-01-01
The invention herein relates to a computer organization capable of rapidly processing extremely large volumes of data. A staging memory is provided having a main stager portion consisting of a large number of memory banks which are accessed in parallel to receive, store, and transfer data words simultaneous with each other. Substager portions interconnect with the main stager portion to match input and output data formats with the data format of the main stager portion. An address generator is coded for accessing the data banks for receiving or transferring the appropriate words. Input and output permutation networks arrange the lineal order of data into and out of the memory banks.
Multiline Transfer and the Dynamics of Stellar Winds
NASA Technical Reports Server (NTRS)
Abbott, D. C.; Lucy, L. B.
1985-01-01
A Monte Carlo technique for treating multiline transfer in stellar winds is described. With a line list containing many thousands of transitions and with fairly realistic treatments of ionization, excitation and line formation, the resulting code allows the investigation of the dynamic effects of overlapping lines and provides the means to directly synthesize the complete spectrum of a star and its wind. It is found that the computed mass loss rate for zeta Puppis agrees with the observed rate. The synthesized spectrum of zeta Puppis also agrees with observational data. This confirms that line driving is the dominant acceleration mechanism in this star's wind.
Turbulence Modeling: Progress and Future Outlook
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.; Huang, George P.
1996-01-01
Progress in the development of the hierarchy of turbulence models for Reynolds-averaged Navier-Stokes codes used in aerodynamic applications is reviewed. Steady progress is demonstrated, but transfer of the modeling technology has not kept pace with the development and demands of the computational fluid dynamics (CFD) tools. An examination of the process of model development leads to recommendations for a mid-course correction involving close coordination between modelers, CFD developers, and application engineers. In instances where the old process is changed and cooperation enhanced, timely transfer is realized. A turbulence modeling information database is proposed to refine the process and open it to greater participation among modeling and CFD practitioners.
Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2012-01-01
To investigate the penalties associated with using a variable speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such analysis tools must be able to model the flow accurately within the operating envelope of VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, both positive and negative, due to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable a more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip section cascade for a range of incidence angles were computed in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.
Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system
NASA Technical Reports Server (NTRS)
Slater, P. N.; Palmer, J. M. (Principal Investigator)
1985-01-01
The results of analyses of Thematic Mapper (TM) images acquired on July 8 and October 28, 1984, and of a check of the calibration of the 1.22-m integrating sphere at Santa Barbara Research Center (SBRC) are described. The results obtained from the in-flight calibration attempts disagree with the pre-flight calibrations for bands 2 and 4. Considerable effort was expended in an attempt to explain the disagreement. The difficult point to explain is that the difference between the radiances predicted by the radiative transfer code (the code radiances) and the radiances predicted by the pre-flight calibration (the pre-flight radiances) fluctuates with spectral band. Because the spectral quantities measured at White Sands show little change with spectral band, these fluctuations are not anticipated. Analyses of other targets at White Sands such as clouds, cloud shadows, and water surfaces tend to support the pre-flight and internal calibrator calibrations. The source of the disagreement has not been identified. It could be due to: (1) a computational error in the data reduction; (2) an incorrect assumption in the input to the radiative transfer code; or (3) incorrect operation of the field equipment.
On Favorable Thermal Fields for Detached Bridgman Growth
NASA Technical Reports Server (NTRS)
Stelian, Carmen; Volz, Martin P.; Derby, Jeffrey J.
2009-01-01
The thermal fields of two Bridgman-like configurations, representative of real systems used in prior experiments for the detached growth of CdTe and Ge crystals, are studied. These detailed heat transfer computations are performed using the CrysMAS code and expand upon our previous analyses [14] that posited a new mechanism involving the thermal field and meniscus position to explain stable conditions for dewetted Bridgman growth. Computational results indicate that heat transfer conditions that led to successful detached growth in both of these systems are in accordance with our prior assertion, namely that the prevention of crystal reattachment to the crucible wall requires the avoidance of any undercooling of the melt meniscus during the growth run. Significantly, relatively simple process modifications that promote favorable thermal conditions for detached growth may overcome detrimental factors associated with meniscus shape and crucible wetting. Thus, these ideas may be important to advance the practice of detached growth for many materials.
MATCHED-INDEX-OF-REFRACTION FLOW FACILITY FOR FUNDAMENTAL AND APPLIED RESEARCH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piyush Sabharwall; Carl Stoots; Donald M. McEligot
2014-11-01
Significant challenges face reactor designers with regard to thermal hydraulic design and associated modeling for advanced reactor concepts. Computational thermal hydraulic codes solve only a piece of the core. There is a need for a whole core dynamics system code with local resolution to investigate and understand flow behavior with all the relevant physics and thermo-mechanics. The matched index of refraction (MIR) flow facility at Idaho National Laboratory (INL) has a unique capability to contribute to the development of validated computational fluid dynamics (CFD) codes through the use of state-of-the-art optical measurement techniques, such as Laser Doppler Velocimetry (LDV) and Particle Image Velocimetry (PIV). PIV is a non-intrusive velocity measurement technique that tracks flow by imaging the movement of small tracer particles within a fluid. At the heart of a PIV calculation is the cross correlation algorithm, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. Generally, the displacement is indicated by the location of the largest peak. To quantify these measurements accurately, sophisticated processing algorithms correlate the locations of particles within the image to estimate the velocity (Ref. 1). Prior to use with reactor design, the CFD codes have to be experimentally validated, which requires rigorous experimental measurements to produce high quality, multi-dimensional flow field data with error quantification methodologies. Computational techniques with supporting test data may be needed to address the heat transfer from the fuel to the coolant during the transition from turbulent to laminar flow, including the possibility of an early laminarization of the flow (Refs. 2 and 3) (laminarization occurs when the coolant velocity is theoretically in the turbulent regime, but the heat transfer properties are indicative of the coolant velocity being in the laminar regime). Such studies are complicated enough that computational fluid dynamics (CFD) models may not converge to the same conclusion. Thus, experimentally scaled thermal hydraulic data with uncertainties should be developed to support modeling and simulation for verification and validation activities. The fluid/solid index of refraction matching technique allows optical access in and around geometries that would otherwise be impossible, while the large test section of the INL system provides better spatial and temporal resolution than comparable facilities. Benchmark data for assessing computational fluid dynamics can be acquired for external flows, internal flows, and coupled internal/external flows for better understanding of physical phenomena of interest. The core objective of this study is to describe MIR and its capabilities, and to mention current development areas for uncertainty quantification, mainly the uncertainty surface method and the cross-correlation method. Using these methods, it is anticipated that a suitable approach can be established for quantifying PIV uncertainty for experiments performed in the MIR.
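As a minimal sketch of the cross-correlation step described above (not the facility's actual processing chain), the Python snippet below estimates the particle displacement between two interrogation windows from the peak of their FFT-based cross-correlation; the synthetic particle image and the imposed shift are invented for the demonstration:

    import numpy as np

    def piv_displacement(win_a, win_b):
        """Estimate the mean particle displacement (dy, dx), in pixels, between
        two interrogation windows from the peak of their cross-correlation."""
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        # FFT-based circular cross-correlation of the two windows
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        corr = np.fft.fftshift(corr)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = np.array(corr.shape) // 2
        return np.array(peak) - center

    # Synthetic check: shift a random "particle image" by (3, -2) pixels
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    shifted = np.roll(img, shift=(3, -2), axis=(0, 1))
    print(piv_displacement(img, shifted))     # -> [ 3 -2]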
NASA Astrophysics Data System (ADS)
Steyn, Gideon; Vermeulen, Christiaan
2018-05-01
An experiment was designed to study the effect of the jet direction on convective heat-transfer coefficients in single-jet gas cooling of a small heated surface, such as typically induced by an accelerated ion beam on a thin foil or specimen. The hot spot was provided using a small electrically heated plate. Heat-transfer calculations were performed using simple empirical methods based on dimensional analysis as well as by means of an advanced computational fluid dynamics (CFD) code. The results provide an explanation for the observed turbulent cooling of a double-foil, Havar beam window with fast-flowing helium, located on a target station for radionuclide production with a 66 MeV proton beam at a cyclotron facility.
Effect of partial heating at mid of vertical plate adjacent to porous medium
NASA Astrophysics Data System (ADS)
Mulla, Mohammed Fahimuddin; Pallan, Khalid. M.; Al-Rashed, A. A. A. A.
2018-05-01
Heat and mass transfer in a porous medium due to heating of a vertical plate at its mid-section is analyzed for various physical parameters. The heat and mass transfer in the porous medium is modeled with the help of momentum, energy and concentration equations in terms of non-dimensional partial differential equations. The partial differential equations are converted into a simpler form of algebraic equations with the help of the finite element method. A computer code is developed to assemble the algebraic equations into global matrices and then to solve them in an iterative manner to obtain the temperature, concentration and streamline distribution inside the porous medium. It is found that the heat transfer behavior of a porous medium heated at the middle section is considerably different from other cases.
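The abstract does not state which iterative solver is used; as a hedged illustration of the assemble-then-iterate step, the Python sketch below applies a plain Gauss-Seidel sweep to a small, invented, diagonally dominant global system A x = b:

    import numpy as np

    def gauss_seidel(A, b, tol=1e-8, max_iter=5000):
        """Solve A x = b by Gauss-Seidel sweeps, as one might for the assembled
        global matrix of a finite-element discretization."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        x = np.zeros_like(b)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(len(b)):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x

    # Small diagonally dominant test system (invented)
    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(gauss_seidel(A, b))                 # agrees with np.linalg.solve(A, b)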
MIL-STD-1553B Marconi LSI chip set in a remote terminal application
NASA Astrophysics Data System (ADS)
Dimarino, A.
1982-11-01
Marconi Avionics is utilizing the MIL-STD-1553B LSI Chip Set in the SCADC Air Data Computer application to perform all of the required remote terminal MIL-STD-1553B protocol functions. Basic components of the RTU are the dual redundant chip set, CT3231 Transceivers, 256 x 16 RAM and a Z8002 microprocessor. Basic transfers are to/from the RAM on command of the bus controller or Z8002 processor. During transfers from the processor to the RAM, the chip set busy bit is set for a period not exceeding 250 microseconds. When the transfer is complete, the busy bit is released and transfers to the data bus occur on command. The LSI Chip Set word count lines are used to locate each data word in the local memory and 4 mode codes are used in the application: reset remote terminal, transmit status word, transmitter shut-down, and override transmitter shutdown.
Modeling Film-Coolant Flow Characteristics at the Exit of Shower-Head Holes
NASA Technical Reports Server (NTRS)
Garg, Vijay K.; Gaugler, R. E. (Technical Monitor)
2000-01-01
The coolant flow characteristics at the hole exits of a film-cooled blade are derived from an earlier analysis where the hole pipes and coolant plenum were also discretized. The blade chosen is the VKI rotor with three staggered rows of shower-head holes. The present analysis applies these flow characteristics at the shower-head hole exits. A multi-block three-dimensional Navier-Stokes code with Wilcox's k-omega model is used to compute the heat transfer coefficient on the film-cooled turbine blade. A reasonably good comparison with the experimental data as well as with the more complete earlier analysis, where the hole pipes and coolant plenum were also gridded, is obtained. If the 1/7th power law is assumed for the coolant flow characteristics at the hole exits, considerable differences in the heat transfer coefficient on the blade surface, especially in the leading-edge region, are observed even though the span-averaged values of h (heat transfer coefficient based on T(sub o)-T(sub w)) match well with the experimental data. This calls for span-resolved experimental data near film-cooling holes on a blade for better validation of the code.
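For reference, the 1/7th power-law assumption mentioned above is sketched below in Python for a circular hole exit; the exact form and coordinates used in the paper may differ, and the numbers are illustrative only:

    import numpy as np

    def seventh_power_profile(r, r_hole, u_max):
        """1/7th power-law velocity profile across a circular hole exit:
        u(r) = u_max * (1 - |r|/r_hole)**(1/7), vanishing at the wall."""
        y = np.clip(1.0 - np.abs(r) / r_hole, 0.0, None)
        return u_max * y**(1.0 / 7.0)

    r = np.linspace(-1.0, 1.0, 9)             # nondimensional radial positions
    print(seventh_power_profile(r, r_hole=1.0, u_max=1.0))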
Radiative Heat Transfer and Turbulence-Radiation Interactions in a Heavy-Duty Diesel Engine
NASA Astrophysics Data System (ADS)
Paul, C.; Sircar, A.; Ferreyro, S.; Imren, A.; Haworth, D. C.; Roy, S.; Ge, W.; Modest, M. F.
2016-11-01
Radiation in piston engines has received relatively little attention to date. Recently, it has been revisited in light of current trends towards higher operating pressures and higher levels of exhaust-gas recirculation, both of which enhance molecular gas radiation. Advanced high-efficiency engines also are expected to function closer to the limits of stable operation, where even small perturbations to the energy balance can have a large influence on system behavior. Here several different spectral radiation property models and radiative transfer equation (RTE) solvers have been implemented in an OpenFOAM-based engine CFD code, and simulations have been performed for a heavy-duty diesel engine. Differences in computed temperature fields, NO and soot levels, and wall heat transfer rates are shown for different combinations of spectral models and RTE solvers. The relative importance of molecular gas radiation versus soot radiation is examined. And the influence of turbulence-radiation interactions is determined by comparing results obtained using local mean values of composition and temperature to compute radiative emission and absorption with those obtained using a particle-based transported probability density function method. Supported by DOE and NSF.
Radiative Heat Transfer modelling in a Heavy-Duty Diesel Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Chandan; Sircar, Arpan; Ferreyro-Fernandez, Sebastian
Detailed radiation modelling in piston engines has received relatively little attention to date. Recently, it has been revisited in light of current trends towards higher operating pressures and higher levels of exhaust-gas recirculation, both of which enhance molecular gas radiation. Advanced high-efficiency engines also are expected to function closer to the limits of stable operation, where even small perturbations to the energy balance can have a large influence on system behavior. Here several different spectral radiation property models and radiative transfer equation (RTE) solvers have been implemented in an OpenFOAM-based engine CFD code, and simulations have been performed for a heavy-duty diesel engine. Differences in computed temperature fields, NO and soot levels, and wall heat transfer rates are shown for different combinations of spectral models and RTE solvers. The relative importance of molecular gas radiation versus soot radiation is examined. And the influence of turbulence-radiation interactions is determined by comparing results obtained using local mean values of composition and temperature to compute radiative emission and absorption with those obtained using a particle-based transported probability density function method.
Two-phase reduced gravity experiments for a space reactor design
NASA Technical Reports Server (NTRS)
Antoniak, Zenen I.
1987-01-01
Researchers envision future space missions using large nuclear reactors with either a single or a two-phase alkali-metal working fluid. The design and analysis of such reactors require state-of-the-art computer codes that can properly treat alkali-metal flow and heat transfer in a reduced-gravity environment. New flow regime maps, models, and correlations are required if the codes are to be successfully applied to reduced-gravity flow and heat transfer. General plans are put forth for the reduced-gravity experiments which will have to be performed, at NASA facilities, with benign fluids. Data from the reduced-gravity experiments with innocuous fluids are to be combined with normal gravity data from two-phase alkali-metal experiments. Because these reduced-gravity experiments will be very basic, and will employ small test loops of simple geometry, a large measure of commonality exists between them and experiments planned by other organizations. It is recommended that a committee be formed to coordinate all ongoing and planned reduced gravity flow experiments.
Dust emission in simulated dwarf galaxies using GRASIL-3D
NASA Astrophysics Data System (ADS)
Santos-Santos, I. M.; Domínguez-Tenreiro, R.; Granato, G. L.; Brook, C. B.; Obreja, A.
2017-03-01
Recent Herschel observations of dwarf galaxies have shown a wide diversity in the shapes of their IR-submm spectral energy distributions as compared to more massive galaxies, presenting features that cannot be explained with the current models. In order to understand the physics driving these differences, we have computed the emission of a sample of simulated dwarf galaxies using the radiative transfer code GRASIL-3D. This code separately treats the radiative transfer in dust grains from molecular clouds and cirri. The simulated galaxies have masses ranging from 10^6-10^9 M_⊙ and have evolved within a Local Group environment by using CLUES initial conditions. We show that their IR band luminosities are in agreement with observations, with their SEDs reproducing naturally the particular spectral features observed. We conclude that the GRASIL-3D two-component model gives a physical interpretation to the emission of dwarf galaxies, with molecular clouds (cirri) as the warm (cold) dust components needed to recover observational data.
User's manual for CNVUFAC, the general dynamics heat-transfer radiation view factor program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, R. L.
CNVUFAC, the General Dynamics heat-transfer radiation view factor program, has been adapted for use on the LLL CDC 7600 computer system. The input and output have been modified, and a node incrementing logic was included to make the code compatible with the TRUMP thermal analyzer and related codes. The program performs the multiple integration necessary to evaluate the geometric black-body radiation node-to-node view factors. Card image output that contains node number and view factor information is generated for input into the related program GRAY. Program GRAY is then used to include the effects of gray-body emissivities and multiple reflections, generating the effective gray-body view factors usable in TRUMP. CNVUFAC uses an elemental area summation scheme to evaluate the multiple integrals. The program permits shadowing and self-shadowing. The basic configuration shapes that can be considered are cylinders, cones, spheres, ellipsoids, flat plates, disks, toroids, and polynomials of revolution. Portions of these shapes can also be considered.
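As a minimal illustration of the elemental-area summation idea (not the CNVUFAC algorithm itself, and with no shadowing), the Python sketch below estimates the diffuse view factor between two directly opposed, identical rectangles; the geometry and grid size are invented:

    import numpy as np

    def view_factor_parallel_plates(w, h, gap, n=60):
        """Diffuse view factor F_1->2 between two identical w x h rectangles,
        directly opposed and separated by 'gap', via elemental-area summation:
            F = (1/A1) * sum_i sum_j cos(t1) * cos(t2) * dA1 * dA2 / (pi * r^2)
        For parallel plates, cos(t1) = cos(t2) = gap / r."""
        xs = (np.arange(n) + 0.5) * w / n
        ys = (np.arange(n) + 0.5) * h / n
        dA = (w / n) * (h / n)
        F = 0.0
        for x1 in xs:
            for y1 in ys:
                r2 = (xs[:, None] - x1)**2 + (ys[None, :] - y1)**2 + gap**2
                F += np.sum(gap**2 / (np.pi * r2**2)) * dA * dA
        return F / (w * h)

    # Unit squares one side-length apart; the analytic value is about 0.1998
    print(view_factor_parallel_plates(1.0, 1.0, 1.0))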
Automated Transfer Vehicle (ATV) Critical Safety Software Overview
NASA Astrophysics Data System (ADS)
Berthelier, D.
2002-01-01
The European Automated Transfer Vehicle is an unmanned transportation system designed to dock to the International Space Station (ISS) and to contribute to the logistic servicing of the ISS. Concisely, ATV control is realized by a nominal flight control function (using computers, software, sensors, and actuators). In order to cover the extreme situations where this nominal chain cannot ensure a safe trajectory with respect to the ISS, a segregated proximity flight safety function is activated in cases where unsafe free drift trajectories can be encountered. This function relies notably on a segregated computer, the Monitoring and Safing Unit (MSU); in case of major ATV malfunction detection, ATV is then controlled by MSU software. Therefore, this software is critical because an MSU software failure could result in catastrophic consequences. This paper provides an overview both of this software's functions and of the software development and validation method, which is specific considering its criticality. The first part of the paper describes briefly the proximity flight safety chain. The second part deals with the software functions. Indeed, MSU software is in charge of monitoring the nominal computers and ATV corridors, using its own navigation algorithms, and, if an abnormal situation is detected, it is in charge of ATV control during the Collision Avoidance Manoeuvre (CAM), consisting of an attitude-controlled braking boost, followed by a post-CAM manoeuvre: a Sun-pointed ATV attitude control for up to 24 hours on a safe trajectory. Monitoring, navigation and control algorithm principles are presented. The third part of this paper describes the development and validation process: algorithm functional studies, ADA coding and unit validations; algorithm ADA code integration and validation on a specific non-real-time MATLAB/SIMULINK simulator; and a global software functional engineering phase, architectural design, unit testing, integration and validation on the target computer.
VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory
NASA Astrophysics Data System (ADS)
Škoda, Petr; Hadrava, Petr; Fuchs, Jan
2012-04-01
VO-KOREL is a web service exploiting the technology of the Virtual Observatory to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, conserving the privacy of every user by transfer encryption and access authentication, with features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it explores the newly emerging technology of cloud computing. While the web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and, most importantly, watch the text and graphical results of the disentangling process, the main part of the back-end is a simple job queue submission system executing in parallel multiple instances of the FORTRAN code KOREL. This may be easily extended for GRID-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning advantages as well as bottlenecks of the design used.
NASA Technical Reports Server (NTRS)
Pizzo, Michelle; Daryabeigi, Kamran; Glass, David
2015-01-01
The ability to solve the heat conduction equation is needed when designing materials to be used on vehicles exposed to extremely high temperatures; e.g. vehicles used for atmospheric entry or hypersonic flight. When using test and flight data, computational methods such as finite difference schemes may be used to solve for both the direct heat conduction problem, i.e., solving between internal temperature measurements, and the inverse heat conduction problem, i.e., using the direct solution to march forward in space to the surface of the material to estimate both surface temperature and heat flux. The completed research first discusses the methods used in developing a computational code to solve both the direct and inverse heat transfer problems using one dimensional, centered, implicit finite volume schemes and one dimensional, centered, explicit space marching techniques. The developed code assumed the boundary conditions to be specified time varying temperatures and also considered temperature dependent thermal properties. The completed research then discusses the results of analyzing temperature data measured while radiantly heating a carbon/carbon specimen up to 1920 F. The temperature was measured using thermocouple (TC) plugs (small carbon/carbon material specimens) with four embedded TC plugs inserted into the larger carbon/carbon specimen. The purpose of analyzing the test data was to estimate the surface heat flux and temperature values from the internal temperature measurements using direct and inverse heat transfer methods, thus aiding in the thermal and structural design and analysis of high temperature vehicles.
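As a hedged, minimal Python sketch of the direct problem described above (not the developed code itself), the snippet below advances 1D transient conduction with prescribed, time-varying boundary temperatures using a backward-Euler finite-volume step; constant thermal diffusivity is assumed here, whereas the actual analysis used temperature-dependent properties, and the slab size, ramp rate and material values are invented:

    import numpy as np

    def step_implicit_1d(T, dt, dx, alpha, T_left, T_right):
        """One backward-Euler step of dT/dt = alpha * d2T/dx2 on interior nodes,
        with prescribed (Dirichlet) boundary temperatures."""
        n = len(T)
        r = alpha * dt / dx**2
        A = np.zeros((n, n))
        b = T.copy()
        A[0, 0] = 1.0;   b[0] = T_left
        A[-1, -1] = 1.0; b[-1] = T_right
        for i in range(1, n - 1):
            A[i, i - 1] = -r
            A[i, i] = 1.0 + 2.0 * r
            A[i, i + 1] = -r
        return np.linalg.solve(A, b)

    # March a 1 cm slab, initially at 300 K, while the heated face ramps upward
    x = np.linspace(0.0, 0.01, 21)
    T = np.full_like(x, 300.0)
    for k in range(200):
        T = step_implicit_1d(T, dt=0.05, dx=x[1] - x[0], alpha=1.0e-6,
                             T_left=300.0 + 5.0 * k, T_right=300.0)
    print(T[:5])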
Numerical investigations in three-dimensional internal flows
NASA Astrophysics Data System (ADS)
Rose, William C.
1988-08-01
An investigation into the use of computational fluid dynamics (CFD) was performed to examine the expected heat transfer rates that will occur within the NASA-Ames 100 megawatt arc heater nozzle. This nozzle was tentatively designed and identified to provide research for a directly connected combustion experiment specifically related to the National Aerospace Plane Program (NASP) aircraft, and is expected to simulate the flow field entering the combustor section. It was found that extremely fine grids, that is, very small mesh spacing near the wall, are required to accurately model the heat transfer process and, in fact, must contain a point within the laminar sublayer if results are to be taken directly from a numerical simulation code. In the present study, an alternative to this very fine mesh and its attendant increase in computational time was invoked and is based on a wall-function method. It was shown that solutions could be obtained that give accurate indications of surface heat transfer rate throughout the nozzle in approximately 1/100 of the computer time required to do the simulation directly without the use of the wall-function implementation. Finally, a maximum heating value in the throat region of the proposed slit nozzle for the 100 megawatt arc heater was shown to be approximately 6 MW per square meter.
Optimization of wavefront coding imaging system using heuristic algorithms
NASA Astrophysics Data System (ADS)
González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.
2017-08-01
Wavefront Coding (WFC) systems make use of an aspheric Phase-Mask (PM) and digital image processing to extend the Depth of Field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a point spread function (PSF) that is nearly invariant to defocus. In this paper, the optimization of the phase deviation parameter is done by means of genetic algorithms (GAs). Here, the merit function minimizes the mean square error (MSE) between the diffraction-limited Modulation Transfer Function (MTF) and the MTF of the wavefront-coded system at different amounts of misfocus. WFC systems were simulated using the cubic, trefoil, and 4 Zernike polynomial phase-masks. Numerical results show invariance to defocus aberration in all cases. Nevertheless, the best results are obtained by using the trefoil phase-mask, because the decoded image is almost free of artifacts.
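As a hedged sketch of this kind of optimization (shown here for a cubic phase mask rather than the trefoil mask, with invented grid size, defocus values and GA settings), the Python snippet below evaluates the MSE merit function between the diffraction-limited MTF and the coded-system MTF over several amounts of misfocus, and runs a very small genetic algorithm over the phase-deviation parameter alpha:

    import numpy as np

    N = 64
    x = np.linspace(-1.0, 1.0, N)
    X, Y = np.meshgrid(x, x)
    APERTURE = (X**2 + Y**2) <= 1.0                 # circular pupil

    def mtf(alpha, w20):
        """MTF of a cubic phase-mask system with defocus coefficient w20 (waves)."""
        phase = 2.0 * np.pi * (alpha * (X**3 + Y**3) + w20 * (X**2 + Y**2))
        pupil = APERTURE * np.exp(1j * phase)
        psf = np.abs(np.fft.fft2(pupil, s=(2 * N, 2 * N)))**2
        otf = np.abs(np.fft.fft2(psf))
        return otf / otf.flat[0]                    # normalized to unity at DC

    def fitness(alpha, defocus_set=(0.0, 1.0, 2.0)):
        """MSE between the diffraction-limited MTF and the coded-system MTF,
        averaged over several amounts of misfocus (lower is better)."""
        ref = mtf(0.0, 0.0)
        return np.mean([np.mean((mtf(alpha, w) - ref)**2) for w in defocus_set])

    # Minimal genetic algorithm over the phase-deviation parameter alpha
    rng = np.random.default_rng(1)
    pop = rng.uniform(0.0, 20.0, size=16)
    for gen in range(30):
        scores = np.array([fitness(a) for a in pop])
        parents = pop[np.argsort(scores)[:4]]       # keep the four fittest
        children = rng.choice(parents, size=12) + rng.normal(0.0, 1.0, size=12)
        pop = np.concatenate([parents, np.clip(children, 0.0, 20.0)])
    print("best alpha:", pop[np.argmin([fitness(a) for a in pop])])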
NASA Astrophysics Data System (ADS)
Michaelis, A.; Wang, W.; Melton, F. S.; Votava, P.; Milesi, C.; Hashimoto, H.; Nemani, R. R.; Hiatt, S. H.
2009-12-01
As the length and diversity of the global earth observation data records grow, modeling and analyses of biospheric conditions increasingly require multiple terabytes of data from a diversity of models and sensors. With network bandwidth beginning to flatten, transmission of these data from centralized data archives presents an increasing challenge, and costs associated with local storage and management of data and compute resources are often significant for individual research and application development efforts. Sharing community-valued intermediate data sets, results and codes from individual efforts with others that are not in direct funded collaboration can also be a challenge with respect to time, cost and expertise. We propose a modeling, data and knowledge center that houses NASA satellite data, climate data and ancillary data, where a focused community may come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform, named the Ecosystem Modeling Center (EMC). With the recent development of new technologies for secure hardware virtualization, an opportunity exists to create specific modeling, analysis and compute environments that are customizable, “archiveable” and transferable. Allowing users to instantiate such environments on large compute infrastructures that are directly connected to large data archives may significantly reduce the costs and time associated with scientific efforts by relieving users of redundantly retrieving and integrating data sets and building modeling and analysis codes. The EMC platform also offers users the possibility of indirect assistance from experts through prefabricated compute environments, potentially reducing study “ramp up” times.
DustEM: Dust extinction and emission modelling
NASA Astrophysics Data System (ADS)
Compiègne, M.; Verstraete, L.; Jones, A.; Bernard, J.-P.; Boulanger, F.; Flagey, N.; Le Bourlot, J.; Paradis, D.; Ysard, N.
2013-07-01
DustEM computes the extinction and the emission of interstellar dust grains heated by photons. It is written in Fortran 95 and is jointly developed by IAS and CESR. The dust emission is calculated in the optically thin limit (no radiative transfer) and the default spectral range is 40 nm to 10^8 nm. The code is designed so dust properties can easily be changed and mixed and to allow for the inclusion of new grain physics.
MPI implementation of PHOENICS: A general purpose computational fluid dynamics code
NASA Astrophysics Data System (ADS)
Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.
1995-03-01
PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using Message Passing Interface (MPI) standard. Implementation of MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.
A high-speed BCI based on code modulation VEP
NASA Astrophysics Data System (ADS)
Bin, Guangyu; Gao, Xiaorong; Wang, Yijun; Li, Yun; Hong, Bo; Gao, Shangkai
2011-04-01
Recently, electroencephalogram-based brain-computer interfaces (BCIs) have attracted much attention in the fields of neural engineering and rehabilitation due to their noninvasiveness. However, the low communication speed of current BCI systems greatly limits their practical application. In this paper, we present a high-speed BCI based on code modulation of visual evoked potentials (c-VEP). Thirty-two target stimuli were modulated by a time-shifted binary pseudorandom sequence. A multichannel identification method based on canonical correlation analysis (CCA) was used for target identification. The online system achieved an average information transfer rate (ITR) of 108 ± 12 bits min-1 on five subjects with a maximum ITR of 123 bits min-1 for a single subject.
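As a hedged sketch of how such a system can identify the attended target (one common variant of the CCA step, not necessarily the exact pipeline of this paper, and assuming scikit-learn's CCA is available), the Python snippet below correlates a multichannel EEG epoch with time-shifted response templates and picks the lag giving the highest canonical correlation; the template, lag spacing, and noise level are synthetic:

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def identify_target(epoch, template, n_targets, shift):
        """Return the index of the c-VEP target whose time-shifted template
        yields the highest canonical correlation with the EEG epoch.
        epoch, template: arrays of shape (n_samples, n_channels)."""
        cca = CCA(n_components=1)
        scores = []
        for k in range(n_targets):
            ref = np.roll(template, k * shift, axis=0)   # template for target k
            u, v = cca.fit_transform(epoch, ref)
            scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
        return int(np.argmax(scores))

    # Synthetic demo: the epoch is a noisy copy of target 5's template
    rng = np.random.default_rng(0)
    tmpl = rng.standard_normal((512, 8))
    epoch = np.roll(tmpl, 5 * 16, axis=0) + 0.5 * rng.standard_normal((512, 8))
    print(identify_target(epoch, tmpl, n_targets=32, shift=16))   # -> 5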
Simulation of Hanford Tank 241-C-106 Waste Release into Tank 241-Y-102
DOE Office of Scientific and Technical Information (OSTI.GOV)
KP Recknagle; Y Onishi
Waste stored in Hanford single-shell Tank 241-C-106 will be sluiced with a supernatant liquid from double-shell Tank 241-AY-102 (AY-102) at the U.S. Department of Energy's Hanford Site in Eastern Washington. The resulting slurry, containing up to 30 wt% solids, will then be transferred to Tank AY-102. During the sluicing process, it is important to know the mass of the solids being transferred into AY-102. One of the primary instruments used to measure solids transfer is an E+ densitometer located near the periphery of the tank at riser 15S. This study was undertaken to assess how well a densitometer measurement could represent the total mass of solids transferred if a uniform lateral distribution was assumed. The study evaluated the C-106 slurry mixing and accumulation in Tank AY-102 for the following five cases: Case 1: 3 wt% slurry in 6.4-m AY-102 waste; Case 2: 3 wt% slurry in 4.3-m AY-102 waste; Case 3: 30 wt% slurry in 6.4-m AY-102 waste; Case 4: 30 wt% slurry in 4.3-m AY-102 waste; Case 5: 30 wt% slurry in 5.0-m AY-102 waste. The time-dependent, three-dimensional TEMPEST computer code was used to simulate solid deposition and accumulation during the injection of the C-106 slurry into AY-102 through four injection nozzles. The TEMPEST computer code was applied previously to other Hanford tanks (AP-102, SY-102, AZ-101, SY-101, AY-102, and C-106) to model tank waste mixing with rotating pump jets, gas rollover events, waste transfer from one tank to another, and pump-out retrieval of the sluiced waste. The model results indicate that the solid depth accumulated at the densitometer is within 5% of the average depth accumulation. Thus the reading of the densitometer is expected to represent the total mass of the transferred solids reasonably well.
Ignition and combustion characteristics of metallized propellants
NASA Technical Reports Server (NTRS)
Mueller, D. C.; Turns, Stephen R.
1992-01-01
During this reporting period, theoretical work on the secondary atomization process was continued and the experimental apparatus was improved. A one-dimensional model of a rocket combustor, incorporating multiple droplet size classes, slurry combustion, secondary atomization, radiation heat transfer, and two-phase slip between slurry droplets and the gas flow was derived, and a computer code was written to implement this model. The STANJAN chemical equilibrium solver was coupled with this code to yield gas temperature, density, and composition as functions of axial location. Preliminary results indicate that the model is performing correctly, given current model assumptions. Radiation heat transfer in the combustion chamber is treated as an optically-thick participating media problem requiring a solution of the radiative transfer equation. A cylindrical P sub 1 approximation was employed to yield an analytical expression for chamber-wall heat flux at each axial location. The code was exercised to determine the effects of secondary atomization intensity, defined as the number of secondary drops produced per initial drop, on chamber burnout distance and final Al2O3 agglomerate diameter. These results indicate that only weak secondary atomization is required to significantly reduce these two parameters. Stronger atomization intensities were found to yield decreasing marginal benefits. The experimental apparatus was improved to reduce building vibration effects on the optical system alignment. This was accomplished by mounting the burner and the transmitting/receiving optics on a single frame supported by vibration-isolation legs. Calibration and shakedown tests indicate that vibration problems were eliminated and that the system is performing correctly.
Tests of Exoplanet Atmospheric Radiative Transfer Codes
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Challener, Ryan; DeLarme, Emerson; Cubillos, Patricio; Blecic, Jasmina; Foster, Austin; Garland, Justin
2016-10-01
Atmospheric radiative transfer codes are used both to predict planetary spectra and in retrieval algorithms to interpret data. Observational plans, theoretical models, and scientific results thus depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. In the process of writing our own code, we became aware of several others with artifacts of unknown origin and even outright errors in their spectra. We present a series of tests to verify atmospheric radiative-transfer codes. These include: simple, single-line line lists that, when combined with delta-function abundance profiles, should produce a broadened line that can be verified easily; isothermal atmospheres that should produce analytically-verifiable blackbody spectra at the input temperatures; and model atmospheres with a range of complexities that can be compared to the output of other codes. We apply the tests to our own code, Bayesian Atmospheric Radiative Transfer (BART) and to several other codes. The test suite is open-source software. We propose this test suite as a standard for verifying current and future radiative transfer codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
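As an illustration of the isothermal-atmosphere test described above, a minimal check might compare a code's emergent spectrum against the Planck function at the input temperature. The wrapper `run_rt_code` below is hypothetical and stands in for whichever code is under test; it is not part of BART or the published test suite.

```python
# Illustrative blackbody check: an isothermal, pure-absorption model atmosphere
# should emit a Planck spectrum at its temperature. `run_rt_code` is a
# hypothetical wrapper around the radiative transfer code being verified.
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wavelength_m, T):
    """Planck spectral radiance B_lambda [W m^-2 sr^-1 m^-1]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * T))

def blackbody_test(run_rt_code, T=1500.0, rtol=1e-3):
    wl = np.linspace(0.5e-6, 20e-6, 2000)              # 0.5-20 micron grid
    spec = run_rt_code(temperature=T, wavelengths=wl)  # hypothetical call
    return np.allclose(spec, planck(wl, T), rtol=rtol)
```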
Numerical simulation of weakly ionized hypersonic flow over reentry capsules
NASA Astrophysics Data System (ADS)
Scalabrin, Leonardo C.
The mathematical and numerical formulation employed in the development of a new multi-dimensional Computational Fluid Dynamics (CFD) code for the simulation of weakly ionized hypersonic flows in thermo-chemical non-equilibrium over reentry configurations is presented. The flow is modeled using the Navier-Stokes equations modified to include finite-rate chemistry and relaxation rates to compute the energy transfer between different energy modes. The set of equations is solved numerically by discretizing the flowfield using unstructured grids made of any mixture of quadrilaterals and triangles in two-dimensions or hexahedra, tetrahedra, prisms and pyramids in three-dimensions. The partial differential equations are integrated on such grids using the finite volume approach. The fluxes across grid faces are calculated using a modified form of the Steger-Warming Flux Vector Splitting scheme that has low numerical dissipation inside boundary layers. The higher order extension of inviscid fluxes in structured grids is generalized in this work to be used in unstructured grids. Steady state solutions are obtained by integrating the solution over time implicitly. The resulting sparse linear system is solved by using a point implicit or by a line implicit method in which a tridiagonal matrix is assembled by using lines of cells that are formed starting at the wall. An algorithm that assembles these lines using completely general unstructured grids is developed. The code is parallelized to allow simulation of computationally demanding problems. The numerical code is successfully employed in the simulation of several hypersonic entry flows over space capsules as part of its validation process. Important quantities for the aerothermodynamics design of capsules such as aerodynamic coefficients and heat transfer rates are compared to available experimental and flight test data and other numerical results yielding very good agreement. A sensitivity analysis of predicted radiative heating of a space capsule to several thermo-chemical non-equilibrium models is also performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
KIVA-hpFE is a high performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and much faster than our previous generation of parallel engine modeling software, by many factors. The 5th generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, this LES method does not require special hybrid or blending to walls. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly with enrichment in areas associated with relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, used in fluid-structure interaction problems, solidification, porous media modeling, and magnetohydrodynamics.
NASA Astrophysics Data System (ADS)
Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin
2017-02-01
We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
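The diffusivity-factor finding can be illustrated numerically: for a purely absorbing slab the angle-integrated flux transmission is 2E3(τ), and the sketch below (an independent check, not HELIOS code) compares it with the diffusivity-factor approximation exp(-Dτ) for D = 2.

```python
# Small numerical check of the diffusivity-factor idea: for a purely absorbing
# slab, the angle-integrated flux transmission is 2*E3(tau), which exp(-D*tau)
# approximates; D is the diffusivity factor (2 in the abstract above).
import numpy as np
from scipy.special import expn

tau = np.logspace(-2, 1, 7)
exact = 2.0 * expn(3, tau)        # 2*E3(tau)
approx = np.exp(-2.0 * tau)       # diffusivity factor D = 2
for t, e, a in zip(tau, exact, approx):
    print(f"tau={t:8.3f}  exact={e:.4f}  D=2 approx={a:.4f}")
```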
PROM7: 1D modeler of solar filaments or prominences
NASA Astrophysics Data System (ADS)
Gouttebroze, P.
2018-05-01
PROM7 is an update of PROM4 (ascl:1306.004) and computes simple models of solar prominences and filaments using partial redistribution (PRD). The models consist of plane-parallel slabs standing vertically above the solar surface. Each model is defined by 5 parameters: temperature, density, geometrical thickness, microturbulent velocity, and height above the solar surface. It solves the equations of radiative transfer, statistical equilibrium, ionization, and pressure equilibria, and computes electron and hydrogen level populations and hydrogen line profiles. Moreover, the code treats the calcium atom, which is reduced to 3 ionization states (Ca I, Ca II, Ca III). The Ca II ion has 5 levels, which are useful for computing the 2 resonance lines (H and K) and the infrared triplet (near 8500 Å).
Two Perspectives on the Origin of the Standard Genetic Code
NASA Astrophysics Data System (ADS)
Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa
2014-12-01
The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.
NASA Technical Reports Server (NTRS)
Wallace, Ron
1995-01-01
Evidence from natural and artificial membranes indicates that the neural membrane is a liquid crystal. A liquid-to-gel phase transition caused by the application of superposed electromagnetic fields to the outer membrane surface releases spin-correlated electron pairs which propagate through a charge transfer complex. The propagation generates Rydberg atoms in the lipid bilayer lattice. In the present model, charge density configurations in promoted orbitals interact as cellular automata and perform computations in Hilbert space. Due to the small binding energies of promoted orbitals, their automata are highly sensitive to microgravitational perturbations. It is proposed that spacetime is classical on the Rydberg scale, but formed of contiguous moving segments, each of which displays topological equivalence. This stochasticity is reflected in randomized Riemannian tensor values. Spacetime segments interact with charge automata as components of a computational process. At the termination of the algorithm, an orbital of high probability density is embedded in a more stabilized microscopic spacetime. This state permits the opening of an ion channel and the conversion of a quantum algorithm into a macroscopic frequency code.
Two-Equation Turbulence Models for Prediction of Heat Transfer on a Transonic Turbine Blade
NASA Technical Reports Server (NTRS)
Garg, Vijay K.; Ameri, Ali A.; Gaugler, R. E. (Technical Monitor)
2001-01-01
Two versions of the two-equation k-omega model and a shear stress transport (SST) model are used in a three-dimensional, multi-block, Navier-Stokes code to compare with the detailed heat transfer measurements on a transonic turbine blade. It is found that the SST model resolves the passage vortex better on the suction side of the blade, thus yielding a better comparison with the experimental data than either of the k-omega models. However, the comparison is still deficient on the suction side of the blade. Use of the SST model does require the computation of distance from a wall, which for a multiblock grid, such as in the present case, can be complicated. However, a relatively easy fix for this problem was devised. Also addressed are issues such as (1) computation of the production term in the turbulence equations for aerodynamic applications, and (2) the relation between the computational and experimental values for the turbulence length scale, and its influence on the passage vortex on the suction side of the turbine blade.
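The wall-distance issue can be illustrated with a generic sketch; the k-d tree approach below is one common way to build the distance field for multiblock grids and is not necessarily the fix devised by the authors.

```python
# Illustrative only: nearest wall distance for SST-type models computed by
# gathering wall-face centroids from all blocks and querying a k-d tree
# from every cell center.
import numpy as np
from scipy.spatial import cKDTree

def wall_distance(cell_centers, wall_points):
    """cell_centers: (n_cells, 3); wall_points: (n_wall, 3) collected from
    every block's solid-wall boundary faces. Returns one distance per cell."""
    tree = cKDTree(wall_points)
    d, _ = tree.query(cell_centers, k=1)
    return d
```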
Computing Models of M-type Host Stars and their Panchromatic Spectral Output
NASA Astrophysics Data System (ADS)
Linsky, Jeffrey; Tilipman, Dennis; France, Kevin
2018-06-01
We have begun a program of computing state-of-the-art model atmospheres from the photospheres to the coronae of M stars that are the host stars of known exoplanets. For each model we are computing the emergent radiation at all wavelengths that are critical for assessing photochemistry and mass-loss from exoplanet atmospheres. In particular, we are computing the stellar extreme ultraviolet radiation that drives hydrodynamic mass loss from exoplanet atmospheres and is essential for determining whether an exoplanet is habitable. The model atmospheres are computed with the SSRPM radiative transfer/statistical equilibrium code developed by Dr. Juan Fontenla. The code solves for the non-LTE statistical equilibrium populations of 18,538 levels of 52 atomic and ion species and computes the radiation from all species (435,986 spectral lines) and about 20,000,000 spectral lines of 20 diatomic species. The first model computed in this program was for the modestly active M1.5 V star GJ 832 by Fontenla et al. (ApJ 830, 152 (2016)). We will report on a preliminary model for the more active M5 V star GJ 876 and compare this model and its emergent spectrum with GJ 832. In the future, we will compute and intercompare semi-empirical models and spectra for all of the stars observed with the HST MUSCLES Treasury Survey, the Mega-MUSCLES Treasury Survey, and additional stars including Proxima Cen and Trappist-1. This multiyear theory program is supported by a grant from the Space Telescope Science Institute.
NASA Technical Reports Server (NTRS)
Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.
2004-01-01
Five computational methods for solution of the radiative transfer equation in an absorbing-emitting and non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S(sub 4) and LC(sub 11) quadratures, and a moment model using the M(sub 1) closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC(sub 11) is shown to be more accurate than the commonly used S(sub 4) quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study where the M(sub 1) method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M(sub 1) results agree well with other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, M(sub 1) results are comparable to DOM S(sub 4).
Pre- and postprocessing techniques for determining goodness of computational meshes
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Westermann, T.; Bass, J. M.
1993-01-01
Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.
Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements
NASA Technical Reports Server (NTRS)
Shepperd, S. W.; Robertson, W. M.
1973-01-01
The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.
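As a sketch of what such a routine computes, the following is a generic universal-variable two-body propagator in the textbook Lagrange f-and-g form; it illustrates conic extrapolation of a state vector as a function of time, but it is not the Shuttle onboard routine itself, and the constants and tolerances are assumptions.

```python
# Generic universal-variable two-body (conic) propagation: given r0, v0 and a
# time of flight dt, return the extrapolated position and velocity. Standard
# textbook formulation with Stumpff functions and Newton iteration.
import numpy as np

def stumpff_C(z):
    if z > 0:  return (1 - np.cos(np.sqrt(z))) / z
    if z < 0:  return (np.cosh(np.sqrt(-z)) - 1) / (-z)
    return 0.5

def stumpff_S(z):
    if z > 0:  s = np.sqrt(z);  return (s - np.sin(s)) / s**3
    if z < 0:  s = np.sqrt(-z); return (np.sinh(s) - s) / s**3
    return 1.0 / 6.0

def conic_extrapolate(r0, v0, dt, mu=398600.4418):  # km, km/s, s (Earth)
    r0n = np.linalg.norm(r0)
    vr0 = np.dot(r0, v0) / r0n
    alpha = 2.0 / r0n - np.dot(v0, v0) / mu          # reciprocal semi-major axis
    chi = np.sqrt(mu) * abs(alpha) * dt              # initial guess
    for _ in range(50):                              # Newton iteration on chi
        z = alpha * chi**2
        C, S = stumpff_C(z), stumpff_S(z)
        F = (r0n * vr0 / np.sqrt(mu) * chi**2 * C
             + (1 - alpha * r0n) * chi**3 * S + r0n * chi - np.sqrt(mu) * dt)
        dF = (r0n * vr0 / np.sqrt(mu) * chi * (1 - z * S)
              + (1 - alpha * r0n) * chi**2 * C + r0n)
        chi -= F / dF
        if abs(F / dF) < 1e-10:
            break
    z = alpha * chi**2
    C, S = stumpff_C(z), stumpff_S(z)
    f = 1 - chi**2 / r0n * C                         # Lagrange coefficients
    g = dt - chi**3 / np.sqrt(mu) * S
    r = f * r0 + g * v0
    rn = np.linalg.norm(r)
    fdot = np.sqrt(mu) / (rn * r0n) * (alpha * chi**3 * S - chi)
    gdot = 1 - chi**2 / rn * C
    return r, fdot * r0 + gdot * v0
```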
NASA Technical Reports Server (NTRS)
Wells, William L.
1990-01-01
Experimental heat transfer distributions and surface streamline directions are presented for a cylinder in the near wake of the Aeroassist Flight Experiment forebody configuration. Tests were conducted in air at a nominal free stream Mach number of 10, with post-shock Reynolds numbers based on model base height of 6,450 to 50,770, and angles of attack of 5, 0, -5, and -10 degrees. Heat transfer data were obtained with thin-film resistance gages, and surface streamline directions were obtained by the oil-flow technique. Comparisons between measured values and predicted values were made by using a Navier-Stokes computer code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, R.E.
Results are presented of studies conducted by Aerojet Nuclear Company (ANC) in FY 1975 to support the Nuclear Regulatory Commission (NRC) on the boiling water reactor blowdown heat transfer (BWR-BDHT) program. The support provided by ANC is that of an independent assessor of the program to ensure that the data obtained are adequate for verification of analytical models used for predicting reactor response to a postulated loss-of-coolant accident. The support included reviews of program plans, objectives, measurements, and actual data. Additional activity included analysis of experimental system performance and evaluation of the RELAP4 computer code as applied to the experiments.
Design characteristics of a heat pipe test chamber
NASA Technical Reports Server (NTRS)
Baker, Karl W.; Jang, J. Hoon; Yu, Juin S.
1992-01-01
LeRC has designed a heat pipe test facility which will be used to provide data for validating heat pipe computer codes. A heat pipe test chamber that uses helium gas for enhancing heat transfer was investigated. The conceptual design employs the technique of guarded heating and guarded cooling to facilitate accurate measurements of heat transfer rates to the evaporator and from the condenser. The design parameters are selected for a baseline heat pipe made of stainless steel with an inner diameter of 38.10 mm and a wall thickness of 1.016 mm. The heat pipe operates at a design temperature of 1000 K with an evaporator radial heat flux of 53 W/sq. cm.
NASA Technical Reports Server (NTRS)
Oaks, J.; Frank, A.; Falvey, S.; Lister, M.; Buisson, J.; Wardrip, C.; Warren, H.
1982-01-01
Time transfer equipment and techniques used with the Navigation Technology Satellites were modified and extended for use with the Global Positioning System (GPS) satellites. A prototype receiver was built and field tested. The receiver uses the GPS L1 link at 1575 MHz with C/A code only to resolve a measured range to the satellite. A theoretical range is computed from the satellite ephemeris transmitted in the data message and the user's coordinates. Results of user offset from GPS time are obtained by differencing the measured and theoretical ranges and applying calibration corrections. Results of the first field test evaluation of the receiver are presented.
AirShow 1.0 CFD Software Users' Guide
NASA Technical Reports Server (NTRS)
Mohler, Stanley R., Jr.
2005-01-01
AirShow is visualization post-processing software for Computational Fluid Dynamics (CFD). Upon reading binary PLOT3D grid and solution files into AirShow, the engineer can quickly see how hundreds of complex 3-D structured blocks are arranged and numbered. Additionally, chosen grid planes can be displayed and colored according to various aerodynamic flow quantities such as Mach number and pressure. The user may interactively rotate and translate the graphical objects using the mouse. The software source code was written in cross-platform Java, C++, and OpenGL, and runs on Unix, Linux, and Windows. The graphical user interface (GUI) was written using Java Swing. Java also provides multiple synchronized threads. The Java Native Interface (JNI) provides a bridge between the Java code and the C++ code where the PLOT3D files are read, the OpenGL graphics are rendered, and numerical calculations are performed. AirShow is easy to learn and simple to use. The source code is available for free from the NASA Technology Transfer and Partnership Office.
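For orientation, a hedged sketch of reading a multi-block binary PLOT3D grid of the kind AirShow consumes is shown below. PLOT3D files exist in several variants (Fortran record markers, double precision, iblank arrays), so the plain-binary, single-precision layout assumed here may need adjustment for a particular file.

```python
# Hedged sketch of a multi-block binary PLOT3D grid reader. Assumes plain C
# binary, 32-bit integers, single precision, no iblank; other variants need
# different dtypes and record handling.
import numpy as np

def read_plot3d_grid(path):
    with open(path, "rb") as f:
        nblocks = int(np.fromfile(f, dtype=np.int32, count=1)[0])
        dims = np.fromfile(f, dtype=np.int32, count=3 * nblocks).reshape(-1, 3)
        blocks = []
        for ni, nj, nk in dims:
            npts = int(ni) * int(nj) * int(nk)
            xyz = np.fromfile(f, dtype=np.float32, count=3 * npts)
            # stored as all x, then all y, then all z (Fortran ordering)
            blocks.append(xyz.reshape(3, nk, nj, ni))
        return blocks
```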
Discrete diffusion Lyman α radiative transfer
NASA Astrophysics Data System (ADS)
Smith, Aaron; Tsang, Benny T.-H.; Bromm, Volker; Milosavljević, Miloš
2018-06-01
Due to its accuracy and generality, Monte Carlo radiative transfer (MCRT) has emerged as the prevalent method for Lyα radiative transfer in arbitrary geometries. The standard MCRT encounters a significant efficiency barrier in the high optical depth, diffusion regime. Multiple acceleration schemes have been developed to improve the efficiency of MCRT but the noise from photon packet discretization remains a challenge. The discrete diffusion Monte Carlo (DDMC) scheme has been successfully applied in state-of-the-art radiation hydrodynamics (RHD) simulations. Still, the established framework is not optimal for resonant line transfer. Inspired by the DDMC paradigm, we present a novel extension to resonant DDMC (rDDMC) in which diffusion in space and frequency are treated on equal footing. We explore the robustness of our new method and demonstrate a level of performance that justifies incorporating the method into existing Lyα codes. We present computational speedups of ~10^2-10^6 relative to contemporary MCRT implementations with schemes that skip scattering in the core of the line profile. This is because the rDDMC runtime scales with the spatial and frequency resolution rather than the number of scatterings; the latter is typically ∝ τ0 for static media, or ∝ (aτ0)^(2/3) with core-skipping. We anticipate new frontiers in which on-the-fly Lyα radiative transfer calculations are feasible in 3D RHD. More generally, rDDMC is transferable to any computationally demanding problem amenable to a Fokker-Planck approximation of frequency redistribution.
NASA Astrophysics Data System (ADS)
Davis, A. B.; Cahalan, R. F.
2001-05-01
The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in the routine of cloud-property remote sensing at NASA and NOAA. Accordingly, the test-cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with a special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or that of the surface in their presence. In all these application areas, computational efficiency is the main concern and not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of ``cases.'' However, it is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation) and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering the present authors have organized a systematic outreach towards modelers that have used approximate methods in radiation transport. In this context, different, presumably simpler, equations (such as diffusion) are used in order to make a significant gain on the efficiency axis. We will describe in some detail the most promising approaches to approximate 3D radiative transfer in clouds. Somewhat paradoxically, and in spite of its importance in the above-mentioned applications, approximate radiative transfer modeling lags significantly behind its exact counterpart because the required mathematical and computational culture is essentially alien to the native atmospheric radiation community. I3RC is receiving enough funding from NASA/HQ and DOE/ARM for its essential operations out of NASA/GSFC. However, this does not cover the time and effort of any of the participants; so only existing models were entered. At present, none of inherently approximate methods are represented, only severe truncations of some exact methods. We therefore welcome the Math/Geo initiative at NSF which should enable the proper consortia of experts in atmospheric radiation and in applied mathematics to fill an important niche.
Burner liner thermal-structural load modeling
NASA Technical Reports Server (NTRS)
Maffeo, R.
1986-01-01
The software package Transfer Analysis Code to Interface Thermal/Structural Problems (TRANCITS) was developed. The TRANCITS code is used to interface temperature data between thermal and structural analytical models. The use of this transfer module allows the heat transfer analyst to select the thermal mesh density and thermal analysis code best suited to solve the thermal problem and gives the same freedoms to the stress analyst, without the efficiency penalties associated with common meshes and the accuracy penalties associated with the manual transfer of thermal data.
1975-09-01
This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data which was used in the MAGIC computer code to the target description data which can be used in the GIFT computer code.
Description and availability of the SMARTS spectral model for photovoltaic applications
NASA Astrophysics Data System (ADS)
Myers, Daryl R.; Gueymard, Christian A.
2004-11-01
The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.
NASA Astrophysics Data System (ADS)
Bates, Jason; Schmitt, Andrew; Klapisch, Marcel; Karasik, Max; Obenschain, Steve
2013-10-01
Modifications to the FAST3D code have been made to enhance its ability to simulate the dynamics of plastic ICF targets with high-Z overcoats. This class of problems is challenging computationally due in part to plasma conditions that are not in a state of local thermodynamic equilibrium and to the presence of mixed computational cells containing more than one material. Recently, new opacity tables for gold, palladium and plastic have been generated with an improved version of the STA code. These improved tables provide smoother, higher-fidelity opacity data over a wider range of temperature and density states than before, and contribute to a more accurate treatment of radiative transfer processes in FAST3D simulations. Furthermore, a new, more efficient subroutine known as ``MMEOS'' has been installed in the FAST3D code for determining pressure and temperature equilibrium conditions within cells containing multiple materials. We will discuss these topics, and present new simulation results for high-Z planar-target experiments performed recently on the NIKE Laser Facility. Work supported by DOE/NNSA.
CAVE3: A general transient heat transfer computer code utilizing eigenvectors and eigenvalues
NASA Technical Reports Server (NTRS)
Palmieri, J. V.; Rathjen, K. A.
1978-01-01
The method of solution is a hybrid analytical numerical technique which utilizes eigenvalues and eigenvectors. The method is inherently stable, permitting large time steps even with the best of conductors with the finest of mesh sizes which can provide a factor of five reduction in machine time compared to conventional explicit finite difference methods when structures with small time constants are analyzed over long time periods. This code will find utility in analyzing hypersonic missile and aircraft structures which fall naturally into this class. The code is a completely general one in that problems involving any geometry, boundary conditions and materials can be analyzed. This is made possible by requiring the user to establish the thermal network conductances between nodes. Dynamic storage allocation is used to minimize core storage requirements. This report is primarily a user's manual for CAVE3 code. Input and output formats are presented and explained. Sample problems are included which illustrate the usage of the code as well as establish the validity and accuracy of the method.
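The eigenvalue/eigenvector idea can be sketched for a linear thermal network: because each step is computed from the exact modal solution, stability does not depend on the time-step size, which is why large steps remain usable even for stiff networks. The snippet below is an illustration of the principle, not CAVE3 source, and the matrix names are generic.

```python
# Sketch of the eigenvalue/eigenvector approach for a linear thermal network
# C*dT/dt = -K*T + q: the exact solution over any step dt uses the
# eigen-decomposition of A = -C^{-1} K, so the step size is not limited by
# stability (only by how the network itself changes in time).
import numpy as np

def step_thermal_network(T0, C, K, q, dt):
    A = np.linalg.solve(C, -K)                 # system matrix
    f = np.linalg.solve(C, q)                  # forcing vector
    T_ss = -np.linalg.solve(A, f)              # steady-state temperatures
    lam, V = np.linalg.eig(A)                  # modes of the network
    y0 = np.linalg.solve(V, T0 - T_ss)         # project onto eigenvectors
    return (V @ (np.exp(lam * dt) * y0)).real + T_ss
```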
Galactic cosmic ray radiation levels in spacecraft on interplanetary missions
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Nealy, J. E.; Townsend, L. W.; Wilson, J. W.; Wood, J.S.
1994-01-01
Using the Langley Research Center Galactic Cosmic Ray (GCR) transport computer code (HZETRN) and the Computerized Anatomical Man (CAM) model, crew radiation levels inside manned spacecraft on interplanetary missions are estimated. These radiation-level estimates include particle fluxes, LET (Linear Energy Transfer) spectra, absorbed dose, and dose equivalent within various organs of interest in GCR protection studies. Changes in these radiation levels resulting from the use of various different types of shield materials are presented.
Be discs in coplanar circular binaries: Phase-locked variations of emission lines
NASA Astrophysics Data System (ADS)
Panoglou, Despina; Faes, Daniel M.; Carciofi, Alex C.; Okazaki, Atsuo T.; Baade, Dietrich; Rivinius, Thomas; Borges Fernandes, Marcelo
2018-01-01
In this paper, we present the first results of radiative transfer calculations on decretion discs of binary Be stars. A smoothed particle hydrodynamics code computes the structure of Be discs in coplanar circular binary systems for a range of orbital and disc parameters. The resulting disc configuration consists of two spiral arms, and this can be given as input into a Monte Carlo code, which calculates the radiative transfer along the line of sight for various observational coordinates. Making use of the property of steady disc structure in coplanar circular binaries, observables are computed as functions of the orbital phase. Some orbital-phase series of line profiles are given for selected parameter sets under various viewing angles, to allow comparison with observations. Flat-topped profiles with and without superimposed multiple structures are reproduced, showing, for example, that triple-peaked profiles do not have to be necessarily associated with warped discs and misaligned binaries. It is demonstrated that binary tidal effects give rise to phase-locked variability of the violet-to-red (V/R) ratio of hydrogen emission lines. The V/R ratio exhibits two maxima per cycle; in certain cases those maxima are equal, leading to a clear new V/R cycle every half orbital period. This study opens a way to identifying binaries and to constraining the parameters of binary systems that exhibit phase-locked variations induced by tidal interaction with a companion star.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, S. L.
1998-08-25
Fluid Catalytic Cracking (FCC) technology is the most important process used by the refinery industry to convert crude oil to valuable lighter products such as gasoline. Process development is generally very time consuming, especially when a small pilot unit is being scaled up to a large commercial unit, because of the lack of information to aid in the design of scaled-up units. Such information can now be obtained by analysis based on the pilot-scale measurements and computer simulation that includes the controlling physics of the FCC system. A computational fluid dynamics (CFD) code, ICRKFLO, has been developed at Argonne National Laboratory (ANL) and has been successfully applied to the simulation of catalytic petroleum cracking risers. It employs hybrid hydrodynamic-chemical kinetic coupling techniques, enabling the analysis of an FCC unit with complex chemical reaction sets containing tens or hundreds of subspecies. The code has been continuously validated based on pilot-scale experimental data. It is now being used to investigate the effects of scaled-up FCC units. Among FCC operating conditions, the feed injection conditions are found to have a strong impact on the product yields of scaled-up FCC units. The feed injection conditions appear to affect flow and heat transfer patterns, and the interaction of hydrodynamics and cracking kinetics causes the product yields to change accordingly.
The experimental study of heat transfer around molds inside a model autoclave
NASA Astrophysics Data System (ADS)
Ghamlouch, Taleb; Roux, Stéphane; Lefèvre, Nicolas; Bailleul, Jean-Luc; Sobotka, Vincent
2018-05-01
The temperature distribution within composite parts manufactured inside autoclaves plays a key role in determining the parts' quality at the end of the curing cycle. Indeed, heat transfer between the parts and the surroundings inside an autoclave is strongly coupled with the flow field around the molds and can be modeled through the convective heat transfer coefficient (HTC). The aerodynamically unsuitable geometry of the molds generates complex turbulent non-uniform flows around them, accompanied by the presence of dead zones. This heterogeneity can imply non-uniform convective heat transfer, leading to temperature gradients inside the parts that can be detrimental. Given this fact, the purpose of this study is to perform experimental measurements in order to describe the flow field and the convective heat transfer behavior around representative industrial molds installed inside a home-made model autoclave. A key point of our model autoclave is the ease of use of non-intrusive measuring instruments: the Particle Image Velocimetry (PIV) technique and an infrared imaging camera for the study of the flow field and the heat transfer coefficient distribution around the molds, respectively. The experimental measurements are then compared to computational fluid dynamics (CFD) calculations performed with the computer code ANSYS Fluent 16.0®. This investigation has revealed, as expected, a non-uniform distribution of the convective heat transfer coefficient around the molds and therefore the presence of thermal gradients which can reduce the composite parts' quality during an autoclave process. A good agreement has been achieved between the experimental and the numerical results, leading to the validation of the performed numerical simulations.
High-throughput GPU-based LDPC decoding
NASA Astrophysics Data System (ADS)
Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin
2010-08-01
Low-density parity-check (LDPC) code is a linear block code known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, WI-FI and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance and flexible LDPC computing. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphic-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
Enhancing the ABAQUS Thermomechanics Code to Simulate Steady and Transient Fuel Rod Behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. L. Williamson; D. A. Knoll
2009-09-01
A powerful multidimensional fuels performance capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. The various modeling capabilities are demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multi-pellet fuel rod, during both steady and transient operation. Computational results demonstrate the importance of a multidimensional, fully-coupled thermomechanics treatment. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermo-mechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.
LOX/LH2 vane pump for auxiliary propulsion systems
NASA Technical Reports Server (NTRS)
Hemminger, J. A.; Ulbricht, T. E.
1985-01-01
Positive displacement pumps offer potential efficiency advantages over centrifugal pumps for future low thrust space missions. Low flow rate applications, such as space station auxiliary propulsion or dedicated low thrust orbit transfer vehicles, are typical of missions where low flow and high head rise challenge centrifugal pumps. The positive displacement vane pump for pumping of LOX and LH2 is investigated. This effort has included: (1) a testing program in which pump performance was investigated for differing pump clearances and for differing pump materials while pumping LN2, LOX, and LH2; and (2) an analysis effort, in which a comprehensive pump performance analysis computer code was developed and exercised. An overview of the theoretical framework of the performance analysis computer code is presented, along with a summary of analysis results. Experimental results are presented for the pump operating in liquid nitrogen. Included are data on the effects on pump performance of pump clearance, speed, and pressure rise. Pump suction performance is also presented.
Single-exposure quantitative phase imaging in color-coded LED microscopy.
Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin
2017-04-03
We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
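A minimal sketch of the leakage-correction idea follows: if a calibration step yields a 3x3 mixing matrix M relating the three LED channels to the sensor's R, G, B responses, inverting M per pixel recovers the unmixed channel intensities. The matrix values below are hypothetical placeholders, not calibration data from the paper.

```python
# Minimal sketch of color-leakage correction: a calibrated 3x3 mixing matrix M
# maps the three illumination channels into the sensor's R, G, B responses;
# solving M x = measured per pixel recovers the leakage-free channels.
# The numbers in M are hypothetical.
import numpy as np

M = np.array([[0.92, 0.10, 0.02],     # sensor R response to (R, G, B) LEDs
              [0.06, 0.85, 0.12],     # sensor G response
              [0.02, 0.08, 0.90]])    # sensor B response

def correct_leakage(rgb_image):
    """rgb_image: (H, W, 3) raw sensor data -> (H, W, 3) unmixed channels."""
    h, w, _ = rgb_image.shape
    flat = rgb_image.reshape(-1, 3).T          # shape (3, H*W)
    return np.linalg.solve(M, flat).T.reshape(h, w, 3)
```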
NASA Astrophysics Data System (ADS)
Wu, Hong; Li, Peng; Li, Yulong
2016-02-01
This paper describes the calculation method for unsteady state conditions in the secondary air systems in gas turbines. The 1D-3D-Structure coupled method was applied. A 1D code was used to model the standard components that have typical geometric characteristics. Their flow and heat transfer were described by empirical correlations based on experimental data or CFD calculations. A 3D code was used to model the non-standard components that cannot be described by typical geometric languages, while a finite element analysis was carried out to compute the structural deformation and heat conduction at certain important positions. These codes were coupled through their interfaces. Thus, the changes in heat transfer and structure and their interactions caused by exterior disturbances can be reflected. The results of the coupling method in an unsteady state showed an apparent deviation from the existing data, while the results in the steady state were highly consistent with the existing data. The difference in the results in the unsteady state was caused primarily by structural deformation that cannot be predicted by the 1D method. Thus, in order to obtain the unsteady state performance of a secondary air system more accurately and efficiently, the 1D-3D-Structure coupled method should be used.
NASA Technical Reports Server (NTRS)
Kiris, Cetin
1995-01-01
Development of an incompressible Navier-Stokes solution procedure was performed for the analysis of liquid rocket engine pump components and for mechanical heart assist devices. The solution procedure for the propulsion systems is applicable to incompressible Navier-Stokes flows in a steadily rotating frame of reference for any general complex configuration. The computer codes were tested on different complex configurations such as liquid rocket engine inducers and impellers. As a spin-off technology from the turbopump component simulations, the flow analysis for an axial heart pump was conducted. The baseline Left Ventricular Assist Device (LVAD) design was improved by adding an inducer geometry adapted from the liquid rocket engine pump. The time-accurate mode of the incompressible Navier-Stokes code was validated with the flapping foil experiment by using different domain decomposition methods. In the flapping foil experiment, two upstream NACA 0025 foils perform high-frequency synchronized motion and generate unsteady flow conditions for a downstream larger stationary foil. Fairly good agreement was obtained between unsteady experimental data and numerical results from two different moving boundary procedures. The incompressible Navier-Stokes code (INS3D) has been extended for heat transfer applications. The temperature equation was written for both forced and natural convection phenomena. Flow in a square duct was used for the validation of the code in both natural and forced convection.
NEAMS Update. Quarterly Report for October - December 2011.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, K.
2012-02-16
The Advanced Modeling and Simulation Office within the DOE Office of Nuclear Energy (NE) has been charged with revolutionizing the design tools used to build nuclear power plants during the next 10 years. To accomplish this, the DOE has brought together the national laboratories, U.S. universities, and the nuclear energy industry to establish the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Program. The mission of NEAMS is to modernize computer modeling of nuclear energy systems and improve the fidelity and validity of modeling results using contemporary software environments and high-performance computers. NEAMS will create a set of engineering-level codes aimed at designing and analyzing the performance and safety of nuclear power plants and reactor fuels. The truly predictive nature of these codes will be achieved by modeling the governing phenomena at the spatial and temporal scales that dominate the behavior. These codes will be executed within a simulation environment that orchestrates code integration with respect to spatial meshing, computational resources, and execution to give the user a common 'look and feel' for setting up problems and displaying results. NEAMS is building upon a suite of existing simulation tools, including those developed by the federal Scientific Discovery through Advanced Computing and Advanced Simulation and Computing programs. NEAMS also draws upon existing simulation tools for materials and nuclear systems, although many of these are limited in terms of scale, applicability, and portability (their ability to be integrated into contemporary software and hardware architectures). NEAMS investments have directly and indirectly supported additional NE research and development programs, including those devoted to waste repositories, safeguarded separations systems, and long-term storage of used nuclear fuel. NEAMS is organized into two broad efforts, each comprising four elements. The quarterly highlights October-December 2011 are: (1) Version 1.0 of AMP, the fuel assembly performance code, was tested on the JAGUAR supercomputer and released on November 1, 2011, a detailed discussion of this new simulation tool is given; (2) A coolant sub-channel model and a preliminary UO{sub 2} smeared-cracking model were implemented in BISON, the single-pin fuel code, more information on how these models were developed and benchmarked is given; (3) The Object Kinetic Monte Carlo model was implemented to account for nucleation events in meso-scale simulations and a discussion of the significance of this advance is given; (4) The SHARP neutronics module, PROTEUS, was expanded to be applicable to all types of reactors, and a discussion of the importance of PROTEUS is given; (5) A plan has been finalized for integrating the high-fidelity, three-dimensional reactor code SHARP with both the systems-level code RELAP7 and the fuel assembly code AMP. This is a new initiative; (6) Work began to evaluate the applicability of AMP to the problem of dry storage of used fuel and to define a relevant problem to test the applicability; (7) A code to obtain phonon spectra from the force-constant matrix for a crystalline lattice has been completed.
This important bridge between subcontinuum and continuum phenomena is discussed; (8) Benchmarking was begun on the meso-scale, finite-element fuels code MARMOT to validate its new variable splitting algorithm; (9) A very computationally demanding simulation of diffusion-driven nucleation of new microstructural features has been completed. An explanation of the difficulty of this simulation is given; (10) Experiments were conducted with deformed steel to validate a crystal plasticity finite-element code for body-centered cubic iron; (11) The Capability Transfer Roadmap was completed and published as an internal laboratory technical report; (12) The AMP fuel assembly code input generator was integrated into the NEAMS Integrated Computational Environment (NiCE). More details on the planned NEAMS computing environment are given; and (13) The NEAMS program website (neams.energy.gov) is nearly ready to launch.
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m sub 0 binary memory cells and k sub 0 (k sub 0 > m sub 0) inputs, a state diagram of 2(exp k sub 0) states was used for the transfer function bound. A reduced state diagram of (2(exp m sub 0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
NASA Technical Reports Server (NTRS)
Kittleson, John K.; Yu, Yung H.
1987-01-01
Holographic interferometry and computer-aided tomography (CAT) are used to determine the transonic velocity field of a model rotor blade in hover. A pulsed ruby laser recorded 40 interferograms with a 2-ft-diameter field of view near the model rotor blade tip operating at a tip Mach number of 0.90. After digitizing the interferograms and extracting the fringe order functions, the data are transferred to a CAT code. The CAT code then calculates the perturbation velocity in several planes above the blade surface. The values from the holography-CAT method compare favorably with previously obtained numerical computations in most locations near the blade tip. The results demonstrate the technique's potential for three-dimensional transonic rotor flow studies.
New Challenges in Computational Thermal Hydraulics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadigaroglu, George; Lakehal, Djamel
New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three-dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.
Hot air impingement on a flat plate using Large Eddy Simulation (LES) technique
NASA Astrophysics Data System (ADS)
Plengsa-ard, C.; Kaewbumrung, M.
2018-01-01
Hot gas jets impinging on a flat plate generate very high heat transfer coefficients in the impingement zone. Accurate prediction of the heat transfer magnitude near the stagnation point and of the heat flux distribution is needed. This research studies the heat transfer and flow field resulting from a single hot air jet impinging on a wall. The simulation is carried out using the computational fluid dynamics (CFD) commercial code FLUENT. A Large Eddy Simulation (LES) approach with a subgrid-scale Smagorinsky-Lilly model is presented. The classical Werner-Wengle wall model is used to compute the predicted results of velocity and temperature near walls. The Smagorinsky constant in the turbulence model is set to 0.1 and is kept constant throughout the investigation. The hot gas jet impingement on the flat plate with a constant surface temperature is chosen to validate the predicted heat flux results with experimental data. The jet Reynolds number is equal to 20,000 and a fixed jet-to-plate spacing of H/D = 2.0 is used. The Nusselt number on the impingement surface is calculated. As predicted by the wall model, the instantaneous computed Nusselt number agrees fairly well with experimental data. The largest values of the calculated Nusselt number are near the stagnation point and decrease monotonically in the wall jet region. Also, the contour plots of instantaneous values of wall heat flux on the flat plate are captured by the LES simulation.
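For reference, the local Nusselt number reported in such studies is typically formed from the computed wall heat flux as Nu = qw D / (k (Tjet - Twall)), with D the jet diameter; the short sketch below assumes this definition and generic variable names, not quantities taken from the paper.

```python
# Simple post-processing sketch: local Nusselt number on the impingement plate
# from the computed wall heat flux, Nu = q_w * D / (k_air * (T_jet - T_wall)).
import numpy as np

def nusselt(q_wall, T_jet, T_wall, D, k_air=0.0263):
    """q_wall [W/m^2], temperatures [K], D [m], k_air [W/m/K] near 300 K."""
    h = q_wall / (T_jet - T_wall)      # local heat transfer coefficient
    return h * D / k_air
```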
Topology optimization of natural convection: Flow in a differentially heated cavity
NASA Astrophysics Data System (ADS)
Saglietti, Clio; Schlatter, Philipp; Berggren, Martin; Henningson, Dan
2017-11-01
The goal of the present work is to develop methods for optimization of the design of natural convection cooled heat sinks, using resolved simulation of both fluid flow and heat transfer. We rely on mathematical programming techniques combined with direct numerical simulations in order to iteratively update the topology of a solid structure towards optimality, i.e. until the design yielding the best performance is found, while satisfying a specific set of constraints. The investigated test case is a two-dimensional differentially heated cavity, in which the two vertical walls are held at different temperatures. The buoyancy force induces a swirling convective flow around a solid structure, whose topology is optimized to maximize the heat flux through the cavity. We rely on the spectral-element code Nek5000 to compute a high-order accurate solution of the natural convection flow arising from the conjugate heat transfer in the cavity. The laminar, steady-state solution of the problem is evaluated with a time-marching scheme that has an increased convergence rate; the actual iterative optimization is obtained using a steepest-descent algorithm, and the gradients are conveniently computed using the continuous adjoint equations for convective heat transfer.
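The iterative update can be sketched schematically: an adjoint solve supplies the gradient of the heat-flux objective with respect to the material-distribution field, and a projected gradient step (ascent here, since the flux is maximized) updates the design. The snippet below is a schematic under these assumptions, not the Nek5000-based implementation, and the function names are placeholders.

```python
# Schematic topology-optimization loop: projected gradient ascent on a
# material-distribution field using adjoint-based gradients of the objective.
import numpy as np

def optimize(theta, objective_and_gradient, step=0.05, n_iter=100):
    """theta: design field in [0, 1] (0 = fluid, 1 = solid) on the mesh;
    objective_and_gradient: returns (heat flux J, dJ/dtheta) from the flow,
    heat-transfer, and adjoint solves (placeholder callable)."""
    for _ in range(n_iter):
        J, grad = objective_and_gradient(theta)
        theta = np.clip(theta + step * grad, 0.0, 1.0)  # ascent: maximize J
    return theta
```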
Modeling Radiative Heat Transfer and Turbulence-Radiation Interactions in Engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Chandan; Sircar, Arpan; Ferreyro-Fernandez, Sebastian
Detailed radiation modelling in piston engines has received relatively little attention to date. Recently, it is being revisited in light of current trends towards higher operating pressures and higher levels of exhaust-gas recirculation, both of which enhance molecular gas radiation. Advanced high-efficiency engines also are expected to function closer to the limits of stable operation, where even small perturbations to the energy balance can have a large influence on system behavior. Here several different spectral radiation property models and radiative transfer equation (RTE) solvers have been implemented in an OpenFOAM-based engine CFD code, and simulations have been performed for a full-load (peak pressure ~200 bar) heavy-duty diesel engine. Differences in computed temperature fields, NO and soot levels, and wall heat transfer rates are shown for different combinations of spectral models and RTE solvers. The relative importance of molecular gas radiation versus soot radiation is examined. And the influence of turbulence-radiation interactions is determined by comparing results obtained using local mean values of composition and temperature to compute radiative emission and absorption with those obtained using a particle-based transported probability density function method.
Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan
2013-01-01
Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95 pct. of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particle sizes vary from less than 0.2 to greater than 3.0 microns. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment called the exponential sum or k-distribution approximation was developed. The chief advantage of the exponential sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_nu over frequency. The exponential sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential sum approach to Martian conditions.
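A hedged sketch of the exponential-sum (k-distribution) idea described above: the band-averaged transmittance is evaluated as a short weighted sum of exponentials, T(u) = sum_i w_i exp(-k_i u), instead of a line-by-line integral over frequency. The k_i, w_i pairs and absorber amounts below are illustrative assumptions, not values from the study.

```python
# Band-averaged transmittance via an exponential-sum (k-distribution) quadrature (toy values).
import numpy as np

k_i = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])   # absorption coefficients per unit absorber amount (assumed)
w_i = np.array([0.40, 0.30, 0.18, 0.09, 0.03])  # quadrature weights, sum to 1 (assumed)

def band_transmittance(u):
    """Band-averaged transmittance for absorber amount u."""
    return np.sum(w_i * np.exp(-k_i * u))

for u in (0.1, 1.0, 10.0, 100.0):
    print(f"u = {u:6.1f}  T = {band_transmittance(u):.4f}")
```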
An Algorithm to Compress Line-transition Data for Radiative-transfer Calculations
NASA Astrophysics Data System (ADS)
Cubillos, Patricio E.
2017-11-01
Molecular line-transition lists are an essential ingredient for radiative-transfer calculations. With recent databases now surpassing the billion-line mark, handling them has become computationally prohibitive, due to both the required processing power and memory. Here I present a temperature-dependent algorithm to separate strong from weak line transitions, reformatting the large majority of the weaker lines into a cross-section data file, and retaining the detailed line-by-line information of the fewer strong lines. For any given molecule over the 0.3-30 μm range, this algorithm reduces the number of lines to a few million, enabling faster radiative-transfer computations without a significant loss of information. The final compression rate depends on how densely populated the spectrum is. I validate this algorithm by comparing Exomol’s HCN extinction-coefficient spectra between the complete (65 million line transitions) and compressed (7.7 million) line lists. Over the 0.6-33 μm range, the average difference between extinction-coefficient values is less than 1%. A Python/C implementation of this algorithm is open-source and available at https://github.com/pcubillos/repack. So far, this code handles the Exomol and HITRAN line-transition formats.
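A hedged sketch of the strong/weak separation concept: at a given temperature, lines are binned in wavenumber, sorted by strength within each bin, and those whose cumulative contribution stays below a tolerance are flagged as weak (candidates for the cross-section file). This is an illustration of the idea only, not the actual repack implementation; the synthetic line list, bin count, and tolerance are assumptions.

```python
# Flag "weak" lines whose cumulative strength per wavenumber bin falls below a tolerance (toy data).
import numpy as np

rng = np.random.default_rng(0)
wn = rng.uniform(1000.0, 1010.0, 100_000)        # line centers [cm^-1] (synthetic)
strength = 10 ** rng.normal(-24, 2, wn.size)     # line strengths at some temperature (synthetic)

nbins, tol = 100, 0.01                           # coarse bins and 1% tolerance (assumed)
bins = np.linspace(wn.min(), wn.max(), nbins + 1)
weak = np.zeros(wn.size, dtype=bool)

for b in range(nbins):
    idx = np.where((wn >= bins[b]) & (wn < bins[b + 1]))[0]
    order = idx[np.argsort(strength[idx])]       # weakest first
    cum = np.cumsum(strength[order])
    weak[order] = cum < tol * strength[idx].sum()   # lines jointly contributing < tol are weak

print(f"{weak.sum()} of {wn.size} lines flagged weak (destined for the cross-section file)")
```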
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2012-01-01
The purpose of this report is to summarize and document the work done to enable a NASA CFD code to model the laminar-turbulent transition process on an isolated turbine blade. The ultimate purpose of the present work is to down-select a transition model that would allow the flow simulation of a variable speed power turbine to be accurately performed. The flow modeling in its final form will account for the blade row interactions and their effects on transition, which would lead to accurate accounting for losses. The present work only concerns itself with steady flows of variable inlet turbulence. The low Reynolds number k-ω model of Wilcox and a modified version of the same model will be used for modeling of transition on experimentally measured blade pressure and heat transfer. It will be shown that the k-ω model and its modified variant fail to simulate the transition with any degree of accuracy. A case is thus made for the adoption of more accurate transition models. Three-equation models based on the work of Mayle on Laminar Kinetic Energy were explored. The three-equation model of Walters and Leylek was thought to be in a relatively mature state of development and was implemented in the Glenn-HT code. Two-dimensional heat transfer predictions of flat plate flow and two-dimensional and three-dimensional heat transfer predictions on a turbine blade were performed and reported herein. Surface heat transfer rate serves as a sensitive indicator of transition. With the newly implemented model, it was shown that the simulation of the transition process is much improved over the baseline k-ω model for the single Reynolds number and pressure ratio attempted, while agreement with heat transfer data became more satisfactory. Armed with the new transition model, total-pressure losses of the computed three-dimensional flow of the E3 tip section cascade were compared to the experimental data for a range of incidence angles. The results obtained form a partial loss bucket for the chosen blade. In time the loss bucket will be populated with losses at additional incidences. Results obtained thus far will be discussed herein.
NASA Astrophysics Data System (ADS)
Santos, M. V.; Lespinard, A. R.
2011-12-01
The shelf life of mushrooms is very limited since they are susceptible to physical and microbial attack; therefore they are usually blanched and immediately frozen for commercial purposes. The aim of this work was to develop a numerical model using the finite element technique to predict freezing times of mushrooms considering the actual shape of the product. The original heat transfer equation was reformulated using a combined enthalpy-Kirchhoff formulation; an in-house computational program using Matlab 6.5 (MathWorks, Natick, Massachusetts) was therefore developed, given the difficulties encountered when simulating this non-linear problem in commercial software. Digital images were used to generate the irregular contour and the domain discretization. The numerical predictions agreed with the experimental time-temperature curves during freezing of mushrooms (maximum absolute error <3.2°C), obtaining accurate results and minimum computer processing times. The codes were then applied to determine required processing times for different operating conditions (external fluid temperatures and surface heat transfer coefficients).
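A hedged one-dimensional illustration of an enthalpy-based formulation for a freezing problem: the energy equation is advanced in volumetric enthalpy H, and temperature is recovered from H through a simple apparent phase-change relation. The slab geometry, thermal properties, H(T) relation, and boundary conditions below are crude assumptions for illustration, not the paper's finite-element mushroom model.

```python
# Explicit 1-D enthalpy-method time stepping for freezing of a slab (toy properties).
import numpy as np

n, L = 51, 0.02                                  # nodes and slab half-thickness [m] (assumed)
dx = L / (n - 1)
k, rho, cp, Lf = 0.5, 1000.0, 3500.0, 2.5e5      # conductivity, density, heat capacity, latent heat (assumed)
Tf, T_inf, h = -1.0, -30.0, 30.0                 # freezing point, coolant temperature, surface coefficient (assumed)

def T_of_H(H):
    """Recover temperature from volumetric enthalpy (datum H=0: fully frozen at Tf)."""
    T = np.where(H >= rho * Lf, Tf + (H - rho * Lf) / (rho * cp), Tf)   # unfrozen region
    return np.where(H <= 0.0, Tf + H / (rho * cp), T)                   # frozen region

H = rho * (Lf + cp * (20.0 - Tf)) * np.ones(n)   # start unfrozen at 20 C
dt = 0.2 * rho * cp * dx**2 / k                  # conservative explicit time step

for step in range(20000):
    T = T_of_H(H)
    lap = np.zeros(n)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    H[1:-1] += dt * k * lap[1:-1]                # interior nodes: conduction only
    H[0] = H[1]                                  # symmetry at the slab center (crude)
    H[-1] += dt * 2 / dx * (k * (T[-2] - T[-1]) / dx - h * (T[-1] - T_inf))  # convective surface half-cell

print(f"center temperature after {20000 * dt:.0f} s: {T_of_H(H)[0]:.2f} C")
```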
GPU accelerated study of heat transfer and fluid flow by lattice Boltzmann method on CUDA
NASA Astrophysics Data System (ADS)
Ren, Qinlong
The lattice Boltzmann method (LBM) has been developed as a powerful numerical approach for simulating complex fluid flow and heat transfer phenomena during the past two decades. As a mesoscale method based on kinetic theory, LBM has several advantages compared with traditional numerical methods, such as the physical representation of microscopic interactions, the handling of complex geometries, and its highly parallel nature. The lattice Boltzmann method has been applied to various fluid behaviors and heat transfer processes, including conjugate heat transfer, magnetic and electric fields, diffusion and mixing processes, chemical reactions, multiphase flow, phase change processes, non-isothermal flow in porous media, microfluidics, fluid-structure interactions in biological systems, and so on. In addition, as a non-body-conformal grid method, the immersed boundary method (IBM) can be applied to handle complex or moving geometries in the domain. The immersed boundary method can be coupled with the lattice Boltzmann method to study heat transfer and fluid flow problems: heat transfer and fluid flow are solved on Eulerian nodes by LBM, while the complex solid geometries are captured by Lagrangian nodes using the immersed boundary method. Parallel computing has been a popular topic for many decades as a means of accelerating computations in engineering and scientific fields. Today, almost all laptops and desktops have central processing units (CPUs) with multiple cores that can be used for parallel computing. However, the cost of CPUs with hundreds of cores is still high, which limits their use for high-performance computing on personal computers. Graphics processing units (GPUs), originally used in computer video cards, have emerged as powerful high-performance computing devices in recent years. Unlike CPUs, a GPU with thousands of cores is inexpensive; for example, the GPU (GeForce GTX TITAN) used in the current work has 2688 cores and costs only 1,000 US dollars. The release of NVIDIA's CUDA architecture in 2007, which includes both hardware and a programming environment, has made GPU computing attractive. Due to its highly parallel nature, the lattice Boltzmann method has been successfully ported to GPUs with significant performance benefits in recent years. In the current work, an LBM CUDA code is developed for different fluid flow and heat transfer problems. In this dissertation, the lattice Boltzmann method and immersed boundary method are used to study natural convection in an enclosure with an array of conducting obstacles, double-diffusive convection in a vertical cavity with Soret and Dufour effects, the PCM melting process in a latent heat thermal energy storage system with internal fins, mixed convection in a lid-driven cavity with a sinusoidal cylinder, and AC electrothermal pumping in microfluidic systems on a CUDA computational platform. It is demonstrated that LBM is an efficient method to simulate complex heat transfer problems using a GPU on CUDA.
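A minimal sketch of the core LBM algorithm referred to above, written here as a CPU NumPy D2Q9 BGK collide-and-stream loop on a periodic domain rather than the dissertation's CUDA implementation. Grid size, relaxation time, and the initial perturbation are assumptions for illustration only.

```python
# Minimal D2Q9 BGK lattice Boltzmann step (collision + streaming) on a periodic domain.
import numpy as np

nx, ny, tau = 64, 64, 0.6                        # lattice size and relaxation time (assumed)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)         # D2Q9 weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])      # discrete lattice velocities

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each of the 9 directions."""
    cu = np.einsum('qi,ixy->qxy', c, np.array([ux, uy]))
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))                                          # uniform density
ux = 0.05 * np.sin(2*np.pi*np.arange(nx)/nx)[:, None] * np.ones((1, ny))  # small shear perturbation
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(100):
    rho = f.sum(axis=0)                                          # macroscopic density
    ux = np.einsum('q,qxy->xy', c[:, 0], f) / rho                # macroscopic velocity
    uy = np.einsum('q,qxy->xy', c[:, 1], f) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau                   # BGK collision
    for q in range(9):                                           # streaming with periodic wrap
        f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)

print("mean density after 100 steps:", rho.mean())
```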
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS, and FLUKA, which has been examined for uniform scanning proton beams, needs to be evaluated for spot scanning. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique. We therefore conclude that customized parameters must be set with reference to the optimized parameters of the corresponding irradiation technique in order to achieve artifact-free MC simulation for use in computational experiments and clinical treatments.
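A hedged sketch of one of the comparisons used above: given percentage-depth-dose (PDD) curves from two codes, locate the distal depth at which the dose falls to 80% of its maximum (a common range metric) and report the range difference. The synthetic curves below are placeholders, not FLUKA/GATE/PHITS output.

```python
# Compare the distal 80% range (R80) of two synthetic percentage-depth-dose curves.
import numpy as np

depth = np.linspace(0.0, 200.0, 401)             # depth in water [mm]

def synthetic_pdd(r80):
    """Crude Bragg-curve-like shape whose distal falloff sits near r80 (illustration only)."""
    return (1.0 + 0.004 * depth) / (1.0 + np.exp((depth - r80) / 1.5))

def distal_r80(depth, dose):
    dose = dose / dose.max()
    distal = np.arange(np.argmax(dose), dose.size)          # points beyond the dose peak
    # interpolate the depth at which the normalized dose crosses 0.8 on the distal side
    return float(np.interp(0.8, dose[distal][::-1], depth[distal][::-1]))

pdd_ref, pdd_test = synthetic_pdd(150.0), synthetic_pdd(150.6)
print("R80 range difference:", distal_r80(depth, pdd_test) - distal_r80(depth, pdd_ref), "mm")
```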
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, Dann A.; Blanchat, Thomas K.
It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object in addition to measuring the thermal response of said object located within the fire plume for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.
NASA Astrophysics Data System (ADS)
Juvela, Mika J.
The relationship between physical conditions of an interstellar cloud and the observed radiation is defined by the radiative transfer problem. Radiative transfer calculations are needed if, e.g., one wants to disentangle abundance variations from excitation effects or wants to model variations of dust properties inside an interstellar cloud. New observational facilities (e.g., ALMA and Herschel) will bring improved accuracy both in terms of intensity and spatial resolution. This will enable detailed studies of the densest sub-structures of interstellar clouds and star forming regions. Such observations must be interpreted with accurate radiative transfer methods and realistic source models. In many cases this will mean modelling in three dimensions. High optical depths and observed wide range of linear scales are, however, challenging for radiative transfer modelling. A large range of linear scales can be accessed only with hierarchical models. Figure 1 shows an example of the use of a hierarchical grid for radiative transfer calculations when the original model cloud (L=10 pc,
Energy efficient rateless codes for high speed data transfer over free space optical channels
NASA Astrophysics Data System (ADS)
Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.
2015-03-01
Terrestrial Free Space Optical (FSO) links transmit information by using the atmosphere (free space) as a medium. In this paper, we have investigated the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, which are a class of Fountain codes, can be used independent of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error correction codes with minimal overheads on the power can be used. We also employ a combination of Binary Phase Shift Keying (BPSK) with provision for modification of the threshold and optimized LT codes with belief propagation for decoding. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability. Performance of ARQ is limited by the number of retransmissions and the corresponding time delay. We prove through theoretical computations and simulations that LT codes consume less energy per bit. We validate the feasibility of using energy efficient LT codes over ARQ for FSO links to be used in optical wireless sensor networks within the eye safety limits.
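A hedged sketch of LT (fountain) encoding as described above: each output symbol is the XOR of d randomly chosen source packets, with d drawn from a degree distribution. The ideal soliton distribution is used here for brevity; a robust soliton distribution and belief-propagation decoding would be used in practice. Packet size and count are assumptions.

```python
# Generate one LT-encoded symbol from K source packets using the ideal soliton degree distribution.
import random

K = 16                                                            # number of source packets (assumed)
packets = [bytes([random.randrange(256)] * 8) for _ in range(K)]  # 8-byte synthetic packets

# ideal soliton distribution: P(1) = 1/K, P(d) = 1/(d(d-1)) for d = 2..K
degrees = list(range(1, K + 1))
weights = [1.0 / K] + [1.0 / (d * (d - 1)) for d in range(2, K + 1)]

def lt_encode_symbol():
    d = random.choices(degrees, weights=weights)[0]               # draw an encoding degree
    chosen = random.sample(range(K), d)                           # pick d distinct source packets
    sym = bytes(len(packets[0]))                                  # all-zero payload
    for i in chosen:
        sym = bytes(a ^ b for a, b in zip(sym, packets[i]))       # XOR the chosen packets together
    return chosen, sym                                            # neighbor list + encoded payload

neighbors, payload = lt_encode_symbol()
print("encoded symbol covers source packets", neighbors)
```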
HAL/S - The programming language for Shuttle
NASA Technical Reports Server (NTRS)
Martin, F. H.
1974-01-01
HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.
Swept shock/boundary layer interaction experiments in support of CFD code validation
NASA Technical Reports Server (NTRS)
Settles, G. S.; Lee, Y.
1990-01-01
Research on the topic of shock wave/turbulent boundary layer interaction was carried out. Skin friction and surface pressure measurements in fin-induced, swept interactions were conducted, and heat transfer measurements in the same flows are planned. The skin friction data for a strong interaction case (Mach 4, fin-angles equal 16 and 20 degrees) were obtained, and their comparison with computational results was published. Surface pressure data for weak-to-strong fin interactions were also obtained.
Exhaust plume impingement of chemically reacting gas-particle flows
NASA Technical Reports Server (NTRS)
Smith, S. D.; Penny, M. M.; Greenwood, T. F.; Roberts, B. B.
1975-01-01
A series of computer codes has been developed to predict gas-particle flows and resulting impingement forces, moments and heating rates to surfaces immersed in the flow. The gas-particle flow solution is coupled via heat transfer and drag between the phases with chemical effects included in the gas phase. The flow solution and impingement calculations are discussed. Analytical results are compared with test data obtained to evaluate gas-particle effects on the Space Shuttle thermal protection system during the staging maneuver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohsuga, Ken; Takahashi, Hiroyuki R.
2016-02-20
We develop a numerical scheme for solving the equations of fully special relativistic radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by the angular quadrature of the intensity. In the present method, conservation of total mass, momentum, and energy of the radiation magnetofluids is guaranteed. We treat not only isotropic scattering but also Thomson scattering. The numerical method for the MHD part is the same as that of our previous work. The advection terms are explicitly solved, and the source terms, which describe the gas–radiation interaction, are implicitly integrated. Our code is suitable for massively parallel computing. We show that our code gives reasonable results in some numerical tests for propagating radiation and radiation hydrodynamics. In particular, the correct solution is obtained even in the optically very thin or moderately thin regimes, and the special relativistic effects are nicely reproduced.
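A hedged illustration of the angular quadrature step mentioned above: given the specific intensity sampled on a set of discrete directions, the radiation energy density, flux, and pressure tensor follow from weighted sums over solid angle. The direction set, weights, and intensity field below are simple assumptions, not the paper's scheme.

```python
# Radiation moments (energy density, flux, pressure tensor) by discrete angular quadrature.
import numpy as np

c = 1.0                                          # speed of light in code units (assumed)
n_theta, n_phi = 8, 16                           # simple product quadrature (assumed)
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
TH, PH = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(TH) * (np.pi / n_theta) * (2 * np.pi / n_phi)         # solid-angle weights (sum ~ 4*pi)

n = np.stack([np.sin(TH) * np.cos(PH),
              np.sin(TH) * np.sin(PH),
              np.cos(TH)], axis=-1)                              # unit direction vectors

I = 1.0 + 0.3 * n[..., 2]                        # mildly forward-peaked specific intensity (assumed)

E = np.sum(w * I) / c                                            # radiation energy density
F = np.einsum('ab,ab,abi->i', w, I, n)                           # radiation flux vector
P = np.einsum('ab,ab,abi,abj->ij', w, I, n, n) / c               # radiation pressure tensor

print("E =", E, " F_z =", F[2], " P_zz =", P[2, 2])
```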
Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study.
Kotchenova, Svetlana Y; Vermote, Eric F; Levy, Robert; Lyapustin, Alexei
2008-05-01
Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.
NASA Technical Reports Server (NTRS)
Bauer, Christopher
1993-01-01
Stirling engine heat exchangers are shell-and-tube type with oscillatory flow (zero-mean velocity) for the inner fluid. This heat transfer process involves laminar-transition-turbulent flow motions under oscillatory flow conditions. A low Reynolds number k-epsilon model (Lam-Bremhorst form) was utilized in the present study to simulate fluid flow and heat transfer in a circular tube. An empirical transition model was used to activate the low Reynolds number k-epsilon model at the appropriate time within the cycle for a given axial location within the tube. The computational results were compared with experimental flow and heat transfer data for: (1) velocity profiles, (2) kinetic energy of turbulence, (3) skin friction factor, (4) temperature profiles, and (5) wall heat flux. The experimental data were obtained for flow in a tube (38 mm diameter and 60 diameters long), with the maximum Reynolds number based on velocity being Re_max = 11840, at a dimensionless frequency (Valensi number) of Va = 80.2, at three axial locations X/D = 16, 30 and 44. The agreement between the computations and the experiment is excellent in the laminar portion of the cycle and good in the turbulent portion. Moreover, the location of transition was predicted accurately. The low Reynolds number k-epsilon model, together with an empirical transition model, is proposed herein to generate wall heat flux values at operating parameters different from the experimental conditions. Those computational data can be used for testing the much simpler and less accurate one-dimensional models utilized in 1-D Stirling engine design codes.
Numerical modeling of heat transfer and pasteurizing value during thermal processing of intact egg.
Abbasnezhad, Behzad; Hamdami, Nasser; Monteau, Jean-Yves; Vatankhah, Hamed
2016-01-01
Thermal pasteurization of eggs, a widely used nutritive food, has been simulated. A three-dimensional numerical model, a computational fluid dynamics code solving the heat transfer equations with natural convection and conduction mechanisms based on the finite element method, was developed to study the effect of air cell size and eggshell thickness. The model, confirmed by comparing experimental and numerical results, was able to predict the temperature profiles, the slowest heating zone, and the required heating time during pasteurization of intact eggs. The results showed that the air cell acted as a heat insulator. Increasing the air cell volume resulted in a decrease of the heat transfer rate and an increase in the required pasteurization time (up to 14%). The findings show that the effect of eggshell thickness on thermal pasteurization was not considerable in comparison to that of the air cell volume.
NASA Astrophysics Data System (ADS)
Cathala, Thierry; Douchin, Nicolas; Latger, Jean; Caillault, Karine; Fauqueux, Sandrine; Huet, Thierry; Lubarre, Luc; Malherbe, Claire; Rosier, Bernard; Simoneau, Pierre
2009-05-01
The SE-WORKBENCH workshop, also called CHORALE (the French designation for "simulated Optronic Acoustic Radar battlefield"), is used by the French DGA (MoD) and several other Defense organizations and companies all around the world to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multispectral 3D scenes that may contain several types of targets, and then generate the physical signal received by a sensor, typically an IR sensor. The SE-WORKBENCH can be used either as a collection of software modules through dedicated GUIs or as an API made of a large number of specialized toolkits. The SE-WORKBENCH is made of several functional blocks: one for geometrically and physically modeling the terrain and the targets, one for building the simulation scenario, and one for rendering the synthetic environment, both in real and non real time. Among the modules that the modeling block is composed of, SE-ATMOSPHERE is used to simulate the atmospheric conditions of a Synthetic Environment and then to integrate the impact of these conditions on a scene. This software product generates a physical atmosphere that can be exploited by the SE-WORKBENCH tools generating spectral images. It relies on several external radiative transfer models, such as MODTRAN V4.2 in the current version. MATISSE [4,5] is a background scene generator developed for the computation of natural background spectral radiance images and useful atmospheric radiative quantities (radiance and transmission along a line of sight, local illumination, solar irradiance ...). Backgrounds include atmosphere, low and high altitude clouds, sea and land. A particular characteristic of the code is its ability to take into account atmospheric spatial variability (temperatures, mixing ratio, etc.) along each line of sight. An Application Programming Interface (API) is included to facilitate its use in conjunction with external codes. MATISSE is currently considered as a new external radiative transfer model to be integrated in SE-ATMOSPHERE as a complement to MODTRAN. Compared to the latter, which is used as a whole, MATISSE can be used step by step and modularly as an API: this avoids pre-computing the large atmospheric parameter tables that are currently required with MODTRAN. The use of MATISSE will also enable a real coupling between the ray tracing process of the SE-WORKBENCH and the radiative transfer model of MATISSE. This will lead to the improvement of the link between a general atmospheric model and a specific 3D terrain. The paper will demonstrate the advantages for the SE-WORKBENCH of using MATISSE as a new atmospheric code, but also for computing the radiative properties of the sea surface.
SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE
NASA Technical Reports Server (NTRS)
Costello, F. A.
1994-01-01
The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third. The computations follow for the rest of the system, back to the first component. On the first pass, the user finds that the calculated outlet conditions of the last component do not match the estimated inlet conditions of the first. The user then modifies the estimated inlet conditions of the first component in an attempt to match the calculated values. The user-estimated values are called State Variables. The differences between the user-estimated values and calculated values are called the Error Variables. The procedure systematically changes the State Variables until all of the Error Variables are less than the user-specified iteration limits. The solution procedure is referred to as SCX. It consists of two phases, the Systems phase and the Controller phase. The X is to imply experimental. SCX computes each next set of State Variables in two phases. In the first phase, SCX fixes the controller positions and modifies the other State Variables by the Newton-Raphson method. This first phase is the Systems phase. Once the Newton-Raphson method has solved the problem for the fixed controller positions, SCX next calculates new controller positions based on Newton's method while treating each sensor-controller pair independently but allowing all to change in one iteration. This phase is the Controller phase. SINFAC is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code for the additional routines to SINDA, the SINDA object code, command procedures, sample data and supporting documentation.
Additional documentation may be purchased at the price below. SINFAC was created for use on a DEC VAX under VMS. Source code is written in FORTRAN 77, requires 180k of memory, and should be fully transportable. The program was developed in 1988.
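A hedged toy version of the Systems-phase iteration described in the SINFAC abstract above: guess the state variables at the loop inlet, march around the loop component by component, and apply Newton-Raphson to drive the error variables (computed outlet minus guessed inlet) to zero. The two-variable loop model below is a placeholder, not SINFAC's component library or its SCX implementation.

```python
# Newton-Raphson on loop state variables using a finite-difference Jacobian (toy loop model).
import numpy as np

def march_around_loop(state):
    """Given guessed inlet [flow, enthalpy], return the computed loop outlet (toy components)."""
    flow, h = state
    h = h + 50.0 / max(flow, 1e-6)        # "heater": enthalpy rise from a fixed heat load (assumed)
    flow = 0.8 * flow + 0.02 * h          # "pump/valve": flow response to loop state (assumed)
    h = 0.5 * h                           # "cooler": proportional enthalpy removal (assumed)
    return np.array([flow, h])

def error_variables(state):
    return march_around_loop(state) - state      # computed outlet minus guessed inlet

state = np.array([1.0, 10.0])                    # initial guess of the State Variables (assumed)
for it in range(20):
    err = error_variables(state)
    if np.max(np.abs(err)) < 1e-8:
        break
    J = np.empty((2, 2))                         # finite-difference Jacobian of the Error Variables
    for j in range(2):
        dstate = state.copy()
        dstate[j] += 1e-6
        J[:, j] = (error_variables(dstate) - err) / 1e-6
    state = state - np.linalg.solve(J, err)      # Newton-Raphson update of the State Variables

print("converged state variables [flow, enthalpy]:", state, "after", it, "iterations")
```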
Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunz, Josiah; Snopok, Pavel; Berz, Martin
Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map-Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
Experimental validation of pulsed column inventory estimators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyerlein, A.L.; Geldard, J.F.; Weh, R.
Near-real-time accounting (NRTA) for reprocessing plants relies on the timely measurement of all transfers through the process area and all inventory in the process. It is difficult to measure the inventory of the solvent contactors; therefore, estimation techniques are considered. We have used experimental data obtained at the TEKO facility in Karlsruhe and have applied computer codes developed at Clemson University to analyze this data. For uranium extraction, the computer predictions agree to within 15% of the measured inventories. We believe this study is significant in demonstrating that using theoretical models with a minimum amount of process data may be an acceptable approach to column inventory estimation for NRTA. 15 refs., 7 figs.
Overview of aerothermodynamic loads definition study
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
1991-01-01
The objective of the Aerothermodynamic Loads Definition Study is to develop methods of accurately predicting the operating environment in advanced Earth-to-Orbit (ETO) propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. Development of time averaged and time dependent three dimensional viscous computer codes as well as experimental verification and engine diagnostic testing are considered to be essential in achieving that objective. Time-averaged, nonsteady, and transient operating loads must all be well defined in order to accurately predict powerhead life. Described here is work in unsteady heat flow analysis, improved modeling of preburner flow, turbulence modeling for turbomachinery, computation of three dimensional flow with heat transfer, and unsteady viscous multi-blade row turbine analysis.
Performance of a supercharged direct-injection stratified-charge rotary combustion engine
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1990-01-01
A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.
Accelerating next generation sequencing data analysis with system level optimizations.
Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid
2017-08-22
Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features in modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune the system level parameters before running the application. We studied the GATK-HaplotypeCaller, which is part of common NGS workflows and consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked and the execution time of HaplotypeCaller was optimized by various system level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing, (ii) architecture-specific tuning in the PairHMM library for vectorization, (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer, and (iv) switching the default 'on-demand' CPU frequency mode to 'performance' mode to accelerate the Java multi-threads. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7, respectively.
Analysis of longwave radiation for the Earth-atmosphere system
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Venuru, C. S.; Subramanian, S. V.
1983-01-01
Accurate radiative transfer models are used to determine the upwelling atmospheric radiance and net radiative flux in the entire longwave spectral range. The validity of the quasi-random band model is established by comparing the results of this model with those of line-by-line formulations and with available theoretical and experimental results. Existing radiative transfer models and computer codes are modified to include various surface and atmospheric effects (surface reflection, nonequilibrium radiation, and cloud effects). The program is used to evaluate the radiative flux in clear atmosphere, provide sensitivity analysis of upwelling radiance in the presence of clouds, and determine the effects of various climatological parameters on the upwelling radiation and anisotropic function. Homogeneous and nonhomogeneous gas emissivities can also be evaluated under different conditions.
Frank, Jeffrey I.; Rosengart, Axel J.; Kasza, Ken; Yu, Wenhua; Chien, Tai-Hsin; Franklin, Jeff
2006-10-10
Apparatuses, systems, methods, and computer code for, among other things, monitoring the health of samples such as the brain while providing local cooling or heating. A representative device is a heat transfer probe, which includes an inner channel, a tip, a concentric outer channel, a first temperature sensor, and a second temperature sensor. The inner channel is configured to transport working fluid from an inner inlet to an inner outlet. The tip is configured to receive at least a portion of the working fluid from the inner outlet. The concentric outer channel is configured to transport the working fluid from the inner outlet to an outer outlet. The first temperature sensor is coupled to the tip, and the second temperature sensor spaced apart from the first temperature sensor.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-21
...The U.S. Department of Commerce and U.S. Department of Homeland Security are requesting information on the requirements of, and possible approaches to creating, a voluntary industry code of conduct to address the detection, notification and mitigation of botnets. Over the past several years, botnets have increasingly put computer owners at risk. A botnet infection can lead to the monitoring of a consumer's personal information and communication, and exploitation of that consumer's computing power and Internet access. Networks of these compromised computers are often used to disseminate spam, to store and transfer illegal content, and to attack the servers of government and private entities with massive, distributed denial of service attacks. The Departments seek public comment from all Internet stakeholders, including the commercial, academic, and civil society sectors, on potential models for detection, notification, prevention, and mitigation of botnets' illicit use of computer equipment.
NASA Astrophysics Data System (ADS)
Grose, C. J.
2008-05-01
Numerical geodynamics models of heat transfer are typically thought of as specialized topics of research requiring knowledge of specialized modelling software, Linux platforms, and state-of-the-art finite-element codes. I have implemented analytical and numerical finite-difference techniques with Microsoft Excel 2007 spreadsheets to solve complex solid-earth heat transfer problems for use by students, teachers, and practicing scientists without specialty in geodynamics modelling techniques and applications. While implementation of equations for use in Excel spreadsheets is occasionally cumbersome, once case boundary structure and node equations are developed, spreadsheet manipulation becomes routine. Model experimentation by modifying parameter values, geometry, and grid resolution makes Excel a useful tool, whether in the classroom at the undergraduate or graduate level or for more engaging student projects. Furthermore, the ability to incorporate complex geometries and heat-transfer characteristics makes it ideal for first and occasionally higher order geodynamics simulations to better understand and constrain the results of professional field research in a setting that does not require the constraints of state-of-the-art modelling codes. The straightforward expression and manipulation of model equations in Excel can also serve as a medium to better understand the confusing notations of advanced mathematical problems. To illustrate the power and robustness of computation and visualization in spreadsheet models, I focus primarily on one-dimensional analytical and two-dimensional numerical solutions to two case problems: (i) the cooling of oceanic lithosphere and (ii) temperatures within subducting slabs. Excel source documents will be made available.
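A hedged worked example of the first case problem mentioned above: the half-space cooling model for oceanic lithosphere, T(z, t) = T_s + (T_m - T_s) erf(z / (2 sqrt(kappa t))), evaluated on a small depth-age grid much as one might lay it out in spreadsheet cells. The parameter values are typical textbook assumptions, not the abstract's own numbers.

```python
# Half-space cooling model: lithosphere temperature as a function of depth and plate age.
import math

T_s, T_m = 0.0, 1350.0        # surface and mantle temperatures [C] (assumed)
kappa = 1e-6                  # thermal diffusivity [m^2/s] (assumed)
seconds_per_Myr = 3.156e13

ages_Myr = [1, 10, 50, 100]
depths_km = [0, 10, 25, 50, 100]

print("depth_km " + "".join(f"{a:>8d}" for a in ages_Myr) + "   (temperatures in C, ages in Myr)")
for z_km in depths_km:
    row = f"{z_km:8d} "
    for age in ages_Myr:
        t = age * seconds_per_Myr
        T = T_s + (T_m - T_s) * math.erf(z_km * 1000.0 / (2.0 * math.sqrt(kappa * t)))
        row += f"{T:8.0f}"
    print(row)
```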
Comparison of Predicted and Measured Turbine Vane Rough Surface Heat Transfer
NASA Technical Reports Server (NTRS)
Boyle, R. J.; Spuckler, C. M.; Lucci, B. L.
2000-01-01
The proposed paper compares predicted turbine vane heat transfer for a rough surface over a wide range of test conditions with experimental data. Predictions were made for the entire vane surface. However, measurements were made only over the suction surface of the vane and the leading edge region of the pressure surface. Comparisons are shown for a wide range of test conditions. Inlet pressures varied between 3 and 15 psia, and exit Mach numbers ranged between 0.3 and 0.9. Thus, while a single roughened vane was used for the tests, the effective roughness, k+, varied by more than a factor of ten. Results were obtained for freestream turbulence levels of 1 and 10%. Heat transfer predictions were obtained using the Navier-Stokes computer code RVCQ3D. Two turbulence models, suitable for rough surface analysis, are incorporated in this code. The Cebeci-Chang roughness model is part of the algebraic turbulence model. The k-omega turbulence model accounts for the effect of roughness in the application of the boundary condition. Roughness causes turbulent flow over the vane surface. Even after accounting for transition, surface roughness significantly increased heat transfer compared to a smooth surface. The k-omega results agreed better with the data than the Cebeci-Chang model. However, the low Reynolds number k-omega model did not accurately account for roughness when the freestream turbulence level was low. The high Reynolds number version of this model was more suitable when the freestream turbulence was low.
Development of RWHet to Simulate Contaminant Transport in Fractured Porous Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yong; LaBolle, Eric; Reeves, Donald M
2012-07-01
Accurate simulation of matrix diffusion in regional-scale dual-porosity and dual-permeability media is a critical issue for the DOE Underground Test Area (UGTA) program, given the prevalence of fractured geologic media on the Nevada National Security Site (NNSS). Contaminant transport through regional-scale fractured media is typically quantified by particle-tracking based Lagrangian solvers through the inclusion of dual-domain mass transfer algorithms that probabilistically determine particle transfer between fractures and unfractured matrix blocks. UGTA applications include a wide variety of fracture aperture and spacing, effective diffusion coefficients ranging four orders of magnitude, and extreme end member retardation values. This report incorporates the current dual-domain mass transfer algorithms into the well-known particle tracking code RWHet [LaBolle, 2006], and then tests and evaluates the updated code. We also develop and test a direct numerical simulation (DNS) approach to replace the classical transfer probability method in characterizing particle dynamics across the fracture/matrix interface. The final goal of this work is to implement the algorithm identified as most efficient and effective into RWHet, so that an accurate and computationally efficient software suite can be built for dual-porosity/dual-permeability applications. RWHet is a mature Lagrangian transport simulator with a substantial user-base that has undergone significant development and model validation. In this report, we also substantially tested the capability of RWHet in simulating passive and reactive tracer transport through regional-scale, heterogeneous media. Four dual-domain mass transfer methodologies were considered in this work. We first developed the empirical transfer probability approach proposed by Liu et al. [2000], and coded it into RWHet. The particle transfer probability from one continuum to the other is proportional to the ratio of the mass entering the other continuum to the mass in the current continuum. Numerical examples show that this method is limited to certain ranges of parameters, due to an intrinsic assumption of an equilibrium concentration profile in the matrix blocks in building the transfer probability. Subsequently, this method fails in describing mass transfer for parameter combinations that violate this assumption, including small diffusion coefficients (i.e., the free-water molecular diffusion coefficient 1×10⁻¹¹ m²/s), relatively large fracture spacings (such as meter), and/or relatively large matrix retardation coefficients (i.e., ). These “outliers” in parameter range are common in UGTA applications. To address the above limitations, we then developed a Direct Numerical Simulation (DNS)-Reflective method. The novel DNS-Reflective method can directly track the particle dynamics across the fracture/matrix interface using a random walk, without any empirical assumptions. This advantage should make the DNS-Reflective method feasible for a wide range of parameters. Numerical tests of the DNS-Reflective, however, show that the method is computationally very demanding, since the time step must be very small to resolve particle transfer between fractures and matrix blocks. To improve the computational efficiency of the DNS approach, we then adopted Roubinet et al.’s method [2009], which uses first passage time distributions to simulate dual-domain mass transfer.
The DNS-Roubinet method was found to be computationally more efficient than the DNS-Reflective method. It matches the analytical solution for the whole range of major parameters (including diffusion coefficient and fracture aperture values that are considered “outliers” for Liu et al.’s transfer probability method [2000]) for a single fracture system. The DNS-Roubinet method, however, has its own disadvantage: for a parallel fracture system, the truncation of the first passage time distribution creates apparent errors when the fracture spacing is small, and thus it tends to erroneously predict breakthrough curves (BTCs) for the parallel fracture system. Finally, we adopted the transient range approach proposed by Pan and Bodvarsson [2002] in RWHet. In this method, particle transfer between fractures and matrix blocks can be resolved without using very small time steps. It does not use any truncation of the first passage time distribution for particles. Hence it does not have the limitation identified above for the DNS-Reflective method and the DNS-Roubinet method. Numerical results were checked against analytical solutions, and also compared to DCPTV2.0 [Pan, 2002]. This version of RWHet (called RWHet-Pan&Bodvarsson in this report) can accurately capture contaminant transport in fractured porous media for a full range of parameters without any practical or theoretical limitations.
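A hedged toy illustration of dual-domain particle tracking in general: particles advect and disperse along a fracture and exchange with an immobile matrix domain through a first-order transfer probability per time step. This is a generic sketch of the particle-transfer idea only, not the Liu et al., DNS-Reflective, DNS-Roubinet, or Pan-Bodvarsson algorithms evaluated in the report; all parameter values are assumptions.

```python
# Generic mobile-immobile random-walk particle tracking with first-order domain exchange.
import numpy as np

rng = np.random.default_rng(1)
n_particles, nsteps, dt = 5000, 2000, 0.1        # particle count, steps, time step (assumed)
v, D = 1.0, 0.05                                 # fracture velocity and dispersion coefficient (assumed)
k_fm, k_mf = 0.02, 0.005                         # fracture->matrix and matrix->fracture rates (assumed)

x = np.zeros(n_particles)                        # particle positions along the fracture
mobile = np.ones(n_particles, dtype=bool)        # True while a particle resides in the fracture

for _ in range(nsteps):
    # random-walk step for mobile particles: advection plus dispersion
    x[mobile] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(mobile.sum())
    # first-order exchange: each particle may switch domain with probability rate*dt
    to_matrix = mobile & (rng.random(n_particles) < k_fm * dt)
    to_fracture = (~mobile) & (rng.random(n_particles) < k_mf * dt)
    mobile = (mobile & ~to_matrix) | to_fracture

print(f"mobile fraction: {mobile.mean():.3f}, mean travel distance: {x.mean():.2f}")
```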
Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.
2012-01-01
Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostuk, M.; Uram, T. D.; Evans, T.
For the first time, an automatically triggered, between-pulse fusion science analysis code was run on-demand at a remotely located supercomputer at Argonne Leadership Computing Facility (ALCF, Lemont, IL) in support of in-process experiments being performed at DIII-D (San Diego, CA). This represents a new paradigm for combining geographically distant experimental and high performance computing (HPC) facilities to provide enhanced data analysis that is quickly available to researchers. Enhanced analysis improves the understanding of the current pulse, translating into a more efficient use of experimental resources, and to the quality of the resultant science. The analysis code used here, called SURFMN,more » calculates the magnetic structure of the plasma using Fourier transform. Increasing the number of Fourier components provides a more accurate determination of the stochastic boundary layer near the plasma edge by better resolving magnetic islands, but requires 26 minutes to complete using local DIII-D resources, putting it well outside the useful time range for between pulse analysis. These islands relate to confinement and edge localized mode (ELM) suppression, and may be controlled by adjusting coil currents for the next pulse. Argonne has ensured on-demand execution of SURFMN by providing a reserved queue, a specialized service that launches the code after receiving an automatic trigger, and with network access from the worker nodes for data transfer. Runs are executed on 252 cores of ALCF’s Cooley cluster and the data is available locally at DIII-D within three minutes of triggering. The original SURFMN design limits additional improvements with more cores, however our work shows a path forward where codes that benefit from thousands of processors can run between pulses.« less
Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R
2012-01-01
Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".
Integrated control and health management. Orbit transfer rocket engine technology program
NASA Technical Reports Server (NTRS)
Holzmann, Wilfried A.; Hayden, Warren R.
1988-01-01
To ensure controllability of the baseline design for a 7500 pound thrust, 10:1 throttleable, dual expanded cycle, Hydrogen-Oxygen, orbit transfer rocket engine, an Integrated Controls and Health Monitoring concept was developed. This included: (1) Dynamic engine simulations using a TUTSIM derived computer code; (2) analysis of various control methods; (3) Failure Modes Analysis to identify critical sensors; (4) Survey of applicable sensor technology; and, (5) Study of Health Monitoring philosophies. The engine design was found to be controllable over the full throttling range by using 13 valves, including an oxygen turbine bypass valve to control mixture ratio, and a hydrogen turbine bypass valve, used in conjunction with the oxygen bypass to control thrust. Classic feedback control methods are proposed along with specific requirements for valves, sensors, and the controller. Expanding on the control system, a Health Monitoring system is proposed including suggested computing methods and the following recommended sensors: (1) Fiber optic and silicon bearing deflectometers; (2) Capacitive shaft displacement sensors; and (3) Hot spot thermocouple arrays. Further work is needed to refine and verify the dynamic simulations and control algorithms, to advance sensor capabilities, and to develop the Health Monitoring computational methods.
A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.
Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary
2017-12-01
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.
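The core pattern HTGS abstracts, tasks connected by queues so that I/O, data transfer, and computation overlap, can be sketched generically. The Python below is a conceptual illustration only and does not reflect the HTGS C++ API:

    import queue
    import threading

    def producer(out_q: queue.Queue) -> None:
        # Stand-in for a disk-read task that feeds the compute task.
        for i in range(8):
            out_q.put(list(range(i, i + 4)))
        out_q.put(None)  # sentinel shuts the graph down

    def worker(in_q: queue.Queue, results: list) -> None:
        # Stand-in for a compute task; runs concurrently with the producer.
        while True:
            block = in_q.get()
            if block is None:
                break
            results.append(sum(x * x for x in block))

    q: queue.Queue = queue.Queue(maxsize=2)   # bounded queue bounds memory use
    results: list = []
    threads = [threading.Thread(target=producer, args=(q,)),
               threading.Thread(target=worker, args=(q, results))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)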
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claiborne, H.C.; Wagner, R.S.; Just, R.A.
1979-12-01
A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the other codes agreed to within ±5% for the temperature rises as a function of time. The TRUMP results agreed within ±5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with up to 25% deviations at longer times. This could have resulted from time steps that were too large or from some unknown system problems. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but a reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system and still use the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. The codes agreed within ±5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to that predicted by the 3-D model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raboin, P J
1998-01-01
The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for "Springback Predictability" and with the Federal Aviation Administration (FAA) for the "Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris." In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Badavi, F. F.; Badhwar, G. D.
1996-01-01
We present calculations of linear energy transfer (LET) spectra in low earth orbit from galactic cosmic rays and trapped protons using the HZETRN/BRYNTRN computer code. The emphasis of our calculations is on the analysis of the effects of secondary nuclei produced through target fragmentation in the spacecraft shield or detectors. Recent improvements in the HZETRN/BRYNTRN radiation transport computer code are described. Calculations show that at large values of LET (> 100 keV/micrometer) the LET spectra seen in free space and low earth orbit (LEO) are dominated by target fragments and not the primary nuclei. Although the evaluation of microdosimetric spectra is not considered here, calculations of LET spectra support that the large lineal energy (y) events are dominated by the target fragments. Finally, we discuss the situation for interplanetary exposures to galactic cosmic rays and show that current radiation transport codes predict that in the region of high LET values the LET spectra at significant shield depths (> 10 g/cm2 of Al) is greatly modified by target fragments. These results suggest that studies of track structure and biological response of space radiation should place emphasis on short tracks of medium charge fragments produced in the human body by high energy protons and neutrons.
Fluid management technology: Liquid slosh dynamics and control
NASA Technical Reports Server (NTRS)
Dodge, Franklin T.; Green, Steven T.; Kana, Daniel D.
1991-01-01
Flight experiments were defined for the Cryogenic On-Orbit Liquid Depot Storage, Acquisition and Transfer Satellite (COLD-SAT) test bed satellite and the Shuttle middeck to help establish the influence of the gravitational environment on liquid slosh dynamics and control. Several analytical and experimental studies were also conducted to support the experiments and to help understand the anticipated results. Both FLOW-3D and NASA-VOF3D computer codes were utilized to simulate low Bond number, small amplitude sloshing, for which the motions are dominated by surface forces; it was found that neither code provided a satisfactory simulation. Thus, a new analysis of low Bond number sloshing was formulated, using an integral minimization technique that will allow the assumptions made about surface physics phenomena to be modified easily when better knowledge becomes available from flight experiments. Several examples were computed by the innovative use of a finite-element structural code. An existing spherical-pendulum analogy of nonlinear, rotary sloshing was also modified for easier use and extended to low-gravity conditions. Laboratory experiments were conducted to determine the requirements for liquid-vapor interface sensors as a method of resolving liquid surface motions in flight experiments. The feasibility of measuring the small slosh forces anticipated in flight experiments was also investigated.
Fluid management technology: Liquid slosh dynamics and control
NASA Astrophysics Data System (ADS)
Dodge, Franklin T.; Green, Steven T.; Kana, Daniel D.
1991-11-01
Flight experiments were defined for the Cryogenic On-Orbit Liquid Depot Storage, Acquisition and Transfer Satellite (COLD-SAT) test bed satellite and the Shuttle middeck to help establish the influence of the gravitational environment on liquid slosh dynamics and control. Several analytical and experimental studies were also conducted to support the experiments and to help understand the anticipated results. Both FLOW-3D and NASA-VOF3D computer codes were utilized to simulate low Bond number, small amplitude sloshing, for which the motions are dominated by surface forces; it was found that neither code provided a satisfactory simulation. Thus, a new analysis of low Bond number sloshing was formulated, using an integral minimization technique that will allow the assumptions made about surface physics phenomena to be modified easily when better knowledge becomes available from flight experiments. Several examples were computed by the innovative use of a finite-element structural code. An existing spherical-pendulum analogy of nonlinear, rotary sloshing was also modified for easier use and extended to low-gravity conditions. Laboratory experiments were conducted to determine the requirements for liquid-vapor interface sensors as a method of resolving liquid surface motions in flight experiments. The feasibility of measuring the small slosh forces anticipated in flight experiments was also investigated.
Assessment of the TRACE Reactor Analysis Code Against Selected PANDA Transient Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavisca, M.; Ghaderi, M.; Khatib-Rahbar, M.
2006-07-01
The TRACE (TRAC/RELAP Advanced Computational Engine) code is an advanced, best-estimate thermal-hydraulic program intended to simulate the transient behavior of light-water reactor systems, using a two-fluid (steam and water, with non-condensable gas), seven-equation representation of the conservation equations and flow-regime dependent constitutive relations in a component-based model with one-, two-, or three-dimensional elements, as well as solid heat structures and logical elements for the control system. The U.S. Nuclear Regulatory Commission is currently supporting the development of the TRACE code and its assessment against a variety of experimental data pertinent to existing and evolutionary reactor designs. This paper presents the results of TRACE post-test prediction of the P-series of experiments (i.e., tests comprising the ISP-42 blind and open phases) conducted at the PANDA large-scale test facility in the 1990s. These results show reasonable agreement with the reported test results, indicating good performance of the code and relevant underlying thermal-hydraulic and heat transfer models.
NASA Astrophysics Data System (ADS)
Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi; Matsufuru, Hideo; Imakura, Akira
2017-04-01
We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is the fact that it treats not only hydrodynamics but also neutrino transfer in the language of the 3 + 1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to the moving curvilinear coordinates in the flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspect of the implementation is also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works well, tracking the motions of the PNS correctly. We believe that this is a major advancement toward the realistic simulation of CCSNe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grebennikov, A.N.; Zhitnik, A.K.; Zvenigorodskaya, O.A.
1995-12-31
In conformity with the protocol of the Workshop under the contract "Assessment of RBMK reactor safety using modern Western Codes," VNIIEF performed a neutronics computation series to compare Western and VNIIEF codes and assess whether VNIIEF codes are suitable for RBMK-type reactor safety assessment computation. The work was carried out in close collaboration with M.I. Rozhdestvensky and L.M. Podlazov, NIKIET employees. The effort involved: (1) cell computations with the WIMS, EKRAN codes (improved modification of the LOMA code) and the S-90 code (VNIIEF Monte Carlo), including cell, polycell, and burnup computations; (2) 3D computation of static states with the KORAT-3D and NEU codes and comparison with results of computation with the NESTLE code (USA), where the computations were performed in the geometry and using the neutron constants presented by the American party; (3) 3D computation of neutron kinetics with the KORAT-3D and NEU codes. These computations were performed in two formulations, both being developed in collaboration with NIKIET. The formulation of the first problem agrees as closely as possible with one of the NESTLE problems and imitates gas bubble travel through a core. The second problem is a model of the RBMK as a whole with imitation of control and protection system (CPS) control movement in a core.
Application of a single-flicker online SSVEP BCI for spatial navigation.
Chen, Jingjing; Zhang, Dan; Engel, Andreas K; Gong, Qin; Maye, Alexander
2017-01-01
A promising approach for brain-computer interfaces (BCIs) employs the steady-state visual evoked potential (SSVEP) for extracting control information. Main advantages of these SSVEP BCIs are a simple and low-cost setup, little effort to adjust the system parameters to the user and comparatively high information transfer rates (ITR). However, traditional frequency-coded SSVEP BCIs require the user to gaze directly at the selected flicker stimulus, which is liable to cause fatigue or even photic epileptic seizures. The spatially coded SSVEP BCI we present in this article addresses this issue. It uses a single flicker stimulus that appears always in the extrafoveal field of view, yet it allows the user to control four control channels. We demonstrate the embedding of this novel SSVEP stimulation paradigm in the user interface of an online BCI for navigating a 2-dimensional computer game. Offline analysis of the training data reveals an average classification accuracy of 96.9±1.64%, corresponding to an information transfer rate of 30.1±1.8 bits/min. In online mode, the average classification accuracy reached 87.9±11.4%, which resulted in an ITR of 23.8±6.75 bits/min. We did not observe a strong relation between a subject's offline and online performance. Analysis of the online performance over time shows that users can reliably control the new BCI paradigm with stable performance over at least 30 minutes of continuous operation.
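The information transfer rates quoted above follow from the standard (Wolpaw) ITR formula commonly used to evaluate SSVEP BCIs. The short sketch below applies it to the abstract's numbers (four control channels, 96.9% offline accuracy); the time per selection is not reported there, so it is left as a free parameter:

    import math

    def bits_per_selection(n_targets: int, accuracy: float) -> float:
        # Wolpaw formula: bits conveyed by one selection among n_targets.
        p = accuracy
        b = math.log2(n_targets) + p * math.log2(p)
        if p < 1.0:
            b += (1 - p) * math.log2((1 - p) / (n_targets - 1))
        return b

    def itr_bits_per_min(n_targets: int, accuracy: float, t_sec: float) -> float:
        # Convert bits per selection into bits per minute for a selection time t_sec.
        return bits_per_selection(n_targets, accuracy) * 60.0 / t_sec

    print(bits_per_selection(4, 0.969))            # about 1.75 bits per selection
    print(itr_bits_per_min(4, 0.969, t_sec=3.5))   # about 30 bits/min if one selection takes ~3.5 s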
Program for the analysis of time series. [by means of fast Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Brown, T. J.; Brown, C. G.; Hardin, J. C.
1974-01-01
A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
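The same spectral quantities the program produces (auto and cross power spectra, coherence, and transfer functions) can be computed today in a few lines of NumPy/SciPy. The sketch below uses Welch-averaged periodograms on synthetic signals and is only a modern illustration of those outputs, not a port of the FORTRAN program:

    import numpy as np
    from scipy import signal

    fs = 1000.0                                   # sample rate, Hz
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
    y = np.roll(x, 5) + 0.5 * np.random.randn(t.size)   # delayed, noisy copy of x

    f, pxx = signal.welch(x, fs, nperseg=1024)    # auto power spectrum of x
    _, pyy = signal.welch(y, fs, nperseg=1024)    # auto power spectrum of y
    _, pxy = signal.csd(x, y, fs, nperseg=1024)   # cross power spectrum
    _, coh = signal.coherence(x, y, fs, nperseg=1024)
    h = pxy / pxx                                 # transfer function estimate from x to y

    print(f[np.argmax(pxx)])                      # dominant frequency, ~50 Hz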
Branson: A Mini-App for Studying Parallel IMC, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Alex
This code solves the gray thermal radiative transfer (TRT) equations in parallel using simple opacities and Cartesian meshes. Although Branson solves the TRT equations, it is not designed to model radiation transport: Branson contains simple physics and does not have a multigroup treatment, nor can it use physical material data. The opacities are simple polynomials in temperature, and there is only a limited ability to specify complex geometries and sources. Branson was designed only to capture the computational demands of production IMC codes, especially in large parallel runs. It was also intended to foster collaboration with vendors, universities, and other DOE partners. Branson is similar in character to the neutron transport proxy-app Quicksilver from LLNL, which was recently open-sourced.
A complexity-scalable software-based MPEG-2 video encoder.
Chen, Guo-bin; Lu, Xin-ning; Wang, Xing-guo; Liu, Ji-lin
2004-05-01
With the development of general-purpose processors (GPP) and video signal processing algorithms, it is possible to implement a software-based real-time video encoder on a GPP, and its low cost and easy upgrade attract developers' interest in moving video encoding from specialized hardware to more flexible software. In this paper, the encoding structure is set up first to support complexity scalability; then high-performance algorithms are applied to the key time-consuming modules in the coding process; finally, at the programming level, processor characteristics are considered to improve data access efficiency and processing parallelism. Other programming methods such as lookup tables are adopted to reduce the computational complexity. Simulation results showed that these ideas could not only improve the global performance of video coding, but also provide great flexibility in complexity regulation.
MAX - An advanced parallel computer for space applications
NASA Technical Reports Server (NTRS)
Lewis, Blair F.; Bunker, Robert L.
1991-01-01
MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event- and data-driven environment. A large-grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic allocation of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test, and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
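The redundant execution with software voting mentioned for critical tasks can be illustrated with a small sketch. This is a generic illustration of the pattern, not code from the MAX operating system; in MAX the redundant copies would run on different computers rather than in one process:

    from collections import Counter
    from typing import Callable, List

    def vote(results: List) -> object:
        # Majority vote over the redundant results; fail if no outright majority.
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority: possible multiple faults")
        return value

    def run_redundantly(task: Callable[[], object], copies: int = 3) -> object:
        # Execute the task several times and vote on the outputs.
        return vote([task() for _ in range(copies)])

    print(run_redundantly(lambda: 2 + 2))   # -> 4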
OCTGRAV: Sparse Octree Gravitational N-body Code on Graphics Processing Units
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Bédorf, Jeroen; Portegies Zwart, Simon
2010-10-01
Octgrav is a very fast tree-code which runs on massively parallel Graphical Processing Units (GPU) with NVIDIA CUDA architecture. The algorithms are based on parallel-scan and sort methods. The tree construction and calculation of multipole moments are carried out on the host CPU, while the force calculation, which consists of tree walks and evaluation of interaction lists, is carried out on the GPU. In this way, a sustained performance of about 100 GFLOP/s and data transfer rates of about 50 GB/s are achieved. It takes about a second to compute forces on a million particles with an opening angle of θ ≈ 0.5. To test the performance and feasibility, we implemented the algorithms in CUDA in the form of a gravitational tree-code which completely runs on the GPU. The tree construction and traverse algorithms are portable to many-core devices which have support for CUDA or OpenCL programming languages. The gravitational tree-code outperforms tuned CPU code during the tree construction and shows a performance improvement of more than a factor of 20 overall, resulting in a processing rate of more than 2.8 million particles per second. The code has a convenient user interface and is freely available for use.
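The opening angle θ mentioned above is the usual tree-code acceptance criterion: a cell of size s at distance d from the particle is treated as a single multipole when s/d < θ. A plain-Python sketch of just that test (not the CUDA implementation) is:

    import numpy as np

    def accept_cell(cell_size: float, cell_center: np.ndarray,
                    particle_pos: np.ndarray, theta: float = 0.5) -> bool:
        # Accept the cell as one multipole if it subtends a small enough angle.
        d = np.linalg.norm(particle_pos - cell_center)
        return cell_size / d < theta

    # A cell 1 unit wide seen from 4 units away passes the test for theta = 0.5:
    print(accept_cell(1.0, np.array([4.0, 0.0, 0.0]), np.zeros(3)))  # True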
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code used to compute a number of example cases. Turbulence models, algebraic and differential equations, were added to the basic viscous code. An equilibrium reacting chemistry model and implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
TORUS: Radiation transport and hydrodynamics code
NASA Astrophysics Data System (ADS)
Harries, Tim
2014-04-01
TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard Gnu makefile. The code is parallelized using both MPI and OMP, and can use these parallel sections either separately or in a hybrid mode.
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and test set because of differences in lighting, shading, race, and so on. In order to solve this problem and improve the performance of expression recognition in actual applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First of all, a common primitive model, that is, a dictionary, is learned. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE, and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
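The transfer step described above amounts to reusing a dictionary learned on one domain to sparse-code samples from another and feeding the codes to a classifier. The sketch below uses a random stand-in dictionary and ISTA soft-thresholding as the sparse solver; the actual dictionary learning and solver used by the authors are not specified in the abstract:

    import numpy as np

    def sparse_code(D: np.ndarray, x: np.ndarray,
                    lam: float = 0.1, n_iter: int = 200) -> np.ndarray:
        # ISTA: minimize 0.5*||x - D a||^2 + lam*||a||_1 over the code a.
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            a = a - grad / L
            a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
        return a

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))          # stand-in dictionary from the source domain
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    x = rng.standard_normal(64)                 # a target-domain sample
    features = sparse_code(D, x)                # sparse features for a downstream classifier
    print(np.count_nonzero(features), "active atoms")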
Investigation of Conjugate Heat Transfer in Turbine Blades and Vanes
NASA Technical Reports Server (NTRS)
Kassab, A. J.; Kapat, J. S.
2001-01-01
We report on work carried out to develop a 3-D coupled Finite Volume/BEM-based temperature forward/flux back (TFFB) coupling algorithm to solve the conjugate heat transfer (CHT) problem which arises naturally in the analysis of systems exposed to a convective environment. Here, heat conduction within a structure is coupled to heat transfer to the external fluid which is convecting heat into or out of the solid structure. There are two basic approaches to solving coupled fluid-structural systems. The first is direct coupling, where the different fields are solved simultaneously in one large set of equations. The second approach is a loose coupling strategy, where each set of field equations is solved to provide boundary conditions for the other. The equations are solved in turn until an iterated convergence criterion is met at the fluid-solid interface. The loose coupling strategy is particularly attractive when coupling auxiliary field equations to computational fluid dynamics codes. We adopt the latter method, in which the BEM is used to solve heat conduction inside a structure exposed to a convective field, which in turn is resolved by solving the NASA Glenn compressible Navier-Stokes finite volume code Glenn-HT. The BEM code features constant and bi-linear discontinuous elements and an ILU-preconditioned GMRES iterative solver for the resulting non-symmetric algebraic set arising in the conduction solution. Matching of flux and temperature is enforced at the solid/fluid interface, and a radial-basis function scheme is used to interpolate information between the CFD and BEM surface grids. Additionally, relaxation is implemented in passing the fluxes from the conduction solution to the fluid solution. Results from a simple test example are reported.
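The loose-coupling iteration can be reduced to a toy one-dimensional loop: one stand-in solver returns a wall temperature for the current interface flux, the other returns the flux implied by that temperature, and the exchanged flux is under-relaxed until the interface values stop changing. The numbers below, and the assignment of roles to the "fluid" and "solid" stand-ins, are illustrative only and follow the abstract's algorithm only loosely:

    h, t_gas = 500.0, 1500.0           # convective coefficient and gas temperature (illustrative)
    k_over_l, t_cool = 2000.0, 400.0   # solid conductance and coolant-side temperature (illustrative)
    relax = 0.5                        # relaxation factor on the exchanged flux

    def fluid_flux(t_wall: float) -> float:
        # Stand-in for the CFD solve: heat flux into the wall at a given wall temperature.
        return h * (t_gas - t_wall)

    def solid_wall_temp(q_wall: float) -> float:
        # Stand-in for the BEM conduction solve: wall temperature that conducts q_wall to the coolant.
        return t_cool + q_wall / k_over_l

    q = 0.0
    for it in range(100):
        t_wall = solid_wall_temp(q)        # temperature passed one way
        q_new = fluid_flux(t_wall)         # flux passed back the other way
        if abs(q_new - q) < 1e-6:
            break
        q = q + relax * (q_new - q)        # relaxed interface update
    print(it, t_wall, q)                   # converged interface temperature and flux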
Calculation of Multistage Turbomachinery Using Steady Characteristic Boundary Conditions
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.
1998-01-01
A multiblock Navier-Stokes analysis code for turbomachinery has been modified to allow analysis of multistage turbomachines. A steady averaging-plane approach was used to pass information between blade rows. Characteristic boundary conditions written in terms of perturbations about the mean flow from the neighboring blade row were used to allow close spacing between the blade rows without forcing the flow to be axisymmetric. In this report the multiblock code is described briefly and the characteristic boundary conditions and the averaging-plane implementation are described in detail. Two approaches for averaging the flow properties are also described. A two-dimensional turbine stator case was used to compare the characteristic boundary conditions with standard axisymmetric boundary conditions. Differences were apparent but small in this low-speed case. The two-stage fuel turbine used on the space shuttle main engines was then analyzed using a three-dimensional averaging-plane approach. Computed surface pressure distributions on the stator blades and endwalls and computed distributions of blade surface heat transfer coefficient on three blades showed very good agreement with experimental data from two tests.
An open source device for operant licking in rats.
Longley, Matthew; Willis, Ethan L; Tay, Cindy X; Chen, Hao
2017-01-01
We created an easy-to-use device for operant licking experiments and another device that records environmental variables. Both devices use the Raspberry Pi computer to obtain data from multiple input devices (e.g., radio frequency identification tag readers, touch and motion sensors, environmental sensors) and activate output devices (e.g., LED lights, syringe pumps) as needed. Data gathered from these devices are stored locally on the computer but can be automatically transferred to a remote server via a wireless network. We tested the operant device by training rats to obtain either sucrose or water under the control of a fixed ratio, a variable ratio, or a progressive ratio reinforcement schedule. The lick data demonstrated that the device has sufficient precision and time resolution to record the fast licking behavior of rats. Data from the environment monitoring device also showed reliable measurements. By providing the source code and 3D design under an open source license, we believe these examples will stimulate innovation in behavioral studies. The source code can be found at http://github.com/chen42/openbehavior.
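The three reinforcement schedules named above differ only in how the lick requirement for the next reward is chosen. The sketch below is a conceptual illustration of that logic, not the code in the openbehavior repository:

    import random

    class RatioSchedule:
        def __init__(self, kind: str = "FR", ratio: int = 10, step: int = 2):
            self.kind, self.base, self.step = kind, ratio, step
            self.licks = 0
            self.rewards = 0
            self.threshold = self._next_threshold()

        def _next_threshold(self) -> int:
            if self.kind == "FR":     # fixed ratio: reward every `ratio` licks
                return self.base
            if self.kind == "VR":     # variable ratio: random requirement around `ratio`
                return random.randint(1, 2 * self.base - 1)
            return self.base + self.step * self.rewards   # progressive ratio: grows per reward

        def register_lick(self) -> bool:
            # Returns True when the current lick earns a reward (the pump would fire here).
            self.licks += 1
            if self.licks >= self.threshold:
                self.licks = 0
                self.rewards += 1
                self.threshold = self._next_threshold()
                return True
            return False

    sched = RatioSchedule("PR", ratio=5, step=3)
    rewarded = [i for i in range(1, 60) if sched.register_lick()]
    print(rewarded)    # cumulative lick counts at which rewards were delivered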
An open source device for operant licking in rats
Longley, Matthew; Willis, Ethan L.; Tay, Cindy X.
2017-01-01
We created an easy-to-use device for operant licking experiments and another device that records environmental variables. Both devices use the Raspberry Pi computer to obtain data from multiple input devices (e.g., radio frequency identification tag readers, touch and motion sensors, environmental sensors) and activate output devices (e.g., LED lights, syringe pumps) as needed. Data gathered from these devices are stored locally on the computer but can be automatically transferred to a remote server via a wireless network. We tested the operant device by training rats to obtain either sucrose or water under the control of a fixed ratio, a variable ratio, or a progressive ratio reinforcement schedule. The lick data demonstrated that the device has sufficient precision and time resolution to record the fast licking behavior of rats. Data from the environment monitoring device also showed reliable measurements. By providing the source code and 3D design under an open source license, we believe these examples will stimulate innovation in behavioral studies. The source code can be found at http://github.com/chen42/openbehavior. PMID:28229020
Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial.
Krijkamp, Eline M; Alarid-Escudero, Fernando; Enns, Eva A; Jalal, Hawre J; Hunink, M G Myriam; Pechlivanoglou, Petros
2018-04-01
Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to perform microsimulation models more efficiently than software commonly used for decision modeling, incorporate statistical analyses within decision models, and produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to build microsimulation models in R and illustrate the use of this guide on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization solutions.
Radiative transfer code SHARM for atmospheric and terrestrial applications
NASA Astrophysics Data System (ADS)
Lyapustin, A. I.
2005-12-01
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
Radiative transfer code SHARM for atmospheric and terrestrial applications.
Lyapustin, A I
2005-12-20
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Delta-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun
We present a newly developed moving-mesh technique for the multi-dimensional Boltzmann-Hydro code for the simulation of core-collapse supernovae (CCSNe). What makes this technique different from others is the fact that it treats not only hydrodynamics but also neutrino transfer in the language of the 3 + 1 formalism of general relativity (GR), making use of the shift vector to specify the time evolution of the coordinate system. This means that the transport part of our code is essentially general relativistic, although in this paper it is applied only to the moving curvilinear coordinates in the flat Minkowski spacetime, since the gravity part is still Newtonian. The numerical aspect of the implementation is also described in detail. Employing the axisymmetric two-dimensional version of the code, we conduct two test computations: oscillations and runaways of a proto-neutron star (PNS). We show that our new method works well, tracking the motions of the PNS correctly. We believe that this is a major advancement toward the realistic simulation of CCSNe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. L. Williamson
A powerful multidimensional fuels performance analysis capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. This new capability is demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multipellet fuel rod, during both steady and transient operation. Comparisons are made between discrete and smeared-pellet simulations. Computational results demonstrate the importance of a multidimensional, multipellet, fully-coupled thermomechanical approach. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermomechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.
NASA Astrophysics Data System (ADS)
Li, Yonghui; Ullrich, Carsten
2013-03-01
The time-dependent transition density matrix (TDM) is a useful tool to visualize and interpret the induced charges and electron-hole coherences of excitonic processes in large molecules. Combined with time-dependent density functional theory on a real-space grid (as implemented in the octopus code), the TDM is a computationally viable visualization tool for optical excitation processes in molecules. It provides real-time maps of particles and holes, which give information on excitations, in particular those that have charge-transfer character, that cannot be obtained from the density alone. Illustrations of the TDM and comparisons with standard density-difference plots will be shown for photoexcited organic donor-acceptor molecules. This work is supported by NSF Grant DMR-1005651.
Lattice Methods and the Nuclear Few- and Many-Body Problem
NASA Astrophysics Data System (ADS)
Lee, Dean
This chapter builds upon the review of lattice methods and effective field theory of the previous chapter. We begin with a brief overview of lattice calculations using chiral effective field theory and some recent applications. We then describe several methods for computing scattering on the lattice. After that we focus on the main goal, explaining the theory and algorithms relevant to lattice simulations of nuclear few- and many-body systems. We discuss the exact equivalence of four different lattice formalisms, the Grassmann path integral, transfer matrix operator, Grassmann path integral with auxiliary fields, and transfer matrix operator with auxiliary fields. Along with our analysis we include several coding examples and a number of exercises for the calculations of few- and many-body systems at leading order in chiral effective field theory.
Rosenthal, Jennifer L; Okumura, Megumi J; Hernandez, Lenore; Li, Su-Ting T; Rehm, Roberta S
2016-01-01
Children with special health care needs often require health services that are only provided at subspecialty centers. Such children who present to nonspecialty hospitals might require a hospital-to-hospital transfer. When transitioning between medical settings, communication is an integral aspect that can affect the quality of patient care. The objectives of the study were to identify barriers and facilitators to effective interfacility pediatric transfer communication to general pediatric floors from the perspectives of referring and accepting physicians, and then develop a conceptual model for effective interfacility transfer communication. This was a single-center qualitative study using grounded theory methodology. Referring and accepting physicians of children with special health care needs were interviewed. Four researchers coded the data using ATLAS.ti (version 7, Scientific Software Development GMBH, Berlin, Germany), using a 2-step process of open coding, followed by focused coding until no new codes emerged. The research team reached consensus on the final major categories and subsequently developed a conceptual model. Eight referring and 9 accepting physicians were interviewed. Theoretical coding resulted in 3 major categories: streamlined transfer process, quality handoff and 2-way communication, and positive relationships between physicians across facilities. The conceptual model unites these categories and shows how these categories contribute to effective interfacility transfer communication. Proposed interventions involved standardizing the communication process and incorporating technology such as telemedicine during transfers. Communication is perceived to be an integral component of interfacility transfers. We recommend that transfer systems be re-engineered to make the process more streamlined, to improve the quality of the handoff and 2-way communication, and to facilitate positive relationships between physicians across facilities.
Incorporation of Condensation Heat Transfer in a Flow Network Code
NASA Technical Reports Server (NTRS)
Anthony, Miranda; Majumdar, Alok; McConnaughey, Paul K. (Technical Monitor)
2001-01-01
In this paper we have investigated the condensation of water vapor in a short tube. A numerical model of condensation heat transfer was incorporated in a flow network code. The flow network code that we have used in this paper is the Generalized Fluid System Simulation Program (GFSSP). GFSSP is a finite volume based flow network code. Four different condensation models were presented in the paper. Soliman's correlation has been found to be the most stable at low flow rates, which is of particular interest in this application. Another highlight of this investigation is conjugate, or coupled, heat transfer between solid and fluid. This work was done in support of NASA's International Space Station program.
New higher-order Godunov code for modelling performance of two-stage light gas guns
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.; Miller, R. J.
1995-01-01
A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
Why Do Elephants Flap Their Ears?
NASA Astrophysics Data System (ADS)
Koffi, Moise; Jiji, Latif; Andreopoulos, Yiannis
2009-11-01
It is estimated that a 4200 kg elephant generates as much as 5.12 kW of heat. How the elephant dissipates its metabolic heat and regulates its body temperature has been investigated during the past seven decades. Findings and conclusions differ sharply. The high rate of metabolic heat coupled with low surface area to volume ratio and the absence of sweat glands eliminate surface convection as the primary mechanism for heat removal. Noting that the elephant ears have high surface area to volume ratio and an extensive vascular network, ear flapping is thought to be the principal thermoregulatory mechanism. A computational and experimental program is carried out to examine flow and heat transfer characteristics. The ear is modeled as a uniformly heated oscillating rectangular plate. Our computational work involves a three-dimensional time dependent CFD code with heat transfer capabilities to obtain predictions of the flow field and surface temperature distributions. This information was used to design an experimental setup with a uniformly heated plate of size 0.2m x 0.3m oscillating at 1.6 cycles per second. Results show that surface temperature increases and reaches a steady periodic oscillation after a period of transient oscillation. The role of the vortices shed off the plate in heat transfer enhancement will be discussed.
Implicit time-integration method for simultaneous solution of a coupled non-linear system
NASA Astrophysics Data System (ADS)
Watson, Justin Kyle
Historically, large physical problems have been divided into smaller problems based on the physics involved. This is no different in reactor safety analysis. The problem of analyzing a nuclear reactor for design basis accidents is performed by a handful of computer codes, each solving a portion of the problem. The reactor thermal hydraulic response to an event is determined using a system code like the TRAC/RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like the Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to solve the overall problems of nuclear reactor design basis accidents. Traditionally, each of these codes operates independently from the others, using only the global results from one calculation as boundary conditions to another. Industry's drive to uprate power for reactors has motivated analysts to move from a conservative approach to design basis accidents towards a best estimate method. To achieve a best estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature. During a calculation time-step, data are passed between the two codes. The individual codes solve their portion of the calculation and converge to a solution before the calculation is allowed to proceed to the next time-step. This thesis presents a fully implicit method of simultaneously solving the neutron balance equations, heat conduction equations, and the constitutive fluid dynamics equations. It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems. The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations, and the thermal hydraulic equations, which are coupled to form a fully implicit nonlinear system of equations. The coupling of separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; however, implementing the coupling implicitly and solving the system simultaneously is. The application to reactor safety codes is also new and has not been done with thermal hydraulics and neutronics codes on realistic applications in the past. The coupling technique described in this thesis is applicable to other similar coupled thermal hydraulic and core physics reactor safety codes. This technique is demonstrated using coupled input decks to show that the system is solved correctly and is then verified using two derivative test problems based on international benchmark problems: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
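The central idea, assembling all fields' residuals into one vector and solving them simultaneously rather than passing converged results back and forth, can be shown on a deliberately tiny system. The two scalar equations below are illustrative stand-ins for coupled power and temperature, not the neutronics, conduction, or hydraulics equations of the thesis:

    import numpy as np

    def residual(u: np.ndarray) -> np.ndarray:
        p, t = u                          # stand-in "power" and "temperature"
        return np.array([
            p - 1.0 + 0.05 * t,           # power with temperature feedback
            t - 2.0 * p,                  # temperature driven by power
        ])

    def newton(u0: np.ndarray, tol: float = 1e-12, max_iter: int = 50) -> np.ndarray:
        # Solve the coupled system simultaneously with Newton's method and a
        # finite-difference Jacobian over the full set of unknowns.
        u = u0.astype(float)
        for _ in range(max_iter):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            eps = 1e-7
            J = np.empty((u.size, u.size))
            for j in range(u.size):
                du = np.zeros_like(u)
                du[j] = eps
                J[:, j] = (residual(u + du) - r) / eps
            u = u - np.linalg.solve(J, r)
        return u

    print(newton(np.array([1.0, 1.0])))   # coupled solution of both equations at once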
Certification of CFD heat transfer software for turbine blade analysis
NASA Technical Reports Server (NTRS)
Jordan, William A.
2004-01-01
Accurate modeling of heat transfer effects is a critical component of the work of the Turbine Branch of the Turbomachinery and Propulsion Systems Division. Being able to adequately predict and model heat flux, coolant flows, and peak temperatures is necessary for the analysis of high pressure turbine blades. To that end, the primary goal of my internship this summer will be to certify the reliability of the CFD program GlennHT for the purpose of turbine blade heat transfer analysis. GlennHT is currently in use by the engineers in the Turbine Branch, who use the FORTRAN 77 version of the code for analysis. The program, however, has been updated to a FORTRAN 90 version which is more robust than the older code. In order for the new code to be distributed for use, its reliability must first be certified. Over the course of my internship I will create and run test cases using the FORTRAN 90 version of GlennHT and compare the results to older cases which are known to be accurate. If the results of the new code match those of the sample cases, then the newer version will be one step closer to certification for distribution. In order to complete these tests, it will first be necessary to become familiar with operating a number of other programs. Among them are GridPro, which is used to create a grid mesh around a blade geometry, and FieldView, whose purpose is to graphically display the results from the GlennHT program. Once enough familiarity is established with these programs to render them useful, the work of creating and running test scenarios will begin. The work is additionally complicated by a transition in computer hardware. Most of the working computers in the Turbine Branch are Silicon Graphics machines, which will soon be replaced by Linux PCs. My project is one of the first to make use of the new PCs. The change in system architecture, however, has created several software-related issues which have greatly increased the time and effort required by the project. Although complications with the project continue to arise, it is expected that the goal of my internship can still be achieved within the remaining time. Critical steps have been completed, and test scenarios can now be designed and run. At the completion of my internship, the FORTRAN 90 version of GlennHT should be well on its way to certification.
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.
2015-10-01
The next-generation mesoscale numerical weather prediction system, the Weather Research and Forecasting (WRF) model, is designed for dual use in forecasting and research. WRF offers multiple physics options that can be combined in any way. One of the physics options is radiance computation. The major source of energy for the earth's climate is solar radiation; thus, it is imperative to accurately model the horizontal and vertical distribution of the heating. The Goddard solar radiative transfer model includes the absorption due to water vapor, ozone, oxygen, carbon dioxide, clouds, and aerosols. The model computes the interactions among the absorption and scattering by clouds, aerosols, molecules, and the surface. Finally, fluxes are integrated over the entire longwave spectrum. In this paper, we present our results of optimizing the Goddard longwave radiative transfer scheme on Intel Many Integrated Core Architecture (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. Getting maximum performance out of MICs, however, requires some novel optimization techniques, which are discussed in this paper. The optimizations improved the performance of the original Goddard longwave radiative transfer scheme on the Xeon Phi 7120P by a factor of 2.2x. Furthermore, the same optimizations improved the performance of the Goddard longwave radiative transfer scheme on a dual socket configuration of eight-core Intel Xeon E5-2670 CPUs by a factor of 2.1x compared to the original Goddard longwave radiative transfer scheme code.
FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code
Hakel, Peter
2016-10-01
Here we report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Singhal, A. K.; Tam, L. T.
1984-01-01
The capability of simulating three dimensional two phase reactive flows with combustion in liquid fuelled rocket engines is demonstrated. This was accomplished by modifying an existing three dimensional computer program (REFLAN3D) with an Eulerian-Lagrangian approach to simulate two phase spray flow, evaporation and combustion. The modified code is referred to as REFLAN3D-SPRAY. The mathematical formulation of the fluid flow, heat transfer, combustion and two phase flow interaction, the numerical solution procedure, and the boundary conditions and their treatment are described.
A computational study of low-head direct chill slab casting of aluminum alloy AA2024
NASA Astrophysics Data System (ADS)
Hasan, Mainul; Begum, Latifa
2016-04-01
The steady state casting of an industrial-sized AA2024 slab has been modeled for a vertical low-head direct chill caster. The previously verified 3-D CFD code is used to investigate the solidification phenomena of this long freezing-range alloy by varying the pouring temperature, casting speed, and metal-mold contact heat transfer coefficient from 654 to 702 °C, 60-180 mm/min, and 1.0-4.0 kW/(m2 K), respectively. The important predicted results are presented and thoroughly discussed.
Progress of Stirling cycle analysis and loss mechanism characterization
NASA Technical Reports Server (NTRS)
Tew, R. C., Jr.
1986-01-01
An assessment of Stirling engine thermodynamic modeling and design codes shows a general deficiency; this deficiency is due to poor understanding of the fluid flow and heat transfer phenomena that occur in the oscillating flow and pressure level environment within the engines. Stirling engine thermodynamic loss mechanisms are listed. Several experimental and computational research efforts now underway to characterize various loss mechanisms are reviewed. The need for additional experimental rigs and rig upgrades is discussed. Recent developments and current efforts in Stirling engine thermodynamic modeling are also reviewed.
Lattice QCD calculation using VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seyong; Ohta, Shigemi
1995-02-01
A new vector parallel supercomputer, Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB memory, connected by a crossbar switch with 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of staggered fermion matrix.
Plummer, Niel; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.
1988-01-01
The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions to high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation index, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction path. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities, and individual-ion activity coefficients. A data base of Pitzer interaction parameters is provided at 25 C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
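As a point of reference for the kind of quantity reported above, the following is a minimal Python sketch of computing a mineral saturation index from ion activities. It is not PHRQPITZ's algorithm (the Pitzer virial expansion that produces the activity coefficients is far more involved), and the gypsum activities and log Ksp value below are assumed for illustration only.

```python
import math

def saturation_index(ion_activities, stoichiometry, log_ksp):
    """Saturation index SI = log10(IAP / Ksp), where IAP is the ion
    activity product of the dissolved constituents of the mineral."""
    log_iap = sum(nu * math.log10(ion_activities[ion])
                  for ion, nu in stoichiometry.items())
    return log_iap - log_ksp

# Hypothetical activities for a gypsum (CaSO4.2H2O) check;
# log Ksp ~ -4.58 at 25 C is an assumed illustrative value.
activities = {"Ca+2": 1.2e-2, "SO4-2": 9.0e-3, "H2O": 0.98}
gypsum = {"Ca+2": 1, "SO4-2": 1, "H2O": 2}
si = saturation_index(activities, gypsum, log_ksp=-4.58)
print(f"SI(gypsum) = {si:+.2f}  ({'supersaturated' if si > 0 else 'undersaturated'})")
```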
Malleable architecture generator for FPGA computing
NASA Astrophysics Data System (ADS)
Gokhale, Maya; Kaba, James; Marks, Aaron; Kim, Jang
1996-10-01
The malleable architecture generator (MARGE) is a tool set that translates high-level parallel C to configuration bit streams for field-programmable logic based computing systems. MARGE creates an application-specific instruction set and generates the custom hardware components required to perform exactly those computations specified by the C program. In contrast to traditional fixed-instruction processors, MARGE's dynamic instruction set creation provides for efficient use of hardware resources. MARGE processes intermediate code in which each operation is annotated by the bit lengths of the operands. Each basic block (sequence of straight line code) is mapped into a single custom instruction which contains all the operations and logic inherent in the block. A synthesis phase maps the operations comprising the instructions into register transfer level structural components and control logic which have been optimized to exploit functional parallelism and function unit reuse. As a final stage, commercial technology-specific tools are used to generate configuration bit streams for the desired target hardware. Technology- specific pre-placed, pre-routed macro blocks are utilized to implement as much of the hardware as possible. MARGE currently supports the Xilinx-based Splash-2 reconfigurable accelerator and National Semiconductor's CLAy-based parallel accelerator, MAPA. The MARGE approach has been demonstrated on systolic applications such as DNA sequence comparison.
Heterogeneous scalable framework for multiphase flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, Karla Vanessa
2013-09-01
Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture where the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications direct interaction with Trilinos to support memory management of distributed objects in central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.
Generalized Fluid System Simulation Program, Version 6.0
NASA Technical Reports Server (NTRS)
Majumdar, A. K.; LeClair, A. C.; Moore, A.; Schallhorn, P. A.
2013-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, and external body forces such as gravity and centrifugal force. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 25 demonstrated example problems.
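To illustrate the node/branch discretization idea described above, the following Python sketch solves a toy flow network in which each branch carries a mass flow proportional to the pressure difference across it and mass is conserved at the internal nodes. It is only an analogy under a linearized-branch assumption; GFSSP's actual solver handles nonlinear, compressible branches together with energy and species equations, and the node numbering, conductances, and boundary pressures below are invented.

```python
import numpy as np

# Toy flow network in the spirit of a node/branch discretization.
# Nodes: 0 (inlet, P fixed), 1, 2 (internal), 3 (outlet, P fixed).
# Branches with assumed linearized conductance C [kg/(s*Pa)] so that
# mdot = C * (P_up - P_down).
branches = [(0, 1, 2.0e-6), (1, 2, 1.0e-6), (1, 3, 0.5e-6), (2, 3, 1.5e-6)]
P_fixed = {0: 300.0e3, 3: 100.0e3}          # boundary pressures [Pa]
internal = [1, 2]

# Assemble mass conservation sum(mdot) = 0 at internal nodes -> A * P = b.
idx = {n: k for k, n in enumerate(internal)}
A = np.zeros((len(internal), len(internal)))
b = np.zeros(len(internal))
for i, j, C in branches:
    for n, other in ((i, j), (j, i)):
        if n in idx:
            A[idx[n], idx[n]] += C
            if other in idx:
                A[idx[n], idx[other]] -= C
            else:
                b[idx[n]] += C * P_fixed[other]

P = dict(P_fixed, **dict(zip(internal, np.linalg.solve(A, b))))
for i, j, C in branches:
    print(f"branch {i}->{j}: mdot = {C * (P[i] - P[j]):.4f} kg/s")
```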
Generalized Fluid System Simulation Program, Version 5.0-Educational
NASA Technical Reports Server (NTRS)
Majumdar, A. K.
2011-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, and external body forces such as gravity and centrifugal force. The thermofluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 21 different resistance/source options are provided for modeling momentum sources or sinks in the branches. This Technical Memorandum illustrates the application and verification of the code through 12 demonstrated example problems.
Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.
Ruymgaart, A Peter; Elber, Ron
2012-11-13
We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss in detail the design of the code and illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from a factor of 10 reported in our initial GPU implementation that did not include a water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints of all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution for the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of the Particle Mesh Ewald (PME).
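For readers unfamiliar with SHAKE, the sketch below shows the textbook iterative bond-constraint correction in Python. It is not the paper's conjugate-gradient (CG SHAKE) solver or its GPU implementation, and the two-particle example data are made up.

```python
import numpy as np

def shake(pos_new, pos_old, bonds, lengths, inv_mass, tol=1e-8, max_iter=500):
    """Textbook iterative SHAKE: correct unconstrained positions pos_new so
    that each bonded pair (i, j) recovers its prescribed length d0.
    (Illustrative only -- CG SHAKE instead solves for the Lagrange
    multipliers with a conjugate-gradient method.)"""
    pos = pos_new.copy()
    for _ in range(max_iter):
        converged = True
        for (i, j), d0 in zip(bonds, lengths):
            rij = pos[i] - pos[j]
            diff = rij @ rij - d0 * d0
            if abs(diff) > tol:
                converged = False
                rij_old = pos_old[i] - pos_old[j]
                # Lagrange-multiplier-like correction along the old bond vector
                g = diff / (2.0 * (inv_mass[i] + inv_mass[j]) * (rij @ rij_old))
                pos[i] -= g * inv_mass[i] * rij_old
                pos[j] += g * inv_mass[j] * rij_old
        if converged:
            return pos
    raise RuntimeError("SHAKE did not converge")

# Two-particle example: restore a unit bond length after an unconstrained step.
old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
new = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, 0.0]])
print(shake(new, old, bonds=[(0, 1)], lengths=[1.0], inv_mass=np.array([1.0, 1.0])))
```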
QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation
NASA Astrophysics Data System (ADS)
Samana, A. R.; Krmpotić, F.; Bertulani, C. A.
2010-06-01
A computer code for the quasiparticle random phase approximation (QRPA) and the projected quasiparticle random phase approximation (PQRPA) models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed. Program summary. Title of program: QRAP (Quasiparticle RAndom Phase approximation). Computers: the code has been created on a PC, but also runs on UNIX or LINUX machines. Operating systems: WINDOWS or UNIX. Program language used: Fortran-77. Memory required to execute with typical data: 16 MB of RAM and 2 MB of hard disk space. No. of lines in distributed program, including test data, etc.: ~8000. No. of bytes in distributed program, including test data, etc.: ~256 kB. Distribution format: tar.gz. Nature of physical problem: the program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models. Method of solution: the QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the beta inverse reaction of odd-odd nuclei as a function of the transferred momentum. Typical running time: ≈5 min on a 3 GHz processor for Data set 1.
Integrated computer-aided design using minicomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.
1980-01-01
Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capabilities, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a Computational Fluid Dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not yet been able to compile the code with the present version of the HPF compilers.
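As a reminder of what "finite-volume explicit time-marching" means in its simplest form, the Python sketch below advances one-dimensional linear advection with an upwind flux. It is only an analogue of the solver described above, which marches the three-dimensional Euler equations and, in its parallel form, exchanges subdomain boundary cells by message passing; the grid size, CFL number, and initial pulse are arbitrary.

```python
import numpy as np

# One-dimensional upwind finite-volume update for linear advection, shown
# only to illustrate the explicit time-marching pattern.
nx, L, a, cfl = 200, 1.0, 1.0, 0.8
dx = L / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.25) ** 2)          # initial Gaussian pulse

for _ in range(int(0.5 / dt)):                # march to t ~ 0.5
    flux = a * u                              # upwind flux (a > 0)
    u -= dt / dx * (flux - np.roll(flux, 1))  # periodic upwind update

print(f"pulse peak now near x = {x[np.argmax(u)]:.2f} (advected from x = 0.25)")
```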
DOE Office of Scientific and Technical Information (OSTI.GOV)
RIECK, C.A.
1999-02-23
This Software Configuration Management Plan (SCMP) provides the instructions for change control of the W-211 Project, Retrieval Control System (RCS) software after initial approval/release but prior to the transfer of custody to the waste tank operations contractor. This plan applies to the W-211 system software developed by the project, consisting of the computer human-machine interface (HMI) and programmable logic controller (PLC) software source and executable code, for production use by the waste tank operations contractor. The plan encompasses that portion of the W-211 RCS software represented on project-specific AUTOCAD drawings that are released as part of the C1 definitive design package (these drawings are identified on the drawing list associated with each C-1 package), and the associated software code. Implementation of the plan is required for formal acceptance testing and production release. The software configuration management plan does not apply to reports and data generated by the software except where specifically identified. Control of information produced by the software once it has been transferred for operation is the responsibility of the receiving organization.
Characterization of Fuego for laminar and turbulent natural convection heat transfer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Francis, Nicholas Donald, Jr.
2005-08-01
A computational fluid dynamics (CFD) analysis is conducted for internal natural convection heat transfer using the low Mach number code Fuego. The flow conditions under investigation are primarily laminar, transitional, or low-intensity level turbulent flows. In the case of turbulent boundary layers at low-level turbulence or transitional Reynolds numbers, the use of standard wall functions no longer applies, in general, for wall-bounded flows. One must integrate all the way to the wall in order to account for gradients in the dependent variables in the viscous sublayer. Fuego provides two turbulence models in which resolution of the near-wall region is appropriate. These models are the v2-f turbulence model and a Launder-Sharma, low-Reynolds number turbulence model. Two standard geometries are considered: the annulus formed between horizontal concentric cylinders and a square enclosure. Each geometry emphasizes wall shear flow and complexities associated with turbulent or near turbulent boundary layers in contact with a motionless core fluid. Overall, the Fuego simulations for both laminar and turbulent flows compared well to measured data, for both geometries under investigation, and to a widely accepted commercial CFD code (FLUENT).
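The laminar/transitional/turbulent distinction above is conventionally characterized by the Rayleigh number of the enclosure. A minimal sketch follows, with assumed air properties and an assumed gap size and temperature difference that are not values from the report.

```python
def rayleigh_number(g, beta, delta_T, L, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha) for buoyancy-driven convection."""
    return g * beta * delta_T * L**3 / (nu * alpha)

# Illustrative air properties near 300 K (assumed values):
Ra = rayleigh_number(g=9.81, beta=1.0 / 300.0, delta_T=20.0, L=0.05,
                     nu=1.6e-5, alpha=2.2e-5)
print(f"Ra = {Ra:.3e}")  # roughly 2e5 here -> laminar for an enclosure this size
```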
NASA Technical Reports Server (NTRS)
Liffman, Kurt
1990-01-01
The effects of catastrophic collisional fragmentation and diffuse medium accretion on the interstellar dust system are computed using a Monte Carlo computer model. The Monte Carlo code has as its basis an analytic solution of the bulk chemical evolution of a two-phase interstellar medium, described by Liffman and Clayton (1989). The model is subjected to numerous different interstellar processes as it transfers from one interstellar phase to another. Collisional fragmentation was found to be the dominant physical process that shapes the size spectrum of interstellar dust. It was found that, in the diffuse cloud phase, 90 percent of the refractory material is locked up in the dust grains, primarily due to accretion in the molecular medium. This result is consistent with the observed depletions of silicon. Depletions were found to be affected only slightly by diffuse cloud accretion.
Frame-Transfer Gating Raman Spectroscopy for Time-Resolved Multiscalar Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Fischer, David G.; Kojima, Jun
2011-01-01
Accurate experimental measurement of spatially and temporally resolved variations in chemical composition (species concentrations) and temperature in turbulent flames is vital for characterizing the complex phenomena occurring in most practical combustion systems. These diagnostic measurements are called multiscalar because they are capable of acquiring multiple scalar quantities simultaneously. Multiscalar diagnostics also play a critical role in the area of computational code validation. In order to improve the design of combustion devices, computational codes for modeling turbulent combustion are often used to speed up and optimize the development process. The experimental validation of these codes is a critical step in accepting their predictions for engine performance in the absence of cost-prohibitive testing. One of the most critical aspects of setting up a time-resolved stimulated Raman scattering (SRS) diagnostic system is the temporal optical gating scheme. A short optical gate is necessary in order for weak SRS signals to be detected with a good signal- to-noise ratio (SNR) in the presence of strong background optical emissions. This time-synchronized optical gating is a classical problem even to other spectroscopic techniques such as laser-induced fluorescence (LIF) or laser-induced breakdown spectroscopy (LIBS). Traditionally, experimenters have had basically two options for gating: (1) an electronic means of gating using an image intensifier before the charge-coupled-device (CCD), or (2) a mechanical optical shutter (a rotary chopper/mechanical shutter combination). A new diagnostic technology has been developed at the NASA Glenn Research Center that utilizes a frame-transfer CCD sensor, in conjunction with a pulsed laser and multiplex optical fiber collection, to realize time-resolved Raman spectroscopy of turbulent flames that is free from optical background noise (interference). The technology permits not only shorter temporal optical gating (down to <1 s, in principle), but also higher optical throughput, thus resulting in a substantial increase in measurement SNR.
HELIOS: A new open-source radiative transfer code
NASA Astrophysics Data System (ADS)
Malik, Matej; Grosheintz, Luc; Lukas Grimm, Simon; Mendonça, João; Kitzmann, Daniel; Heng, Kevin
2015-12-01
I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS exploits the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2] we combine the k-distribution functions of the molecules H2O, CO2, CO & CH4 to generate a k-table, which we then employ in HELIOS. I present our results of the following: (i) Various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers. (ii) Comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra. (iii) Benchmarks of TP-profiles & spectra for various elemental abundances. (iv) Benchmarks of averaged TP-profiles & spectra for the exoplanets GJ1214b, HD189733b & HD209458b. (v) Comparison with secondary eclipse data for HD189733b, XO-1b & CoRoT-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], which is an open-source project aimed at providing community tools to model exoplanetary atmospheres. [1] Grimm & Heng 2015, ArXiv, 1503.03806. [2] Heng, Lyons & Tsai, ArXiv, 1506.05501; Heng & Lyons, ArXiv, 1507.01944. [3] e.g. Heng, Mendonca & Lee, 2014, ApJS, 215, 4H. [4] exoclime.net
Heat Transfer on a Film-Cooled Rotating Blade
NASA Technical Reports Server (NTRS)
Garg, Vijay K.
1999-01-01
A multi-block, three-dimensional Navier-Stokes code has been used to compute heat transfer coefficient on the blade, hub and shroud for a rotating high-pressure turbine blade with 172 film-cooling holes in eight rows. Film cooling effectiveness is also computed on the adiabatic blade. Wilcox's k-omega model is used for modeling the turbulence. Of the eight rows of holes, three are staggered on the shower-head with compound-angled holes. With so many holes on the blade it was somewhat of a challenge to get a good quality grid on and around the blade and in the tip clearance region. The final multi-block grid consists of 4784 elementary blocks which were merged into 276 super blocks. The viscous grid has over 2.2 million cells. Each hole exit, in its true oval shape, has 80 cells within it so that coolant velocity, temperature, k and omega distributions can be specified at these hole exits. It is found that for the given parameters, heat transfer coefficient on the cooled, isothermal blade is highest in the leading edge region and in the tip region. Also, the effectiveness over the cooled, adiabatic blade is the lowest in these regions. Results for an uncooled blade are also shown, providing a direct comparison with those for the cooled blade. Also, the heat transfer coefficient is much higher on the shroud as compared to that on the hub for both the cooled and the uncooled cases.
HRB-22 preirradiation thermal analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, R.; Sawa, K.
1995-05-01
This report describes the preirradiation thermal analysis of the HRB-22 capsule designed for irradiation in the removable beryllium (RB) position of the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). CACA-2, a heavy isotope and fission product concentration calculational code for experimental irradiation capsules, was used to determine the time dependent fission power for the fuel compacts. The Heat Engineering and Transfer in Nine Geometries (HEATING) computer code, version 7.2, was used to solve the steady-state heat conduction problem. The diameters of the graphite fuel body that contains the compacts and of the primary pressure vessel were selected such that the requirements of running the compacts at an average temperature of < 1,250 C and not exceeding a maximum fuel temperature of 1,350 C were met throughout the four cycles of irradiation.
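The HEATING calculation referred to above is a steady-state conduction solve. The Python sketch below solves the analogous one-dimensional balance d/dx(k dT/dx) + q''' = 0 by finite differences; the slab thickness, conductivity, heat generation rate, and boundary temperatures are assumed for illustration and are not HRB-22 capsule data.

```python
import numpy as np

# Minimal 1-D steady-state conduction solve with uniform heat generation,
# illustrating the kind of balance HEATING resolves in 3-D geometries.
n, L = 51, 0.02                 # nodes, slab thickness [m] (assumed)
dx = L / (n - 1)
k = 30.0                        # thermal conductivity [W/m-K] (assumed)
q = 5.0e6                       # volumetric heat generation [W/m^3] (assumed)
T_left, T_right = 900.0, 800.0  # fixed boundary temperatures [C] (assumed)

A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = T_left, T_right
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = k / dx**2
    A[i, i] = -2.0 * k / dx**2
    b[i] = -q
T = np.linalg.solve(A, b)
print(f"peak temperature = {T.max():.1f} C at x = {T.argmax() * dx * 1e3:.1f} mm")
```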
NASA Astrophysics Data System (ADS)
Humeniuk, Alexander; Mitrić, Roland
2017-12-01
A software package, called DFTBaby, is published, which provides the electronic structure needed for running non-adiabatic molecular dynamics simulations at the level of tight-binding DFT. A long-range correction is incorporated to avoid spurious charge transfer states. Excited state energies, their analytic gradients, and scalar non-adiabatic couplings are computed using tight-binding TD-DFT. These quantities are fed into a molecular dynamics code, which integrates Newton's equations of motion for the nuclei together with the electronic Schrödinger equation. Non-adiabatic effects are included by surface hopping. As an example, the program is applied to the optimization of excited states and non-adiabatic dynamics of polyfluorene. The Python and Fortran source code is available at http://www.dftbaby.chemie.uni-wuerzburg.de.
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
NASA Astrophysics Data System (ADS)
Koepferl, Christine M.; Robitaille, Thomas P.
2017-11-01
When modeling astronomical objects throughout the universe, it is important to correctly treat the limitations of the data, for instance finite resolution and sensitivity. In order to simulate these effects, and to make radiative transfer models directly comparable to real observations, we have developed an open-source Python package called the FluxCompensator that enables the post-processing of the output of 3D Monte Carlo radiative transfer codes, such as Hyperion. With the FluxCompensator, realistic synthetic observations can be generated by modeling the effects of convolution with arbitrary point-spread functions, transmission curves, finite pixel resolution, noise, and reddening. Pipelines can be applied to compute synthetic observations that simulate observatories, such as the Spitzer Space Telescope or the Herschel Space Observatory. Additionally, this tool can read in existing observations (e.g., FITS format) and use the same settings for the synthetic observations. In this paper, we describe the package as well as present examples of such synthetic observations.
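A minimal sketch of the post-processing chain described above (PSF convolution, pixel rebinning, noise) follows, using generic scipy/numpy tools rather than the FluxCompensator API; the PSF width, rebinning factor, and noise level are assumed for illustration, and the real package additionally handles transmission curves, reddening, realistic PSF kernels, and FITS I/O.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(0)
model = np.zeros((256, 256))
model[128, 128] = 1.0            # idealized point source from a model image

psf_sigma_pix = 3.0              # assumed Gaussian PSF width [pixels]
blurred = gaussian_filter(model, sigma=psf_sigma_pix)

rebinned = zoom(blurred, 0.25, order=1)                    # coarser detector pixels
noisy = rebinned + rng.normal(0.0, 1e-4, rebinned.shape)   # add detector noise

print(noisy.shape, f"peak S/N ~ {noisy.max() / 1e-4:.1f}")
```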
Irradiation and Enhanced Magnetic Braking in Cataclysmic Variables
NASA Astrophysics Data System (ADS)
McCormick, P. J.; Frank, J.
1998-12-01
In previous work we have shown that irradiation driven mass transfer cycles can occur in cataclysmic variables at all orbital periods if an additional angular momentum loss mechanism is assumed. Earlier models simply postulated that the enhanced angular momentum loss was proportional to the mass transfer rate without any specific physical model. In this paper we present a simple modification of magnetic braking which seems to have the right properties to sustain irradiation driven cycles at all orbital periods. We assume that the wind mass loss from the irradiated companion consists of two parts: an intrinsic stellar wind term plus an enhancement that is proportional to the irradiation. The increase in mass flow reduces the specific angular momentum carried away by the flow but nevertheless yields an enhanced rate of magnetic braking. The secular evolution of the binary is then computed numerically with a suitably modified double polytropic code (McCormick & Frank 1998). With the above model and under certain conditions, mass transfer oscillations occur at all orbital periods.
NASA Astrophysics Data System (ADS)
Smyth, Trevor; Menary, Gary; Geron, Marco
2018-05-01
Impingement of a liquid jet in a polymer cavity has been modelled numerically in this study. Liquid supported stretch blow moulding is a nascent polymer forming process using liquid as the forming medium to produce plastic bottles. The process derives from the conventional stretch blow moulding process which uses compressed air to deform the preform. Heat transfer away from the preform greatly increases when a liquid instead of a gas is flowing over a solid; in the blow moulding process the temperature of the preform is tightly controlled to achieve optimum forming conditions. A model was developed with Computational Fluid Dynamics code ANSYS Fluent which allows the extent of heat transfer between the incoming liquid and the solid preform to be determined in the initial transient stage, where a liquid jet enters an air filled preform. With this data, an approximation of the extent of cooling through the preform wall can be determined.
Endwall Heat Transfer Measurements in a Transonic Turbine Cascade
NASA Technical Reports Server (NTRS)
Giel, P. W.; Thurman, D. R.; VanFossen, G. J.; Hippensteele, S. A.; Boyle, R. J.
1996-01-01
Turbine blade endwall heat transfer measurements are given for a range of Reynolds and Mach numbers. Data were obtained for Reynolds numbers based on inlet conditions of 0.5 and 1.0 × 10^6, for isentropic exit Mach numbers of 1.0 and 1.3, and for freestream turbulence intensities of 0.25% and 7.0%. Tests were conducted in a linear cascade at the NASA Lewis Transonic Turbine Blade Cascade Facility. The test article was a turbine rotor with 136° of turning and an axial chord of 12.7 cm. The large scale allowed for very detailed measurements of both flow field and surface phenomena. The intent of the work is to provide benchmark quality data for computational fluid dynamics (CFD) code and model verification. The flow field in the cascade is highly three-dimensional as a result of thick boundary layers at the test section inlet. Endwall heat transfer data were obtained using a steady-state liquid crystal technique.
Sornborger, Andrew T.; Wang, Zhuo; Tao, Louis
2015-01-01
Neural oscillations can enhance feature recognition [1], modulate interactions between neurons [2], and improve learning and memory [3]. Numerical studies have shown that coherent spiking can give rise to windows in time during which information transfer can be enhanced in neuronal networks [4–6]. Unanswered questions are: 1) What is the transfer mechanism? And 2) how well can a transfer be executed? Here, we present a pulse-based mechanism by which a graded current amplitude may be exactly propagated from one neuronal population to another. The mechanism relies on the downstream gating of mean synaptic current amplitude from one population of neurons to another via a pulse. Because transfer is pulse-based, information may be dynamically routed through a neural circuit with fixed connectivity. We demonstrate the transfer mechanism in a realistic network of spiking neurons and show that it is robust to noise in the form of pulse timing inaccuracies, random synaptic strengths and finite size effects. We also show that the mechanism is structurally robust in that it may be implemented using biologically realistic pulses. The transfer mechanism may be used as a building block for fast, complex information processing in neural circuits. We show that the mechanism naturally leads to a framework wherein neural information coding and processing can be considered as a product of linear maps under the active control of a pulse generator. Distinct control and processing components combine to form the basis for the binding, propagation, and processing of dynamically routed information within neural pathways. Using our framework, we construct example neural circuits to 1) maintain a short-term memory, 2) compute time-windowed Fourier transforms, and 3) perform spatial rotations. We postulate that such circuits, with automatic and stereotyped control and processing of information, are the neural correlates of Crick and Koch’s zombie modes. PMID:26227067
Michael Frei, Dominik; Hodneland, Erlend; Rios-Mondragon, Ivan; Burtey, Anne; Neumann, Beate; Bulkescher, Jutta; Schölermann, Julia; Pepperkok, Rainer; Gerdes, Hans-Hermann; Kögel, Tanja
2015-01-01
Contact-dependent intercellular transfer (codeIT) of cellular constituents can have functional consequences for recipient cells, such as enhanced survival and drug resistance. Pathogenic viruses, prions and bacteria can also utilize this mechanism to spread to adjacent cells and potentially evade immune detection. However, little is known about the molecular mechanism underlying this intercellular transfer process. Here, we present a novel microscopy-based screening method to identify regulators and cargo of codeIT. Single donor cells, carrying fluorescently labelled endocytic organelles or proteins, are co-cultured with excess acceptor cells. CodeIT is quantified by confocal microscopy and image analysis in 3D, preserving spatial information. An siRNA-based screening using this method revealed the involvement of several myosins and small GTPases as codeIT regulators. Our data indicates that cellular protrusions and tubular recycling endosomes are important for codeIT. We automated image acquisition and analysis to facilitate large-scale chemical and genetic screening efforts to identify key regulators of codeIT. PMID:26271723
Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting
NASA Technical Reports Server (NTRS)
Abel, Phillip B.
1989-01-01
A FORTRAN program has been written for use on an IBM PC/XT or AT or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to correctly store floating-point numbers on the IBM PC, which can be directly read by the DEC computer. Any file transfer protocol having provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an x ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.
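A rough modern analogue of the ASCII-to-binary conversion step is sketched below in Python using the struct module. The original FORTRAN program additionally rearranged bytes so the IBM PC's floats could be read as DEC floating-point values and prepended the VGS-5000 spectrum header; neither step is reproduced here, and the file names are hypothetical.

```python
import struct

def ascii_counts_to_binary(text_path, bin_path):
    """Read one number per line from an ASCII file and write them as a
    packed sequence of 4-byte floats (illustrative sketch only)."""
    with open(text_path) as f:
        values = [float(line) for line in f if line.strip()]
    with open(bin_path, "wb") as out:
        # '<' = little-endian, 'f' = 4-byte IEEE 754 float per data point
        out.write(struct.pack(f"<{len(values)}f", *values))
    return len(values)

# Example (hypothetical file names):
# n = ascii_counts_to_binary("spectrum.txt", "spectrum.bin")
```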
Coupled multi-disciplinary composites behavior simulation
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.; Murthy, Pappu L. N.; Chamis, Christos C.
1993-01-01
The capabilities of the computer code CSTEM (Coupled Structural/Thermal/Electro-Magnetic Analysis) are discussed and demonstrated. CSTEM computationally simulates the coupled response of layered multi-material composite structures subjected to simultaneous thermal, structural, vibration, acoustic, and electromagnetic loads and includes the effect of aggressive environments. The composite material behavior and structural response is determined at its various inherent scales: constituents (fiber/matrix), ply, laminate, and structural component. The thermal and mechanical properties of the constituents are considered to be nonlinearly dependent on various parameters such as temperature and moisture. The acoustic and electromagnetic properties also include dependence on vibration and electromagnetic wave frequencies, respectively. The simulation is based on a three dimensional finite element analysis in conjunction with composite mechanics and with structural tailoring codes, and with acoustic and electromagnetic analysis methods. An aircraft engine composite fan blade is selected as a typical structural component to demonstrate the CSTEM capabilities. Results of various coupled multi-disciplinary heat transfer, structural, vibration, acoustic, and electromagnetic analyses for temperature distribution, stress and displacement response, deformed shape, vibration frequencies, mode shapes, acoustic noise, and electromagnetic reflection from the fan blade are discussed for their coupled effects in hot and humid environments. Collectively, these results demonstrate the effectiveness of the CSTEM code in capturing the coupled effects on the various responses of composite structures subjected to simultaneous multiple real-life loads.
Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji
2012-01-01
Microdosimetric quantities such as lineal energy are generally considered to be better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, it is necessary to integrate microdosimetric tools such as track-structure simulation codes with macroscopic particle transport simulation codes. As an integration approach, the mathematical model for calculating the PD of microdosimetric quantities developed based on track-structure simulations was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time, while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.
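The microdosimetric means underlying such analyses are defined directly from the probability density of lineal energy. Below is a minimal sketch using the standard definitions of the frequency-mean and dose-mean lineal energy with a made-up spectrum; the PHITS tallies and the quality-factor function itself are not reproduced.

```python
import numpy as np

def lineal_energy_means(y, f_y):
    """Frequency-mean y_F and dose-mean y_D of the lineal energy y from a
    sampled frequency density f(y): y_F = int y f(y) dy, d(y) = y f(y)/y_F,
    y_D = int y d(y) dy (standard microdosimetric definitions)."""
    f_y = f_y / np.trapz(f_y, y)          # normalize the density
    y_F = np.trapz(y * f_y, y)
    d_y = y * f_y / y_F                   # dose probability density
    y_D = np.trapz(y * d_y, y)
    return y_F, y_D

# Illustrative (made-up) log-normal-like spectrum in keV/um:
y = np.geomspace(0.1, 100.0, 400)
f = np.exp(-0.5 * (np.log(y / 2.0) / 0.8) ** 2) / y
print("y_F = %.2f keV/um, y_D = %.2f keV/um" % lineal_energy_means(y, f))
```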
Standard terminology and labeling of ocular tissue for transplantation.
Armitage, W John; Ashford, Paul; Crow, Barbara; Dahl, Patricia; DeMatteo, Jennifer; Distler, Pat; Gopinathan, Usha; Madden, Peter W; Mannis, Mark J; Moffatt, S Louise; Ponzin, Diego; Tan, Donald
2013-06-01
To develop an internationally agreed terminology for describing ocular tissue grafts to improve the accuracy and reliability of information transfer, to enhance tissue traceability, and to facilitate the gathering of comparative global activity data, including denominator data for use in biovigilance analyses. ICCBBA, the international standards organization for terminology, coding, and labeling of blood, cells, and tissues, approached the major Eye Bank Associations to form an expert advisory group. The group met by regular conference calls to develop a standard terminology, which was released for public consultation and amended accordingly. The terminology uses broad definitions (Classes) with modifying characteristics (Attributes) to define each ocular tissue product. The terminology may be used within the ISBT 128 system to label tissue products with standardized bar codes enabling the electronic capture of critical data in the collection, processing, and distribution of tissues. Guidance on coding and labeling has also been developed. The development of a standard terminology for ocular tissue marks an important step for improving traceability and reducing the risk of mistakes due to transcription errors. ISBT 128 computer codes have been assigned and may now be used to label ocular tissues. Eye banks are encouraged to adopt this standard terminology and move toward full implementation of ISBT 128 nomenclature, coding, and labeling.
Computer Description of Black Hawk Helicopter
1979-06-01
Keywords: Combinatorial Geometry Models; Black Hawk Helicopter; GIFT Computer Code; Geometric Description of Targets. ABSTRACT: The description was made using the technique of combinatorial geometry (COM-GEOM) and will be used as input to the GIFT computer code. The data used by the COVART computer code was generated by the Geometric Information for Targets (GIFT) computer code. This report documents the combinatorial geometry description of the Black Hawk helicopter.
NASA Astrophysics Data System (ADS)
Pontoppidan, Klaus
Based on the observed distributions of exoplanets and dynamical models of their evolution, the primary planet-forming regions of protoplanetary disks are thought to span distances of 1-20 AU from typical stars. A key observational challenge of the next decade will be to understand the links between the formation of planets in protoplanetary disks and the chemical composition of exoplanets. Potentially habitable planets in particular are likely formed by solids growing within radii of a few AU, augmented by unknown contributions from volatiles formed at larger radii of 10-50 AU. The basic chemical composition of these inner disk regions is characterized by near- to far-infrared (2-200 micron) emission lines from molecular gas at temperatures of 50-1500 K. A critical step toward measuring the chemical composition of planet-forming regions is therefore to convert observed infrared molecular line fluxes, profiles and images to gas temperatures, densities and molecular abundances. However, current techniques typically employ approximate radiative transfer methods and assumptions of local thermodynamic equilibrium (LTE) to retrieve abundances, leading to uncertainties of orders of magnitude and inconclusive comparisons to chemical models. Ultimately, the scientific impact of the high quality spectroscopic data expected from the James Webb Space Telescope (JWST) will be limited by the availability of radiative transfer tools for infrared molecular lines. We propose to develop a numerically accurate, non-LTE 3D line radiative transfer code, needed to interpret mid-infrared molecular line observations of protoplanetary and debris disks in preparation for the James Webb Space Telescope (JWST). This will be accomplished by adding critical functionality to the existing Monte Carlo code LIME, which was originally developed to support (sub)millimeter interferometric observations. In contrast to existing infrared codes, LIME calculates the exact statistical balance of arbitrary collections of molecular lines, and does not use large velocity gradient (LVG) or escape probability approximations. However, to use LIME for infrared line radiative transfer, new functionality must be added and tested, such as dust scattering, UV fluorescence, and interfaces with public state-of-the art 3D dust radiative transfer codes (e.g., RADMC3D) and thermo-chemical codes (e.g, ProDiMo). Infrared transitions of molecules expected to be ubiquitous in JWST spectra currently do not have good databases applicable to astrophysical modeling and protoplanetary disks, including water, OH, CO2, NH3, CH4, HCN, etc. Obtaining accurate solutions of the non-LTE line transfer problem in 3D in the infrared is computationally intensive. We propose to benchmark the new code relative to existing, approximate methods to determine whether they are accurate, and under what conditions. We will also create conversion tables between mid-infrared line strengths of water, OH, CH4, NH3, CH3OH, CO2 and other species expected to be observed with JWST, and their relative abundances in planet-forming regions. We propose to apply the new IR-LIME to retrieve molecular abundances from archival and new spectroscopic observations with Spitzer/Herschel/Keck/VLT of CO, water, OH and organic molecules, and to publish comprehensive tables of retrieved molecular abundances in protoplanetary disks. 
The proposed research is relevant to the XRP call, since it addresses a critical step in inferring the chemical abundances of planet-forming material, which in turn can be compared to the observed compositions of exoplanets, thereby improving our understanding of the origins of exoplanetary systems. The proposed research is particularly timely as the first JWST science data are expected to become available toward the end of the three-year duration of the project.
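To make the non-LTE point above concrete, the sketch below solves the statistical-equilibrium balance for a single two-level transition with an assumed background radiation field and assumed collision rates (numbers loosely inspired by a low-J CO rotational line). A code such as LIME couples many such balances to the radiation field across a 3D grid, which is precisely the computationally expensive part.

```python
import numpy as np

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def level_ratio(nu, T_kin, A_ul, g_u, g_l, C_ul, J_bar):
    """n_u/n_l from n_l (B_lu J + C_lu) = n_u (A_ul + B_ul J + C_ul)."""
    B_ul = A_ul * c**2 / (2.0 * h * nu**3)          # Einstein relations
    B_lu = (g_u / g_l) * B_ul
    C_lu = C_ul * (g_u / g_l) * np.exp(-h * nu / (k_B * T_kin))  # detailed balance
    return (B_lu * J_bar + C_lu) / (A_ul + B_ul * J_bar + C_ul)

nu, T_kin = 115.27e9, 50.0               # line frequency [Hz], gas temperature [K]
J_bar = planck(nu, 2.73)                 # assumed CMB-like background field
lte = 3.0 * np.exp(-h * nu / (k_B * T_kin))
for C_ul in (1e-9, 1e-3):                # weak vs. strong collisional coupling [1/s], assumed
    r = level_ratio(nu, T_kin, A_ul=7.2e-8, g_u=3, g_l=1, C_ul=C_ul, J_bar=J_bar)
    print(f"C_ul = {C_ul:.0e} 1/s: n_u/n_l = {r:.2f}  (LTE value {lte:.2f})")
```

At low collision rates the level ratio stays far below the Boltzmann value (subthermal excitation), which is exactly the regime where LTE retrievals go wrong.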
Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki
2015-03-01
This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, which is formulated by first-order differential equations. In the source program of this code, the conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. This code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It is also possible to handle easily the cases of the revision of the biokinetic model or the application of a uniquely defined model by a user, because this code is designed so that all information on the biokinetic model structure is imported from an input file. The sample calculations are performed with the ICRP model, and the results are compared with the analytic solutions using simple models. It is suggested that this code provides sufficient results for the dose estimation and interpretation of monitoring data.
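Since the model is a set of first-order compartmental rate equations, its structure is easy to illustrate. Here is a minimal Python sketch with an invented two-compartment-plus-excretion model and an assumed half-life; it is not an ICRP model and does not use BRAID's input format.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order compartment model of the kind BRAID integrates:
# dA_i/dt = sum_j k_{j->i} A_j - (sum_j k_{i->j} + lambda) A_i
# Rates and compartments below are invented for illustration only.
lam = np.log(2.0) / 8.02        # decay constant for an assumed 8.02-day half-life [1/d]
k = {("blood", "organ"): 0.5, ("organ", "blood"): 0.05, ("blood", "excreta"): 0.3}
comps = ["blood", "organ", "excreta"]
idx = {c: i for i, c in enumerate(comps)}

M = -lam * np.eye(len(comps))   # transfer matrix including radioactive decay
for (src, dst), rate in k.items():
    M[idx[src], idx[src]] -= rate
    M[idx[dst], idx[src]] += rate

A0 = np.array([1.0, 0.0, 0.0])  # unit intake into blood at t = 0
sol = solve_ivp(lambda t, A: M @ A, (0.0, 30.0), A0, dense_output=True)
for c in comps:
    print(f"{c:8s} activity at 30 d: {sol.sol(30.0)[idx[c]]:.4f}")
```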
Multi-d CFD Modeling of a Free-piston Stirling Convertor at NASA Glenn
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Dyson, Rodger W.; Tew, Roy C.; Ibrahim, Mounir B.
2004-01-01
A high efficiency Stirling Radioisotope Generator (SRG) is being developed for possible use in long duration space science missions. NASA's advanced technology goals for next generation Stirling convertors include increasing the Carnot efficiency and percent of Carnot efficiency. To help achieve these goals, a multidimensional Computational Fluid Dynamics (CFD) code is being developed to numerically model the unsteady fluid flow and heat transfer phenomena of the oscillating working gas inside Stirling convertors. Simulations of the Stirling convertors for the SRG will help characterize the thermodynamic losses resulting from fluid flow and heat transfer between the working gas and solid walls. The current CFD simulation represents an approximated 2-dimensional convertor geometry. The simulation solves the Navier-Stokes equations for an ideal helium gas oscillating at low speeds. The current simulation results are discussed.
Biocontainment of genetically modified organisms by synthetic protein design.
Mandell, Daniel J; Lajoie, Marc J; Mee, Michael T; Takeuchi, Ryo; Kuznetsov, Gleb; Norville, Julie E; Gregg, Christopher J; Stoddard, Barry L; Church, George M
2015-02-05
Genetically modified organisms (GMOs) are increasingly deployed at large scales and in open environments. Genetic biocontainment strategies are needed to prevent unintended proliferation of GMOs in natural ecosystems. Existing biocontainment methods are insufficient because they impose evolutionary pressure on the organism to eject the safeguard by spontaneous mutagenesis or horizontal gene transfer, or because they can be circumvented by environmentally available compounds. Here we computationally redesign essential enzymes in the first organism possessing an altered genetic code (Escherichia coli strain C321.ΔA) to confer metabolic dependence on non-standard amino acids for survival. The resulting GMOs cannot metabolically bypass their biocontainment mechanisms using known environmental compounds, and they exhibit unprecedented resistance to evolutionary escape through mutagenesis and horizontal gene transfer. This work provides a foundation for safer GMOs that are isolated from natural ecosystems by a reliance on synthetic metabolites.
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Coffey, Victoria N.; Parker, Linda N.; Blackwell, William C., Jr.; Jun, Insoo; Garrett, Henry B.
2007-01-01
The NUMIT 1-dimensional bulk charging model is used as a screening tool for evaluating time-dependent bulk (internal or deep dielectric) charging of dielectrics exposed to penetrating electron environments. The code is modified to accept time dependent electron flux time series along satellite orbits for the electron environment inputs, instead of using the static electron flux environment input originally used by the code and widely adopted in bulk charging models. Application of the screening technique is demonstrated for three cases of spacecraft exposure within the Earth's radiation belts, including a geostationary transfer orbit and an Earth-Moon transit trajectory for a range of orbit inclinations. Electric fields and charge densities are computed for dielectric materials with varying electrical properties exposed to relativistic electron environments along the orbits. Our objective is to demonstrate a preliminary application of the time-dependent environments input to the NUMIT code for evaluating charging risks to exposed dielectrics used on spacecraft when exposed to the Earth's radiation belts. The results demonstrate that the NUMIT electric field values in GTO orbits with multiple encounters with the Earth's radiation belts are consistent with previous studies of charging in GTO orbits, and that potential threat conditions for electrostatic discharge exist on lunar transit trajectories depending on the electrical properties of the materials exposed to the radiation environment.
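For orientation, the simplest bulk-charging balance of this kind equates displacement plus conduction current to the deposited current density, eps*dE/dt = J(t) - sigma*E. The Python sketch below integrates that textbook balance with an invented, diurnally varying current density and assumed material properties; NUMIT's actual formulation also tracks charge deposition profiles and radiation-induced conductivity, which are omitted here.

```python
import numpy as np

# Simplified one-dimensional internal-charging balance (illustrative only):
#     eps * dE/dt = J(t) - sigma * E
eps0 = 8.854e-12
eps_r, sigma = 3.0, 1e-16          # assumed relative permittivity, conductivity [S/m]
eps = eps_r * eps0

dt, t_end = 60.0, 30 * 86400.0     # 1-minute steps over 30 days
t = np.arange(0.0, t_end, dt)
J = 5e-12 * (1.0 + np.sin(2 * np.pi * t / 86400.0))  # made-up orbit-varying flux [A/m^2]

E = np.zeros_like(t)
for i in range(1, t.size):         # simple explicit update of the balance above
    E[i] = E[i - 1] + dt / eps * (J[i - 1] - sigma * E[i - 1])

print(f"peak internal field ~ {E.max():.2e} V/m "
      f"(worst-case steady limit J_max/sigma ~ {J.max() / sigma:.1e} V/m)")
```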
Wind-tunnel based definition of the AFE aerothermodynamic environment [Aeroassist Flight Experiment]
NASA Technical Reports Server (NTRS)
Miller, Charles G.; Wells, W. L.
1992-01-01
The Aeroassist Flight Experiment (AFE), scheduled to be performed in 1994, will serve as a precursor for aeroassisted space transfer vehicles (ASTV's) and is representative of entry concepts being considered for missions to Mars. Rationale for the AFE is reviewed briefly as are the various experiments carried aboard the vehicle. The approach used to determine hypersonic aerodynamic and aerothermodynamic characteristics over a wide range of simulation parameters in ground-based facilities is presented. Facilities, instrumentation and test procedures employed in the establishment of the data base are discussed. Measurements illustrating the effects of hypersonic simulation parameters, particularly normal-shock density ratio (an important parameter for hypersonic blunt bodies), and attitude on aerodynamic and aerothermodynamic characteristics are presented, and predictions from computational fluid dynamic (CFD) computer codes are compared with measurement.
NASA Astrophysics Data System (ADS)
Liu, J.; Wu, S. P.
2017-04-01
Wall function boundary conditions including the effects of compressibility and heat transfer are improved for compressible turbulent boundary-layer flows. A generalized wall function formulation at zero pressure gradient is proposed based on coupled velocity and temperature profiles in the entire near-wall region, and the parameters in the generalized wall function are revised accordingly. The proposed boundary conditions are integrated into a Navier-Stokes computational fluid dynamics code that includes the shear stress transport turbulence model. Numerical results are presented for a compressible boundary layer over a flat plate at zero pressure gradient. Compared with experimental data, the computational results show that the generalized wall function relaxes the requirement on the first grid spacing in the direction normal to the wall and demonstrates the feasibility and effectiveness of the generalized wall function method.
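As a concrete illustration of what a wall-function boundary condition computes, the sketch below solves the classic incompressible log law for the friction velocity at the first grid point. The generalized compressible, heat-transfer form proposed in the paper is not reproduced here; the constants (kappa, B) and the flow numbers in the example are the usual assumed values.

```python
# Minimal sketch: solving the incompressible log-law for the friction
# velocity u_tau, as done in standard wall-function boundary conditions.
import math

def friction_velocity(u_p, y_p, nu, kappa=0.41, B=5.2, tol=1e-10, max_iter=50):
    """Solve u_p/u_tau = (1/kappa) * ln(y_p * u_tau / nu) + B for u_tau."""
    u_tau = max(1e-6, 0.05 * u_p)          # initial guess
    for _ in range(max_iter):
        y_plus = y_p * u_tau / nu
        f = u_p / u_tau - (math.log(y_plus) / kappa + B)
        df = -u_p / u_tau**2 - 1.0 / (kappa * u_tau)   # d(residual)/d(u_tau)
        step = f / df
        u_tau -= step
        if abs(step) < tol * u_tau:
            break
    return u_tau

# Example: first-cell velocity 20 m/s at y = 1e-4 m in air (nu ~ 1.5e-5 m^2/s).
u_tau = friction_velocity(20.0, 1e-4, 1.5e-5)
print(u_tau, 1e-4 * u_tau / 1.5e-5)   # friction velocity and resulting y+
```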
User manual for semi-circular compact range reflector code: Version 2
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1987-01-01
A computer code has been developed at the Ohio State University ElectroScience Laboratory to analyze a semi-circular paraboloidal reflector with or without a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the reflector or its individual components at a given distance from the center of the paraboloid. The code computes the fields along a radial, horizontal, vertical or axial cut at that distance. Thus, it is very effective in computing the size of the sweet spot for a semi-circular compact range reflector. This report describes the operation of the code. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.
Decomposition of the optical transfer function: wavefront coding imaging systems
NASA Astrophysics Data System (ADS)
Muyo, Gonzalo; Harvey, Andy R.
2005-10-01
We describe the mapping of the optical transfer function (OTF) of an incoherent imaging system into a geometrical representation. We show that for defocused traditional and wavefront-coded systems the OTF can be represented as a generalized Cornu spiral. This representation provides a physical insight into the way in which wavefront coding can increase the depth of field of an imaging system and permits analytical quantification of salient OTF parameters, such as the depth of focus, the location of nulls, and amplitude and phase modulation of the wavefront-coding OTF.
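To make the construction concrete, the following minimal numerical sketch computes a one-dimensional OTF as the normalized autocorrelation of a generalized pupil function carrying defocus plus a cubic (wavefront-coding) phase; the mask strength and defocus values are illustrative assumptions, not parameters from the paper. Tracing the real and imaginary parts of the resulting OTF against spatial frequency gives the Cornu-spiral-like curve discussed above.

```python
# Minimal 1D sketch: OTF of a wavefront-coded system as the normalized
# autocorrelation of the generalized pupil function P(x) = exp(i*phi(x)),
# with a cubic phase mask phi = 2*pi*alpha*x^3 plus defocus 2*pi*W20*x^2.
# alpha and W20 (in waves) are illustrative values, not from the paper.
import numpy as np

N = 2048
x = np.linspace(-1.0, 1.0, N)                 # normalized pupil coordinate
alpha, W20 = 3.0, 2.0                          # cubic strength, defocus (waves)
P = np.exp(1j * 2 * np.pi * (alpha * x**3 + W20 * x**2))   # pupil on [-1, 1]

# OTF(nu) ~ integral P(x + nu/2) P*(x - nu/2) dx, computed via FFT:
# autocorrelation = IFFT(|FFT(P)|^2), then normalized so that OTF(0) = 1.
psf = np.abs(np.fft.fft(P, 4 * N))**2          # PSF ~ |FT of pupil|^2
otf = np.fft.fftshift(np.fft.ifft(psf))        # autocorrelation of P
otf = otf / np.max(np.abs(otf))

print(np.abs(otf[len(otf) // 2]))              # peak (should be 1.0)
```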
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
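The decorator idea can be illustrated with a short sketch: a base density building block is wrapped by a decorator that modifies it, and random positions are drawn from the resulting density by rejection sampling. The class and method names below are hypothetical stand-ins, not SKIRT's actual API, and the spiral modulation is only a toy example of the kind of decorator described.

```python
# Minimal sketch of the decorator idea: a base density component is wrapped
# by a decorator that alters it, and random positions are drawn from the
# resulting density by rejection sampling. Names are hypothetical, not SKIRT's API.
import math, random

class ExponentialDisk:
    """Toy analytical building block: rho ~ exp(-R/h) * exp(-|z|/hz)."""
    def __init__(self, h=1.0, hz=0.2):
        self.h, self.hz = h, hz
    def density(self, x, y, z):
        R = math.hypot(x, y)
        return math.exp(-R / self.h) * math.exp(-abs(z) / self.hz)

class SpiralPerturbationDecorator:
    """Decorator: multiplies the wrapped density by a spiral-arm modulation."""
    def __init__(self, component, arms=2, pitch=0.3, weight=0.5):
        self.c, self.m, self.pitch, self.w = component, arms, pitch, weight
    def density(self, x, y, z):
        R = math.hypot(x, y)
        phi = math.atan2(y, x)
        mod = 1.0 + self.w * math.cos(self.m * (phi - math.log(R + 1e-12) / self.pitch))
        return self.c.density(x, y, z) * mod

def random_position(component, box=5.0, rho_max=2.0):
    """Rejection sampling of a random position from an arbitrary density."""
    while True:
        x, y, z = (random.uniform(-box, box) for _ in range(3))
        if random.uniform(0.0, rho_max) < component.density(x, y, z):
            return x, y, z

# Decorators chain without problems, so complex models come from simple blocks:
model = SpiralPerturbationDecorator(ExponentialDisk(), arms=2)
print(random_position(model))
```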
MULTI2D - a computer code for two-dimensional radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.
2009-06-01
Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of triangular and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.
Program summary: Program title: MULTI2D. Catalogue identifier: AECV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 151 098. No. of bytes in distributed program, including test data, etc.: 889 622. Distribution format: tar.gz. Programming language: C. Computer: PC (32-bit architecture). Operating system: Linux/Unix. RAM: 2 Mbytes. Word size: 32 bits. Classification: 19.7. External routines: X-window standard library (libX11.so) and corresponding header files (X11/*.h) are required.
Nature of problem: In inertial confinement fusion and related experiments with lasers and particle beams, energy transport by thermal radiation becomes important. Under these conditions, the radiation field strongly interacts with the hydrodynamic motion through emission and absorption processes. Solution method: The equations of radiation transfer coupled with Lagrangian hydrodynamics, heat diffusion and beam tracing (laser or ions) are solved in two-dimensional axially symmetric geometry (R-Z coordinates) using a fractional step scheme. Radiation transfer is solved with angular resolution. Matter properties are either interpolated from tables (equations of state and opacities) or computed by user routines (conductivities and beam attenuation). Restrictions: The code has been designed for typical conditions prevailing in inertial confinement fusion (ns time scale, matter states close to local thermodynamic equilibrium, negligible radiation pressure, …). Although a wider range of situations can be treated, extrapolations to regions beyond this design range need special care. Unusual features: A special computer language, called r94, is used at top levels of the code. These parts have to be converted to standard C by a translation program (supplied as part of the package). Due to the complexity of the code (hydro-code, grid generation, user interface, graphic post-processor, translator program, installation scripts), extensive manuals are supplied as part of the package. Running time: 567 seconds for the example supplied.
Development of the NASA/FLAGRO computer program for analysis of airframe structures
NASA Technical Reports Server (NTRS)
Forman, R. G.; Shivakumar, V.; Newman, J. C., Jr.
1994-01-01
The NASA/FLAGRO (NASGRO) computer program was developed for fracture control analysis of space hardware and is currently the standard computer code in NASA, the U.S. Air Force, and the European Space Agency (ESA) for this purpose. The significant attributes of the NASGRO program are the numerous crack case solutions, the large materials file, the improved growth rate equation based on crack closure theory, and the user-friendly promptive input features. In support of the National Aging Aircraft Research Program (NAARP), NASGRO is being further developed to provide advanced state-of-the-art capability for damage tolerance and crack growth analysis of aircraft structural problems, including mechanical systems and engines. The project currently involves a cooperative development effort by NASA, the FAA, and ESA. The primary tasks underway are the incorporation of advanced methodology for crack growth rate retardation resulting from spectrum loading and improved analysis for determining crack instability. Also, the current weight function solutions in NASGRO for nonlinear stress gradient problems are being extended to more crack cases, and the 2-D boundary integral routine for stress analysis and stress-intensity factor solutions is being extended to 3-D problems. Lastly, effort is underway to enhance the program to operate on personal computers and workstations in a Windows environment. Because of the increasing and already wide usage of NASGRO, the code offers an excellent mechanism for technology transfer of new fatigue and fracture mechanics capabilities developed within NAARP.
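For readers unfamiliar with how such a code marches a crack, the sketch below performs a cycle-by-cycle crack-growth integration using a plain Paris-law rate as a stand-in; the actual NASGRO growth-rate equation adds crack-closure, threshold and instability terms that are not reproduced here, and all constants and the geometry factor in the example are illustrative assumptions.

```python
# Simplified sketch of cycle-by-cycle crack-growth integration with a plain
# Paris-law rate da/dN = C * (dK)^n. The NASGRO equation adds crack-closure,
# threshold and instability terms not reproduced here; constants are illustrative.
import math

def grow_crack(a0, da_dN, delta_sigma, K_crit, beta=1.12, max_cycles=10_000_000):
    """Integrate crack length until the stress-intensity range reaches K_crit."""
    a = a0
    for cycle in range(1, max_cycles + 1):
        dK = beta * delta_sigma * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        if dK >= K_crit:
            return cycle, a          # predicted instability
        a += da_dN(dK)
    return max_cycles, a

# Illustrative Paris-law constants (aluminium-like orders of magnitude).
C, n = 1e-11, 3.0
paris = lambda dK: C * dK**n          # da/dN in metres per cycle

cycles, a_final = grow_crack(a0=1e-3, da_dN=paris, delta_sigma=100.0, K_crit=30.0)
print(cycles, a_final)
```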
An investigation of tritium transfer in reactor loops
NASA Astrophysics Data System (ADS)
Ilyasova, O. H.; Mosunova, N. A.
2017-09-01
The work is devoted to the important task of numerical simulation and analysis of tritium behaviour in reactor loops. The simulation was carried out with the HYDRA-IBRAE/LM code, which is being developed at the Nuclear Safety Institute of the Russian Academy of Sciences. The code is intended for modeling liquid metal flow (sodium, lead and lead-bismuth) on the basis of a non-homogeneous, non-equilibrium two-fluid model. In order to simulate tritium transfer in the code, a special module has been developed. The module includes models describing the main phenomena of tritium behaviour in reactor loops: transfer, permeation, leakage, etc. Because of the shortage of experimental data, a large number of analytical tests and comparative calculations were considered, some of which are presented in this work. The comparison of the calculated results with experimental and analytical data demonstrates not only qualitative but also good quantitative agreement. It is therefore possible to confirm that the HYDRA-IBRAE/LM code allows modeling of tritium transfer in reactor loops.
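One of the phenomena listed, permeation through a metal wall, can be sketched with the standard diffusion-limited square-root-of-pressure form; the permeability values below are illustrative assumptions and are not coefficients from the HYDRA-IBRAE/LM module.

```python
# Minimal sketch of tritium permeation through a metal wall using the
# standard diffusion-limited form J = Phi * (sqrt(p_up) - sqrt(p_down)) / d
# (Sieverts-law boundary concentrations). Values are illustrative assumptions.
import math

def permeation_flux(phi0, E_a, T, p_up, p_down, thickness):
    """Tritium flux [mol / (m^2 s)] through a wall of given thickness [m]."""
    R = 8.314                                   # J/(mol K)
    phi = phi0 * math.exp(-E_a / (R * T))       # Arrhenius permeability
    return phi * (math.sqrt(p_up) - math.sqrt(p_down)) / thickness

# Illustrative numbers: 1 Pa upstream, vacuum downstream, 2 mm wall, 700 K.
J = permeation_flux(phi0=1e-7, E_a=65e3, T=700.0, p_up=1.0, p_down=0.0,
                    thickness=2e-3)
print(J)   # mol m^-2 s^-1
```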
HBOI Underwater Imaging and Communication Research - Phase 1
2012-04-19
Development and validation of a one-way pulse-stretching radiative transfer code. The objective was to develop and validate time-resolved radiative transfer models. The models were subjected to a series of validation experiments over 12.5 meter... Detail about the theoretical basis of the model together with validation results can be found in Dalgleish et al. (2010). Forward scattering Mueller...
Comparison of liquid rocket engine base region heat flux computations using three turbulence models
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Griffith, Dwaine O., II; Prendergast, Maurice J.; Seaford, C. M.
1993-01-01
The flow in the base region of launch vehicles is characterized by flow separation, flow reversals, and reattachment. Computation of the convective heat flux in the base region and on the nozzle external surface of Space Shuttle Main Engine and Space Transportation Main Engine (STME) is an important part of defining base region thermal environments. Several turbulence models were incorporated in a CFD code and validated for flow and heat transfer computations in the separated and reattaching regions associated with subsonic and supersonic flows over backward facing steps. Heat flux computations in the base region of a single STME engine and a single S1C engine were performed using three different wall functions as well as a renormalization-group based k-epsilon model. With the very limited data available, the computed values are seen to be of the right order of magnitude. Based on the validation comparisons, it is concluded that all the turbulence models studied have predicted the reattachment location and the velocity profiles at various axial stations downstream of the step very well.
Two way time transfer results at NRL and USNO
NASA Technical Reports Server (NTRS)
Galysh, Ivan J.; Landis, G. Paul
1993-01-01
The Naval Research Laboratory (NRL) has developed a two-way time transfer modem system for the United States Naval Observatory (USNO). Two modems in conjunction with a pair of Very Small Aperture Terminals (VSAT) and a communication satellite can achieve sub-nanosecond time transfer. This performance is demonstrated by the results of testing at and between NRL and USNO. The modems use Code Division Multiple Access (CDMA) methods to separate their signals through a single path in the satellite. Each modem transmitted a different Pseudo Random Noise (PRN) code and received the other's PRN code. High-precision time transfer is possible with two-way methods because of the reciprocity of many of the terms of the path and hardware delay between the two modems. The hardware description was given in a previous paper.
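The principle behind the reciprocity argument can be shown in a few lines: each station measures the interval between its own clock pulse and the arrival of the other station's PRN-coded signal, and for a reciprocal path the clock offset is half the difference of the two readings. The numbers below are made up for illustration, and corrections such as the Sagnac effect and non-reciprocal hardware delays are ignored.

```python
# Minimal sketch of the two-way time transfer principle: with a reciprocal
# path, the path delay cancels and the clock offset is half the difference
# of the two time-interval counter readings.
def two_way_offset(ti_a, ti_b):
    """Clock offset (A - B) from the two time-interval counter readings.

    ti_a: interval measured at A between A's clock pulse and arrival of B's signal
    ti_b: interval measured at B between B's clock pulse and arrival of A's signal
    Corrections (Sagnac effect, non-reciprocal hardware delays) are ignored here.
    """
    return 0.5 * (ti_a - ti_b)

# Example: A's clock runs 3 ns ahead of B's, one-way path delay 250 ms.
true_offset, path = 3e-9, 0.250
ti_a = path + true_offset
ti_b = path - true_offset
print(two_way_offset(ti_a, ti_b))   # recovers 3e-9 (A ahead of B by 3 ns)
```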
SHAPEMOL: Modelling molecular line emission in protoplanetary and planetary nebulae with SHAPE
NASA Astrophysics Data System (ADS)
Santander-García, M.; Bujarrabal, V.; Steffen, W.; Koning, N.
2014-04-01
Modern instrumentation in radioastronomy constitutes a valuable tool for studying the Universe: ALMA will reach unprecedented sensitivities and spatial resolution, while Herschel/HIFI has opened a new window for probing warm molecular gas (˜50-1000 K). On the other hand, the SHAPE software has emerged in the last few years as the standard tool for determining the morphology and velocity field of different kinds of gaseous emission nebulae via spatio-kinematical modelling. Standard SHAPE implements radiative transfer solving, but it is only available for atomic species and not for molecules. Aware of the growing importance of tools that ease the analysis of molecular data from new-era observatories, we introduce the computer code shapemol, a plug-in for SHAPE v5.0 with which we intend to fill the so far empty molecular niche. Shapemol enables spatio-kinematic modelling with accurate non-LTE calculations of line excitation and radiative transfer in molecular species. The code has been successfully tested in the study of the excitation conditions of the molecular envelope of the young planetary nebula NGC 7027 using data from Herschel/HIFI and the IRAM 30m telescope. Currently, it allows radiative transfer solving in the 12CO and 13CO J=1-0 to J=17-16 lines. Shapemol, used along with SHAPE, allows the user to easily generate synthetic maps to test against interferometric observations, as well as synthetic line profiles to match single-dish observations.
Enhancements to the SSME transfer function modeling code
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.
1995-01-01
This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes which enhance the code functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction of ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low pass filtered prior to the modeling process in an effort to filter out high frequency characteristics. The fourth method removes the presumed system excitation and its harmonics in order to investigate the effects of the excitation on the modeling process. The fifth method is an attempt to apply constrained RID to obtain better transfer functions through more accurate modeling over certain frequency ranges. Section 4 presents some new C main files which were created to round out the functionality of the existing SSME transfer function modeling code. It is now possible to go from time data to transfer function models using only the C codes; it is not necessary to rely on external software. The new C main files and instructions for their use are included. Section 5 presents current and future enhancements to the XPLOT graphics program which was delivered with the initial software. Several new features which have been added to the program are detailed in the first part of this section. The remainder of Section 5 then lists some possible features which may be added in the future. Section 6 contains the conclusion section of this report. Section 6.1 is an overview of the work including a summary and observations relating to finding transfer functions with the SSME code. Section 6.2 contains information relating to future work on the project.
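Since most of the models in the report are generated with the Eigensystem Realization Algorithm, a minimal single-input/single-output ERA sketch is given below; it follows the standard textbook formulation (Hankel matrix of impulse-response samples, SVD, reduced-order A, B, C) and is not the project's actual code. The synthetic two-mode impulse response used for the test is made up.

```python
# Minimal sketch of the Eigensystem Realization Algorithm (ERA) for a
# single-input/single-output system: standard textbook formulation only.
import numpy as np

def era(markov, order, rows=20, cols=20):
    """Identify (A, B, C) from impulse-response samples h[0], h[1], ..."""
    H0 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 2] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Un, Vn = U[:, :order], Vt[:order, :].T
    S_half = np.diag(np.sqrt(s[:order]))
    S_half_inv = np.diag(1.0 / np.sqrt(s[:order]))
    A = S_half_inv @ Un.T @ H1 @ Vn @ S_half_inv
    B = (S_half @ Vn.T)[:, :1]
    C = (Un @ S_half)[:1, :]
    return A, B, C

# Synthetic test: a damped two-mode impulse response, then re-identify it.
k = np.arange(60)
h = 0.9**k * np.cos(0.5 * k) + 0.7**k * np.sin(1.3 * k)   # h[0], h[1], ...
A, B, C = era(h, order=4)
print(np.sort(np.abs(np.linalg.eigvals(A))))   # recovered pole magnitudes ~ [0.7, 0.7, 0.9, 0.9]
```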
User's manual for semi-circular compact range reflector code
NASA Technical Reports Server (NTRS)
Gupta, Inder J.; Burnside, Walter D.
1986-01-01
A computer code was developed to analyze a semi-circular paraboloidal reflector antenna with a rolled edge at the top and a skirt at the bottom. The code can be used to compute the total near field of the antenna or its individual components at a given distance from the center of the paraboloid. Thus, it is very effective in computing the size of the sweet spot for RCS or antenna measurement. The operation of the code is described. Various input and output statements are explained. Some results obtained using the computer code are presented to illustrate the code's capability as well as being samples of input/output sets.
Highly fault-tolerant parallel computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spielman, D.A.
We re-introduce the coded model of fault-tolerant computation in which the input and output of a computational device are treated as words in an error-correcting code. A computational device correctly computes a function in the coded model if its input and output, once decoded, are a valid input and output of the function. In the coded model, it is reasonable to hope to simulate all computational devices by devices whose size is greater by a constant factor but which are exponentially reliable even if each of their components can fail with some constant probability. We consider fine-grained parallel computations in which each processor has a constant probability of producing the wrong output at each time step. We show that any parallel computation that runs for time t on w processors can be performed reliably on a faulty machine in the coded model using w log^{O(1)} w processors and time t log^{O(1)} w. The failure probability of the computation will be at most t · exp(-w^{1/4}). The codes used to communicate with our fault-tolerant machines are generalized Reed-Solomon codes and can thus be encoded and decoded in O(n log^{O(1)} n) sequential time and are independent of the machine they are used to communicate with. We also show how coded computation can be used to self-correct many linear functions in parallel with arbitrarily small overhead.
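To illustrate the kind of code the construction relies on, the sketch below implements Reed-Solomon encoding as polynomial evaluation over a small prime field, together with erasure-only decoding by Lagrange interpolation; the generalized Reed-Solomon codes and fast encoders/decoders used in the paper are not reproduced here, and the field and message sizes are illustrative.

```python
# Minimal sketch of Reed-Solomon encoding (polynomial evaluation over GF(p))
# and erasure-only decoding by Lagrange interpolation, for illustration.
P = 257                       # a small prime; the field GF(257)

def rs_encode(message, n):
    """Evaluate the message polynomial (coefficients, lowest degree first)
    at the points 0..n-1, with n greater than len(message)."""
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [poly_eval(message, x) for x in range(n)]

def poly_mul_linear(poly, root):
    """Multiply polynomial 'poly' (lowest degree first) by (x - root) mod P."""
    out = [0] * (len(poly) + 1)
    for t, c in enumerate(poly):
        out[t] = (out[t] - root * c) % P
        out[t + 1] = (out[t + 1] + c) % P
    return out

def rs_decode_from_k(points):
    """Recover the message from any k intact (x, y) pairs by Lagrange
    interpolation (erasure decoding only, for illustration)."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        numer, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                numer = poly_mul_linear(numer, xj)
                denom = (denom * (xi - xj)) % P
        scale = (yi * pow(denom, P - 2, P)) % P     # divide by denom in GF(P)
        for t, c in enumerate(numer):
            coeffs[t] = (coeffs[t] + scale * c) % P
    return coeffs

msg = [11, 42, 7, 99]                       # k = 4 message symbols
codeword = rs_encode(msg, n=8)              # 8 symbols; tolerates 4 erasures
survivors = [(x, codeword[x]) for x in (1, 3, 6, 7)]
print(rs_decode_from_k(survivors) == msg)   # True
```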
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
Consequence analysis in LPG installation using an integrated computer package.
Ditali, S; Colombi, M; Moreschini, G; Senni, S
2000-01-07
This paper presents the prototype of the computer code, Atlantide, developed to assess the consequences associated with accidental events that can occur in a LPG storage plant. The characteristic of Atlantide is to be simple enough but at the same time adequate to cope with consequence analysis as required by Italian legislation in fulfilling the Seveso Directive. The application of Atlantide is appropriate for LPG storage/transferring installations. The models and correlations implemented in the code are relevant to flashing liquid releases, heavy gas dispersion and other typical phenomena such as BLEVE/Fireball. The computer code allows, on the basis of the operating/design characteristics, the study of the relevant accidental events from the evaluation of the release rate (liquid, gaseous and two-phase) in the unit involved, to the analysis of the subsequent evaporation and dispersion, up to the assessment of the final phenomena of fire and explosion. This is done taking as reference simplified Event Trees which describe the evolution of accidental scenarios, taking into account the most likely meteorological conditions, the different release situations and other features typical of a LPG installation. The limited input data required and the automatic linking between the single models, that are activated in a defined sequence, depending on the accidental event selected, minimize both the time required for the risk analysis and the possibility of errors. Models and equations implemented in Atlantide have been selected from public literature or in-house developed software and tailored with the aim to be easy to use and fast to run but, nevertheless, able to provide realistic simulation of the accidental event as well as reliable results, in terms of physical effects and hazardous areas. The results have been compared with those of other internationally recognized codes and with the criteria adopted by Italian authorities to verify the Safety Reports for LPG installations. A brief of the theoretical basis of each model implemented in Atlantide and an example of application are included in the paper.
Heat Transfer by Thermo-Capillary Convection. Sounding Rocket COMPERE Experiment SOURCE
NASA Astrophysics Data System (ADS)
Fuhrmann, Eckart; Dreyer, Michael
2009-08-01
This paper describes the results of a sounding rocket experiment which was partly dedicated to studying the heat transfer from a hot wall to a cold liquid with a free surface. Natural or buoyancy-driven convection does not occur in the compensated gravity environment of a ballistic phase. Thermo-capillary convection driven by a temperature gradient along the free surface always occurs if a non-condensable gas is present. This convection increases the heat transfer compared to a purely conductive case. Heat transfer correlations are needed to predict temperature distributions in the tanks of cryogenic upper stages. Future upper stages of the European Ariane V rocket have mission scenarios with multiple ballistic phases. The aims of this paper, and of the COMPERE group (a French-German research group on propellant behavior in rocket tanks) in general, are to provide basic knowledge, correlations and computer models to predict the thermo-fluid behavior of cryogenic propellants for future mission scenarios. Temperature and surface location data from the flight have been compared with numerical calculations to get the heat flux from the wall to the liquid. Since heat flux measurements along the walls of the transparent test cell were not possible, the analysis of the heat transfer coefficient relies on the numerical modeling, which was validated with the flight data. The agreement between experiment and simulation is fairly good and allows presenting the data in the form of a Nusselt number which depends on a characteristic Reynolds number and the Prandtl number. The results are useful for further benchmarking of Computational Fluid Dynamics (CFD) codes such as FLOW-3D and FLUENT, and for the design of future upper stage propellant tanks.
Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step
NASA Astrophysics Data System (ADS)
Jayakumar, J. S.; Kumar, Inder; Eswaran, V.
2010-12-01
Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures for a structured-grid finite-volume solver is easier than for unstructured grids, but real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. Hence, a hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain, so that only the region that requires a finer grid is meshed finer, and not the entire domain. This work aims to develop such a finite-volume hybrid grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to solving flows involving separation and subsequent reattachment due to sudden expansion or contraction. A significant effect of mixing of high- and low-enthalpy fluid occurs in the reattached regions of these devices. This makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using a two-equation k-ω model. The variation in the flow profile and heat transfer behavior has been studied with the variation in Re and solid-to-fluid thermal conductivity ratios. The results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.
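The practical payoff of a hybrid mesh is that a face-based finite-volume assembly never needs to know a cell's shape, only its faces. The sketch below shows such a face loop for a diffusion residual; the tiny mesh and the diffusion-only physics are illustrative assumptions, not the solver described above.

```python
# Minimal sketch of face-based finite-volume assembly on a hybrid mesh:
# the loop never asks whether a cell is hexahedral, tetrahedral or pyramidal,
# only what its faces are. Mesh and physics below are illustrative only.
def assemble_diffusion_residual(phi, faces, volumes, gamma=1.0):
    """Accumulate diffusive fluxes gamma * A * (phi_nb - phi_own) / d
    over interior faces; return the residual per unit cell volume."""
    residual = [0.0] * len(phi)
    for owner, neighbour, area, dist in faces:
        flux = gamma * area * (phi[neighbour] - phi[owner]) / dist
        residual[owner] += flux          # into the owner cell
        residual[neighbour] -= flux      # out of the neighbour cell
    return [r / v for r, v in zip(residual, volumes)]

# Three cells of different "shapes" (only volumes and shared faces matter):
phi     = [300.0, 350.0, 400.0]          # e.g. cell temperatures
volumes = [1.0, 0.5, 0.75]
faces   = [(0, 1, 1.0, 1.0),             # (owner, neighbour, area, distance)
           (1, 2, 0.8, 0.9)]
print(assemble_diffusion_residual(phi, faces, volumes))
```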
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †
Murdani, Muhammad Harist; Hong, Bonghee
2018-01-01
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space. PMID:29587366
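A minimal sketch of the combined metric described above, a weighted sum of a normalized centroid distance and a road-connectivity term, is given below; the normalization constants and weights are illustrative assumptions rather than the paper's calibrated values.

```python
# Minimal sketch of a weighted-sum proximity metric combining the centroid
# distance between two ZIP code areas with the road network that crosses
# their shared boundary. Weights and normalizations are illustrative.
import math

def centroid_distance(c1, c2):
    """Euclidean distance between two ZIP-area centroids (projected coords)."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

def combined_distance(c1, c2, shared_roads, w_dist=0.7, w_road=0.3,
                      d_max=50_000.0, roads_max=20.0):
    """Weighted-sum distance: smaller means closer. More intersecting
    roads make two ZIP codes effectively closer."""
    d_norm = min(centroid_distance(c1, c2) / d_max, 1.0)
    road_norm = 1.0 - min(shared_roads / roads_max, 1.0)
    return w_dist * d_norm + w_road * road_norm

# Ad-Hoc proximity between two hypothetical ZIP areas:
print(combined_distance((0.0, 0.0), (12_000.0, 5_000.0), shared_roads=6))

# Top-K proximity: rank candidate neighbours of a reference ZIP code.
candidates = {"zip_b": ((12_000.0, 5_000.0), 6), "zip_c": ((4_000.0, 1_000.0), 0)}
ranked = sorted(candidates,
                key=lambda z: combined_distance((0.0, 0.0), *candidates[z]))
print(ranked[:1])   # the single nearest neighbour (K = 1)
```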
Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.
Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee
2018-03-24
In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
Graded, Dynamically Routable Information Processing with Synfire-Gated Synfire Chains.
Wang, Zhuo; Sornborger, Andrew T; Tao, Louis
2016-06-01
Coherent neural spiking and local field potentials are believed to be signatures of the binding and transfer of information in the brain. Coherent activity has now been measured experimentally in many regions of mammalian cortex. Recently experimental evidence has been presented suggesting that neural information is encoded and transferred in packets, i.e., in stereotypical, correlated spiking patterns of neural activity. Due to their relevance to coherent spiking, synfire chains are one of the main theoretical constructs that have been appealed to in order to describe coherent spiking and information transfer phenomena. However, for some time, it has been known that synchronous activity in feedforward networks asymptotically either approaches an attractor with fixed waveform and amplitude, or fails to propagate. This has limited the classical synfire chain's ability to explain graded neuronal responses. Recently, we have shown that pulse-gated synfire chains are capable of propagating graded information coded in mean population current or firing rate amplitudes. In particular, we showed that it is possible to use one synfire chain to provide gating pulses and a second, pulse-gated synfire chain to propagate graded information. We called these circuits synfire-gated synfire chains (SGSCs). Here, we present SGSCs in which graded information can rapidly cascade through a neural circuit, and show a correspondence between this type of transfer and a mean-field model in which gating pulses overlap in time. We show that SGSCs are robust in the presence of variability in population size, pulse timing and synaptic strength. Finally, we demonstrate the computational capabilities of SGSC-based information coding by implementing a self-contained, spike-based, modular neural circuit that is triggered by streaming input, processes the input, then makes a decision based on the processed information and shuts itself down.
MODTRAN6: a major upgrade of the MODTRAN radiative transfer code
NASA Astrophysics Data System (ADS)
Berk, Alexander; Conforti, Patrick; Kennett, Rosemary; Perkins, Timothy; Hawes, Frederick; van den Bosch, Jeannette
2014-06-01
The MODTRAN6 radiative transfer (RT) code is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. This version of the code incorporates modern software architecture including an application programming interface, enhanced physics features including a line-by-line algorithm, a supplementary physics toolkit, and new documentation. The application programming interface has been developed for ease of integration into user applications. The MODTRAN code has been restructured towards a modular, object-oriented architecture to simplify upgrades as well as facilitate integration with other developers' codes. MODTRAN now includes a line-by-line algorithm for high resolution RT calculations as well as coupling to optical scattering codes for easy implementation of custom aerosols and clouds.
NASA Technical Reports Server (NTRS)
1994-01-01
General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.
Sensitivity study of the monogroove with screen heat pipe design
NASA Technical Reports Server (NTRS)
Evans, Austin L.; Joyce, Martin
1988-01-01
The present sensitivity study of design variable effects on the performance of a monogroove-with-screen heat pipe obtains performance curves for maximum heat-transfer rates vs. operating temperatures by means of a computer code; performance projections for both 1-g and zero-g conditions are obtainable. The variables in question were liquid and vapor channel design, wall groove design, and the number of feed lines in the evaporator and condenser. The effect on performance of three different working fluids, namely ammonia, methanol, and water, was also determined. Greatest sensitivity was to changes in liquid and vapor channel diameters.
Bond order potential module for LAMMPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-11
pair_bop is a module for performing energy calculations using the Bond Order Potential (BOP) for use in the parallel molecular dynamics code LAMMPS. The bop pair style computes the BOP based upon quantum mechanical theory, incorporating both sigma and pi bonding. By analytically deriving the BOP from quantum mechanical theory, its transferability to different phases can approach that of quantum mechanical methods. This potential is extremely effective at modeling III-V and II-VI compounds such as GaAs and CdTe. This potential is similar to the original BOP developed by Pettifor and later updated by Murdock et al. and Ward et al.
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.
"Hour of Code": Can It Change Students' Attitudes toward Programming?
ERIC Educational Resources Information Center
Du, Jie; Wimmer, Hayden; Rada, Roy
2016-01-01
The Hour of Code is a one-hour introduction to computer science organized by Code.org, a non-profit dedicated to expanding participation in computer science. This study investigated the impact of the Hour of Code on students' attitudes towards computer programming and their knowledge of programming. A sample of undergraduate students from two…
NASA Technical Reports Server (NTRS)
Thompson, E.
1979-01-01
A finite element computer code for the analysis of mantle convection is described. The coupled equations for creeping viscous flow and heat transfer can be solved for either a transient analysis or a steady-state analysis. For transient analyses, either a control-volume or a control-mass approach can be used. Non-Newtonian fluids with viscosities that have thermal and spatial dependencies can be easily incorporated. All material parameters may be written as function statements by the user or simply specified as constants. A wide range of boundary conditions, both for the thermal analysis and the viscous flow analysis, can be specified. For steady-state analyses, elastic strain rates can be included. Although this manual was specifically written for users interested in mantle convection, the code is equally well suited for analysis in a number of other areas including metal forming, glacial flows, and creep of rock and soil.
Solar Ellerman Bombs in 1D Radiative Hydrodynamics
NASA Astrophysics Data System (ADS)
Reid, A.; Mathioudakis, M.; Kowalski, A.; Doyle, J. G.; Allred, J. C.
2017-02-01
Recent observations from the Interface Region Imaging Spectrograph appear to show impulsive brightenings in high temperature lines which, when combined with simultaneous ground-based observations in Hα, appear co-spatial to Ellerman Bombs (EBs). We use the RADYN one-dimensional radiative transfer code in an attempt to reproduce the observed line profiles and simulate the atmospheric conditions of these events. Combined with the MULTI/RH line synthesis codes, we compute the Hα, Ca II 8542 Å, and Mg II h and k lines for these simulated events and compare them to previous observations. Our findings hint that the presence of superheated regions in the photosphere (>10,000 K) is not a plausible explanation for the production of EB signatures. While we are able to recreate EB-like line profiles in Hα, Ca II 8542 Å, and Mg II h and k, we cannot achieve agreement with all of these simultaneously.
Strategies for the coupling of global and local crystal growth models
NASA Astrophysics Data System (ADS)
Derby, Jeffrey J.; Lun, Lisa; Yeckel, Andrew
2007-05-01
The modular coupling of existing numerical codes to model crystal growth processes will provide for maximum effectiveness, capability, and flexibility. However, significant challenges are posed to make these coupled models mathematically self-consistent and algorithmically robust. This paper presents sample results from a coupling of the CrysVUn code, used here to compute furnace-scale heat transfer, and Cats2D, used to calculate melt fluid dynamics and phase-change phenomena, to form a global model for a Bridgman crystal growth system. However, the strategy used to implement the CrysVUn-Cats2D coupling is unreliable and inefficient. The implementation of under-relaxation within a block Gauss-Seidel iteration is shown to be ineffective for improving the coupling performance in a model one-dimensional problem representative of a melt crystal growth model. Ideas to overcome current convergence limitations using approximations to a full Newton iteration method are discussed.
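The coupling strategy under discussion, block Gauss-Seidel with under-relaxation, can be sketched on a one-dimensional conjugate heat-transfer model problem in which two single-domain "solvers" exchange interface data; the model problem below is illustrative only and does not reproduce the CrysVUn/Cats2D system or the paper's finding about the limits of relaxation.

```python
# Minimal 1D sketch of block Gauss-Seidel coupling with under-relaxation:
# a left conduction "solver" returns the interface flux for a given interface
# temperature, a right "solver" returns the interface temperature implied by
# that flux, and the loop under-relaxes the interface update. Illustrative only.
def left_solver(T_interface, T_left=1.0, k=5.0, L=0.5):
    """Given the interface temperature, return the heat flux it produces."""
    return k * (T_left - T_interface) / L

def right_solver(flux, T_right=0.0, k=1.0, L=0.5):
    """Given the interface flux, return the interface temperature it implies."""
    return T_right + flux * L / k

def coupled_solve(omega=0.15, tol=1e-10, max_iter=500):
    """Block Gauss-Seidel: left solver, then right solver, then under-relax."""
    Ti = 0.5                                 # initial interface guess
    for it in range(max_iter):
        q = left_solver(Ti)
        Ti_new = right_solver(q)
        Ti_next = (1.0 - omega) * Ti + omega * Ti_new   # under-relaxation
        if abs(Ti_next - Ti) < tol:
            return Ti_next, it
        Ti = Ti_next
    return Ti, max_iter

Ti, iters = coupled_solve()
print(Ti, iters)    # exact interface temperature is 5/6 for these properties
```

For this model problem the unrelaxed fixed-point map diverges, so the relaxation factor is what makes the exchange converge; as the abstract notes, the same remedy was found ineffective for the crystal growth coupling, motivating the Newton-type ideas discussed there.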
Simulation Studies of Mechanical Properties of Novel Silica Nano-structures
NASA Astrophysics Data System (ADS)
Muralidharan, Krishna; Torras Costa, Joan; Trickey, Samuel B.
2006-03-01
Advances in nanotechnology and the importance of silica as a technological material continue to stimulate computational study of the properties of possible novel silica nanostructures. Thus we have done classical molecular dynamics (MD) and multi-scale quantum mechanical (QM/MD) simulation studies of the mechanical properties of single-wall and multi-wall silica nano-rods of varying dimensions. Such nano-rods have been predicted by Mallik et al. to be unusually strong in tensile failure. Here we compare failure mechanisms of such nano-rods under tension, compression, and bending. The concurrent multi-scale QM/MD studies use the general PUPIL system (Torras et al.). In this case, PUPIL provides automated interoperation of the MNDO Transfer Hamiltonian QM code (Taylor et al.) and a locally written MD code. Embedding of the QM-forces domain is via the scheme of Mallik et al. Work supported by NSF ITR award DMR-0325553.
Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan
2013-01-01
Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…
40 CFR 80.171 - Product transfer documents (PTDs).
Code of Federal Regulations, 2010 CFR
2010-07-01
... being transferred is exempt base gasoline to be used for research, development, or test purposes only, the following warning must also be stated on the PTD: “For use in research, development, and test... codes and other non-regulatory language. (1) Product codes and other non-regulatory language may not be...