Sample records for design code predictions

  1. Off-design computer code for calculating the aerodynamic performance of axial-flow fans and compressors

    NASA Technical Reports Server (NTRS)

    Schmidt, James F.

    1995-01-01

    An off-design axial-flow compressor code, available from COSMIC, is presented for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed, so the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used to calculate this flow field outside the blade rows. The code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities that are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations model the real flow effects, and the off-design code will compute through small reverse-flow regions. The input to the code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. A comparison of the off-design code predictions with experimental data is also included and generally shows good agreement.

  2. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  3. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    PubMed

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploiting the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence better fits the H.264/AVC entropy coders. A fast implementation algorithm is also designed. Experimental results confirm that, with little increase in computation, the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.

  4. Development of Tripropellant CFD Design Code

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Cheng, Gary C.; Anderson, Peter G.

    1998-01-01

    A CFD design code for tripropellant systems, such as GO2/H2/RP-1, has been developed to predict the local mixing of multiple propellant streams as they are injected into a rocket motor. The code utilizes real fluid properties to account for the mixing and finite-rate combustion processes that occur near an injector faceplate; the analysis thus serves as a multiphase homogeneous spray combustion model. Proper accounting of the combustion allows accurate gas-side temperature predictions, which are essential for accurate wall heating analyses. The complex secondary flows that are predicted to occur near a faceplate cannot be quantitatively predicted by less accurate methodology. Test cases have been simulated to describe an axisymmetric tripropellant coaxial injector and a three-dimensional RP-1/LO2 impinging injector system. The analysis has been shown to realistically describe such injector combustion flowfields. The code is also valuable for designing meaningful future experiments by determining the critical location and type of measurements needed.

  5. The Design and Implementation of a Read Prediction Buffer

    DTIC Science & Technology

    1992-12-01

    [The abstract field of this record contains only OCR residue from the report documentation page and table of contents; the recoverable headings indicate a thesis covering a read prediction algorithm and buffer design.]

  6. Centrifugal and Axial Pump Design and Off-Design Performance Prediction

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1995-01-01

    A meanline pump-flow modeling method has been developed to provide a fast capability for modeling pumps of cryogenic rocket engines. Based on this method, a meanline pump-flow code PUMPA was written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The design-point rotor efficiency and slip factors are obtained from empirical correlations to rotor-specific speed and geometry. The pump code can model axial, inducer, mixed-flow, and centrifugal pumps and can model multistage pumps in series. The rapid input setup and computer run time for this meanline pump flow code make it an effective analysis and conceptual design tool. The map-generation capabilities of the code provide the information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of PUMPA permit the user to do parametric design space exploration of candidate pump configurations and to provide head-flow maps for engine system evaluation.
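    The meanline design-point calculation described above can be illustrated with a short sketch. The snippet below is not PUMPA; it combines the Euler pump equation with the widely used Wiesner slip-factor correlation to estimate head, and all input numbers in the test are illustrative assumptions, not rocket-pump data.

```python
import math

def wiesner_slip(z_blades: int, beta2_deg: float) -> float:
    """Wiesner slip-factor correlation: sigma = 1 - sqrt(cos(beta2)) / Z**0.7."""
    beta2 = math.radians(beta2_deg)
    return 1.0 - math.sqrt(math.cos(beta2)) / z_blades ** 0.7

def euler_head(u2, cm2, beta2_deg, z_blades, eta_h, g=9.81):
    """Meanline pump head: Euler work reduced by slip and hydraulic efficiency.

    u2   : impeller tip speed [m/s]
    cm2  : meridional velocity at exit [m/s]
    """
    sigma = wiesner_slip(z_blades, beta2_deg)
    # Exit tangential velocity for a backswept blade, reduced by slip
    cu2 = sigma * u2 - cm2 * math.tan(math.radians(beta2_deg))
    return eta_h * u2 * cu2 / g
```

An off-design map, as generated by the code in this record, would evaluate relations like this across a range of flow coefficients with empirical loss corrections applied.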

  7. Numerical predictions of EML (electromagnetic launcher) system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnurr, N.M.; Kerrisk, J.F.; Davidson, R.F.

    1987-01-01

    The performance of an electromagnetic launcher (EML) depends on a large number of parameters, including the characteristics of the power supply, rail geometry, rail and insulator material properties, injection velocity, and projectile mass. EML system performance is frequently limited by structural or thermal effects in the launcher (railgun). A series of computer codes has been developed at the Los Alamos National Laboratory to predict EML system performance and to determine the structural and thermal constraints on barrel design. These codes include FLD, a two-dimensional electrostatic code used to calculate the high-frequency inductance gradient and surface current density distribution for the rails; TOPAZRG, a two-dimensional finite-element code that simultaneously analyzes thermal and electromagnetic diffusion in the rails; and LARGE, a code that predicts the performance of the entire EML system. The NIKE2D code, developed at the Lawrence Livermore National Laboratory, is used to perform structural analyses of the rails. These codes have been instrumental in the design of the Lethality Test System (LTS) at Los Alamos, which has an ultimate goal of accelerating a 30-g projectile to a velocity of 15 km/s. The capabilities of the individual codes and the coupling of these codes to perform a comprehensive analysis are discussed in relation to the LTS design. Numerical predictions are compared with experimental data and presented for the LTS prototype tests.

  8. NASA Lewis Stirling engine computer code evaluation

    NASA Technical Reports Server (NTRS)

    Sullivan, Timothy J.

    1989-01-01

    In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions, and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Over all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.

  9. Numerical simulation of experiments in the Giant Planet Facility

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Davy, W. C.

    1979-01-01

    Utilizing a series of existing computer codes, ablation experiments in the Giant Planet Facility are numerically simulated. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.

  10. Predicting the Performance of an Axial-Flow Compressor

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1986-01-01

    A stage-stacking computer code (STGSTK) has been developed for predicting the off-design performance of multistage axial-flow compressors. The code uses a meanline stage-stacking method. Stage and cumulative compressor performance are calculated from representative meanline velocity diagrams located at the rotor inlet and outlet meanline radii. Numerous options are available within the code, which was developed so that users can modify the correlations to suit their needs.

  11. The power induced effects module: A FORTRAN code which estimates lift increments due to power induced effects for V/STOL flight

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Howard, Kipp E.

    1991-01-01

    A user friendly FORTRAN code that can be used for preliminary design of V/STOL aircraft is described. The program estimates lift increments, due to power induced effects, encountered by aircraft in V/STOL flight. These lift increments are calculated using empirical relations developed from wind tunnel tests and are due to suckdown, fountain, ground vortex, jet wake, and the reaction control system. The code can be used as a preliminary design tool along with NASA Ames' Aircraft Synthesis design code or as a stand-alone program for V/STOL aircraft designers. The Power Induced Effects (PIE) module was validated using experimental data and data computed from lift increment routines. Results are presented for many flat plate models along with the McDonnell Aircraft Company's MFVT (mixed flow vectored thrust) V/STOL preliminary design and a 15 percent scale model of the YAV-8B Harrier V/STOL aircraft. Trends and magnitudes of lift increments versus aircraft height above the ground were predicted well by the PIE module. The code also provided good predictions of the magnitudes of lift increments versus aircraft forward velocity. More experimental results are needed to determine how well the code predicts lift increments as they vary with jet deflection angle and angle of attack. The FORTRAN code is provided in the appendix.

  12. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  13. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
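    One of the update techniques mentioned above, the ensemble Kalman filter, can be sketched for a scalar state. This is a minimal perturbed-observation analysis step under simplifying assumptions (scalar state, known observation-error variance); function names and defaults are illustrative.

```python
import random
import statistics

def enkf_update(ensemble, y_obs, obs_var, h=lambda x: x, rng=random):
    """One ensemble Kalman filter analysis step for a scalar state.

    ensemble : list of prior state samples
    y_obs    : observed value
    obs_var  : observation-error variance
    h        : observation operator (identity by default)
    """
    hx = [h(x) for x in ensemble]
    x_mean = statistics.fmean(ensemble)
    hx_mean = statistics.fmean(hx)
    # Sample covariance between state and predicted observation
    p_xh = sum((x - x_mean) * (z - hx_mean)
               for x, z in zip(ensemble, hx)) / (len(ensemble) - 1)
    p_hh = statistics.variance(hx)
    gain = p_xh / (p_hh + obs_var)
    # Perturbed-observation update: each member assimilates a noisy copy of y
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - z)
            for x, z in zip(ensemble, hx)]
```

After the update, the ensemble mean moves toward the observation and the ensemble spread contracts, which is the sense in which the experimental information constrains the prior distribution.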

  14. Application of a Two-dimensional Unsteady Viscous Analysis Code to a Supersonic Throughflow Fan Stage

    NASA Technical Reports Server (NTRS)

    Steinke, Ronald J.

    1989-01-01

    The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; data postprocessing and graphics were then performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.

  15. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX(trademark) workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.

  16. TFaNS Tone Fan Noise Design/Prediction System. Volume 2; User's Manual; 1.4

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Eversman, Walter

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the fan noise coupling code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, a CFD/measured-wake postprocessor that reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides information on code input and file structure essential for potential users of TFaNS. This report is divided into three volumes: Volume 1, System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume 2, User's Manual, TFaNS Version 1.4; Volume 3, Evaluation of System Codes.

  17. TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the fan noise coupling code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, a CFD/measured-wake postprocessor that reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFaNS against full-scale and ADP 22-inch rig data using the semi-empirical wake modeling in the system. This report is divided into three volumes: Volume I, System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II, User's Manual, TFaNS Version 1.4; Volume III, Evaluation of System Codes.

  18. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree-search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to varying input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by using significant structure in the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction that attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.
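    The multistage vector quantization discussed above can be sketched in a few lines. The toy code below implements the basic sequential (greedy) stage-by-stage search, not the thesis's tree search or jointly designed codebooks; the codebooks in the usage example are made-up illustrations.

```python
def nearest(codebook, vec):
    """Index of the codeword minimizing squared error to vec."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], vec)))

def msvq_encode(vec, stage_codebooks):
    """Sequential multistage VQ: each stage quantizes the previous stage's residual."""
    residual = list(vec)
    indices = []
    for cb in stage_codebooks:
        i = nearest(cb, residual)
        indices.append(i)
        residual = [r - c for r, c in zip(residual, cb[i])]
    return indices, residual  # residual is the final quantization error

def msvq_decode(indices, stage_codebooks):
    """Reconstruction is the sum of the selected codewords across stages."""
    out = [0.0] * len(stage_codebooks[0][0])
    for i, cb in zip(indices, stage_codebooks):
        out = [o + c for o, c in zip(out, cb[i])]
    return out
```

A greedy search like this is cheap but can miss the jointly optimal index combination, which is what motivates the tree-search and joint-design procedures described in the abstract.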

  19. Statistical Analysis of CFD Solutions from 2nd Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Hemsch, M. J.; Morrison, J. H.

    2004-01-01

    In June 2001, the first AIAA Drag Prediction Workshop was held to evaluate results obtained from extensive N-version testing of a series of RANS CFD codes. The geometry used for the computations was the DLR-F4 wing-body combination, which resembles a medium-range subsonic transport. The cases reported include the design cruise point, drag polars at eight Mach numbers, and drag rise at three values of lift. Although comparisons of the code-to-code medians with available experimental data were similar to those obtained in previous studies, the code-to-code scatter was more than an order of magnitude larger than expected and far larger than desired for design and for experimental validation. The second Drag Prediction Workshop was held in June 2003 with emphasis on the determination of installed pylon-nacelle drag increments and on grid refinement studies. The geometry used was the DLR-F6 wing-body-pylon-nacelle combination, for which the design cruise point and the cases run were similar to the first workshop except for additional runs on coarse and fine grids to complement the runs on medium grids. The code-to-code scatter was significantly reduced for the wing-body configuration compared to the first workshop, although still much larger than desired. However, the grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement.
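    The code-to-code median and scatter statistics described in this record amount to simple order statistics over the per-code results. A minimal sketch, using hypothetical drag values (in counts) that are not workshop data:

```python
import statistics

def code_to_code_scatter(drag_counts):
    """Median and spread of drag predictions across N CFD codes (N-version testing).

    drag_counts: mapping of code name -> predicted drag in counts (1 count = 0.0001 CD)
    """
    values = sorted(drag_counts.values())
    median = statistics.median(values)
    spread = values[-1] - values[0]  # simplest scatter measure: total range
    return median, spread

# Hypothetical illustrative values, not data from either workshop
preds = {"codeA": 285.0, "codeB": 291.0, "codeC": 288.0, "codeD": 302.0}
med, spread = code_to_code_scatter(preds)
```

The workshops used more careful statistical-process-control style measures, but the core comparison is of this form: a central value per case and a spread that quantifies code-to-code disagreement.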

  20. Application of artificial neural networks to the design optimization of aerospace structural components

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  1. Optimum Design of Aerospace Structural Components Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Berke, L.; Patnaik, S. N.; Murthy, P. L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
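    The idea in this record and the preceding one, that a trained network reproduces optimizer output at trivial cost, can be sketched with a toy surrogate. The code below is not NETS; it trains a tiny one-hidden-layer tanh network by stochastic gradient descent on made-up input/output pairs standing in for optimum-design data.

```python
import math
import random

def train_surrogate(pairs, hidden=4, lr=0.1, epochs=2000, seed=0):
    """Fit a one-hidden-layer tanh network mapping a scalar design condition
    to a scalar optimum design variable (a stand-in for a NETS-style net)."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
            out = sum(w2[i] * h[i] for i in range(hidden)) + b2
            err = out - y
            # Backpropagate the squared-error gradient through both layers
            for i in range(hidden):
                dh = err * w2[i] * (1.0 - h[i] ** 2)
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * dh * x
                b1[i] -= lr * dh
            b2 -= lr * err
    def predict(x):
        return sum(w2[i] * math.tanh(w1[i] * x + b1[i])
                   for i in range(hidden)) + b2
    return predict
```

Once trained on optimizer-generated pairs, `predict` returns a new "optimum" design in microseconds, which is the computational advantage the abstract highlights; the caution about staying within the training error bounds applies equally to this sketch.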

  2. STGSTK: A computer code for predicting multistage axial flow compressor performance by a meanline stage stacking method

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1982-01-01

    A FORTRAN computer code is presented for off-design performance prediction of axial-flow compressors. Stage and compressor performance is obtained by a stage-stacking method that uses representative velocity diagrams at rotor inlet and outlet meanline radii. The code has options for: (1) direct user input or calculation of nondimensional stage characteristics; (2) adjustment of stage characteristics for off-design speed and blade setting angle; (3) adjustment of rotor deviation angle for off-design conditions; and (4) SI or U.S. customary units. Correlations from experimental data are used to model real flow conditions. Calculations are compared with experimental data.
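    The core of stage stacking, accumulating per-stage ratios into overall compressor performance, can be sketched compactly. This is not STGSTK: the stage pressure ratios and efficiencies below are illustrative inputs, whereas the real code derives them from velocity diagrams and empirical correlations.

```python
def stack_stages(pr_stage, eta_stage, gamma=1.4):
    """Overall pressure ratio, temperature ratio, and implied adiabatic
    efficiency from per-stage values (the core of a stage-stacking method)."""
    pr_total, t_ratio = 1.0, 1.0
    for pr, eta in zip(pr_stage, eta_stage):
        t_ideal = pr ** ((gamma - 1.0) / gamma)   # isentropic stage temperature ratio
        t_ratio *= 1.0 + (t_ideal - 1.0) / eta    # actual stage temperature ratio
        pr_total *= pr
    # Overall adiabatic efficiency implied by the stacked stages
    eta_overall = (pr_total ** ((gamma - 1.0) / gamma) - 1.0) / (t_ratio - 1.0)
    return pr_total, t_ratio, eta_overall
```

Note that the overall efficiency comes out lower than the individual stage efficiencies: rear stages do their work on gas already heated by upstream losses, which stage stacking captures automatically.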

  3. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.

  4. Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN

    NASA Technical Reports Server (NTRS)

    Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.

    1996-01-01

    A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code includes improved methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, a description of the code, a user's guide, and validation of HASC. Appendices include discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.

  5. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  6. The design of an adaptive predictive coder using a single-chip digital signal processor

    NASA Astrophysics Data System (ADS)

    Randolph, M. A.

    1985-01-01

    A speech coding processor architecture design study has been performed in which the Texas Instruments TMS32010 was selected from among three commercially available digital signal processing integrated circuits and evaluated in an implementation study of real-time Adaptive Predictive Coding (APC). The TMS32010 was compared with the AT&T Bell Laboratories DSP I and the Nippon Electric Co. uPD7720 and was found to be the most suitable for a single-chip implementation of APC. A preliminary system design based on the TMS32010 has been performed, and several of the hardware and software design issues are discussed. Particular attention was paid to the design of an external memory controller that permits rapid sequential access of external RAM. As a result, it has been determined that a compact hardware implementation of the APC algorithm is feasible based on the TMS32010. Originator-supplied keywords include: vocoders, speech compression, adaptive predictive coding, digital signal processing microcomputers, speech processor architectures, and special purpose processor.
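    The closed-loop predictive structure underlying APC can be illustrated with a toy coder. Unlike true APC, the predictor coefficient below is fixed rather than adapted frame by frame, and the step size and coefficient are arbitrary assumptions chosen only for demonstration.

```python
def apc_encode(samples, step=0.1, a=0.9):
    """Toy predictive coder: first-order predictor plus uniform residual
    quantizer, with the decoder's reconstruction tracked inside the loop."""
    x_hat = 0.0  # reconstructed previous sample (identical state in the decoder)
    codes = []
    for x in samples:
        pred = a * x_hat
        q = round((x - pred) / step)  # quantized residual index to transmit
        codes.append(q)
        x_hat = pred + q * step       # decoder-side reconstruction
    return codes

def apc_decode(codes, step=0.1, a=0.9):
    """Mirror of the encoder loop: predict, add the dequantized residual."""
    x_hat, out = 0.0, []
    for q in codes:
        x_hat = a * x_hat + q * step
        out.append(x_hat)
    return out
```

Because the encoder quantizes the residual against its own reconstruction rather than the raw signal, quantization error does not accumulate; each decoded sample stays within half a quantizer step of the input.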

  7. TFaNS Tone Fan Noise Design/Prediction System. Volume 1; System Description, CUP3D Technical Documentation and Manual for Code Developers

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the fan noise coupling code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, a CFD/measured-wake postprocessor that reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS, including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I, System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II, User's Manual, TFaNS Version 1.4; Volume III, Evaluation of System Codes.

  8. TFaNS-Tone Fan Noise Design/Prediction System: Users' Manual TFaNS Version 1.5

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Huff, Dennis L. (Technical Monitor)

    2003-01-01

TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Glenn. The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. The first version of this design system was developed under a previous NASA contract. Several improvements have been made to TFaNS. This users' manual shows how to run this new system. TFaNS consists of the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; the CUP3D Fan Noise Coupling Code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and the AWAKEN CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This report provides information on code input and file structure essential for potential users of TFaNS.

  9. Progress Toward Efficient Laminar Flow Analysis and Design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Campbell, Matthew L.; Streit, Thomas

    2011-01-01

    A multi-fidelity system of computer codes for the analysis and design of vehicles having extensive areas of laminar flow is under development at the NASA Langley Research Center. The overall approach consists of the loose coupling of a flow solver, a transition prediction method and a design module using shell scripts, along with interface modules to prepare the input for each method. This approach allows the user to select the flow solver and transition prediction module, as well as run mode for each code, based on the fidelity most compatible with the problem and available resources. The design module can be any method that designs to a specified target pressure distribution. In addition to the interface modules, two new components have been developed: 1) an efficient, empirical transition prediction module (MATTC) that provides n-factor growth distributions without requiring boundary layer information; and 2) an automated target pressure generation code (ATPG) that develops a target pressure distribution that meets a variety of flow and geometry constraints. The ATPG code also includes empirical estimates of several drag components to allow the optimization of the target pressure distribution. The current system has been developed for the design of subsonic and transonic airfoils and wings, but may be extendable to other speed ranges and components. Several analysis and design examples are included to demonstrate the current capabilities of the system.

  10. Context influences on TALE–DNA binding revealed by quantitative profiling

    PubMed Central

    Rogers, Julia M.; Barrera, Luis A.; Reyon, Deepak; Sander, Jeffry D.; Kellis, Manolis; Joung, J Keith; Bulyk, Martha L.

    2015-01-01

    Transcription activator-like effector (TALE) proteins recognize DNA using a seemingly simple DNA-binding code, which makes them attractive for use in genome engineering technologies that require precise targeting. Although this code is used successfully to design TALEs to target specific sequences, off-target binding has been observed and is difficult to predict. Here we explore TALE–DNA interactions comprehensively by quantitatively assaying the DNA-binding specificities of 21 representative TALEs to ∼5,000–20,000 unique DNA sequences per protein using custom-designed protein-binding microarrays (PBMs). We find that protein context features exert significant influences on binding. Thus, the canonical recognition code does not fully capture the complexity of TALE–DNA binding. We used the PBM data to develop a computational model, Specificity Inference For TAL-Effector Design (SIFTED), to predict the DNA-binding specificity of any TALE. We provide SIFTED as a publicly available web tool that predicts potential genomic off-target sites for improved TALE design. PMID:26067805

  11. Context influences on TALE-DNA binding revealed by quantitative profiling.

    PubMed

    Rogers, Julia M; Barrera, Luis A; Reyon, Deepak; Sander, Jeffry D; Kellis, Manolis; Joung, J Keith; Bulyk, Martha L

    2015-06-11

    Transcription activator-like effector (TALE) proteins recognize DNA using a seemingly simple DNA-binding code, which makes them attractive for use in genome engineering technologies that require precise targeting. Although this code is used successfully to design TALEs to target specific sequences, off-target binding has been observed and is difficult to predict. Here we explore TALE-DNA interactions comprehensively by quantitatively assaying the DNA-binding specificities of 21 representative TALEs to ∼5,000-20,000 unique DNA sequences per protein using custom-designed protein-binding microarrays (PBMs). We find that protein context features exert significant influences on binding. Thus, the canonical recognition code does not fully capture the complexity of TALE-DNA binding. We used the PBM data to develop a computational model, Specificity Inference For TAL-Effector Design (SIFTED), to predict the DNA-binding specificity of any TALE. We provide SIFTED as a publicly available web tool that predicts potential genomic off-target sites for improved TALE design.

  12. VLSI design of lossless frame recompression using multi-orientation prediction

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; You, Yi-Lun; Chen, Yi-Guo

    2016-01-01

The pursuit of high-end visual quality drives demand for higher display resolutions and higher frame rates. Hence, many powerful coding tools are aggregated in emerging video coding standards to improve coding efficiency. This also makes video coding standards suffer from two design challenges: heavy computation and tremendous memory bandwidth. The first issue can be properly solved by a careful hardware architecture design with advanced semiconductor processes. Nevertheless, the second one becomes a critical design bottleneck for a modern video coding system. In this article, a lossless frame recompression technique using multi-orientation prediction is proposed to overcome this bottleneck. This work is realised in a silicon chip with the technology of the TSMC 0.18 µm CMOS process. Its encoding capability can reach full-HD (1920 × 1080)@48 fps. The chip power consumption is 17.31 mW@100 MHz. Core area and chip area are 0.83 × 0.83 mm2 and 1.20 × 1.20 mm2, respectively. Experimental results demonstrate that this work exhibits an outstanding performance on lossless compression ratio with a competitive hardware performance.
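The multi-orientation prediction idea can be sketched as follows; the orientation set (left, top, top-left) and the per-row mode selection are illustrative assumptions, since the article's actual predictor design is not given in the abstract:

```python
# Sketch of multi-orientation prediction for lossless frame recompression:
# each pixel is predicted from an already-decoded neighbour in one of
# several orientations, the cheapest orientation is chosen per row (a
# simplification of per-block selection), and the residuals are what an
# entropy coder would then compress.

def neighbours(img, r, c):
    left = img[r][c - 1] if c > 0 else 0
    top = img[r - 1][c] if r > 0 else 0
    topleft = img[r - 1][c - 1] if r > 0 and c > 0 else 0
    return {"left": left, "top": top, "topleft": topleft}

def predict_frame(img):
    """Return (orientation chosen per row, residual frame)."""
    modes, residuals = [], []
    for r, row in enumerate(img):
        best_mode, best_cost, best_res = None, None, None
        for mode in ("left", "top", "topleft"):
            res = [row[c] - neighbours(img, r, c)[mode] for c in range(len(row))]
            cost = sum(abs(e) for e in res)   # proxy for entropy-coder cost
            if best_cost is None or cost < best_cost:
                best_mode, best_cost, best_res = mode, cost, res
        modes.append(best_mode)
        residuals.append(best_res)
    return modes, residuals

def reconstruct(modes, residuals):
    # Raster-order decode: causal neighbours are already reconstructed.
    img = [[0] * len(residuals[0]) for _ in residuals]
    for r, (mode, res) in enumerate(zip(modes, residuals)):
        for c, e in enumerate(res):
            img[r][c] = e + neighbours(img, r, c)[mode]
    return img

frame = [[10, 12, 14, 16], [11, 13, 15, 17], [12, 14, 16, 18]]
modes, residuals = predict_frame(frame)
```

Because only causal (already-decoded) neighbours are used, the decoder recovers the frame exactly, which is the lossless property the chip relies on.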

  13. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1992-01-01

Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  14. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1991-01-01

Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static to inlet total pressure ratios were calculated by using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  15. Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Hemsch, Michael J.

    2007-01-01

    The first AIAA Drag Prediction Workshop, held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.

  16. Enhanced capabilities and modified users manual for axial-flow compressor conceptual design code CSPAN

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.; Lavelle, Thomas M.

    1995-01-01

    Modifications made to the axial-flow compressor conceptual design code CSPAN are documented in this report. Endwall blockage and stall margin predictions were added. The loss-coefficient model was upgraded. Default correlations for rotor and stator solidity and aspect-ratio inputs and for stator-exit tangential velocity inputs were included in the code along with defaults for aerodynamic design limits. A complete description of input and output along with sample cases are included.

  17. Bit selection using field drilling data and mathematical investigation

    NASA Astrophysics Data System (ADS)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

A drilling process cannot be completed without a drill bit, so bit selection is an important task in the drilling optimization process. Selecting a bit is also an important issue in planning and designing a well, simply because the drill bit accounts for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed. The model is trained on drilling bit records from several offset wells. In this project, two models are developed using the ANN: one to find the predicted IADC bit code and one to find the predicted ROP. Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. Thus, at the end, there are two models that give the predicted ROP values and the predicted IADC bit code values.
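The three-stage flow described above can be sketched as follows, with a 1-nearest-neighbour regressor standing in for the back-propagation ANN and invented offset-well records, since neither the network architecture nor the field data are given in the abstract:

```python
# Three-stage bit-selection pipeline, with a 1-nearest-neighbour lookup
# standing in for the trained back-propagation network. All well records
# and feature choices below are invented for illustration.

def nn_predict(train_x, train_y, x):
    """1-NN stand-in for the trained ANN: return the target of the
    closest training row (squared Euclidean distance)."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_x]
    return train_y[dists.index(min(dists))]

# Offset-well records: [depth_km, weight_on_bit, rotary_speed]
wells_x = [[1.0, 10.0, 120.0], [2.0, 15.0, 90.0], [3.0, 20.0, 60.0]]
iadc = [117, 437, 637]    # IADC bit codes drilled in those wells
rop = [30.0, 18.0, 9.0]   # rates of penetration achieved

new_well = [2.1, 16.0, 85.0]
# Stage 1: predict the IADC bit code from the field data alone.
iadc_1 = nn_predict(wells_x, iadc, new_well)
# Stage 2: predict ROP with the stage-1 IADC code appended as a feature.
rop_2 = nn_predict([x + [c] for x, c in zip(wells_x, iadc)], rop,
                   new_well + [iadc_1])
# Stage 3: re-predict the IADC code with the stage-2 ROP in the data set.
iadc_3 = nn_predict([x + [r] for x, r in zip(wells_x, rop)], iadc,
                    new_well + [rop_2])
```

The point of the sketch is the data flow between the stages, not the regressor: each stage's prediction becomes an input feature for the next.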

  18. Prediction of effects of wing contour modifications on low-speed maximum lift and transonic performance for the EA-6B aircraft

    NASA Technical Reports Server (NTRS)

    Allison, Dennis O.; Waggoner, E. G.

    1990-01-01

Computational predictions of the effects of wing contour modifications on maximum lift and transonic performance were made and verified against low-speed and transonic wind tunnel data. This effort was part of a program to improve the maneuvering capability of the EA-6B electronic countermeasures aircraft, which evolved from the A-6 attack aircraft. The predictions were based on results from three computer codes which all include viscous effects: MCARF, a 2-D subsonic panel code; TAWFIVE, a transonic full potential code; and WBPPW, a transonic small disturbance potential flow code. The modifications were previously designed with the aid of these and other codes. The wing modifications consist of contour changes to the leading-edge slats and trailing-edge flaps and were designed for increased maximum lift with minimum effect on transonic performance. The predictions of the effects of the modifications are presented, with emphasis on verification through comparisons with wind tunnel data from the National Transonic Facility. Attention is focused on increments in low-speed maximum lift and increments in transonic lift, pitching moment, and drag resulting from the contour modifications.

  19. Statistical Analysis of the AIAA Drag Prediction Workshop CFD Solutions

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Hemsch, Michael J.

    2007-01-01

    The first AIAA Drag Prediction Workshop (DPW), held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third AIAA Drag Prediction Workshop, held in June 2006, focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This report compares the transonic cruise prediction results of the second and third workshops using statistical analysis.
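A robust code-to-code scatter estimate of the kind used in these workshop statistical analyses (a consensus median plus a scaled median absolute deviation, so outlier codes do not inflate the scatter) can be sketched directly; the drag values, in counts, are invented for illustration:

```python
import statistics

# Robust code-to-code scatter for an N-version CFD test: the median is
# the consensus value, and 1.4826 * MAD estimates a standard deviation
# that is not dragged around by a single outlier code.

def robust_scatter(values):
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med, 1.4826 * mad   # scaled MAD ~ sigma for normal data

# Invented drag coefficients (in counts) from seven codes; one outlier.
drag_counts = [271.0, 268.5, 270.2, 269.8, 283.0, 270.5, 269.1]
median, sigma = robust_scatter(drag_counts)
```

The plain sample standard deviation of the same data is several counts because of the outlier; the robust estimate stays near one count, which is why median/MAD statistics are preferred for summarizing workshop submissions.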

  20. Analysis of view synthesis prediction architectures in modern coding standards

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang

    2013-09-01

Depth-based 3D formats are currently being developed as extensions to both AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is hence concluded that block-based VSP for multiview video signals provides attractive coding gains with complexity comparable to traditional motion/disparity compensation.

  1. Interactive-graphic flowpath plotting for turbine engines

    NASA Technical Reports Server (NTRS)

    Corban, R. R.

    1981-01-01

An engine cycle program capable of simulating the design and off-design performance of arbitrary turbine engines, and a computer code which, when used in conjunction with the cycle code, can predict the weight of the engines are described. A graphics subroutine was added to the code to enable the engineer to visualize the designed engine with more clarity by producing an overall view of the designed engine for output on a graphics device using IBM-370 graphics subroutines. In addition, with the engine drawn on a graphics screen, the program allows the user to interactively change the code inputs so that the engine can be redrawn and reweighed. These improvements allow better use of the code in conjunction with the engine program.

  2. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  3. Aeroacoustic Codes For Rotor Harmonic and BVI Noise--CAMRAD.Mod1/HIRES

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Boyd, D. Douglas, Jr.; Burley, Casey L.; Jolly, J. Ralph, Jr.

    1996-01-01

This paper presents a status of non-CFD aeroacoustic codes at NASA Langley Research Center for the prediction of helicopter harmonic and Blade-Vortex Interaction (BVI) noise. The prediction approach incorporates three primary components: CAMRAD.Mod1, a substantially modified version of the performance/trim/wake code CAMRAD; HIRES, a high-resolution blade loads post-processor; and WOPWOP, an acoustic code. The functional capabilities and physical modeling in CAMRAD.Mod1/HIRES are summarized and illustrated. A new multi-core roll-up wake modeling approach is introduced and validated. Predictions of rotor wake and radiated noise are compared with the results of the HART program, a model BO-105 wind tunnel test at the DNW in Europe. Additional comparisons are made to results from a DNW test of a contemporary-design four-bladed rotor, as well as from a Langley test of a single proprotor (tiltrotor) three-bladed model configuration. Because the method is shown to help eliminate the necessity of guesswork in setting code parameters between different rotor configurations, it should prove useful as a rotor noise design tool.

  4. Modeling the UO2 ex-AUC pellet process and predicting the fuel rod temperature distribution under steady-state operating condition

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2018-06-01

Modeling of the uranium dioxide pellet process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and prediction of the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000, designed by Westinghouse Electric Corporation, together with the pellet fabrication parameters from this study, provided the input data for the code. The predicted data suggest a relationship between the fabrication parameters of UO2 pellets and their temperature distribution in the nuclear reactor.

  5. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.

    2016-01-01

This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface adsorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
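The grand-canonical move at the heart of such a code can be sketched on a toy one-dimensional lattice gas; the lattice model, the parameter values, and the single-site insertion/removal move are illustrative assumptions, not the ParaGrandMC algorithm itself:

```python
import math
import random

# Toy grand-canonical Monte Carlo on a 1-D lattice gas: each site is
# empty or occupied, occupied neighbours gain bonding energy -EPS, and
# particles are exchanged with a reservoir at chemical potential MU.
# A site toggle (insertion dN=+1 or removal dN=-1) is accepted with the
# Metropolis probability min(1, exp(-beta*(dE - MU*dN))). A continuum
# GCMC would also carry volume and de Broglie wavelength factors.

random.seed(3)
L = 40            # number of lattice sites (periodic)
EPS = 0.2         # bond energy between occupied neighbours
MU = 0.5          # reservoir chemical potential
BETA = 2.0        # 1 / (kT)

occ = [0] * L

def site_energy_change(i):
    """Interaction energy change if site i toggles occupancy."""
    nbrs = occ[(i - 1) % L] + occ[(i + 1) % L]
    # toggling 0->1 adds -EPS per occupied neighbour; 1->0 removes it
    return -EPS * nbrs if occ[i] == 0 else EPS * nbrs

for _ in range(20000):
    i = random.randrange(L)
    dE = site_energy_change(i)
    dN = 1 if occ[i] == 0 else -1
    if random.random() < min(1.0, math.exp(-BETA * (dE - MU * dN))):
        occ[i] ^= 1

coverage = sum(occ) / L   # equilibrium site occupancy fraction
```

With a positive chemical potential and attractive bonds, the equilibrium coverage is high; sweeping MU downward would trace out the adsorption/occupation isotherm, which is the kind of thermodynamic-state prediction such codes automate.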

  6. Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.

    2007-01-01

Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on- and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.

  7. Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.

    2008-01-01

    Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
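The single-subsystem statistical energy analysis model mentioned above reduces to a steady-state power balance, P_in = omega * eta * E with subsystem energy E = M * <v^2>, which can be sketched directly; the plate mass, loss factor, and shaker power below are invented for illustration:

```python
import math

# Single-subsystem statistical energy analysis: at steady state the
# injected power is dissipated, P_in = omega * eta * E, and the
# subsystem energy is E = M * <v^2>. Solving for the band-averaged
# mean-square velocity of the plate:

def sea_mean_square_velocity(p_in, freq_hz, eta, mass_kg):
    omega = 2.0 * math.pi * freq_hz   # band centre frequency, rad/s
    energy = p_in / (omega * eta)     # steady-state subsystem energy
    return energy / mass_kg           # <v^2>, (m/s)^2

# Invented example: 1 W of shaker power into a 5 kg plate,
# 1 kHz band, damping loss factor 0.01.
v2 = sea_mean_square_velocity(1.0, 1000.0, 0.01, 5.0)
v_rms = math.sqrt(v2)
```

The inverse dependence on the loss factor is why the power-input and damping models discussed in the paper dominate the accuracy of SEA mean-square velocity predictions.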

  8. Euler Technology Assessment program for preliminary aircraft design employing SPLITFLOW code with Cartesian unstructured grid method

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.

    1995-01-01

    This report documents results from the Euler Technology Assessment program. The objective was to evaluate the efficacy of Euler computational fluid dynamics (CFD) codes for use in preliminary aircraft design. Both the accuracy of the predictions and the rapidity of calculations were to be assessed. This portion of the study was conducted by Lockheed Fort Worth Company, using a recently developed in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages for this study, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaptation of the volume grid during the solution convergence to resolve high-gradient flow regions. This proved beneficial in resolving the large vortical structures in the flow for several configurations examined in the present study. The SPLITFLOW code predictions of the configuration forces and moments are shown to be adequate for preliminary design analysis, including predictions of sideslip effects and the effects of geometry variations at low and high angles of attack. The time required to generate the results from initial surface definition is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  9. The Prediction of Performance in Navy Signalman Class "A" School. TAEG Report No. 90.

    ERIC Educational Resources Information Center

    Mew, Dorothy V.

    A study designed to develop a selection model for the prediction of Signalman performance in sending and receiving Morse code and to evaluate training strategies was conducted with 180 Navy and Coast Guard enlisted men. Trainees were taught to send Morse code using innovative training materials (mnemonics and guided practice). High and average…

  10. Tone Noise Predictions for a Spacecraft Cabin Ventilation Fan Ingesting Distorted Inflow and the Challenges of Validation

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Shook, Tony D.; Astler, Douglas T.; Bittinger, Samantha A.

    2011-01-01

A fan tone noise prediction code has been developed at NASA Glenn Research Center that is capable of estimating duct mode sound power levels for a fan ingesting distorted inflow. This code was used to predict the circumferential and radial mode sound power levels in the inlet and exhaust duct of an axial spacecraft cabin ventilation fan. Noise predictions at fan design rotational speed were generated. Three fan inflow conditions were studied: an undistorted inflow, a circumferentially symmetric inflow distortion pattern (cylindrical rods inserted radially into the flowpath at 15°, 135°, and 255°), and a circumferentially asymmetric inflow distortion pattern (rods located at 15°, 52°, and 173°). Noise predictions indicate that tones are produced for the distorted inflow cases that are not present when the fan operates with an undistorted inflow. Experimental data are needed to validate these acoustic predictions, as well as the aerodynamic performance predictions. Given the aerodynamic design of the spacecraft cabin ventilation fan, a mechanical and electrical conceptual design study was conducted. Design features of a fan suitable for obtaining detailed acoustic and aerodynamic measurements needed to validate predictions are discussed.
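The appearance of extra tones with rods installed is consistent with the classical Tyler-Sofrin interaction rule, m = nB - kV, which is easy to tabulate; the rotor blade count below is an assumed value for illustration, since the abstract does not give the fan geometry:

```python
# Tyler-Sofrin interaction-mode rule: a rotor with B blades operating
# near V identical obstructions generates circumferential acoustic modes
# m = n*B - k*V at the n-th blade-passing-frequency harmonic. The blade
# count here is assumed, not the actual cabin ventilation fan geometry.

def circumferential_modes(n_blades, n_obstructions, harmonic,
                          k_range=range(-3, 4)):
    return sorted(harmonic * n_blades - k * n_obstructions
                  for k in k_range)

B = 9          # assumed rotor blade count (illustrative)
V_sym = 3      # three rods at 15, 135, 255 deg (symmetric pattern)
modes_bpf = circumferential_modes(B, V_sym, harmonic=1)
```

For the asymmetric rod pattern the obstructions are not equally spaced, so the inflow distortion contains all circumferential harmonics and the interaction modes are no longer restricted to multiples of the rod count, which is one way extra tones appear relative to the clean-inflow case.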

  11. Tone Noise Predictions for a Spacecraft Cabin Ventilation Fan Ingesting Distorted Inflow and the Challenges of Validation

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Shook, Tony D.; Astler, Douglas T.; Bittinger, Samantha A.

    2012-01-01

A fan tone noise prediction code has been developed at NASA Glenn Research Center that is capable of estimating duct mode sound power levels for a fan ingesting distorted inflow. This code was used to predict the circumferential and radial mode sound power levels in the inlet and exhaust duct of an axial spacecraft cabin ventilation fan. Noise predictions at fan design rotational speed were generated. Three fan inflow conditions were studied: an undistorted inflow, a circumferentially symmetric inflow distortion pattern (cylindrical rods inserted radially into the flowpath at 15°, 135°, and 255°), and a circumferentially asymmetric inflow distortion pattern (rods located at 15°, 52°, and 173°). Noise predictions indicate that tones are produced for the distorted inflow cases that are not present when the fan operates with an undistorted inflow. Experimental data are needed to validate these acoustic predictions, as well as the aerodynamic performance predictions. Given the aerodynamic design of the spacecraft cabin ventilation fan, a mechanical and electrical conceptual design study was conducted. Design features of a fan suitable for obtaining detailed acoustic and aerodynamic measurements needed to validate predictions are discussed.

  12. Computational Predictions of the Performance of Wright 'Bent End' Propellers

    NASA Technical Reports Server (NTRS)

    Wang, Xiang-Yu; Ash, Robert L.; Bobbitt, Percy J.; Prior, Edwin (Technical Monitor)

    2002-01-01

Computational analyses of two 1911 Wright brothers 'Bent End' wooden propeller reproductions have been performed and compared with experimental test results from the Langley Full Scale Wind Tunnel. The purpose of the analysis was to check the consistency of the experimental results and to validate the reliability of the tests. This report is one part of a project on the propeller performance of the Wright 'Bent End' propellers, intended to document the Wright brothers' pioneering propeller design contributions. Two computer codes were used in the computational predictions. The FLO-MG Navier-Stokes code is a CFD (computational fluid dynamics) code based on the Navier-Stokes equations. It is mainly used to compute the lift coefficient and the drag coefficient at specified angles of attack at different radii. Those calculated data are the intermediate results of the computation and a part of the necessary input for the Propeller Design Analysis Code (based on the Adkins and Liebeck method), which is a propeller design code used to compute the propeller thrust coefficient, the propeller power coefficient, and the propeller propulsive efficiency.

  13. Characterizing the Properties of a Woven SiC/SiC Composite Using W-CEMCAN Computer Code

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.

    1999-01-01

A micromechanics-based computer code to predict the thermal and mechanical properties of woven ceramic matrix composites (CMC) is developed. This computer code, W-CEMCAN (Woven CEramic Matrix Composites ANalyzer), predicts the properties of two-dimensional woven CMC at any temperature and takes into account various constituent geometries and volume fractions. This computer code is used to predict the thermal and mechanical properties of an advanced CMC composed of 0/90 five-harness (5 HS) Sylramic fiber which had been chemically vapor infiltrated (CVI) with boron nitride (BN) and SiC interphase coatings and melt-infiltrated (MI) with SiC. The predictions, based on the bulk constituent properties from the literature, are compared with measured experimental data. Based on the comparison, improved or calibrated properties for the constituent materials are then developed for use by material developers/designers. The computer code is then used to predict the properties of a composite with the same constituents but with different fiber volume fractions. The predictions are compared with measured data and a good agreement is achieved.

  14. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code, PUMPA, has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The code can model axial-flow inducers, mixed-flow and centrifugal pumps, and multistage pumps in series. It features rapid input setup and short computer run times, making it an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities permit parametric exploration of the design space of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code, and an expander-cycle rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
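    A mean line analysis of the kind described reduces each stage to velocity triangles at a representative radius. As a sketch of the idea (not the PUMPA code's actual formulation; geometry and speeds below are illustrative), the ideal Euler head follows from the impeller exit triangle:

```python
import math

def euler_head(omega, r2, Q, A2, beta2_deg, g=9.81):
    """Ideal (Euler) head for a mean-line centrifugal pump analysis.

    omega [rad/s] shaft speed, r2 [m] impeller exit radius,
    Q [m^3/s] flow rate, A2 [m^2] exit flow area,
    beta2_deg: blade angle at exit, measured from tangential.
    """
    U2 = omega * r2   # blade speed at exit
    Cm2 = Q / A2      # meridional velocity at exit
    # tangential component of absolute velocity from the exit triangle
    Cu2 = U2 - Cm2 / math.tan(math.radians(beta2_deg))
    return U2 * Cu2 / g  # Euler head [m], losses not yet subtracted
```

Real off-design prediction then subtracts empirically correlated losses from this ideal head, which is where a diffusion-system loss model like the one mentioned above enters.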

  15. An Integrated Magnetic Circuit Model and Finite Element Model Approach to Magnetic Bearing Design

    NASA Technical Reports Server (NTRS)

    Provenza, Andrew J.; Kenny, Andrew; Palazzolo, Alan B.

    2003-01-01

    A code for designing magnetic bearings is described. The code generates curves from magnetic circuit equations relating important bearing performance parameters. Bearing parameters selected from the curves by a designer to meet the requirements of a particular application are input directly by the code into a three-dimensional finite element analysis preprocessor. This means that a three-dimensional computer model of the bearing being developed is immediately available for viewing. The finite element model solution can be used to show areas of magnetic saturation and make more accurate predictions of the bearing load capacity, current stiffness, position stiffness, and inductance than the magnetic circuit equations did at the start of the design process. In summary, the code combines one-dimensional and three-dimensional modeling methods for designing magnetic bearings.
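    The magnetic circuit equations referred to above are one-dimensional relations of the kind sketched below (a simplification that neglects iron reluctance, leakage, and fringing, which is exactly what the finite element step corrects; the numbers in the test are illustrative):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def gap_flux_density(turns, current, gap):
    """Air-gap flux density for an N-turn coil driving flux across two
    gaps in series, neglecting iron reluctance, leakage, and fringing."""
    return MU0 * turns * current / (2.0 * gap)

def gap_force(B, pole_area, faces=2):
    """Maxwell-stress attraction: F = faces * B^2 * A / (2 * mu0)."""
    return faces * B ** 2 * pole_area / (2.0 * MU0)
```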

  16. COMSAC: Computational Methods for Stability and Control. Part 2

    NASA Technical Reports Server (NTRS)

    Fremaux, C. Michael (Compiler); Hall, Robert M. (Compiler)

    2004-01-01

    The unprecedented advances being made in computational fluid dynamic (CFD) technology have demonstrated the powerful capabilities of codes in applications to civil and military aircraft. Used in conjunction with wind-tunnel and flight investigations, many codes are now routinely used by designers in diverse applications such as aerodynamic performance predictions and propulsion integration. Typically, these codes are most reliable for attached, steady, and predominantly turbulent flows. As a result of increasing reliability and confidence in CFD, wind-tunnel testing for some new configurations has been substantially reduced in key areas, such as wing trade studies for mission performance guarantees. Interest is now growing in the application of computational methods to other critical design challenges. One of the most important disciplinary elements for civil and military aircraft is prediction of stability and control characteristics. CFD offers the potential for significantly increasing the basic understanding, prediction, and control of flow phenomena associated with requirements for satisfactory aircraft handling characteristics.

  17. Radiation from advanced solid rocket motor plumes

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Smith, Sheldon D.; Myruski, Brian L.

    1994-01-01

    The overall objective of this study was to develop an understanding of solid rocket motor (SRM) plumes in sufficient detail to accurately explain the majority of plume radiation test data. Improved flowfield and radiation analysis codes were developed to accurately and efficiently account for all the factors which affect radiation heating from rocket plumes. These codes were verified by comparing predicted plume behavior with measured NASA/MSFC ASRM test data. Upon conducting a thorough review of the current state-of-the-art of SRM plume flowfield and radiation prediction methodology and the pertinent database, the following analyses were developed for future design use. The NOZZRAD code was developed for preliminary base heating design and Al2O3 particle optical property data evaluation using a generalized two-flux solution to the radiative transfer equation. The IDARAD code was developed for rapid evaluation of plume radiation effects using the spherical harmonics method of differential approximation to the radiative transfer equation. The FDNS CFD code with fully coupled Euler-Lagrange particle tracking was validated by comparison to predictions made with the industry-standard RAMP code for SRM nozzle flowfield analysis. The FDNS code provides the ability to analyze not only rocket nozzle flow, but also axisymmetric and three-dimensional plume flowfields with state-of-the-art CFD methodology. Procedures for conducting meaningful thermo-vision camera studies were developed.

  18. 77 FR 37091 - Agency Information Collection Activities: Request for Comments for a New Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-20

    ... analysis and design, and computer software design and coding. Given the fact that over $500 million were... acoustic algorithms, computer architecture, and source code that dated to the 1970s. Since that time... 2012. Version 3.0 is an entirely new, state-of-the-art computer program used for predicting noise...

  19. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  20. Horizontal axis wind turbine post stall airfoil characteristics synthesization

    NASA Technical Reports Server (NTRS)

    Tangler, James L.; Ostowari, Cyrus

    1995-01-01

    Blade-element/momentum performance prediction codes are routinely used for wind turbine design and analysis. A weakness of these codes is their inability to consistently predict peak power, upon which the machine's structural design and cost strongly depend. The purpose of this study was to compare post-stall airfoil characteristics synthesization theory to a systematically acquired wind tunnel data set in which the effects of aspect ratio, airfoil thickness, and Reynolds number were investigated. The comparison identified discrepancies between current theory and the wind tunnel data which could not be resolved. Other factors not previously investigated may account for these discrepancies and have a significant effect on peak power prediction.
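    A common form of post-stall synthesization is the Viterna-style flat-plate extrapolation, which blends the stall-point data into flat-plate behavior at high angles of attack. A sketch of that approach (stall-point inputs below are illustrative, and this is not necessarily the exact formulation evaluated in the study):

```python
import math

def viterna_post_stall(alpha_deg, alpha_s_deg, cl_s, cd_s, aspect_ratio):
    """Viterna-style extrapolation of lift/drag beyond stall.

    alpha_deg: post-stall angle of attack; alpha_s_deg, cl_s, cd_s:
    angle, lift and drag coefficients at the stall point;
    aspect_ratio: blade aspect ratio.
    """
    a = math.radians(alpha_deg)
    a_s = math.radians(alpha_s_deg)
    cd_max = 1.11 + 0.018 * aspect_ratio  # finite-wing maximum drag coeff.
    # constants chosen so cl, cd are continuous at the stall point
    A2 = ((cl_s - cd_max * math.sin(a_s) * math.cos(a_s))
          * math.sin(a_s) / math.cos(a_s) ** 2)
    B2 = (cd_s - cd_max * math.sin(a_s) ** 2) / math.cos(a_s)
    cl = 0.5 * cd_max * math.sin(2.0 * a) + A2 * math.cos(a) ** 2 / math.sin(a)
    cd = cd_max * math.sin(a) ** 2 + B2 * math.cos(a)
    return cl, cd
```

At 90 degrees the model recovers flat-plate behavior: zero lift and drag equal to cd_max.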

  1. A Supersonic Argon/Air Coaxial Jet Experiment for Computational Fluid Dynamics Code Validation

    NASA Technical Reports Server (NTRS)

    Clifton, Chandler W.; Cutler, Andrew D.

    2007-01-01

    A non-reacting experiment is described in which data has been acquired for the validation of CFD codes used to design high-speed air-breathing engines. A coaxial jet-nozzle has been designed to produce pressure-matched exit flows of Mach 1.8 at 1 atm in both a center jet of argon and a coflow jet of air, creating a supersonic, incompressible mixing layer. The flowfield was surveyed using total temperature, gas composition, and Pitot probes. The data set was compared to CFD code predictions made using Vulcan, a structured grid Navier-Stokes code, as well as to data from a previous experiment in which a He-O2 mixture was used instead of argon in the center jet of the same coaxial jet assembly. Comparison of experimental data from the argon flowfield and its computational prediction shows that the CFD produces an accurate solution for most of the measured flowfield. However, the CFD prediction deviates from the experimental data in the region downstream of x/D = 4, underpredicting the mixing-layer growth rate.
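    Pressure-matching two streams at the same exit Mach number requires different plenum pressures when the ratios of specific heats differ, as the standard isentropic relation shows (a sketch; gamma = 5/3 for argon and 1.4 for air, with the exit static pressure fixed at 1 atm):

```python
def p0_over_p(mach, gamma):
    """Isentropic stagnation-to-static pressure ratio p0/p."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (gamma / (gamma - 1.0))

# At Mach 1.8 the argon plenum must run at a higher pressure than the
# air plenum to hold the same 1 atm exit static pressure.
ratio_air = p0_over_p(1.8, 1.4)          # about 5.75
ratio_argon = p0_over_p(1.8, 5.0 / 3.0)  # about 6.24
```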

  2. Comparison of Space Shuttle Hot Gas Manifold analysis to air flow data

    NASA Technical Reports Server (NTRS)

    Mcconnaughey, P. K.

    1988-01-01

    This paper summarizes several recent analyses of the Space Shuttle Main Engine Hot Gas Manifold and compares predicted flow environments to air flow data. Codes used in these analyses include INS3D, PAGE, PHOENICS, and VAST. Both laminar (Re = 250, M = 0.30) and turbulent (Re = 1.9 million, M = 0.30) results are discussed, with the latter being compared to data for system losses, outer wall static pressures, and manifold exit Mach number profiles. Comparison of predicted results for the turbulent case to air flow data shows that the analysis using INS3D predicted system losses within 1 percent error, while the PHOENICS, PAGE, and VAST codes erred by 31, 35, and 47 percent, respectively. The INS3D, PHOENICS, and PAGE codes did a reasonable job of predicting outer wall static pressure, while the PHOENICS code predicted exit Mach number profiles with acceptable accuracy. INS3D was approximately an order of magnitude more efficient than the other codes in terms of code speed and memory requirements. In general, it is seen that complex internal flows in manifold-like geometries can be predicted with a limited degree of confidence, and further development is necessary to improve both efficiency and accuracy of codes if they are to be used as design tools for complex three-dimensional geometries.

  3. TRAC-PF1 code verification with data from the OTIS test facility [Once-Through Integral System]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childerson, M.T.; Fujita, R.K.

    1985-01-01

    A computer code (TRAC-PF1/MOD1) developed for predicting the transient thermal and hydraulic response of an integral nuclear steam supply system (NSSS) was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small-break LOCA data set for TRAC verification. The major phases of a small-break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool and auxiliary-feedwater-initiated boiler-condenser mode heat transfer.

  4. Analytical modeling of intumescent coating thermal protection system in a JP-5 fuel fire environment

    NASA Technical Reports Server (NTRS)

    Clark, K. J.; Shimizu, A. B.; Suchsland, K. E.; Moyer, C. B.

    1974-01-01

    The thermochemical response of Coating 313 when exposed to a fuel fire environment was studied to provide a tool for predicting the reaction time. The existing Aerotherm Charring Material Thermal Response and Ablation (CMA) computer program was modified to treat swelling materials. The modified code is now designated Aerotherm Transient Response of Intumescing Materials (TRIM) code. In addition, thermophysical property data for Coating 313 were analyzed and reduced for use in the TRIM code. An input data sensitivity study was performed, and performance tests of Coating 313/steel substrate models were carried out. The end product is a reliable computational model, the TRIM code, which was thoroughly validated for Coating 313. The tasks reported include: generation of input data, development of swell model and implementation in TRIM code, sensitivity study, acquisition of experimental data, comparisons of predictions with data, and predictions with intermediate insulation.

  5. Maneuvering Rotorcraft Noise Prediction: A New Code for a New Problem

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Bres, Guillaume A.; Perez, Guillaume; Jones, Henry E.

    2002-01-01

    This paper presents the unique aspects of the development of an entirely new maneuver noise prediction code called PSU-WOPWOP. The main focus of the code is the aeroacoustic aspects of the maneuver noise problem, when the aeromechanical input data are provided (namely aircraft and blade motion, blade airloads). The PSU-WOPWOP noise prediction capability was developed for rotors in steady and transient maneuvering flight. Featuring an object-oriented design, the code allows great flexibility for complex rotor configuration and motion (including multiple rotors and full aircraft motion). The relative locations and number of hinges, flexures, and body motions can be arbitrarily specified to match the any specific rotorcraft. An analysis of algorithm efficiency is performed for maneuver noise prediction along with a description of the tradeoffs made specifically for the maneuvering noise problem. Noise predictions for the main rotor of a rotorcraft in steady descent, transient (arrested) descent, hover and a mild "pop-up" maneuver are demonstrated.

  6. Performance of a Light-Weight Ablative Thermal Protection Material for the Stardust Mission Sample Return Capsule

    NASA Technical Reports Server (NTRS)

    Covington, M. A.

    2005-01-01

    New tests and analyses are reported that were carried out to resolve testing uncertainties in the original development and qualification of a lightweight ablative material used for the Stardust spacecraft forebody heat shield. These additional arcjet tests and analyses confirmed the ablative and thermal performance of the low-density Phenolic Impregnated Carbon Ablator (PICA) material used for the Stardust design. Testing was done under conditions that simulate the peak convective heating conditions (1200 W/cm2 and 0.5 atm) expected during Earth entry of the Stardust Sample Return Capsule. Test data and predictions from an ablative material response computer code for the in-depth temperatures were compared to guide iterative adjustment of the material thermophysical properties used in the code so that the measured and predicted temperatures agreed. The PICA recession rates and maximum internal temperatures were satisfactorily predicted by the computer code with the revised properties. Predicted recession rates were also in acceptable agreement with measured rates for heating conditions 37% greater than the nominal peak heating rate of 1200 W/cm2. The measured in-depth temperature response data show consistent temperature rise deviations that may be caused by an undocumented endothermic process within the PICA material that is not accurately modeled by the computer code. Predictions of the Stardust heat shield performance based on the present evaluation provide evidence that the maximum adhesive bondline temperature will be much lower than both the maximum allowable of 250 C and an earlier design prediction. The re-evaluation also suggests that even with a 25 percent increase in peak heating rates, the total recession of the heat shield would be a small fraction of the as-designed thickness. These results give confidence in the Stardust heat shield design and confirm the potential of PICA for use in new planetary probe and sample return applications.

  7. Incorporation of Dynamic SSI Effects in the Design Response Spectra

    NASA Astrophysics Data System (ADS)

    Manjula, N. K.; Pillai, T. M. Madhavan; Nagarajan, Praveen; Reshma, K. K.

    2018-05-01

    Many studies in the past on dynamic soil-structure interaction have revealed both the detrimental and the advantageous effects of soil flexibility. Based on such studies, the design response spectra of international seismic codes are being improved worldwide. The improvements required for the short-period range of the design response spectra in the Indian seismic code (IS 1893:2002) are presented in this paper. As the recent code revision has not incorporated the short-period amplifications, the proposals given in this paper are equally applicable to the latest code (IS 1893:2016). Analyses of single-degree-of-freedom systems are performed to predict the required improvements. The proposed modifications to the constant-acceleration portion of the spectra are evaluated against the current design spectra in Eurocode 8.
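    The single-degree-of-freedom analyses underlying a design response spectrum can be sketched with a time-stepping oscillator solver: at each period, the spectral ordinate is the peak pseudo-acceleration over the ground motion. A generic Newmark sketch (not the study's actual procedure; the step input in the test is only a check case):

```python
import math

def peak_pseudo_acceleration(ag, dt, period, damping=0.05):
    """Peak pseudo-acceleration Sa of a damped SDOF oscillator (unit mass)
    driven by a ground-acceleration history ag [m/s^2], using the Newmark
    average-acceleration method (gamma = 1/2, beta = 1/4)."""
    wn = 2.0 * math.pi / period
    k = wn * wn              # stiffness per unit mass
    c = 2.0 * damping * wn   # damping per unit mass
    u = v = 0.0
    a = -ag[0] - c * v - k * u  # initial relative acceleration
    keff = k + 2.0 * c / dt + 4.0 / dt ** 2
    umax = 0.0
    for agi in ag[1:]:
        peff = (-agi + (4.0 / dt ** 2) * u + (4.0 / dt) * v + a
                + c * ((2.0 / dt) * u + v))
        un = peff / keff
        vn = (2.0 / dt) * (un - u) - v
        an = (4.0 / dt ** 2) * (un - u) - (4.0 / dt) * v - a
        u, v, a = un, vn, an
        umax = max(umax, abs(u))
    return wn * wn * umax  # pseudo-spectral acceleration
```

For a unit step in ground acceleration, the classical peak factor for 5 percent damping is about 1.85, which the solver reproduces.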

  8. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance video presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards, which were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., the relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency by using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that BMAP can achieve at least twice the compression ratio of AVC (MPEG-4 Advanced Video Coding) high profile on surveillance videos, with only a slight increase in encoding complexity. Moreover, for foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
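    The paper's background modeling is more sophisticated, but the flavor of background-difference prediction and three-way block classification can be sketched as follows (the exponential background update and the thresholds are illustrative stand-ins, not the BMAP algorithm itself):

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model (an illustrative
    stand-in for a trained background model)."""
    return [[(1.0 - alpha) * b + alpha * f for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

def background_difference(frame, bg):
    """BDP-flavored residual: predict the block in the
    background-difference domain."""
    return [[f - b for f, b in zip(fr, br)] for fr, br in zip(frame, bg)]

def classify_block(diff_block, t_low=2.0, t_high=20.0):
    """Three-way split into background / hybrid / foreground blocks by
    mean absolute background difference (thresholds illustrative)."""
    n = sum(len(row) for row in diff_block)
    mean_abs = sum(abs(x) for row in diff_block for x in row) / n
    if mean_abs < t_low:
        return "background"
    return "hybrid" if mean_abs < t_high else "foreground"
```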

  9. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code, with the goals of improving loss predictions and identifying component areas for improvement. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experience has demonstrated large performance gains from varying manifold and heat exchanger designs to improve flow distributions in the heat exchangers; 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, the sensitivity of performance to slight changes in internal geometry, and, in general, the various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors and moving-boundary capability. Preliminary attempts at validation of CFD-ACE models of the MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in the development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  10. Coupled Analysis of an Inlet and Fan for a Quiet Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Conners, Timothy R.; Wayman, Thomas R.

    2009-01-01

    A computational analysis of a Gulfstream isentropic external compression supersonic inlet coupled to a Rolls-Royce fan was completed. The inlet was designed for a small, low sonic boom supersonic vehicle with a design cruise condition of M = 1.6 at 45,000 feet. The inlet design included an annular bypass duct that routed flow subsonically around an engine-mounted gearbox and diverted flow with high shock losses away from the fan tip. Two Reynolds-averaged Navier-Stokes codes were used for the analysis: an axisymmetric code called AVCS for the inlet and a 3-D code called SWIFT for the fan. The codes were coupled at a mixing plane boundary using a separate code for data exchange. The codes were used to determine the performance of the inlet/fan system at the design point and to predict the performance and operability of the system over the flight profile. At the design point the core inlet had a recovery of 96 percent, and the fan operated near its peak efficiency and pressure ratio. A large hub radial distortion generated in the inlet was not eliminated by the fan and could pose a challenge for subsequent booster stages. The system operated stably at all points along the flight profile. Reduced stall margin was seen at low altitude and Mach number where flow separated on the interior lips of the cowl and bypass ducts. The coupled analysis gave consistent solutions at all points on the flight profile that would be difficult or impossible to predict by analysis of isolated components.

  11. Coupled Analysis of an Inlet and Fan for a Quiet Supersonic Jet

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Conners, Timothy R.; Wayman, Thomas R.

    2010-01-01

    A computational analysis of a Gulfstream isentropic external compression supersonic inlet coupled to a Rolls-Royce fan has been completed. The inlet was designed for a small, low sonic boom supersonic vehicle with a design cruise condition of M = 1.6 at 45,000 ft. The inlet design included an annular bypass duct that routed flow subsonically around an engine-mounted gearbox and diverted flow with high shock losses away from the fan tip. Two Reynolds-averaged Navier-Stokes codes were used for the analysis: an axisymmetric code called AVCS for the inlet and a three dimensional (3-D) code called SWIFT for the fan. The codes were coupled at a mixing plane boundary using a separate code for data exchange. The codes were used to determine the performance of the inlet/fan system at the design point and to predict the performance and operability of the system over the flight profile. At the design point the core inlet had a recovery of 96 percent, and the fan operated near its peak efficiency and pressure ratio. A large hub radial distortion generated in the inlet was not eliminated by the fan and could pose a challenge for subsequent booster stages. The system operated stably at all points along the flight profile. Reduced stall margin was seen at low altitude and Mach number where flow separated on the interior lips of the cowl and bypass ducts. The coupled analysis gave consistent solutions at all points on the flight profile that would be difficult or impossible to predict by analysis of isolated components.

  12. Application of the RNS3D Code to a Circular-Rectangular Transition Duct With and Without Inlet Swirl and Comparison with Experiments

    NASA Technical Reports Server (NTRS)

    Cavicchi, Richard H.

    1999-01-01

    Circular-rectangular transition ducts are used between engine exhausts and nozzles with rectangular cross sections that are designed for high-performance aircraft. NASA Glenn Research Center has made experimental investigations of a series of circular-rectangular transition ducts to provide benchmark flow data for comparison with numerical calculations. These ducts are all designed with superellipse cross sections to facilitate grid generation. In response to this challenge, the three-dimensional RNS3D code has been applied to one of these transition ducts. This particular duct has a length-to-inlet-diameter ratio of 1.5 and an exit-plane aspect ratio of 3.0. The inlet Mach number is 0.35. Two GRC experiments and the code were run for this duct without inlet swirl. One GRC experiment and the code were also run with inlet swirl. With no inlet swirl, the code was successful in predicting pressures and secondary flow conditions, including a pair of counter-rotating vortices at both sidewalls of the exit plane; all of these phenomena had been reported from the two GRC experiments. However, these vortices were suppressed in the experiment when inlet swirl was used, whereas the RNS3D code still predicted them. The experiment was unable to provide data near the sidewalls, the very region where the vortices were predicted.
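    The superellipse cross sections mentioned above are curves |x/a|^n + |y/b|^n = 1; with n = 2 and a = b the curve is a circle, and increasing n approaches a rectangle, which is why a single exponent schedule along the duct eases grid generation for circular-to-rectangular transitions. A parametric sketch:

```python
import math

def superellipse_point(theta, a, b, n):
    """Point on |x/a|**n + |y/b|**n = 1 at parameter angle theta.

    a, b: semi-axes; n: shape exponent (2 = ellipse, large n = near
    rectangle). Parameterization: x ~ cos(theta)**(2/n), y ~ sin(theta)**(2/n),
    with signs preserved so all four quadrants are covered.
    """
    c, s = math.cos(theta), math.sin(theta)
    x = a * math.copysign(abs(c) ** (2.0 / n), c)
    y = b * math.copysign(abs(s) ** (2.0 / n), s)
    return x, y
```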

  13. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  14. Improved lossless intra coding for H.264/MPEG-4 AVC.

    PubMed

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
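    The samplewise DPCM idea can be sketched in one dimension: predict each sample from its immediate neighbor rather than from the block edge. This is a schematic of the principle only, not the H.264/AVC syntax, and the entropy coding of the residuals is omitted:

```python
def dpcm_encode_row(samples):
    """Samplewise horizontal DPCM: each sample is predicted by its left
    neighbor; the first sample is transmitted as-is."""
    out = [samples[0]]
    out.extend(samples[i] - samples[i - 1] for i in range(1, len(samples)))
    return out

def dpcm_decode_row(residuals):
    """A running sum inverts the prediction exactly, hence lossless."""
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out
```

On smooth image rows the residuals are small, which is what makes the subsequent entropy coding effective.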

  15. CFD Modeling of Free-Piston Stirling Engines

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Zhang, Zhi-Guo; Tew, Roy C., Jr.; Gedeon, David; Simon, Terrence W.

    2001-01-01

    NASA Glenn Research Center (GRC) is funding Cleveland State University (CSU) to develop a reliable Computational Fluid Dynamics (CFD) code that can predict engine performance with the goal of significant improvements in accuracy when compared to one-dimensional (1-D) design code predictions. The funding also includes conducting code validation experiments at both the University of Minnesota (UMN) and CSU. In this paper a brief description of the work-in-progress is provided in the two areas (CFD and Experiments). Also, previous test results are compared with computational data obtained using (1) a 2-D CFD code obtained from Dr. Georg Scheuerer and further developed at CSU and (2) a multidimensional commercial code CFD-ACE+. The test data and computational results are for (1) a gas spring and (2) a single piston/cylinder with attached annular heat exchanger. The comparisons among the codes are discussed. The paper also discusses plans for conducting code validation experiments at CSU and UMN.

  16. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, which are used in armor packages; glass, which is used in truck and jeep windshields and in helicopters; and rock and concrete, which are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.
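    Smooth particle hydrodynamics interpolates field quantities with a compactly supported kernel centered on each particle; the standard cubic (M4) spline kernel, shown here in its 1-D form rather than SPHINX's specific implementation, is a typical choice:

```python
def cubic_spline_kernel(r, h):
    """Standard cubic (M4) spline SPH kernel, 1-D normalization.

    r: distance between particles; h: smoothing length.
    Compact support: the kernel vanishes beyond 2h.
    """
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```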

  17. Verification of the predictive capabilities of the 4C code cryogenic circuit model

    NASA Astrophysics Data System (ADS)

    Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi

    2014-01-01

    The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented, acausal model of the cryogenic circuit. In the last couple of years the code and its modules have undergone a series of validation exercises against experimental data, including data from the supercritical He loop HELIOS at CEA Grenoble. However, all of this analysis was performed after the experiments had been conducted. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To that end, a set of ad hoc experimental scenarios was designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C were then performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  18. Quiet High Speed Fan (QHSF) Flutter Calculations Using the TURBO Code

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Min, James B.; Mehmed, Oral

    2006-01-01

    A scale model of the NASA/Honeywell Engines Quiet High Speed Fan (QHSF) encountered flutter during wind tunnel testing. This report documents aeroelastic calculations performed for the QHSF scale model using the blade vibration capability of the TURBO code. Calculations at design speed were used to quantify the effect of numerical parameters on the aerodynamic damping predictions. This numerical study allowed the selection of appropriate values for these parameters and an assessment of the variability in the calculated aerodynamic damping. Calculations were also done at 90 percent of design speed. The predicted trends in aerodynamic damping corresponded to those observed during testing.

  19. Parametric bicubic spline and CAD tools for complex targets shape modelling in physical optics radar cross section prediction

    NASA Astrophysics Data System (ADS)

    Delogu, A.; Furini, F.

    1991-09-01

    Increasing interest in radar cross section (RCS) reduction is placing new demands on the theoretical, computational, and graphic techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency, and of achieving RCS control with modest structural changes, are becoming of paramount importance in stealth design. A computer code evaluating the RCS of arbitrarily shaped metallic objects that are computer-aided design (CAD) generated, and its validation with measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to contain the computer time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.

  20. Pretest predictions of the Fast Flux Test Facility Passive Safety Test Phase IIB transients using United States derived computer codes and methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heard, F.J.; Harris, R.A.; Padilla, A.

    The SASSYS/SAS4A systems analysis code was used to simulate a series of unprotected loss of flow (ULOF) tests planned at the Fast Flux Test Facility (FFTF). The subject tests were designed to investigate the transient performance of the FFTF during various ULOF scenarios for two different loading patterns designed to produce extremes in the assembly load pad clearance and the direction of the initial assembly bows. The tests are part of an international program designed to extend the existing data base on the performance of liquid metal reactors (LMRs). The analyses demonstrate that a wide range of power-to-flow ratios can be reached during the transients and, therefore, will yield valuable data on the dynamic character of the structural feedbacks in LMRs. These analyses will be repeated once the actual FFTF core loadings for the tests are available. These predictions, similar ones obtained by other international participants in the FFTF program, and post-test analyses will be used to upgrade and further verify the computer codes used to predict the behavior of LMRs.

  1. Manual for the Student Attributes Coding System.

    ERIC Educational Resources Information Center

    Brophy, Jere E.; And Others

    The Student Attributes Coding System has been developed for gathering data about the personal characteristics and classroom behavior of elementary school students selected for observation because they engender predictable attitudes and expectations in their teachers. This system is designed to systematically record and categorize all interactions…

  2. Spray combustion experiments and numerical predictions

    NASA Technical Reports Server (NTRS)

    Mularz, Edward J.; Bulzan, Daniel L.; Chen, Kuo-Huey

    1993-01-01

    The next generation of commercial aircraft will include turbofan engines with performance significantly better than those in the current fleet. Control of particulate and gaseous emissions will also be an integral part of the engine design criteria. These performance and emission requirements present a technical challenge for the combustor: control of the fuel and air mixing and control of the local stoichiometry will have to be maintained much more rigorously than with combustors in current production. A better understanding of the flow physics of liquid fuel spray combustion is necessary. This paper describes recent experiments on spray combustion where detailed measurements of the spray characteristics were made, including local drop-size distributions and velocities. Also, an advanced combustor CFD code has been under development and predictions from this code are compared with experimental results. Studies such as these will provide information to the advanced combustor designer on fuel spray quality and mixing effectiveness. Validation of new fast, robust, and efficient CFD codes will also enable the combustor designer to use them as additional design tools for optimization of combustor concepts for the next generation of aircraft engines.

  3. Comparison Between Predicted and Experimentally Measured Flow Fields at the Exit of the SSME HPFTP Impeller

    NASA Technical Reports Server (NTRS)

    Bache, George

    1993-01-01

    Validation of CFD codes is a critical first step in the process of developing CFD design capability. The MSFC Pump Technology Team has recognized the importance of validation and has thus funded several experimental programs designed to obtain CFD-quality validation data. The first data set to become available is for the SSME High Pressure Fuel Turbopump Impeller. LDV data was taken at the impeller inlet (to obtain a reliable inlet boundary condition) and at three radial positions at the impeller discharge. Our CFD code, TASCflow, is used within the propulsion and commercial pump industries as a tool for pump design. The objective of this work, therefore, is to further validate TASCflow for application in pump design. TASCflow was used to predict flow at the impeller discharge for flow rates of 80, 100, and 115 percent of design flow. Comparison to data has been made with encouraging results.

  4. Stagnation-point heat-transfer rate predictions at aeroassist flight conditions

    NASA Technical Reports Server (NTRS)

    Gupta, Roop N.; Jones, Jim J.; Rochelle, William C.

    1992-01-01

    The results are presented for the stagnation-point heat-transfer rates used in the design process of the Aeroassist Flight Experiment (AFE) vehicle over its entire aeropass trajectory. The prediction methods used in this investigation demonstrate the application of computational fluid dynamics (CFD) techniques to a wide range of flight conditions and their usefulness in a design process. The heating rates were computed by a viscous-shock-layer (VSL) code at the lower altitudes and by a Navier-Stokes (N-S) code for the higher-altitude cases. For both methods, finite-rate chemically reacting gas was considered, and a temperature-dependent wall-catalysis model was used. The wall temperature for each case was assumed to be the radiative equilibrium temperature, based on total heating. The radiative heating was estimated by using a correlation equation. Wall slip was included in the N-S calculation method, and this method implicitly accounts for shock slip. The N-S/VSL combination of prediction methods was established by comparison with published results from the benchmark flow-field code LAURA at the lower altitudes and with direct simulation Monte Carlo results for the higher-altitude cases. To obtain the design heating rate over the entire forward face of the vehicle, a boundary-layer method (the BLIMP code) that employs reacting chemistry and surface catalysis was used. The ratio of the VSL or N-S method prediction to that obtained from the boundary-layer method code at the stagnation point is used to define an adjustment factor, which accounts for the errors involved in using the boundary-layer method.
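    The adjustment-factor idea described above reduces to a simple ratio applied over the heating distribution. A minimal sketch (the function name and inputs are illustrative assumptions, not the AFE design tools):

```python
# Hedged sketch of the stagnation-point adjustment factor: the ratio of the
# higher-fidelity (VSL or N-S) stagnation-point heating prediction to the
# boundary-layer (BLIMP-type) prediction scales the boundary-layer heating
# distribution over the forward face of the vehicle.
def adjusted_heating(q_bl_distribution, q_hifi_stag, q_bl_stag):
    """Scale a boundary-layer heating distribution by the stagnation-point ratio."""
    factor = q_hifi_stag / q_bl_stag  # accounts for boundary-layer method error
    return [factor * q for q in q_bl_distribution]
```

If the N-S code predicts 90 W/cm2 at the stagnation point where the boundary-layer code predicts 100 W/cm2, the whole distribution is scaled by 0.9.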

  5. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis icing program and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper presents an overview of the test, including a detailed description of (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results are presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort is summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2- The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3- The Validation of ANTICE'.

  6. Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2001-01-01

    A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.

  7. Simplified APC for Space Shuttle applications. [Adaptive Predictive Coding for speech transmission

    NASA Technical Reports Server (NTRS)

    Hutchins, S. E.; Batson, B. H.

    1975-01-01

    This paper describes an 8 kbps adaptive predictive digital speech transmission system which was designed for potential use in the Space Shuttle Program. The system was designed to provide good voice quality in the presence of both cabin noise on board the Shuttle and the anticipated bursty channel. Minimal increase in size, weight, and power over the current high data rate system was also a design objective.

  8. Boredom begets creativity: A solution to the exploitation-exploration trade-off in predictive coding.

    PubMed

    Gomez-Ramirez, Jaime; Costa, Tommaso

    2017-12-01

    Here we investigate whether systems that minimize prediction error, e.g., predictive coding, can also exhibit creativity, or whether, on the contrary, prediction error minimization disqualifies the design of systems that respond in creative ways to non-recurrent problems. We argue that there is a key ingredient, overlooked by researchers, that needs to be incorporated to understand intelligent behavior in biological and technical systems. This ingredient is boredom. We propose a mathematical model based on the Black-Scholes-Merton equation which provides mechanistic insights into the interplay between boredom and prediction pleasure as the key drivers of behavior.

  9. Three-dimensional turbopump flowfield analysis

    NASA Technical Reports Server (NTRS)

    Sharma, O. P.; Belford, K. A.; Ni, R. H.

    1992-01-01

    A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of the flowfield in turbopumps is described, and example flowfields are discussed to illustrate that physics-based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark-quality data from two- and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial-flow turbine. Results of the program indicate that the present code, operated in a two-dimensional mode, is a cost-effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.

  10. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

    Accurate predictions of equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they play in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  11. Ascent Aerodynamic Pressure Distributions on WB001

    NASA Technical Reports Server (NTRS)

    Vu, B.; Ruf, J.; Canabal, F.; Brunty, J.

    1996-01-01

    To support the reusable launch vehicle concept study, the aerodynamic data and surface pressures for WB001 were predicted using three computational fluid dynamic (CFD) codes at several flow conditions. Comparisons were made code to code, code to the aerodynamic database, and against available experimental data. A set of particular solutions has been selected and recommended for use in preliminary conceptual designs. These CFD results have also been provided to the structures group for wing loading analysis.

  12. Euler Calculations at Off-Design Conditions for an Inlet of Inward Turning RBCC-SSTO Vehicle

    NASA Technical Reports Server (NTRS)

    Takashima, N.; Kothari, A. P.

    1998-01-01

    The inviscid performance of an inward turning inlet design is calculated computationally for the first time. Hypersonic vehicle designs based on inward turning inlets have been shown analytically to have increased effective specific impulse and lower heat load than comparably designed vehicles with two-dimensional inlets. The inward turning inlets are designed inversely from inviscid stream surfaces of known flow fields. The computational study is performed on a Mach 12 inlet design to validate the performance predicted by the design code (HAVDAC) and to calculate its off-design Mach number performance. The three-dimensional Euler equations are solved for Mach 4, 8, and 12 using a software package called SAM, which consists of an unstructured mesh generator (SAMmesh), a three-dimensional unstructured mesh flow solver (SAMcfd), and a CAD-based software tool (SAMcad). The computed momentum-averaged inlet throat pressure is within 6% of the design inlet throat pressure. The mass flux at the inlet throat is also within 7% of the value predicted by the design code, thereby validating the accuracy of the design code. The off-design Mach number results show that flow spillage is minimal and that the variation in the mass capture ratio with Mach number is comparable to that of an ideal 2-D inlet. The results from the inviscid flow calculations of the Mach 12 inward turning inlet indicate that the design has very good on- and off-design performance, which makes it a promising candidate for future air-breathing hypersonic vehicles.

  13. Predicting Cavitation on Marine and Hydrokinetic Turbine Blades with AeroDyn V15.04

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, Robynne

    Cavitation is an important consideration in the design of marine and hydrokinetic (MHK) turbines. The National Renewable Energy Laboratory's AeroDyn performance code was originally developed for horizontal-axis wind turbines and did not have the capability to predict cavitation inception. Therefore, AeroDyn has been updated to include the ability to predict cavitation on MHK turbines based on user-specified vapor pressure and submerged depth. This report outlines a verification of the AeroDyn V15.04 performance code for MHK turbines through a comparison to publicly available performance data.
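    The cavitation criterion the abstract describes, based on vapor pressure and submerged depth, can be sketched as a cavitation-number check. The function names, constants, and threshold handling below are illustrative assumptions, not the AeroDyn V15.04 API:

```python
# Hedged sketch of a cavitation-inception check for an MHK turbine blade
# section: cavitation is flagged when the local cavitation number drops below
# the negative of the section's minimum pressure coefficient.
RHO_WATER = 1025.0   # kg/m^3, assumed seawater density
G = 9.81             # m/s^2, gravitational acceleration

def cavitation_number(p_atm, depth, v_rel, p_vapor):
    """Cavitation number sigma at a blade section at the given depth."""
    p_static = p_atm + RHO_WATER * G * depth  # absolute static pressure at depth
    return (p_static - p_vapor) / (0.5 * RHO_WATER * v_rel ** 2)

def cavitates(sigma, min_pressure_coeff):
    """Inception criterion: sigma below -Cp_min implies local pressure < vapor pressure."""
    return sigma < -min_pressure_coeff
```

Deeper submergence raises the static pressure and hence sigma, which is why depth is a user input to the prediction.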

  14. An Assessment of Current Fan Noise Prediction Capability

    NASA Technical Reports Server (NTRS)

    Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.

    2008-01-01

    In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model-scale fans. The chosen codes were ANOPP, representing an empirical capability; RSI, representing an analytical capability; and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict the fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans, except at extreme aft emission angles. The RSI code can predict the fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.

  15. Potential flow analysis of glaze ice accretions on an airfoil

    NASA Technical Reports Server (NTRS)

    Zaguli, R. J.

    1984-01-01

    The results of an analytical/experimental study of the flow fields about an airfoil with leading-edge glaze ice accretion shapes are presented. Tests were conducted in the Icing Research Tunnel to measure surface pressure distributions and boundary layer separation/reattachment characteristics on a general aviation wing section to which were affixed wooden ice shapes approximating typical glaze ice accretions. Comparisons were made with pressure distributions predicted using current airfoil analysis codes as well as the Bristow mixed analysis/design airfoil panel code. The Bristow code was also used to predict the separation/reattachment dividing streamline by inputting the appropriate experimental surface pressure distribution.

  16. Experimental prediction of tube support interaction characteristics in steam generators: Volume 2, Westinghouse Model 51 flow entrance region: Topical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haslinger, K.H.

    Tube-to-tube support interaction characteristics were determined experimentally on a single-tube, multi-span geometry representative of the Westinghouse Model 51 steam generator economizer design. The results, in part, became input for an autoclave-type wear test program on steam generator tubes performed by Kraftwerk Union (KWU). More importantly, the test data reported here have been used to validate two analytical wear prediction codes: the WECAN code, which was developed by Westinghouse, and the ABAQUS code, which has been enhanced for EPRI by Foster Wheeler to enable simulation of gap conditions (including fluid film effects) for various support geometries.

  17. CFD Modeling of Launch Vehicle Aerodynamic Heating

    NASA Technical Reports Server (NTRS)

    Tashakkor, Scott B.; Canabal, Francisco; Mishtawy, Jason E.

    2011-01-01

    The Loci-CHEM 3.2 Computational Fluid Dynamics (CFD) code is being used to predict Ares-I launch vehicle aerodynamic heating. CFD has been used to predict both ascent and stage reentry environments and has been validated against wind tunnel tests and the Ares I-X developmental flight test. Most of the CFD predictions agreed with measurements. In regions where mismatches occurred, the CFD predictions tended to be higher than the measured data. These over-predictions usually occurred in complex regions, where the CFD models (mainly turbulence) contain less accurate approximations. In some instances, the errors causing the over-predictions would affect locations downstream even though the physics were still being modeled properly by CHEM. This is easily seen when comparing to the 103-AH data. In the areas where predictions were low, higher grid resolution often brought the results closer to the data. Other disagreements are attributed to Ares I-X hardware not being present in the grid as a result of computational resource limitations. The satisfactory predictions from CHEM provide confidence that future designs and predictions from the CFD code will provide an accurate approximation of the correct values for use in design and other applications.

  18. Statistical Analysis of CFD Solutions from the Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    2002-01-01

    A simple, graphical framework is presented for robust statistical evaluation of results obtained from N-version testing of a series of RANS CFD codes. The solutions were obtained by a variety of code developers and users for the June 2001 Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration used for the computational tests is the DLR-F4 wing-body combination, previously tested in several European wind tunnels and for which a previous N-version test had been conducted. The statistical framework is used to evaluate code results for (1) a single cruise design point, (2) drag polars, and (3) drag rise. The paper concludes with a discussion of the meaning of the results, especially with respect to predictability, validation, and reporting of solutions.

  19. STGSTK- PREDICTING MULTISTAGE AXIAL-FLOW COMPRESSOR PERFORMANCE BY A MEANLINE STAGE-STACKING METHOD

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1994-01-01

    The STGSTK computer program was developed for predicting the off-design performance of multistage axial-flow compressors. The axial-flow compressor is widely used in aircraft engines. In addition to its inherent advantage of high mass flow per frontal area, it can exhibit very good aerodynamic performance. However, good aerodynamic performance over an acceptable range of operating conditions is not easily attained. STGSTK provides an analytical tool for the development of new compressor designs. The simplicity of a one-dimensional compressible flow model enables the stage-stacking method used in STGSTK to have excellent convergence properties and short computer run times. Also, the simplicity of the model makes STGSTK a manageable code that eases the incorporation, or modification, of empirical correlations directly linked to test data. Thus, the user can adapt the code to meet varying design needs. STGSTK uses a meanline stage-stacking method to predict off-design performance. Stage and cumulative compressor performance is calculated from representative meanline velocity diagrams located at rotor inlet and outlet meanline radii. STGSTK includes options for the following: (1) non-dimensional stage characteristics may be input directly or calculated from stage design performance input; (2) stage characteristics may be modified for off-design speed and blade reset; and (3) rotor design deviation angle may be modified for off-design flow, speed, and blade setting angle. Many of the code's options use correlations that are normally obtained from experimental data. The STGSTK user may modify these correlations as needed. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 85K 8-bit bytes. STGSTK was developed in 1982.
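    The stage-stacking idea can be sketched in a few lines: each stage's exit conditions feed the next stage's inlet. The hard-coded stage characteristics below are invented for illustration; STGSTK derives them from meanline velocity diagrams and empirical correlations:

```python
# Illustrative sketch of meanline stage stacking: each stage contributes a
# pressure ratio and an efficiency-corrected temperature rise, and stages are
# "stacked" by passing each stage's exit state to the next stage.
def stack_stages(p_inlet, t_inlet, stage_chars, gamma=1.4):
    """Return (overall pressure ratio, exit total temperature in K).

    stage_chars: list of (pressure_ratio, adiabatic_efficiency) per stage.
    """
    p, t = p_inlet, t_inlet
    for pr, eta in stage_chars:
        # Ideal temperature rise from the isentropic relation, divided by
        # the stage adiabatic efficiency to get the actual rise.
        dt_ideal = t * (pr ** ((gamma - 1.0) / gamma) - 1.0)
        t += dt_ideal / eta
        p *= pr
    return p / p_inlet, t
```

Because each stage sees the previous stage's exit temperature, the same pressure ratio costs more temperature rise in later stages, which is the behavior the stacking method captures.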

  20. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization; (2) tree coding; and (3) trellis coding. The paper evaluates the performance of these coders and compares them both with conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated with illustrations. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
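    Codebook coding at one bit per sample can be sketched as an exhaustive nearest-neighbor search: blocks of N samples are matched against a 2^N-entry codebook, so the rate is N bits per N samples. The codebook below is hand-picked for illustration; the paper's coders design theirs for the source statistics:

```python
# Minimal vector quantization (codebook coding) sketch at one bit per sample.
def encode_block(block, codebook):
    """Return the index of the codebook vector with minimum squared error."""
    return min(range(len(codebook)),
               key=lambda i: sum((b - c) ** 2 for b, c in zip(block, codebook[i])))

def vq_encode(signal, codebook, n):
    """Split the signal into blocks of n samples and encode each one."""
    blocks = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return [encode_block(b, codebook) for b in blocks]
```

With n = 2 and a 4-entry codebook, each pair of samples is replaced by a 2-bit index, i.e. one bit per sample; tree and trellis coders reach the same rate by searching structured codebooks instead of exhaustive ones.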

  1. Numerical Prediction of Non-Reacting and Reacting Flow in a Model Gas Turbine Combustor

    NASA Technical Reports Server (NTRS)

    Davoudzadeh, Farhad; Liu, Nan-Suey

    2005-01-01

    The three-dimensional, viscous, turbulent, reacting and non-reacting flow characteristics of a model gas turbine combustor operating on air/methane are simulated via an unstructured and massively parallel Reynolds-Averaged Navier-Stokes (RANS) code. This serves to demonstrate the capabilities of the code for design and analysis of real combustor engines. The effects of some design features of combustors are examined. In addition, the computed results are validated against experimental data.

  2. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard configuration, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  3. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    Part of the overall design effort for x-ray FELs such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g., electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e., polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e., the extremely low gain regime).

  4. Multidisciplinary Modeling Software for Analysis, Design, and Optimization of HRRLS Vehicles

    NASA Technical Reports Server (NTRS)

    Spradley, Lawrence W.; Lohner, Rainald; Hunt, James L.

    2011-01-01

    The concept for Highly Reliable Reusable Launch Systems (HRRLS) under the NASA Hypersonics project is a two-stage-to-orbit, horizontal-take-off/horizontal-landing (HTHL) architecture with an air-breathing first stage. The first-stage vehicle is a slender body with an air-breathing propulsion system that is highly integrated with the airframe. The lightweight slender body will deflect significantly during flight. This global deflection affects the flow over the vehicle and into the engine, and thus the loads and moments on the vehicle. High-fidelity multidisciplinary analyses that account for these fluid-structure-thermal interactions are required to accurately predict the vehicle loads and the resultant response. These predictions of vehicle response to multiphysics loads, calculated with fluid-structure-thermal interaction, are required in order to optimize the vehicle design over its full operating range. This contract with ResearchSouth addresses one of the primary objectives of the Vehicle Technology Integration (VTI) discipline: the development of high-fidelity multidisciplinary analysis and optimization methods and tools for HRRLS vehicles. The primary goal of this effort is the development of an integrated software system that can be used for full-vehicle optimization.
This goal was accomplished by: 1) integrating the master code, FEMAP, into the multidiscipline software network to direct the coupling and assure accurate fluid-structure-thermal interaction solutions; 2) loosely coupling the Euler flow solver FEFLO to the available and proven aeroelasticity and large-deformation (FEAP) code; 3) providing a coupled Euler-boundary layer capability for rapid viscous flow simulation; 4) developing and implementing improved Euler/RANS algorithms in the FEFLO CFD code to provide accurate shock capturing, skin friction, and heat-transfer predictions for HRRLS vehicles in hypersonic flow; 5) performing a Reynolds-averaged Navier-Stokes computation on an HRRLS configuration; 6) integrating the RANS solver with the FEAP code for coupled fluid-structure-thermal capability; 7) integrating the existing NASA SRGULL propulsion flow path prediction software with the FEFLO software for quasi-3D propulsion flow path predictions; and 8) improving, and integrating into the network, an existing adjoint-based design optimization code.

  5. VS30 – A site-characterization parameter for use in building Codes, simplified earthquake resistant design, GMPEs, and ShakeMaps

    USGS Publications Warehouse

    Borcherdt, Roger D.

    2012-01-01

    VS30, defined as the average seismic shear-wave velocity from the surface to a depth of 30 meters, has found widespread use as a parameter to characterize site response for simplified earthquake-resistant design as implemented in building codes worldwide. VS30, as initially introduced by the author for the US 1994 NEHRP Building Code, provides unambiguous definitions of site classes and site coefficients for site-dependent response spectra based on correlations derived from extensive borehole logging and comparative ground-motion measurement programs in California. Subsequent use of VS30 for development of strong ground motion prediction equations (GMPEs) and measurement of extensive sets of VS borehole data have confirmed the previous empirical correlations and established correlations of VS30 with VSZ at other depths. These correlations provide closed-form expressions to predict VS30 at a large number of additional sites and further justify VS30 as a parameter to characterize site response for simplified building codes, GMPEs, ShakeMap, and seismic hazard mapping.
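
    By definition, VS30 is the travel-time (harmonic) average of the shear-wave velocity profile over the top 30 meters, VS30 = 30 / Σ(d_i / v_i). A minimal sketch of that computation from a layered borehole profile (the function name and layer handling are illustrative, not from the publication):

```python
def vs30(thicknesses_m, velocities_mps):
    """Travel-time average shear-wave velocity over the top 30 m:
    VS30 = 30 / sum(d_i / v_i), with the deepest layer truncated so
    that exactly 30 m of profile is covered."""
    remaining = 30.0
    travel_time_s = 0.0
    for d, v in zip(thicknesses_m, velocities_mps):
        d = min(d, remaining)
        travel_time_s += d / v
        remaining -= d
        if remaining <= 0.0:
            break
    if remaining > 0.0:
        raise ValueError("velocity profile is shallower than 30 m")
    return 30.0 / travel_time_s
```

    Because it is a travel-time average, slow near-surface layers are weighted heavily (e.g., 10 m at 200 m/s over 400 m/s material yields 300 m/s, not 333 m/s), which is why VS30 discriminates soft-soil sites effectively.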

  6. Artificial neural network prediction of aircraft aeroelastic behavior

    NASA Astrophysics Data System (ADS)

    Pesonen, Urpo Juhani

    An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.

  7. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  8. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Manuel, Mario; Keiter, Paul; Drake, R. P.

    2014-10-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, imploding bubbles, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via Grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0001840, and by the National Laser User Facility Program, Grant Number DE-NA0000850.

  9. Aerodynamic and heat transfer analysis of the low aspect ratio turbine

    NASA Astrophysics Data System (ADS)

    Sharma, O. P.; Nguyen, P.; Ni, R. H.; Rhie, C. M.; White, J. A.

    1987-06-01

    The available two- and three-dimensional codes are used to estimate external heat loads and aerodynamic characteristics of a highly loaded turbine stage in order to demonstrate state-of-the-art methodologies in turbine design. By using data for a low aspect ratio turbine, it is found that a three-dimensional multistage Euler code gives good overall predictions for the turbine stage, yielding good estimates of the stage pressure ratio, mass flow, and exit gas angles. The nozzle vane loading distribution is well predicted by both the three-dimensional multistage Euler and three-dimensional Navier-Stokes codes. The vane airfoil surface Stanton number distributions, however, are underpredicted by both two- and three-dimensional boundary value analyses.

  10. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
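
    The predict-then-entropy-code pipeline described above can be sketched at the bit level in software. This is a hypothetical digital model for illustration only, not the chip's analog circuit; the left-neighbor predictor, the zigzag mapping of signed residuals, and the Rice parameter k are all assumptions:

```python
def predict_encode_row(pixels, k=2):
    """DPCM + Golomb-Rice model of a focal-plane compression pipeline:
    predict each pixel from its left neighbor (predictor seeded with 0),
    zigzag-map the signed residual to a non-negative integer, then
    Rice-encode it as a unary quotient, a '0' terminator, and k
    remainder bits."""
    codewords = []
    prev = 0  # assumed predictor seed for the first pixel in the row
    for p in pixels:
        resid = p - prev
        u = 2 * resid if resid >= 0 else -2 * resid - 1  # zigzag map
        q, r = u >> k, u & ((1 << k) - 1)
        codewords.append("1" * q + "0" + (bin(r)[2:].zfill(k) if k else ""))
        prev = p
    return codewords
```

    Small residuals map to short codewords, so the decorrelation step directly drives the compression ratio; choosing k to match the residual statistics is the usual tuning knob in Rice coding.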

  11. Stirling cryocooler test results and design model verification

    NASA Astrophysics Data System (ADS)

    Shimko, Martin A.; Stacy, W. D.; McCormick, John A.

    A long-life Stirling cycle cryocooler being developed for spaceborne applications is described. The results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the generator design code used in its development are presented. This machine achieved a cold-end temperature of 65 K while carrying a 1/2-W cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for sweeping and sealing the machine working volumes. The double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. The PC-compatible design code developed for this design approach calculates regenerator loss, including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls. The code accurately predicted cooler performance and assisted in diagnosing breadboard machine flaws during shakedown and development testing.

  12. The WISGSK: A computer code for the prediction of a multistage axial compressor performance with water ingestion

    NASA Technical Reports Server (NTRS)

    Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A computer code is presented for the prediction of off-design axial-flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and liquid phases, and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The code has options for performance estimation with (1) gas mixtures and (2) gas-water droplet mixtures, and therefore can take into account the humidity present in ambient conditions. A test case illustrates the method of using the code. The code follows closely the methodology and architecture of the NASA-STGSTK code for the estimation of axial-flow compressor performance with air flow.
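
    The stage stacking procedure can be illustrated with a dry-air sketch: total pressure and temperature are propagated stage by stage, each stage contributing its pressure ratio and an actual temperature rise equal to the isentropic rise divided by the stage efficiency. The per-stage inputs and gas properties here are illustrative assumptions; the actual WISGSK code additionally models the two-phase droplet processes listed above:

```python
GAMMA = 1.4  # assumed ratio of specific heats for dry air

def stack_stages(pt_in, Tt_in, stage_prs, stage_etas):
    """Propagate total pressure (Pa) and total temperature (K) through
    successive compressor stages. Each stage applies its pressure ratio;
    the isentropic temperature rise divided by the stage isentropic
    efficiency gives the actual rise."""
    pt, Tt = pt_in, Tt_in
    for pr, eta in zip(stage_prs, stage_etas):
        dT_ideal = Tt * (pr ** ((GAMMA - 1.0) / GAMMA) - 1.0)
        Tt += dT_ideal / eta
        pt *= pr
    return pt, Tt
```

    With unit efficiencies the stacking compounds exactly to the overall isentropic relation, which makes a convenient sanity check on the loop.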

  13. The STD/MHD codes - Comparison of analyses with experiments at AEDC/HPDE, Reynolds Metal Co., and Hercules, Inc. [for MHD generator flows

    NASA Technical Reports Server (NTRS)

    Vetter, A. A.; Maxwell, C. D.; Swean, T. F., Jr.; Demetriades, S. T.; Oliver, D. A.; Bangerter, C. D.

    1981-01-01

    Data from sufficiently well-instrumented, short-duration experiments at AEDC/HPDE, Reynolds Metal Co., and Hercules, Inc., are compared to analyses with multidimensional and time-dependent simulations with the STD/MHD computer codes. These analyses reveal detailed features of major transient events, severe loss mechanisms, and anomalous MHD behavior. In particular, these analyses predicted higher-than-design voltage drops, Hall voltage overshoots, and asymmetric voltage drops before the experimental data were available. The predictions obtained with these analyses are in excellent agreement with the experimental data and the failure predictions are consistent with the experiments. The design of large, high-interaction or advanced MHD experiments will require application of sophisticated, detailed and comprehensive computational procedures in order to account for the critical mechanisms which led to the observed behavior in these experiments.

  14. Project summaries

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Lunar base projects, including a reconfigurable lunar cargo launcher, a thermal and micrometeorite protection system, a versatile lifting machine with robotic capabilities, a cargo transport system, the design of a road construction system for a lunar base, and the design of a device for removing lunar dust from material surfaces, are discussed. The emphasis on the Gulf of Mexico project was on the development of a computer simulation model for predicting vessel station keeping requirements. An existing code, used in predicting station keeping requirements for oil drilling platforms operating in North Shore (Alaska) waters was used as a basis for the computer simulation. Modifications were made to the existing code. The input into the model consists of satellite altimeter readings and water velocity readings from buoys stationed in the Gulf of Mexico. The satellite data consists of altimeter readings (wave height) taken during the spring of 1989. The simulation model predicts water velocity and direction, and wind velocity.

  15. Developments in REDES: The rocket engine design expert system

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1990-01-01

    The Rocket Engine Design Expert System (REDES) is being developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP, a nozzle design program named RAO, a regenerative cooling channel performance evaluation code named RTE, and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES is built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.

  16. Developments in REDES: The Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth O.

    1990-01-01

    The Rocket Engine Design Expert System (REDES) was developed at NASA-Lewis to collect, automate, and perpetuate the existing expertise of performing a comprehensive rocket engine analysis and design. Currently, REDES uses the rigorous JANNAF methodology to analyze the performance of the thrust chamber and perform computational studies of liquid rocket engine problems. The following computer codes were included in REDES: a gas properties program named GASP; a nozzle design program named RAO; a regenerative cooling channel performance evaluation code named RTE; and the JANNAF standard liquid rocket engine performance prediction code TDK (including performance evaluation modules ODE, ODK, TDE, TDK, and BLM). Computational analyses are being conducted by REDES to provide solutions to liquid rocket engine thrust chamber problems. REDES was built in the Knowledge Engineering Environment (KEE) expert system shell and runs on a Sun 4/110 computer.

  17. Magnetohydrodynamic modelling of exploding foil initiators

    NASA Astrophysics Data System (ADS)

    Neal, William

    2015-06-01

    Magnetohydrodynamic (MHD) codes are currently being developed, and used, to predict the behaviour of electrically-driven flyer-plates. These codes are of particular interest to the design of exploding foil initiator (EFI) detonators but there is a distinct lack of comparison with high-fidelity experimental data. This study aims to compare a MHD code with a collection of temporally and spatially resolved diagnostics including PDV, dual-axis imaging and streak imaging. The results show the code's excellent representation of the flyer-plate launch and highlight features within the experiment that the model fails to capture.

  18. Condensation model for the ESBWR passive condensers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Revankar, S. T.; Zhou, W.; Wolf, B.

    2012-07-01

    In the General Electric Economic Simplified Boiling Water Reactor (GE-ESBWR), the passive containment cooling system (PCCS) plays a major role in containment pressure control in case of a loss-of-coolant accident. The PCCS condenser must be able to remove sufficient energy from the reactor containment to prevent the containment from exceeding its design pressure following a design basis accident. There are three PCCS condensation modes depending on the containment pressurization due to coolant discharge: complete condensation, cyclic venting, and flow-through mode. The present work reviews the models and presents their predictive capability along with comparison with existing data from separate effects tests. The condensation models in the thermal hydraulics code RELAP5 are also assessed to examine their application to the various flow modes of condensation. The default model in the code, which is basically the Nusselt solution, predicts complete condensation well. The UCB model predicts through flow well. No single condensation model in RELAP5 predicts complete condensation, cyclic venting, and through-flow condensation consistently. New condensation correlations are given that accurately predict all three modes of PCCS condensation. (authors)

  19. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with a degrading probability of predicting the source of the qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design, as well as ease of modifying the number of users, is an equally exceptional quality presented by the code in contrast to the Optical Orthogonal Code (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  20. Numerical simulation of jet aerodynamics using the three-dimensional Navier-Stokes code PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Abdol-Hamid, Khaled S.

    1996-01-01

    This report presents a unified method for subsonic and supersonic jet analysis using the three-dimensional Navier-Stokes code PAB3D. The Navier-Stokes code was used to obtain solutions for axisymmetric jets with on-design operating conditions at Mach numbers ranging from 0.6 to 3.0, supersonic jets containing weak shocks and Mach disks, and supersonic jets with nonaxisymmetric nozzle exit geometries. This report discusses computational methods, code implementation, computed results, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions. The Navier-Stokes method using the standard Jones-Launder two-equation kappa-epsilon turbulence model can accurately predict jet flow, and such predictions are made without any modification to the published constants for the turbulence model.

  1. Application of an airfoil stall flutter computer prediction program to a three-dimensional wing: Prediction versus experiment. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Muffoletto, A. J.

    1982-01-01

    An aerodynamic computer code, capable of predicting unsteady normal-force and moment coefficient values for an airfoil undergoing dynamic stall, is used to predict the amplitudes and frequencies of a wing undergoing torsional stall flutter. The code, developed at United Technologies Research Corporation (UTRC), is an empirical prediction method designed to yield unsteady values of normal force and moment, given the airfoil's static coefficient characteristics and the unsteady aerodynamic parameters alpha, A, and B. In this experiment, conducted in the PSU 4' x 5' subsonic wind tunnel, the wing's elastic axis, torsional spring constant, and initial angle of attack are varied, and the oscillation amplitudes and frequencies of the wing, while undergoing torsional stall flutter, are recorded. These experimental values show only fair agreement with the predicted responses. Predictions tend to be good at low velocities and rather poor at higher velocities.

  2. Is phonology bypassed in normal or dyslexic development?

    PubMed

    Pennington, B F; Lefly, D L; Van Orden, G C; Bookman, M O; Smith, S D

    1987-01-01

    A pervasive assumption in most accounts of normal reading and spelling development is that phonological coding is important early in development but is subsequently superseded by faster, orthographic coding which bypasses phonology. We call this assumption, which derives from dual process theory, the developmental bypass hypothesis. The present study tests four specific predictions of the developmental bypass hypothesis by comparing dyslexics and nondyslexics from the same families in a cross-sectional design. The four predictions are: 1) that phonological coding skill develops early in normal readers and soon reaches asymptote, whereas orthographic coding skill has a protracted course of development; 2) that the correlation of adult reading or spelling performance with phonological coding skill is considerably less than the correlation with orthographic coding skill; 3) that dyslexics who are mainly deficient in phonological coding skill should be able to bypass this deficit and eventually close the gap in reading and spelling performance; and 4) that the greatest differences between dyslexics and developmental controls on measures of phonological coding skill should be observed early rather than late in development. None of the four predictions of the developmental bypass hypothesis were upheld. Phonological coding skill continued to develop in nondyslexics until adulthood. It accounted for a substantial (32-53 percent) portion of the variance in reading and spelling performance in adult nondyslexics, whereas orthographic coding skill did not account for a statistically reliable portion of this variance. The dyslexics differed little across age in phonological coding skill, but made linear progress in orthographic coding skill, surpassing spelling-age (SA) controls by adulthood. Nonetheless, they did not close the gap in reading and spelling performance.
Finally, dyslexics were significantly worse than SA (and Reading Age [RA]) controls in phonological coding skill only in adulthood.

  3. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE PAGES

    Ilas, Germina; Liljenfeldt, Henrik

    2017-05-19

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  4. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Liljenfeldt, Henrik

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  5. Acoustic Prediction State of the Art Assessment

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2007-01-01

    The acoustic assessment task for both the Subsonic Fixed Wing and the Supersonic projects under NASA's Fundamental Aeronautics Program was designed to assess the current state-of-the-art in noise prediction capability and to establish baselines for gauging future progress. The documentation of our current capabilities included quantifying the differences between predictions of noise from computer codes and measurements of noise from experimental tests. Quantifying the accuracy of both the computed and experimental results further enhanced the credibility of the assessment. This presentation gives sample results from codes representative of NASA's capabilities in aircraft noise prediction both for systems and components. These include semi-empirical, statistical, analytical, and numerical codes. System level results are shown for both aircraft and engines. Component level results are shown for a landing gear prototype, for fan broadband noise, for jet noise from a subsonic round nozzle, and for propulsion airframe aeroacoustic interactions. Additional results are shown for modeling of the acoustic behavior of duct acoustic lining and the attenuation of sound in lined ducts with flow.

  6. Design of a double-anode magnetron-injection gun for the W-band gyrotron

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Ho; Choi, Jin Joo; So, Joon Ho

    2015-07-01

    A double-anode magnetron-injection gun (MIG) was designed. The MIG is for a W-band 10-kW gyrotron. Analytic equations based on adiabatic theory and angular momentum conservation were used to examine the initial design parameters such as the cathode angle and the radius of the beam emitting surface. The MIG's performance was predicted by using an electron trajectory code, the EGUN code. The beam spread of the axial velocity, Δvz/vz, obtained from the EGUN code was observed to be 1.34% at α = 1.3. The cathode edge emission and the thermal effect were modeled. The cathode edge emission was found to have a major effect on the velocity spread. The electron beam's quality was significantly improved by affixing non-emissive cylinders to the cathode.

  7. Propagation of Computational Uncertainty Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2007-01-01

    This paper describes the use of formally designed experiments to aid in the error analysis of a computational experiment. A method is described by which the underlying code is approximated with relatively low-order polynomial graduating functions represented by truncated Taylor series approximations to the true underlying response function. A resource-minimal approach is outlined by which such graduating functions can be estimated from a minimum number of case runs of the underlying computational code. Certain practical considerations are discussed, including ways and means of coping with high-order response functions. The distributional properties of prediction residuals are presented and discussed. A practical method is presented for quantifying that component of the prediction uncertainty of a computational code that can be attributed to imperfect knowledge of independent variable levels. This method is illustrated with a recent assessment of uncertainty in computational estimates of Space Shuttle thermal and structural reentry loads attributable to ice and foam debris impact on ascent.
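
    The graduating-function idea can be sketched as an ordinary least-squares fit of a truncated Taylor series (here a one-variable quadratic) to a handful of case runs of the underlying code; the function name and data layout are illustrative, not from the paper:

```python
import numpy as np

def fit_quadratic_surface(x, y):
    """Least-squares estimate of a second-order graduating function
    y ~ b0 + b1*x + b2*x**2, i.e., a truncated Taylor-series surrogate
    for the underlying computational code's response. x holds the
    independent-variable levels of the case runs, y the code outputs."""
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

    With a formally designed set of case runs, the same fit extends to several variables and cross terms, and the residuals of the fit provide the distributional information discussed in the abstract.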

  8. Investigation of the Flow Physics Driving Stall-Side Flutter in Advanced Forward Swept Fan Designs

    NASA Technical Reports Server (NTRS)

    Sanders, Albert J.; Liu, Jong S.; Panovsky, Josef; Bakhle, Milind A.; Stefko, George; Srivastava, Rakesh

    2003-01-01

    Flutter-free operation of advanced transonic fan designs continues to be a challenging task for the designers of aircraft engines. In order to meet the demands of increased performance and lighter weight, these modern fan designs usually feature low-aspect ratio shroudless rotor blade designs that make the task of achieving adequate flutter margin even more challenging for the aeroelastician. This is especially true for advanced forward swept designs that encompass an entirely new design space compared to previous experience. Fortunately, advances in unsteady computational fluid dynamic (CFD) techniques over the past decade now provide an analysis capability that can be used to quantitatively assess the aeroelastic characteristics of these next generation fans during the design cycle. For aeroelastic applications, Mississippi State University and NASA Glenn Research Center have developed the CFD code TURBO-AE. This code is a time-accurate three-dimensional Euler/Navier-Stokes unsteady flow solver developed for axial-flow turbomachinery that can model multiple blade rows undergoing harmonic oscillations with arbitrary interblade phase angles, i.e., nodal diameter patterns. Details of the code can be found in Chen et al. (1993, 1994), Bakhle et al. (1997, 1998), and Srivastava et al. (1999). To assess aeroelastic stability, the work-per-cycle from TURBO-AE is converted to the critical damping ratio since this value is more physically meaningful, with both the unsteady normal pressure and viscous shear forces included in the work-per-cycle calculation. If the total damping (aerodynamic plus mechanical) is negative, then the blade is unstable since it extracts energy from the flow field over the vibration cycle. 
TURBO-AE is an integral part of an aeroelastic design system being developed at Honeywell Engines, Systems & Services for flutter and forced response predictions, with test cases from development rig and engine tests being used to validate its predictive capability. A recent experimental program (Sanders et al., 2002) was aimed at providing the necessary unsteady aerodynamic and vibratory response data needed to validate TURBO-AE for fan flutter predictions. A comparison of numerical TURBO-AE simulations with the benchmark flutter data is given in Sanders et al. (2003), with the data used to guide the validation of the code and define best practices for performing accurate unsteady simulations. The agreement between the analyses and the measurements was remarkable, demonstrating the ability of the analysis to accurately model the unsteady flow processes driving stall-side flutter.
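The work-per-cycle-to-damping conversion described above can be sketched with the standard viscous-damping analogy. The modal mass and vibration amplitude arguments below are illustrative assumptions for a single blade mode, not TURBO-AE's actual interface:

```python
import math

def critical_damping_ratio(work_per_cycle, modal_mass, omega, amplitude):
    """Convert aerodynamic work-per-cycle to an equivalent critical damping
    ratio via the viscous analogy: energy dissipated per cycle by a viscous
    damper c at amplitude q0 is E = pi*c*omega*q0**2, and zeta = c/(2*m*omega).
    Positive work done by the flow ON the blade (energy extracted from the
    flow field) therefore maps to NEGATIVE aerodynamic damping."""
    return -work_per_cycle / (2.0 * math.pi * modal_mass * omega**2 * amplitude**2)

def is_stable(zeta_aero, zeta_mech):
    """Blade is flutter-stable only if total (aero + mechanical) damping > 0."""
    return (zeta_aero + zeta_mech) > 0.0
```

With this sign convention, a blade doing net work on the flow (negative work-per-cycle) yields positive aerodynamic damping and a stable prediction.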

  9. Mass transfer effects in a gasification riser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breault, Ronald W.; Li, Tingwen; Nicoletti, Phillip

    2013-07-01

In the development of multiphase reacting computational fluid dynamics (CFD) codes, a number of simplifications were incorporated into the codes and models. One of these simplifications was the use of a simplistic mass transfer correlation for the faster reactions and the complete omission of mass transfer effects on the moderate- and slow-speed reactions, such as those in a fluidized bed gasifier. Another problem that has propagated is that the mass transfer correlation used in the codes is not universal and is applied far outside the bubbling fluidized bed regime for which it was developed when used in circulating fluidized bed (CFB) riser reactors. These problems hold for the major CFD codes. To alleviate this problem, a mechanistic mass transfer coefficient algorithm has been developed, based upon earlier work by Breault et al. This fundamental approach uses the local hydrodynamics to predict a local, time-varying mass transfer coefficient. The predicted mass transfer coefficients and the corresponding Sherwood numbers agree well with literature data and are typically about an order of magnitude lower than the correlation noted above. The incorporation of the new mass transfer model gives the expected behavior for all the gasification reactions evaluated in the paper. At expected and typical design values for the solid flow rate in a CFB riser gasifier, an ANOVA has shown the predictions from the new code to be significantly different from those of the original code. The new algorithm should be used so that conversions are not overpredicted. Additionally, its behavior with changes in solid flow rate is consistent with the changes in the hydrodynamics.
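The idea of a mass transfer coefficient that depends on the local hydrodynamics can be illustrated with the classic Ranz-Marshall single-particle correlation. This is an illustrative stand-in only; the mechanistic algorithm of Breault et al. used in the paper is different:

```python
def sherwood_ranz_marshall(re_p, sc):
    """Classic Ranz-Marshall correlation for a single particle:
    Sh = 2 + 0.6 * Re^(1/2) * Sc^(1/3). Illustrative only."""
    return 2.0 + 0.6 * re_p**0.5 * sc**(1.0 / 3.0)

def mass_transfer_coefficient(slip_velocity, d_p, rho_g, mu_g, diffusivity):
    """Local mass transfer coefficient k_m [m/s] from local slip velocity:
    the hydrodynamics enter through the particle Reynolds number."""
    sc = mu_g / (rho_g * diffusivity)               # Schmidt number
    re_p = rho_g * slip_velocity * d_p / mu_g       # local particle Reynolds number
    sh = sherwood_ranz_marshall(re_p, sc)           # Sherwood number
    return sh * diffusivity / d_p
```

Because the Reynolds number is evaluated from the local, time-varying slip velocity, the coefficient varies in space and time, which is the essential feature of the mechanistic approach described above.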

  10. Efficient, Multi-Scale Designs Take Flight

    NASA Technical Reports Server (NTRS)

    2003-01-01

Engineers can solve aerospace design problems faster and more efficiently with a versatile software product that performs automated structural analysis and sizing optimization. Collier Research Corporation's HyperSizer Structural Sizing Software is a design, analysis, and documentation tool that increases productivity and standardization for a design team. Based on established aerospace structural methods for strength, stability, and stiffness, HyperSizer can be used all the way from conceptual design to in-service support. The software originated from NASA's efforts to automate its capability to perform aircraft strength analyses, structural sizing, and weight prediction and reduction. With a strategy to combine finite element analysis with an automated design procedure, NASA's Langley Research Center led the development of a software code known as ST-SIZE from 1988 to 1995. Collier Research employees were principal developers of the code along with Langley researchers. The code evolved into one that could analyze the strength and stability of stiffened panels constructed of any material, including lightweight, fiber-reinforced composites.

  11. Nontangent, Developed Contour Bulkheads for a Single-Stage Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey; Lepsch, Roger A., Jr.

    2000-01-01

Dry weights for single-stage launch vehicles that incorporate nontangent, developed contour bulkheads are estimated and compared to a baseline vehicle with 1.414 aspect ratio ellipsoidal bulkheads. Weights, volumes, and heights of optimized bulkhead designs are computed using a preliminary design bulkhead analysis code. The dry weights of vehicles that incorporate the optimized bulkheads are predicted using a vehicle weights and sizing code. Two optimization approaches are employed. A structural-level method, where the vehicle's three major bulkhead regions are optimized separately and then incorporated into a model for computation of the vehicle dry weight, predicts a reduction of 4365 lb (2.2%) from the 200,679-lb baseline vehicle dry weight. In the second, vehicle-level approach, the vehicle dry weight is the objective function for the optimization. For the vehicle-level analysis, modified bulkhead designs are analyzed and incorporated into the weights model for computation of a dry weight. The optimizer simultaneously manipulates design variables for all three bulkheads to reduce the dry weight. The vehicle-level analysis predicts a dry weight reduction of 5129 lb, a 2.6% reduction from the baseline weight. Based on these results, nontangent, developed contour bulkheads may provide substantial weight savings for single-stage vehicles.
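The distinction between the structural-level and vehicle-level approaches can be sketched with a toy surrogate model. The weight functions, the coupling penalty, and all numbers below are made up for illustration; they are not the paper's bulkhead analysis or sizing codes:

```python
def bulkhead_weight(ar):
    """Hypothetical surrogate for one bulkhead's weight vs. aspect ratio,
    with a per-bulkhead minimum at ar = 1.6."""
    return 1000.0 + 250.0 * (ar - 1.6) ** 2

def vehicle_dry_weight(ars):
    """Hypothetical dry weight: the (made-up) length penalty couples the
    three bulkheads, so per-bulkhead optima need not be the vehicle optimum."""
    length_penalty = 800.0 * (sum(ars) - 4.5) ** 2
    return 190000.0 + sum(bulkhead_weight(a) for a in ars) + length_penalty

def grid_search(grid):
    """Vehicle-level optimization: manipulate all three design variables
    simultaneously with dry weight as the objective function."""
    best_w, best_x = float("inf"), None
    for a1 in grid:
        for a2 in grid:
            for a3 in grid:
                w = vehicle_dry_weight((a1, a2, a3))
                if w < best_w:
                    best_w, best_x = w, (a1, a2, a3)
    return best_w, best_x
```

In this toy model the structural-level answer (each bulkhead at its own optimum, ar = 1.6) is beaten by the vehicle-level search, mirroring the 2.2% vs. 2.6% result reported above.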

  12. Edge-diffraction effects in RCS predictions and their importance in systems analysis

    NASA Astrophysics Data System (ADS)

    Friess, W. F.; Klement, D.; Ruppel, M.; Stein, Volker

    1996-06-01

In developing RCS prediction codes, a variety of physical effects, such as edge diffraction, must be considered, with the consequence that the computational effort increases considerably. This fact limits the field of application of such codes, especially if the RCS data serve as input parameters for system simulators, which very often need these data for a large number of observation angles and/or frequencies. Conversely, the results of a system analysis can be used to estimate the relevance of physical effects from a system viewpoint and to rank them according to their magnitude. This paper evaluates the importance for systems analysis of RCS predictions containing an edge-diffracted field. A double dihedral with strongly depolarizing behavior and a generic airplane design containing many arbitrarily oriented edges are used as test structures. Data of the scattered field are generated by the RCS computer code SIGMA with and without edge diffraction effects included. These data are submitted to the code DORA to determine radar range and radar detectability, and to a SAR simulator code to generate SAR imagery. In both cases special scenarios are assumed. The essential features of the computer codes in their current state are described, and the results are presented and discussed from a systems viewpoint.

  13. Broadband transmission-type coding metamaterial for wavefront manipulation for airborne sound

    NASA Astrophysics Data System (ADS)

    Li, Kun; Liang, Bin; Yang, Jing; Yang, Jun; Cheng, Jian-chun

    2018-07-01

The recent advent of coding metamaterials, as a new class of acoustic metamaterials, substantially reduces the complexity in the design and fabrication of acoustic functional devices capable of manipulating sound waves in exotic manners by arranging coding elements with discrete phase states in specific sequences. It is therefore intriguing, both physically and practically, to pursue a mechanism for realizing broadband acoustic coding metamaterials that control transmitted waves with a fine resolution of the phase profile. Here, we propose the design of a transmission-type acoustic coding device and demonstrate its metamaterial-based implementation. The mechanism is that, instead of relying on resonant coding elements that are necessarily narrow-band, we build weak-resonant coding elements with a helical-like metamaterial with a continuously varying pitch that effectively expands the working bandwidth while maintaining the sub-wavelength resolution of the phase profile that is vital for the production of complicated wave fields. The effectiveness of our proposed scheme is numerically verified via the demonstration of three distinctive examples of acoustic focusing, anomalous refraction, and vortex beam generation in the prescribed frequency band on the basis of 1- and 2-bit coding sequences. Simulation results agree well with theoretical predictions, showing that the designed coding devices with discrete phase profiles are efficient in engineering the wavefront of outgoing waves to form the desired spatial pattern. We anticipate the realization of coding metamaterials with broadband functionality and design flexibility to open up possibilities for novel acoustic functional devices for the special manipulation of transmitted waves and underpin diverse applications ranging from medical ultrasound imaging to acoustic detection.
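The discrete phase states of an n-bit coding sequence, and the anomalous refraction angle produced by a gradient sequence (e.g. 0-1-2-3-0-1-2-3 for 2-bit), follow from the generalized Snell's law. A minimal sketch; the element width and wavelength in the usage below are illustrative values, not the paper's design:

```python
import math

def coding_phases(n_bits):
    """Discrete phase states of an n-bit coding metamaterial:
    2**n states evenly spanning [0, 2*pi)."""
    n = 2 ** n_bits
    return [2.0 * math.pi * k / n for k in range(n)]

def anomalous_refraction_angle(wavelength, element_width, n_bits):
    """Refraction angle (degrees) for a gradient coding sequence: one full
    2*pi phase ramp per period of 2**n_bits elements, so the generalized
    Snell's law gives sin(theta_t) = wavelength / period."""
    period = element_width * 2 ** n_bits
    return math.degrees(math.asin(wavelength / period))
```

For example, 2-bit elements 0.05 m wide give a 0.2 m coding period, steering a 0.1 m wavelength (about 3.4 kHz in air) to 30 degrees.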

  14. Dual Coding Theory and Computer Education: Some Media Experiments To Examine the Effects of Different Media on Learning.

    ERIC Educational Resources Information Center

    Alty, James L.

    Dual Coding Theory has quite specific predictions about how information in different media is stored, manipulated and recalled. Different combinations of media are expected to have significant effects upon the recall and retention of information. This obviously may have important consequences in the design of computer-based programs. The paper…

  15. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, involving time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
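An observed order-of-accuracy test of the kind mentioned above is typically computed from solutions on three systematically refined grids. A minimal sketch of the standard Richardson-based estimate (the variable names are generic, not the paper's notation):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio):
    """Observed order of accuracy p from a scalar solution quantity on three
    grids with constant refinement ratio r:
        p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r)
    If p matches the scheme's design order, the solutions are in the
    asymptotic range; for flows with unsteady shocks and shedding this
    often fails, as the paper reports."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) \
        / math.log(refinement_ratio)
```

For a manufactured second-order error (error proportional to h squared, r = 2) the estimate recovers p = 2.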

  16. Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold

    1997-01-01

The computer codes AERO2S and WINGDES are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to meet that need.

  17. Intrasystem Analysis Program (IAP) code summaries

    NASA Astrophysics Data System (ADS)

    Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.

    1983-05-01

This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The WIRE programs apply multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.

  18. Mean Flow and Noise Prediction for a Separate Flow Jet With Chevron Mixers

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James; Khavaran, Abbas

    2004-01-01

    Experimental and numerical results are presented here for a separate flow nozzle employing chevrons arranged in an alternating pattern on the core nozzle. Comparisons of these results demonstrate that the combination of the WIND/MGBK suite of codes can predict the noise reduction trends measured between separate flow jets with and without chevrons on the core nozzle. Mean flow predictions were validated against Particle Image Velocimetry (PIV), pressure, and temperature data, and noise predictions were validated against acoustic measurements recorded in the NASA Glenn Aeroacoustic Propulsion Lab. Comparisons are also made to results from the CRAFT code. The work presented here is part of an on-going assessment of the WIND/MGBK suite for use in designing the next generation of quiet nozzles for turbofan engines.

  19. ICAN: A versatile code for predicting composite properties

    NASA Technical Reports Server (NTRS)

    Ginty, C. A.; Chamis, C. C.

    1986-01-01

    The Integrated Composites ANalyzer (ICAN), a stand-alone computer code, incorporates micromechanics equations and laminate theory to analyze/design multilayered fiber composite structures. Procedures for both the implementation of new data in ICAN and the selection of appropriate measured data are summarized for: (1) composite systems subject to severe thermal environments; (2) woven fabric/cloth composites; and (3) the selection of new composite systems including those made from high strain-to-fracture fibers. The comparisons demonstrate the versatility of ICAN as a reliable method for determining composite properties suitable for preliminary design.
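ICAN's micromechanics equations are more elaborate than this, but the simplest ply-level inputs to laminate theory can be sketched with the textbook rule of mixtures (an illustrative approximation, not ICAN's actual model):

```python
def longitudinal_modulus(e_fiber, e_matrix, vf):
    """Rule of mixtures (Voigt bound) for the fiber-direction modulus E11
    of a unidirectional ply with fiber volume fraction vf."""
    return vf * e_fiber + (1.0 - vf) * e_matrix

def transverse_modulus(e_fiber, e_matrix, vf):
    """Inverse rule of mixtures (Reuss bound) for the transverse modulus E22."""
    return 1.0 / (vf / e_fiber + (1.0 - vf) / e_matrix)
```

For a typical carbon/epoxy system (fiber 230 GPa, matrix 3.5 GPa, vf = 0.6) this gives E11 of about 139 GPa and E22 under 10 GPa, the strong anisotropy that laminate theory then propagates to the multilayered structure.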

  20. Transport and stability analyses supporting disruption prediction in high beta KSTAR plasmas

    NASA Astrophysics Data System (ADS)

    Ahn, J.-H.; Sabbagh, S. A.; Park, Y. S.; Berkery, J. W.; Jiang, Y.; Riquezes, J.; Lee, H. H.; Terzolo, L.; Scott, S. D.; Wang, Z.; Glasser, A. H.

    2017-10-01

    KSTAR plasmas have reached high stability parameters in dedicated experiments, with normalized beta βN exceeding 4.3 at relatively low plasma internal inductance li (βN/li>6). Transport and stability analyses have begun on these plasmas to best understand a disruption-free path toward the design target of βN = 5 while aiming to maximize the non-inductive fraction of these plasmas. Initial analysis using the TRANSP code indicates that the non-inductive current fraction in these plasmas has exceeded 50 percent. The advent of KSTAR kinetic equilibrium reconstructions now allows more accurate computation of the MHD stability of these plasmas. Attention is placed on code validation of mode stability using the PEST-3 and resistive DCON codes. Initial evaluation of these analyses for disruption prediction is made using the disruption event characterization and forecasting (DECAF) code. The present global mode kinetic stability model in DECAF developed for low aspect ratio plasmas is evaluated to determine modifications required for successful disruption prediction of KSTAR plasmas. Work supported by U.S. DoE under contract DE-SC0016614.

  1. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

Reproducing natural, real scenes as we see them in the everyday world is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the lifting scheme (LS). The novelty of our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction. The proposed scheme can be used for both lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.

  2. Implementation of algebraic stress models in a general 3-D Navier-Stokes method (PAB3D)

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    1995-01-01

A three-dimensional multiblock Navier-Stokes code, PAB3D, which was developed for propulsion integration and general aerodynamic analysis, has been used extensively by NASA Langley and other organizations to perform both internal (exhaust) and external flow analysis of complex aircraft configurations. This code was designed to solve the simplified Reynolds Averaged Navier-Stokes equations. A two-equation k-epsilon turbulence model has been used with considerable success, especially for attached flows. Accurately predicting transonic shock wave location and pressure recovery in separated flow regions has been more difficult. Two algebraic Reynolds stress models (ASM) have recently been implemented in the code that greatly improve the code's ability to predict these difficult flow conditions. Good agreement with Direct Numerical Simulation (DNS) for a subsonic flat plate was achieved with ASMs developed by Shih, Zhu, and Lumley and by Gatski and Speziale. Good predictions were also achieved at subsonic and transonic Mach numbers for shock location and trailing edge boattail pressure recovery on a single-engine afterbody/nozzle model.

  3. Summary of Data from the First AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Levy, David W.; Zickuhr, Tom; Vassberg, John; Agrawal, Shreekant; Wahls, Richard A.; Pirzadeh, Shahyar; Hemsch, Michael J.

    2002-01-01

    The results from the first AIAA CFD Drag Prediction Workshop are summarized. The workshop was designed specifically to assess the state-of-the-art of computational fluid dynamics methods for force and moment prediction. An impartial forum was provided to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify areas needing additional research and development. The subject of the study was the DLR-F4 wing-body configuration, which is representative of transport aircraft designed for transonic flight. Specific test cases were required so that valid comparisons could be made. Optional test cases included constant-C(sub L) drag-rise predictions typically used in airplane design by industry. Results are compared to experimental data from three wind tunnel tests. A total of 18 international participants using 14 different codes submitted data to the workshop. No particular grid type or turbulence model was more accurate, when compared to each other, or to wind tunnel data. Most of the results overpredicted C(sub Lo) and C(sub Do), but induced drag (dC(sub D)/dC(sub L)(exp 2)) agreed fairly well. Drag rise at high Mach number was underpredicted, however, especially at high C(sub L). On average, the drag data were fairly accurate, but the scatter was greater than desired. The results show that well-validated Reynolds-Averaged Navier-Stokes CFD methods are sufficiently accurate to make design decisions based on predicted drag.
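The induced-drag factor dC(sub D)/dC(sub L)(exp 2) compared across codes in the workshop can be extracted from polar data by a least-squares fit of the parabolic drag polar CD = CD0 + K*CL^2. A minimal sketch using plain least squares (the data in the usage note are synthetic, not workshop values):

```python
def fit_drag_polar(cl, cd):
    """Least-squares fit of CD = CD0 + K * CL**2 to polar data.
    Linear regression of CD against CL**2: the slope K is the
    induced-drag factor, the intercept CD0 the zero-lift drag."""
    x = [c * c for c in cl]                 # regress on CL^2
    n = len(x)
    sx, sy = sum(x), sum(cd)
    sxx = sum(v * v for v in x)
    sxy = sum(v * w for v, w in zip(x, cd))
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    cd0 = (sy - k * sx) / n
    return cd0, k
```

Fitting each participant's computed polar and the wind tunnel polar this way gives directly comparable CD0 and K values, which is how statements like "induced drag agreed fairly well" can be quantified.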

  4. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation, and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also speed the acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the added computational cost of quantifying the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.

  5. Comparison of High-Fidelity Computational Tools for Wing Design of a Distributed Electric Propulsion Aircraft

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.

    2017-01-01

A variety of tools, from fundamental to high order, have been used to better understand applications of distributed electric propulsion to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project with results presented here are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the high-lift wing tested on the Hybrid-Electric Integrated Systems Testbed truck and for the X-57 high-lift wing compare reasonably well. The goal of the X-57 wing and distributed electric propulsion system design, achieving or exceeding the required C(sub L) = 3.95 at stall speed, was confirmed with all of the computational codes.

  6. Computational/Experimental Aeroheating Predictions for X-33. Phase 2; Vehicle

    NASA Technical Reports Server (NTRS)

    Hamilton, H. Harris, II; Weilmuenster, K. James; Horvath, Thomas J.; Berry, Scott A.

    1998-01-01

    Laminar and turbulent heating-rate calculations from an "engineering" code and laminar calculations from a "benchmark" Navier-Stokes code are compared with experimental wind-tunnel data obtained on several candidate configurations for the X-33 Phase 2 flight vehicle. The experimental data were obtained at a Mach number of 6 and a freestream Reynolds number ranging from 1 to 8 x 10(exp 6)/ft. Comparisons are presented along the windward symmetry plane and in a circumferential direction around the body at several axial stations at angles of attack from 20 to 40 deg. The experimental results include both laminar and turbulent flow. For the highest angle of attack some of the measured heating data exhibited a "non-laminar" behavior which caused the heating to increase above the laminar level long before "classical" transition to turbulent flow was observed. This trend was not observed at the lower angles of attack. When the flow was laminar, both codes predicted the heating along the windward symmetry plane reasonably well but under-predicted the heating in the chine region. When the flow was turbulent the LATCH code accurately predicted the measured heating rates. Both codes were used to calculate heating rates over the X-33 vehicle at the peak heating point on the design trajectory and they were found to be in very good agreement over most of the vehicle windward surface.

  7. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock experiments, Kelvin-Helmholtz experiments, Rayleigh-Taylor experiments, plasma sheets, and interacting jets. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  8. A Step Made Toward Designing Microelectromechanical System (MEMS) Structures With High Reliability

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    2003-01-01

The mechanical design of microelectromechanical systems, particularly for micropower generation applications, requires the ability to predict the strength capacity of load-carrying components over the service life of the device. These microdevices, which typically are made of brittle materials such as polysilicon, show wide scatter (stochastic behavior) in strength as well as a different average strength for different sized structures (size effect). These behaviors necessitate either costly and time-consuming trial-and-error designs or, more efficiently, the development of a probabilistic design methodology for MEMS. Over the years, the NASA Glenn Research Center's Life Prediction Branch has developed the CARES/Life probabilistic design methodology to predict the reliability of advanced ceramic components. In this study, done in collaboration with Johns Hopkins University, the ability of the CARES/Life code to predict the reliability of polysilicon microsized structures with stress concentrations is successfully demonstrated.
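The strength scatter and size effect described above are commonly modeled with two-parameter Weibull statistics, which underlie CARES/Life. A minimal sketch for a uniformly stressed volume; the uniform-stress assumption is a simplification of the code's actual stress-volume integration:

```python
import math

def failure_probability(stress, sigma0, m, volume=1.0):
    """Two-parameter Weibull failure probability for a uniformly stressed
    volume V (sigma0: characteristic strength of unit volume; m: Weibull
    modulus, smaller m = wider scatter):
        Pf = 1 - exp(-V * (stress / sigma0)**m)"""
    return 1.0 - math.exp(-volume * (stress / sigma0) ** m)

def scaled_characteristic_strength(sigma0, m, v_ref, v_new):
    """Weibull size effect: larger stressed volumes are statistically weaker,
    sigma_new = sigma0 * (v_ref / v_new)**(1/m)."""
    return sigma0 * (v_ref / v_new) ** (1.0 / m)
```

The second function captures why a microsized polysilicon structure can show a different average strength than a larger specimen of the same material.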

  9. Advanced Subsonic Technology (AST) 22-Inch Low Noise Research Fan Rig Preliminary Design of ADP-Type Fan 3

    NASA Technical Reports Server (NTRS)

    Jeracki, Robert J. (Technical Monitor); Topol, David A.; Ingram, Clint L.; Larkin, Michael J.; Roche, Charles H.; Thulin, Robert D.

    2004-01-01

This report presents results of the work completed on the preliminary design of Fan 3 of NASA's 22-inch Fan Low Noise Research project. Fan 3 was intended to build on the experience gained from Fans 1 and 2 by demonstrating noise reduction technology that surpasses 1992 levels by 6 dB. The work was performed as part of NASA's Advanced Subsonic Technology (AST) program. Work on this task was conducted in the areas of CFD code validation, acoustic prediction and validation, rotor parametric studies, and fan exit guide vane (FEGV) studies up to the time when a NASA decision was made to cancel the design, fabrication, and testing phases of the work. The scope of the program changed accordingly to concentrate on two subtasks: (1) rig data analysis and CFD code validation and (2) fan and FEGV optimization studies. The results of the CFD code validation work showed that this tool predicts 3D flowfield features well from the blade trailing edge to about a chord downstream. The CFD tool loses accuracy as the distance from the trailing edge increases beyond a blade chord. The comparisons of noise predictions to rig test data showed that both the tone noise tool and the broadband noise tool demonstrated reasonable agreement with the data, to the degree that these tools can reliably be used for design work. The section on rig airflow and inlet separation analysis describes the method used to determine total fan airflow, shows the good agreement of predicted boundary layer profiles with measured profiles, and shows separation angles of attack ranging from 29.5 to 27 deg for the range of airflows tested. The results of the rotor parametric studies were significant in leading to the decision not to pursue a new rotor design for Fan 3 and resulted in recommendations to concentrate efforts on FEGV stator designs. The ensuing parametric study on FEGV designs showed the potential for 8 to 10 EPNdB noise reduction relative to the baseline.

  10. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are contributing to make such an ambitious project, of including a state-of-the-art flow analysis code into an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation, and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
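The automated-differentiation idea can be illustrated with forward-mode dual numbers: propagate a value and its derivative together through the arithmetic of the program. This shows the principle only, not the source-transformation tool used in the paper:

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    each value carries (val, dot) where dot is the derivative with respect
    to the chosen input. Only + and * are overloaded here."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (u*v)' = u'*v + u*v'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of f at x (to machine precision), no finite differences."""
    return f(Dual(x, 1.0)).dot
```

Source-transformation tools of the kind the paper describes achieve the same effect by generating a companion "sensitivity code", which scales to a full flow solver where operator overloading may not.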

  11. The One-Dimensional Cryogenic Implosion Campaign on OMEGA: Modeling, Experiments, and a Statistical Approach to Predict and Understand Direct-Drive Implosions

    NASA Astrophysics Data System (ADS)

    Betti, R.

    2017-10-01

    The 1-D campaign on OMEGA is aimed at validating a novel approach to designing cryogenic implosion experiments and providing valuable data to improve the accuracy of 1-D physics models. This new design methodology is being tested first on low-convergence, high-adiabat (α = 6 to 7) implosions and will subsequently be applied to implosions with increasing convergence up to the level required for a hydro-equivalent demonstration of ignition. This design procedure assumes that the hydrodynamic codes used in implosion designs lack the necessary physics and that measurements of implosion properties are imperfect. It also assumes that while the measurements may have significant systematic errors, the shot-to-shot variations are small and that cryogenic implosion data are reproducible, as observed on OMEGA. One of the goals of the 1-D campaign is to find a mapping of the data to the code results and use the mapping relations to design future implosions. In the 1-D campaign, this predictive methodology was used to design eight implosions using a simple two-shock pulse design, leading to pre-shot predictions of yields within 5% and ion temperatures within 4% of the experimental values. These implosions have also produced the highest neutron yield of 10^14 in OMEGA cryogenic implosion experiments, with an areal density of 100 mg/cm^2. Furthermore, the results from this campaign have been used to test the validity of the 1-D physics models used in the radiation-hydrodynamics codes. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944 and LLNL under Contract DE-AC52-07NA27344. * In collaboration with J.P. Knauer, V. Gopalaswamy, D. Patel, K.M. Woo, K.S. Anderson, A. Bose, A.R. Christopherson, V.Yu. Glebov, F.J. Marshall, S.P. Regan, P.B. Radha, C. Stoeckl, and E.M. Campbell.
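The "mapping of the data to the code results" can be sketched in its simplest form: assume the measured yield relates to the simulated 1-D yield by a power law, Y_exp ≈ a·Y_sim^b, fit it on past shots by least squares in log space, and then use the fit to correct the code's pre-shot prediction. The shot data below are invented for illustration; the actual campaign's mapping relations are more elaborate.

```python
import math

# (simulated yield, measured yield) pairs from hypothetical past shots
shots = [(2.0e13, 1.1e13), (4.0e13, 2.0e13), (8.0e13, 3.6e13), (1.6e14, 6.5e13)]

# Linear least squares in log space: log Y_exp = log a + b * log Y_sim
xs = [math.log(s) for s, _ in shots]
ys = [math.log(m) for _, m in shots]
n = len(shots)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

# Map a new simulation result to a pre-shot prediction of the measured yield
y_pred = a * (1.0e14) ** b
print(b, y_pred)
```

The design loop then inverts this: pick pulse shapes whose *mapped* prediction, not the raw code output, maximizes the expected yield.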

  12. Extension of a nonlinear systems theory to general-frequency unsteady transonic aerodynamic responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1993-01-01

    A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels are then used to predict linear and nonlinear unsteady aerodynamic responses via convolution and are compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly, but at significant computational cost savings.
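The first-order (linear) Volterra term works exactly as the abstract claims: once the unit-impulse response of a linear system is identified, the response to any input is its convolution with that input, so "running the code" is replaced by a cheap convolution. A minimal discrete-time sketch, with a toy first-order lag standing in for the aerodynamic solver:

```python
import numpy as np

def system(u, a=0.6):
    """Toy linear system y[n] = a*y[n-1] + u[n] (stand-in for the solver)."""
    y = np.zeros_like(u, dtype=float)
    for n in range(len(u)):
        y[n] = a * y[n - 1] + u[n] if n else u[n]
    return y

N = 50
impulse = np.zeros(N); impulse[0] = 1.0
h1 = system(impulse)                      # identified first-order kernel

u = np.sin(0.3 * np.arange(N))            # arbitrary input
y_direct = system(u)                      # "running the code directly"
y_conv = np.convolve(u, h1)[:N]           # prediction via convolution

print(np.max(np.abs(y_direct - y_conv)))  # agrees to machine precision
```

For the nonlinear terms the same idea uses second-order kernels and a double convolution, which is why agreement there is only approximate.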

  13. Validation of Framework Code Approach to a Life Prediction System for Fiber Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Gravett, Phillip

    1997-01-01

    The grant was conducted by the MMC Life Prediction Cooperative, an industry/government collaborative team; the Ohio Aerospace Institute (OAI) acted as the prime contractor on behalf of the Cooperative for this grant effort. See Figure I for the organization and responsibilities of team members. The technical effort was conducted during the period August 7, 1995 to June 30, 1996 in cooperation with Erwin Zaretsky, the LeRC Program Monitor. Phil Gravett of Pratt & Whitney was the principal technical investigator. Table I documents all meeting-related coordination memos during this period. The effort under this grant was closely coordinated with an existing USAF-sponsored program focused on putting into practice a life prediction system for turbine engine components made of metal matrix composites (MMC). The overall architecture of the MMC life prediction system was defined in the USAF-sponsored program (prior to this grant). The efforts of this grant were focused on implementing and tailoring the life prediction system, the framework code within it, and the damage modules within it to meet the specific requirements of the Cooperative. The tailoring of the life prediction system provides the basis for pervasive and continued use of this capability by the industry/government cooperative. The outputs of this grant are: 1. Definition of the framework code to analysis modules interfaces, 2. Definition of the interface between the materials database and the finite element model, and 3. Definition of the integration of the framework code into an FEM design tool.

  14. Improved Helicopter Rotor Performance Prediction through Loose and Tight CFD/CSD Coupling

    NASA Astrophysics Data System (ADS)

    Ickes, Jacob C.

    Helicopters and other Vertical Take-Off and Landing (VTOL) vehicles exhibit an interesting combination of structural dynamic and aerodynamic phenomena which together drive the rotor performance. The combination of factors involved makes simulating the rotor a challenging and multidisciplinary effort, and one which is still an active area of interest in the industry because of the money and time it could save during design. Modern tools allow the prediction of rotorcraft physics from first principles. Analysis of the rotor system with this level of accuracy provides the understanding necessary to improve its performance. There has historically been a divide between the comprehensive codes, which perform aeroelastic rotor simulations using simplified aerodynamic models, and the very computationally intensive Navier-Stokes Computational Fluid Dynamics (CFD) solvers. As computer resources become more available, efforts have been made to replace the simplified aerodynamics of the comprehensive codes with the more accurate results from a CFD code. The objective of this work is to perform aeroelastic rotorcraft analysis using first-principles simulations for both fluid and structural predictions using tools available at the University of Toledo. Two separate codes are coupled together in both loose coupling (data exchange on a periodic interval) and tight coupling (data exchange each time step) schemes. To allow the coupling to be carried out in a reliable and efficient way, a Fluid-Structure Interaction code was developed which automatically performs the primary functions of the loose and tight coupling procedures. Flow phenomena such as transonics, dynamic stall, locally reversed flow on a blade, and Blade-Vortex Interaction (BVI) were simulated in this work. Results of the analysis show aerodynamic load improvement due to the inclusion of the CFD-based airloads in the structural dynamics analysis of the Computational Structural Dynamics (CSD) code.
Improvements came in the form of improved peak/trough magnitude prediction, better phase prediction of these locations, and a predicted signal with a frequency content more like the flight test data than the CSD code acting alone. Additionally, a tight coupling analysis was performed as a demonstration of the capability and unique aspects of such an analysis. This work shows that away from the center of the flight envelope, the aerodynamic modeling of the CSD code can be replaced with a more accurate set of predictions from a CFD code with an improvement in the aerodynamic results. The better predictions come at substantially increased computational costs between 1,000 and 10,000 processor-hours.
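The distinction between the two exchange strategies described above can be reduced to a schematic loop structure: tight coupling trades data every time step, while loose coupling freezes the airloads over a rotor period and exchanges once per period. The "solvers" below are trivial placeholder relaxation updates invented for illustration; only the exchange pattern reflects the CFD/CSD procedures discussed.

```python
def cfd_step(deflection):
    """Placeholder: airload responds to the current blade deflection."""
    return 1.0 + 0.5 * deflection

def csd_step(load, deflection):
    """Placeholder: deflection relaxes toward a load-dependent value."""
    return deflection + 0.2 * (0.1 * load - deflection)

def tight_coupling(steps):
    load, defl = 1.0, 0.0
    for _ in range(steps):            # exchange every time step
        load = cfd_step(defl)
        defl = csd_step(load, defl)
    return defl

def loose_coupling(revs, steps_per_rev):
    load, defl = 1.0, 0.0
    for _ in range(revs):             # exchange once per rotor period
        load = cfd_step(defl)         # airloads frozen over the period
        for _ in range(steps_per_rev):
            defl = csd_step(load, defl)
    return defl

print(tight_coupling(200), loose_coupling(20, 10))
```

For this contrived linear problem both schemes converge to the same coupled state; the real trade-off is that tight coupling captures transient interaction within a revolution at higher exchange cost.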

  15. Handbook of Analytical Methods for Textile Composites

    NASA Technical Reports Server (NTRS)

    Cox, Brian N.; Flanagan, Gerry

    1997-01-01

    The purpose of this handbook is to introduce models and computer codes for predicting the properties of textile composites. The handbook includes several models for predicting the stress-strain response all the way to ultimate failure; methods for assessing work of fracture and notch sensitivity; and design rules for avoiding certain critical mechanisms of failure, such as delamination, by proper textile design. The following textiles received some treatment: 2D woven, braided, and knitted/stitched laminates and 3D interlock weaves, and braids.

  16. XPATCH: a high-frequency electromagnetic scattering prediction code using shooting and bouncing rays

    NASA Astrophysics Data System (ADS)

    Hazlett, Michael; Andersh, Dennis J.; Lee, Shung W.; Ling, Hao; Yu, C. L.

    1995-06-01

    This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time domain signatures, and synthetic aperture radar (SAR) images of realistic 3-D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, curved surfaces, or solid geometries. The computer code, XPATCH, based on the shooting and bouncing ray technique, is used to calculate the polarimetric radar return from the vehicles represented by these different CAD files. XPATCH computes the first-bounce physical optics plus the physical theory of diffraction contributions and the multi-bounce ray contributions for complex vehicles with materials. It has been found that the multi-bounce contributions are crucial for many aspect angles of all classes of vehicles. Without the multi-bounce calculations, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and radar cross sections (RCS) for several different geometries are compared with measured data to demonstrate the quality of the predictions. The comparisons are from the UHF through the Ka frequency ranges. Recent enhancements to XPATCH for MMW applications and target Doppler predictions are also presented.
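The geometric core of a shooting-and-bouncing-rays method is specular reflection of each ray off the CAD facets, d' = d - 2(d·n)n. The fragment below shows only that kernel as a sketch; a production SBR code such as XPATCH also tracks phase, ray-tube divergence, and material properties along each bounce, none of which is modeled here.

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of unit direction d off a facet with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # incoming ray, heading down
n = np.array([0.0, 1.0, 0.0])                   # facet normal, pointing up
print(reflect(d, n))                            # outgoing ray, heading up
```

Repeating this at each facet hit until the ray exits the geometry yields the multi-bounce contributions that the paper found to be worth 10 to 15 dB.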

  17. A CFD analysis of blade row interactions within a high-speed axial compressor

    NASA Astrophysics Data System (ADS)

    Richman, Michael Scott

    Aircraft engine design presents many technical and financial hurdles. In an effort to streamline the design process, save money, and improve reliability and performance, many manufacturers are relying on computational fluid dynamic simulations. An overarching goal of the design process for military aircraft engines is to reduce size and weight while maintaining (or improving) reliability. Designers often turn to the compression system to accomplish this goal. As pressure ratios increase and the number of compression stages decreases, many problems arise; for example, stability and high cycle fatigue (HCF) become significant as individual stage loading is increased. CFD simulations have recently been employed to assist in the understanding of these aeroelastic problems. For accurate multistage blade row HCF prediction, it is imperative that advanced three-dimensional blade row unsteady aerodynamic interaction codes be validated with appropriate benchmark data. This research addresses this required validation process for TURBO, an advanced three-dimensional multi-blade-row turbomachinery CFD code. The solution/prediction accuracy is characterized, identifying key flow field parameters driving the inlet guide vane (IGV) and stator response to the rotor-generated forcing functions. The result is a quantified evaluation of the ability of TURBO to predict not only the fundamental flow field characteristics but also the three-dimensional blade loading.

  18. Particle Impact Erosion. Volume 4. User’s Manual Erosion Prediction Procedure for Rocket Nozzle Expansion Region

    DTIC Science & Technology

    1983-05-01

    empirical erosion model, with use of the debris-layer model optional. 1.1 INTERFACE WITH ISPP: ISPP is a collection of computer codes designed to calculate...expansion with the ODK code, 4. A two-dimensional, two-phase nozzle expansion with the TD2P code, 5. A turbulent boundary layer solution along the... [Input flowchart residue: read thermodynamic data for temperatures below 300 K if needed; read the SSP namelist covering the ODE, BAL, ODK, TD2P, and TEL codes and the nozzle geometry.]

  19. Airfoil Vibration Dampers program

    NASA Technical Reports Server (NTRS)

    Cook, Robert M.

    1991-01-01

    The Airfoil Vibration Damper program has consisted of an analysis phase and a testing phase. During the analysis phase, a state-of-the-art computer code was developed, which can be used to guide designers in the placement and sizing of friction dampers. The use of this computer code was demonstrated by performing representative analyses on turbine blades from the High Pressure Oxidizer Turbopump (HPOTP) and High Pressure Fuel Turbopump (HPFTP) of the Space Shuttle Main Engine (SSME). The testing phase of the program consisted of performing friction damping tests on two different cantilever beams. Data from these tests provided an empirical check on the accuracy of the computer code developed in the analysis phase. Results of the analysis and testing showed that the computer code can accurately predict the performance of friction dampers. In addition, a valuable set of friction damping data was generated, which can be used to aid in the design of friction dampers, as well as provide benchmark test cases for future code developers.

  20. Electromagnetic code for naval applications

    NASA Astrophysics Data System (ADS)

    Crescimbeni, F.; Bessi, F.; Chiti, S.

    1988-12-01

    The use of an increasing number of electronic apparatus has become vital to meeting the high performance required for military Navy applications. Thus the number of antennas to be mounted on shipboard has greatly increased. As a consequence of the high antenna density, of the complexity of the shipboard environment, and of the powers used for communication and radar systems, the EMC (Electro-Magnetic Compatibility) problem plays a leading role in the design of the topside of a ship. The Italian Navy has acquired a numerical code for antenna siting and design. This code, together with experimental data measured at the Italian Navy test range facility, allows for the evaluation of optimal sitings for antenna systems on shipboard and the prediction of their performances in the actual environment. The structure of this code, named Programma Elettromagnetico per Applicazioni Navali (Electromagnetic Code for Naval Applications), is discussed, together with its capabilities and applications. Also, the results obtained in some examples are presented and compared with measurements.

  1. Measured and Predicted Radiation-Induced Currents in Semirigid Coaxial Cables.

    DTIC Science & Technology

    1977-10-11

    plasma focus device. Semirigid cables of different size, material, and impedance were tested. Minute gaps and conductor flashings were found to be dominant factors affecting cable response. Response predictions provided by the MCCABE computer code closely correlated with the experimental measurements. Design of low-response semirigid cables matching the metal and dielectric electron emission is discussed.

  2. Parametric Investigation of a High-Lift Airfoil at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Lin, John C.; Dominik, Chet J.

    1997-01-01

    A new two-dimensional, three-element, advanced high-lift research airfoil has been tested in the NASA Langley Research Center's Low-Turbulence Pressure Tunnel at a chord Reynolds number up to 1.6 x 10^7. The components of this high-lift airfoil were designed using an incompressible computational code (INS2D). The design was to provide high maximum-lift values while maintaining attached flow on the single-segment flap at landing conditions. The performance of the new NASA research airfoil is compared to a similar reference high-lift airfoil. On the new high-lift airfoil, the effects of Reynolds number on slat and flap rigging have been studied experimentally, as well as the Mach number effects. The performance trend of the high-lift design is comparable to that predicted by INS2D over much of the angle-of-attack range. However, the code did not accurately predict the airfoil performance or the configuration-based trends near maximum lift, where the compressibility effect could play a major role.

  3. Design and Performance Calculations of a Propeller for Very High Altitude Flight. Degree awarded by Case Western Univ.

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle

    1998-01-01

    Reported here is a design study of a propeller for a vehicle capable of subsonic flight in Earth's stratosphere. All propellers presented were required to absorb 63.4 kW (85 hp) at 25.9 km (85,000 ft) while aircraft cruise velocity was maintained at Mach 0.40. To produce the final design, classic momentum and blade-element theories were combined with two- and three-dimensional results from the Advanced Ducted Propfan Analysis Code (ADPAC), a numerical Navier-Stokes analysis code. The Eppler 387 airfoil was used for each of the constant-section propeller designs compared. Experimental data from the Langley Low-Turbulence Pressure Tunnel were used in the strip theory design and analysis programs written. The experimental data were also used to validate ADPAC at a Reynolds number of 60,000 and a Mach number of 0.20. Experimental and calculated surface pressure coefficients are compared for a range of angles of attack. Since low Reynolds number transonic experimental data were unavailable, ADPAC was used to generate two-dimensional section performance predictions for Reynolds numbers of 60,000 and 100,000 and Mach numbers ranging from 0.45 to 0.75. Surface pressure coefficients are presented for selected angles of attack, in addition to the variation of lift and drag coefficients at each flow condition. A three-dimensional model of the final design was made, which ADPAC used to calculate propeller performance. ADPAC performance predictions were compared with strip-theory calculations at the design point. Propeller efficiency predicted by ADPAC was within 1.5% of that calculated by strip theory methods, although ADPAC predictions of thrust, power, and torque coefficients were approximately 5% lower than the strip theory results. Simplifying assumptions made in the strip theory account for the differences seen.
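The combined momentum and blade-element theory mentioned above reduces, at each radial station, to a small fixed-point problem: guess an axial induction factor, evaluate the blade-element thrust, equate it to the momentum-theory thrust for the same annulus, and iterate. The sketch below shows that loop for a single annulus with invented geometry and a thin-airfoil lift slope; it is not the report's strip-theory program, which loops over many stations with tabulated Eppler 387 polars.

```python
import math

V, omega, r = 30.0, 150.0, 0.6        # freestream m/s, rotation rad/s, station radius m
B, chord = 2, 0.08                    # blade count, chord m
beta = math.radians(25.0)             # local blade pitch angle
rho = 0.4                             # low-density high-altitude air, kg/m^3

a = 0.0                               # axial induction factor, initial guess
for _ in range(100):
    phi = math.atan2(V * (1.0 + a), omega * r)        # inflow angle
    alpha = beta - phi                                # angle of attack
    cl = 2.0 * math.pi * alpha                        # thin-airfoil lift slope
    W = math.hypot(V * (1.0 + a), omega * r)          # local relative speed
    dT_be = 0.5 * rho * W**2 * B * chord * cl * math.cos(phi)   # blade element
    # Momentum theory for the same annulus (per unit radius):
    # dT_mom = 4*pi*r*rho*V^2*(1+a)*a  ->  solve for a and under-relax
    a_new = dT_be / (4.0 * math.pi * r * rho * V**2 * (1.0 + a))
    a = a + 0.5 * (a_new - a)

print(a, math.degrees(alpha))
```

Under-relaxation keeps the fixed-point iteration stable; real strip-theory codes add tip-loss corrections and a matching swirl (tangential induction) equation.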

  4. Novel inter and intra prediction tools under consideration for the emerging AV1 video codec

    NASA Astrophysics Data System (ADS)

    Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil

    2017-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization, and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter, and combined inter-intra prediction. Results are presented on standard test sets.

  5. A Computational Model for Predicting Gas Breakdown

    NASA Astrophysics Data System (ADS)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with respect to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
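A loose sketch of the single-electron picture behind such a model: integrate an electron's motion in an oscillating field and watch its kinetic energy against an ionization threshold as a proxy for the onset of breakdown. Every value and modeling choice below (field amplitude, frequency, argon threshold, explicit Euler stepping) is an invented illustration, not the Mathematica model described in the abstract.

```python
import math

QE, ME = 1.602e-19, 9.109e-31       # electron charge (C) and mass (kg)
E0, freq = 5.0e4, 1.0e6             # assumed field amplitude V/m, drive frequency Hz
E_ION = 15.8 * QE                   # argon ionization energy in J, as a threshold

v, t, dt = 0.0, 0.0, 1.0e-10
while 0.5 * ME * v * v < E_ION and t < 5.0e-6:
    # Force on the electron from the oscillating field (1-D, nonrelativistic)
    acc = -QE * E0 * math.sin(2.0 * math.pi * freq * t) / ME
    v += acc * dt                   # explicit Euler step in velocity
    t += dt

print(t)    # first time the electron's kinetic energy reaches the threshold
```

Scanning such a calculation over spatial locations (where the field amplitude varies) is what turns the time estimate into a time-and-location breakdown prediction.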

  6. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at 0.001 and 0.01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in Space Shuttle orbiters.
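The idea underlying both techniques compared above is predictive coding: transmit a quantized prediction residual instead of the sample itself, with the encoder tracking the decoder's state so errors cannot accumulate. The bare-bones DPCM sketch below shows only that core; it omits the predictor adaptation and the delayed-decision (tree) search that the report actually evaluates.

```python
import math

def encode(samples, step=0.1):
    pred, codes = 0.0, []
    for s in samples:
        residual = s - pred                 # prediction error
        q = round(residual / step)          # uniform quantizer index
        codes.append(q)
        pred = pred + q * step              # track the *decoder's* state
    return codes

def decode(codes, step=0.1):
    pred, out = 0.0, []
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out

x = [math.sin(0.2 * n) for n in range(40)]
y = decode(encode(x))
print(max(abs(p - q) for p, q in zip(x, y)))  # bounded by step/2
```

Delayed-decision coding improves on this greedy per-sample quantization by searching a short tree of future quantizer choices before committing each code.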

  7. Wind turbine design codes: A comparison of the structural response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buhl, M.L. Jr.; Wright, A.D.; Pierce, K.G.

    2000-03-01

    The National Wind Technology Center (NWTC) of the National Renewable Energy Laboratory is continuing a comparison of several computer codes used in the design and analysis of wind turbines. The second part of this comparison determined how well the programs predict the structural response of wind turbines. In this paper, the authors compare the structural response for four programs: ADAMS, BLADED, FAST_AD, and YawDyn. ADAMS is a commercial, multibody-dynamics code from Mechanical Dynamics, Inc. BLADED is a commercial, performance and structural-response code from Garrad Hassan and Partners Limited. FAST_AD is a structural-response code developed by Oregon State University and the University of Utah for the NWTC. YawDyn is a structural-response code developed by the University of Utah for the NWTC. ADAMS, FAST_AD, and YawDyn use the University of Utah's AeroDyn subroutine package for calculating aerodynamic forces. Although errors were found in all the codes during this study, once they were fixed, the codes agreed surprisingly well for most of the cases and configurations that were evaluated. One unresolved discrepancy between BLADED and the AeroDyn-based codes arose when there was blade and/or teeter motion in addition to a large yaw error.

  8. Application of a multi-block CFD code to investigate the impact of geometry modeling on centrifugal compressor flow field predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hathaway, M.D.; Wood, J.R.

    1997-10-01

    CFD codes capable of utilizing multi-block grids provide the capability to analyze the complete geometry of centrifugal compressors. Attendant with this increased capability is potentially increased grid setup time and more computational overhead, with the resultant increase in wall clock time to obtain a solution. If the increase in difficulty of obtaining a solution significantly improves the solution over that obtained by modeling the features of the tip clearance flow or the typical bluntness of a centrifugal compressor's trailing edge, then the additional burden is worthwhile. However, if the additional information obtained is of marginal use, then modeling of certain features of the geometry may provide reasonable solutions for designers to make comparative choices when pursuing a new design. In this spirit, a sequence of grids was generated to study the relative importance of modeling versus detailed gridding of the tip gap and blunt trailing edge regions of the NASA large low-speed centrifugal compressor, for which there is considerable detailed internal laser anemometry data available for comparison. The results indicate: (1) There is no significant difference in predicted tip clearance mass flow rate whether the tip gap is gridded or modeled. (2) Gridding rather than modeling the trailing edge results in better predictions of some flow details downstream of the impeller, but otherwise appears to offer no great benefits. (3) The pitchwise variation of absolute flow angle decreases rapidly up to 8% impeller radius ratio and much more slowly thereafter. Although some improvements in prediction of flow field details are realized as a result of analyzing the actual geometry, there is no clear consensus that any of the grids investigated produced superior results in every case when compared to the measurements. However, if a multi-block code is available, it should be used, as it has the propensity for enabling better predictions than a single-block code.

  9. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  10. Aeroheating Predictions for X-34 Using an Inviscid-Boundary Layer Method

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Kleb, William L.; Alter, Steven J.

    1998-01-01

    Radiative equilibrium surface temperatures and surface heating rates from a combined inviscid-boundary layer method are presented for the X-34 Reusable Launch Vehicle for several points along the hypersonic descent portion of its trajectory. Inviscid, perfect-gas solutions are generated with the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and the Data-Parallel Lower-Upper Relaxation (DPLUR) code. Surface temperatures and heating rates are then computed using the Langley Approximate Three-Dimensional Convective Heating (LATCH) engineering code employing both laminar and turbulent flow models. The combined inviscid-boundary layer method provides accurate predictions of surface temperatures over most of the vehicle and requires much less computational effort than a Navier-Stokes code. This enables the generation of a more thorough aerothermal database which is necessary to design the thermal protection system and specify the vehicle's flight limits.
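The radiative equilibrium assumption used above gives the wall temperature directly from a predicted heating rate: the surface reradiates what it receives, q = ε·σ·T^4, so T = (q/(ε·σ))^(1/4). A one-line sketch with an assumed emissivity (the value below is illustrative, not the X-34 TPS value):

```python
sigma = 5.670374419e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
eps = 0.8                   # assumed surface emissivity

def t_wall(q):
    """Radiative-equilibrium wall temperature for heating rate q in W/m^2."""
    return (q / (eps * sigma)) ** 0.25

print(t_wall(1.0e5))        # ~1218 K for 10 W/cm^2
```

Because T scales as q^(1/4), even sizable heating-rate errors translate into comparatively small temperature errors, one reason the engineering-code approach works well for TPS sizing.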

  11. Preliminary design optimization of joined-wing aircraft

    NASA Technical Reports Server (NTRS)

    Gallman, John W.; Kroo, Ilan M.; Smith, Stephen C.

    1990-01-01

    The joined wing is an innovative aircraft configuration that has its tail connected to the wing, forming a diamond shape in both plan and front views. This geometric arrangement utilizes the tail both for pitch control and as a structural support for the wing. Several researchers have studied this configuration and predicted significant reductions in trimmed drag or structural weight when compared with a conventional T-tail configuration. Kroo et al. compared the cruise drag of joined wings with conventional designs of the same lifting-surface area and structural weight. This study showed an 11 percent reduction in cruise drag for the lifting system of a joined wing. Although this reduction in cruise drag is significant, a complete design study is needed before any economic savings can be claimed for a joined-wing transport. Mission constraints, such as runway length, could increase the wing area and eliminate potential drag savings. Since other design codes do not accurately represent the interaction between structures and aerodynamics for joined wings, we developed a new design code for this study. The aerodynamic and structural analyses in this study are significantly more sophisticated than those used in most conventional design codes. This sophistication was needed to predict the aerodynamic interference between the wing and tail and the stresses in the truss-like structure. This paper describes these analysis methods, discusses some problems encountered when applying the numerical optimizer NPSOL, and compares optimum joined wings with conventional aircraft on the basis of cruise drag, lifting surface weight, and direct operating cost (DOC).

  12. DCT based interpolation filter for motion compensation in HEVC

    NASA Astrophysics Data System (ADS)

    Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin

    2012-10-01

    The High Efficiency Video Coding (HEVC) draft standard has a challenging goal: to double coding efficiency compared to H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during new standard development. Motion-compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the draft HEVC standard. The coding efficiency improvement over the H.264/AVC interpolation filter is studied and experimental results are presented, which show a 4.0% average bitrate reduction for the luma component and 11.3% average bitrate reduction for the chroma component. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
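Mechanically, such an interpolation filter is an 8-tap FIR applied to integer-position reference samples to synthesize sub-pel values for motion compensation. The coefficients below are, to the best of my knowledge, the HEVC draft's luma half-sample filter (normalized by 64); the sample row is invented for illustration.

```python
HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]   # 8-tap half-pel luma filter, /64

def interp_half(row, i):
    """Half-pel value between integer positions i and i+1 of a padded row."""
    taps = row[i - 3:i + 5]                      # 8 surrounding integer samples
    acc = sum(c * s for c, s in zip(HALF_PEL, taps))
    return (acc + 32) >> 6                       # round and divide by 64

row = [10, 12, 15, 20, 26, 33, 41, 50, 60, 71]   # padded reference samples
print(interp_half(row, 4))                       # prints 29, between 26 and 33
```

The DCT-based design means these taps approximate an ideal band-limited interpolator derived from DCT basis functions, rather than the separable 6-tap Wiener filter of H.264/AVC.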

  13. Analytical and Experimental Evaluation of the Heat Transfer Distribution over the Surfaces of Turbine Vanes

    NASA Technical Reports Server (NTRS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-01-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.

  14. Analytical and experimental evaluation of the heat transfer distribution over the surfaces of turbine vanes

    NASA Astrophysics Data System (ADS)

    Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.

    1983-05-01

    Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the data base indicates a significant improvement in predictive capability.

  15. Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.

    2002-01-01

    As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a test series of 7 isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al. can be used to perform an approach often called 'high fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  16. On a High-Fidelity Hierarchical Approach to Buckling Load Calculations

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.

    2001-01-01

    As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called "high fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  17. Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor

    NASA Astrophysics Data System (ADS)

    Alexander, J. P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M. P.; Flanagan, J. W.; Fontes, E.; Heltsley, B. K.; Lyndaker, A.; Peterson, D. P.; Rider, N. T.; Rubin, D. L.; Seeley, R.; Shanks, J.

    2014-12-01

    We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e+ and e- beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10 - 100 μm on a turn-by-turn, bunch-by-bunch basis at e± beam energies of 2 - 5 GeV. x-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and thickness of masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances.

  18. PCR Amplicon Prediction from Multiplex Degenerate Primer and Probe Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, S. N.

    2013-08-08

    Assessing primer specificity and predicting both desired and off-target amplification products is an essential step for robust PCR assay design. Code is described to predict potential polymerase chain reaction (PCR) amplicons in a large sequence database such as NCBI nt, from either a singleplex or a large multiplexed set of primers, allowing degenerate primer and probe bases and a limited number of target mismatches. The code annotates amplicons with gene information automatically downloaded from NCBI, and optionally it can predict whether there are also TaqMan/Luminex probe matches within predicted amplicons.
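    As a toy illustration of the matching step such a tool must perform (not the OSTI code itself), degenerate IUPAC bases can be expanded into character classes of a regular expression to locate exact primer binding sites; mismatch tolerance and reverse-complement search are omitted here.

    ```python
    import re

    # IUPAC nucleotide ambiguity codes expanded to regex character classes.
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
             "R": "[AG]", "Y": "[CT]", "S": "[GC]", "W": "[AT]",
             "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
             "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

    def primer_regex(primer):
        """Compile a degenerate primer into a regular expression."""
        return re.compile("".join(IUPAC[b] for b in primer.upper()))

    def binding_sites(primer, template):
        """Return start positions where the degenerate primer matches exactly."""
        return [m.start() for m in primer_regex(primer).finditer(template.upper())]

    print(binding_sites("ARGT", "aaggtcacgtt"))  # R matches A or G → [1]
    ```

    A full amplicon predictor would pair forward and reverse-complement hits and report the intervening sequence when the product length falls within PCR-feasible bounds.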

  19. Cold flow testing of the Space Shuttle Main Engine high pressure fuel turbine model

    NASA Technical Reports Server (NTRS)

    Hudson, Susan T.; Gaddis, Stephen W.; Johnson, P. D.; Boynton, James L.

    1991-01-01

    In order to experimentally determine the performance of the Space Shuttle Main Engine (SSME) High Pressure Fuel Turbopump (HPFTP) turbine, a 'cold' air flow turbine test program was established at NASA's Marshall Space Flight Center. As part of this test program, a baseline test of Rocketdyne's HPFTP turbine has been completed. The turbine performance and turbine diagnostics, such as airfoil surface static pressure distributions, static pressure drops through the turbine, and exit swirl angles, were investigated at the turbine design point, over its operating range, and at extreme off-design points. The data were compared to pretest predictions with good results. The test data have been used to improve meanline prediction codes and are now being used to validate various three-dimensional codes. The data will also be scaled to engine conditions and used to improve the SSME steady-state performance model.

  20. Code Validation Studies of High-Enthalpy Flows

    DTIC Science & Technology

    2006-12-01

    stage of future hypersonic vehicles. The development and design of such vehicles is aided by the use of experimentation and numerical simulation... numerical predictions and experimental measurements. 3. Summary of Previous Work We have studied extensively hypersonic double-cone flows with and in...the experimental measurements and the numerical predictions. When we accounted for that effect in numerical simulations, and also augmented the

  1. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s. It requires 3 to 4 million multiplications and additions per second. The technique combines advantages of adaptive/predictive coding with those of code-excited linear prediction, which yields speech of high quality but requires about 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
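    The short-term linear prediction shared by both coder families can be sketched as a least-squares fit of predictor coefficients to a frame (covariance method). The order-2 predictor and the synthetic frame below are illustrative; a real speech coder uses a much higher order on actual speech frames.

    ```python
    # Sketch: fit an order-2 short-term linear predictor to a frame by
    # least squares, i.e. minimize sum over n of (x[n] - a1*x[n-1] - a2*x[n-2])^2.
    def lpc2(x):
        """Return (a1, a2) solving the 2x2 normal equations (covariance method)."""
        N = len(x)
        p11 = sum(x[n-1] * x[n-1] for n in range(2, N))
        p12 = sum(x[n-1] * x[n-2] for n in range(2, N))
        p22 = sum(x[n-2] * x[n-2] for n in range(2, N))
        c1 = sum(x[n] * x[n-1] for n in range(2, N))
        c2 = sum(x[n] * x[n-2] for n in range(2, N))
        det = p11 * p22 - p12 * p12
        return (c1 * p22 - c2 * p12) / det, (c2 * p11 - c1 * p12) / det

    # Synthetic frame that exactly obeys x[n] = 1.6 x[n-1] - 0.9 x[n-2]:
    x = [1.0, 1.6]
    for _ in range(200):
        x.append(1.6 * x[-1] - 0.9 * x[-2])

    a1, a2 = lpc2(x)
    print(a1, a2)  # recovers coefficients close to the true (1.6, -0.9)
    ```

    Because the synthetic frame satisfies the recursion exactly, the least-squares fit recovers the generating coefficients; on real speech the residual is nonzero and is what the excitation codebook must encode.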

  2. Computational Fluid Dynamics (CFD) Design of a Blended Wing Body (BWB) with Boundary Layer Ingestion (BLI) Nacelles

    NASA Technical Reports Server (NTRS)

    Morehouse, Melissa B.

    2001-01-01

    A study is being conducted to improve the propulsion/airframe integration for the Blended Wing-Body (BWB) configuration with boundary layer ingestion nacelles. Two unstructured grid flow solvers, USM3D and FUN3D, have been coupled with different design methods and are being used to redesign the aft wing region and the nacelles to reduce drag and flow separation. An initial study comparing analyses from these two flow solvers against data from a wind tunnel test as well as predictions from the OVERFLOW structured grid code for a BWB without nacelles has been completed. Results indicate that the unstructured grid codes are sufficiently accurate for use in design. Results from the BWB design study will be presented.

  3. Validity of the International Classification of Diseases 10th revision code for hospitalisation with hyponatraemia in elderly patients

    PubMed Central

    Gandhi, Sonja; Shariff, Salimah Z; Fleet, Jamie L; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objective To evaluate the validity of the International Classification of Diseases, 10th Revision (ICD-10) diagnosis code for hyponatraemia (E87.1) in two settings: at presentation to the emergency department and at hospital admission. Design Population-based retrospective validation study. Setting Twelve hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Patients aged 66 years and older with serum sodium laboratory measurements at presentation to the emergency department (n=64 581) and at hospital admission (n=64 499). Main outcome measures Sensitivity, specificity, positive predictive value and negative predictive value comparing various ICD-10 diagnostic coding algorithms for hyponatraemia to serum sodium laboratory measurements (reference standard). Median serum sodium values comparing patients who were code positive and code negative for hyponatraemia. Results The sensitivity of hyponatraemia (defined by a serum sodium ≤132 mmol/l) for the best-performing ICD-10 coding algorithm was 7.5% at presentation to the emergency department (95% CI 7.0% to 8.2%) and 10.6% at hospital admission (95% CI 9.9% to 11.2%). Both specificities were greater than 99%. In the two settings, the positive predictive values were 96.4% (95% CI 94.6% to 97.6%) and 82.3% (95% CI 80.0% to 84.4%), while the negative predictive values were 89.2% (95% CI 89.0% to 89.5%) and 87.1% (95% CI 86.8% to 87.4%). In patients who were code positive for hyponatraemia, the median (IQR) serum sodium measurements were 123 (119–126) mmol/l and 125 (120–130) mmol/l in the two settings. In code negative patients, the measurements were 138 (136–140) mmol/l and 137 (135–139) mmol/l. Conclusions The ICD-10 diagnostic code for hyponatraemia differentiates between two groups of patients with distinct serum sodium measurements at both presentation to the emergency department and at hospital admission. 
However, these codes underestimate the true incidence of hyponatraemia due to low sensitivity. PMID:23274673
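    The four validity measures reported above follow directly from a 2x2 table of code-positive/negative versus reference-standard results; the counts in this sketch are invented purely to illustrate the low-sensitivity, high-specificity pattern the study describes.

    ```python
    # Sketch: diagnostic-code validity measures from a 2x2 table.
    # tp/fp/fn/tn = true positive, false positive, false negative, true negative.
    def validity(tp, fp, fn, tn):
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),   # positive predictive value
                "npv": tn / (tn + fn)}   # negative predictive value

    # Hypothetical counts, not the study's data:
    m = validity(tp=75, fp=3, fn=925, tn=8997)
    print({k: round(v, 3) for k, v in m.items()})
    ```

    With these invented counts the code misses most true cases (low sensitivity) yet is almost always right when it fires (high PPV), which is the trade-off the abstract reports for E87.1.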

  4. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomena pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented into the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air. 
Other fluids can be handled by the code provided that the user inputs information that relates the fluid transport properties to the temperature.

  5. Benchmarking of Neutron Production of Heavy-Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  6. Benchmarking of Heavy Ion Transport Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence

    Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  7. Space Station Freedom electrical performance model

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Green, Robert D.; Kerslake, Thomas W.; Mckissock, David B.; Trudell, Jeffrey J.

    1993-01-01

    The baseline Space Station Freedom electric power system (EPS) employs photovoltaic (PV) arrays and nickel hydrogen (NiH2) batteries to supply power to housekeeping and user electrical loads via a direct current (dc) distribution system. The EPS was originally designed for an operating life of 30 years through orbital replacement of components. As the design and development of the EPS continues, accurate EPS performance predictions are needed to assess design options, operating scenarios, and resource allocations. To meet these needs, NASA Lewis Research Center (LeRC) has, over a 10 year period, developed SPACE (Station Power Analysis for Capability Evaluation), a computer code designed to predict EPS performance. This paper describes SPACE, its functionality, and its capabilities.
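    A minimal sketch of the kind of orbit-average power bookkeeping such a code automates; all numbers and the single round-trip charge-efficiency factor are invented for illustration (SPACE itself models the dc distribution, arrays, and NiH2 batteries in far more detail).

    ```python
    # Sketch: per-orbit energy balance for a sunlight/eclipse power system.
    # A negative return value means the battery is drained a little more
    # each orbit and the load allocation is unsustainable.
    def orbit_energy_balance(p_array_kw, p_load_kw, t_sun_min, t_eclipse_min,
                             charge_eff=0.85):
        """Return net battery energy change per orbit in kWh."""
        surplus = (p_array_kw - p_load_kw) * t_sun_min / 60.0  # kWh available to charge
        stored = surplus * charge_eff                          # after charge losses
        drawn = p_load_kw * t_eclipse_min / 60.0               # eclipse discharge
        return stored - drawn

    print(orbit_energy_balance(p_array_kw=30.0, p_load_kw=12.0,
                               t_sun_min=58.0, t_eclipse_min=34.0))  # ≈ 7.99 kWh
    ```

    Sweeping the load power in such a balance is one way to bound the sustainable housekeeping-plus-user allocation for a given array size and orbit.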

  8. A CFD/CSD Interaction Methodology for Aircraft Wings

    NASA Technical Reports Server (NTRS)

    Bhardwaj, Manoj K.

    1997-01-01

    With advanced subsonic transports and military aircraft operating in the transonic regime, it is becoming important to determine the effects of the coupling between aerodynamic loads and elastic forces. Since aeroelastic effects can contribute significantly to the design of these aircraft, there is a strong need in the aerospace industry to predict these aero-structure interactions computationally. To perform static aeroelastic analysis in the transonic regime, high fidelity computational fluid dynamics (CFD) analysis tools must be used in conjunction with high fidelity computational structural dynamics (CSD) analysis tools due to the nonlinear behavior of the aerodynamics in the transonic regime. There is also a need to be able to use a wide variety of CFD and CSD tools to predict these aeroelastic effects in the transonic regime. Because source codes are not always available, it is necessary to couple the CFD and CSD codes without alteration of the source codes. In this study, an aeroelastic coupling procedure is developed which will perform static aeroelastic analysis using any CFD and CSD code with little code integration. The aeroelastic coupling procedure is demonstrated on an F/A-18 Stabilator using NASTD (an in-house McDonnell Douglas CFD code) and NASTRAN. In addition, the Aeroelastic Research Wing (ARW-2) is used for demonstration of the aeroelastic coupling procedure by using ENSAERO (NASA Ames Research Center CFD code) and a finite element wing-box code (developed as part of this research).

  9. The effect of incidence angle on the overall three-dimensional aerodynamic performance of a classical annular airfoil cascade

    NASA Technical Reports Server (NTRS)

    Bergsten, D. E.; Fleeter, S.

    1983-01-01

    To be of quantitative value to the designer and analyst, it is necessary to experimentally verify the flow modeling and the numerics inherent in calculation codes being developed to predict the three dimensional flow through turbomachine blade rows. This experimental verification requires that predicted flow fields be correlated with three dimensional data obtained in experiments which model the fundamental phenomena existing in the flow passages of modern turbomachines. The Purdue Annular Cascade Facility was designed specifically to provide these required three dimensional data. The overall three dimensional aerodynamic performance of an instrumented classical airfoil cascade was determined over a range of incidence angle values. This was accomplished utilizing a fully automated exit flow data acquisition and analysis system. The mean wake data, acquired at two downstream axial locations, were analyzed to determine the effect of incidence angle, the three dimensionality of the cascade exit flow field, and the similarity of the wake profiles. The hub, mean, and tip chordwise airfoil surface static pressure distributions determined at each incidence angle are correlated with predictions from the MERIDL and TSONIC computer codes.

  10. A numerical study on the thermal initiation of a confined explosive in 2-D geometry.

    PubMed

    Aydemir, Erdoğan; Ulas, Abdullah

    2011-02-15

    Insensitive munitions design against thermal stimuli like slow or fast cook-off has become a significant requirement for today's munitions. In order to achieve insensitive munitions characteristics, the response of the energetic material needs to be predicted against heating stimuli. In this study, a 2D numerical code was developed to simulate the slow and fast cook-off heating conditions of confined munitions and to obtain the response of the energetic materials. Computations were performed in order to predict the transient temperature distribution, the ignition time, and the location of ignition in the munitions. These predictions enable the designers to have an idea of when and at which location the energetic material ignites under certain adverse surrounding conditions. In the paper, the development of the code is explained and the numerical results are compared with available experimental and numerical data in the literature. Additionally, a parametric study was performed showing the effect of dimensional scaling of munitions and the heating rate on the ignition characteristics.
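    The core ingredients of such a cook-off simulation, heat conduction plus an Arrhenius self-heating source with ignition detected as thermal runaway, can be sketched in one dimension. Every property value here is illustrative, chosen only so the example ignites, and is not taken from the paper.

    ```python
    import math

    # Sketch: explicit 1-D heat conduction through an energetic slab whose
    # casing is held hot, with an Arrhenius heat source. Returns the time
    # and node index at which the temperature first runs away past T_ign.
    def cook_off(n=21, L=0.02, alpha=1e-7, QA=5e14, Ea=1.2e5,
                 T_wall=600.0, T0=300.0, T_ign=800.0, dt=0.01, t_max=2000.0):
        R = 8.314                     # gas constant, J/(mol K)
        dx = L / (n - 1)
        T = [T0] * n
        t = 0.0
        while t < t_max:
            T[0] = T[-1] = T_wall     # heated casing boundary condition
            Tn = T[:]
            for i in range(1, n - 1):
                cond = alpha * (T[i+1] - 2*T[i] + T[i-1]) / dx**2
                src = QA * math.exp(-Ea / (R * T[i]))  # Arrhenius self-heating, K/s
                Tn[i] = T[i] + dt * (cond + src)
                if Tn[i] >= T_ign:
                    return t, i       # ignition time (s) and node index
            T = Tn
            t += dt
        return None                   # no ignition within t_max

    print(cook_off())
    ```

    With these made-up constants the node adjacent to the hot wall reaches runaway first; a 2-D version of the same balance is what lets the paper's code report both ignition time and ignition location.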

  11. LRFD software for design and actual ultimate capacity of confined rectangular columns.

    DOT National Transportation Integrated Search

    2013-04-01

    The analysis of concrete columns using unconfined concrete models is a well established practice. On the : other hand, prediction of the actual ultimate capacity of confined concrete columns requires specialized nonlinear : analysis. Modern codes and...

  12. Predicting the Reliability of Brittle Material Structures Subjected to Transient Proof Test and Service Loading

    NASA Astrophysics Data System (ADS)

    Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.

    Brittle materials today are being used, or considered, for a wide variety of high tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing brittle material components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This capability includes predicting the time-dependent failure probability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The developed methodology allows for changes in material response that can occur with temperature or time (i.e., changing fatigue and Weibull parameters with temperature or time). This article gives an overview of the transient reliability methodology and describes how it is extended to account for proof testing. The CARES/Life code has been modified to have the ability to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
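    The starting point for such probabilistic design codes is the two-parameter Weibull relation for fast-fracture failure probability. This sketch is illustrative only, with made-up characteristic strength and Weibull modulus, and stands in for the volume-integral, time-dependent formulation a code like CARES/Life actually evaluates.

    ```python
    import math

    # Sketch: two-parameter Weibull failure probability for a uniformly
    # stressed unit volume. sigma0 (characteristic strength, MPa) and
    # m (Weibull modulus) are invented material values.
    def failure_probability(sigma, sigma0=400.0, m=10.0):
        """P_f = 1 - exp[-(sigma/sigma0)^m]."""
        return 1.0 - math.exp(-((sigma / sigma0) ** m))

    for s in (200.0, 400.0, 500.0):
        # At sigma = sigma0, P_f is exactly 1 - 1/e ≈ 0.632.
        print(s, round(failure_probability(s), 4))
    ```

    The steepness of the curve around sigma0 grows with m, which is why small gains in Weibull modulus translate into large gains in allowable design stress at a fixed reliability target.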

  13. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants.

    PubMed

    Vieira, Lucas Maciel; Grativol, Clicia; Thiebaut, Flavia; Carvalho, Thais G; Hardoim, Pablo R; Hemerly, Adriana; Lifschitz, Sergio; Ferreira, Paulo Cavalcanti Gomes; Walter, Maria Emilia M T

    2017-03-04

    Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them is a large class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge, and the few computational methods available are often so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in the recent literature. As far as we know, there are no methods or workflows specifically designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies that allowed us to identify novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially-expressed lincRNAs in sugarcane and maize plants submitted to pathogenic and beneficial microorganisms.
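    A toy sketch of the classification idea behind such a workflow, assuming two invented features (longest-ORF fraction and GC content) and a linear SVM trained by sub-gradient descent on the regularized hinge loss; the real workflow uses richer features and an established SVM implementation.

    ```python
    # Toy sketch: +1 = protein-coding transcript, -1 = lincRNA candidate.
    # Features per transcript: (longest-ORF fraction, GC content) - invented.
    def train_svm(X, y, lam=0.001, epochs=500, lr=0.1):
        """Linear SVM via sub-gradient descent on the regularized hinge loss."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                margin = yi * (w[0]*xi[0] + w[1]*xi[1] + b)
                if margin < 1:  # hinge-loss violation: step toward the example
                    w = [w[k] + lr * (yi * xi[k] - lam * w[k]) for k in range(2)]
                    b += lr * yi
                else:           # only the regularizer acts
                    w = [w[k] * (1 - lr * lam) for k in range(2)]
        return w, b

    def predict(w, b, x):
        return 1 if w[0]*x[0] + w[1]*x[1] + b >= 0 else -1

    X = [(0.9, 0.55), (0.8, 0.50), (0.7, 0.60),   # coding-like: long ORFs
         (0.2, 0.40), (0.1, 0.35), (0.15, 0.45)]  # lincRNA-like: short ORFs
    y = [1, 1, 1, -1, -1, -1]
    w, b = train_svm(X, y)
    print(predict(w, b, (0.85, 0.52)), predict(w, b, (0.12, 0.38)))
    ```

    On this separable toy data the learned hyperplane splits the two groups mainly along the ORF-fraction axis, which mirrors the intuition that coding transcripts carry long open reading frames.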

  14. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants

    PubMed Central

    Vieira, Lucas Maciel; Grativol, Clicia; Thiebaut, Flavia; Carvalho, Thais G.; Hardoim, Pablo R.; Hemerly, Adriana; Lifschitz, Sergio; Ferreira, Paulo Cavalcanti Gomes; Walter, Maria Emilia M. T.

    2017-01-01

    Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them is a large class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge, and the few computational methods available are often so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in the recent literature. As far as we know, there are no methods or workflows specifically designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies that allowed us to identify novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially-expressed lincRNAs in sugarcane and maize plants submitted to pathogenic and beneficial microorganisms. PMID:29657283

  15. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 1; Validation

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng, S.

    2009-01-01

    For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of modern high performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases for the design and the off-design prediction using TD2-2 and AXOD, conducted on two existing high efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.

  16. One-Dimensional Modelling of Internal Ballistics

    NASA Astrophysics Data System (ADS)

    Monreal-González, G.; Otón-Martínez, R. A.; Velasco, F. J. S.; García-Cascáles, J. R.; Ramírez-Fernández, F. J.

    2017-10-01

    A one-dimensional model is introduced in this paper for problems of internal ballistics involving solid propellant combustion. First, the work presents the physical approach and the equations adopted; closure relationships accounting for the physical phenomena taking place during combustion (interfacial friction, interfacial heat transfer, combustion) are discussed in depth. Second, the numerical method proposed is presented. Finally, numerical results provided by this code (UXGun) are compared with results of experimental tests and with the output of a well-known zero-dimensional code. The model provides successful results in firing tests of artillery guns, predicting the maximum chamber pressure and the muzzle velocity with good accuracy, which highlights its capabilities as a prediction/design tool for internal ballistics.

  17. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits.

    PubMed

    Ginde, Adit A; Blanc, Phillip G; Lieberman, Rebecca M; Camargo, Carlos A

    2008-04-01

    Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed, by chart review, the hypoglycemia cases identified by the candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was a documented blood glucose <3.9 mmol/l or an emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had 89% positive predictive value (95% confidence interval, 86-92) for detecting hypoglycemia visits. 
The proposed algorithm improves on prior strategies to identify hypoglycemia visits in administrative data sets and will enhance the ability to study the epidemiology and design interventions for this important complication of diabetes care.
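The validation arithmetic above reduces to a proportion with a confidence interval. As a hedged illustration (not the study's actual statistical code), the sketch below computes a positive predictive value with a Wilson score interval from the overall chart-review counts reported in the abstract; the function name and the choice of interval method are assumptions for illustration.

```python
# Illustrative sketch: PPV of a code-based case-finding algorithm with a
# Wilson score 95% interval. Counts are the overall chart-review totals
# from the abstract (436 confirmed of 636 code-identified visits).
import math

def ppv_with_ci(true_positives, code_positives, z=1.96):
    """PPV = TP / all code-identified visits, with a Wilson score interval."""
    p = true_positives / code_positives
    n = code_positives
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - half, center + half

ppv, lower, upper = ppv_with_ci(436, 636)
print(f"PPV = {ppv:.1%} (95% CI {lower:.1%} to {upper:.1%})")
```

The refined algorithm's 89% PPV comes from the pared-down code set rather than these raw totals; the same function applies to either count.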

  18. Intercomparison of Monte Carlo radiation transport codes to model TEPC response in low-energy neutron and gamma-ray fields.

    PubMed

    Ali, F; Waker, A J; Waller, E J

    2014-10-01

    Tissue-equivalent proportional counters (TEPC) can potentially be used as a portable and personal dosemeter in mixed neutron and gamma-ray fields, but what hinders this use is their typically large physical size. To formulate compact TEPC designs, the use of a Monte Carlo transport code is necessary to predict the performance of compact designs in these fields. To perform this modelling, three candidate codes were assessed: MCNPX 2.7.E, FLUKA 2011.2 and PHITS 2.24. In each code, benchmark simulations were performed involving the irradiation of a 5-in. TEPC with monoenergetic neutron fields and a 4-in. wall-less TEPC with monoenergetic gamma-ray fields. The frequency and dose mean lineal energies and dose distributions calculated from each code were compared with experimentally determined data. For the neutron benchmark simulations, PHITS produces data closest to the experimental values and for the gamma-ray benchmark simulations, FLUKA yields data closest to the experimentally determined quantities.

  19. Mapping Nuclear Fallout Using the Weather Research & Forecasting (WRF) Model

    DTIC Science & Technology

    2012-09-01

    relevant modules, originally designed to predict the settling of volcanic ash, such that a stabilized cloud of nuclear particulate is initialized within the model. This modified code is then executed for various atmospheric test explosions and the results are qualitatively and quantitatively...

  20. Analysis of a Neutronic Experiment on a Simulated Mercury Spallation Neutron Target Assembly Bombarded by Giga-Electron-Volt Protons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maekawa, Fujio; Meigo, Shin-ichiro; Kasugai, Yoshimi

    2005-05-15

    A neutronic benchmark experiment on a simulated spallation neutron target assembly was conducted by using the Alternating Gradient Synchrotron at Brookhaven National Laboratory and was analyzed to investigate the prediction capability of Monte Carlo simulation codes used in neutronic designs of spallation neutron sources. The target assembly consisting of a mercury target, a light water moderator, and a lead reflector was bombarded by 1.94-, 12-, and 24-GeV protons, and the fast neutron flux distributions around the target and the spectra of thermal neutrons leaking from the moderator were measured in the experiment. In this study, the Monte Carlo particle transport simulation codes NMTC/JAM, MCNPX, and MCNP-4A with associated cross-section data in JENDL and LA-150 were verified based on benchmark analysis of the experiment. As a result, all the calculations predicted the measured quantities adequately; calculated integral fluxes of fast and thermal neutrons agreed within approximately ±40% with the experiments although the overall energy range encompassed more than 12 orders of magnitude. Accordingly, it was concluded that these simulation codes and cross-section data were adequate for neutronic designs of spallation neutron sources.

  1. ICF Implosions, Space-Charge Electric Fields, and Their Impact on Mix and Compression

    NASA Astrophysics Data System (ADS)

    Knoll, Dana; Chacon, Luis; Simakov, Andrei

    2013-10-01

    The single-fluid, quasi-neutral, radiation hydrodynamics codes used to design the NIF targets predict thermonuclear ignition for the conditions that have been achieved experimentally, yet ignition has not been observed. A logical conclusion is that the physics model used in these codes is missing one or more key phenomena. Two key model-experiment inconsistencies on NIF are: 1) a lower implosion velocity than predicted by the design codes, and 2) transport of pusher material deep into the hot spot. We hypothesize that both of these model-experiment inconsistencies may be a result of a large space-charge electric field residing on the distinct interfaces in a NIF target. Large space-charge fields have been experimentally observed in Omega experiments. Given our hypothesis, this presentation will: 1) develop a more complete physics picture of the initiation, sustainment, and dissipation of a current-driven plasma sheath/double-layer at the Fuel-Pusher interface of an ablating plastic shell implosion on Omega, 2) characterize the mix that can result from a double-layer field at the Fuel-Pusher interface, prior to the onset of fluid instabilities, and 3) quantify the impact of the double-layer-induced surface tension at the Fuel-Pusher interface on the peak observed implosion velocity in Omega.

  2. Aerodynamic performances of three fan stator designs operating with rotor having tip speed of 337 meters per second and pressure ratio of 1.54. Relation of analytical code calculations to experimental performance

    NASA Technical Reports Server (NTRS)

    Gelder, T. F.; Schmidt, J. F.; Esgar, G. M.

    1980-01-01

    A hub-to-shroud and a blade-to-blade internal-flow analysis code, both inviscid and basically subsonic, were used to calculate the flow parameters within four stator-blade rows. The calculated ratios of maximum suction-surface velocity to trailing-edge velocity correlated well, in the midspan region, with the measured total-loss parameters over the minimum-loss to near-stall operating range for all stators and speeds studied. The potential benefits of a blade designed with the aid of these flow analysis codes are illustrated by a proposed redesign of one of the four stators studied. An overall efficiency improvement of 1.6 points above the peak measured for that stator is predicted for the redesign.

  3. Small Engine Technology. Task 4: Advanced Small Turboshaft Compressor (ASTC) Performance and Range Investigation

    NASA Technical Reports Server (NTRS)

    Hansen, Jeff L.; Delaney, Robert A.

    1997-01-01

    This contract had two main objectives involving both numerical and experimental investigations of a small, highly loaded, two-stage axial compressor designated the Advanced Small Turboshaft Compressor (ASTC), which had a design pressure ratio goal of 5:1 at a flowrate of 10.53 lbm/s. The first objective was to conduct 3-D Navier-Stokes multistage analyses of the ASTC using several different flow modelling schemes. The second main objective was to complete a numerical/experimental investigation into stall range enhancement of the ASTC. This compressor was designed under a cooperative Space Act Agreement, and all testing was completed at NASA Lewis Research Center. For the multistage analyses, four different flow model schemes were used, namely: (1) steady-state ADPAC analysis, (2) unsteady ADPAC analysis, (3) steady-state APNASA analysis, and (4) steady-state OCOM3D analysis. The results of all the predictions were compared to the experimental data. The steady-state ADPAC and APNASA codes predicted similar overall performance and produced good agreement with data; however, the blade row performance and flowfield details were quite different. In general, it can be concluded that the APNASA average-passage code does a better job of predicting the performance and flowfield details of the highly loaded ASTC compressor.

  4. Validity of the International Classification of Diseases 10th revision code for hyperkalaemia in elderly patients at presentation to an emergency department and at hospital admission

    PubMed Central

    Fleet, Jamie L; Shariff, Salimah Z; Gandhi, Sonja; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objectives Evaluate the validity of the International Classification of Diseases, 10th revision (ICD-10) code for hyperkalaemia (E87.5) in two settings: at presentation to an emergency department and at hospital admission. Design Population-based validation study. Setting 12 hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum potassium values at presentation to an emergency department (n=64 579) and at hospital admission (n=64 497). Primary outcome Sensitivity, specificity, positive-predictive value and negative-predictive value. Serum potassium values in patients with and without a hyperkalaemia code (code positive and code negative, respectively). Results The sensitivity of the best-performing ICD-10 coding algorithm for hyperkalaemia (defined by serum potassium >5.5 mmol/l) was 14.1% (95% CI 12.5% to 15.9%) at presentation to an emergency department and 14.6% (95% CI 13.3% to 16.1%) at hospital admission. Both specificities were greater than 99%. In the two settings, the positive-predictive values were 83.2% (95% CI 78.4% to 87.1%) and 62.0% (95% CI 57.9% to 66.0%), while the negative-predictive values were 97.8% (95% CI 97.6% to 97.9%) and 96.9% (95% CI 96.8% to 97.1%). In patients who were code positive for hyperkalaemia, median (IQR) serum potassium values were 6.1 (5.7 to 6.8) mmol/l at presentation to an emergency department and 6.0 (5.1 to 6.7) mmol/l at hospital admission. For code-negative patients median (IQR) serum potassium values were 4.0 (3.7 to 4.4) mmol/l and 4.1 (3.8 to 4.5) mmol/l in each of the two settings, respectively. Conclusions Patients with hospital encounters who were ICD-10 E87.5 hyperkalaemia code positive and negative had distinct higher and lower serum potassium values, respectively. However, due to very low sensitivity, the incidence of hyperkalaemia is underestimated. PMID:23274674
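The four validity measures reported above follow directly from a 2x2 table of code status against the laboratory gold standard (serum potassium >5.5 mmol/l). The sketch below is a generic illustration with made-up cell counts chosen only to echo the reported sensitivity and positive-predictive value; it is not the study's data or analysis code.

```python
# Illustrative sketch: diagnostic-code validity measures from a 2x2 table.
# tp = code-positive and truly hyperkalaemic, fp = code-positive only,
# fn = code-negative but truly hyperkalaemic, tn = code-negative non-cases.
# Cell counts below are invented for illustration.
def validity(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # code-positive among true cases
        "specificity": tn / (tn + fp),  # code-negative among non-cases
        "ppv": tp / (tp + fp),          # true cases among code-positives
        "npv": tn / (tn + fn),          # non-cases among code-negatives
    }

m = validity(tp=141, fp=28, fn=859, tn=63551)
print({k: round(v, 3) for k, v in m.items()})
```

The pattern in the abstract, a very low sensitivity alongside a specificity above 99%, is exactly what a rarely assigned but rarely wrong code produces in such a table.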

  5. Optimization of Variable-Depth Liner Configurations for Increased Broadband Noise Reduction

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Watson, W. R.; Nark, D. M.; Schiller, N. H.; Born, J. C.

    2016-01-01

    This paper employs three acoustic propagation codes to explore variable-depth liner configurations for the NASA Langley Grazing Flow Impedance Tube (GFIT). The initial study demonstrates that a variable impedance can acceptably be treated as a uniform impedance if the spatial extent over which this variable impedance occurs is less than one-third of a wavelength of the incident sound. A constrained optimization study is used to design a variable-depth liner and to select an optimization metric. It also provides insight regarding how much attenuation can be achieved with variable-depth liners. Another optimization study is used to design a liner with much finer chamber depth resolution for the Mach 0.0 and 0.3 test conditions. Two liners are designed based on spatial rearrangement of chambers from this liner to determine whether the order is critical. Propagation code predictions suggest this is not the case. Both liners are fabricated via additive manufacturing and tested in the GFIT for the Mach 0.0 condition. Predicted and measured attenuations compare favorably across the full frequency range. These results clearly suggest that the chambers can be arranged in any order, thus offering the potential for innovative liner designs to minimize depth and weight.
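The one-third-wavelength criterion above is straightforward to evaluate numerically. The sketch below is a minimal illustration, assuming a nominal ambient sound speed of 343 m/s; the function name and frequencies are chosen for illustration, not taken from the study.

```python
# Illustrative sketch: largest axial extent over which a spatially varying
# liner impedance may be lumped as uniform, per the one-third-wavelength
# criterion described in the abstract. Sound speed is a nominal assumption.
def max_uniform_extent(frequency_hz, sound_speed=343.0):
    """Return the extent limit (m): one-third of the acoustic wavelength."""
    wavelength = sound_speed / frequency_hz
    return wavelength / 3.0

for f in (1000.0, 2000.0, 3000.0):
    print(f"{f:.0f} Hz: extent < {max_uniform_extent(f) * 1000:.1f} mm")
```

Because the limit shrinks with frequency, the highest frequency of interest sets how finely the chamber depths can vary while still behaving as a locally uniform impedance.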

  6. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamic solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated using a Navier-Stokes flow solver and previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the inverse direction, predicting the mass flow and current required to achieve a desired stagnation point pressure and heating rate. ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.
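The kind of database lookup described above can be illustrated with plain bilinear interpolation over a (mass flow, current) grid. The sketch below is a generic illustration: the grid values, units, and function names are invented and are not ADSI's actual database or implementation.

```python
# Illustrative sketch: bilinear interpolation of a precomputed quantity
# (a made-up heating rate) over a grid of arc-jet mass flow rate and arc
# current, mimicking the forward direction of a 2-D database lookup.
def bracket(v, grid):
    """Index i such that grid[i] <= v <= grid[i+1] (grid ascending)."""
    for i in range(len(grid) - 1):
        if grid[i] <= v <= grid[i + 1]:
            return i
    raise ValueError("point outside database range")

def bilinear(x, y, xs, ys, table):
    """Interpolate table[i][j], defined at (xs[i], ys[j]), to (x, y)."""
    i, j = bracket(x, xs), bracket(y, ys)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    f00, f10 = table[i][j], table[i + 1][j]
    f01, f11 = table[i][j + 1], table[i + 1][j + 1]
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

mdot = [0.1, 0.2, 0.3]       # mass flow grid (kg/s), illustrative
amps = [2000, 4000, 6000]    # arc current grid (A), illustrative
q = [[100, 180, 260],        # q[i][j]: heating rate at (mdot[i], amps[j])
     [120, 210, 300],
     [140, 240, 340]]
print(bilinear(0.15, 3000, mdot, amps, q))  # roughly 152.5, mid-cell
```

The inverse direction mentioned in the abstract amounts to solving this same surface for the (mass flow, current) pair that yields a target value, for example by root finding over the interpolant.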

  7. A Rocket Engine Design Expert System

    NASA Technical Reports Server (NTRS)

    Davidian, Kenneth J.

    1989-01-01

    The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state of the art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.

  8. Source Methodology for Turbofan Noise Prediction (SOURCE3D Technical Documentation)

    NASA Technical Reports Server (NTRS)

    Meyer, Harold D.

    1999-01-01

    This report provides the analytical documentation for the SOURCE3D Rotor Wake/Stator Interaction Code. It derives the equations for the rotor scattering coefficients and stator source vector and scattering coefficients that are needed for use in the TFANS (Theoretical Fan Noise Design/Prediction System). SOURCE3D treats the rotor and stator as isolated source elements. TFANS uses this information, along with scattering coefficients for inlet and exit elements, and provides complete noise solutions for turbofan engines. SOURCE3D is composed of a collection of FORTRAN programs that have been obtained by extending the approach of the earlier V072 Rotor Wake/Stator Interaction Code. Similar to V072, it treats the rotor and stator as a collection of blades and vanes having zero thickness and camber contained in an infinite, hardwall annular duct. SOURCE3D adds important features to the V072 capability: a rotor element, swirl flow and vorticity waves, actuator disks for flow turning, and combined rotor/actuator disk and stator/actuator disk elements. These items allow reflections from the rotor, frequency scattering, and mode trapping, thus providing more complete noise predictions than previously. The code has been thoroughly verified through comparison with D. B. Hanson's CUP2D two-dimensional code using a narrow annulus test case.

  9. Experimental investigations, modeling, and analyses of high-temperature devices for space applications: Part 1. Final report, June 1996--December 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournier, J.; El-Genk, M.S.; Huang, L.

    1999-01-01

    The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.

  10. Experimental investigations, modeling, and analyses of high-temperature devices for space applications: Part 2. Final report, June 1996--December 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournier, J.; El-Genk, M.S.; Huang, L.

    1999-01-01

    The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code was to compare the experimental measurements with computer simulations, upgrade the model as appropriate, and conduct investigations of various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute of Space and Nuclear Power Studies participated in vacuum testing of PX series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consisted of a sodium pressure loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX series cells which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady state performance of systems of cells.

  11. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun

    2004-05-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.

  12. Design of Critical Components

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Zaretsky, Erwin V.

    2001-01-01

    Critical component design is based on minimizing product failures that result in loss of life. Potential catastrophic failures are reduced to secondary failures, where components are removed for cause or for operating time in the system. Issues of liability and cost of component removal become of paramount importance. Deterministic design with factors of safety and probabilistic design each address the problem but lack the essential characteristics for the design of critical components. In deterministic design and fabrication there are heuristic rules and safety factors developed over time for large sets of structural/material components. These factors did not come without cost: many designs failed, and many rules (codes) have standing committees to oversee their proper usage and enforcement. In probabilistic design, not only are failures a given, the failures are calculated; an element of risk is assumed based on empirical failure data for large classes of component operations. Failure of a class of components can be predicted, yet one cannot predict when a specific component will fail. The analogy is to the life insurance industry, where very careful statistics are kept on classes of individuals. For a specific class, life span can be predicted within statistical limits, yet the life span of a specific member of that class cannot be predicted.

  13. Aerodynamic-structural model of offwind yacht sails

    NASA Astrophysics Data System (ADS)

    Mairs, Christopher M.

    An aerodynamic-structural model of offwind yacht sails was created that is useful in predicting sail forces. Two sails were examined experimentally and computationally at several wind angles to explore a variety of flow regimes. The accuracy of the numerical solutions was measured by comparing to experimental results. The two sails examined were a Code 0 and a reaching asymmetric spinnaker. During experiment, balance, wake, and sail shape data were recorded for both sails in various configurations. Two computational steps were used to evaluate the computational model. First, an aerodynamic flow model that includes viscosity effects was used to examine the experimental flying shapes that were recorded. Second, the aerodynamic model was combined with a nonlinear, structural, finite element analysis (FEA) model. The aerodynamic and structural models were used iteratively to predict final flying shapes of offwind sails, starting with the design shapes. The Code 0 has relatively low camber and is used at small angles of attack. It was examined experimentally and computationally at a single angle of attack in two trim configurations, a baseline and overtrimmed setting. Experimentally, the Code 0 was stable and maintained large flow attachment regions. The digitized flying shapes from experiment were examined in the aerodynamic model. Force area predictions matched experimental results well. When the aerodynamic-structural tool was employed, the predictive capability was slightly worse. The reaching asymmetric spinnaker has higher camber and operates at higher angles of attack than the Code 0. Experimentally and computationally, it was examined at two angles of attack. Like the Code 0, at each wind angle, baseline and overtrimmed settings were examined. Experimentally, sail oscillations and large flow detachment regions were encountered. The computational analysis began by examining the experimental flying shapes in the aerodynamic model. 
In the baseline setting, the computational force predictions were fair at both wind angles examined. Force predictions were much improved in the overtrimmed setting when the sail was highly stalled and more stable. The same trends in force prediction were seen when employing the aerodynamic-structural model. Predictions were good to fair in the baseline setting but improved in the overtrimmed configuration.

  14. Noise of Embedded High Aspect Ratio Nozzles

    NASA Technical Reports Server (NTRS)

    Bridges, James E.

    2011-01-01

    A family of high aspect ratio nozzles was designed to provide a parametric database of canonical embedded propulsion concepts. Nozzle throat geometries with aspect ratios of 2:1, 4:1, and 8:1 were chosen, all with convergent nozzle areas. The transition from the typical round duct to the rectangular nozzle was designed very carefully to produce a flow at the nozzle exit that was uniform and free from swirl. Once the basic rectangular nozzles were designed, external features common to embedded propulsion systems were added: extended lower lip (a.k.a. bevel, aft deck), differing sidewalls, and chevrons. For the latter, detailed Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) simulations were made to predict the thrust performance and to optimize parameters such as bevel length, and chevron penetration and azimuthal curvature. Seventeen of these nozzles were fabricated at a scale providing a throat area equivalent to a 2.13-inch-diameter circle. The seventeen nozzles were tested for far-field noise, and selected data are presented here on the effects of aspect ratio, bevel length, and chevron count and penetration. The sound field of the 2:1 aspect ratio rectangular jet was very nearly axisymmetric, but the 4:1 and 8:1 were not, the noise on their minor axes being louder than on the major axes. Adding bevel length increased the noise of these nozzles, especially on their minor axes, both toward the long and short sides of the beveled nozzle. Chevrons were only added to the 2:1 rectangular jet. Adding 4 chevrons per wide side produced some decrease at aft angles, but increased the high frequency noise at right angles to the jet flow. This trend increased with increasing chevron penetration. Doubling the number of chevrons while maintaining their penetration decreased these effects. Empirical models of the parametric effects of these nozzles were constructed and quantify the trends stated above.
Because it is the objective of the Supersonics Project that future design work be done more by physics-based computations and less by experiments, several codes under development were evaluated against these test cases. Preliminary results show that the RANS-based code JeNo predicts the spectral directivity of the low aspect ratio jets well, but has no capability to predict the non-axisymmetry. An effort to address this limitation, using the RANS-based code of Leib and Goldstein, overpredicted the impact of aspect ratio. The broadband shock noise code RISN, also limited to axisymmetric assumptions, did a good job of predicting the spectral directivity of the underexpanded 2:1 cold jet case, but was not as successful on high aspect ratio jets, particularly when they are hot. All results are preliminary because the underlying CFD has not been validated yet. An effort using a Large Eddy Simulation code by Stanford University predicted noise that agreed with experiments to within a few dB.

  15. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects; and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  16. Benchmarking of neutron production of heavy-ion transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remec, I.; Ronningen, R. M.; Heilbronn, L.

    Document available in abstract form only. Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.

  17. With or without you: predictive coding and Bayesian inference in the brain

    PubMed Central

    Aitchison, Laurence; Lengyel, Máté

    2018-01-01

    Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic/representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084
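The predictive-coding motif described above can be caricatured in a few lines: a unit carries a prediction, compares it against its input, and nudges the prediction by a fraction of the resulting error. The toy sketch below is a generic illustration of that error-driven update, not a model from the review; the function name and learning rate are invented.

```python
# Toy sketch of the predictive-coding motif: prediction error drives the
# update, so the error (and hence the "surprise" signal) shrinks as the
# prediction converges on a repeated input. Purely illustrative.
def predictive_coding_step(prediction, observation, learning_rate=0.1):
    error = observation - prediction           # prediction-error signal
    return prediction + learning_rate * error  # error-driven update

p = 0.0
for x in [1.0, 1.0, 1.0, 1.0]:  # repeated input: error decays geometrically
    p = predictive_coding_step(p, x)
print(round(p, 4))  # 1 - 0.9**4 = 0.3439
```

The review's point is that this same motif can serve several computational goals; under particular generative-model assumptions the update above coincides with an approximate Bayesian posterior update, but nothing in the motif itself requires that interpretation.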

  18. Meanline Analysis of Turbines with Choked Flow in the Object-Oriented Turbomachinery Analysis Code

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.

    2016-01-01

    The prediction of turbomachinery performance characteristics is an important part of the conceptual aircraft engine design process. During this phase, the designer must examine the effects of a large number of turbomachinery design parameters to determine their impact on overall engine performance and weight. The lack of detailed design information available in this phase necessitates the use of simpler meanline and streamline methods to determine the turbomachinery geometry characteristics and provide performance estimates prior to more detailed CFD (Computational Fluid Dynamics) analyses. While a number of analysis codes have been developed for this purpose, most are written in outdated software languages and may be difficult or impossible to apply to new, unconventional designs. The Object-Oriented Turbomachinery Analysis Code (OTAC) is currently being developed at NASA Glenn Research Center to provide a flexible meanline and streamline analysis capability in a modern object-oriented language. During the development and validation of OTAC, a limitation was identified in the code's ability to analyze and converge turbines as the flow approached choking. This paper describes a series of changes which can be made to typical OTAC turbine meanline models to enable the assessment of choked flow up to limit load conditions. Results produced with this revised model setup are provided in the form of turbine performance maps and are compared to published maps.

  19. Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.

    2009-12-01

The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter (TX) rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core rod antennas. This quasi-static analysis is applicable at low frequencies, where eddy currents are negligible, and it can handle solid or hollow cores with insulation of finite thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code can predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the complex permeability corresponding to the desired core magnetic flux density. To use the linear modeling code with nonlinear core materials, the correct complex permeability must be supplied for each specific core magnetic flux density. To test the modeling code, we demonstrated that it accurately predicts changes in the electrical parameters associated with variations in rod length and core thickness for antennas wound from low-carbon-steel wire. These tests demonstrate that the modeling code successfully predicted the changes in the rod antenna's characteristics under high-current, nonlinear conditions due to changes in the physical dimensions of the rod, provided that the flux density in the core was held constant so that the complex permeability did not change.
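The rod's effective permeability is limited by the core's shape as well as its material, which is why the physical dimensions matter so much above. A textbook sketch of that shape effect (the long-rod demagnetizing-factor approximation is assumed here; it is not the paper's quasi-static algorithm):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def demag_factor(m):
    """Axial demagnetizing factor of a long rod with slenderness
    m = length/diameter (classic long-rod approximation)."""
    return (math.log(2.0 * m) - 1.0) / m**2

def rod_inductance(n_turns, length, radius, mu_r):
    """Low-frequency inductance of a winding on a finite permeable rod.
    The apparent (rod) permeability saturates near 1/D for large mu_r."""
    D = demag_factor(length / (2.0 * radius))
    mu_rod = mu_r / (1.0 + D * (mu_r - 1.0))
    area = math.pi * radius**2
    return MU0 * mu_rod * n_turns**2 * area / length
```

The saturation of the apparent permeability at roughly 1/D explains why lengthening the rod, not just choosing a higher-permeability material, raises the achievable moment.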

  20. Empirical predictions of hypervelocity impact damage to the space station

    NASA Technical Reports Server (NTRS)

    Rule, W. K.; Hayashida, K. B.

    1991-01-01

A family of user-friendly, DOS PC-based Microsoft BASIC programs written to provide spacecraft designers with empirical predictions of space-debris damage to orbiting spacecraft is described. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple-style bumper and the pressure wall. Predictions are based on data sets of experimental results obtained by simulating debris impacts on spacecraft using light-gas guns on Earth. A module of the program facilitates the creation of the database of experimental results used by the damage-prediction modules of the code. The user has the choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall. One prediction module is based on fitting low-order polynomials through subsets of the experimental data. Another prediction module fits functions based on nondimensional parameters through the data. The last prediction technique is a unique approach based on weighting the experimental data according to distance from the design point.
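The last module weights experimental data by distance from the design point; one plausible reading of that idea is inverse-distance weighting. A sketch under that assumption (the function name, exponent, and data layout are hypothetical, not the BASIC program's actual scheme):

```python
def idw_predict(design_point, data, power=2.0):
    """Predict a damage value at design_point by inverse-distance
    weighting of experimental (point, value) pairs. Illustrative only."""
    num = den = 0.0
    for point, value in data:
        d2 = sum((a - b) ** 2 for a, b in zip(design_point, point))
        if d2 == 0.0:
            return value          # exact match in the database
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den
```

With two equidistant data points the prediction is their mean, and a query that coincides with an experiment returns that experiment's value exactly.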

  1. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.
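The cortical variants reviewed share a common computational loop: a top-down prediction is subtracted from the input, and the residual error drives updates to the internal representation. A bare-bones, Rao-Ballard-flavored sketch (the weight matrix, learning rate, and dimensions are arbitrary illustrations, not drawn from any specific reviewed model):

```python
def predict(W, r):
    """Top-down prediction x_hat = W r (plain matrix-vector product)."""
    return [sum(wij * rj for wij, rj in zip(row, r)) for row in W]

def pc_infer(x, W, eta=0.1, steps=200):
    """Iterate the representation r so the prediction error e = x - W r
    is driven toward zero; the error units carry the residual signal."""
    r = [0.0] * len(W[0])
    for _ in range(steps):
        e = [xi - xhi for xi, xhi in zip(x, predict(W, r))]
        for j in range(len(r)):                  # r <- r + eta * W^T e
            r[j] += eta * sum(W[i][j] * e[i] for i in range(len(e)))
    return r
```

The algorithms surveyed differ mainly in what replaces this linear generative model and in how the error and representation updates map onto neural populations.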

  2. Validity of ICD-9-CM Coding for Identifying Incident Methicillin-Resistant Staphylococcus aureus (MRSA) Infections: Is MRSA Infection Coded as a Chronic Disease?

    PubMed Central

    Schweizer, Marin L.; Eber, Michael R.; Laxminarayan, Ramanan; Furuno, Jon P.; Popovich, Kyle J.; Hota, Bala; Rubin, Michael A.; Perencevich, Eli N.

    2013-01-01

BACKGROUND AND OBJECTIVE Investigators and medical decision makers frequently rely on administrative databases to assess methicillin-resistant Staphylococcus aureus (MRSA) infection rates and outcomes. The validity of this approach remains unclear. We sought to assess the validity of the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) code for infection with drug-resistant microorganisms (V09) for identifying culture-proven MRSA infection. DESIGN Retrospective cohort study. METHODS All adults admitted to 3 geographically distinct hospitals between January 1, 2001, and December 31, 2007, were assessed for presence of incident MRSA infection, defined as an MRSA-positive clinical culture obtained during the index hospitalization, and presence of the V09 ICD-9-CM code. The κ statistic was calculated to measure the agreement between presence of MRSA infection and assignment of the V09 code. Sensitivities, specificities, positive predictive values, and negative predictive values were calculated. RESULTS There were 466,819 patients discharged during the study period. Of the 4,506 discharged patients (1.0%) who had the V09 code assigned, 31% had an incident MRSA infection, 20% had prior history of MRSA colonization or infection but did not have an incident MRSA infection, and 49% had no record of MRSA infection during the index hospitalization or the previous hospitalization. The V09 code identified MRSA infection with a sensitivity of 24% (range, 21%–34%) and positive predictive value of 31% (range, 22%–53%). The agreement between assignment of the V09 code and presence of MRSA infection had a κ coefficient of 0.26 (95% confidence interval, 0.25–0.27). CONCLUSIONS In its current state, the ICD-9-CM code V09 is not an accurate predictor of MRSA infection and should not be used to measure rates of MRSA infection. PMID:21460469
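The reported sensitivity, predictive values, and κ all derive from the same 2×2 table of code assignment versus culture-proven infection. A sketch of those standard formulas (the counts in the tests are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and Cohen's kappa
    from a 2x2 table of test result versus true status."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / n                            # observed agreement
    pe = ((tp + fp) * (tp + fn)
          + (fn + tn) * (fp + tn)) / n**2         # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa
```

A perfect classifier yields κ = 1, while a classifier whose positives are spread at chance levels across true statuses yields κ = 0, which is why a κ of 0.26 signals poor agreement.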

  3. KEWPIE2: A cascade code for the study of dynamical decay of excited nuclei

    NASA Astrophysics Data System (ADS)

    Lü, Hongliang; Marchix, Anthony; Abe, Yasuhisa; Boilley, David

    2016-03-01

KEWPIE, a cascade code devoted to investigating the dynamical decay of excited nuclei, specially designed for treating the very-low-probability events involved in the synthesis of super-heavy nuclei formed in fusion-evaporation reactions, has been improved and rewritten in the C++ programming language to become KEWPIE2. The current version of the code comprises various nuclear models concerning light-particle emission, the fission process, and the statistical properties of excited nuclei. General features of the code, such as the numerical scheme and the main physical ingredients, are described in detail. Typical calculations performed in the present paper show that the theoretical predictions are generally in accordance with experimental data. Furthermore, since the values of some input parameters can be determined neither theoretically nor experimentally, a sensitivity analysis is presented: we systematically investigate the effects of using different parameter values and reaction models on the final results. As expected, in the case of heavy nuclei, the fission process plays the most crucial role in the theoretical predictions. This work should prove essential for the numerical modeling of fusion-evaporation reactions.

  4. Forces and moments on a slender, cavitating body

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hailey, C.E.; Clark, E.L.; Buffington, R.J.

    1988-01-01

Recently a numerical code has been developed at Sandia National Laboratories to predict the pitching moment, normal force, and axial force of a slender, supercavitating shape. The potential flow about the body and cavity is calculated using an axial distribution of source/sink elements. The cavity surface is assumed to be a constant-pressure streamline extending beyond the base of the model. A slender-body approximation is used to model the crossflow at small angles of attack. A significant extension of previous work in cavitation flow is the inclusion of laminar and turbulent boundary-layer solutions on the body. Predictions with this code for axial force at zero angle of attack show good agreement with experiments. There are virtually no published data available with which to benchmark the pitching-moment and normal-force predictions. An experiment was therefore designed to measure forces and moments on a supercavitating shape; the primary reason for the test was to obtain much-needed data to benchmark the hydrodynamic force and moment predictions. Since the numerical prediction is for supercavitating shapes at very small cavitation numbers, the experiment was designed as a ventilated-cavity test. This paper describes the experimental procedure used to measure the pitching moment, axial and normal forces, and base pressure on a slender body with a ventilated cavity. Limited results are presented for pitching moment and normal force. 5 refs., 7 figs.

  5. Modeling of impulsive propellant reorientation

    NASA Technical Reports Server (NTRS)

    Hochstein, John I.; Patag, Alfredo E.; Chato, David J.

    1988-01-01

The impulsive propellant reorientation process is modeled using the Energy Calculations for Liquid Propellants in a Space Environment (ECLIPSE) code. A brief description of the process and the computational model is presented. Code validation is documented via comparison to experimentally derived data for small-scale tanks. Predictions of reorientation performance are presented for two tanks designed for use in flight experiments and for a proposed full-scale OTV tank. A new dimensionless parameter is developed to correlate reorientation performance in geometrically similar tanks. Its success is demonstrated.

  6. Automated and fast building of three-dimensional RNA structures.

    PubMed

    Zhao, Yunjie; Huang, Yangyu; Gong, Zhou; Wang, Yanjie; Man, Jianfen; Xiao, Yi

    2012-01-01

Building tertiary structures of non-coding RNA is required to understand their functions and to design new molecules. Current algorithms for RNA tertiary structure prediction give satisfactory accuracy only for RNAs of small size and simple topology, and many of them need manual manipulation. Here, we present an automated and fast program, 3dRNA, for RNA tertiary structure prediction with reasonable accuracy for RNAs of larger size and complex topology.

7. Calculation of design load for the MOD-5A 7.3 MW wind turbine system

    NASA Technical Reports Server (NTRS)

    Mirandy, L.; Strain, J. C.

    1995-01-01

Design loads are presented for the General Electric MOD-5A wind turbine. The MOD-5A system consists of a 400 ft. diameter, upwind, two-bladed, teetered rotor connected to a 7.3 MW variable-speed generator. Fatigue loads are specified in the form of histograms for the 30 year life of the machine, while limit (or maximum) loads have been derived from transient dynamic analysis at critical operating conditions. Loads prediction was accomplished using state-of-the-art aeroelastic analyses developed at General Electric. Features of the primary predictive tool, the Transient Rotor Analysis Code (TRAC), are described in the paper. Key to the load predictions are the following wind models: (1) yearly mean wind distribution; (2) mean wind variations during operation; (3) number of start/shutdown cycles; (4) spatially large gusts; and (5) spatially small gusts (local turbulence). The methods used to develop statistical distributions from load calculations represent an extension of procedures used in past wind programs and are believed to be a significant contribution to Wind Turbine Generator analysis. Test/theory correlations are presented to demonstrate the code's load-predictive capability and to support the wind models used in the analysis. In addition, MOD-5A loads are compared with those of existing machines. The MOD-5A design was performed by the General Electric Company, Advanced Energy Program Department, under Contract DEN3-153 with NASA Lewis Research Center and sponsored by the Department of Energy.

  8. Planning, creating and documenting a NASTRAN finite element model of a modern helicopter

    NASA Technical Reports Server (NTRS)

    Gabal, R.; Reed, D.; Ricks, R.; Kesack, W.

    1985-01-01

    Mathematical models based on the finite element method of structural analysis as embodied in the NASTRAN computer code are widely used by the helicopter industry to calculate static internal loads and vibration of airframe structure. The internal loads are routinely used for sizing structural members. The vibration predictions are not yet relied on during design. NASA's Langley Research Center sponsored a program to conduct an application of the finite element method with emphasis on predicting structural vibration. The Army/Boeing CH-47D helicopter was used as the modeling subject. The objective was to engender the needed trust in vibration predictions using these models and establish a body of modeling guides which would enable confident future prediction of airframe vibration as part of the regular design process.

  9. Wind turbine design codes: A preliminary comparison of the aerodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buhl, M.L. Jr.; Wright, A.D.; Tangler, J.L.

    1997-12-01

The National Wind Technology Center of the National Renewable Energy Laboratory is comparing several computer codes used to design and analyze wind turbines. The first part of this comparison is to determine how well the programs predict the aerodynamic behavior of turbines with no structural degrees of freedom. Without general agreement on the aerodynamics, it is futile to try to compare the structural response due to the aerodynamic input. In this paper, the authors compare the aerodynamic loads for three programs: Garrad Hassan's BLADED, their own WT-PERF, and the University of Utah's YawDyn. This report documents a work in progress and compares only two-bladed, downwind turbines.

  10. Design of hat-stiffened composite panels loaded in axial compression

    NASA Astrophysics Data System (ADS)

    Paul, T. K.; Sinha, P. K.

    An integrated step-by-step analysis procedure for the design of axially compressed stiffened composite panels is outlined. The analysis makes use of the effective width concept. A computer code, BUSTCOP, is developed incorporating various aspects of buckling such as skin buckling, stiffener crippling and column buckling. Other salient features of the computer code include capabilities for generation of data based on micromechanics theories and hygrothermal analysis, and for prediction of strength failure. Parametric studies carried out on a hat-stiffened structural element indicate that, for all practical purposes, composite panels exhibit higher structural efficiency. Some hybrid laminates with outer layers made of aluminum alloy also show great promise for flight vehicle structural applications.

  11. Aerodynamic shape optimization of Airfoils in 2-D incompressible flow

    NASA Astrophysics Data System (ADS)

    Rangasamy, Srinivethan; Upadhyay, Harshal; Somasekaran, Sandeep; Raghunath, Sreekanth

    2010-11-01

An optimization framework was developed for maximizing the region of a 2-D airfoil immersed in laminar flow while enhancing aerodynamic performance. It uses a genetic algorithm over a population of 125, across 1000 generations, to optimize the airfoil. On a stand-alone computer, a run takes about an hour to obtain a converged solution. The airfoil geometry was generated using two Bezier curves: one to represent the thickness and the other the camber of the airfoil. The airfoil profile was generated by adding and subtracting the thickness curve from the camber curve. The coefficients of lift and drag were computed using the potential velocity distribution obtained from a panel code, and a boundary-layer transition prediction code was used to predict the location of the onset of transition. The objective function of a particular design is evaluated as the weighted average of aerodynamic characteristics at various angles of attack. Optimization was carried out for several objective functions, and the airfoil designs obtained were analyzed.
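The two-curve construction can be sketched directly: each curve is a Bezier polynomial evaluated by de Casteljau's algorithm, and the surfaces come from adding and subtracting thickness about the camber line. The control points below are arbitrary illustrations, not the paper's design variables:

```python
def bezier(ctrl, t):
    """De Casteljau evaluation of a Bezier curve at parameter t in [0, 1]."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def airfoil_surfaces(camber_ctrl, thick_ctrl, t):
    """Upper/lower surface ordinates at parameter t:
    camber plus/minus the thickness curve, as in the abstract."""
    c = bezier(camber_ctrl, t)
    th = bezier(thick_ctrl, t)
    return c + th, c - th
```

A genetic algorithm then treats the control-point ordinates as the chromosome, which keeps the design space small and every candidate geometry smooth.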

  12. Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Ashe, Thomas L.; Otting, William D.

    1993-01-01

The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion in the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling, which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance in significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
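A parametric cycle model of this kind reduces each component to a few algebraic relations. A much-simplified sketch of how cycle efficiency depends on the temperature and pressure ratios (single shaft, no recuperator; the component efficiencies and working-gas gamma are illustrative assumptions, not the NASA model's values):

```python
def brayton_efficiency(t_ratio, pr, gamma=1.667, eta_t=0.9, eta_c=0.85):
    """Thermal efficiency of a simple closed Brayton cycle.
    t_ratio = turbine inlet T / compressor inlet T; pr = pressure ratio.
    All work and heat terms are per unit cp * T1."""
    k = (gamma - 1.0) / gamma
    w_turb = eta_t * t_ratio * (1.0 - pr ** (-k))  # turbine work
    w_comp = (pr ** k - 1.0) / eta_c               # compressor work
    q_in = t_ratio - 1.0 - w_comp                  # heat added after compression
    return (w_turb - w_comp) / q_in
```

Even this toy version reproduces the qualitative trade the abstract describes: efficiency rises with cycle temperature ratio at fixed pressure ratio, which is what drives turbine-inlet-temperature technology studies.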

  13. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  14. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
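The model's accuracy is quoted in PSNR, which is a deterministic function of the mean squared error that such distortion models actually estimate. A quick sketch of that conversion for 8-bit imagery (the sample pixel values in the test are arbitrary):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(mse_value, peak=255.0):
    """Peak signal-to-noise ratio in dB from mean squared error."""
    return 10.0 * math.log10(peak ** 2 / mse_value)
```

Because PSNR is logarithmic in MSE, a "2-dB maximum error" bound corresponds to the model's estimated distortion power being within roughly a factor of 1.6 of the true value.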

  15. An FPGA Implementation to Detect Selective Cationic Antibacterial Peptides

    PubMed Central

    Polanco González, Carlos; Nuño Maganda, Marco Aurelio; Arias-Estrada, Miguel; del Rio, Gabriel

    2011-01-01

Exhaustive prediction of physicochemical properties of peptide sequences is used in different areas of biological research. One example is the identification of selective cationic antibacterial peptides (SCAPs), which may be used in the treatment of different diseases. Due to the discrete nature of peptide sequences, the physicochemical-properties calculation is considered a high-performance computing problem. A competitive solution for this class of problems is to embed algorithms into dedicated hardware. In the present work we present the adaptation, design and implementation of an algorithm for SCAPs prediction on a Field Programmable Gate Array (FPGA) platform. Four physicochemical-property codes useful in the identification of peptide sequences with potential selective antibacterial activity were implemented on an FPGA board. The speed-up gained in a single-copy implementation was up to 108 times compared with a single Intel processor, cycle for cycle. The inherent scalability of our design allows for replication of this code onto multiple FPGA cards, with corresponding further improvements in speed. Our results describe the first embedded SCAPs-prediction solution and constitute the grounds to efficiently perform exhaustive analysis of the sequence-physicochemical-properties relationship of peptides. PMID:21738652
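The four FPGA-implemented property codes are not specified in the abstract, so the pair below is purely illustrative of the kind of per-sequence arithmetic involved: net charge at neutral pH and mean hydropathy (the hydropathy values are the standard Kyte-Doolittle scale):

```python
KD = {  # Kyte-Doolittle hydropathy values per residue
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2}

def net_charge(seq):
    """Crude net charge at neutral pH: +1 per K/R, -1 per D/E (H ignored)."""
    return sum(seq.count(a) for a in 'KR') - sum(seq.count(a) for a in 'DE')

def mean_hydropathy(seq):
    """Grand average of hydropathy (GRAVY) over the sequence."""
    return sum(KD[a] for a in seq) / len(seq)
```

Both properties reduce to table lookups and accumulations over the sequence, which is exactly the sort of fixed-point pipeline that maps well onto FPGA logic.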

  16. Advances and Challenges In Uncertainty Quantification with Application to Climate Prediction, ICF design and Science Stockpile Stewardship

    NASA Astrophysics Data System (ADS)

    Klein, R.; Woodward, C. S.; Johannesson, G.; Domyancic, D.; Covey, C. C.; Lucas, D. D.

    2012-12-01

    Uncertainty Quantification (UQ) is a critical field within 21st century simulation science that resides at the very center of the web of emerging predictive capabilities. The science of UQ holds the promise of giving much greater meaning to the results of complex large-scale simulations, allowing for quantifying and bounding uncertainties. This powerful capability will yield new insights into scientific predictions (e.g. Climate) of great impact on both national and international arenas, allow informed decisions on the design of critical experiments (e.g. ICF capsule design, MFE, NE) in many scientific fields, and assign confidence bounds to scientifically predictable outcomes (e.g. nuclear weapons design). In this talk I will discuss a major new strategic initiative (SI) we have developed at Lawrence Livermore National Laboratory to advance the science of Uncertainty Quantification at LLNL focusing in particular on (a) the research and development of new algorithms and methodologies of UQ as applied to multi-physics multi-scale codes, (b) incorporation of these advancements into a global UQ Pipeline (i.e. a computational superstructure) that will simplify user access to sophisticated tools for UQ studies as well as act as a self-guided, self-adapting UQ engine for UQ studies on extreme computing platforms and (c) use laboratory applications as a test bed for new algorithms and methodologies. The initial SI focus has been on applications for the quantification of uncertainty associated with Climate prediction, but the validated UQ methodologies we have developed are now being fed back into Science Based Stockpile Stewardship (SSS) and ICF UQ efforts. 
To make advances on several of these UQ grand challenges, I will focus in this talk on the following three research areas in our Strategic Initiative: error estimation in multi-physics and multi-scale codes; tackling the "curse of high dimensionality"; and development of an advanced UQ computational pipeline to enable complete UQ workflow and analysis for ensemble runs at the extreme scale (e.g., exascale), with self-guiding adaptation in the UQ Pipeline engine. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).

  17. Modeling of Flow Blockage in a Liquid Metal-Cooled Reactor Subassembly with a Subchannel Analysis Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Hae-Yong; Ha, Kwi-Seok; Chang, Won-Pyo

The local blockage in a subassembly of a liquid metal-cooled reactor (LMR) is of importance to plant safety because of the compact design and the high power density of the core. To analyze the thermal-hydraulic parameters in a subassembly of a liquid metal-cooled reactor with a flow blockage, the Korea Atomic Energy Research Institute has developed the MATRA-LMR-FB code. This code uses the distributed resistance model to describe the sweeping flow formed by the wire wrap around the fuel rods and to model the recirculation flow after a blockage. The hybrid difference scheme is also adopted for the description of the convective terms in the recirculating wake region of low velocity. Some state-of-the-art turbulent mixing models were implemented in the code, and the models suggested by Rehme and by Zhukov were analyzed and found to be appropriate for the description of a flow blockage in an LMR subassembly. The MATRA-LMR-FB code accurately predicts the experimental data of the Oak Ridge National Laboratory 19-pin bundle with a blockage for both high-flow and low-flow conditions. The influences of the distributed resistance model, the hybrid difference method, and the turbulent mixing models are evaluated step by step against the experimental data. The appropriateness of the models has also been evaluated through a comparison with results from a COMMIX code calculation. The flow blockage for the KALIMER design has been analyzed with the MATRA-LMR-FB code and compared with the SABRE code to guarantee the design's safety against flow blockage.
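The hybrid difference scheme blends central and upwind differencing according to the cell Peclet number, which is what keeps the low-velocity recirculating wake stable. A sketch of Patankar's classic hybrid neighbor coefficient (a standard formulation, assumed here rather than taken from MATRA-LMR-FB):

```python
def hybrid_face_coeff(F, D):
    """Patankar's hybrid scheme: neighbor coefficient for a control-volume
    face with convective flux F and diffusive conductance D.
    Equivalent to central differencing for |Pe| = |F/D| < 2,
    and to pure upwinding (diffusion dropped) otherwise."""
    return max(F, D + F / 2.0, 0.0)
```

At zero flow the coefficient reduces to pure diffusion, for strong flow toward the node it reduces to the convective flux alone, and for strong flow away it vanishes, which is the upwind limit on each side.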

  18. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. 
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
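The sampling modes listed above rest on Latin hypercube designs, which guarantee one sample per equal-probability stratum in every dimension. A minimal sketch of the basic construction (not MADS's Improved Distributed Sampling variant):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Basic Latin hypercube sample on the unit cube: in each dimension,
    exactly one sample falls in each of n_samples equal-width strata."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)           # random pairing of strata across dims
        cols.append([(s + rng.random()) / n_samples for s in strata])
    return list(zip(*cols))           # one (x1, ..., xd) tuple per sample
```

Compared with plain Monte Carlo, the stratification gives much better marginal coverage for the same ensemble size, which matters when each sample is an expensive transport simulation.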

  19. Development of 3D pseudo pin-by-pin calculation methodology in ANC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Mayhue, L.; Huria, H.

    2012-07-01

Advanced core and fuel assembly designs have been developed to improve operational flexibility and economic performance and to further enhance the safety features of nuclear power plants. The simulation of these new designs, along with strongly heterogeneous fuel loading, has brought new challenges to the reactor physics methodologies currently employed in the industrial codes for core analyses. Control rod insertion during normal operation is one operational feature of the AP1000(R) plant, Westinghouse's next-generation Pressurized Water Reactor (PWR) design. This design improves operational flexibility and efficiency but significantly challenges the conventional reactor physics methods, especially in pin power calculations. The mixed loading of fuel assemblies with significantly different neutron spectra causes a strong interaction between fuel assembly types that is not fully captured by the current core design codes. To overcome the weaknesses of the conventional methods, Westinghouse has developed a state-of-the-art 3D Pin-by-Pin Calculation Methodology (P3C) and successfully implemented it in the Westinghouse core design code ANC. The new methodology has been qualified and licensed for pin power prediction. The 3D P3C methodology, along with its application and validation, is discussed in the paper. (authors)

  20. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
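The kind of reduced mechanism described above can be as simple as a one-step global oxidation with an Arrhenius rate law, which is what makes it cheap enough to embed in a CFD code. The sketch below is purely illustrative: the pre-exponential factor, activation energy, and reaction orders are hypothetical placeholders, not values from the paper.

```python
# Hypothetical one-step global mechanism: Fuel + O2 -> Products, with an
# Arrhenius rate law  d[F]/dt = -A * exp(-Ea/(R*T)) * [F]**a * [O2]**b.
# All parameter values below are illustrative placeholders.
import numpy as np

R_UNIV = 8.314  # universal gas constant, J/(mol K)

def global_rate(conc_fuel, conc_o2, T, A=1.0e9, Ea=1.2e5, a=1.0, b=0.5):
    """Reaction rate (mol/(m^3 s)) for a hypothetical reduced mechanism.

    conc_fuel, conc_o2 : molar concentrations (mol/m^3)
    T : temperature (K)
    """
    k = A * np.exp(-Ea / (R_UNIV * T))  # Arrhenius rate constant
    return k * conc_fuel**a * conc_o2**b
```

Evaluating one such closed-form rate per cell per time step is orders of magnitude cheaper than integrating a detailed multistep mechanism, which is why only reduced models of this kind fit inside a CFD computation.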

  1. Mathematical description of complex chemical kinetics and application to CFD modeling codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.

  2. Formulation of aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1979-01-01

An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Supersonic second-order potential theory was examined in detail to meet this objective. Shock layer integral techniques were considered as an alternative means of predicting gross aerodynamic characteristics. Several numerical pilot codes were developed for simple three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the second-order computations indicated good agreement with higher-order solutions and experimental results for a variety of wing-like shapes and values of the hypersonic similarity parameter M delta approaching one.

  3. A numerical simulation of the full two-dimensional electrothermal de-icer pad. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Masiulaniec, Konstanty C.

    1988-01-01

The ability to predict the time-temperature history of electrothermal de-icer pads is important in the subsequent design of improved and more efficient versions. These de-icer pads are installed near the surface of aircraft components for the specific purpose of removing accreted ice. The proposed numerical model can incorporate the full 2-D geometry through a section of a region (i.e., a section of an airfoil), which current 1-D numerical codes are unable to do. Thus, the effects of irregular layers, curvature, etc., can now be accounted for in the thermal transients. Each layer in the actual geometry is mapped via a body-fitted coordinate transformation into uniform, rectangular computational grids. The relevant heat transfer equations are transformed and discretized. To model the phase change that may occur in any accreted ice, the phase-change equations are cast in an enthalpy formulation and likewise transformed and discretized. The code developed was tested against numerous classical numerical solutions, as well as against experimental de-icing data on a UH-1H rotor blade obtained from the NASA Lewis Research Center. The excellent comparisons obtained show that this code can be a useful tool in predicting the performance of current de-icer designs, as well as in the design of future models.
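The enthalpy formulation mentioned above handles melting without explicitly tracking the ice-water interface: enthalpy is the conserved variable, and temperature is recovered from it through a piecewise map in which the latent-heat plateau pins the temperature at the melting point. A minimal sketch of that map, with generic textbook property values for ice and water rather than the thesis's actual coefficients:

```python
def temperature_from_enthalpy(H, c_s=2.0e3, c_l=4.2e3, L=3.34e5, T_m=273.15):
    """Recover temperature (K) from specific enthalpy H (J/kg), with H = 0
    defined at the onset of melting.

    c_s, c_l : specific heats of solid ice and liquid water, J/(kg K)
    L        : latent heat of fusion, J/kg
    (Illustrative property values, not the thesis's coefficients.)
    """
    if H < 0.0:                  # solid ice, below the melting point
        return T_m + H / c_s
    if H <= L:                   # mushy zone: temperature pinned at T_m
        return T_m
    return T_m + (H - L) / c_l   # fully melted liquid water
```

Because every grid cell just stores H and applies this map, a cell passes through the phase change automatically as heat is added, with no special front-tracking logic.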

  4. A new code for Galileo

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1988-01-01

Over the past six to eight years, an extensive research effort was conducted to investigate advanced coding techniques which promised to yield more coding gain than is available with current NASA standard codes. The delay in Galileo's launch due to the temporary suspension of the shuttle program provided the Galileo project with an opportunity to evaluate the possibility of including some version of the advanced codes as a mission enhancement option. A study was initiated last summer to determine if substantial coding gain was feasible for Galileo and, if so, to recommend a suitable experimental code for use as a switchable alternative to the current NASA-standard code. The Galileo experimental code study resulted in the selection of a code with constraint length 15 and rate 1/4. The code parameters were chosen to optimize performance within cost and risk constraints consistent with retrofitting the new code into the existing Galileo system design and launch schedule. The particular code was recommended after a very limited search among good codes with the chosen parameters. It will theoretically yield about 1.5 dB enhancement under idealizing assumptions relative to the current NASA-standard code at Galileo's desired bit error rates. This ideal predicted gain includes enough cushion to meet the project's target of at least 1 dB enhancement under real, non-ideal conditions.
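For readers unfamiliar with convolutional codes of the kind discussed, the sketch below is a generic rate-1/n encoder: each input bit is shifted into a register and every generator tap mask produces one output bit by parity. The constraint length and taps used in the example are small toy values (the classic K=3 rate-1/2 code) for clarity, not the actual polynomials of the Galileo experimental code.

```python
def conv_encode(bits, gens, K):
    """Encode with a rate-1/len(gens) convolutional code of constraint
    length K. Each generator in gens is a K-bit tap mask; for each input
    bit, every generator emits the parity of the tapped register bits.
    (Toy parameters in the usage below, not the Galileo code.)"""
    state = 0
    out = []
    for b in bits:
        # Shift the new bit in and keep only the last K bits of history.
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") & 1)  # parity of taps
    return out

# Classic textbook example: K=3, rate 1/2, generators (7, 5) in octal.
encoded = conv_encode([1, 0, 1, 1], (0b111, 0b101), K=3)
```

A rate-1/4 constraint-length-15 code like Galileo's works identically, just with four 15-bit generators, so every input bit produces four channel bits and the decoder trellis has 2^14 states.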

  5. Predicting Spike Occurrence and Neuronal Responsiveness from LFPs in Primary Somatosensory Cortex

    PubMed Central

    Storchi, Riccardo; Zippo, Antonio G.; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E. M.

    2012-01-01

Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g. for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinics (e.g. for improving invasive Brain-Machine Interface devices). However the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication this gap in knowledge strongly limits our comprehension of neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of nonlinear simple equations that maximized the match between models and true neurons in terms of spike timings and Peri-Stimulus Time Histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e. on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role.
PMID:22586452

  6. Predicting spike occurrence and neuronal responsiveness from LFPs in primary somatosensory cortex.

    PubMed

    Storchi, Riccardo; Zippo, Antonio G; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E M

    2012-01-01

    Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g. for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinics (e.g. for improving invasive Brain-Machine Interface devices). However the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication this gap in knowledge strongly limits our comprehension of neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of nonlinear simple equations that maximized the match between models and true neurons in terms of spike timings and Peri Stimulus Time Histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e. on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role.
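A Peri-Stimulus Time Histogram of the kind used to score the models above is just the trial-averaged spike count per time bin, converted to a firing rate. A minimal sketch (the millisecond bin width matches the precision reported in the study; the helper name is illustrative):

```python
import numpy as np

def psth(spike_times_per_trial, t_start, t_stop, bin_ms=1.0):
    """Peri-Stimulus Time Histogram: mean firing rate (spikes/s) in each
    bin, averaged across trials. Spike times are in milliseconds relative
    to stimulus onset."""
    edges = np.arange(t_start, t_stop + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for trial in spike_times_per_trial:
        c, _ = np.histogram(trial, bins=edges)
        counts += c
    # Average over trials, then convert count/bin to spikes per second.
    return counts / len(spike_times_per_trial) / (bin_ms / 1000.0)
```

Comparing the PSTH of a fitted model against the PSTH of the recorded neuron (alongside individual spike timings) is one standard way to quantify how well a model reproduces stimulus-locked responses.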

  7. Status of VICTORIA: NRC peer review and recent code applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bixler, N.E.; Schaperow, J.H.

    1997-12-01

VICTORIA is a mechanistic computer code designed to analyze fission product behavior within a nuclear reactor coolant system (RCS) during a severe accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and transport and deposition of these materials within the RCS. A summary of the results and recommendations of an independent peer review of VICTORIA by the US Nuclear Regulatory Commission (NRC) is presented, along with recent applications of the code. The latter include analyses of a temperature-induced steam generator tube rupture sequence and post-test analyses of the Phebus FPT-1 test. The next planned Phebus test, FPT-4, will focus on fission product releases from a rubble bed, especially those of the less-volatile elements, and on the speciation of the released elements. Pretest analyses using VICTORIA to estimate the magnitude and timing of releases are presented. The predicted release of uranium is a matter of particular importance because of concern about filter plugging during the test.

  8. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
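For orientation, the crudest total reaction cross section estimate is purely geometric, treating the colliding nuclei as overlapping hard spheres; models such as Dostrovsky et al. and its modern successors refine this considerably with energy dependence and transparency effects. The sketch below is only the textbook geometric approximation, not the CEM or MCNP6 cross section models.

```python
import math

def geometric_reaction_xs(A_proj, A_targ, r0=1.2):
    """Geometric estimate of the total reaction cross section, in millibarns:
    sigma = pi * (r0 * (A_p^(1/3) + A_t^(1/3)))^2, with r0 in fm and
    1 fm^2 = 10 mb. A textbook approximation, not the CEM/MCNP6 models."""
    radius = r0 * (A_proj ** (1 / 3) + A_targ ** (1 / 3))  # summed radii, fm
    return math.pi * radius ** 2 * 10.0  # convert fm^2 -> mb

# Order-of-magnitude check: proton on aluminum (A = 27).
sigma_p_al = geometric_reaction_xs(1, 27)
```

Such a formula gives only an energy-independent upper-bound-like scale; the whole point of better inverse cross section models is capturing how the true cross section varies across the 50 MeV to 5 GeV region.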

  9. JDFTx: Software for joint density-functional theory

    DOE PAGES

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.; ...

    2017-11-14

Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.

  10. JDFTx: Software for joint density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.

Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.

  11. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerby, Leslie M.; Mashnik, Stepan G.

Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  12. Software Design Description for the Tidal Open-boundary Prediction System (TOPS)

    DTIC Science & Technology

    2010-05-04

Naval Research Laboratory, Stennis Space Center, MS 39529-5004. Report NRL/MR/7320--10-9209. Approved for public release; distribution is unlimited.

  13. Multiphysics Code Demonstrated for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Melis, Matthew E.

    1998-01-01

The utility of multidisciplinary analysis tools for aeropropulsion applications is being investigated at the NASA Lewis Research Center. The goal of this project is to apply Spectrum, a multiphysics code developed by Centric Engineering Systems, Inc., to simulate multidisciplinary effects in turbomachinery components. Many engineering problems today involve detailed computer analyses to predict the thermal, aerodynamic, and structural response of a mechanical system as it undergoes service loading. Analysis of aerospace structures generally requires attention in all three disciplinary areas to adequately predict component service behavior, and in many cases, the results from one discipline substantially affect the outcome of the other two. There are numerous computer codes currently available in the engineering community to perform such analyses in each of these disciplines. Many of these codes are developed and used in-house by a given organization, and many are commercially available. However, few, if any, of these codes are designed specifically for multidisciplinary analyses. The Spectrum code has been developed for performing fully coupled fluid, thermal, and structural analyses on a mechanical system with a single simulation that accounts for all simultaneous interactions, thus eliminating the requirement for running a large number of sequential, separate, disciplinary analyses. The Spectrum code has a true multiphysics analysis capability, which improves analysis efficiency as well as accuracy. Centric Engineering Systems, Inc., working with a team of Lewis and AlliedSignal Engines engineers, has been evaluating Spectrum for a variety of propulsion applications including disk quenching, drum cavity flow, aeromechanical simulations, and a centrifugal compressor flow simulation.

  14. Experimental Evaluation of Acoustic Engine Liner Models Developed with COMSOL Multiphysics

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Jones, Michael G.; Bertolucci, Brandon

    2017-01-01

    Accurate modeling tools are needed to design new engine liners capable of reducing aircraft noise. The purpose of this study is to determine if a commercially-available finite element package, COMSOL Multiphysics, can be used to accurately model a range of different acoustic engine liner designs, and in the process, collect and document a benchmark dataset that can be used in both current and future code evaluation activities. To achieve these goals, a variety of liner samples, ranging from conventional perforate-over-honeycomb to extended-reaction designs, were installed in one wall of the grazing flow impedance tube at the NASA Langley Research Center. The liners were exposed to high sound pressure levels and grazing flow, and the effect of the liner on the sound field in the flow duct was measured. These measurements were then compared with predictions. While this report only includes comparisons for a subset of the configurations, the full database of all measurements and predictions is available in electronic format upon request. The results demonstrate that both conventional perforate-over-honeycomb and extended-reaction liners can be accurately modeled using COMSOL. Therefore, this modeling tool can be used with confidence to supplement the current suite of acoustic propagation codes, and ultimately develop new acoustic engine liners designed to reduce aircraft noise.

  15. An overview of aeroelasticity studies for the National Aero-Space Plane

    NASA Technical Reports Server (NTRS)

    Ricketts, Rodney H.; Noll, Thomas E.; Whitlow, Woodrow, Jr.; Huttsell, Lawrence J.

    1993-01-01

The National Aero-Space Plane (NASP), or X-30, is a single-stage-to-orbit vehicle that is designed to take off and land on conventional runways. Research in aeroelasticity was conducted by NASA and the Wright Laboratory to support the design of a flight vehicle by the national contractor team. This research includes the development of new computational codes for predicting unsteady aerodynamic pressures. In addition, studies were conducted to determine the aerodynamic heating effects on vehicle aeroelasticity and to determine the effects of fuselage flexibility on the stability of the control systems. It also includes the testing of scale models to better understand the aeroelastic behavior of the X-30 and to obtain data for code validation and correlation. This paper presents an overview of the aeroelastic research which has been conducted to support the airframe design.

  16. Assessment of Current Jet Noise Prediction Capabilities

    NASA Technical Reports Server (NTRS)

Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas

    2008-01-01

An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined on the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity in the downstream angles. The statistical code JeNo v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.

  17. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    PubMed

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
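Sparse-coding models of the kind applied above infer, for each image patch, coefficients that minimize reconstruction error plus an L1 sparsity penalty over a learned dictionary of receptive fields. A generic way to solve that inference problem is proximal gradient descent (ISTA); the sketch below is one standard solver, not necessarily the method used by the three models in the paper.

```python
# Generic sparse-coding inference via ISTA (iterative soft-thresholding).
# Minimizes  0.5 * ||x - D a||^2 + lam * ||a||_1  over coefficients a.
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer sparse coefficients a for signal x under dictionary D
    (columns are basis functions / receptive fields)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)            # gradient of the quadratic term
        a = a - step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty.
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
    return a
```

Learning alternates between this inference step and dictionary updates; receptive field properties then emerge from the statistics of the training input, which is what lets abnormal rearing conditions be simulated simply by changing the input ensemble.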

  18. Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input

    PubMed Central

    Hunt, Jonathan J.; Dayan, Peter; Goodhill, Geoffrey J.

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields. PMID:23675290

  19. Aeroheating Design Issues for Reusable Launch Vehicles: A Perspective

    NASA Technical Reports Server (NTRS)

    Zoby, E. Vincent; Thompson, Richard A.; Wurster, Kathryn E.

    2004-01-01

An overview of basic aeroheating design issues for Reusable Launch Vehicles (RLV), which addresses the application of hypersonic ground-based testing, and computational fluid dynamic (CFD) and engineering codes, is presented. Challenges inherent to the prediction of aeroheating environments required for the successful design of the RLV Thermal Protection System (TPS) are discussed in conjunction with the importance of employing appropriate experimental/computational tools. The impact of the information garnered by using these tools in the resulting analyses, ultimately enhancing the RLV TPS design is illustrated. A wide range of topics is presented in this overview; e.g. the impact of flow physics issues such as boundary-layer transition, including effects of distributed and discrete roughness, shock-shock interactions, and flow separation/reattachment. Also, the benefit of integrating experimental and computational studies to gain an improved understanding of flow phenomena is illustrated. From computational studies, the effect of low-density conditions and of uncertainties in material surface properties on the computed heating rates are highlighted as well as the significant role of CFD in improving the Outer Mold Line (OML) definition to reduce aeroheating while maintaining aerodynamic performance. Appropriate selection of the TPS design trajectories and trajectory shaping to mitigate aeroheating levels and loads are discussed. Lastly, an illustration of an aeroheating design process is presented whereby data from hypersonic wind-tunnel tests are integrated with predictions from CFD codes and engineering methods to provide heating environments along an entry trajectory as required for TPS design.

  20. Aeroheating Design Issues for Reusable Launch Vehicles: A Perspective

    NASA Technical Reports Server (NTRS)

    Zoby, E. Vincent; Thompson, Richard A.; Wurster, Kathryn E.

    2004-01-01

An overview of basic aeroheating design issues for Reusable Launch Vehicles (RLV), which addresses the application of hypersonic ground-based testing, and computational fluid dynamic (CFD) and engineering codes, is presented. Challenges inherent to the prediction of aeroheating environments required for the successful design of the RLV Thermal Protection System (TPS) are discussed in conjunction with the importance of employing appropriate experimental/computational tools. The impact of the information garnered by using these tools in the resulting analyses, ultimately enhancing the RLV TPS design is illustrated. A wide range of topics is presented in this overview; e.g. the impact of flow physics issues such as boundary-layer transition, including effects of distributed and discrete roughness, shock-shock interactions, and flow separation/reattachment. Also, the benefit of integrating experimental and computational studies to gain an improved understanding of flow phenomena is illustrated. From computational studies, the effect of low-density conditions and of uncertainties in material surface properties on the computed heating rates are highlighted as well as the significant role of CFD in improving the Outer Mold Line (OML) definition to reduce aeroheating while maintaining aerodynamic performance. Appropriate selection of the TPS design trajectories and trajectory shaping to mitigate aeroheating levels and loads are discussed. Lastly, an illustration of an aeroheating design process is presented whereby data from hypersonic wind-tunnel tests are integrated with predictions from CFD codes and engineering methods to provide heating environments along an entry trajectory as required for TPS design.

  1. Determination of photovoltaic concentrator optical design specifications using performance modeling

    NASA Astrophysics Data System (ADS)

    Kerschen, Kevin A.; Levy, Sheldon L.

    The strategy used to develop an optical design specification for a 500X concentration photovoltaic module to be used with a 28-percent-efficient concentrator photovoltaic cell is reported. The computer modeling code (PVOPTICS) developed for this purpose, a Fresnel lens design strategy, and optical component specification procedures are described. Comparisons are made between the predicted performance and the measured performance of components fabricated to those specifications. An acrylic lens and a reflective secondary optical element have been tested, showing efficiencies exceeding 88 percent.
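As a first-order sanity check on numbers like those quoted above, module efficiency can be approximated as the product of cell efficiency and optical throughput. A minimal sketch using the abstract's figures; the 500 cm^2 lens aperture is a hypothetical value, not from the abstract:

```python
def module_efficiency(cell_eff, optical_eff):
    # First-order module efficiency: cell efficiency times optical throughput.
    return cell_eff * optical_eff

def cell_area(lens_area_cm2, geometric_concentration):
    # Geometric concentration X = lens aperture area / cell area.
    return lens_area_cm2 / geometric_concentration

# Values quoted in the abstract: 28% cell, 88% optics, 500X concentration.
eta = module_efficiency(0.28, 0.88)
print(f"first-order module efficiency: {eta:.1%}")                 # 24.6%
print(f"cell area for a 500 cm^2 lens: {cell_area(500.0, 500):.2f} cm^2")
```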

  2. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    PubMed

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic

  3. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecision.
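A liner optimization of the kind described can be sketched as a search over the impedance plane. The attenuation objective below is a toy surrogate standing in for a TBIEM3D farfield computation; the peak location, peak level, and grid resolution are all assumptions:

```python
# Toy attenuation objective standing in for a TBIEM3D farfield computation:
# attenuation peaks at an assumed optimal impedance zeta_opt and falls off
# smoothly away from it.
def attenuation_dB(resistance, reactance, zeta_opt=(2.0, -1.0), peak_dB=30.0):
    dr, dx = resistance - zeta_opt[0], reactance - zeta_opt[1]
    return peak_dB / (1.0 + dr * dr + dx * dx)

# Grid search over the (resistance, reactance) plane in steps of 0.1.
best = max(
    ((r / 10.0, x / 10.0) for r in range(0, 51) for x in range(-50, 51)),
    key=lambda z: attenuation_dB(*z),
)
print("optimal (resistance, reactance):", best)
print(f"attenuation there: {attenuation_dB(*best):.1f} dB")
```

The flat top of an objective like this mirrors the paper's observation that a relatively large region of the impedance plane gives near-optimal behavior, which is forgiving of manufacturing variation.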

  4. Behaviour of Reinforced Concrete Columns of Various Cross-Sections Subjected to Fire

    NASA Astrophysics Data System (ADS)

    Balaji, Aneesha; Muhamed Luquman, K.; Nagarajan, Praveen; Madhavan Pillai, T. M.

    2016-09-01

    Fire resistance is one of the crucial design requirements that are now mandatory in most design codes. Therefore, a thorough knowledge of the behaviour of structures exposed to fire is required. Columns are the most vulnerable structural members to fire as they can be exposed to fire from all sides. However, the data available for fire resistant design of columns are limited. Hence the present work is focused on the effect of cross-sectional shape of the column on fire resistance design. The various cross-sections considered are Square, Ell (L), Tee (T), and Plus (`+') shape. Also, the effect of the size, shape, and distribution of steel reinforcement on the fire resistance of columns is studied. As the procedure for determining fire resistance is not mentioned in Indian Standard code IS 456 (2000), the simplified method (500 °C isotherm method) recommended in EN 1992-1-2:2004 (E) (Eurocode 2) is adopted. The temperature profiles for various cross-sections are developed using the finite element method and these profiles are used to predict the fire resistance capability of compression members. The fire resistance based on both numerical and code based methods are evaluated and compared for various types of cross-section.
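The 500 °C isotherm method lends itself to a short sketch: concrete hotter than 500 °C is neglected and the remaining core is checked at ambient strength. The isotherm depth, material values, and steel reduction factor below are illustrative assumptions, and buckling and partial safety factors are deliberately ignored:

```python
def reduced_section_capacity_N(b_mm, isotherm_depth_mm, fck_MPa,
                               As_mm2, fy_MPa, k_steel=1.0):
    """Axial capacity sketch per the 500 C isotherm idea of EN 1992-1-2.

    Concrete hotter than 500 C is discarded; the remaining square core
    keeps its ambient-temperature strength. k_steel is the temperature-
    dependent steel strength-reduction factor from code tables (1.0 here
    is a placeholder). Buckling and partial factors are ignored.
    """
    b_red = max(b_mm - 2.0 * isotherm_depth_mm, 0.0)  # exposed on 4 sides
    return b_red * b_red * fck_MPa + As_mm2 * k_steel * fy_MPa  # newtons

# 300 mm square column; the 30 mm isotherm depth is an assumed value that
# would come from a temperature profile like those computed in the paper.
N = reduced_section_capacity_N(300.0, 30.0, fck_MPa=30.0,
                               As_mm2=2000.0, fy_MPa=500.0)
print(f"reduced-section axial capacity ~ {N / 1e3:.0f} kN")  # 2728 kN
```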

  5. CFD Simulation of Liquid Rocket Engine Injectors

    NASA Technical Reports Server (NTRS)

    Farmer, Richard; Cheng, Gary; Chen, Yen-Sen; Garcia, Roberto (Technical Monitor)

    2001-01-01

    Detailed design issues associated with liquid rocket engine injectors and combustion chamber operation require CFD methodology which simulates highly three-dimensional, turbulent, vaporizing, and combusting flows. The primary utility of such simulations involves predicting multi-dimensional effects caused by specific injector configurations. SECA, Inc. and Engineering Sciences, Inc. have been developing appropriate computational methodology for NASA/MSFC for the past decade. CFD tools and computers have improved dramatically during this time period; however, the physical submodels used in these analyses must still remain relatively simple in order to produce useful results. Simulations of clustered coaxial and impinger injector elements for hydrogen and hydrocarbon fuels, which account for real fluid properties, are the immediate goal of this research. The spray combustion codes are based on the FDNS CFD code and are structured to represent homogeneous and heterogeneous spray combustion. The homogeneous spray model treats the flow as a continuum of multi-phase, multicomponent fluids which move without thermal or velocity lags between the phases. Two heterogeneous models were developed: (1) a volume-of-fluid (VOF) model which represents the liquid core of coaxial or impinger jets and their atomization and vaporization, and (2) a Blob model which represents the injected streams as a cloud of droplets the size of the injector orifice which subsequently exhibit particle interaction, vaporization, and combustion. All of these spray models are computationally intensive, but this is unavoidable to accurately account for the complex physics and combustion which are to be predicted. Work is currently in progress to parallelize these codes to improve their computational efficiency. These spray combustion codes were used to simulate the three test cases which are the subject of the 2nd International Workshop on Rocket Combustion Modeling. Such test cases are considered by these investigators to be very valuable for code validation because combustion kinetics, turbulence models and atomization models based on low pressure experiments of hydrogen-air combustion do not adequately verify analytical or CFD submodels which are necessary to simulate rocket engine combustion. We wish to emphasize that the simulations which we prepared for this meeting are meant to test the accuracy of the approximations used in our general purpose spray combustion models, rather than represent a definitive analysis of each of the experiments which were conducted. Our goal is to accurately predict local temperatures and mixture ratios in rocket engines; hence predicting individual experiments is used only for code validation. To replace the conventional JANNAF standard axisymmetric finite-rate (TDK) computer code for performance prediction with CFD codes, such codes must possess two features. Firstly, they must be as easy to use, with comparable run times, for conventional performance predictions. Secondly, they must provide more detailed predictions of the flowfields near the injector face. Specifically, they must accurately predict the convective mixing of injected liquid propellants in terms of the injector element configurations.

  6. Requirements for facilities and measurement techniques to support CFD development for hypersonic aircraft

    NASA Technical Reports Server (NTRS)

    Sellers, William L., III; Dwoyer, Douglas L.

    1992-01-01

    The design of a hypersonic aircraft poses unique challenges to the engineering community. Problems with duplicating flight conditions in ground based facilities have made performance predictions risky. Computational fluid dynamics (CFD) has been proposed as an additional means of providing design data. At the present time, CFD codes are being validated based on sparse experimental data and then used to predict performance at flight conditions with generally unknown levels of uncertainty. This paper will discuss the facility and measurement techniques that are required to support CFD development for the design of hypersonic aircraft. Illustrations are given of recent success in combining experimental and direct numerical simulation in CFD model development and validation for hypersonic perfect gas flows.

  7. Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gougar, Hans

    2015-02-01

    The Department of Energy (DOE) has made significant progress developing simulation tools to predict the behavior of nuclear systems with greater accuracy and increasing our capability to predict the behavior of these systems outside the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification, and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') will be a resource for industry, DOE Programs, and academia validation efforts.

  8. Hyper-X Mach 7 Scramjet Design, Ground Test and Flight Results

    NASA Technical Reports Server (NTRS)

    Ferlemann, Shelly M.; McClinton, Charles R.; Rock, Ken E.; Voland, Randy T.

    2005-01-01

    The successful Mach 7 flight test of the Hyper-X (X-43) research vehicle has provided the major, essential demonstration of the capability of the airframe integrated scramjet engine. This flight was a crucial first step toward realizing the potential for airbreathing hypersonic propulsion for application to space launch vehicles. However, it is not sufficient to have just achieved a successful flight. The more useful knowledge gained from the flight is how well the prediction methods matched the actual test results in order to have confidence that these methods can be applied to the design of other scramjet engines and powered vehicles. The propulsion predictions for the Mach 7 flight test were calculated using the computer code, SRGULL, with input from computational fluid dynamics (CFD) and wind tunnel tests. This paper will discuss the evolution of the Mach 7 Hyper-X engine, ground wind tunnel experiments, propulsion prediction methodology, flight results and validation of design methods.

  9. Computations in turbulent flows and off-design performance predictions for airframe-integrated scramjets

    NASA Technical Reports Server (NTRS)

    Goglia, G. L.; Spiegler, E.

    1977-01-01

    The research activity focused on two main tasks: (1) the further development of the SCRAM program and, in particular, the addition of a procedure for modeling the mechanism of the internal adjustment process of the flow, in response to the imposed thermal load across the combustor and (2) the development of a numerical code for the computation of the variation of concentrations throughout a turbulent field, where finite-rate reactions occur. The code also includes an estimation of the effect of the phenomenon called 'unmixedness'.

  10. Euler Technology Assessment - SPLITFLOW Code Applications for Stability and Control Analysis on an Advanced Fighter Model Employing Innovative Control Concepts

    NASA Technical Reports Server (NTRS)

    Jordan, Keith J.

    1998-01-01

    This report documents results from the NASA-Langley sponsored Euler Technology Assessment Study conducted by Lockheed-Martin Tactical Aircraft Systems (LMTAS). The purpose of the study was to evaluate the ability of the SPLITFLOW code using viscous and inviscid flow models to predict aerodynamic stability and control of an advanced fighter model. The inviscid flow model was found to perform well at incidence angles below approximately 15 deg, but not as well at higher angles of attack. The results using a turbulent, viscous flow model matched the trends of the wind tunnel data, but did not show significant improvement over the Euler solutions. Overall, the predictions were found to be useful for stability and control design purposes.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, Amy B.; Zyvoloski, George Anthony; Weaver, Douglas James

    The simulation work presented in this report supports DOE-NE Used Fuel Disposition Campaign (UFDC) goals related to the development of drift scale in-situ field testing of heat-generating nuclear waste (HGNW) in salt formations. Numerical code verification and validation is an important part of the lead-up to field testing, allowing exploration of potential heater emplacement designs, monitoring locations, and perhaps most importantly the ability to predict heat and mass transfer around an evolving test. Such predictions are crucial for the design and location of sampling and monitoring that can be used to validate our understanding of a drift scale test that is likely to span several years.
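In its very simplest form, predicting heat transfer around a heater test reduces to transient conduction. A 1-D explicit finite-difference sketch follows; the salt properties and boundary temperatures are assumed values, and the real drift-scale problem is of course 3-D with coupled moisture transport:

```python
# 1-D explicit finite-difference conduction: a fixed-temperature heater at
# x = 0 warming a salt column. All property and boundary values are assumed.
alpha = 3.0e-6                    # salt thermal diffusivity, m^2/s (assumed)
dx, nx = 0.1, 50                  # 5 m domain, 0.1 m grid
dt = 0.4 * dx * dx / alpha        # within the explicit stability limit (<0.5)
T = [30.0] * nx                   # ambient formation temperature, deg C
T[0] = 200.0                      # heater boundary temperature

for _ in range(20000):            # march roughly 300 days of simulated time
    Tn = T[:]
    for i in range(1, nx - 1):
        Tn[i] = T[i] + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    Tn[0], Tn[-1] = 200.0, 30.0   # fixed-temperature boundaries
    T = Tn

print(f"temperature 1 m from the heater: {T[10]:.1f} C")
```

Even a toy model like this shows why monitoring locations matter: the thermal front takes days to reach a point a metre away, so sensor placement determines what part of the transient is observable.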

  12. Aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to preliminary configuration design level of effort. Potential theory was examined in detail to meet this objective. Numerical pilot codes were developed for relatively simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one.

  13. Parametric Model of an Aerospike Rocket Engine

    NASA Technical Reports Server (NTRS)

    Korte, J. J.

    2000-01-01

    A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.
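Performance tables like those described are typically queried by interpolation at the conditions a trajectory program encounters. A minimal bilinear lookup over a hypothetical (power level, altitude) thrust table; the table values are illustrative, not aerospike engine data:

```python
from bisect import bisect_right

def bilinear(xs, ys, table, x, y):
    """Bilinear interpolation in a rectangular performance table."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

# Hypothetical axial-thrust table: rows indexed by power level, columns by
# altitude in km. Values are illustrative only.
power = [0.6, 0.8, 1.0]
alt_km = [0.0, 10.0, 20.0]
thrust_kN = [[1200.0, 1260.0, 1300.0],
             [1700.0, 1780.0, 1830.0],
             [2200.0, 2300.0, 2360.0]]

print(bilinear(power, alt_km, thrust_kN, 0.9, 5.0), "kN")  # 1995.0 kN
```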

  15. Validation of Design and Analysis Techniques of Tailored Composite Structures

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.

    2004-01-01

    Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed form analysis based on a theoretical model of a single cell tailored box beam and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed form code is consistently able to predict the wing box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed form analysis. Differences between the closed form code representation and the wing box specimen caused large errors in the twist prediction. The closed form analysis prediction of twist has not been validated from this test.

  16. Code for Multiblock CFD and Heat-Transfer Computations

    NASA Technical Reports Server (NTRS)

    Fabian, John C.; Heidmann, James D.; Lucci, Barbara L.; Ameri, Ali A.; Rigby, David L.; Steinthorsson, Erlendur

    2006-01-01

    The NASA Glenn Research Center General Multi-Block Navier-Stokes Convective Heat Transfer Code, Glenn-HT, has been used extensively to predict heat transfer and fluid flow for a variety of steady gas turbine engine problems. Recently, the Glenn-HT code has been completely rewritten in Fortran 90/95, a more object-oriented language that allows programmers to create code that is more modular and makes more efficient use of data structures. The new implementation takes full advantage of the capabilities of the Fortran 90/95 programming language. As a result, the Glenn-HT code now provides dynamic memory allocation, modular design, and unsteady flow capability. This allows for the heat-transfer analysis of a full turbine stage. The code has been demonstrated for an unsteady inflow condition, and gridding efforts have been initiated for a full turbine stage unsteady calculation. This analysis will be the first to simultaneously include the effects of rotation, blade interaction, film cooling, and tip clearance with recessed tip on turbine heat transfer and cooling performance. Future plans call for the application of the new Glenn-HT code to a range of gas turbine engine problems of current interest to the heat-transfer community. The new unsteady flow capability will allow researchers to predict the effect of unsteady flow phenomena upon the convective heat transfer of turbine blades and vanes. Work will also continue on the development of conjugate heat-transfer capability in the code, where simultaneous solution of convective and conductive heat-transfer domains is accomplished. Finally, advanced turbulence and fluid flow models and automatic gridding techniques are being developed that will be applied to the Glenn-HT code and solution process.

  17. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software/hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software/hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
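The core idea of an application simulator (replacing compute-bound stages by the passage of simulated time) can be sketched with an event queue. Everything below, including the cost model and the overlap factor for speculative spawning, is an illustrative assumption and not TADSim itself:

```python
import heapq
import random

# Event-driven sketch: each compute-bound stage of the "real" code is
# replaced by an event whose duration comes from an assumed cost model,
# so parameter choices can be scanned far faster than running the code.
def simulate(n_stages, stage_cost_s, speculative_spawn=False, seed=1):
    rng = random.Random(seed)
    events = []                                   # (completion_time, stage)
    heapq.heappush(events, (stage_cost_s * rng.uniform(0.8, 1.2), 0))
    clock, done = 0.0, 0
    while events:
        clock, stage = heapq.heappop(events)
        done += 1
        if done >= n_stages:
            break
        cost = stage_cost_s * rng.uniform(0.8, 1.2)
        if speculative_spawn:
            cost *= 0.7    # assumed overlap gain from speculative spawning
        heapq.heappush(events, (clock + cost, stage + 1))
    return clock           # simulated wall-clock time, seconds

base = simulate(100, 2.0)
spec = simulate(100, 2.0, speculative_spawn=True)
print(f"baseline {base:.0f} s, with speculative spawning {spec:.0f} s")
```

As in the paper, an algorithm extension can be evaluated here by changing one line of the cost model rather than implementing it in the full code.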

  18. Low-delay predictive audio coding for the HIVITS HDTV codec

    NASA Astrophysics Data System (ADS)

    McParland, A. K.; Gilchrist, N. H. C.

    1995-01-01

    The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec, are given.

  19. Overview of Recent Radiation Transport Code Comparisons for Space Applications

    NASA Astrophysics Data System (ADS)

    Townsend, Lawrence

    Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphases on those areas of agreement and disagreement among the various code predictions and published data.

  20. Prediction of properties of intraply hybrid composites

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1979-01-01

    Equations based on the mixtures rule are presented for predicting the physical, thermal, hygral, and mechanical properties of unidirectional intraply hybrid composites (UIHC) from the corresponding properties of their constituent composites. Bounds were derived for uniaxial longitudinal strengths, tension, compression, and flexure of UIHC. The equations predict shear and flexural properties which agree with experimental data from UIHC. Use of these equations in a composites mechanics computer code predicted flexural moduli which agree with experimental data from various intraply hybrid angleplied laminates (IHAL). It is indicated, briefly, how these equations can be used in conjunction with composite mechanics and structural analysis during the analysis/design process.
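The linear rule of mixtures referred to above is simple to state in code. The constituent moduli and volume fractions below are illustrative values for a hypothetical graphite/epoxy plus S-glass/epoxy hybrid, not data from the paper:

```python
def rule_of_mixtures(props, fractions):
    """Linear rule of mixtures: P = sum_i V_i * P_i over the constituent
    composites of a unidirectional intraply hybrid."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(v * p for v, p in zip(fractions, props))

# Hypothetical constituents: graphite/epoxy and S-glass/epoxy plies
# (moduli and volume fractions are illustrative only).
E_constituents_GPa = [145.0, 55.0]
vol_fractions = [0.75, 0.25]
E_hybrid = rule_of_mixtures(E_constituents_GPa, vol_fractions)
print(f"hybrid longitudinal modulus ~ {E_hybrid:.1f} GPa")  # 122.5 GPa
```

The same one-liner applies to the physical, thermal, and hygral properties the paper treats; the strength bounds it derives require more than this linear form.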

  1. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
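The flavor of such error-rate expressions can be shown with a much cruder calculation than the tight bounds derived here: hard-decision M-ary PPM with zero background light, where a symbol errs only after an erasure, combined with a union bound over the symbols of a codeword. This is an illustration only, not the APPM bound of the article:

```python
import math

# Hard-decision M-ary PPM on a Poisson channel with zero background light:
# the signal slot is erased with probability exp(-Ks) (no photons arrive),
# and a uniform guess then errs with probability (M - 1) / M. A union
# bound over the n symbols of a codeword gives a crude word-error cap.
def ppm_symbol_error(M, Ks):
    return math.exp(-Ks) * (M - 1) / M

def word_error_union_bound(n_symbols, M, Ks):
    return min(1.0, n_symbols * ppm_symbol_error(M, Ks))

print(f"64-PPM, Ks = 3 photons: P_symbol = {ppm_symbol_error(64, 3.0):.4f}")
print(f"10-symbol word, union bound: {word_error_union_bound(10, 64, 3.0):.3f}")
```

Closed-form expressions like these are exactly what replace the time-consuming simulations mentioned above: they evaluate in microseconds at any signal level.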

  2. AFOSR BRI: Co-Design of Hardware/Software for Predicting MAV Aerodynamics

    DTIC Science & Technology

    2016-09-27

    While Moore's Law theoretically doubles processor performance every 24 months, much of the realizable performance remains... past efforts to develop such CFD codes on accelerated processors showed limited success; our hardware/software co-design approach created malleable...

  3. Aeromechanics and Aeroacoustics Predictions of the Boeing-SMART Rotor Using Coupled-CFD/CSD Analyses

    NASA Technical Reports Server (NTRS)

    Bain, Jeremy; Sim, Ben W.; Sankar, Lakshmi; Brentner, Ken

    2010-01-01

    This paper will highlight helicopter aeromechanics and aeroacoustics prediction capabilities developed by Georgia Institute of Technology, the Pennsylvania State University, and Northern Arizona University under the Helicopter Quieting Program (HQP) sponsored by the Tactical Technology Office of the Defense Advanced Research Projects Agency (DARPA). First initiated in 2004, the goal of the HQP was to develop high fidelity, state-of-the-art computational tools for designing advanced helicopter rotors with reduced acoustic perceptibility and enhanced performance. A critical step towards achieving this objective is the development of rotorcraft prediction codes capable of assessing a wide range of helicopter configurations and operations for future rotorcraft designs. This includes novel next-generation rotor systems that incorporate innovative passive and/or active elements to meet future challenging military performance and survivability goals.

  4. Validation of engineering methods for predicting hypersonic vehicle controls forces and moments

    NASA Technical Reports Server (NTRS)

    Maughmer, M.; Straussfogel, D.; Long, L.; Ozoroski, L.

    1991-01-01

    This work examines the ability of the aerodynamic analysis methods contained in an industry standard conceptual design code, the Aerodynamic Preliminary Analysis System (APAS II), to estimate the forces and moments generated through control surface deflections from low subsonic to high hypersonic speeds. Predicted control forces and moments generated by various control effectors are compared with previously published wind-tunnel and flight-test data for three vehicles: the North American X-15, a hypersonic research airplane concept, and the Space Shuttle Orbiter. Qualitative summaries of the results are given for each force and moment coefficient and each control derivative in the various speed ranges. Results show that all predictions of longitudinal stability and control derivatives are acceptable for use at the conceptual design stage.

  5. Recent plant studies using Victoria 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BIXLER,NATHAN E.; GASSER,RONALD D.

    2000-03-08

    VICTORIA 2.0 is a mechanistic computer code designed to analyze fission product behavior within the reactor coolant system (RCS) during a severe nuclear reactor accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and transport and deposition of these materials within the RCS and secondary circuits. These predictions account for the chemical and aerosol processes that affect radionuclide behavior. VICTORIA 2.0 was released in early 1999; a new version, VICTORIA 2.1, is now under development. The largest improvements in VICTORIA 2.1 are connected with the thermochemical database, which is being revised and expanded following the recommendations of a peer review. Three risk-significant severe accident sequences have recently been investigated using the VICTORIA 2.0 code. The focus here is on how various chemistry options affect the predictions. Additionally, the VICTORIA predictions are compared with ones made using the MELCOR code. The three sequences are a station blackout in a GE BWR and steam generator tube rupture (SGTR) and pump-seal LOCA sequences in a 3-loop Westinghouse PWR. These sequences cover a range of system pressures, from fully depressurized to full system pressure. The chief results of this study are the fission product fractions that are retained in the core, RCS, secondary, and containment and the fractions that are released into the environment.

  6. Open Rotor Aeroacoustic Modeling

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2012-01-01

    Owing to their inherent fuel efficiency, there is renewed interest in developing open rotor propulsion systems that are both efficient and quiet. The major contributor to the overall noise of an open rotor system is the propulsor noise, which is produced as a result of the interaction of the airstream with the counter-rotating blades. As such, robust aeroacoustic prediction methods are an essential ingredient in any approach to designing low-noise open rotor systems. To that end, an effort has been underway at NASA to assess current open rotor noise prediction tools and develop new capabilities. Under this effort, high-fidelity aerodynamic simulations of a benchmark open rotor blade set were carried out and used to make noise predictions via existing NASA open rotor noise prediction codes. The results have been compared with the aerodynamic and acoustic data that were acquired for this benchmark open rotor blade set. The emphasis of this paper is on providing a summary of recent results from a NASA Glenn effort to validate an in-house open rotor noise prediction code called LINPROP, which is based on a high-blade-count asymptotic approximation to the Ffowcs Williams-Hawkings equation. The results suggest that while predicting the absolute levels may be difficult, the noise trends are reasonably well predicted by this approach.
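    The tonal structure that a frequency-domain prediction code must capture follows from kinematics alone: interaction tones of a counter-rotating pair fall at sums of harmonics of the two blade-passing frequencies. A small sketch, with blade counts and shaft speeds chosen for illustration rather than taken from the benchmark blade set:

```python
# Interaction-tone frequencies of a counter-rotating rotor pair:
# f = n1*B1*N1 + n2*B2*N2 (Hz), with B the blade counts and N the shaft
# speeds in rev/s. Blade counts and speeds below are illustrative only.

def interaction_tones(B1, N1, B2, N2, max_order=2):
    """Sorted tone frequencies (Hz) for harmonic orders up to max_order."""
    tones = set()
    for n1 in range(max_order + 1):
        for n2 in range(max_order + 1):
            if n1 == 0 and n2 == 0:
                continue            # skip the zero-frequency term
            tones.add(n1 * B1 * N1 + n2 * B2 * N2)
    return sorted(tones)

# A 12x10 blade-count pair, both shafts at 16.7 rev/s: two blade-passing
# tones plus their first interaction tone
print(interaction_tones(12, 16.7, 10, 16.7, max_order=1))
```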

  8. Effect of fire-induced damage on the uniaxial strength characteristics of solid timber: A numerical study

    NASA Astrophysics Data System (ADS)

    Hopkin, D. J.; El-Rimawi, J.; Lennon, T.; Silberschmidt, V. V.

    2011-07-01

    The advent of the structural Eurocodes has allowed civil engineers to be more creative in the design of structures exposed to fire. Rather than rely upon regulatory guidance and prescriptive methods, engineers are now able to use such codes to design buildings on the basis of credible design fires rather than accepted, unrealistic standard-fire time-temperature curves. Through this process, safer and more efficient structural designs are achievable. The key development enabling performance-based fire design is the emergence of validated numerical models capable of predicting the mechanical response of a whole building or sub-assemblies at elevated temperature. In this way, efficiency savings have been achieved in the design of steel, concrete and composite structures. However, at present, due to a combination of limited fundamental research and restrictions in the UK National Annex to the timber Eurocode, the design of fire-exposed timber structures using numerical modelling techniques is not generally undertaken. The 'fire design' of timber structures is covered in Eurocode 5 part 1.2 (EN 1995-1-2), which includes an advanced calculation annex (Annex B) intended to facilitate the implementation of numerical models in the design of fire-exposed timber structures. At present, however, the thermal properties given in the code are valid only for standard-fire exposure. In an attempt to overcome this barrier, the authors have proposed a 'modified conductivity model' (MCM) for determining the temperature of timber structural elements during the heating phase of non-standard fires, which is briefly outlined in this paper. In a further study, the MCM has been implemented in a coupled thermo-mechanical analysis of uniaxially loaded timber elements exposed to non-standard fires. The finite element package DIANA was adopted with plane-strain elements assuming two-dimensional heat flow. The resulting predictions of failure time for given levels of load are discussed and compared with the simplified 'effective cross section' method presented in EN 1995-1-2.
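    For contrast with the numerical MCM approach, the simplified 'effective cross section' method of EN 1995-1-2 can be sketched in a few lines: the char depth grows at a notional rate and a thin zero-strength layer is deducted on each exposed face. The parameter values below (beta_n = 0.8 mm/min for solid softwood, d0 = 7 mm) are the commonly cited Eurocode figures, but verify against the code text before use:

```python
# Minimal sketch of the EN 1995-1-2 'effective cross section' method.
# Assumed values: notional charring rate beta_n = 0.8 mm/min (solid
# softwood), zero-strength layer d0 = 7 mm, k0 ramping linearly to 1.0
# over the first 20 minutes of standard-fire exposure.

def effective_section(b, h, t_min, beta_n=0.8, d0=7.0, sides=4):
    """Effective residual width/height (mm) of a b x h rectangular timber
    section after t_min minutes of standard-fire exposure on `sides` sides."""
    k0 = min(t_min / 20.0, 1.0)
    d_ef = beta_n * t_min + k0 * d0          # charring + zero-strength layer
    if sides == 4:
        return b - 2 * d_ef, h - 2 * d_ef
    if sides == 3:                            # protected on top: no loss there
        return b - 2 * d_ef, h - d_ef
    raise ValueError("sides must be 3 or 4")

# 200x400 mm section, 30 min exposure, 4-sided:
# d_ef = 0.8*30 + 7 = 31 mm, leaving 138 x 338 mm
b_ef, h_ef = effective_section(200.0, 400.0, 30.0)
print(b_ef, h_ef)
```

The residual section is then checked against ambient-temperature strength values, which is what makes the method attractive for hand calculation.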

  9. Comparison of analysis and experiment for dynamics of low-contact-ratio spur gears

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Rebbechi, Brian; Zakrajsek, James J.; Townsend, Dennis P.; Lin, Hsiang Hsi

    1991-01-01

    Low-contact-ratio spur gears were tested in the NASA gear-noise rig to study gear dynamics, including dynamic load, tooth bending stress, vibration, and noise. The experimental results were compared with predictions from a NASA gear dynamics code to validate the code as a design tool for predicting transmission vibration and noise. Analytical predictions and experimental data for gear-tooth dynamic loads and tooth-root bending stress were compared at 28 operating conditions. Strain-gage data were used to compute the normal load between meshing teeth and the bending stress at the tooth root for direct comparison with the analysis. The computed and measured waveforms for dynamic load and stress were compared for several test conditions and are very similar in shape, which indicates that the analysis successfully simulates the physical behavior of the test gears. The predicted peak value of the dynamic load agrees with the measurements within an average error of 4.9 percent, except at low-torque, high-speed conditions. Predictions of peak dynamic root stress are generally within 10 to 15 percent of the measured values.

  10. MLIBlast: A program to empirically predict hypervelocity impact damage to the Space Station

    NASA Technical Reports Server (NTRS)

    Rule, William K.

    1991-01-01

    MLIBlast, a set of DOS PC-based Microsoft BASIC program modules written to provide spacecraft designers with empirical predictions of space-debris damage to orbiting spacecraft, is described. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple-style bumper and a pressure wall. Predictions are based on data sets of experimental results obtained from simulated debris impacts on spacecraft. One module of MLIBlast facilitates creation of the database of experimental results that is used by the damage-prediction modules of the code. The user has a choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall.

  11. Understanding Engineers' Responsibilities: A Prerequisite to Designing Engineering Education : Commentary on "Educating Engineers for the Public Good Through International Internships: Evidence from a Case Study at Universitat Politècnica de València".

    PubMed

    Murphy, Colleen; Gardoni, Paolo

    2017-07-18

    The development of the curriculum for engineering education (course requirements as well as extra-curricular activities like study abroad and internships) should be based on a comprehensive understanding of engineers' responsibilities. The responsibilities that are constitutive of being an engineer include striving to fulfill the standards of excellence set by technical codes; to improve the idealized models that engineers use to predict, for example, the behavior of alternative designs; and to achieve the internal goods such as safety and sustainability as they are reflected in the design codes. Globalization has implications for these responsibilities and, in turn, for engineering education, by, for example, modifying the collection of possible solutions recognized for existing problems. In addition, international internships can play an important role in fostering the requisite moral imagination of engineering students.

  12. Advances and Computational Tools towards Predictable Design in Biological Engineering

    PubMed Central

    2014-01-01

    The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used, or on the prediction of their context-dependent behaviour when a part's function depends on its specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since a part's function is hard to predict when the component is reused in a different context. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated. PMID:25161694
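    The bottom-up composition the review describes can be illustrated with the simplest deterministic gene-expression model, in which promoter and RBS strengths multiply to set the steady-state protein level. All rate values below are invented for the sketch:

```python
# Toy deterministic gene-expression model: promoter strength sets the
# transcription rate k_tx, the RBS sets the translation rate k_tl, and
# the steady state of
#   dm/dt = k_tx - d_m*m,   dp/dt = k_tl*m - d_p*p
# is p* = (k_tx/d_m) * (k_tl/d_p). Parameter values are invented.

def steady_state_protein(k_tx, k_tl, d_m, d_p):
    """Steady-state protein level for the two-ODE expression model."""
    m_ss = k_tx / d_m                 # steady-state mRNA
    return k_tl * m_ss / d_p

# In this idealized modular picture, a 2x stronger promoter exactly
# doubles predicted expression -- the assumption that breaks down in
# real context-dependent parts
p1 = steady_state_protein(k_tx=1.0, k_tl=10.0, d_m=0.2, d_p=0.05)
p2 = steady_state_protein(k_tx=2.0, k_tl=10.0, d_m=0.2, d_p=0.05)
print(p1, p2 / p1)
```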

  13. Spatial application of WEPS for estimating wind erosion in the Pacific Northwest

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) is used to simulate soil erosion on croplands and was originally designed to run field scale simulations. This research is an extension of the WEPS model to run on multiple fields (grids) covering a larger region. We modified the WEPS source code to allow it...

  14. A Computer Code for Gas Turbine Engine Weight And Disk Life Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Ghosn, Louis J.; Halliwell, Ian; Wickenheiser, Tim (Technical Monitor)

    2002-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines, as it helps to identify the best engine concept amongst several candidates. In this paper, the major enhancements to NASA's engine-weight estimation computer code (WATE) are described. These enhancements include improved weight-calculation routines for the compressor and turbine disks using a finite-difference technique. Furthermore, stress distributions for various disk geometries were incorporated into a life-prediction module that calculates disk life. A material database, consisting of data for most commonly used aerospace materials, has also been incorporated into WATE. Collectively, these enhancements provide a more realistic and systematic way to calculate engine weight, and they offer additional insight into the design trade-off between engine life and engine weight. To demonstrate the new capabilities, the enhanced WATE code is used to perform an engine weight/life trade-off assessment on a production aircraft engine.
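    As a rough sanity check on the quantity a disk stress/life module computes, the classical closed-form result for a uniform solid rotating disk gives the peak stress directly; WATE's finite-difference routines handle real geometries, so this is only an order-of-magnitude sketch with assumed material values:

```python
# Classical plane-stress result for a uniform-thickness solid rotating
# disk: peak (center) radial/hoop stress
#   sigma = (3 + nu)/8 * rho * omega^2 * b^2.
# Textbook theory with assumed material values, not WATE's actual
# finite-difference scheme.
import math

def solid_disk_peak_stress(rho, omega, b, nu=0.3):
    """Peak stress (Pa) at the center of a solid uniform disk of radius b (m)
    spinning at omega (rad/s), density rho (kg/m^3), Poisson ratio nu."""
    return (3.0 + nu) / 8.0 * rho * omega**2 * b**2

# Titanium-like disk: rho = 4500 kg/m^3, 10,000 rpm, 0.3 m radius
omega = 10000 * 2 * math.pi / 60.0
print(solid_disk_peak_stress(4500.0, omega, 0.3) / 1e6, "MPa")
```

The strong omega^2 and b^2 scaling is what couples disk life so tightly to engine weight in the trade-off the paper describes.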

  15. Experimental Results From the Thermal Energy Storage-1 (TES-1) Flight Experiment

    NASA Technical Reports Server (NTRS)

    Jacqmin, David

    1995-01-01

    The Thermal Energy Storage (TES) experiments are designed to provide data to help researchers understand the long-duration microgravity behavior of thermal energy storage fluoride salts that undergo repeated melting and freezing. Such data, which have never been obtained before, have direct application to space-based solar dynamic power systems. These power systems will store solar energy in a thermal energy storage salt, such as lithium fluoride (LiF) or a eutectic of lithium fluoride/calcium difluoride (LiF-CaF2), which melts at a lower temperature. The energy will be stored as the latent heat of fusion when the salt is melted by absorbing solar thermal energy. The stored energy will then be extracted during the shade portion of the orbit, enabling the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes have been developed to predict the performance of a space-based solar dynamic power system. However, the analytical predictions must be verified experimentally before the analytical results can be used for future space power design applications. Four TES flight experiments will be used to obtain the needed experimental data. This article focuses on the flight results from the first experiment, TES-1, in comparison to the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code.
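    The appeal of latent-heat storage can be shown with simple arithmetic: the heat of fusion dwarfs the sensible heat over a modest temperature swing. The LiF property values below are approximate figures assumed for illustration; TESSIM models far more physics (void formation, conduction paths, orbital timing):

```python
# Back-of-envelope energy bank for a phase-change salt. The latent heat
# (~1040 kJ/kg, roughly LiF) and specific heat are assumed approximate
# values for illustration only.

def stored_energy_kj(mass_kg, cp_kj_per_kg_k=2.4, dT=50.0,
                     latent_kj_per_kg=1040.0):
    """Sensible heat over a dT swing about the melt point plus latent heat."""
    return mass_kg * (cp_kj_per_kg_k * dT + latent_kj_per_kg)

# For 10 kg of salt the latent term dominates the store:
# 10 * (2.4*50 + 1040) kJ
print(stored_energy_kj(10.0), "kJ")
```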

  16. A High-Granularity Approach to Modeling Energy Consumption and Savings Potential in the U.S. Residential Building Stock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Building simulations are increasingly used in various applications related to energy-efficient buildings. For individual buildings, applications include design of new buildings, prediction of retrofit savings, ratings, performance-path code compliance, and qualification for incentives. Beyond individual buildings, larger-scale applications (across the stock of buildings at the national, regional, and state scales) include codes and standards development, utility program design, regional/state planning, and technology assessments. For these sorts of applications, a set of representative buildings is typically simulated to predict performance of the entire population of buildings. Focusing on the U.S. single-family residential building stock, this paper will describe how multiple data sources for building characteristics are combined into a highly granular database that preserves the important interdependencies of the characteristics. We will present the sampling technique used to generate a representative set of thousands (up to hundreds of thousands) of building models. We will also present results of detailed calibrations against building stock consumption data.

  17. Plans for Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Ballmann, Josef; Bhatia, Kumar; Blades, Eric; Boucke, Alexander; Chwalowski, Pawel; Dietz, Guido; Dowell, Earl; Florance, Jennifer P.; Hansen, Thorsten; hide

    2011-01-01

    This paper summarizes the plans for the first Aeroelastic Prediction Workshop. The workshop is designed to assess the state of the art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. Three subject configurations have been chosen from existing wind tunnel data sets where there is pertinent experimental data available for comparison. For each case chosen, the wind tunnel testing was conducted using forced oscillation of the model at specified frequencies.

  18. Representing high-dimensional data to intelligent prostheses and other wearable assistive robots: A first comparison of tile coding and selective Kanerva coding.

    PubMed

    Travnik, Jaden B; Pilarski, Patrick M

    2017-07-01

    Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations of tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
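    Tile coding, the baseline representation in this comparison, can be sketched in a few lines: several offset grids each contribute one active binary feature per input. This toy one-dimensional version (all sizes arbitrary) shows why the active-feature count stays fixed at the number of tilings while resolution comes from their offsets:

```python
# Toy 1-D tile coder: `n_tilings` offset grids over [lo, hi), each
# contributing exactly one active feature index for input x. Real
# prosthetic sensor spaces are multi-dimensional, which is where the
# feature count blows up; this sketch only shows the mechanism.

def tile_code(x, n_tilings=4, tiles_per_tiling=8, lo=0.0, hi=1.0):
    """Return the indices of the active tiles (one per tiling) for scalar x."""
    active = []
    width = (hi - lo) / tiles_per_tiling
    for t in range(n_tilings):
        offset = t * width / n_tilings          # each tiling shifted slightly
        idx = int((x - lo + offset) / width)
        idx = min(idx, tiles_per_tiling - 1)    # clamp at the top edge
        active.append(t * tiles_per_tiling + idx)
    return active

print(tile_code(0.31))
```

A linear learner then weights only these few active indices, which is what makes tile coding cheap per step regardless of the total feature count.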

  19. Predicting materials for sustainable energy sources: The key role of density functional theory

    NASA Astrophysics Data System (ADS)

    Galli, Giulia

    Climate change and the related need for sustainable energy sources replacing fossil fuels are pressing societal problems. The development of advanced materials is widely recognized as one of the key elements for new technologies that are required to achieve a sustainable environment and provide clean and adequate energy for our planet. We discuss the key role played by Density Functional Theory, and its implementations in high performance computer codes, in understanding, predicting and designing materials for energy applications.

  20. Comparisons of rational engineering correlations of thermophoretically-augmented particle mass transfer with STAN5-predictions for developing boundary layers

    NASA Technical Reports Server (NTRS)

    Gokoglu, S. A.; Rosner, D. E.

    1984-01-01

    This report describes the modification of the STAN5 code to properly include thermophoretic mass transport, and the examination of selected test cases of developing boundary layers that include variable properties, viscous dissipation, transition to turbulence, and transpiration cooling. Under conditions representative of current and projected gas turbine operation, local application of St_m/St_m,0 correlations evidently provides accurate and economical engineering design predictions, especially for suspended particles characterized by Schmidt numbers outside of the heavy-vapor range.

  1. Modeling Laser-Driven Laboratory Astrophysics Experiments Using the CRASH Code

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Keiter, P.; Kuranz, C. C.; Malamud, G.; Trantham, M.; Drake, R.

    2013-06-01

    Laser-driven laboratory astrophysics experiments can provide important insight into the physical processes relevant to astrophysical systems. The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density laboratory astrophysics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. The CRASH model has been used in many applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparisons between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  2. CodingQuarry: highly accurate hidden Markov model gene prediction in fungal genomes using RNA-seq transcripts.

    PubMed

    Testa, Alison C; Hane, James K; Ellwood, Simon R; Oliver, Richard P

    2015-03-11

    The impact of gene annotation quality on functional and comparative genomics makes gene prediction an important process, particularly in non-model species, including many fungi. Sets of homologous protein sequences are rarely complete with respect to the fungal species of interest and are often small or unreliable, especially when closely related species have not been sequenced or annotated in detail. In these cases, protein homology-based evidence fails to correctly annotate many genes, or significantly improve ab initio predictions. Generalised hidden Markov models (GHMM) have proven to be invaluable tools in gene annotation and, recently, RNA-seq has emerged as a cost-effective means to significantly improve the quality of automated gene annotation. As these methods do not require sets of homologous proteins, improving gene prediction from these resources is of benefit to fungal researchers. While many pipelines now incorporate RNA-seq data in training GHMMs, there has been relatively little investigation into additionally combining RNA-seq data at the point of prediction, and room for improvement in this area motivates this study. CodingQuarry is a highly accurate, self-training GHMM fungal gene predictor designed to work with assembled, aligned RNA-seq transcripts. RNA-seq data informs annotations both during gene-model training and in prediction. Our approach capitalises on the high quality of fungal transcript assemblies by incorporating predictions made directly from transcript sequences. Correct predictions are made despite transcript assembly problems, including those caused by overlap between the transcripts of adjacent gene loci. Stringent benchmarking against high-confidence annotation subsets showed CodingQuarry predicted 91.3% of Schizosaccharomyces pombe genes and 90.4% of Saccharomyces cerevisiae genes perfectly. These results are 4-5% better than those of AUGUSTUS, the next best performing RNA-seq driven gene predictor tested. 
Comparisons against whole genome Sc. pombe and S. cerevisiae annotations further substantiate a 4-5% improvement in the number of correctly predicted genes. We demonstrate the success of a novel method of incorporating RNA-seq data into GHMM fungal gene prediction. This shows that a high quality annotation can be achieved without relying on protein homology or a training set of genes. CodingQuarry is freely available ( https://sourceforge.net/projects/codingquarry/ ), and suitable for incorporation into genome annotation pipelines.
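    The HMM machinery underlying such gene predictors can be illustrated with a toy two-state (coding/intergenic) model decoded by the Viterbi algorithm; real GHMM gene finders use far richer state structure and length distributions, and every probability below is invented:

```python
# Toy two-state HMM decoded with Viterbi. States emit nucleotides with
# different compositions (GC-rich "coding" vs AT-rich "intergenic"), a
# deliberately crude stand-in for the signals a real GHMM models.
import math

STATES = ("coding", "intergenic")
TRANS = {"coding": {"coding": 0.8, "intergenic": 0.2},
         "intergenic": {"coding": 0.2, "intergenic": 0.8}}
EMIT = {"coding": {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
        "intergenic": {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}}

def viterbi(seq):
    """Most likely state path for `seq` (uniform initial distribution)."""
    v = [{s: math.log(0.5) + math.log(EMIT[s][seq[0]]) for s in STATES}]
    back = []
    for ch in seq[1:]:
        row, ptr = {}, {}
        for s in STATES:
            prev, score = max(
                ((p, v[-1][p] + math.log(TRANS[p][s])) for p in STATES),
                key=lambda t: t[1])
            row[s] = score + math.log(EMIT[s][ch])
            ptr[s] = prev
        v.append(row)
        back.append(ptr)
    path = [max(STATES, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# The GC-rich middle of the sequence is decoded as "coding"
print(viterbi("ATATGCGCGCATAT"))
```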

  3. Contamination Effects on EUV Optics

    NASA Technical Reports Server (NTRS)

    Tveekrem, J.

    1999-01-01

    During ground-based assembly and upon exposure to the space environment, optical surfaces accumulate both particles and molecular condensibles, inevitably resulting in degradation of optical instrument performance. Currently, this performance degradation (and the resulting end-of-life instrument performance) cannot be predicted with sufficient accuracy using existing software tools. Optical design codes exist to calculate instrument performance, but these codes generally assume uncontaminated optical surfaces. Contamination models exist which predict approximate end-of-life contamination levels, but the optical effects of these contamination levels cannot be quantified without detailed information about the optical constants and scattering properties of the contaminant. The problem is particularly pronounced in the extreme ultraviolet (EUV, 300-1,200 A) and far ultraviolet (FUV, 1,200-2,000 A) regimes due to a lack of data and a lack of knowledge of the detailed physical and chemical processes involved. Yet it is in precisely these wavelength regimes that accurate predictions are most important, because EUV/FUV instruments are extremely sensitive to contamination.

  4. Statistical Analysis of CFD Solutions from the Fourth AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.

    2010-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from the U.S., Europe, Asia, and Russia using a variety of grid systems and turbulence models for the June 2009 4th Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was a new subsonic transport model, the Common Research Model, designed using a modern approach for the wing and included a horizontal tail. The fourth workshop focused on the prediction of both absolute and incremental drag levels for wing-body and wing-body-horizontal tail configurations. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with earlier workshops using the statistical framework.
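    The kind of summary statistic such an N-version comparison yields can be sketched with a robust center and scatter over the codes' drag values; the drag counts below are invented, not workshop data (1 count = 0.0001 in CD):

```python
# Robust center/scatter over an N-version ensemble of code results:
# median plus median absolute deviation (MAD), which resist the outlier
# solutions that inevitably appear in such collections. Values invented.
import statistics

def center_and_scatter(values):
    """Return (median, median absolute deviation) of an iterable of floats."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med, mad

# Hypothetical drag coefficients, in counts, from seven codes
cd_counts = [271.0, 268.5, 274.2, 270.1, 269.8, 272.3, 266.9]
med, mad = center_and_scatter(cd_counts)
print(med, mad)
```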

  5. Evaluation of icing drag coefficient correlations applied to iced propeller performance prediction

    NASA Technical Reports Server (NTRS)

    Miller, Thomas L.; Shaw, R. J.; Korkan, K. D.

    1987-01-01

    Evaluation of three empirical icing drag coefficient correlations is accomplished through application to a set of propeller icing data. The various correlations represent the best means currently available for relating drag rise to various flight and atmospheric conditions for both fixed-wing and rotating airfoils, and the work presented here illustrates and evaluates one such application of the latter case. The origins of each of the correlations are discussed, and their apparent capabilities and limitations are summarized. These correlations form an integral part of a computer code, ICEPERF, which has been designed to calculate iced propeller performance. Comparison with experimental propeller icing data shows generally good agreement, with the quality of the predicted results seen to be directly related to the radial icing extent of each case. The code's capability to properly predict thrust coefficient, power coefficient, and propeller efficiency is shown to be strongly dependent on the choice of correlation selected, as well as upon proper specification of radial icing extent.
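    The performance quantities mentioned follow the standard propeller nondimensionalization, which is worth writing out: CT = T/(rho n^2 D^4), CP = P/(rho n^3 D^5), advance ratio J = V/(n D), and efficiency eta = J CT/CP. A sketch with arbitrary numbers chosen only to exercise the definitions:

```python
# Standard propeller coefficients and efficiency. Input values are
# arbitrary; note that eta = J*CT/CP reduces to T*V/P, a useful check.

def propeller_performance(T, P, V, n, D, rho=1.225):
    """Return (J, CT, CP, eta) from thrust T (N), shaft power P (W),
    airspeed V (m/s), rotation rate n (rev/s), and diameter D (m)."""
    J = V / (n * D)                 # advance ratio
    CT = T / (rho * n**2 * D**4)    # thrust coefficient
    CP = P / (rho * n**3 * D**5)    # power coefficient
    return J, CT, CP, J * CT / CP

J, CT, CP, eta = propeller_performance(T=2000.0, P=150e3, V=60.0, n=20.0, D=2.0)
print(J, round(eta, 3))
```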

  6. Culture and Healthy Eating: The Role of Independence and Interdependence in the U.S. and Japan

    PubMed Central

    Levine, Cynthia S.; Miyamoto, Yuri; Markus, Hazel Rose; Rigotti, Attilio; Boylan, Jennifer Morozink; Park, Jiyoung; Kitayama, Shinobu; Karasawa, Mayumi; Kawakami, Norito; Coe, Christopher L.; Love, Gayle D.; Ryff, Carol D.

    2016-01-01

    Healthy eating is important for physical health. Using large probability samples of middle-aged adults in the U.S. and Japan, we show that fitting with the culturally normative way of being predicts healthy eating. In the U.S., a culture that prioritizes and emphasizes independence, being independent predicts eating a healthy diet (an index of fish, protein, fruit, vegetables, reverse-coded sugared beverages, and reverse-coded high-fat meat consumption; Study 1) and not using food as a way to cope with stress (Study 2a). In Japan, a culture that prioritizes and emphasizes interdependence, being interdependent predicts eating a healthy diet (Studies 1 and 2b). Further, reflecting the types of agency that are prevalent in each context, these relationships are mediated by autonomy in the U.S. and positive relations with others in Japan. These findings highlight the importance of understanding cultural differences in shaping healthy behavior and have implications for designing health-promoting interventions. PMID:27516421

  7. Validation of NASA Thermal Ice Protection Computer Codes. Part 3; The Validation of Antice

    NASA Technical Reports Server (NTRS)

    Al-Khalil, Kamel M.; Horvath, Charles; Miller, Dean R.; Wright, William B.

    2001-01-01

    An experimental program was generated by the Icing Technology Branch at NASA Glenn Research Center to validate two ice protection simulation codes: (1) LEWICE/Thermal for transient electrothermal de-icing and anti-icing simulations, and (2) ANTICE for steady-state hot-gas and electrothermal anti-icing simulations. An electrothermal ice protection system was designed and constructed integral to a 36-inch-chord NACA0012 airfoil. The model was fully instrumented with thermocouples, RTDs, and heat flux gages. Tests were conducted at several icing environmental conditions during a two-week period at the NASA Glenn Icing Research Tunnel. Experimental results of running-wet and evaporative cases were compared to the ANTICE computer code predictions and are presented in this paper.

  8. Comparison of Code Predictions to Test Measurements for Two Orifice Compensated Hydrostatic Bearings at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Keba, John E.

    1996-01-01

    Rotordynamic coefficients obtained from testing two different hydrostatic bearings are compared to values predicted by two different computer programs. The first set of test data is from a relatively long (L/D=1) orifice-compensated hydrostatic bearing tested in water by Texas A&M University (TAMU Bearing No. 9). The second bearing is shorter (L/D=0.37) and was tested in a lower-viscosity fluid by the Rocketdyne Division of Rockwell (Rocketdyne 'Generic' bearing) at similar rotating speeds and pressures. Computed predictions of bearing rotordynamic coefficients were obtained from the cylindrical seal code ICYL, one of the industrial seal codes developed for NASA-LeRC by Mechanical Technology Inc., and from the hydrodynamic bearing code HYDROPAD. The comparison highlights the effect the bearing configuration has on the accuracy of the predictions. The TAMU Bearing No. 9 test data are closely matched by the HYDROPAD predictions (except for the added-mass terms), whereas significant differences exist between the data from the Rocketdyne 'Generic' bearing and the code predictions. The results suggest that some aspects of the fluid behavior in the shorter, higher-Reynolds-number 'Generic' bearing may not be modeled accurately in the codes. The ICYL code predictions for flowrate and direct stiffness approximately equal those of HYDROPAD. Significant differences in cross-coupled stiffness and the damping terms were obtained relative to HYDROPAD and both sets of test data. Several observations are included concerning application of the ICYL code.
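    The rotordynamic coefficients being compared enter a linearized bearing reaction-force model of the form -F = Kx + Cx' + Mx'', with direct and cross-coupled entries in K and C. A hedged sketch with invented coefficient values (added-mass terms neglected) shows how cross-coupled stiffness produces a force component perpendicular to the displacement:

```python
# Linearized bearing reaction force from 2x2 stiffness and damping
# matrices with direct and cross-coupled terms. All coefficient values
# are invented for illustration; added-mass terms are neglected.

def bearing_force(x, xdot, K, C):
    """2-D reaction force (N) from displacement x (m) and velocity xdot (m/s)."""
    fx = -(K[0][0]*x[0] + K[0][1]*x[1] + C[0][0]*xdot[0] + C[0][1]*xdot[1])
    fy = -(K[1][0]*x[0] + K[1][1]*x[1] + C[1][0]*xdot[0] + C[1][1]*xdot[1])
    return fx, fy

Kdirect, kcross = 2.0e7, 5.0e6          # N/m, invented
Cdirect, ccross = 3.0e4, 1.0e3          # N*s/m, invented
K = [[Kdirect, kcross], [-kcross, Kdirect]]
C = [[Cdirect, ccross], [-ccross, Cdirect]]

# A static offset in +x alone: the cross-coupled stiffness produces a
# tangential force in y, the term implicated in rotor instability
fx, fy = bearing_force([1e-5, 0.0], [0.0, 0.0], K, C)
print(fx, fy)
```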

  9. A probabilistic methodology for radar cross section prediction in conceptual aircraft design

    NASA Astrophysics Data System (ADS)

    Hines, Nathan Robert

    System effectiveness has increasingly become the prime metric for the evaluation of military aircraft. As such, it is the decision maker's/designer's goal to maximize system effectiveness. Industry and government research documents indicate that all future military aircraft will incorporate signature reduction as an attempt to improve system effectiveness and reduce the cost of attrition. Today's operating environments demand low observable aircraft which are able to reliably take out valuable, time-critical targets. Thus it is desirable to be able to design vehicles that are balanced for increased effectiveness. Previous studies have shown that shaping of the vehicle is one of the most important contributors to radar cross section, a measure of radar signature, and must be considered from the very beginning of the design process. Radar cross section estimation should be incorporated into conceptual design to develop more capable systems. This research strives to meet these needs by developing a conceptual design tool that predicts radar cross section for parametric geometries. This tool predicts the absolute radar cross section of the vehicle as well as the impact of geometry changes, allowing for the simultaneous tradeoff of the aerodynamic, performance, and cost characteristics of the vehicle with the radar cross section. Furthermore, this tool can be linked to a campaign theater analysis code to demonstrate the changes in system and system of system effectiveness due to changes in aircraft geometry. A general methodology was developed and implemented, and sample computer codes were applied to prototype the proposed process. Studies utilizing this radar cross section tool were subsequently performed to demonstrate the capabilities of this method and show the impact that various inputs have on the outputs of these models.
The F/A-18 aircraft configuration was chosen as a case study vehicle to perform a design space exercise and to investigate the relative impact of shaping parameters on radar cross section. Finally, two unique low observable configurations were analyzed to examine the impact of shaping for stealthiness.

  10. Buckling Imperfection Sensitivity of Axially Compressed Orthotropic Cylinders

    NASA Technical Reports Server (NTRS)

    Schultz, Marc R.; Nemeth, Michael P.

    2010-01-01

    Structural stability is a major consideration in the design of lightweight shell structures. However, the theoretical predictions of geometrically perfect structures often considerably overpredict the buckling loads of inherently imperfect real structures. It is reasonably well understood how the shell geometry affects the imperfection sensitivity of axially compressed cylindrical shells; however, the effects of shell anisotropy on the imperfection sensitivity are less well understood. In the present paper, the development of an analytical model for assessing the imperfection sensitivity of axially compressed orthotropic cylinders is discussed. Results from the analytical model for four shell designs are compared with those from a general-purpose finite-element code, and good qualitative agreement is found. Reasons for discrepancies are discussed, and potential design implications of this line of research are discussed.

  11. Design of an Object-Oriented Turbomachinery Analysis Code: Initial Results

    NASA Technical Reports Server (NTRS)

    Jones, Scott

    2015-01-01

    Performance prediction of turbomachines is a significant part of aircraft propulsion design. In the conceptual design stage, there is an important need to quantify compressor and turbine aerodynamic performance and develop initial geometry parameters at the 2-D level prior to more extensive Computational Fluid Dynamics (CFD) analyses. The Object-oriented Turbomachinery Analysis Code (OTAC) is being developed to perform 2-D meridional flowthrough analysis of turbomachines using an implicit formulation of the governing equations to solve for the conditions at the exit of each blade row. OTAC is designed to perform meanline or streamline calculations; for streamline analyses simple radial equilibrium is used as a governing equation to solve for spanwise property variations. While the goal for OTAC is to allow simulation of physical effects and architectural features unavailable in other existing codes, it must first prove capable of performing calculations for conventional turbomachines. OTAC is being developed using the interpreted language features available in the Numerical Propulsion System Simulation (NPSS) code described by Claus et al. (1991). Using the NPSS framework came with several distinct advantages, including access to the pre-existing NPSS thermodynamic property packages and the NPSS Newton-Raphson solver. The remaining objects necessary for OTAC were written in the NPSS framework interpreted language. These new objects form the core of OTAC and are the BladeRow, BladeSegment, TransitionSection, Expander, Reducer, and OTACstart Elements. The BladeRow and BladeSegment consumed the initial bulk of the development effort and required determining the equations applicable to flow through turbomachinery blade rows given specific assumptions about the nature of that flow.
Once these objects were completed, OTAC was tested and found to agree with existing solutions from other codes; these tests included various meanline and streamline comparisons of axial compressors and turbines at design and off-design conditions.
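
    The simple radial equilibrium equation that OTAC uses for spanwise property variations can be sketched numerically. The following is a minimal illustration, not OTAC's actual NPSS implementation; the free-vortex swirl distribution used in the usage example and all variable names are assumptions for illustration only.

```python
def radial_equilibrium_dp(r, v_theta, rho):
    """Simple radial equilibrium: dp/dr = rho * Vtheta^2 / r."""
    return rho * v_theta ** 2 / r

def spanwise_pressure(r_grid, v_theta_of_r, rho, p_hub):
    """Integrate simple radial equilibrium from hub to tip (trapezoidal rule)
    to obtain the spanwise static pressure variation."""
    p = [p_hub]
    for i in range(1, len(r_grid)):
        r0, r1 = r_grid[i - 1], r_grid[i]
        g0 = radial_equilibrium_dp(r0, v_theta_of_r(r0), rho)
        g1 = radial_equilibrium_dp(r1, v_theta_of_r(r1), rho)
        p.append(p[-1] + 0.5 * (g0 + g1) * (r1 - r0))
    return p

# Usage sketch: a free-vortex swirl distribution Vtheta = K/r across the span.
r_grid = [0.3 + 0.001 * i for i in range(201)]            # hub 0.3 m to tip 0.5 m
p_span = spanwise_pressure(r_grid, lambda r: 50.0 / r, 1.2, 101325.0)
```

    For a free vortex the integral has a closed form, which makes the sketch easy to check against the analytic pressure rise from hub to tip.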

  12. Design of an Object-Oriented Turbomachinery Analysis Code: Initial Results

    NASA Technical Reports Server (NTRS)

    Jones, Scott M.

    2015-01-01

    Performance prediction of turbomachines is a significant part of aircraft propulsion design. In the conceptual design stage, there is an important need to quantify compressor and turbine aerodynamic performance and develop initial geometry parameters at the 2-D level prior to more extensive Computational Fluid Dynamics (CFD) analyses. The Object-oriented Turbomachinery Analysis Code (OTAC) is being developed to perform 2-D meridional flowthrough analysis of turbomachines using an implicit formulation of the governing equations to solve for the conditions at the exit of each blade row. OTAC is designed to perform meanline or streamline calculations; for streamline analyses simple radial equilibrium is used as a governing equation to solve for spanwise property variations. While the goal for OTAC is to allow simulation of physical effects and architectural features unavailable in other existing codes, it must first prove capable of performing calculations for conventional turbomachines. OTAC is being developed using the interpreted language features available in the Numerical Propulsion System Simulation (NPSS) code described by Claus et al. (1991). Using the NPSS framework came with several distinct advantages, including access to the pre-existing NPSS thermodynamic property packages and the NPSS Newton-Raphson solver. The remaining objects necessary for OTAC were written in the NPSS framework interpreted language. These new objects form the core of OTAC and are the BladeRow, BladeSegment, TransitionSection, Expander, Reducer, and OTACstart Elements. The BladeRow and BladeSegment consumed the initial bulk of the development effort and required determining the equations applicable to flow through turbomachinery blade rows given specific assumptions about the nature of that flow.
Once these objects were completed, OTAC was tested and found to agree with existing solutions from other codes; these tests included various meanline and streamline comparisons of axial compressors and turbines at design and off-design conditions.

  13. Theory of Mind: A Neural Prediction Problem

    PubMed Central

    Koster-Hale, Jorie; Saxe, Rebecca

    2014-01-01

    Predictive coding posits that neural systems make forward-looking predictions about incoming information. Neural signals contain information not about the currently perceived stimulus, but about the difference between the observed and the predicted stimulus. We propose to extend the predictive coding framework from high-level sensory processing to the more abstract domain of theory of mind; that is, to inferences about others’ goals, thoughts, and personalities. We review evidence that, across brain regions, neural responses to depictions of human behavior, from biological motion to trait descriptions, exhibit a key signature of predictive coding: reduced activity to predictable stimuli. We discuss how future experiments could distinguish predictive coding from alternative explanations of this response profile. This framework may provide an important new window on the neural computations underlying theory of mind. PMID:24012000
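
    The "reduced activity to predictable stimuli" signature can be illustrated with a toy prediction-error simulation. This is a conceptual sketch of the predictive coding idea only, not a model from the paper; the running-average predictor and its learning rate are arbitrary assumptions.

```python
def prediction_error(observed, predicted):
    """Under predictive coding, the forwarded signal carries only the mismatch
    between observed and predicted input."""
    return abs(observed - predicted)

def simulate_responses(stimuli, learning_rate=0.5):
    """Maintain a running prediction; the 'neural response' at each step is the
    prediction error, which shrinks as a stimulus becomes predictable."""
    prediction = 0.0
    responses = []
    for s in stimuli:
        responses.append(prediction_error(s, prediction))
        prediction += learning_rate * (s - prediction)  # update toward stimulus
    return responses
```

    Repeating the same stimulus yields a monotonically decaying response, while an unexpected (oddball) stimulus produces a rebound, the qualitative profile the review describes.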

  14. Modeling emission lag after photoexcitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L.; Petillo, John J.; Ovtchinnikov, Serguei

    A theoretical model of delayed emission following photoexcitation from metals and semiconductors is given. Its numerical implementation is designed for beam optics codes used to model photocathodes in rf photoinjectors. The model extends the Moments approach for predicting photocurrent and mean transverse energy as moments of an emitted electron distribution by incorporating time of flight and scattering events that result in emission delay on a sub-picosecond level. The model accounts for a dynamic surface extraction field and changes in the energy distribution and time of emission as a consequence of the laser penetration depth and multiple scattering events during transport. Usage in the Particle-in-Cell code MICHELLE to predict the bunch shape and duration with or without laser jitter is given. The consequences of delayed emission effects for ultra-short pulses are discussed.

  15. Commercial turbofan engine exhaust nozzle flow analyses using PAB3D

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Uenishi, K.; Carlson, John R.; Keith, B. D.

    1992-01-01

    Recent developments of a three-dimensional (PAB3D) code have paved the way for a computational investigation of complex aircraft aerodynamic components. The PAB3D code was developed for solving the simplified Reynolds Averaged Navier-Stokes equations in a three-dimensional multiblock/multizone structured mesh domain. The present analysis was applied to commercial turbofan exhaust flow systems. Solution sensitivity to grid density is presented. Laminar flow solutions were developed for all grids and two-equation k-epsilon solutions were developed for selected grids. Static pressure distributions, mass flow and thrust quantities were calculated for on-design engine operating conditions. Good agreement between predicted surface static pressures and experimental data was observed at different locations. Mass flow was predicted within 0.2 percent of experimental data. Thrust forces were typically within 0.4 percent of experimental data.

  16. Modeling emission lag after photoexcitation

    DOE PAGES

    Jensen, Kevin L.; Petillo, John J.; Ovtchinnikov, Serguei; ...

    2017-10-28

    A theoretical model of delayed emission following photoexcitation from metals and semiconductors is given. Its numerical implementation is designed for beam optics codes used to model photocathodes in rf photoinjectors. The model extends the Moments approach for predicting photocurrent and mean transverse energy as moments of an emitted electron distribution by incorporating time of flight and scattering events that result in emission delay on a sub-picosecond level. The model accounts for a dynamic surface extraction field and changes in the energy distribution and time of emission as a consequence of the laser penetration depth and multiple scattering events during transport. Usage in the Particle-in-Cell code MICHELLE to predict the bunch shape and duration with or without laser jitter is given. The consequences of delayed emission effects for ultra-short pulses are discussed.

  17. An electron-beam dose deposition experiment: TIGER 1-D simulation code versus thermoluminescent dosimetry

    NASA Astrophysics Data System (ADS)

    Murrill, Steven R.; Tipton, Charles W.; Self, Charles T.

    1991-03-01

    The dose absorbed in an integrated circuit (IC) die exposed to a pulse of low-energy electrons is a strong function of both electron energy and surrounding packaging materials. This report describes an experiment designed to measure how well the Integrated TIGER Series one-dimensional (1-D) electron transport simulation program predicts dose correction factors for a state-of-the-art IC package and package/printed circuit board (PCB) combination. These derived factors are compared with data obtained experimentally using thermoluminescent dosimeters (TLDs) and the FX-45 flash x-ray machine (operated in electron-beam (e-beam) mode). The results of this experiment show that the TIGER 1-D simulation code can be used to accurately predict FX-45 e-beam dose deposition correction factors for reasonably complex IC packaging configurations.

  18. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.

  19. Visual communication with retinex coding.

    PubMed

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
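
    The difference-of-Gaussian bandpass filter identified in the small-signal model can be sketched in one dimension. This is an illustrative implementation under assumed kernel widths, not the authors' actual retinex coder (which also includes the locally adaptive automatic-gain control and the Wiener restoration stage).

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized discrete Gaussian kernel."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """1-D convolution with edge clamping at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_bandpass(signal, sigma_center=1.0, sigma_surround=3.0, radius=9):
    """Difference-of-Gaussian bandpass: narrow 'center' response minus
    wide 'surround' response, which suppresses slowly varying irradiance
    while preserving edge and contrast information."""
    c = convolve(signal, gaussian_kernel(sigma_center, radius))
    s = convolve(signal, gaussian_kernel(sigma_surround, radius))
    return [a - b for a, b in zip(c, s)]
```

    Applied to a step edge, the filter output is near zero in the flat (shadow-like) regions and peaks at the edge, which is the behavior the small-signal model attributes to retinex coding.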

  20. Visual Communication with Retinex Coding

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  1. IPAC-Inlet Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

    A series of analyses have been developed which permit the calculation of the performance of common inlet designs. The methods presented are useful for determining the inlet weight flows, total pressure recovery, and aerodynamic drag coefficients for given inlet geometric designs. Limited geometric input data is required to use this inlet performance prediction methodology. The analyses presented here may also be used to perform inlet preliminary design studies. The calculated inlet performance parameters may be used in subsequent engine cycle analyses or installed engine performance calculations for existing uninstalled engine data.
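
    As one example of the kind of limited-input recovery estimate used in inlet preliminary design, the MIL-E-5008B ram recovery correlation gives inlet total pressure recovery from flight Mach number alone. Whether IPAC uses this particular correlation is an assumption; it is shown only as a representative sketch.

```python
def ram_recovery_mil_e_5008b(mach):
    """Military-specification ram recovery correlation (MIL-E-5008B), a common
    preliminary-design stand-in for supersonic inlet total pressure recovery:
    recovery = 1 for M <= 1, and 1 - 0.075*(M - 1)**1.35 above Mach 1."""
    if mach <= 1.0:
        return 1.0
    return 1.0 - 0.075 * (mach - 1.0) ** 1.35
```

    The correlation captures the expected trend: full recovery subsonically and a monotonic loss with increasing supersonic Mach number.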

  2. NASA transmission research and its probable effects on helicopter transmission design

    NASA Technical Reports Server (NTRS)

    Zaretsky, E. V.; Coy, J. J.; Townsend, D. P.

    1983-01-01

    Transmissions studied for application to helicopters in addition to the more conventional geared transmissions include hybrid (traction/gear), bearingless planetary, and split torque transmissions. Research is being performed to establish the validity of analysis and computer codes developed to predict the performance, efficiency, life, and reliability of these transmissions. Results of this research should provide the transmission designer with analytical tools to design for minimum weight and noise with maximum life and efficiency. In addition, the advantages and limitations of drive systems as well as the more conventional systems will be defined.

  3. NASA transmission research and its probable effects on helicopter transmission design

    NASA Technical Reports Server (NTRS)

    Zaretsky, E. V.; Coy, J. J.; Townsend, D. P.

    1984-01-01

    Transmissions studied for application to helicopters in addition to the more conventional geared transmissions include hybrid (traction/gear), bearingless planetary, and split torque transmissions. Research is being performed to establish the validity of analysis and computer codes developed to predict the performance, efficiency, life, and reliability of these transmissions. Results of this research should provide the transmission designer with analytical tools to design for minimum weight and noise with maximum life and efficiency. In addition, the advantages and limitations of drive systems as well as the more conventional systems will be defined.

  4. Prediction of plant lncRNA by ensemble machine learning classifiers.

    PubMed

    Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian

    2018-05-02

    In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances on discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class with applications largely focused on animal systems. Our approach uses a training set composed only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply computationally predicted hypothetical proteins.
Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
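
    The stacking idea, base learners whose outputs feed a meta-learner, can be sketched with toy sequence features. The two feature rules and the logistic meta-learner below are hypothetical simplifications invented for illustration; the published tool stacks stochastic gradient boosting and random forest classifiers trained on empirically validated transcripts.

```python
import math

def base_gc_content(seq):
    """Toy base learner no. 1: GC fraction (a stand-in sequence feature)."""
    return sum(1 for c in seq if c in "GC") / len(seq)

def base_orf_deficit(seq):
    """Toy base learner no. 2: 1 - (longest stop-free run in frame 0)/length.
    Long non-coding transcripts tend to lack long open reading frames."""
    stops = {"TAA", "TAG", "TGA"}
    longest = run = 0
    for i in range(0, len(seq) - 2, 3):
        run = 0 if seq[i:i + 3] in stops else run + 3
        longest = max(longest, run)
    return 1.0 - longest / len(seq)

def stack_features(seq):
    """Base-learner outputs, stacked as input for the meta-learner."""
    return [base_gc_content(seq), base_orf_deficit(seq)]

def train_meta(seqs, labels, lr=0.5, epochs=200):
    """Logistic-regression meta-learner trained on the base learners' outputs."""
    xs = [stack_features(s) for s in seqs]
    w, b = [0.0] * len(xs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(xs, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            g = 1.0 / (1.0 + math.exp(-z)) - y      # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_lncrna(seq, w, b):
    """Stacked probability that a transcript is long non-coding."""
    x = stack_features(seq)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

    On synthetic "coding-like" (one long stop-free frame) versus "lncRNA-like" (stop codons throughout) sequences, the meta-learner learns to separate the two classes from the base-learner outputs alone.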

  5. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, introduced with the H.264/AVC video coding standard, that compensates for temporal illumination changes in motion estimation and compensation. WP parameters, including a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cause extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce the overhead. Therefore, WP parameter prediction is crucial to research and applications related to WP. Prior art has been suggested to further improve the WP parameter prediction by implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
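
    A common way to estimate WP parameters is to match the mean and standard deviation of the current frame to those of the reference, then quantize the weight to fixed point. The sketch below illustrates that heuristic; it is not the HEVC reference encoder's exact derivation, and the rounding details are simplified assumptions.

```python
import statistics

def estimate_wp_params(cur, ref, log2_denom=6):
    """Estimate a multiplicative weight and additive offset such that
    cur[i] ~ ((w * ref[i]) >> log2_denom) + o, by mean/std matching."""
    mu_c, mu_r = statistics.fmean(cur), statistics.fmean(ref)
    sd_c, sd_r = statistics.pstdev(cur), statistics.pstdev(ref)
    scale = sd_c / sd_r if sd_r > 0 else 1.0
    w = round(scale * (1 << log2_denom))               # fixed-point weight
    o = round(mu_c - (w * mu_r) / (1 << log2_denom))   # additive offset
    return w, o

def apply_wp(ref, w, o, log2_denom=6):
    """Form the weighted prediction of a block from its reference samples."""
    return [((w * r) >> log2_denom) + o for r in ref]
```

    For a simulated illumination change (a brightening fade applied to the reference samples), the estimated weight/offset pair reconstructs the current samples to within integer rounding.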

  6. Linearized Aeroelastic Solver Applied to the Flutter Prediction of Real Configurations

    NASA Technical Reports Server (NTRS)

    Reddy, Tondapu S.; Bakhle, Milind A.

    2004-01-01

    A fast-running unsteady aerodynamics code, LINFLUX, was previously developed for predicting turbomachinery flutter. This linearized code, based on a frequency domain method, models the effects of steady blade loading through a nonlinear steady flow field. The LINFLUX code, which is 6 to 7 times faster than the corresponding nonlinear time domain code, is suitable for use in the initial design phase. Earlier, this code was verified through application to a research fan, and it was shown that the predictions of work per cycle and flutter compared well with those from a nonlinear time-marching aeroelastic code, TURBO-AE. Now, the LINFLUX code has been applied to real configurations: fans developed under the Energy Efficient Engine (E-cubed) Program and the Quiet Aircraft Technology (QAT) project. The LINFLUX code starts with a steady nonlinear aerodynamic flow field and solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the turbomachinery blades. First, a steady aerodynamic solution is computed for given operating conditions using the nonlinear unsteady aerodynamic code TURBO-AE. A blade vibration analysis is done to determine the frequencies and mode shapes of the vibrating blades, and an interface code is used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor is used to interpolate the mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh. Then, LINFLUX is used to calculate the unsteady aerodynamic pressure distribution for a given vibration mode, frequency, and interblade phase angle. Finally, a post-processor uses the unsteady pressures to calculate the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping.
Results of flutter calculations from the LINFLUX code are presented for (1) the E-cubed fan developed under the E-cubed program and (2) the Quiet High Speed Fan (QHSF) developed under the Quiet Aircraft Technology project. The results are compared with those obtained from the TURBO-AE code. A graph of the work done per vibration cycle for the first vibration mode of the E-cubed fan is shown. It can be seen that the LINFLUX results show a very good comparison with TURBO-AE results over the entire range of interblade phase angle. The work done per vibration cycle for the first vibration mode of the QHSF fan is shown. Once again, the LINFLUX results compare very well with the results from the TURBO-AE code.
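
    The link between work per vibration cycle and flutter can be sketched with the standard energy method: equating the aerodynamic work per cycle to the energy a viscous damper would dissipate in one cycle yields an equivalent damping ratio. This is the generic textbook relation, not necessarily the exact post-processing used by LINFLUX or TURBO-AE.

```python
import math

def aero_damping_ratio(work_per_cycle, modal_mass, omega, amplitude):
    """Equivalent viscous damping ratio from aerodynamic work per cycle.
    A viscous damper dissipates E = 2*pi*zeta*m*omega^2*A^2 per cycle, so
    positive work done BY the flow ON the blade gives negative damping."""
    return -work_per_cycle / (2.0 * math.pi * modal_mass * omega ** 2 * amplitude ** 2)

def is_flutter(work_per_cycle_by_ibpa):
    """Flutter is indicated if any interblade phase angle extracts net
    positive work from the flow over a vibration cycle."""
    return any(w > 0.0 for w in work_per_cycle_by_ibpa)
```

    Scanning the work per cycle over all interblade phase angles, as in the E-cubed and QHSF comparisons, then amounts to checking whether the least stable phase angle has positive work (negative damping).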

  7. Experimental Verification of a Progressive Damage Model for IM7/5260 Laminates Subjected to Tension-Tension Fatigue

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.

    1995-01-01

    The durability and damage tolerance of laminated composites are critical design considerations for airframe composite structures. Therefore, the ability to model damage initiation and growth and predict the life of laminated composites is necessary to achieve structurally efficient and economical designs. The purpose of this research is to experimentally verify the application of a continuum damage model to predict progressive damage development in a toughened material system. Damage due to monotonic and tension-tension fatigue was documented for IM7/5260 graphite/bismaleimide laminates. Crack density and delamination surface area were used to calculate matrix cracking and delamination internal state variables to predict stiffness loss in unnotched laminates. A damage dependent finite element code predicted the stiffness loss for notched laminates with good agreement to experimental data. It was concluded that the continuum damage model can adequately predict matrix damage progression in notched and unnotched laminates as a function of loading history and laminate stacking sequence.

  8. DRA/NASA/ONERA Collaboration on Icing Research. Part 2; Prediction of Airfoil Ice Accretion

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Gent, R. W.; Guffond, Didier

    1997-01-01

    This report presents results from a joint study by DRA, NASA, and ONERA for the purpose of comparing, improving, and validating the aircraft icing computer codes developed by each agency. These codes are of three kinds: (1) water droplet trajectory prediction, (2) ice accretion modeling, and (3) transient electrothermal deicer analysis. In this joint study, the agencies compared their code predictions with each other and with experimental results. These comparison exercises were published in three technical reports, each with joint authorship. DRA published and had first authorship of Part 1 - Droplet Trajectory Calculations, NASA of Part 2 - Ice Accretion Prediction, and ONERA of Part 3 - Electrothermal Deicer Analysis. The results cover work done during the period from August 1986 to late 1991. As a result, all of the information in this report is dated. Where necessary, current information is provided to show the direction of current research. In this present report on ice accretion, each agency predicted ice shapes on two dimensional airfoils under icing conditions for which experimental ice shapes were available. In general, all three codes did a reasonable job of predicting the measured ice shapes. For any given experimental condition, one of the three codes predicted the general ice features (i.e., shape, impingement limits, mass of ice) somewhat better than did the other two. However, no single code consistently did better than the other two over the full range of conditions examined, which included rime, mixed, and glaze ice conditions. In several of the cases, DRA showed that the user's knowledge of icing can significantly improve the accuracy of the code prediction. Rime ice predictions were reasonably accurate and consistent among the codes, because droplets freeze on impact and the freezing model is simple. 
Glaze ice predictions were less accurate and less consistent among the codes, because the freezing model is more complex and is critically dependent upon unsubstantiated heat transfer and surface roughness models. Thus, heat transfer prediction methods used in the codes became the subject of a separate study in this report to compare predicted heat transfer coefficients with a limited experimental database of heat transfer coefficients for cylinders with simulated glaze and rime ice shapes. The codes did a good job of predicting heat transfer coefficients near the stagnation region of the ice shapes. But in the region of the ice horns, all three codes predicted heat transfer coefficients considerably higher than the measured values. An important conclusion of this study is that further research is needed to understand the finer detail of the glaze ice accretion process and to develop improved glaze ice accretion models.
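
    The rime/glaze distinction discussed above is commonly captured by a freezing fraction from a Messinger-style surface energy balance: when the heat removed can freeze all impinging water, the fraction saturates at one (rime); otherwise some water remains liquid and runs back (glaze). The terms below are a simplified sketch under assumed sign conventions, not the balance implemented in any of the three codes.

```python
def freezing_fraction(q_conv, q_evap, m_imp, c_w, dT_supercool, L_f):
    """Simplified Messinger-style freezing fraction (illustrative only).
    q_conv, q_evap : convective and evaporative heat fluxes removed, W/m^2
    m_imp          : impinging water mass flux, kg/(s*m^2)
    c_w            : specific heat of water, J/(kg*K)
    dT_supercool   : supercooling of impinging droplets below freezing, K
    L_f            : latent heat of fusion, J/kg
    The supercooled droplets act as an extra heat sink while warming to the
    freezing point; the fraction frozen is heat removed over latent heat."""
    q_removed = q_conv + q_evap + m_imp * c_w * dT_supercool
    n = q_removed / (m_imp * L_f)
    return min(max(n, 0.0), 1.0)   # n = 1 -> rime; 0 < n < 1 -> glaze
```

    With strong convective cooling the fraction clamps at one (droplets freeze on impact, the simple rime case); with weaker cooling it falls below one, and the accretion shape then depends on the contested heat transfer and roughness models.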

  9. TRAC-PF1/MOD1 pretest predictions of MIST experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyack, B.E.; Steiner, J.L.; Siebe, D.A.

    Los Alamos National Laboratory is a participant in the Integral System Test (IST) program initiated in June 1983 to provide integral system test data on specific issues and phenomena relevant to post small-break loss-of-coolant accidents (SBLOCAs) in Babcock and Wilcox plant designs. The Multi-Loop Integral System Test (MIST) facility is the largest single component in the IST program. During Fiscal Year 1986, Los Alamos performed five MIST pretest analyses. The five experiments were chosen on the basis of their potential either to approach the facility limits or to challenge the predictive capability of the TRAC-PF1/MOD1 code. Three SBLOCA tests were examined which included nominal test conditions, throttled auxiliary feedwater and asymmetric steam-generator cooldown, and reduced high-pressure-injection (HPI) capacity, respectively. Also analyzed were two 'feed-and-bleed' cooling tests with reduced HPI and delayed HPI initiation. Results of the tests showed that the MIST facility limits would not be approached in the five tests considered. Early comparisons with preliminary test data indicate that the TRAC-PF1/MOD1 code is correctly calculating the dominant phenomena occurring in the MIST facility during the tests. Posttest analyses are planned to provide a quantitative assessment of the code's ability to predict MIST transients.

  10. Investigation of fatigue assessments accuracy for beam weldments considering material data input and FE-model type

    NASA Astrophysics Data System (ADS)

    Gorash, Yevgen; Comlekci, Tugrul; MacKenzie, Donald

    2017-05-01

    This study investigates the effects of fatigue material data and finite element type on the accuracy of residual life assessments under high-cycle fatigue. The bending of cross-beam connections is simulated in ANSYS Workbench for different combinations of structural member shapes made of a typical structural steel. The stress analysis of weldments with specific dimensions and applied loading is performed using solid and shell elements. The stress results are transferred to the fatigue code nCode DesignLife for the residual life prediction. Considering the effects of mean stress using the FKM approach, and of bending and thickness according to BS 7608:2014, fatigue life is predicted using the Volvo method and stress integration rules from the ASME Boiler & Pressure Vessel Code. Three different pairs of S-N curves are considered in this work, including generic seam-weld curves and curves for the equivalent Japanese steel JIS G3106-SM490B. The S-N curve parameters for the steel are identified from the experimental data available in NIMS fatigue data sheets using a least-squares method with thickness and mean stress corrections. The numerical predictions are compared to the available experimental results, indicating the most preferable fatigue data input, range of applicability, and FE-model formulation to achieve the best accuracy.
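    Identifying S-N curve parameters by least squares, as described above, amounts to a log-log linear fit of a Basquin-type curve S = A·N^b. A minimal sketch follows; the stress/life values are synthetic and the function names are illustrative, not taken from nCode DesignLife or the NIMS data sheets:

```python
import numpy as np

# Hypothetical fatigue test data: stress range (MPa) vs. cycles to failure.
stress_range = np.array([300.0, 250.0, 200.0, 160.0, 130.0])
cycles = np.array([5.0e4, 1.8e5, 7.5e5, 3.2e6, 1.1e7])

# A Basquin-type S-N curve, S = A * N**b, is linear in log-log space, so a
# least-squares fit of log10(S) against log10(N) recovers b and log10(A).
b, logA = np.polyfit(np.log10(cycles), np.log10(stress_range), 1)
A = 10.0 ** logA

def predicted_life(stress):
    """Invert the fitted curve: N = (S / A)**(1 / b)."""
    return (stress / A) ** (1.0 / b)
```

    Thickness and mean stress corrections would be applied to the stress ranges before fitting; they are omitted here for brevity.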

  11. Pressure-distribution measurements on a transonic low-aspect ratio wing

    NASA Technical Reports Server (NTRS)

    Keener, E. R.

    1985-01-01

    Experimental surface pressure distributions and oil flow photographs are presented for a 0.90 m semispan model of NASA/Lockheed Wing C, a generic transonic, supercritical, low aspect ratio, highly 3-dimensional configuration. This wing was tested at the design angle of attack of 5 deg over a Mach number range from 0.25 to 0.96, and a Reynolds number range from 3.4 x 10^6 to 10 x 10^6. Pressures were measured with both the tunnel floor and ceiling suction slots open for most of the tests but taped closed for some tests to simulate solid walls. A comparison is made with the measured pressures from a small model in a high-Reynolds-number facility and with predicted pressures from two three-dimensional transonic full-potential wing codes: the design code FLO22 (nonconservative) and the TWING code (conservative). At the given design condition, a small region of flow separation occurred. At a Mach number of 0.82 the flow was unseparated and the surface flow angles were less than 10 deg, indicating that the boundary layer flow was not 3-D. Evidence indicates that wings that are optimized for mild shock waves and mild pressure-recovery gradients generally have small 3-D boundary layer flow at design conditions for unseparated flow.

  12. Field Validation of the Stability Limit of a Multi MW Turbine

    NASA Astrophysics Data System (ADS)

    Kallesøe, Bjarne S.; Kragh, Knud A.

    2016-09-01

    Long slender blades of modern multi-megawatt turbines exhibit a flutter-like instability at rotor speeds above a critical rotor speed. Knowing the critical rotor speed is crucial to a safe turbine design. The flutter-like instability can only be estimated using geometrically non-linear aeroelastic codes. In this study, the estimated rotor speed stability limit of a 7 MW state-of-the-art wind turbine is validated experimentally. The stability limit is estimated using Siemens Wind Power's in-house aeroelastic code, and the results show that the predicted stability limit is within 5% of the experimentally observed limit.

  13. PACCMIT/PACCMIT-CDS: identifying microRNA targets in 3′ UTRs and coding sequences

    PubMed Central

    Šulc, Miroslav; Marín, Ray M.; Robins, Harlan S.; Vaníček, Jiří

    2015-01-01

    The purpose of the proposed web server, publicly available at http://paccmit.epfl.ch, is to provide a user-friendly interface to two algorithms for predicting messenger RNA (mRNA) molecules regulated by microRNAs: (i) PACCMIT (Prediction of ACcessible and/or Conserved MIcroRNA Targets), which identifies primarily mRNA transcripts targeted in their 3′ untranslated regions (3′ UTRs), and (ii) PACCMIT-CDS, designed to find mRNAs targeted within their coding sequences (CDSs). While PACCMIT belongs among the accurate algorithms for predicting conserved microRNA targets in the 3′ UTRs, the main contribution of the web server is 2-fold: PACCMIT provides an accurate tool for predicting targets also of weakly conserved or non-conserved microRNAs, whereas PACCMIT-CDS addresses the lack of similar portals adapted specifically for targets in CDS. The web server asks the user for microRNAs and mRNAs to be analyzed, accesses the precomputed P-values for all microRNA–mRNA pairs from a database for all mRNAs and microRNAs in a given species, ranks the predicted microRNA–mRNA pairs, evaluates their significance according to the false discovery rate and finally displays the predictions in a tabular form. The results are also available for download in several standard formats. PMID:25948580
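    The server's final step, evaluating the significance of ranked microRNA-mRNA pairs by false discovery rate, is commonly implemented with a Benjamini-Hochberg procedure; a minimal sketch of that standard procedure (not PACCMIT's actual implementation) is:

```python
def benjamini_hochberg(pvalues):
    """Return Benjamini-Hochberg adjusted q-values (FDR) for raw p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    qvalues = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotone q-values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvalues[i] * m / rank)
        qvalues[i] = q
        prev = q
    return qvalues
```

    Predictions whose q-value falls below the chosen FDR threshold would then be reported as significant.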

  14. Modeling Laboratory Astrophysics Experiments in the High-Energy-Density Regime Using the CRASH Radiation-Hydrodynamics Model

    NASA Astrophysics Data System (ADS)

    Grosskopf, M. J.; Drake, R. P.; Trantham, M. R.; Kuranz, C. C.; Keiter, P. A.; Rutter, E. M.; Sweeney, R. M.; Malamud, G.

    2012-10-01

    The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density physics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive mesh refinement (AMR) hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. CRASH model results have shown good agreement with experimental results from a variety of applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparisons between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  15. Flowfield Comparisons from Three Navier-Stokes Solvers for an Axisymmetric Separate Flow Jet

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James; Khavaran, Abbas

    2002-01-01

    To meet new noise reduction goals, many concepts to enhance mixing in the exhaust jets of turbofan engines are being studied. Accurate steady state flowfield predictions from state-of-the-art computational fluid dynamics (CFD) solvers are needed as input to the latest noise prediction codes. The main intent of this paper was to ascertain that similar Navier-Stokes solvers run at different sites would yield comparable results for an axisymmetric two-stream nozzle case. Predictions from the WIND and the NPARC codes are compared to previously reported experimental data and results from the CRAFT Navier-Stokes solver. Similar k-epsilon turbulence models were employed in each solver, and identical computational grids were used. Agreement between experimental data and predictions from each code was generally good for mean values. All three codes underpredict the maximum value of turbulent kinetic energy. The predicted locations of the maximum turbulent kinetic energy were farther downstream than seen in the data. A grid study was conducted using the WIND code, and comments about convergence criteria and grid requirements for CFD solutions to be used as input for noise prediction computations are given. Additionally, noise predictions from the MGBK code, using the CFD results from the CRAFT code, NPARC, and WIND as input are compared to data.

  16. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate savings compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.
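    The recursive quadtree block structure mentioned above can be illustrated with a toy partitioner. Real HEVC encoders choose splits by rate-distortion optimization; the variance threshold below is only a stand-in for that decision, and all names and parameters are illustrative:

```python
import numpy as np

def split_cu(block, min_size=8, threshold=100.0):
    """Toy quadtree partition: return a list of (y, x, size) leaf coding units.

    HEVC decides splits by rate-distortion optimization; here a simple
    sample-variance test stands in for that decision.
    """
    def recurse(y, x, size):
        sub = block[y:y + size, x:x + size]
        if size <= min_size or sub.var() <= threshold:
            return [(y, x, size)]
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += recurse(y + dy, x + dx, half)
        return leaves

    return recurse(0, 0, block.shape[0])
```

    A flat block stays a single large coding unit, while a textured block is split recursively down to the minimum size.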

  17. Development and verification of NRC's single-rod fuel performance codes FRAPCON-3 and FRAPTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyer, C.E.; Cunningham, M.E.; Lanning, D.D.

    1998-03-01

    The FRAPCON and FRAP-T code series, developed in the 1970s and early 1980s, are used by the US Nuclear Regulatory Commission (NRC) to predict fuel performance during steady-state and transient power conditions, respectively. Both code series are now being updated by Pacific Northwest National Laboratory to improve their predictive capabilities at high burnup levels. The newest versions of the codes are called FRAPCON-3 and FRAPTRAN. The updates to fuel property and behavior models focus on providing best-estimate predictions under steady-state and fast transient power conditions up to extended fuel burnups (> 55 GWd/MTU). Both codes will be assessed against a database independent of the database used for code benchmarking, and an estimate of code predictive uncertainties will be made based on comparisons to the benchmark and independent databases.

  18. Sandia National Laboratories analysis code data base

    NASA Astrophysics Data System (ADS)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  19. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moment domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and interview data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, a spatial view compensation/prediction in the Zernike moment domain is applied. Spatial and temporal motion activity are fused to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance for different inter-view and temporal estimation quality conditions.
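    Projection onto the Zernike basis underlies the moment-domain processing described above. A minimal sketch of the standard radial polynomial R_n^|m| used when computing Zernike moments follows (the paper's actual fusion rule is not reproduced here):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^|m|(rho) of the Zernike basis.

    Zernike moments of an image patch are obtained by integrating the patch
    against R_n^|m|(rho) * exp(-j*m*theta) over the unit disk.
    """
    m = abs(m)
    if (n - m) % 2:          # R vanishes when n - |m| is odd
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total
```

    For example, R_2^0(rho) reduces to the familiar 2*rho**2 - 1.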

  20. Characterization of Aerodynamic Interactions with the Mars Science Laboratory Reaction Control System Using Computation and Experiment

    NASA Technical Reports Server (NTRS)

    Schoenenberger, Mark; VanNorman, John; Rhode, Matthew; Paulson, John

    2013-01-01

    On August 5, 2012, the Mars Science Laboratory (MSL) entry capsule successfully entered Mars' atmosphere and landed the Curiosity rover in Gale Crater. The capsule used a reaction control system (RCS) consisting of four pairs of hydrazine thrusters to fly a guided entry. The RCS provided bank control to fly along a flight path commanded by an onboard computer and also damped unwanted rates due to atmospheric disturbances and any dynamic instabilities of the capsule. A preliminary assessment of MSL's flight data from entry showed that the capsule flew much as predicted. This paper describes how the MSL aerodynamics team used engineering analyses, computational codes, and wind tunnel testing in concert to develop the RCS system and certify it for flight. Over the course of MSL's development, the RCS configuration underwent a number of design iterations to accommodate mechanical constraints, aeroheating concerns, and excessive aero/RCS interactions. A brief overview of the MSL RCS configuration design evolution is provided, followed by a brief description of how the computational predictions of RCS jet interactions were validated. The primary work to certify that the RCS interactions were acceptable for flight was centered on validating computational predictions at hypersonic speeds. A comparison of computational fluid dynamics (CFD) predictions to wind tunnel force and moment data gathered in the NASA Langley 31-Inch Mach 10 Tunnel was the linchpin for validating the CFD codes used to predict aero/RCS interactions. Using the CFD predictions and experimental data, an interaction model was developed for Monte Carlo analyses using 6-degree-of-freedom trajectory simulation. The interaction model used in the flight simulation is presented.

  1. Inter-species prediction of protein phosphorylation in the sbv IMPROVER species translation challenge

    PubMed Central

    Biehl, Michael; Sadowski, Peter; Bhanot, Gyan; Bilal, Erhan; Dayarian, Adel; Meyer, Pablo; Norel, Raquel; Rhrissorrakrai, Kahn; Zeller, Michael D.; Hormoz, Sahand

    2015-01-01

    Motivation: Animal models are widely used in biomedical research for reasons ranging from practical to ethical. An important issue is whether rodent models are predictive of human biology. This has been addressed recently in the framework of a series of challenges designed by the systems biology verification for Industrial Methodology for Process Verification in Research (sbv IMPROVER) initiative. In particular, one of the sub-challenges was devoted to the prediction of protein phosphorylation responses in human bronchial epithelial cells, exposed to a number of different chemical stimuli, given the responses in rat bronchial epithelial cells. Participating teams were asked to make inter-species predictions on the basis of available training examples, comprising transcriptomics and phosphoproteomics data. Results: Here, the two best performing teams present their data-driven approaches and computational methods. In addition, post hoc analyses of the datasets and challenge results were performed by the participants and challenge organizers. The challenge outcome indicates that successful prediction of protein phosphorylation status in human based on rat phosphorylation levels is feasible. However, within the limitations of the computational tools used, the inclusion of gene expression data does not improve the prediction quality. The post hoc analysis of time-specific measurements sheds light on the signaling pathways in both species. Availability and implementation: A detailed description of the dataset, challenge design and outcome is available at www.sbvimprover.com. The code used by team IGB is provided under http://github.com/uci-igb/improver2013. Implementations of the algorithms applied by team AMG are available at http://bhanot.biomaps.rutgers.edu/wiki/AMG-sc2-code.zip. Contact: meikelbiehl@gmail.com PMID:24994890

  2. Contraction design for small low-speed wind tunnels

    NASA Technical Reports Server (NTRS)

    Bell, James H.; Mehta, Rabindra D.

    1988-01-01

    An iterative design procedure was developed for two- or three-dimensional contractions installed on small, low-speed wind tunnels. The procedure consists of first computing the potential flow field, and hence the pressure distributions along the walls, of a contraction of given size and shape using a three-dimensional numerical panel method. The pressure or velocity distributions are then fed into two-dimensional boundary layer codes to predict the behavior of the boundary layers along the walls. For small, low-speed contractions it is shown that the assumption of a laminar boundary layer originating from stagnation conditions at the contraction entry and remaining laminar throughout passage through the successful designs is justified. This hypothesis was confirmed by comparing the predicted boundary layer data at the contraction exit with measured data in existing wind tunnels. The measured boundary layer momentum thicknesses at the exit of four existing contractions, two of which were 3-D, were found to lie within 10 percent of the predicted values, with the predicted values generally lower. From the contraction wall shapes investigated, the one based on a fifth-order polynomial was selected for installation on a newly designed mixing layer wind tunnel.
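    Fifth-order polynomial contraction walls of the kind selected above are typically constructed so that slope and curvature vanish at both ends, giving a smooth inlet and exit. A sketch under that common construction (the paper's exact coefficients are not given here); xi is the axial coordinate normalized by contraction length:

```python
def contraction_wall(xi, h_in, h_out):
    """Fifth-order polynomial wall half-height between h_in and h_out.

    The blending function s(xi) = 6*xi**5 - 15*xi**4 + 10*xi**3 has zero
    first and second derivatives at xi = 0 and xi = 1, so the wall meets
    the settling chamber and test section with zero slope and curvature.
    """
    s = 6 * xi ** 5 - 15 * xi ** 4 + 10 * xi ** 3
    return h_in - (h_in - h_out) * s
```

    The wall coordinates generated this way would then be fed to the panel method and boundary layer codes in the iterative procedure described above.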

  3. Contraction design for small low-speed wind tunnels

    NASA Technical Reports Server (NTRS)

    Bell, James H.; Mehta, Rabindra D.

    1988-01-01

    An iterative design procedure was developed for 2- or 3-dimensional contractions installed on small, low-speed wind tunnels. The procedure consists of first computing the potential flow field, and hence the pressure distributions along the walls, of a contraction of given size and shape using a 3-dimensional numerical panel method. The pressure or velocity distributions are then fed into 2-dimensional boundary layer codes to predict the behavior of the boundary layers along the walls. For small, low-speed contractions, it is shown that the assumption of a laminar boundary layer originating from stagnation conditions at the contraction entry and remaining laminar throughout passage through the successful designs is justified. This hypothesis was confirmed by comparing the predicted boundary layer data at the contraction exit with measured data in existing wind tunnels. The measured boundary layer momentum thicknesses at the exit of four existing contractions, two of which were 3-D, were found to lie within 10 percent of the predicted values, with the predicted values generally lower. From the contraction wall shapes investigated, the one based on a 5th-order polynomial was selected for installation on a newly designed mixing wind tunnel.

  4. A study of power cycles using supercritical carbon dioxide as the working fluid

    NASA Astrophysics Data System (ADS)

    Schroder, Andrew Urban

    A real fluid heat engine power cycle analysis code has been developed for analyzing the zero dimensional performance of a general recuperated, recompression, precompression supercritical carbon dioxide power cycle with reheat and a unique shaft configuration. With the proposed shaft configuration, several smaller compressor-turbine pairs could be placed inside of a pressure vessel in order to avoid high speed, high pressure rotating seals. The small compressor-turbine pairs would share some resemblance with a turbocharger assembly. Variation in fluid properties within the heat exchangers is taken into account by discretizing zero dimensional heat exchangers. The cycle analysis code allows for multiple reheat stages, as well as an option for the main compressor to be powered by a dedicated turbine or an electrical motor. Variation in performance with respect to design heat exchanger pressure drops and minimum temperature differences, precompressor pressure ratio, main compressor pressure ratio, recompression mass fraction, main compressor inlet pressure, and low temperature recuperator mass fraction have been explored throughout a range of each design parameter. Turbomachinery isentropic efficiencies are implemented and the sensitivity of the cycle performance and the optimal design parameters is explored. Sensitivity of the cycle performance and optimal design parameters is studied with respect to the minimum heat rejection temperature and the maximum heat addition temperature. A hybrid stochastic and gradient based optimization technique has been used to optimize critical design parameters for maximum engine thermal efficiency. A parallel design exploration mode was also developed in order to rapidly conduct the parameter sweeps in this design space exploration. A cycle thermal efficiency of 49.6% is predicted with a 320K [47°C] minimum temperature and 923K [650°C] maximum temperature. 
The real fluid heat engine power cycle analysis code was expanded to study a theoretical recuperated Lenoir cycle using supercritical carbon dioxide as the working fluid. The real fluid cycle analysis code was also enhanced to study a combined cycle engine cascade. Two engine cascade configurations were studied. The first consisted of a traditional open loop gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 65.0% using a peak temperature of 1,890K [1,617°C]. The second configuration consisted of a hybrid natural gas powered solid oxide fuel cell and gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 73.1%. Both configurations had a minimum temperature of 306K [33°C]. The hybrid stochastic and gradient based optimization technique was used to optimize all engine design parameters for each engine in the cascade such that the entire engine cascade achieved the maximum thermal efficiency. The parallel design exploration mode was also utilized in order to understand the impact of different design parameters on the overall engine cascade thermal efficiency. Two dimensional conjugate heat transfer (CHT) numerical simulations of a straight, equal height channel heat exchanger using supercritical carbon dioxide were conducted at various Reynolds numbers and channel lengths.
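    The combined-cycle results above come from cascading engines so that each bottoming cycle runs on the heat rejected upstream. Under the idealization that all rejected heat is available to the next engine in the cascade, the overall thermal efficiency composes as sketched below (a simplification; the study's cycle code resolves real-fluid properties and discretized heat exchangers):

```python
def cascade_efficiency(etas):
    """Overall efficiency of an idealized engine cascade.

    Each engine converts a fraction eta of its incoming heat to work and
    rejects the rest to the next engine downstream.
    """
    rejected = 1.0   # heat entering the cascade, normalized
    work = 0.0
    for eta in etas:
        work += rejected * eta
        rejected *= (1.0 - eta)
    return work
```

    For example, a 40% topping cycle over a 30% bottoming cycle yields 0.4 + 0.6*0.3 = 58% overall, illustrating why the cascades studied exceed any single cycle's efficiency.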

  5. Influence of flowfield and vehicle parameters on engineering aerothermal methods

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Thompson, Richard A.

    1989-01-01

    The reliability and flexibility of three engineering codes used in the aerospace industry (AEROHEAT, INCHES, and MINIVER) were investigated by comparing the results of these codes with Reentry F flight data and ground-test heat-transfer data for a range of cone angles, and with the predictions obtained using the detailed VSL3D code; the engineering solutions were also compared with each other. In particular, the impact of several vehicle and flow-field parameters on the heat transfer, and the capability of the engineering codes to predict these results, were determined. It was found that entropy, pressure gradient, nose bluntness, gas chemistry, and angle of attack all affect heating levels. A comparison of the results of the three engineering codes with Reentry F flight data and with the predictions of the VSL3D code showed very good agreement within the regions of applicability of the codes. It is emphasized that the parameters used in this study can significantly influence the actual heating levels and the prediction capability of a code.

  6. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  7. Recent MELCOR and VICTORIA Fission Product Research at the NRC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bixler, N.E.; Cole, R.K.; Gauntt, R.O.

    1999-01-21

    The MELCOR and VICTORIA severe accident analysis codes, which were developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission, are designed to estimate fission product releases during nuclear reactor accidents in light water reactors. MELCOR is an integrated plant-assessment code that models the key phenomena in adequate detail for risk-assessment purposes. VICTORIA is a more specialized fission-product code that provides detailed modeling of chemical reactions and aerosol processes under the high-temperature conditions encountered in the reactor coolant system during a severe reactor accident. This paper focuses on recent enhancements and assessments of the two codes in the area of fission product chemistry modeling. Recently, a model for iodine chemistry in aqueous pools in the containment building was incorporated into the MELCOR code. The model calculates dissolution of iodine into the pool and releases of organic and inorganic iodine vapors from the pool into the containment atmosphere. The main purpose of this model is to evaluate the effect of long-term revolatilization of dissolved iodine. Inputs to the model include the dose rate in the pool, the amount of chloride-containing polymer, such as Hypalon, and the amount of buffering agents in the containment. Model predictions are compared against the Radioiodine Test Facility (RTF) experiments conducted by Atomic Energy of Canada Limited (AECL), specifically International Standard Problem 41. Improvements to VICTORIA's chemical reaction models were implemented as a result of recommendations from a peer review of VICTORIA completed last year. Specifically, an option is now included to model aerosols and deposited fission products as three condensed phases, in addition to the original option of a single condensed phase. The three-condensed-phase model results in somewhat higher predicted fission product volatilities than does the single-condensed-phase model.
Modeling of UO2 thermochemistry was also improved, resulting in better prediction of the vaporization of uranium from fuel, which can react with released fission products to affect their volatility. This model also improves the prediction of fission product release rates from fuel. Finally, recent comparisons of MELCOR and VICTORIA with International Standard Problem 40 (STORM) data are presented. These comparisons focus on predicted thermophoretic deposition, which is the dominant deposition mechanism. Sensitivity studies were performed with the codes to examine experimental and modeling uncertainties.

  8. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1993-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
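    The details of the TS-SRP life relations live in the code's materials database, but the general shape of a total strain based life calculation can be sketched with the familiar Basquin plus Coffin-Manson total strain-life form, solved for life. This is a generic stand-in, not the code's actual correlations, and the material constants below are illustrative:

```python
def strain_life(total_strain_amp, E=200e3, sf=900.0, b=-0.09, ef=0.6, c=-0.6):
    """Solve a total strain-life relation for cycles to failure.

    Total strain amplitude = (sf/E)*(2Nf)**b + ef*(2Nf)**c, with the elastic
    (Basquin) and plastic (Coffin-Manson) terms summed; solved by bisection
    in log space since strain decreases monotonically with life.
    """
    def strain(two_nf):
        return sf / E * two_nf ** b + ef * two_nf ** c

    lo, hi = 1.0, 1e12          # bracket on reversals 2Nf
    for _ in range(200):
        mid = (lo * hi) ** 0.5  # geometric midpoint (log-space bisection)
        if strain(mid) > total_strain_amp:
            lo = mid            # life is longer than mid
        else:
            hi = mid
    return lo / 2.0             # cycles Nf
```

    Partitioning the inelastic strain range into creep and plasticity components, as SRP does, would replace the single plastic term with per-partition life relations.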

  9. Design and analysis of multihypothesis motion-compensated prediction (MHMCP) codec for error-resilient visual communications

    NASA Astrophysics Data System (ADS)

    Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay

    2004-10-01

    A multi-hypothesis motion-compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference block in the frame buffer, is proposed and analyzed for error-resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error resilience of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refreshing scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
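    The weighted superposition at the heart of MHMCP can be sketched directly. The normalization of the hypothesis coefficients is an assumption for illustration; choosing those coefficients well is precisely what the paper analyzes:

```python
import numpy as np

def mhmcp_predict(ref_blocks, weights):
    """Predict a block as a weighted superposition of reference blocks.

    ref_blocks: sequence of equally shaped arrays from the frame buffer.
    weights: one hypothesis coefficient per reference block; normalized
    here so they sum to 1 (an illustrative convention).
    """
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * b for w, b in zip(weights, np.asarray(ref_blocks, dtype=float)))
```

    With two hypotheses and equal weights this degenerates to simple block averaging; unequal weights trade coding gain against error resilience.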

  10. High-Lift Propeller Noise Prediction for a Distributed Electric Propulsion Flight Demonstrator

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Buning, Pieter G.; Jones, William T.; Derlaga, Joseph M.

    2017-01-01

    Over the past several years, the use of electric propulsion technologies within aircraft design has received increased attention. The characteristics of electric propulsion systems open up new areas of the aircraft design space, such as the use of distributed electric propulsion (DEP). In this approach, electric motors are placed in many different locations to achieve increased efficiency through integration of the propulsion system with the airframe. Under a project called Scalable Convergent Electric Propulsion Technology Operations Research (SCEPTOR), NASA is designing a flight demonstrator aircraft that employs many "high-lift propellers" distributed upstream of the wing leading edge and two cruise propellers (one at each wingtip). As the high-lift propellers are operational at low flight speeds (take-off/approach flight conditions), the impact of the DEP configuration on the aircraft noise signature is also an important design consideration. This paper describes efforts toward the development of a multi-fidelity aerodynamic and acoustic methodology for DEP high-lift propeller aeroacoustic modeling. Specifically, the PAS, OVERFLOW 2, and FUN3D codes are used to predict the aerodynamic performance of a baseline high-lift propeller blade set. Blade surface pressure results from the aerodynamic predictions are then used with PSU-WOPWOP and the F1A module of the NASA second generation Aircraft NOise Prediction Program to predict the isolated high-lift propeller noise source. Comparisons of predictions indicate that general trends related to angle of attack effects at the blade passage frequency are captured well with the various codes. Results for higher harmonics of the blade passage frequency appear consistent for the CFD-based methods. Conversely, the results indicate the need for a study of the effects of increased azimuthal grid resolution on the PAS-based results, which will be pursued in future work. 
Overall, the results indicate that the computational approach is acceptable for fundamental assessment of low-noise high-lift propeller designs. The extent to which the various approaches may be used in a complementary manner will be further established as measured data becomes available for validation. Ultimately, it is anticipated that this combined approach may be used to provide realistic incident source fields for acoustic shielding/scattering studies on various aircraft configurations.

  11. Numerical Simulation of Complex Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Chernobrovkin, A. A.; Lakshminarayana, B.

    1999-01-01

    An unsteady, multiblock, Reynolds-Averaged Navier-Stokes solver based on a Runge-Kutta scheme and pseudo-time stepping was developed for turbomachinery applications. The code was validated and assessed against analytical and experimental data. It was used to study a variety of physical mechanisms of unsteady, three-dimensional, turbulent, transitional, and cooling flows in compressors and turbines. Flow over a cylinder was used to study the effects of numerical aspects on the accuracy of prediction of wake decay and transition, and to modify k-epsilon models. The following simulations have been performed: (a) Unsteady flow in a compressor cascade: Three low Reynolds number turbulence models were assessed and data compared with Euler/boundary layer predictions. Major flow features associated with wake-induced transition were predicted and studied; (b) Nozzle wake-rotor interaction in a turbine: Results were compared to LDV data at design and off-design conditions, and cause and effect of unsteady flow in turbine rotors were analyzed; (c) Flow in the low-pressure turbine: The capability of the code to predict transitional, attached, and separated flows over a wide range of low Reynolds numbers and inlet freestream turbulence intensities was assessed. Several turbulence and transition models were employed and comparisons made to experiments; (d) Leading edge film cooling at compound angle: Comparisons were made with experiments, and the flow physics of the associated vortical structures were studied; and (e) Tip leakage flow in a turbine: The physics of the secondary flow in a rotor was studied and sources of loss identified.

  12. ANOPP programming and documentation standards document

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Standards defining the requirements for preparing software for the Aircraft Noise Prediction Program (ANOPP) are given. The intent of these standards is to provide definition, design, coding, and documentation criteria for the achievement of unity among ANOPP products. These standards apply to all of ANOPP's standard software system. The standards encompass philosophy as well as techniques and conventions.

  13. Object Based Numerical Zooming Between the NPSS Version 1 and a 1-Dimensional Meanline High Pressure Compressor Design Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, G.; Naiman, C.; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial, or circumferential resolution within a given component (e.g. a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables numerical zooming between the NPSS Version 1 (0-dimensional) and higher order 1-, 2-, and 3-dimensional analysis codes.
    The NPSS Version 1 preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high pressure compressor results back to an NPSS 0-dimensional engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
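
    The zooming step described above amounts to rescaling the 0-dimensional component map so that it reproduces the higher-order result at the matched operating point. A minimal sketch of that idea (hypothetical variable names; not the NPSS object API):

    ```python
    def zoom_scalars(map_pr, map_eff, meanline_pr, meanline_eff):
        """Return multiplicative scalars that make the 0-D map point
        reproduce the higher-order (1-D meanline) result."""
        return meanline_pr / map_pr, meanline_eff / map_eff

    # 0-D map look-up at the current operating point (illustrative values)
    pr_map, eff_map = 12.0, 0.85
    # Row-by-row meanline analysis at the same corrected speed and flow
    pr_1d, eff_1d = 11.6, 0.842

    s_pr, s_eff = zoom_scalars(pr_map, eff_map, pr_1d, eff_1d)
    pr_used = pr_map * s_pr      # the cycle now runs with the zoomed values
    eff_used = eff_map * s_eff
    print(round(pr_used, 3), round(eff_used, 3))
    ```

    In an actual zooming loop the scalars would be recomputed each time the cycle balance moves the operating point, so the 0-D simulation tracks the higher-order analysis rather than the original map.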

  14. NASA Lewis steady-state heat pipe code users manual

    NASA Technical Reports Server (NTRS)

    Tower, Leonard K.; Baker, Karl W.; Marks, Timothy S.

    1992-01-01

    The NASA Lewis heat pipe code was developed to predict the performance of heat pipes in the steady state. The code can be used as a design tool on a personal computer or, with a suitable calling routine, as a subroutine for a mainframe radiator code. A variety of wick structures, including a user input option, can be used. Heat pipes with multiple evaporators, condensers, and adiabatic sections in series and with wick structures that differ among sections can be modeled. Several working fluids can be chosen, including potassium, sodium, and lithium, for which monomer-dimer equilibrium is considered. The code incorporates a vapor flow algorithm that treats compressibility and axially varying heat input. This code facilitates the determination of heat pipe operating temperatures and heat pipe limits that may be encountered at the specified heat input and environment temperature. Data are input to the computer through a user-interactive input subroutine. Output, such as liquid and vapor pressures and temperatures, is printed at equally spaced axial positions along the pipe as determined by the user.
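
    The capillary limit is one of the operating limits such a code reports. Under common simplifying assumptions (horizontal pipe, uniform wick, Darcy liquid flow) it can be estimated as Q_cap = (2*sigma/r_c) * (rho_l*h_fg*K*A_w)/(mu_l*L_eff); a rough sketch with illustrative, order-of-magnitude property values, not drawn from the code's fluid library:

    ```python
    def capillary_limit(sigma, r_c, rho_l, h_fg, K, A_w, mu_l, L_eff):
        """Capillary heat-transport limit (W) for a horizontal heat pipe:
        maximum capillary pumping head 2*sigma/r_c balanced against the
        liquid pressure drop through the wick (Darcy flow)."""
        return (2.0 * sigma / r_c) * (rho_l * h_fg * K * A_w) / (mu_l * L_eff)

    # Illustrative values only (order of magnitude for a liquid-metal pipe)
    q_max = capillary_limit(
        sigma=0.15,      # surface tension, N/m
        r_c=50e-6,       # effective capillary pore radius, m
        rho_l=800.0,     # liquid density, kg/m^3
        h_fg=4.0e6,      # latent heat of vaporization, J/kg
        K=1e-10,         # wick permeability, m^2
        A_w=1e-4,        # wick cross-sectional area, m^2
        mu_l=2e-4,       # liquid viscosity, Pa*s
        L_eff=0.5,       # effective transport length, m
    )
    print(f"{q_max:.0f} W")
    ```

    A production code such as this one also checks sonic, entrainment, and boiling limits and reports whichever is most restrictive at the specified heat input and environment temperature.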

  15. NASA Lewis steady-state heat pipe code users manual

    NASA Astrophysics Data System (ADS)

    Tower, Leonard K.; Baker, Karl W.; Marks, Timothy S.

    1992-06-01

    The NASA Lewis heat pipe code was developed to predict the performance of heat pipes in the steady state. The code can be used as a design tool on a personal computer or, with a suitable calling routine, as a subroutine for a mainframe radiator code. A variety of wick structures, including a user input option, can be used. Heat pipes with multiple evaporators, condensers, and adiabatic sections in series and with wick structures that differ among sections can be modeled. Several working fluids can be chosen, including potassium, sodium, and lithium, for which monomer-dimer equilibrium is considered. The code incorporates a vapor flow algorithm that treats compressibility and axially varying heat input. This code facilitates the determination of heat pipe operating temperatures and heat pipe limits that may be encountered at the specified heat input and environment temperature. Data are input to the computer through a user-interactive input subroutine. Output, such as liquid and vapor pressures and temperatures, is printed at equally spaced axial positions along the pipe as determined by the user.

  16. Experimental Results from the Thermal Energy Storage-1 (TES-1) Flight Experiment

    NASA Technical Reports Server (NTRS)

    Wald, Lawrence W.; Tolbert, Carol; Jacqmin, David

    1995-01-01

    The Thermal Energy Storage-1 (TES-1) is a flight experiment that flew on the Space Shuttle Columbia (STS-62), in March 1994, as part of the OAST-2 mission. TES-1 is the first experiment in a four-experiment suite designed to provide data for understanding the long duration microgravity behavior of thermal energy storage fluoride salts that undergo repeated melting and freezing. Such data have never been obtained before and have direct application for the development of space-based solar dynamic (SD) power systems. These power systems will store solar energy in a thermal energy storage salt such as lithium fluoride or calcium fluoride. The stored energy is extracted during the shade portion of the orbit. This enables the solar dynamic power system to provide constant electrical power over the entire orbit. Analytical computer codes have been developed for predicting the performance of a space-based solar dynamic power system. Experimental verification of the analytical predictions is needed prior to using the analytical results for future space power design applications. The four TES flight experiments will be used to obtain the needed experimental data. This paper will focus on the flight results from the first experiment, TES-1, in comparison to the predicted results from the Thermal Energy Storage Simulation (TESSIM) analytical computer code. The TES-1 conceptual development, hardware design, final development, and system verification testing were accomplished at the NASA Lewis Research Center (LeRC). TES-1 was developed under the In-Space Technology Experiment Program (IN-STEP), which sponsors NASA, industry, and university flight experiments designed to enable and enhance space flight technology. The IN-STEP Program is sponsored by the Office of Space Access and Technology (OSAT).

  17. Inter-view prediction of intra mode decision for high-efficiency video coding-based multiview video coding

    NASA Astrophysics Data System (ADS)

    da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.

    2014-05-01

    Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction achieves relevant gains in encoding efficiency when compared to previous standards, but at the cost of a significant increase in computational complexity, since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity, targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with single-view and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture, and it exploits the relationship between prediction units (PUs) of neighboring depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
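
    The mode-subset idea can be illustrated with a small sketch: estimate the dominant texture direction from gradients and evaluate only the angular modes clustered around it, plus Planar and DC. The selection rule below is a hypothetical stand-in for the paper's texture analysis, not its actual algorithm:

    ```python
    import math

    def candidate_modes(grad_x, grad_y, width=4):
        """Pick a reduced subset of HEVC angular modes (2..34) around the
        dominant texture direction, plus Planar (0) and DC (1).
        Hypothetical selection rule for illustration only."""
        # Map the gradient direction onto the angular-mode index range.
        angle = math.degrees(math.atan2(grad_y, grad_x)) % 180.0
        dominant = 2 + round(angle / 180.0 * 32)  # index in 2..34
        low = max(2, dominant - width)
        high = min(34, dominant + width)
        return [0, 1] + list(range(low, high + 1))

    modes = candidate_modes(grad_x=1.0, grad_y=1.0)   # 45-degree texture
    print(len(modes), modes)
    ```

    Instead of the full 35-mode search, the rate-distortion decision here visits only 11 candidates; an inter-view step of the kind described would further seed this subset with the mode chosen for the co-located PU in the adjacent view.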

  18. Analysis of Phenix end-of-life natural convection test with the MARS-LMR code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, H. Y.; Ha, K. S.; Lee, K. L.

    The end-of-life test of the Phenix reactor performed by the CEA provided an opportunity to obtain reliable and valuable test data for the validation and verification of an SFR system analysis code. KAERI joined this international program for the analysis of the Phenix end-of-life natural circulation test, coordinated by the IAEA, in 2008. The main objectives of this study were to evaluate the capability of the existing SFR system analysis code MARS-LMR and to identify any limitations of the code. The analysis was performed in three stages: pre-test analysis, blind post-test analysis, and final post-test analysis. In the pre-test analysis, the design conditions provided by the CEA were used to obtain a prediction of the test. The blind post-test analysis was based on the test conditions measured during the tests, but the test results were not provided by the CEA. The final post-test analysis was performed to predict the test results as accurately as possible by improving the previous modeling of the test. Based on the pre-test analysis and blind test analysis, the modeling for heat structures in the hot pool and cold pool, steel structures in the core, heat loss from roof and vessel, and the flow path at core outlet were reinforced in the final analysis. The results of the final post-test analysis could be characterized into three different phases. In the early phase, the MARS-LMR simulated the heat-up process correctly due to the enhanced heat structure modeling. In the mid phase, before the opening of the SG casing, the code reproduced the decrease of core outlet temperature successfully. Finally, in the later phase, the increase in heat removal due to the opening of the SG casing was well predicted with the MARS-LMR code. (authors)

  19. Transonic Drag Prediction on a DLR-F6 Transport Configuration Using Unstructured Grid Solvers

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Frink, N. T.; Mavriplis, D. J.; Rausch, R. D.; Milholen, W. E.

    2004-01-01

    A second international AIAA Drag Prediction Workshop (DPW-II) was organized and held in Orlando, Florida on June 21-22, 2003. The primary purpose was to investigate the code-to-code uncertainty, address the sensitivity of the drag prediction to grid size, and quantify the uncertainty in predicting nacelle/pylon drag increments at a transonic cruise condition. This paper presents an in-depth analysis of the DPW-II computational results from three state-of-the-art unstructured grid Navier-Stokes flow solvers exercised on similar families of tetrahedral grids. The flow solvers are USM3D, a tetrahedral cell-centered upwind solver; FUN3D, a tetrahedral node-centered upwind solver; and NSU3D, a general element node-centered central-differenced solver. For the wing/body, the total drag predicted for a constant-lift transonic cruise condition showed a decrease in code-to-code variation with grid refinement as expected. For the same flight condition, the wing/body/nacelle/pylon total drag and the nacelle/pylon drag increment predicted showed an increase in code-to-code variation with grid refinement. Although the range in total drag for the wing/body fine grids was only 5 counts, a code-to-code comparison of surface pressures and surface restricted streamlines indicated that the three solvers were not all converging to the same flow solutions: different shock locations and separation patterns were evident. Similarly, the wing/body/nacelle/pylon solutions did not appear to be converging to the same flow solutions. Overall, grid refinement did not consistently improve the correlation with experimental data for either the wing/body or the wing/body/nacelle/pylon configuration. Although the absolute values of total drag predicted by two of the solvers for the medium and fine grids did not compare well with the experiment, the incremental drag predictions were within plus or minus 3 counts of the experimental data.
The correlation with experimental incremental drag was not significantly changed by specifying transition. Although the sources of code-to-code variation in force and moment predictions for the three unstructured grid codes have not yet been identified, the current study reinforces the necessity of applying multiple codes to the same application to assess uncertainty.
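
    Drag here is quoted in counts, where one count is a drag-coefficient change of 0.0001; a configuration increment is simply the difference of two drag coefficients expressed on that scale (illustrative values below, not the DPW-II data):

    ```python
    COUNT = 1.0e-4  # one drag count = 0.0001 in drag coefficient

    def to_counts(cd):
        """Convert a drag-coefficient value (or difference) to counts."""
        return cd / COUNT

    # Illustrative coefficients at the constant-lift cruise condition
    cd_wing_body = 0.0280
    cd_wb_nacelle_pylon = 0.0330

    increment = to_counts(cd_wb_nacelle_pylon - cd_wing_body)
    print(f"nacelle/pylon increment: {increment:.1f} counts")
    ```

    On this scale, a 5-count code-to-code spread corresponds to a difference of only 0.0005 in total drag coefficient, which is why increments often correlate with experiment better than absolute levels do.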

  20. Engine dynamic analysis with general nonlinear finite element codes

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1991-01-01

    A general engine dynamic analysis as a standard design study computational tool is described for the prediction and understanding of complex engine dynamic behavior. Improved definition of engine dynamic response provides valuable information and insights leading to reduced maintenance and overhaul costs on existing engine configurations. Application of advanced engine dynamic simulation methods provides a considerable cost reduction in the development of new engine designs by eliminating some of the trial and error process done with engine hardware development.

  1. Transonic low aspect ratio wing-winglet designs

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Cerney, Michael J.; Liaw, Paul

    1988-01-01

    A numerical design study has been conducted to ascertain the potential of winglets as a drag-reducing measure at high subsonic Mach numbers for low aspect ratio wings. The four variants of the winglet concept studied are a 'detuned' winglet with decreased incidence at the wing-winglet juncture; a steerable winglet; more gradual pressure recovery at the wing and winglet trailing edges; and the application of supercritical airfoil technology. A further study is conducted to assess the accuracy of the numerical code's predicted pressure drag values.

  2. Optimization and Performance Analysis of a Supersonic Conical-Flow Waverider for a Deck-Launched Intercept Mission

    DTIC Science & Technology

    1993-06-01

    radius and 20 minutes of combat followed by return to the carrier. A conical-flow waverider served as the starting point for the aircraft configuration. ... Design, test media, and test parameter selection were studied for planned low-speed wind and water tunnel tests, as well as performance predictions for the planned wind tunnel tests. Subject terms: waveriders, hypersonics, aircraft design.

  3. Transition Models for Engineering Calculations

    NASA Technical Reports Server (NTRS)

    Fraser, C. J.

    2007-01-01

    While future theoretical and conceptual developments may promote a better understanding of the physical processes involved in the latter stages of boundary layer transition, the designers of rotodynamic machinery and other fluid dynamic devices need effective transition models now. This presentation will therefore center on the development of some transition models which have been developed as design aids to improve the prediction codes used in the performance evaluation of gas turbine blading. All models are based on Narasimha's concentrated breakdown and spot growth.

  4. Predictive and Neural Predictive Control of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.

    2000-01-01

    Accomplishments and future work are: (1) Stability analysis: the work completed includes characterization of stability of receding horizon-based MPC in the setting of the LQ paradigm. The current work-in-progress includes analyzing local as well as global stability of the closed-loop system under various nonlinearities; for example, actuator nonlinearities, sensor nonlinearities, and other plant nonlinearities. Actuator nonlinearities include three major types: saturation, dead-zone, and (0, ∞) sector. (2) Robustness analysis: It is shown that receding horizon parameters such as input and output horizon lengths have a direct effect on the robustness of the system. (3) Code development: A MATLAB code has been developed which can simulate various MPC formulations. The current effort is to generalize the code to include the ability to handle all plant types and all MPC types. (4) Improved predictor: It is shown that MPC design can be improved by using better predictors that minimize prediction errors. It is shown analytically and numerically that the Smith predictor can provide closed-loop stability under GPC operation for plants with dead times where the standard optimal predictor fails. (5) Neural network predictors: When a neural network is used as predictor, it can be shown that the neural network predicts the plant output within some finite error bound under certain conditions. Our preliminary study shows that with proper choice of update laws and network architectures such a bound can be obtained. However, much work needs to be done to obtain a similar result in the general case.
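
    The Smith-predictor idea can be illustrated with a minimal discrete-time sketch: a first-order plant with input dead time, a perfect internal model, and proportional control acting on the delay-free model output plus a model-mismatch correction. This is a generic textbook construction under assumed plant parameters, not the code developed in the project:

    ```python
    # First-order plant y[k+1] = a*y[k] + b*u[k-d] with dead time d samples
    a, b, d = 0.9, 0.1, 5
    Kp, r = 2.0, 1.0               # proportional gain, setpoint

    y = 0.0                        # real plant output
    ym = 0.0                       # internal model output without delay
    buf = [0.0] * (d + 1)          # plant input delay line
    buf_m = [0.0] * (d + 1)        # delayed model output for the correction

    for _ in range(400):
        ym_delayed = buf_m[0]
        # Smith predictor: delay-free model output + mismatch correction
        y_pred = ym + (y - ym_delayed)
        u = Kp * (r - y_pred)
        # Advance the plant (sees u delayed by the dead time) and the model
        y = a * y + b * buf[0]
        ym = a * ym + b * u
        buf = buf[1:] + [u]
        buf_m = buf_m[1:] + [ym]

    # Loop is stable despite the dead time; pure P control settles with the
    # usual offset at Kp/(1+Kp) of the setpoint (about 0.667 here).
    print(round(y, 3))
    ```

    A plain proportional loop with this gain and delay would be far less well behaved; the predictor effectively removes the dead time from the feedback path, which is the property the report exploits for GPC with dead-time plants.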

  5. Aerodynamic heating environment definition/thermal protection system selection for the HL-20

    NASA Astrophysics Data System (ADS)

    Wurster, K. E.; Stone, H. W.

    1993-09-01

    Definition of the aerothermal environment is critical to any vehicle such as the HL-20 Personnel Launch System that operates within the hypersonic flight regime. Selection of an appropriate thermal protection system design is highly dependent on the accuracy of the heating-environment prediction. It is demonstrated that the entry environment determines the thermal protection system design for this vehicle. The methods used to predict the thermal environment for the HL-20 Personnel Launch System vehicle are described. Comparisons of the engineering solutions with computational fluid dynamic predictions, as well as wind-tunnel test results, show good agreement. The aeroheating predictions over several critical regions of the vehicle, including the stagnation areas of the nose and leading edges, windward centerline and wing surfaces, and leeward surfaces, are discussed. Results of predictions based on the engineering methods found within the MINIVER aerodynamic heating code are used in conjunction with the results of the extensive wind-tunnel tests on this configuration to define a flight thermal environment. Finally, the selection of the thermal protection system based on these predictions and current technology is described.

  6. The DoE method as an efficient tool for modeling the behavior of monocrystalline Si-PV module

    NASA Astrophysics Data System (ADS)

    Kessaissia, Fatma Zohra; Zegaoui, Abdallah; Boutoubat, Mohamed; Allouache, Hadj; Aillerie, Michel; Charles, Jean-Pierre

    2018-05-01

    The objective of this paper is to apply the Design of Experiments (DoE) method to study and obtain a predictive model of any marketed monocrystalline photovoltaic (mc-PV) module. This technique yields a mathematical model that represents the predicted responses depending upon input factors and experimental data. Therefore, the DoE model for characterization and modeling of mc-PV module behavior can be obtained by performing just a set of experimental trials. The DoE model of the mc-PV panel evaluates the predictive maximum power as a function of irradiation and temperature in a bounded domain of study for the inputs. For the mc-PV panel, predictive models at both one and two levels were developed, taking into account the influences of both the main effects and the interaction effects of the considered factors. The DoE method is then implemented in a code developed under MATLAB software. The code allows us to simulate, characterize, and validate the predictive model of the mc-PV panel. The calculated results were compared to the experimental data, errors were estimated, and an accurate validation of the predictive models was evaluated by the response surface. Finally, we conclude that the predictive models reproduce the experimental trials with good accuracy.
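
    A two-level full factorial fit of the kind described has the standard form y = b0 + b1*x1 + b2*x2 + b12*x1*x2 with the factors coded to ±1, and the coefficients reduce to signed averages of the corner responses. A minimal sketch with made-up power readings, not the paper's measurements:

    ```python
    def doe_2x2(y):
        """Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 from a full 2^2 design.
        y holds the responses at the coded corners in Yates order:
        (-1,-1), (+1,-1), (-1,+1), (+1,+1)."""
        y1, y2, y3, y4 = y
        b0  = (y1 + y2 + y3 + y4) / 4.0   # grand mean
        b1  = (-y1 + y2 - y3 + y4) / 4.0  # main effect of x1 (irradiance)
        b2  = (-y1 - y2 + y3 + y4) / 4.0  # main effect of x2 (temperature)
        b12 = ( y1 - y2 - y3 + y4) / 4.0  # interaction effect
        return b0, b1, b2, b12

    def predict(coef, x1, x2):
        b0, b1, b2, b12 = coef
        return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

    # Hypothetical maximum-power readings (W) at the four corner trials
    coef = doe_2x2([40.0, 70.0, 36.0, 62.0])
    print([round(c, 2) for c in coef], round(predict(coef, 1, -1), 2))
    ```

    With four coefficients and four corners the model reproduces the trials exactly; the validation step described above then checks the fitted response surface against additional measurements inside the domain.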

  7. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  8. MILCOM '85 - Military Communications Conference, Boston, MA, October 20-23, 1985, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference on the development status of communications systems in the context of electronic warfare gives attention to topics in spread spectrum code acquisition, digital speech technology, fiber-optics communications, free space optical communications, the networking of HF systems, and applications and evaluation methods for digital speech. Also treated are issues in local area network system design, coding techniques and applications, technology applications for HF systems, receiver technologies, software development status, channel simulation/prediction methods, C3 networking, spread spectrum networks, the improvement of communication efficiency and reliability through technical control methods, mobile radio systems, and adaptive antenna arrays. Finally, communications system cost analyses, spread spectrum performance, voice and image coding, switched networks, and microwave GaAs ICs are considered.

  9. Boundary-Layer Stability Analysis of the Mean Flows Obtained Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liao, Wei; Malik, Mujeeb R.; Lee-Rausch, Elizabeth M.; Li, Fei; Nielsen, Eric J.; Buning, Pieter G.; Chang, Chau-Lyan; Choudhari, Meelan M.

    2012-01-01

    Boundary-layer stability analyses of mean flows extracted from unstructured-grid Navier-Stokes solutions have been performed. A procedure has been developed to extract mean flow profiles from the FUN3D unstructured-grid solutions. Extensive code-to-code validations have been performed by comparing the extracted mean flows as well as the corresponding stability characteristics to the predictions based on structured-grid solutions. Comparisons are made on a range of problems from a simple flat plate to a full aircraft configuration: a modified Gulfstream-III with a natural laminar flow glove. The future aim of the project is to extend the adjoint-based design capability in FUN3D to include natural laminar flow and laminar flow control by integrating it with boundary-layer stability analysis codes, such as LASTRAC.

  10. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blyth, Taylor S.; Avramova, Maria

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  11. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    NASA Astrophysics Data System (ADS)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
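
    Of the four informed models, the spacer grid pressure loss model is the simplest to illustrate: a form-loss relation dP = K * rho * v^2 / 2, where the loss coefficient K would be derived from the CFD results. A sketch with hypothetical subchannel conditions:

    ```python
    def grid_pressure_loss(K, rho, v):
        """Spacer-grid form loss dP = K * rho * v^2 / 2, in Pa.
        K is the (CFD-informed) loss coefficient, rho the coolant
        density in kg/m^3, v the axial velocity in m/s."""
        return K * rho * v * v / 2.0

    # Illustrative PWR-like subchannel conditions (hypothetical values)
    dp = grid_pressure_loss(K=1.1, rho=720.0, v=4.5)
    print(f"{dp:.1f} Pa per grid")
    ```

    In the subchannel code this local loss is added to the axial momentum equation at each grid elevation, so improving K directly improves the predicted axial pressure distribution.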

  12. Emotional Availability Scale Among Three U.S. Race/Ethnic Groups.

    PubMed

    Derscheid, Della J; Fogg, Louis F; Julion, Wrenetha; Johnson, Mary E; Tucker, Sharon; Delaney, Kathleen R

    2018-05-01

    This study used a cross-sectional design to conduct a subgroup psychometric analysis of the Emotional Availability Scale among matched Hispanic ( n = 20), African American ( n = 20), and European American ( n = 10) English-speaking mother-child dyads in the United States. Differences by race/ethnicity were tested ( p < .05) among (a) Emotional Availability Scale dimensions with ANOVA, and (b) relationships of Emotional Availability Scale dimensions with select Dyadic Parent-Child Interaction Coding System variables with Pearson correlation and matched moderated regression. Internal consistency was .950 (Cronbach's α; N = 50). No significant differences in the six Emotional Availability Scale dimension scores by race/ethnicity emerged. Two Dyadic Parent-Child Interaction Coding System behaviors predicted two Emotional Availability Scale dimensions each for Hispanic and African American mother-child dyads. Results suggest emotional availability similarity among race/ethnic subgroups with few predictive differences of emotional availability dimensions by specific behaviors for Hispanic and African American subgroups.
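
    The reported internal consistency statistic is Cronbach's alpha, alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score) over k items. A minimal sketch with toy scores, not the study's data:

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha from per-item score lists (one list per item,
        one entry per respondent, all the same length)."""
        k = len(items)
        n = len(items[0])

        def var(xs):  # population variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

    # Toy 3-item, 4-respondent example (made-up scores)
    alpha = cronbach_alpha([[3, 4, 4, 2],
                            [3, 5, 4, 2],
                            [2, 4, 5, 1]])
    print(round(alpha, 3))
    ```

    Values near the study's .950 indicate that the items covary strongly relative to their individual noise, i.e. they appear to measure a single underlying construct.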

  13. Numerical Prediction of SERN Performance using WIND code

    NASA Technical Reports Server (NTRS)

    Engblom, W. A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of single-expansion ramp nozzles (SERNs) during overexpanded operation and transonic flight. Three-dimensional Reynolds-Averaged Navier Stokes (RANS) results are obtained for two vehicle configurations, including the NASP Model 5B and ISTAR RBCC (a variant of X-43B) using the WIND code. Numerical predictions for nozzle integrated forces and pitch moments are directly compared to experimental data for the NASP Model 5B, and adequate-to-excellent agreement is found. The sensitivity of SERN performance and separation phenomena to freestream static pressure and Mach number is demonstrated via a matrix of cases for both vehicles. 3-D separation regions are shown to be induced by either lateral (e.g., sidewall) shocks or vertical (e.g., cowl trailing edge) shocks. Finally, the implications of this work to future preliminary design efforts involving SERNs are discussed.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redi, M.H.; Mynick, H.E.; Suewattana, M.

    Hamiltonian coordinate, guiding-center code calculations of the confinement of suprathermal ions in quasi-axisymmetric stellarator (QAS) designs have been carried out to evaluate the attractiveness of compact configurations which are optimized for ballooning stability. A new stellarator particle-following code is used to predict ion loss rates and particle confinement for thermal and neutral beam ions in a small experiment with R = 145 cm, B = 1-2 T and for alpha particles in a reactor-size device. In contrast to tokamaks, it is found that high edge poloidal flux has limited value in improving ion confinement in QAS, since collisional pitch-angle scattering drives ions into ripple wells and stochastic field regions, where they are quickly lost. The necessity for reduced stellarator ripple fields is emphasized. The high neutral beam ion loss predicted for these configurations suggests that more interesting physics could be explored with an experiment of less constrained size and magnetic field geometry.

  15. Aerodynamic heating on AFE due to nonequilibrium flow with variable entropy at boundary layer edge

    NASA Technical Reports Server (NTRS)

    Ting, P. C.; Rochelle, W. C.; Bouslog, S. A.; Tam, L. T.; Scott, C. D.; Curry, D. M.

    1991-01-01

    A method of predicting the aerobrake aerothermodynamic environment on the NASA Aeroassist Flight Experiment (AFE) vehicle is described. Results of a three-dimensional inviscid nonequilibrium solution are used as input to an axisymmetric nonequilibrium boundary layer program to predict AFE convective heating rates. Inviscid flow field properties are obtained from the Euler option of the Viscous Reacting Flow (VRFLO) code at the boundary layer edge. Heating rates on the AFE surface are generated with the Boundary Layer Integral Matrix Procedure (BLIMP) code for a partially catalytic surface composed of Reusable Surface Insulation (RSI) tiles. The 1864 kg AFE will fly an aerobraking trajectory, simulating return from geosynchronous Earth orbit, with a 75 km perigee and a 10 km/sec entry velocity. Results of this analysis will provide principal investigators and thermal analysts with aeroheating environments to perform experiment and thermal protection system design.

  16. Modeling, Measurements, and Fundamental Database Development for Nonequilibrium Hypersonic Aerothermodynamics

    NASA Technical Reports Server (NTRS)

    Bose, Deepak

    2012-01-01

    The design of entry vehicles requires predictions of the aerothermal environment during the hypersonic phase of their flight trajectories. These predictions are made using computational fluid dynamics (CFD) codes that often rely on physics and chemistry models of nonequilibrium processes. The primary processes of interest are gas-phase chemistry, internal energy relaxation, electronic excitation, nonequilibrium emission and absorption of radiation, and gas-surface interaction leading to surface recession and catalytic recombination. NASA's Hypersonics Project is advancing the state of the art in modeling of nonequilibrium phenomena by making detailed spectroscopic measurements in shock tubes and arcjets, using ab initio quantum mechanical techniques to develop fundamental chemistry and spectroscopic databases, making fundamental measurements of finite-rate gas-surface interactions, and implementing detailed mechanisms in state-of-the-art CFD codes. The development of new models is based on validation with relevant experiments. We will present the latest developments and a roadmap for the technical areas mentioned above.

  17. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
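    The multistage residual structure that this method builds on can be illustrated with a minimal two-stage residual VQ sketch in Python. The codebooks and vectors below are hypothetical, and the paper's conditional entropy-coding stage is not modeled; this only shows how each stage quantizes the residual left by the previous one.

    ```python
    def nearest(codebook, vec):
        """Index of the codeword closest to vec in squared Euclidean distance."""
        return min(range(len(codebook)),
                   key=lambda i: sum((c - v) ** 2 for c, v in zip(codebook[i], vec)))

    def rvq_encode(stages, vec):
        """Multistage residual VQ: each stage quantizes the residual left by
        the previous stages and emits one codeword index."""
        indices, residual = [], list(vec)
        for codebook in stages:
            i = nearest(codebook, residual)
            indices.append(i)
            residual = [r - c for r, c in zip(residual, codebook[i])]
        return indices

    def rvq_decode(stages, indices):
        """Reconstruction is the sum of the selected codewords across stages."""
        out = [0.0] * len(stages[0][0])
        for codebook, i in zip(stages, indices):
            out = [o + c for o, c in zip(out, codebook[i])]
        return out
    ```

    With a coarse first-stage codebook and a fine second-stage codebook, the first index captures the gross vector and the second refines it; an entropy coder conditioned on neighboring indices would then compress the index stream.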

  18. Numerical investigation of heat transfer on film-cooled turbine blades.

    PubMed

    Ginibre, P; Lefebvre, M; Liamis, N

    2001-05-01

    The accurate heat transfer prediction of film-cooled blades is a key issue for aerothermal turbine design. For this purpose, advanced numerical methods have been developed at Snecma Moteurs. The goal of this paper is the assessment of a three-dimensional Navier-Stokes solver, based on the ONERA CANARI-COMET code, devoted to steady aerothermal computations of film-cooled blades. The code uses a multidomain approach to discretize the blade-to-blade channel, with overlapping structured meshes for the injection holes. The turbulence closure is done by means of either the Michel mixing-length model or the Spalart-Allmaras one-equation transport model. Computations of thin 3D slices of three film-cooled nozzle guide vane blades with multiple injections are performed. Aerothermal predictions are compared to experiments carried out by the von Karman Institute. The behavior of the turbulence models is discussed, and velocity and temperature injection profiles are investigated.

  19. NASA Lewis Stirling SPRE testing and analysis with reduced number of cooler tubes

    NASA Technical Reports Server (NTRS)

    Wong, Wayne A.; Cairelli, James E.; Swec, Diane M.; Doeberling, Thomas J.; Lakatos, Thomas F.; Madi, Frank J.

    1992-01-01

    Free-piston Stirling power converters are candidates for high capacity space power applications. The Space Power Research Engine (SPRE), a free-piston Stirling engine coupled with a linear alternator, is being tested at the NASA Lewis Research Center in support of the Civil Space Technology Initiative. The SPRE is used as a test bed for evaluating converter modifications which have the potential to improve the converter performance and for validating computer code predictions. Reducing the number of cooler tubes on the SPRE has been identified as a modification with the potential to significantly improve power and efficiency. Experimental tests designed to investigate the effects of reducing the number of cooler tubes on converter power, efficiency and dynamics are described. Presented are test results from the converter operating with a reduced number of cooler tubes and comparisons between this data and both baseline test data and computer code predictions.

  20. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high-speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code; considerable increases in accuracy and speed of execution are observed. Some examples of noise predictions for a high-speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  1. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  2. A digital communications system for manned spaceflight applications.

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Moorehead, R. W.

    1973-01-01

    A highly efficient, all-digital communications signal design employing convolutional coding and PN spectrum spreading is described for two-way transmission of voice and data between a manned spacecraft and the ground. Variable-slope delta modulation is selected for analog/digital conversion of the voice signal, and a convolutional decoder utilizing the Viterbi decoding algorithm is selected for use at each receiving terminal. A PN spread spectrum technique is implemented to protect against multipath effects and to reduce the energy density (per unit bandwidth) impinging on the earth's surface to a value within the guidelines adopted by international agreement. Performance predictions are presented for transmission via a TDRS (tracking and data relay satellite) system and for direct transmission between the spacecraft and earth. Hardware estimates are provided for a flight-qualified communications system employing the coded digital signal design.
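    The variable-slope delta modulation step can be sketched generically: one bit per sample, with a step size that grows when consecutive bits agree (slope overload) and shrinks when they alternate (granular noise). This is a textbook CVSD-style sketch; the step sizes and adaptation factor are illustrative assumptions, not the flight system's parameters.

    ```python
    def cvsd_encode(samples, delta_min=0.01, delta_max=0.5, k=1.5):
        """Continuously-variable-slope delta encoder sketch: emit 1 if the
        input is above the running estimate, else 0, adapting the step from
        the last two bits."""
        bits, estimate, step = [], 0.0, delta_min
        history = [0, 1]  # seed with disagreement
        for x in samples:
            bit = 1 if x > estimate else 0
            bits.append(bit)
            history = [history[-1], bit]
            if history[0] == history[1]:          # consecutive identical bits
                step = min(step * k, delta_max)   # steepen to track the slope
            else:
                step = max(step / k, delta_min)   # flatten for fine detail
            estimate += step if bit else -step
        return bits

    def cvsd_decode(bits, delta_min=0.01, delta_max=0.5, k=1.5):
        """Mirror of the encoder: rebuild the estimate from the bit stream
        using the identical step-adaptation rule."""
        out, estimate, step = [], 0.0, delta_min
        history = [0, 1]
        for bit in bits:
            history = [history[-1], bit]
            if history[0] == history[1]:
                step = min(step * k, delta_max)
            else:
                step = max(step / k, delta_min)
            estimate += step if bit else -step
            out.append(estimate)
        return out
    ```

    Because encoder and decoder share the adaptation rule, the decoder reconstructs the encoder's internal estimate exactly from the bit stream alone, which is what makes one-bit-per-sample voice coding workable.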

  3. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  4. Research on Novel High-Power Microwave/Millimeter Wave Sources and Applications

    DTIC Science & Technology

    2010-08-28

    density with acceptable operating temperature and lifetime. The MIG is optimized with the EGUN code for a cathode voltage Vb of 100 kV and a beam...emission suppression. Figure 2 is an EGUN drawing of the MIG configuration/dimensions and electron trajectories. The design is flexible. Table I compares predicted (EGUN, smooth cathode) and measured MIG parameters: voltage 100.0 kV vs. 100.0 kV; current 8.0 A vs. 8.0 A; velocity ratio 1.40 vs. 1.40; vz/vz0 spread 3.5% vs. 4.6%

  5. Air Vehicle Integration and Technology Research (AVIATR). Task Order 0023: Predictive Capability for Hypersonic Structural Response and Life Prediction: Phase 2 - Detailed Design of Hypersonic Cruise Vehicle Hot-Structure

    DTIC Science & Technology

    2012-02-01

    Approved for public release; distribution unlimited. I-DEAS/TMG: thermal analysis software; IR: Initial Review; ITAR: International Traffic in Arms...the finite element code I-DEAS/TMG. A mesh refinement study was conducted on the first panel to determine the mesh density required to accurately... • heat transfer analysis conducted with I-DEAS/TMG exercises mapping of temperatures to

  6. The Design of PSB-VVER Experiments Relevant to Accident Management

    NASA Astrophysics Data System (ADS)

    Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander

    Experimental programs carried out in integral test facilities are relevant for validating the best-estimate thermal-hydraulic codes(1) which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing measured and calculated parameters and determining whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, EC financed (3). The paper describes the methodology adopted at the University of Pisa, from the scenarios foreseen in the final test matrix through the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of thermal losses to the environment. This issue particularly affects small-scale facilities and bears on the scaling approach related to the power and volume of the facility.

  7. CELFE/NASTRAN Code for the Analysis of Structures Subjected to High Velocity Impact

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1978-01-01

    The CELFE (Coupled Eulerian Lagrangian Finite Element)/NASTRAN code is a three-dimensional finite element code with the capability of analyzing structures subjected to high velocity impact. The local response is predicted by CELFE and, for large problems, the far-field impact response is predicted by NASTRAN. The coupling of the CELFE code with NASTRAN (the CELFE/NASTRAN code) and the application of the code to selected three-dimensional high velocity impact problems are described.

  8. Indexing sensory plasticity: Evidence for distinct Predictive Coding and Hebbian learning mechanisms in the cerebral cortex.

    PubMed

    Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D

    2018-04-30

    The Roving Mismatch Negativity (MMN), and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Modeling of short fiber reinforced injection moulded composite

    NASA Astrophysics Data System (ADS)

    Kulkarni, A.; Aswini, N.; Dandekar, C. R.; Makhe, S.

    2012-09-01

    A micromechanics-based finite element model (FEM) is developed to facilitate the design of a new production-quality fiber reinforced plastic injection molded part. The composite part under study is composed of a polyetheretherketone (PEEK) matrix reinforced with 30% by volume fraction of short carbon fibers. The constitutive material models are obtained using micromechanics-based homogenization theories. The analysis is carried out by successfully coupling two commercial codes, Moldflow and ANSYS. Moldflow is used to predict the fiber orientation by considering the flow kinetics and molding parameters. Material models are input into ANSYS according to the predicted fiber orientation, and the structural analysis is carried out. Thus, in the present approach, a coupling between the two commercial codes Moldflow and ANSYS has been established to enable the analysis of short fiber reinforced injection moulded composite parts. The load-deflection curve is obtained based on three constitutive material models, namely isotropy, transverse isotropy, and orthotropy. Average values of the predicted quantities are compared to experimental results, showing good correlation. In this manner, the coupled Moldflow-ANSYS model successfully predicts the load-deflection curve of a composite injection molded part.

  10. Predictive Coding or Evidence Accumulation? False Inference and Neuronal Fluctuations

    PubMed Central

    Friston, Karl J.; Kleinschmidt, Andreas

    2010-01-01

    Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se. PMID:20369004
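    The claim that sensory activity reflects the precision of prediction errors can be illustrated with a one-step Gaussian belief update, in which the prediction error's influence is scaled by the relative precision of the sensory channel. This is a textbook predictive-coding sketch, not the authors' fMRI model; variable names are hypothetical.

    ```python
    def precision_weighted_update(mu_prior, observation, pi_prior, pi_sensory):
        """One-step Gaussian belief update: the prediction error
        (observation - prior mean) is weighted by the sensory precision
        relative to total precision, so a more precise sensory channel
        pulls the percept harder toward the data."""
        gain = pi_sensory / (pi_prior + pi_sensory)
        return mu_prior + gain * (observation - mu_prior)
    ```

    With equal precisions the update moves halfway to the observation; boosting sensory precision (as higher prestimulus activity is suggested to do) moves the posterior further toward the evidence, biasing the decision toward correct inference rather than toward any particular percept.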

  11. GI-POP: a combinational annotation and genomic island prediction pipeline for ongoing microbial genome projects.

    PubMed

    Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi

    2013-04-10

    Sequencing of microbial genomes is important because microbes carry antibiotic and pathogenic activities. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task. In most bacteria, pathogenic or antibiotic genes are carried in genomic islands. Therefore, a quick genomic island (GI) prediction method is useful for genomes that are still being sequenced. In this work, we built a Web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembly tool, a functional annotation pipeline, and a high-performance GI prediction module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). The draft genomes of ongoing genome projects, in contigs or scaffolds, can be submitted to our Web server, which provides functional annotation and highly probable GI predictions. GI-POP is a comprehensive annotation Web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences, and functional analysis, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project. Copyright © 2012 Elsevier B.V. All rights reserved.
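    Genomic islands are often flagged by atypical sequence composition relative to the host genome. The sketch below is a deliberately crude stand-in for that idea, scanning for windows whose GC content deviates strongly from the genome-wide mean; GI-GPS itself uses an SVM over richer genomic profiles, and the window and cutoff values here are arbitrary assumptions.

    ```python
    def gc_content(seq):
        """Fraction of G and C bases in a sequence."""
        return sum(1 for b in seq if b in "GC") / len(seq)

    def flag_atypical_windows(genome, window=5000, step=1000, z_cutoff=2.0):
        """Slide a window along the genome and flag window start positions
        whose GC content deviates from the genome-wide mean by more than
        z_cutoff standard deviations -- a crude proxy for horizontally
        acquired regions such as genomic islands."""
        gcs = [gc_content(genome[i:i + window])
               for i in range(0, len(genome) - window + 1, step)]
        mean = sum(gcs) / len(gcs)
        sd = (sum((g - mean) ** 2 for g in gcs) / len(gcs)) ** 0.5
        sd = max(sd, 1e-12)  # guard against a perfectly uniform genome
        return [i * step for i, g in enumerate(gcs)
                if abs(g - mean) / sd > z_cutoff]
    ```

    On a synthetic AT-rich genome with one GC-rich insert, only the windows overlapping the insert are flagged; a real pipeline would combine several such compositional signals and learned features.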

  12. Xpatch prediction improvements to support multiple ATR applications

    NASA Astrophysics Data System (ADS)

    Andersh, Dennis J.; Lee, Shung W.; Moore, John T.; Sullivan, Douglas P.; Hughes, Jeff A.; Ling, Hao

    1998-08-01

    This paper describes an electromagnetic computer prediction code for generating radar cross section (RCS), time-domain signatures, and synthetic aperture radar (SAR) images of realistic 3D vehicles. The vehicle, typically an airplane or a ground vehicle, is represented by a computer-aided design (CAD) file with triangular facets, IGES curved surfaces, or solid geometries. The computer code, Xpatch, based on the shooting-and-bouncing-ray technique, is used to calculate the polarimetric radar return from the vehicles represented by these different CAD files. Xpatch computes the first-bounce physical optics (PO) plus physical theory of diffraction (PTD) contributions, and calculates the multi-bounce ray contributions using geometric optics and PO for complex vehicles with materials. It has been found that without the multi-bounce contributions, the radar return is typically 10 to 15 dB too low. Examples of predicted range profiles, SAR imagery, and RCS for several different geometries are compared with measured data to demonstrate the quality of the predictions. Recent enhancements to Xpatch include improvements for millimeter-wave applications, hybridization with the finite element method for small geometric features, and support for additional IGES entities, including trimmed and untrimmed surfaces.

  13. Correlation approach to identify coding regions in DNA sequences

    NASA Technical Reports Server (NTRS)

    Ossadnik, S. M.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1994-01-01

    Recently, it was observed that noncoding regions of DNA sequences possess long-range power-law correlations, whereas coding regions typically display only short-range correlations. We develop an algorithm based on this finding that enables investigators to perform a statistical analysis on long DNA sequences to locate possible coding regions. The algorithm is particularly successful in predicting the location of lengthy coding regions. For example, for the complete genome of yeast chromosome III (315,344 nucleotides), at least 82% of the predictions correspond to putative coding regions; the algorithm correctly identified all coding regions larger than 3000 nucleotides, 92% of coding regions between 2000 and 3000 nucleotides long, and 79% of coding regions between 1000 and 2000 nucleotides. The predictive ability of this new algorithm supports the claim that there is a fundamental difference in the correlation property between coding and noncoding sequences. This algorithm, which is not species-dependent, can be implemented with other techniques for rapidly and accurately locating relatively long coding regions in genomic sequences.
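    The correlation measure underlying this approach can be sketched as a purine-pyrimidine walk whose fluctuation exponent separates the two regimes: alpha near 0.5 indicates short-range (coding-like) correlations, while alpha above 0.5 indicates long-range (noncoding-like) correlations. This is a simplified version of the published algorithm, with illustrative window sizes.

    ```python
    import math

    def dna_walk(seq):
        """Map a DNA string to a one-dimensional walk: purines (A, G)
        step +1, pyrimidines (C, T) step -1; return cumulative positions."""
        y, pos = [], 0
        for base in seq:
            pos += 1 if base in "AG" else -1
            y.append(pos)
        return y

    def fluctuation(y, window):
        """Root-mean-square fluctuation of walk displacements over
        non-overlapping windows of the given length."""
        disp = [y[i + window] - y[i] for i in range(0, len(y) - window, window)]
        mean = sum(disp) / len(disp)
        var = sum((d - mean) ** 2 for d in disp) / len(disp)
        return math.sqrt(var)

    def scaling_exponent(seq, l1=16, l2=256):
        """Estimate alpha in F(l) ~ l**alpha from two window sizes:
        alpha ~ 0.5 for an uncorrelated sequence, alpha > 0.5 for a
        long-range correlated one."""
        y = dna_walk(seq)
        f1, f2 = fluctuation(y, l1), fluctuation(y, l2)
        return math.log(f2 / f1) / math.log(l2 / l1)
    ```

    Sliding this exponent estimate along a long genomic sequence and thresholding it near 0.5 is the essence of the coding-region locator described above.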

  14. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code prediction and the experimental data over a wide range of engine operating conditions.

  16. Designing and Testing a Blended Wing Body with Boundary Layer Ingestion Nacelles

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Campbell, Richard L.; Pendergraft, Odis C.; Underwood, Pamela J.; Friedman, Douglas M.; Serrano, Leonel

    2006-01-01

    A knowledge-based aerodynamic design method coupled with an unstructured grid Navier-Stokes flow solver was used to improve the propulsion/airframe integration for a Blended Wing Body with boundary-layer ingestion nacelles. A new zonal design capability was used that significantly reduced the time required to achieve a successful design for each nacelle and the elevon between them. A wind tunnel model was built with interchangeable parts reflecting the baseline and redesigned configurations and tested in the National Transonic Facility (NTF). Most of the testing was done at the cruise design conditions (Mach number = 0.85, Reynolds number = 75 million). In general, the predicted improvements in forces and moments as well as the changes in wing pressures between the baseline and redesign were confirmed by the wind tunnel results. The effectiveness of elevons between the nacelles was also predicted surprisingly well considering the crudeness in the modeling of the control surfaces in the flow code.

  17. Genetic circuit design automation.

    PubMed

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

    Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. Copyright © 2016, American Association for the Advancement of Science.
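    Cello characterizes each genetic gate by a repressor response function and composes gates by feeding one gate's output promoter activity into the next. A minimal sketch of that idea is a genetic NOR gate built from a falling Hill function; the parameter values below are hypothetical, not measured gate parameters.

    ```python
    def hill_response(x, ymin, ymax, K, n):
        """Repressor response: output promoter activity as a falling Hill
        function of the input activity x."""
        return ymin + (ymax - ymin) * K**n / (K**n + x**n)

    def nor_gate(a, b, params):
        """A genetic NOR gate: both input promoters drive the same
        repressor, so the response is evaluated at their summed activity.
        Output is high only when both inputs are low."""
        return hill_response(a + b, *params)
    ```

    Because NOR is functionally complete, circuits compiled from Verilog can in principle be realized entirely from such gates, provided each gate's response range is matched to its downstream gate's threshold.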

  18. A computational study of coherent structures in the wakes of two-dimensional bluff bodies

    NASA Astrophysics Data System (ADS)

    Pearce, Jeffrey Alan

    1988-08-01

    The periodic shedding of vortices from bluff bodies was first recognized in the late 1800s. Currently, there is great interest concerning the effect of vortex shedding on structures and on vehicle stability. In the design of bluff structures which will be exposed to a flow, knowledge of the shedding frequency and the amplitude of the aerodynamic forces is critical. The ability to computationally predict parameters associated with periodic vortex shedding is thus a valuable tool. In this study, the periodic shedding of vortices from several bluff body geometries is predicted. The study is conducted with a two-dimensional finite-difference code employed on various grid sizes. The effects of the grid size and time step on the accuracy of the solution are addressed. Strouhal numbers and aerodynamic force coefficients are computed for all of the bodies considered and compared with previous experimental results. Results indicate that the finite-difference code is capable of predicting periodic vortex shedding for all of the geometries tested. Refinement of the finite-difference grid was found to give little improvement in the prediction; however, the choice of time step size was shown to be critical. Predictions of Strouhal numbers were generally accurate, and the calculated aerodynamic forces generally exhibited behavior consistent with previous studies.
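    The Strouhal-number comparison described above is simple dimensional bookkeeping, St = f·D/U. A small sketch (St ≈ 0.2 is the textbook value for a circular cylinder at subcritical Reynolds numbers, not a result from this study):

```python
# Strouhal-number bookkeeping used when comparing predicted shedding
# frequencies with experiment: St = f * D / U.
def strouhal(f_shed_hz, length_m, velocity_m_s):
    return f_shed_hz * length_m / velocity_m_s

def shedding_frequency(st, length_m, velocity_m_s):
    return st * velocity_m_s / length_m

# Circular cylinder, D = 5 cm, U = 10 m/s, St ≈ 0.2 (textbook value):
f = shedding_frequency(0.2, 0.05, 10.0)   # 40 Hz
print(f, strouhal(f, 0.05, 10.0))
```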

  19. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
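    The familiarity-updating idea can be sketched with a Rescorla-Wagner-style delta rule; this is an assumed simplification for illustration, not the authors' exact model, parameters, or task structure:

```python
# A minimal prediction-error learner in the spirit of the paper's model
# (assumed form): familiarity F is updated by a scaled prediction error
# after each exposure to the face, F <- F + alpha * (outcome - F).
def learn_familiarity(n_exposures, alpha=0.3):
    f = 0.0
    trace = []
    for _ in range(n_exposures):
        prediction_error = 1.0 - f   # outcome "face seen" (1) minus expectation
        f += alpha * prediction_error
        trace.append(f)
    return trace

trace = learn_familiarity(5)
# Familiarity rises and the prediction error shrinks as the face
# transitions from unfamiliar to familiar.
print(trace)
```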

  20. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version 3-beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, M. T.

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of reactor pressure vessel (RPV) failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models, such as CORQUENCH, that evaluate long-term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but it has been substantially upgraded and applied to other reactor designs (e.g., the EPR) and, more recently, to the plant accidents at Fukushima Daiichi. The most recent round of improvements documented in this report was implemented specifically to support industry in developing Severe Accident Water Management (SAWM) strategies for Boiling Water Reactors.
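    Gravity-driven spreading of a thin melt layer can be caricatured by a degenerate nonlinear diffusion equation from lubrication theory, ∂h/∂t = ∂/∂x(D·h³·∂h/∂x). The toy explicit scheme below is for intuition only; MELTSPREAD3's transient hydrodynamics, heat transfer, and freezing models are far richer:

```python
# Toy 1-D viscous gravity-current spread (a lubrication-theory sketch, far
# simpler than MELTSPREAD's physics): dh/dt = d/dx( D * h^3 * dh/dx ),
# discretized in conservative (flux) form so melt volume is preserved.
def spread(h, dx, dt, nsteps, diff=1.0):
    h = list(h)
    n = len(h)
    for _ in range(nsteps):
        # fluxes at cell faces; q[0] and q[n] stay 0 (no outflow at the walls)
        q = [0.0] * (n + 1)
        for i in range(n - 1):
            hbar = 0.5 * (h[i] + h[i + 1])
            q[i + 1] = -diff * hbar ** 3 * (h[i + 1] - h[i]) / dx
        h = [h[i] - dt * (q[i + 1] - q[i]) / dx for i in range(n)]
    return h

# Initial "pour": a unit-depth slug over three cells of a 20-cell cavity.
h0 = [1.0 if 4 <= i <= 6 else 0.0 for i in range(20)]
h1 = spread(h0, dx=1.0, dt=0.05, nsteps=200)
# Total melt volume is conserved while the front advances and the depth drops.
print(round(sum(h1), 6), sum(h0))
```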

  1. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures, and their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e., the generation of medical code vectors is independent of prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require large numbers of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpora for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in the predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
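    The core idea, building the representation-learning corpus from task-label-guided aggregates rather than from individual records, can be sketched as follows (an assumed simplification with invented ICD-style codes and labels, not the PTGHRA algorithm itself):

```python
# Sketch of task-guided corpus construction (assumed simplification, not the
# paper's algorithm): instead of deriving co-occurrence context from each
# record alone, pool records that share the prediction label, so codes that
# are predictive of the same outcome come to co-occur.
from collections import defaultdict
from itertools import combinations

records = [
    (["I10", "E11"], "readmit"),     # hypertension + diabetes (invented)
    (["E11", "N18"], "readmit"),     # diabetes + kidney disease (invented)
    (["J45"], "no_readmit"),         # asthma (invented)
]

def cooccurrence(records, task_guided=True):
    counts = defaultdict(int)
    if task_guided:
        bags = defaultdict(list)
        for codes, label in records:
            bags[label].extend(codes)
        contexts = bags.values()
    else:
        contexts = [codes for codes, _ in records]
    for ctx in contexts:
        for a, b in combinations(sorted(set(ctx)), 2):
            counts[(a, b)] += 1
    return dict(counts)

# With label-guided pooling, I10 and N18 co-occur through the shared
# "readmit" label even though they never appear in the same record.
print(cooccurrence(records)[("I10", "N18")])  # 1
```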

  2. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are eliminated. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and it outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
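    A girth-4 cycle in a binary parity-check matrix H corresponds to two rows sharing ones in two or more columns, which gives a standard check for the cycle cancellation described above (a generic test, not the paper's joint design procedure):

```python
# Girth-4 check for a binary parity-check matrix H: a length-4 cycle in the
# Tanner graph exists iff some pair of rows shares ones in >= 2 columns.
def has_girth4_cycle(H):
    rows = [frozenset(j for j, v in enumerate(r) if v) for r in H]
    return any(len(rows[i] & rows[k]) >= 2
               for i in range(len(rows)) for k in range(i + 1, len(rows)))

H_bad = [[1, 1, 0, 0],    # rows 0 and 1 share columns 0 and 1 -> 4-cycle
         [1, 1, 1, 0],
         [0, 0, 1, 1]]
H_good = [[1, 1, 0, 0],   # no pair of rows shares two columns
          [1, 0, 1, 0],
          [0, 1, 0, 1]]
print(has_girth4_cycle(H_bad), has_girth4_cycle(H_good))  # True False
```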

  3. Perspectives On Dilution Jet Mixing

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.; Srinivasan, R.

    1990-01-01

    NASA recently completed a program of measurements and modeling of the mixing of transverse jets with a ducted crossflow, motivated by the need to design or tailor the temperature pattern at the combustor exit in gas turbine engines. The objectives of the program were to identify the dominant physical mechanisms governing mixing, to extend empirical models to provide near-term predictive capability, and to compare numerical code calculations with data to guide future analysis improvement efforts.

  4. Reliability Analysis of Brittle Material Structures - Including MEMS(?) - With the CARES/Life Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    2002-01-01

    Brittle materials are being used, or considered, for a wide variety of high-tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing components to sustain repeated loads without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The CARES/Life code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This presentation provides an overview of the CARES/Life program. Emphasis is placed on describing the latest enhancements to the code for reliability analysis with time-varying loads and temperatures (fully transient reliability analysis). Early efforts to investigate the validity of using Weibull statistics, the basis of the CARES/Life program, to characterize the strength of MEMS structures are also described, as is the version of CARES/Life for MEMS (CARES/MEMS) being prepared, which incorporates single-crystal and edge-flaw reliability analysis capability. It is hoped this talk will open a dialog for potential collaboration in the area of MEMS testing and life prediction.
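    The Weibull statistics underlying CARES/Life reduce, in the simplest two-parameter uniaxial form, to P_f = 1 − exp(−(σ/σ₀)^m). A sketch with made-up parameter values (CARES/Life itself handles multiaxial stresses, volume and area integration, and transient loads):

```python
# Two-parameter Weibull failure probability used in CARES-style reliability
# analysis (textbook form; the parameter values below are invented).
import math

def weibull_pof(sigma, sigma_0, m):
    """P_f = 1 - exp(-(sigma/sigma_0)^m) for applied stress sigma."""
    return 1.0 - math.exp(-((sigma / sigma_0) ** m))

# A higher Weibull modulus m means less strength scatter: the failure
# probability rises more steeply around the characteristic strength sigma_0.
print(weibull_pof(300e6, 400e6, 10))   # low P_f well below sigma_0
print(weibull_pof(400e6, 400e6, 10))   # 1 - 1/e ≈ 0.632 at sigma_0
```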

  5. Design optimization of beta- and photovoltaic conversion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wichner, R.; Blum, A.; Fischer-Colbrie, E.

    1976-01-08

    This report presents the theoretical and experimental results of an LLL Electronics Engineering research program aimed at optimizing the design and electronic-material parameters of beta- and photovoltaic p-n junction conversion devices. To meet this objective, a comprehensive computer code has been developed that can handle a broad range of practical conditions. The physical model upon which the code is based is described first. Then, an example is given of a set of optimization calculations, along with the resulting optimized efficiencies for silicon (Si) and gallium-arsenide (GaAs) devices. The model we have developed, however, is not limited to these materials. It can handle any appropriate material, single or polycrystalline, provided energy absorption and electron-transport data are available. To check code validity, the performance of experimental silicon p-n junction devices (produced in-house) was measured under various light intensities and spectra as well as under tritium beta irradiation. The results of these tests were then compared with predicted results based on the known or best-estimated device parameters. The comparison showed very good agreement between the calculated and the measured results.
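    Optimization of this kind ultimately evaluates an I-V curve for maximum power point and fill factor. A single-diode sketch with invented parameter values (the LLL code models energy absorption and electron transport in far more detail):

```python
# Single-diode I-V sketch for estimating the maximum power point and fill
# factor of a p-n junction device (illustrative model, invented parameters,
# not the LLL code): I = I_L - I_0 * (exp(V/Vt) - 1).
import math

def iv_current(v, i_l=0.035, i_0=1e-9, vt=0.02585):
    return i_l - i_0 * (math.exp(v / vt) - 1.0)

def max_power_point(v_oc, steps=10000):
    best = (0.0, 0.0)  # (power, voltage)
    for k in range(steps + 1):
        v = v_oc * k / steps
        p = v * iv_current(v)
        if p > best[0]:
            best = (p, v)
    return best

v_oc = 0.02585 * math.log(0.035 / 1e-9 + 1)   # open-circuit voltage
p_max, v_mp = max_power_point(v_oc)
fill_factor = p_max / (v_oc * 0.035)           # FF = P_max / (V_oc * I_sc)
print(round(v_oc, 3), round(fill_factor, 3))
```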

  6. Simulation and optimization study of a solar seasonal storage district heating system: the Fox River Valley case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michaels, A.I.; Sillman, S.; Baylin, F.

    1983-05-01

    A central solar-heating plant with seasonal heat storage in a deep underground aquifer is designed by means of a solar-seasonal-storage-system simulation code based on the Solar Energy Research Institute (SERI) code for Solar Annual Storage Simulation (SASS). This solar seasonal storage plant is designed to supply close to 100% of the annual heating and domestic-hot-water (DHW) load of a hypothetical new community, the Fox River Valley Project, for a location in Madison, Wisconsin. Some analyses are also carried out for Boston, Massachusetts, and Copenhagen, Denmark, as an indication of weather and insolation effects. Analyses are conducted for five different types of solar collectors and for an alternate system utilizing seasonal storage in a large water tank. Predicted seasonal performance and system and storage costs are calculated. To provide some validation of the SASS results, a simulation of the solar system with seasonal storage in a large water tank is also carried out with a modified version of the Swedish solar seasonal storage code MINSUN.

  7. Investigation of thermal protection systems effects on viscid and inviscid flow fields for manned entry systems

    NASA Technical Reports Server (NTRS)

    Bartlett, E. P.; Morse, H. L.; Tong, H.

    1971-01-01

    Procedures and methods for predicting aerothermodynamic heating to delta orbiter shuttle vehicles were reviewed. A number of approximate methods were found to be adequate for large-scale parameter studies but are considered inadequate for final design calculations. It is recommended that final design calculations be based on a computer code which accounts for nonequilibrium chemistry, streamline spreading, entropy swallowing, and turbulence. It is further recommended that this code be developed with the intent that it can be directly coupled with an exact inviscid flow field calculation when the latter becomes available. A nonsimilar, equilibrium-chemistry computer code (BLIMP) was used to evaluate the effects of entropy swallowing, turbulence, and various three-dimensional approximations. These solutions were compared with available wind tunnel data. It was found that, for wind tunnel conditions, the effects of entropy swallowing and three-dimensionality are small for laminar boundary layers, but entropy swallowing causes a significant increase in turbulent heat transfer. However, it is noted that even small effects (say, 10-20%) may be important for the shuttle reusability concept.

  8. Experimental validation of a direct simulation by Monte Carlo molecular gas flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shufflebotham, P.K.; Bartel, T.J.; Berney, B.

    1995-07-01

    The Sandia direct simulation Monte Carlo (DSMC) molecular/transition gas flow simulation code has significant potential as a computer-aided design tool for the design of vacuum systems in low pressure plasma processing equipment. The purpose of this work was to verify the accuracy of this code through direct comparison to experiment. To test the DSMC model, a fully instrumented, axisymmetric vacuum test cell was constructed, and spatially resolved pressure measurements were made in N2 at flows from 50 to 500 sccm. In a "blind" test, the DSMC code was used to model the experimental conditions directly, and the results were compared to the measurements. It was found that the model predicted all the experimental findings to a high degree of accuracy. Only one modeling issue was uncovered: the axisymmetric model showed localized low-pressure spots along the axis next to surfaces. Although this artifact did not significantly alter the accuracy of the results, it did add noise to the axial data. © 1995 American Vacuum Society.

  9. CRISPR library designer (CLD): software for multispecies design of single guide RNA libraries.

    PubMed

    Heigwer, Florian; Zhan, Tianzuo; Breinig, Marco; Winter, Jan; Brügemann, Dirk; Leible, Svenja; Boutros, Michael

    2016-03-24

    Genetic screens using CRISPR/Cas9 are a powerful method for the functional analysis of genomes. Here we describe CRISPR library designer (CLD), an integrated bioinformatics application for the design of custom single guide RNA (sgRNA) libraries for all organisms with annotated genomes. CLD is suitable for the design of libraries using modified CRISPR enzymes and targeting non-coding regions. To demonstrate its utility, we perform a pooled screen for modulators of the TNF-related apoptosis inducing ligand (TRAIL) pathway using a custom library of 12,471 sgRNAs. CLD predicts a high fraction of functional sgRNAs and is publicly available at https://github.com/boutroslab/cld.
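    The most basic operation a tool like CLD automates is scanning a sequence for 20-nt protospacers followed by an NGG PAM (SpCas9). A minimal plus-strand-only sketch (CLD additionally handles both strands, off-target scoring, modified enzymes, and non-coding targets):

```python
# Minimal sgRNA site scan of the kind CLD automates (illustrative only):
# find 20-nt protospacers followed by an NGG PAM on the plus strand.
import re

def find_sgRNA_sites(seq):
    """Return (start, protospacer, PAM) for every 20-mer + NGG site."""
    seq = seq.upper()
    sites = []
    # Lookahead so overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        sites.append((m.start(), m.group(1), m.group(2)))
    return sites

dna = "T" * 5 + "ACGTACGTACGTACGTACGT" + "TGG" + "A" * 5
for start, spacer, pam in find_sgRNA_sites(dna):
    print(start, spacer, pam)
```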

  10. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. 
AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.

  11. Continued development and correlation of analytically based weight estimation codes for wings and fuselages

    NASA Technical Reports Server (NTRS)

    Mullen, J., Jr.

    1978-01-01

    The implementation of changes to the program for Wing Aeroelastic Design and the development of a program to estimate aircraft fuselage weights are described. The equations to implement the modified planform description, the stiffened-panel skin representation, the trim loads calculation, and the flutter constraint approximation are presented. A comparison of the wing model with actual F-5A weight and material distributions and loads is given. The equations and program techniques used for the estimation of aircraft fuselage weights are described. These equations were incorporated as a computer code. The weight predictions of this program are compared with data from the C-141.

  12. Common spaceborne multicomputer operating system and development environment

    NASA Technical Reports Server (NTRS)

    Craymer, L. G.; Lewis, B. F.; Hayes, P. J.; Jones, R. L.

    1994-01-01

    A preliminary technical specification for a multicomputer operating system is developed. The operating system is targeted for spaceborne flight missions and provides a broad range of real-time functionality, dynamic remote code-patching capability, and system fault tolerance and long-term survivability features. Dataflow concepts are used for representing application algorithms. Functional features are included to ensure real-time predictability for a class of algorithms which require data-driven execution on an iterative steady state basis. The development environment supports the development of algorithm code, design of control parameters, performance analysis, simulation of real-time dataflow applications, and compiling and downloading of the resulting application.

  13. Program to develop a performance and heat load prediction system for multistage turbines

    NASA Technical Reports Server (NTRS)

    Sharma, Om

    1994-01-01

    Flows in low-aspect-ratio turbines, such as the SSME fuel turbine, are three dimensional and highly unsteady due to the relative motion of adjacent airfoil rows and the circumferential and spanwise gradients in total pressure and temperature. The systems used to design these machines, however, are based on the assumption that the flow is steady. The codes utilized in these design systems are calibrated against turbine rig and engine data through the use of empirical correlations and experience factors. For high-aspect-ratio turbines, these codes yield reasonably accurate estimates of flow and temperature distributions. However, future design trends will see lower aspect ratios (reduced numbers of parts) and higher inlet temperatures, which will result in increased three dimensionality and flow unsteadiness in turbines. Analysis of recently acquired data indicates that temperature streaks and secondary flows generated in combustors and upstream airfoils can have a large impact on the time-averaged temperature and angle distributions in downstream airfoil rows.

  14. Low thrust chemical rocket technology

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    1992-01-01

    An on-going technology program to improve the performance of low thrust chemical rockets for spacecraft on-board propulsion applications is reviewed. Improved performance and lifetime is sought by the development of new predictive tools to understand the combustion and flow physics, introduction of high temperature materials and improved component designs to optimize performance, and use of higher performance propellants. Improved predictive technology is sought through the comparison of both local and global predictions with experimental data. Predictions are based on both the RPLUS Navier-Stokes code with finite rate kinetics and the JANNAF methodology. Data were obtained with laser-based diagnostics along with global performance measurements. Results indicate that the modeling of the injector and the combustion process needs improvement in these codes and flow visualization with a technique such as 2-D laser induced fluorescence (LIF) would aid in resolving issues of flow symmetry and shear layer combustion processes. High temperature material fabrication processes are under development and small rockets are being designed, fabricated, and tested using these new materials. Rhenium coated with iridium for oxidation protection was produced by the Chemical Vapor Deposition (CVD) process and enabled an 800 K increase in rocket operating temperature. Performance gains with this material in rockets using Earth storable propellants (nitrogen tetroxide and monomethylhydrazine or hydrazine) were obtained through component redesign to eliminate fuel film cooling and its associated combustion inefficiency while managing head end thermal soakback. Material interdiffusion and oxidation characteristics indicated that the requisite lifetimes of tens of hours were available for thruster applications. Rockets were designed, fabricated, and tested with thrusts of 22, 62, 440 and 550 N. Performance improvements of 10 to 20 seconds specific impulse were demonstrated. 
Higher performance propellants were also evaluated: space-storable propellants, including liquid oxygen (LOX) as the oxidizer with nitrogen hydrides or hydrocarbons as fuels. Specifically, a LOX/hydrazine engine was designed, fabricated, and shown to have 95 percent of theoretical c-star, which translates into a projected vacuum specific impulse of 345 seconds at an area ratio of 204:1. Further performance improvement can be obtained by the use of LOX/hydrogen propellants, especially for manned spacecraft applications, and specific designs must be developed and advanced through flight qualification.
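    The quoted 95% of theoretical c-star translates directly to specific impulse through the standard relation Isp = c*·C_F/g₀, so c-star efficiency scales Isp proportionally. A sketch with hypothetical numbers (the c* and C_F values below are invented for illustration, not the engine's):

```python
# Standard rocket relation linking characteristic velocity c* to specific
# impulse: Isp = c* * Cf / g0. A 95% c* efficiency scales Isp by 0.95.
G0 = 9.80665  # standard gravity, m/s^2

def isp_seconds(c_star_m_s, cf):
    return c_star_m_s * cf / G0

# Hypothetical values for illustration: ideal c* = 1750 m/s, Cf = 1.9.
ideal = isp_seconds(1750.0, 1.9)
actual = isp_seconds(0.95 * 1750.0, 1.9)   # 95% c* efficiency
print(round(ideal, 1), round(actual, 1))
```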

  15. Culture and Healthy Eating: The Role of Independence and Interdependence in the United States and Japan.

    PubMed

    Levine, Cynthia S; Miyamoto, Yuri; Markus, Hazel Rose; Rigotti, Attilio; Boylan, Jennifer Morozink; Park, Jiyoung; Kitayama, Shinobu; Karasawa, Mayumi; Kawakami, Norito; Coe, Christopher L; Love, Gayle D; Ryff, Carol D

    2016-10-01

    Healthy eating is important for physical health. Using large probability samples of middle-aged adults in the United States and Japan, we show that fitting with the culturally normative way of being predicts healthy eating. In the United States, a culture that prioritizes and emphasizes independence, being independent predicts eating a healthy diet (an index of fish, protein, fruit, vegetables, reverse-coded sugared beverages, and reverse-coded high fat meat consumption; Study 1) and not using nonmeat food as a way to cope with stress (Study 2a). In Japan, a culture that prioritizes and emphasizes interdependence, being interdependent predicts eating a healthy diet (Studies 1 and 2b). Furthermore, reflecting the types of agency that are prevalent in each context, these relationships are mediated by autonomy in the United States and positive relations with others in Japan. These findings highlight the importance of understanding cultural differences in shaping healthy behavior and have implications for designing health-promoting interventions. © 2016 by the Society for Personality and Social Psychology, Inc.

  16. Advanced turboprop noise prediction based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Padula, S. L.; Dunn, M. H.

    1987-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study of accuracy and execution speed is also presented. The code is tested against an earlier Langley code; considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  17. A novel feature extraction scheme with ensemble coding for protein-protein interaction prediction.

    PubMed

    Du, Xiuquan; Cheng, Jiaxing; Zheng, Tingting; Duan, Zheng; Qian, Fulan

    2014-07-18

    Protein-protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed by using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search for the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to bigger and more realistic datasets, maintaining 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for performing PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp.
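    The reported precision, recall, and accuracy follow the standard confusion-matrix definitions; a small helper makes the relationships explicit (the counts below are invented for illustration, not the paper's data):

```python
# Standard confusion-matrix metrics as reported for the DXEC classifier
# (definitions only; the counts here are made up).
def metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

p, r, a = metrics(tp=80, fp=20, tn=70, fn=30)
print(p, round(r, 3), a)  # 0.8 0.727 0.75
```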

  18. PACCMIT/PACCMIT-CDS: identifying microRNA targets in 3' UTRs and coding sequences.

    PubMed

    Šulc, Miroslav; Marín, Ray M; Robins, Harlan S; Vaníček, Jiří

    2015-07-01

    The purpose of the proposed web server, publicly available at http://paccmit.epfl.ch, is to provide a user-friendly interface to two algorithms for predicting messenger RNA (mRNA) molecules regulated by microRNAs: (i) PACCMIT (Prediction of ACcessible and/or Conserved MIcroRNA Targets), which identifies primarily mRNA transcripts targeted in their 3' untranslated regions (3' UTRs), and (ii) PACCMIT-CDS, designed to find mRNAs targeted within their coding sequences (CDSs). While PACCMIT belongs among the accurate algorithms for predicting conserved microRNA targets in the 3' UTRs, the main contribution of the web server is 2-fold: PACCMIT provides an accurate tool for predicting targets also of weakly conserved or non-conserved microRNAs, whereas PACCMIT-CDS addresses the lack of similar portals adapted specifically for targets in CDS. The web server asks the user for microRNAs and mRNAs to be analyzed, accesses the precomputed P-values for all microRNA-mRNA pairs from a database for all mRNAs and microRNAs in a given species, ranks the predicted microRNA-mRNA pairs, evaluates their significance according to the false discovery rate and finally displays the predictions in a tabular form. The results are also available for download in several standard formats. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
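    Evaluating significance "according to the false discovery rate" is commonly done with the Benjamini-Hochberg procedure; the sketch below shows that standard procedure, though PACCMIT's exact implementation may differ:

```python
# Benjamini-Hochberg FDR procedure of the kind applied to ranked
# microRNA-mRNA P-values (standard method, not PACCMIT's exact code).
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    k_max = -1
    for rank, i in enumerate(order, start=1):
        # Largest rank k with p_(k) <= alpha * k / m; reject the top k.
        if pvals[i] <= alpha * rank / m:
            k_max = rank
    return sorted(order[:k_max]) if k_max > 0 else []

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6]))  # [0, 1]
```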

  19. Vfold: a web server for RNA structure and folding thermodynamics prediction.

    PubMed

    Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie

    2014-01-01

    The ever increasing discovery of non-coding RNAs leads to unprecedented demand for the accurate modeling of RNA folding, including the predictions of two-dimensional (base pair) and three-dimensional all-atom structures and folding stabilities. Accurate modeling of RNA structure and stability has far-reaching impact on our understanding of RNA functions in human health and our ability to design RNA-based therapeutic strategies. The Vfold server offers a web interface to predict (a) RNA two-dimensional structure from the nucleotide sequence, (b) three-dimensional structure from the two-dimensional structure and the sequence, and (c) folding thermodynamics (heat capacity melting curve) from the sequence. To predict the two-dimensional structure (base pairs), the server generates an ensemble of structures, including loop structures with the different intra-loop mismatches, and evaluates the free energies using the experimental parameters for the base stacks and the loop entropy parameters given by a coarse-grained RNA folding model (the Vfold model) for the loops. To predict the three-dimensional structure, the server assembles the motif scaffolds using structure templates extracted from the known PDB structures and refines the structure using all-atom energy minimization. The Vfold-based web server provides a user friendly tool for the prediction of RNA structure and stability. The web server and the source codes are freely accessible for public use at "http://rna.physics.missouri.edu".
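    The simplest relative of the free-energy models behind a server like Vfold is Nussinov-style base-pair maximization, sketched below for intuition (no stacking energies or loop entropies, unlike the Vfold model):

```python
# Nussinov-style base-pair maximization for RNA secondary structure
# (an illustrative dynamic program, far simpler than Vfold's energy model).
def max_pairs(seq, min_loop=3):
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
            ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]  # case: base i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pair:  # case: i pairs with k
                    left = dp[i + 1][k - 1]
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

# A small hairpin: three GC/GU-capable stems around a 4-nt loop.
print(max_pairs("GGGAAAUCCC"))  # 3
```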

  20. Numerical Assessment of Four-Port Through-Flow Wave Rotor Cycles with Passage Height Variation

    NASA Technical Reports Server (NTRS)

    Paxson, D. E.; Lindau, Jules W.

    1997-01-01

    The potential for improved performance of wave rotor cycles through the use of passage height variation is examined. A quasi-one-dimensional CFD code with experimentally validated loss models is used to determine the flowfield in the wave rotor passages. Results indicate that a carefully chosen passage height profile can produce substantial performance gains. Numerical performance data are presented for a specific profile in a four-port, through-flow cycle design, which yielded a computed 4.6% increase in design point pressure ratio over a comparably sized rotor with constant passage height. In a small gas turbine topping cycle application, this increased pressure ratio would reduce specific fuel consumption to 22% below that of the un-topped engine, a significant improvement over the already impressive 18% reduction predicted for the constant passage height rotor. The simulation code is briefly described. The method used to obtain rotor passage height profiles with enhanced performance is presented. Design and off-design results are shown using two different computational techniques. The paper concludes with some recommendations for further work.

  1. Aeroelastic stability analyses of two counter rotating propfan designs for a cruise missile model

    NASA Technical Reports Server (NTRS)

    Mahajan, Aparajit J.; Lucero, John M.; Mehmed, Oral; Stefko, George L.

    1992-01-01

    Aeroelastic stability analyses were performed to ensure structural integrity of two counterrotating propfan blade designs for a NAVY/Air Force/NASA cruise missile model wind tunnel test. This analysis predicted whether the propfan designs would be flutter free at the operating conditions of the wind tunnel test. Calculated stability results are presented for the two blade designs with rotational speed and Mach number as the parameters. An aeroelastic analysis code, ASTROP2 (Aeroelastic Stability and Response of Propulsion Systems - 2 Dimensional Analysis), developed at LeRC, was used in this project. The aeroelastic analysis is a modal method and uses the combination of a finite element structural model and two dimensional steady and unsteady cascade aerodynamic models. This code was developed to analyze single rotation propfans but was modified and applied to counterrotating propfans for the present work. Modifications were made to transform the geometry and rotation of the aft rotor to the same reference frame as the forward rotor, to input a non-uniform inflow into the rotor being analyzed, and to automatically converge to the least stable aeroelastic mode.

  2. MINIVER upgrade for the AVID system. Volume 3: EXITS user's and input guide

    NASA Technical Reports Server (NTRS)

    Pond, J. E.; Schmitz, C. P.

    1983-01-01

    The successful design of thermal protection systems for vehicles operating in atmosphere and near-space environments requires accurate analyses of heating rate and temperature histories encountered along a trajectory. For preliminary design calculations, however, the requirement for accuracy must be tempered by the need for speed and versatility in computational tools used to determine thermal environments and structural thermal response. The MINIVER program was found to provide the proper balance between versatility, speed and accuracy for an aerothermal prediction tool. The advancement in computer aided design concepts at Langley Research Center (LaRC) in the past few years has made it desirable to incorporate the MINIVER program into the LaRC Advanced Vehicle Integrated Design (AVID) system. In order to effectively incorporate MINIVER into the AVID system, several changes to MINIVER were made. The thermal conduction options in MINIVER were removed and a new Explicit Interactive Thermal Structures (EXITS) code was developed. Many upgrades to the MINIVER code were made and a new Langley version of MINIVER called LANMIN was created.

  3. MINIVER upgrade for the AVID system. Volume 1: LANMIN user's manual

    NASA Technical Reports Server (NTRS)

    Engel, C. D.; Praharaj, S. C.

    1983-01-01

    The successful design of thermal protection systems for vehicles operating in atmosphere and near-space environments requires accurate analyses of heating rate and temperature histories encountered along a trajectory. For preliminary design calculations, however, the requirement for accuracy must be tempered by the need for speed and versatility in computational tools used to determine thermal environments and structural thermal response. The MINIVER program has been found to provide the proper balance between versatility, speed and accuracy for an aerothermal prediction tool. The advancement in computer aided design concepts at Langley Research Center (LaRC) in the past few years has made it desirable to incorporate the MINIVER program into the LaRC Advanced Vehicle Integrated Design (AVID) system. In order to effectively incorporate MINIVER into the AVID system, several changes to MINIVER were made. The thermal conduction options in MINIVER were removed and a new Explicit Interactive Thermal Structures (EXITS) code was developed. Many upgrades to the MINIVER code were made and a new Langley version of MINIVER called LANMIN was created. The theoretical methods and subroutine functions used in LANMIN are described.

  4. High Speed Research (HSR) Multi-Year Summary Report for Calendar Years 1995-1999

    NASA Technical Reports Server (NTRS)

    Baker, Myles; Boyd, William

    1999-01-01

    The Aeroelasticity Task is intended to provide demonstrated technology readiness to predict and improve flutter characteristics of an HSCT configuration. This requires aerodynamic codes that are applicable to the wide range of flight regimes in which the HSCT will operate, and are suitable to provide the higher fidelity required for evaluation of aeroservoelastic coupling effects. Prediction of these characteristics will result in reduced airplane weight and risk associated with a highly flexible, low-aspect ratio supersonic airplane with narrow fuselage, relatively thin wings, and heavy engines. This Task is subdivided into three subtasks. The first subtask includes the design, fabrication, and testing of wind-tunnel models suitable to provide an experimental database relevant to HSCT configurations. The second subtask includes validation of candidate unsteady aerodynamic codes, applicable in the Mach and frequency ranges of interest for the HSCT, through analysis/test correlation with the test data. The third subtask includes efforts to develop and enhance these codes for application to HSCT configurations. The wind tunnel models designed and constructed during this program furnished data which were useful for the analysis/test correlation work, but there were shortcomings. There was initial uncertainty in the proper tunnel configuration for testing, there was a need for higher quality measured model geometry, and there was a need for better measured model displacements in the test data. One of the models exhibited changes in its dynamic characteristics during testing. Model design efforts were hampered by a need for more and earlier analysis support and better knowledge of material properties. Success of the analysis/test correlation work was somewhat muted by the uncertainties in the wind tunnel model data. The planned extent of the test data was not achieved, partly due to the delays in the model design and fabrication, which could not be extended due to termination of the HSR program.

  5. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  6. Broadband Polarization Conversion Metasurface Based on Metal Cut-Wire Structure for Radar Cross Section Reduction.

    PubMed

    Yang, Jia Ji; Cheng, Yong Zhi; Ge, Chen Chen; Gong, Rong Zhou

    2018-04-19

    A class of linear polarization conversion coding metasurfaces (MSs) based on a metal cut-wire structure is proposed, which can be applied to reduce the radar cross section (RCS). We first present a hypothesis based on the principle of planar array theory, and then verify the RCS reduction characteristics of the linear polarization conversion coding MSs by simulations and experiments. The simulated results show that in the frequency range of 6–14 GHz, the linear polarization conversion ratio reaches a maximum value of 90%, which is in good agreement with the theoretical predictions. For normal incident x- and y-polarized waves, the RCS reduction of the designed coding MSs 01/01 and 01/10 is essentially more than 10 dB in the above-mentioned frequency range. We prepared and measured the 01/10 coding MS sample, and found that the experimental results in terms of reflectance and RCS reduction are in good agreement with the simulated ones under normal incidence. In addition, under oblique incidence, RCS reduction is suppressed as the angle of incidence increases, but the MS still exhibits RCS reduction effects in a certain frequency range. The designed MS is expected to have valuable potential in applications for stealth technology.
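
    The planar-array estimate of broadside RCS reduction mentioned above can be sketched as follows; this toy model treats each coding element as a unit-amplitude reflector with phase 0 or 180 degrees, and ignores element patterns, conversion efficiency, and scattering into side lobes (the 4x4 matrix is hypothetical):

```python
import numpy as np

def broadside_rcs_reduction(coding):
    """Monostatic RCS reduction at normal incidence from planar array
    theory: the specular return scales with |sum of element reflection
    coefficients|^2, normalized to a same-size PEC plate."""
    gamma = np.exp(1j * np.pi * np.asarray(coding, dtype=float))  # 0 -> +1, 1 -> -1
    n = gamma.size
    ratio = np.abs(gamma.sum()) ** 2 / n**2
    return 10.0 * np.log10(ratio) if ratio > 0 else -np.inf

# Hypothetical 4x4 checkerboard ("01/10"-style) coding matrix
chess = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
```

A uniform matrix (all elements in phase) gives 0 dB reduction, while the checkerboard cancels the specular lobe almost completely, redirecting energy into off-specular directions.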

  7. Internal Flow Analysis of Large L/D Solid Rocket Motors

    NASA Technical Reports Server (NTRS)

    Laubacher, Brian A.

    2000-01-01

    Traditionally, Solid Rocket Motor (SRM) internal ballistic performance has been analyzed and predicted with either zero-dimensional (volume filling) codes or one-dimensional ballistics codes. One dimensional simulation of SRM performance is only necessary for ignition modeling, or for motors that have large length to port diameter ratios which exhibit an axial "pressure drop" during the early burn times. This type of prediction works quite well for many types of motors, however, when motor aspect ratios get large, and port to throat ratios get closer to one, two dimensional effects can become significant. The initial propellant grain configuration for the Space Shuttle Reusable Solid Rocket Motor (RSRM) was analyzed with 2-D, steady, axi-symmetric computational fluid dynamics (CFD). The results of the CFD analysis show that the steady-state performance prediction at the initial burn geometry, in general, agrees well with 1-D transient prediction results at an early time, however, significant features of the 2-D flow are captured with the CFD results that would otherwise go unnoticed. Capturing these subtle differences gives a greater confidence to modeling accuracy, and additional insight with which to model secondary internal flow effects like erosive burning. Detailed analysis of the 2-D flowfield has led to the discovery of its hidden 1-D isentropic behavior, and provided the means for a thorough and simplified understanding of internal solid rocket motor flow. Performance parameters such as nozzle stagnation pressure, static pressure drop, characteristic velocity, thrust and specific impulse are discussed in detail and compared for different modeling and prediction methods. The predicted performance using both the 1-D codes and the CFD results are compared with measured data obtained from static tests of the RSRM. 
The differences and limitations of predictions using 1-D and 2-D flow fields are discussed, and some suggestions for the design of large L/D motors and, more critically, motors with port to throat ratios near one, are covered.
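
    The ideal-nozzle relations behind performance parameters such as characteristic velocity c*, thrust coefficient, and specific impulse can be sketched as below; the gas properties and chamber conditions are representative values for illustration, not RSRM data:

```python
import math

def cstar(gamma, R, T0):
    """Characteristic velocity c* from chamber conditions (ideal gas,
    isentropic nozzle): c* = sqrt(R*T0) / Gamma(gamma)."""
    Gamma = math.sqrt(gamma) * (2.0 / (gamma + 1.0)) ** (
        (gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return math.sqrt(R * T0) / Gamma

def thrust_coefficient(gamma, pe_over_p0, pa_over_p0, eps):
    """Ideal thrust coefficient with the pressure-thrust correction term
    (eps = exit-to-throat area ratio)."""
    term = (2.0 * gamma**2 / (gamma - 1.0)) * (
        2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0))
    cf = math.sqrt(term * (1.0 - pe_over_p0 ** ((gamma - 1.0) / gamma)))
    return cf + (pe_over_p0 - pa_over_p0) * eps

# Representative (hypothetical) solid-propellant chamber conditions
cs = cstar(gamma=1.2, R=320.0, T0=3400.0)                   # m/s
isp = cs * thrust_coefficient(1.2, 0.01, 0.01, 10.0) / 9.80665  # s
```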

  8. Development of Probabilistic Life Prediction Methodologies and Testing Strategies for MEMS and CMC's

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama

    2003-01-01

    This effort investigates probabilistic life prediction methodologies for ceramic matrix composites (CMCs) and MicroElectroMechanical Systems (MEMS) and analyzes designs that determine stochastic properties of MEMS. For CMCs this includes a brief literature survey regarding lifing methodologies. Also of interest for MEMS is the design of a proper test for the Weibull size effect in thin film (bulge test) specimens. The Weibull size effect is a consequence of a stochastic strength response predicted from the Weibull distribution. Confirming that MEMS strength is controlled by the Weibull distribution will enable the development of a probabilistic design methodology for MEMS - similar to the GRC-developed CARES/Life program for bulk ceramics. A main objective of this effort is to further develop and verify the ability of the Ceramics Analysis and Reliability Evaluation of Structures/Life (CARES/Life) code to predict the time-dependent reliability of MEMS structures subjected to multiple transient loads. A second set of objectives is to determine the applicability/suitability of the CARES/Life methodology for CMC analysis, what changes would be needed to the methodology and software, and, if feasible, to run a demonstration problem. Also important is an evaluation of CARES/Life coupled to the ANSYS Probabilistic Design System (PDS) and the potential of coupling transient reliability analysis to the ANSYS PDS.
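
    The Weibull size effect the bulge test is meant to confirm can be stated compactly: at equal failure probability, characteristic strength scales as (V2/V1)^(1/m), so larger specimens are statistically weaker. A minimal sketch of the two-parameter Weibull model with volume scaling (all parameter values hypothetical):

```python
import math

def failure_prob(sigma, volume, sigma0=1.0, v0=1.0, m=10.0):
    """Weibull failure probability with volume scaling:
    P_f = 1 - exp(-(V/V0) * (sigma/sigma0)^m)."""
    return 1.0 - math.exp(-(volume / v0) * (sigma / sigma0) ** m)

def strength_ratio(v1, v2, m=10.0):
    """Ratio of characteristic strengths of two specimen sizes:
    sigma1/sigma2 = (V2/V1)^(1/m)."""
    return (v2 / v1) ** (1.0 / m)
```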

  9. Design and experimental evaluation of compact radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Fredmonski, A. J.; Huber, F. W.; Roelke, R. J.; Simonyi, S.

    1991-01-01

    The application of a multistage 3D Euler solver to the aerodynamic design of two compact radial-inflow turbines is presented, along with experimental results evaluating and validating the designs. The objectives of the program were to design, fabricate, and rig test compact radial-inflow turbines with equal or better efficiency relative to conventional designs, while having 40 percent less rotor length than current traditionally-sized radial turbines. The approach to achieving these objectives was to apply a calibrated 3D multistage Euler code to accurately predict and control the high rotor flow passage velocities and high aerodynamic loadings resulting from the reduction in rotor length. A comparison of the advanced compact designs to current state-of-the-art configurations is presented.

  10. Light transport feature for SCINFUL.

    PubMed

    Etaati, G R; Ghal-Eh, N

    2008-03-01

    An extended version of the scintillator response function prediction code SCINFUL has been developed by incorporating PHOTRACK, a Monte Carlo light transport code. Comparisons of calculated and experimental results for organic scintillators exposed to neutrons show that the extended code improves the predictive capability of SCINFUL.

  11. A high burnup model developed for the DIONISIO code

    NASA Astrophysics Data System (ADS)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.

    2013-02-01

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burnup, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The results show good agreement with the data provided in the FUMEX II/III NEA data bank.
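
    The kind of isotope balance such subroutines solve can be illustrated, in a deliberately simplified one-group form, by a 238U capture chain feeding 239Pu, integrated explicitly; the cross sections, flux, and number density below are illustrative textbook-order values, not DIONISIO data:

```python
def plutonium_buildup(phi, days, dt_days=0.1):
    """Toy one-group isotope balance: 238U capture produces 239Pu,
    which is removed by absorption (the intermediate decay chain is
    collapsed for brevity).  Explicit Euler integration."""
    barn = 1.0e-24                       # cm^2
    sig_c_238 = 0.9 * barn               # 238U capture cross section
    sig_a_239 = 1011.0 * barn            # 239Pu absorption cross section
    n238_0 = 2.2e22                      # 238U atoms/cm^3 (rough UO2 value)
    n238, n239 = n238_0, 0.0
    step = dt_days * 86400.0             # time step in seconds
    for _ in range(int(days / dt_days)):
        dn239 = sig_c_238 * phi * n238 - sig_a_239 * phi * n239
        n238 -= sig_c_238 * phi * n238 * step
        n239 += dn239 * step
    return n239 / n238_0                 # 239Pu fraction of initial 238U

frac = plutonium_buildup(phi=3.0e13, days=1000.0)
```

The 239Pu concentration saturates when production and absorption balance, which is the qualitative behavior the radial power-density models capture near the pellet rim.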

  12. Gigaflop (billion floating point operations per second) performance for computational electromagnetics

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Rowell, C.; Hall, W. F.; Mohammadian, A. H.; Schuh, M.; Taylor, K.

    1992-01-01

    Accurate and rapid evaluation of radar signature for alternative aircraft/store configurations would be of substantial benefit in the evolution of integrated designs that meet radar cross-section (RCS) requirements across the threat spectrum. Finite-volume time domain methods offer the possibility of modeling the whole aircraft, including penetrable regions and stores, at longer wavelengths on today's gigaflop supercomputers and at typical airborne radar wavelengths on the teraflop computers of tomorrow. A structured-grid finite-volume time domain computational fluid dynamics (CFD)-based RCS code has been developed at the Rockwell Science Center, and this code incorporates modeling techniques for general radar absorbing materials and structures. Using this work as a base, the goal of the CFD-based CEM effort is to define, implement and evaluate various code development issues suitable for rapid prototype signature prediction.

  13. Comprehensive Analysis Modeling of Small-Scale UAS Rotors

    NASA Technical Reports Server (NTRS)

    Russell, Carl R.; Sekula, Martin K.

    2017-01-01

    Multicopter unmanned aircraft systems (UAS), or drones, have continued their explosive growth in recent years. With this growth comes demand for increased performance as the limits of existing technologies are reached. In order to better design multicopter UAS aircraft, better performance prediction tools are needed. This paper presents the results of a study aimed at using the rotorcraft comprehensive analysis code CAMRAD II to model a multicopter UAS rotor in hover. Parametric studies were performed to determine the level of fidelity needed in the analysis code inputs to achieve results that match test data. Overall, the results show that CAMRAD II is well suited to model small-scale UAS rotors in hover. This paper presents the results of the parametric studies as well as recommendations for the application of comprehensive analysis codes to multicopter UAS rotors.

  14. Investigation of Natural Circulation Instability and Transients in Passively Safe Small Modular Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishii, Mamoru

    The NEUP-funded project, NEUP-3496, aims to experimentally investigate two-phase natural circulation flow instability that could occur in Small Modular Reactors (SMRs), especially natural circulation SMRs. The objective has been achieved by systematically performing tests to study the general natural circulation instability characteristics and the natural circulation behavior under start-up or design basis accident conditions. Experimental data sets highlighting the effect of void reactivity feedback as well as the effect of power ramp-up rate and system pressure have been used to develop a comprehensive stability map. The safety analysis code, RELAP5, has been used to evaluate experimental results and models. Improvements to the constitutive relations for flashing have been made in order to develop a reliable analysis tool. This research has focused on two generic SMR designs, i.e. a small modular Simplified Boiling Water Reactor (SBWR)-like design and a small integral Pressurized Water Reactor (PWR)-like design. A BWR-type natural circulation test facility was first built based on the three-level scaling analysis of the Purdue Novel Modular Reactor (NMR) with an electric output of 50 MWe, namely NMR-50, which represents a BWR-type SMR with a significantly reduced reactor pressure vessel (RPV) height. The experimental facility was instrumented to measure thermal-hydraulic parameters such as pressure, temperature, mass flow rate and void fraction. Characterization tests were performed before the startup transient tests and quasi-steady tests to determine the loop flow resistance. The control system and data acquisition system were programmed with LabVIEW to provide real-time control and data storage. The coupled thermal-hydraulic and nuclear startup transient tests were performed to investigate the flow instabilities at low pressure and low power conditions for NMR-50.
Two different power ramps were chosen to study the effect of startup power density on the flow instability. The experimental startup transient results showed the existence of three different flow instability mechanisms, i.e., flashing instability, condensation-induced flow instability, and density wave oscillations. In addition, the void-reactivity feedback did not have significant effects on the flow instability during the startup transients for NMR-50. Several initial startup procedures with different power ramp rates were experimentally investigated to eliminate the flow instabilities observed in the startup transients. In particular, the very slow startup transient and pressurized startup transient tests were performed and compared. It was found that very slow startup transients, applying a very small power density, can eliminate the flashing oscillations in the single-phase natural circulation and stabilize the flow oscillations in the phase of net vapor generation. The initially pressurized startup procedure was tested to eliminate the flashing instability during the startup transients as well. The pressurized startup procedure included the initial pressurization, heat-up, and venting process. The startup transient tests showed that the pressurized startup procedure could eliminate the flow instability during the transition from single-phase flow to two-phase flow at low pressure conditions. The experimental results indicated that both startup procedures were applicable to the initial startup of the NMR. However, the pressurized startup procedures might be preferred due to the shorter operating hours required. In order to have a deeper understanding of natural circulation flow instability, the quasi-steady tests were performed using the test facility installed with preheater and subcooler.
The effect of system pressure, core inlet subcooling, core power density, inlet flow resistance coefficient, and void reactivity feedback were investigated in the quasi-steady state tests. The experimental stability boundaries were determined between unstable and stable flow conditions in the dimensionless stability plane of inlet subcooling number and Zuber number. To predict the stability boundary theoretically, linear stability analysis in the frequency domain was performed at four sections of the natural circulation test loop. The flashing phenomenon in the chimney section was treated as an axially uniform heat source. The dimensionless characteristic equation of the pressure drop perturbation was obtained by considering the void fraction effect and outlet flow resistance in the core section. The theoretical flashing boundary showed some discrepancies with previous experimental data from the quasi-steady state tests. Accounting for thermal non-equilibrium is recommended in future work to improve the accuracy of the flashing instability boundary. As another part of the funded research, flow instabilities of a PWR-type SMR under low pressure and low power conditions were investigated experimentally as well. The NuScale reactor design was selected as the prototype for the PWR-type SMR. In order to experimentally study the natural circulation behavior of the NuScale reactor during accident scenarios, detailed scaling analyses are necessary to ensure that the scaled phenomena could be obtained in a laboratory test facility. The three-level scaling method was used as well to obtain the scaling ratios derived from various non-dimensional numbers. The design of the ideally scaled facility (ISF) was initially accomplished based on these scaling ratios. Then the engineering scaled facility (ESF) was designed and constructed based on the ISF by considering engineering limitations including laboratory space, pipe size, and pipe connections.
PWR-type SMR experiments were performed in this well-scaled test facility to investigate the potential thermal-hydraulic flow instability during blowdown events, which might occur during the loss of coolant accident (LOCA) and loss of heat sink accident (LOHS) of the prototype PWR-type SMR. Two kinds of experiments, a normal blowdown event and a cold blowdown event, were experimentally investigated and compared with code predictions. The normal blowdown event was experimentally simulated from an initial condition in which the pressure was lower than the design pressure of the experimental facility, while the code prediction of blowdown started from the normal operating condition. Important thermal-hydraulic parameters including reactor pressure vessel (RPV) pressure, containment pressure, local void fraction and temperature, pressure drop and natural circulation flow rate were measured and analyzed during the blowdown event. The pressure and water level transients are similar to the experimental results published by NuScale [51], demonstrating the capability of the current loop to simulate the thermal-hydraulic transients of a real PWR-type SMR. During the 20,000 s blowdown experiment, the water level in the core remained above the active fuel assembly throughout the experiment, demonstrating the safety of the natural circulation cooling and water recycling design of the PWR-type SMR. The pressure, temperature, and water level transients were accurately predicted by the RELAP5 code. However, oscillations of natural circulation flow rate, water level and pressure drops were observed during the blowdown transients. These flow oscillations are related to the water level and the location of the upper plenum, which is a path for coolant flow from the chimney to the steam generator and downcomer. To investigate the transients starting from the opening of the ADS valve both experimentally and numerically, the cold blowdown experiment was conducted.
For the cold blowdown event, rather than setting both the reactor pressure vessel (RPV) and containment at high temperature and pressure, only the RPV was heated to near the highest design pressure before the ADS valve was opened; the same process was predicted using the RELAP5 code. By performing the cold blowdown experiment, the entire transient from the opening of the ADS valve can be investigated with the code and benchmarked against experimental data. Similar flow instability was observed in the cold blowdown experiment. The comparison between code prediction and experimental data showed that the RELAP5 code can successfully predict the pressure, void fraction, and temperature transients during the cold blowdown event with limited error, but numerical instability exists in predicting the natural circulation flow rate. In addition, the code lacks the capability to predict the water-level-related flow instability observed in the experiments.
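
    The coordinates of the stability plane mentioned above, the inlet subcooling number and the Zuber (phase-change) number, follow the standard two-phase flow definitions; the water properties (roughly saturated water near 1 bar) and operating numbers below are illustrative only:

```python
def stability_plane_point(q_total, m_dot, h_in, h_f, h_fg, rho_f, rho_g):
    """Dimensionless coordinates for a two-phase natural-circulation
    stability map: subcooling number N_sub and Zuber number N_Zu.
    N_sub = (h_f - h_in)/h_fg * (rho_f - rho_g)/rho_g
    N_Zu  = Q/(W*h_fg)       * (rho_f - rho_g)/rho_g"""
    density_ratio = (rho_f - rho_g) / rho_g
    n_sub = (h_f - h_in) / h_fg * density_ratio
    n_zu = q_total / (m_dot * h_fg) * density_ratio
    return n_sub, n_zu

# Illustrative low-pressure conditions (rounded water properties at ~1 bar)
n_sub, n_zu = stability_plane_point(
    q_total=50e3, m_dot=0.2, h_in=380e3,
    h_f=417e3, h_fg=2257e3, rho_f=958.0, rho_g=0.6)
```

The large density ratio at low pressure is why both numbers become large for low-pressure startup conditions, the regime where flashing instability appears.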

  15. Comparison of the PLTEMP code flow instability predictions with measurements made with electrically heated channels for the advanced test reactor.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, E.

    When the University of Missouri Research Reactor (MURR) was designed in the 1960s, the potential for fuel element burnout by a phenomenon referred to at that time as 'autocatalytic vapor binding' was of serious concern. This type of burnout was observed to occur at power levels considerably lower than those that were known to cause critical heat flux. The conversion of the MURR from HEU fuel to LEU fuel will probably require significant design changes, such as changes in coolant channel thicknesses, that could affect the thermal-hydraulic behavior of the reactor core. Therefore, the redesign of the MURR to accommodate an LEU core must address the same issues of fuel element burnout that were of concern in the 1960s. The Advanced Test Reactor (ATR) was designed at about the same time as the MURR and had similar concerns with regard to fuel element burnout. These concerns were addressed in the ATR by two groups of thermal-hydraulic tests that employed electrically heated simulated fuel channels. The Croft (1964), Reference 1, tests were performed at ANL. The Waters (1966), Reference 2, tests were performed at Hanford Laboratories in Richland, Washington. Since fuel element surface temperatures rise rapidly as burnout conditions are approached, channel surface temperatures were carefully monitored in these experiments. For self-protection, the experimental facilities were designed to cut off the electric power when rapidly increasing surface temperatures were detected. In both the ATR reactor and in the tests with electrically heated channels, the heated length of the fuel plate was 48 inches, which is about twice that of the MURR. Whittle and Forgan (1967) independently conducted tests with electrically heated rectangular channels that were similar to the tests by Croft and by Waters. In the Whittle and Forgan tests the heated length of the channel varied among the tests and was between 16 and 24 inches.
Both Waters and Whittle and Forgan show that fuel element burnout is caused by a form of flow instability. Whittle and Forgan provide a formula that predicts when this flow instability will occur. This formula is included in the PLTEMP/ANL code. Olson has shown that the PLTEMP/ANL code accurately predicts the powers at which flow instability occurs in the Whittle and Forgan experiments. He also considered the electrically heated tests performed in the ANS Thermal-Hydraulic Test Loop at ORNL and reported by M. Siman-Tov et al. The purpose of this memorandum is to demonstrate that the PLTEMP/ANL code accurately predicts the Croft and the Waters tests. This demonstration should provide sufficient confidence that the PLTEMP/ANL code can adequately predict the onset of flow instability for the converted MURR. The MURR core uses light water as a coolant, has a 24-inch active fuel length, downward flow in the core, and an average core velocity of about 7 m/s. The inlet temperature is about 50 °C and the peak outlet temperature is about 20 °C higher than the inlet for reactor operation at 10 MW. The core pressures range from about 4 to about 5 bar. The peak heat flux is about 110 W/cm². Section 2 describes the mechanism that causes flow instability. Section 3 describes the Whittle and Forgan formula for flow instability. Section 4 briefly describes both the Croft and the Waters experiments. Section 5 describes the PLTEMP/ANL models. Section 6 compares the PLTEMP/ANL predictions based on the Whittle and Forgan formula with the Croft measurements. Section 7 does the same for the Waters measurements. Section 8 provides the range of parameters for the Whittle and Forgan tests. Section 9 discusses the results and provides conclusions.
In conclusion, although there is no single test that by itself closely matches the limiting conditions in the MURR, the preponderance of measured data and the ability of the Whittle and Forgan correlation, as implemented in PLTEMP/ANL, to predict the onset of flow instability for these tests leads one to the conclusion that the same method should be able to predict the onset of flow instability in the MURR reasonably well.
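    The Whittle and Forgan criterion discussed above is commonly quoted as a temperature-rise ratio at the onset of flow instability (OFI). The sketch below assumes that commonly cited form with η ≈ 25 and a hypothetical channel; the exact coefficients and implementation in PLTEMP/ANL may differ.

```python
def wf_ratio(d_h, l_heated, eta=25.0):
    """Whittle-Forgan temperature-rise ratio at the onset of flow instability:
    R = (T_out - T_in) / (T_sat - T_in) = 1 / (1 + eta * (d_h / l_heated)).
    eta ~ 25 is an assumed, commonly quoted value; d_h and l_heated are the
    hydraulic diameter and heated length in consistent units."""
    return 1.0 / (1.0 + eta * (d_h / l_heated))

def ofi_power(m_dot, cp, t_in, t_sat, d_h, l_heated):
    """Channel power (W) at OFI: Q = m_dot * cp * R * (T_sat - T_in)."""
    return m_dot * cp * wf_ratio(d_h, l_heated) * (t_sat - t_in)

# Hypothetical channel, not MURR data: 2 mm hydraulic diameter, 24 in
# (0.61 m) heated length, 0.2 kg/s of water, 50 C inlet, ~150 C saturation.
q = ofi_power(m_dot=0.2, cp=4180.0, t_in=50.0, t_sat=150.0,
              d_h=0.002, l_heated=0.61)
```

    Note how a shorter heated length raises D_h/L and therefore lowers the allowable temperature rise ratio, consistent with the attention paid above to the 48-inch versus 24-inch heated lengths.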

  16. Transient Ejector Analysis (TEA) code user's guide

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1993-01-01

    A FORTRAN computer program for the semi-analytic prediction of unsteady thrust-augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA as a research code (not as a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.
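    The control-volume view of the mixing region can be illustrated with a constant-pressure streamwise momentum balance. This is a minimal sketch of the bookkeeping only, not TEA's unsteady, viscous mixing model, and the flow values are hypothetical.

```python
def mixed_exit_velocity(m_p, u_p, m_s, u_s):
    """Constant-pressure mixing control volume: streamwise momentum gives
    m_p*u_p + m_s*u_s = (m_p + m_s)*u_e for the primary (p) and secondary (s)
    streams mixing to a uniform exit velocity u_e."""
    return (m_p * u_p + m_s * u_s) / (m_p + m_s)

# Hypothetical case: a 1 kg/s primary jet at 300 m/s entrains 2 kg/s of
# slower secondary flow at 30 m/s.
u_e = mixed_exit_velocity(1.0, 300.0, 2.0, 30.0)
```

    The mixed stream leaves slower than the primary jet but carries three times the mass flow, which is the momentum-exchange mechanism behind thrust augmentation.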

  17. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
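    The prediction-error logic described here can be sketched with a Rescorla-Wagner/TD(0)-style update, in which the error shrinks toward zero as a repeatedly delivered reward becomes fully predicted. The learning rate and reward values below are illustrative, not taken from the study.

```python
def td_update(v, reward, alpha=0.1):
    """Rescorla-Wagner / TD(0)-style update at reward delivery:
    prediction error delta = received - predicted; v <- v + alpha * delta."""
    delta = reward - v
    return v + alpha * delta, delta

# A reward of 1.0 delivered repeatedly becomes "fully predicted":
v = 0.0
for _ in range(100):
    v, delta = td_update(v, reward=1.0)
# v approaches 1.0 while delta (the prediction error) decays toward zero,
# mirroring the return of dopamine activity to baseline for predicted rewards
```

    Omitting an expected reward (reward=0.0 with v near 1.0) would produce a negative delta, the analogue of the depressed dopamine activity described above.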

  18. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  19. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1997-01-01

    A three-dimensional, linearized Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic responses of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to a far-field eigenanalysis, is also described. The linearized aerodynamic and numerical models have been implemented into a three-dimensional linearized unsteady flow code, called LINFLUX. This code has been applied to selected benchmark unsteady subsonic flows to establish its accuracy and to demonstrate its current capabilities. The unsteady flows considered have been chosen to allow convenient comparisons between the LINFLUX results and those of well-known, two-dimensional, unsteady flow codes. Detailed numerical results for a helical fan and a three-dimensional version of the 10th Standard Cascade indicate that important progress has been made towards the development of a reliable and useful three-dimensional prediction capability that can be used in aeroelastic and aeroacoustic design studies.

  20. Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maggiora, R.; Milanesio, D.; Vecchi, G.

    2009-11-26

    TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. This tool provides detailed performance predictions for any LH launcher by computing the antenna scattering parameters, the current distribution, electric field maps, and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performance of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. Electric current distribution on conductors, electric field distribution at the interface with plasma, and power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.

  1. Computational Analysis of a Low-Boom Supersonic Inlet

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.

    2011-01-01

    A low-boom supersonic inlet was designed for use on a conceptual small supersonic aircraft that would cruise with an over-wing Mach number of 1.7. The inlet was designed to minimize external overpressures, and used a novel bypass duct to divert the highest shock losses around the engine. The Wind-US CFD code was used to predict the effects of capture ratio, struts, bypass design, and angles of attack on inlet performance. The inlet was tested in the 8-ft by 6-ft Supersonic Wind Tunnel at NASA Glenn Research Center. Test results showed that the inlet had excellent performance, with capture ratios near one, a peak core total pressure recovery of 96 percent, and a stable operating range much larger than that of an engine. Predictions generally compared very well with the experimental data, and were used to help interpret some of the experimental results.

  2. The Use of Automated SNOMED CT Clinical Coding in Clinical Decision Support Systems for Preventive Care.

    PubMed

    Al-Hablani, Bader

    2017-01-01

    The objective of this study is to discuss and analyze the use of automated SNOMED CT clinical coding in clinical decision support systems (CDSSs) for preventive care. The central question that this study seeks to answer is whether the utilization of SNOMED CT in CDSSs can improve preventive care. PubMed, Google Scholar, and Cochrane Library were searched for articles published in English between 2001 and 2012 on SNOMED CT, CDSS, and preventive care. Outcome measures were the sensitivity or specificity of SNOMED CT coded data and the positive predictive value or negative predictive value of SNOMED CT coded data. Additionally, we documented the publication year, research question, study design, results, and conclusions of these studies. The reviewed studies suggested that SNOMED CT successfully represents clinical terms and negated clinical terms. The use of SNOMED CT in CDSS can be considered to provide an answer to the problem of medical errors as well as for preventive care in general. Enhancement of the modifiers and synonyms found in SNOMED CT will be necessary to improve the expected outcome of the integration of SNOMED CT with CDSS. Moreover, the application of the tree-augmented naïve (TAN) Bayesian network method can be considered the best technique to search SNOMED CT data and, consequently, to help improve preventive health services.

  3. The Use of Automated SNOMED CT Clinical Coding in Clinical Decision Support Systems for Preventive Care

    PubMed Central

    Al-Hablani, Bader

    2017-01-01

    Objective The objective of this study is to discuss and analyze the use of automated SNOMED CT clinical coding in clinical decision support systems (CDSSs) for preventive care. The central question that this study seeks to answer is whether the utilization of SNOMED CT in CDSSs can improve preventive care. Method PubMed, Google Scholar, and Cochrane Library were searched for articles published in English between 2001 and 2012 on SNOMED CT, CDSS, and preventive care. Outcome Measures Outcome measures were the sensitivity or specificity of SNOMED CT coded data and the positive predictive value or negative predictive value of SNOMED CT coded data. Additionally, we documented the publication year, research question, study design, results, and conclusions of these studies. Results The reviewed studies suggested that SNOMED CT successfully represents clinical terms and negated clinical terms. Conclusion The use of SNOMED CT in CDSS can be considered to provide an answer to the problem of medical errors as well as for preventive care in general. Enhancement of the modifiers and synonyms found in SNOMED CT will be necessary to improve the expected outcome of the integration of SNOMED CT with CDSS. Moreover, the application of the tree-augmented naïve (TAN) Bayesian network method can be considered the best technique to search SNOMED CT data and, consequently, to help improve preventive health services. PMID:28566995

  4. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    NASA Astrophysics Data System (ADS)

    Karahan, Aydın; Buongiorno, Jacopo

    2010-01-01

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO2-PuO2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition, and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior in both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE, is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code.
Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors. FEAST-METAL was benchmarked against the open-literature EBR-II database for steady-state and furnace tests (transients). The results show that the code satisfactorily predicts important phenomena such as clad strain, fission gas release, clad wastage, clad failure time, axial fuel slug deformation, and fuel constituent redistribution.

  5. Experimental Aerodynamic Characteristics of the Pegasus Air-Launched Booster and Comparisons with Predicted and Flight Results

    NASA Technical Reports Server (NTRS)

    Rhode, M. N.; Engelund, Walter C.; Mendenhall, Michael R.

    1995-01-01

    Experimental longitudinal and lateral-directional aerodynamic characteristics were obtained for the Pegasus and Pegasus XL configurations over a Mach number range from 1.6 to 6 and angles of attack from -4 to +24 degrees. Angle of sideslip was varied from -6 to +6 degrees, and control surfaces were deflected to obtain elevon, aileron, and rudder effectiveness. Experimental data for the Pegasus configuration are compared with engineering code predictions performed by Nielsen Engineering & Research, Inc. (NEAR) in the aerodynamic design of the Pegasus vehicle, and with results from the Aerodynamic Preliminary Analysis System (APAS) code. Comparisons of experimental results are also made with longitudinal flight data from Flight #2 of the Pegasus vehicle. Results show that the longitudinal aerodynamic characteristics of the Pegasus and Pegasus XL configurations are similar, having the same lift-curve slope and drag levels across the Mach number range. Both configurations are longitudinally stable, with stability decreasing towards neutral levels as Mach number increases. Directional stability is negative at moderate to high angles of attack due to separated flow over the vertical tail. Dihedral effect is positive for both configurations, but is reduced 30-50 percent for the Pegasus XL configuration because of the horizontal tail anhedral. Predicted longitudinal characteristics and both longitudinal and lateral-directional control effectiveness are generally in good agreement with experiment. Due to the complex leeside flowfield, lateral-directional characteristics are not as well predicted by the engineering codes. Experiment and flight data are in good agreement across the Mach number range.

  6. Langley Stability and Transition Analysis Code (LASTRAC) Version 1.2 User Manual

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    LASTRAC is a general-purpose, physics-based transition prediction code released by NASA for Laminar Flow Control studies and transition research. The design and development of the LASTRAC code is aimed at providing an engineering tool that is easy to use and yet capable of dealing with a broad range of transition-related issues. It was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory or the linear parabolized stability equations (PSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin friction rises. This document describes the governing equations, numerical methods, code development, a detailed description of input/output parameters, and case studies for the current release of LASTRAC.
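    The N-factor transition correlation mentioned above integrates the disturbance amplification rate along the surface and flags transition when N reaches a critical value. A minimal sketch follows, assuming the commonly quoted N ≈ 9 and illustrative growth-rate data; LASTRAC's correlated critical values and integration strategy may differ.

```python
def n_factor(x, growth_rate):
    """e^N method: N(x) is the running integral of the spatial amplification
    rate (-alpha_i) of a disturbance along the surface coordinate x;
    computed here with the trapezoidal rule."""
    n = [0.0]
    for i in range(1, len(x)):
        n.append(n[-1] + 0.5 * (growth_rate[i] + growth_rate[i - 1])
                 * (x[i] - x[i - 1]))
    return n

def transition_location(x, growth_rate, n_crit=9.0):
    """First station where N reaches the critical value. n_crit ~ 9 is an
    assumed, commonly quoted default, not necessarily LASTRAC's value."""
    for xi, ni in zip(x, n_factor(x, growth_rate)):
        if ni >= n_crit:
            return xi
    return None
```

    Stabilizing the boundary layer (reducing the growth rates) pushes the station where N crosses n_crit downstream, which is exactly the lever Laminar Flow Control designs exploit.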

  7. Calculation and Correlation of the Unsteady Flowfield in a High Pressure Turbine

    NASA Technical Reports Server (NTRS)

    Bakhle, Milind A.; Liu, Jong S.; Panovsky, Josef; Keith, Theo G., Jr.; Mehmed, Oral

    2002-01-01

    Forced vibrations in turbomachinery components can cause blades to crack or fail due to high-cycle fatigue. Such forced response problems will become more pronounced in newer engines with higher pressure ratios and smaller axial gap between blade rows. An accurate numerical prediction of the unsteady aerodynamics phenomena that cause resonant forced vibrations is increasingly important to designers. Validation of the computational fluid dynamics (CFD) codes used to model the unsteady aerodynamic excitations is necessary before these codes can be used with confidence. Recently published benchmark data, including unsteady pressures and vibratory strains, for a high-pressure turbine stage makes such code validation possible. In the present work, a three dimensional, unsteady, multi blade-row, Reynolds-Averaged Navier Stokes code is applied to a turbine stage that was recently tested in a short duration test facility. Two configurations with three operating conditions corresponding to modes 2, 3, and 4 crossings on the Campbell diagram are analyzed. Unsteady pressures on the rotor surface are compared with data.

  8. Aeronautical audio broadcasting via satellite

    NASA Technical Reports Server (NTRS)

    Tzeng, Forrest F.

    1993-01-01

    A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
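    The link-budget figures quoted above are related through the standard identity C/N0 [dB-Hz] = Eb/N0 [dB] + 10·log10(Rb). The sketch below applies it to the abstract's 20.5 kbit/s rate and 3.75 dB Eb/N0; treating that rate as the information-bit rate for Eb is an assumption.

```python
import math

def cn0_dbhz(ebn0_db, bit_rate_bps):
    """Required carrier-to-noise-density ratio:
    C/N0 [dB-Hz] = Eb/N0 [dB] + 10*log10(Rb)."""
    return ebn0_db + 10.0 * math.log10(bit_rate_bps)

# Figures from the abstract: Eb/N0 = 3.75 dB at BER 1e-6, 20.5 kbit/s audio.
required = cn0_dbhz(3.75, 20500.0)  # roughly 47 dB-Hz
```

    Comparing this required C/N0 against the C/N0 delivered by the satellite link is the core of the worst-case budget check mentioned in the abstract.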

  9. An Experimental Evaluation of Advanced Rotorcraft Airfoils in the NASA Ames Eleven-foot Transonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Flemming, Robert J.

    1984-01-01

    Five full scale rotorcraft airfoils were tested in the NASA Ames Eleven-Foot Transonic Wind Tunnel for full scale Reynolds numbers at Mach numbers from 0.3 to 1.07. The models, which spanned the tunnel from floor to ceiling, included two modern baseline airfoils, the SC1095 and SC1094 R8, which have been previously tested in other facilities. Three advanced transonic airfoils, designated the SSC-A09, SSC-A07, and SSC-B08, were tested to confirm predicted performance and provide confirmation of advanced airfoil design methods. The test showed that the eleven-foot tunnel is suited to two-dimensional airfoil testing. Maximum lift coefficients, drag coefficients, pitching moments, and pressure coefficient distributions are presented. The airfoil analysis codes agreed well with the data, with the Grumman GRUMFOIL code giving the best overall performance correlation.

  10. Orion Parachute Riser Cutter Development

    NASA Technical Reports Server (NTRS)

    Oguz, Sirri; Salazar, Frank

    2011-01-01

    This paper presents the tests and analytical approach used in the development of a steel riser cutter for the CEV Parachute Assembly System (CPAS) used on the Orion crew module. Figure 1 shows the riser cutter and the steel riser bundle, which consists of six individual cables. Due to the highly compressed schedule, the initial unavailability of the riser material, and the Orion Forward Bay mechanical constraints, JSC relied primarily on a combination of internal ballistics analysis and LS-DYNA simulation for this project. Various one-dimensional internal ballistics codes that use standard equations of state and conservation of energy have commonly been used in the development of CAD devices for initial first-order estimates and as an enhancement to the test program. While these codes are very accurate for propellant performance prediction, they usually lack a fully defined kinematic model for dynamic predictions. A simple piston device can easily and accurately be modeled using an equation of motion. However, the accuracy of analytical models is greatly reduced for more complicated devices with complex external loads, nonlinear trajectories, or unique unlocking features. A 3D finite element model of a CAD device with all critical features included can vastly improve the analytical ballistic predictions when it is used as a supplement to the ballistic code. During this project, an LS-DYNA structural 3D model was used to predict the riser resisting load that was needed for the ballistic code. A Lagrangian model with eroding elements, shown in Figure 2, was used for the blade, steel riser, and anvil. The riser material failure strain was fine-tuned by matching the dent depth on the anvil with the actual test data. The LS-DYNA model was also utilized to optimize the blade tip design for the most efficient cut. In parallel, the propellant type and amount were determined by using the CADPROG internal ballistics code.
Initial test results showed a good match with the LS-DYNA and CADPROG simulations. The final paper will present a detailed roadmap from initial ballistic modeling and LS-DYNA simulation to performance testing. A blade shape optimization study will also be presented.

  11. Data-Based Predictive Control with Multirate Prediction Step

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
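    The receding-horizon idea (predict several steps ahead, apply only the first input, then re-solve) can be sketched for a scalar system. This toy assumes a known model and a brute-force search over a small input grid, unlike the paper's data-derived gains; all numbers are illustrative.

```python
import itertools

def predict(x0, us, a=0.9, b=0.5):
    """Multi-step prediction with the (assumed known) model x+ = a*x + b*u."""
    xs, x = [], x0
    for u in us:
        x = a * x + b * u
        xs.append(x)
    return xs

def mpc_step(x0, horizon=3, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0), r=0.1):
    """One receding-horizon solve: search input sequences over the horizon
    and return only the first input of the cheapest one
    (cost = sum of x^2 over the horizon plus r * sum of u^2)."""
    best_u, best_cost = 0.0, float("inf")
    for us in itertools.product(candidates, repeat=horizon):
        xs = predict(x0, us)
        cost = sum(x * x for x in xs) + r * sum(u * u for u in us)
        if cost < best_cost:
            best_cost, best_u = cost, us[0]
    return best_u

# Closed loop: apply the first input, let the state evolve, re-solve.
x = 2.0
for _ in range(10):
    x = 0.9 * x + 0.5 * mpc_step(x)
```

    The exhaustive search makes the horizon-length cost growth mentioned above concrete: the number of candidate sequences grows exponentially with the horizon, which is the kind of burden the paper's multirate prediction step is designed to reduce.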

  12. Use of the International Classification of Diseases, 9th revision, coding in identifying chronic hepatitis B virus infection in health system data: implications for national surveillance.

    PubMed

    Mahajan, Reena; Moorman, Anne C; Liu, Stephen J; Rupp, Loralee; Klevens, R Monina

    2013-05-01

    With the increasing use of electronic health records (EHRs) in the USA, we examined the predictive values of the International Classification of Diseases, 9th revision (ICD-9) coding system for surveillance of chronic hepatitis B virus (HBV) infection. The chronic HBV cohort from the Chronic Hepatitis Cohort Study was created based on the EHRs of adult patients who accessed services from 2006 to 2008 from four healthcare systems in the USA. Using the gold standard of abstractor review to confirm HBV cases, we calculated the sensitivity, specificity, and positive and negative predictive values of using one qualifying ICD-9 code versus using two qualifying ICD-9 codes separated by 6 months or greater. Of 1 652 055 adult patients, 2202 (0.1%) were confirmed as having chronic HBV. Use of one ICD-9 code had a sensitivity of 83.9%, a positive predictive value of 61.0%, and specificity and negative predictive values greater than 99%. Use of two hepatitis B-specific ICD-9 codes resulted in a sensitivity of 58.4% and a positive predictive value of 89.9%. Use of one or two hepatitis B ICD-9 codes can identify cases of chronic HBV infection with varying sensitivity and positive predictive values. As the USA increases the use of EHRs, surveillance using ICD-9 codes may be reliable for determining the burden of chronic HBV infection and would be useful for improving reporting by state and local health departments.
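    The four metrics reported above follow directly from confusion-matrix counts. A sketch with hypothetical counts (not the study's data) shows the arithmetic:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts:
    true positives, false positives, false negatives, true negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for a code-based case definition:
m = diagnostic_metrics(tp=80, fp=20, fn=20, tn=880)
```

    The study's trade-off is visible in this form: requiring a second qualifying code removes false positives (raising PPV) but also drops true positives (lowering sensitivity).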

  13. Impact of Nuclear Data Uncertainties on Calculated Spent Fuel Nuclide Inventories and Advanced NDA Instrument Response

    DOE PAGES

    Hu, Jianwei; Gauld, Ian C.

    2014-12-01

    The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of the NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions, and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analyses. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.

  14. Impact of Nuclear Data Uncertainties on Calculated Spent Fuel Nuclide Inventories and Advanced NDA Instrument Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jianwei; Gauld, Ian C.

    The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of the NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions, and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analyses. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.

  15. Performance optimization for rotors in hover and axial flight

    NASA Technical Reports Server (NTRS)

    Quackenbush, T. R.; Wachspress, D. A.; Kaufman, A. E.; Bliss, D. B.

    1989-01-01

    Performance optimization for rotors in hover and axial flight is a topic of continuing importance to rotorcraft designers. The aim of this Phase 1 effort has been to demonstrate that a linear optimization algorithm could be coupled to an existing influence coefficient hover performance code. This code, dubbed EHPIC (Evaluation of Hover Performance using Influence Coefficients), uses a quasi-linear wake relaxation to solve for the rotor performance. The coupling was accomplished by expanding the matrix of linearized influence coefficients in EHPIC to accommodate design variables and deriving new coefficients for linearized equations governing perturbations in power and thrust. These coefficients formed the input to a linear optimization analysis, which used the flow tangency conditions on the blade and in the wake to impose equality constraints on the expanded system of equations; user-specified inequality constraints were also employed to bound the changes in the design. It was found that this locally linearized analysis could be invoked to predict a design change that would produce a reduction in the power required by the rotor at constant thrust. Thus, an efficient search for improved versions of the baseline design can be carried out while retaining the accuracy inherent in a free wake/lifting surface performance analysis.
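    The constrained improvement step described above (reduce power while holding thrust fixed to first order, with bounds on the design change) can be illustrated with a gradient projection. This is a generic sketch of the idea, not the EHPIC linear-optimization formulation, and the gradients and bounds are hypothetical.

```python
def project_out(grad_p, grad_t):
    """Component of the power gradient orthogonal to the thrust gradient:
    a step along it reduces power while leaving thrust unchanged to
    first order (the linearized constant-thrust constraint)."""
    dot = sum(a * b for a, b in zip(grad_p, grad_t))
    norm2 = sum(b * b for b in grad_t)
    return [a - (dot / norm2) * b for a, b in zip(grad_p, grad_t)]

def design_step(x, grad_p, grad_t, step=0.1, bound=0.05):
    """Steepest descent on power at (linearized) constant thrust, with each
    design-variable change clipped to a user-specified bound, echoing the
    inequality constraints that bound changes in the design."""
    d = project_out(grad_p, grad_t)
    return [xi - max(-bound, min(bound, step * di)) for xi, di in zip(x, d)]
```

    Because the step is only valid in the linearized neighborhood of the baseline, a full analysis (here, the free-wake solve) would be rerun after each step before linearizing again.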

  16. Comparison of jet plume shape predictions and plume influence on sonic boom signature

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Melson, N. Duane

    1992-01-01

    An Euler shock-fitting marching code yields good agreement with semiempirically determined plume shapes, although the agreement decreases somewhat with increasing nozzle angle and the attendant increase in the nonisentropic nature of the flow. Some calculations for the low-boom configuration with a simple engine indicated that, for flight at altitudes above 60,000 feet, the plume effect is dominant. This negates the advantages of a low-boom design. At lower altitudes, plume effects are significant, but of the order that can be incorporated into the low-boom design process.

  17. Data integration of structured and unstructured sources for assigning clinical codes to patient stays

    PubMed Central

    Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim

    2016-01-01

    Objective: Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods: Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results: When compared with the best individual prediction source, late data integration leads to improvements in predictive power (e.g., overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion: Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions: We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
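    The late-integration idea above can be sketched as a weighted combination of per-source class scores. This is a minimal stand-in for the paper's meta-learner: the source names, ICD-9-CM codes, scores, and weights below are invented for illustration (a real meta-learner would fit the weights on held-out data).

```python
def late_integrate(preds_by_source, weights):
    """Combine per-source class scores with source weights and return the
    highest-scoring code (a toy late data integration step)."""
    combined = {}
    for src, scores in preds_by_source.items():
        w = weights[src]
        for code, s in scores.items():
            combined[code] = combined.get(code, 0.0) + w * s
    return max(combined, key=combined.get)

# Hypothetical per-source scores for one patient stay.
preds = {
    "structured": {"250.00": 0.6, "401.9": 0.4},
    "text":       {"250.00": 0.2, "401.9": 0.8},
}
# Weights a meta-learner might have fit on validation data.
print(late_integrate(preds, {"structured": 0.3, "text": 0.7}))  # 401.9
```

Note how the text source, given more weight, overrides the structured source's top choice; early integration would instead pool raw features into one model before any per-source prediction exists.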

  18. 13 CFR 121.1103 - What are the procedures for appealing a NAICS code designation?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... appealing a NAICS code designation? 121.1103 Section 121.1103 Business Credit and Assistance SMALL BUSINESS... Determinations and Naics Code Designations § 121.1103 What are the procedures for appealing a NAICS code... code designation and applicable size standard must be served and filed within 10 calendar days after...

  19. Advance Noise Control Fan II: Test Rig Fan Risk Management Study

    NASA Technical Reports Server (NTRS)

    Lucero, John

    2013-01-01

    Since 1995 the Advanced Noise Control Fan (ANCF) has significantly contributed to the advancement of the understanding of the physics of fan tonal noise generation. The 9- by 15-foot wind tunnel (9'x15' WT) has successfully tested multiple high-speed fan designs over the last several decades. This testing has advanced several tone noise reduction concepts to higher technology readiness levels (TRL) and supported the validation of fan tone noise prediction codes.

  20. Simulations of a Liquid Hydrogen Inducer at Low-Flow Off-Design Flow Conditions

    NASA Technical Reports Server (NTRS)

    Hosangadi, A.; Ahuja, V.; Ungewitter, R. J.

    2005-01-01

    The ability to accurately model details of inlet back flow for inducers operating at low-flow, off-design conditions is evaluated. A sub-scale version of a three-bladed liquid hydrogen inducer tested in water with detailed velocity and pressure measurements is used as a numerical test bed. Under low-flow, off-design conditions, the length of the separation zone as well as the swirl velocity magnitude was underpredicted with a standard k-ε model. When the turbulent viscosity coefficient was reduced, good comparison was obtained at all the flow conditions examined, with both the magnitude and shape of the profile matching well with the experimental data taken half a diameter upstream of the leading edge. The velocity profiles and incidence angles at the leading edge itself were less sensitive to the back flow length predictions, indicating that single-phase performance may be well predicted even if the details of the modeled flow separation are incorrect. However, for cavitating flow situations the prediction of the correct swirl in the back flow and the pressure depression in the core becomes critical, since it leads to vapor formation. The simulations have been performed using the CRUNCH CFD® code, which has a generalized multi-element unstructured framework and an advanced multi-phase formulation for cryogenic fluids. The framework has been validated rigorously for predictions of temperature and pressure depression in cryogenic fluid cavities and has also been shown to predict the cavitation breakdown point for inducers at design conditions.
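    For reference, the "turbulent viscosity coefficient" in the standard k-ε model is the C_μ in ν_t = C_μ k²/ε, with standard value 0.09. A toy calculation shows how reducing it lowers the eddy viscosity, and hence the turbulent diffusion that was smearing out the predicted back-flow region. The reduced value 0.02 and the k, ε magnitudes here are illustrative only, not the values used in the paper.

```python
def eddy_viscosity(k, eps, c_mu=0.09):
    """Turbulent (eddy) viscosity in the standard k-epsilon model:
    nu_t = C_mu * k^2 / eps; C_mu = 0.09 is the standard value."""
    return c_mu * k ** 2 / eps

k, eps = 0.5, 2.0   # illustrative turbulence quantities (m^2/s^2, m^2/s^3)
standard = eddy_viscosity(k, eps)
reduced = eddy_viscosity(k, eps, c_mu=0.02)  # hypothetical reduced coefficient
print(standard, reduced)  # 0.01125 0.0025
```

Lower ν_t means less turbulent mixing, which lets the computed separation zone and swirl persist farther upstream, consistent with the behavior the abstract reports.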

  1. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
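    The protograph-to-code expansion described above is the standard "copy-and-permute" lifting: each protograph edge is replaced by an N×N permutation matrix, so codes of arbitrary size inherit the protograph's degree structure. A minimal sketch, with hypothetical function names; real constructions choose the permutations carefully (e.g., circulants selected by search) rather than at random.

```python
import numpy as np

def lift_protograph(base, N, seed=None):
    """Expand a 0/1 protograph base matrix into an (m*N) x (n*N) LDPC
    parity-check matrix by replacing each edge with a random N x N
    permutation matrix (copy-and-permute lifting)."""
    rng = np.random.default_rng(seed)
    m, n = base.shape
    H = np.zeros((m * N, n * N), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                perm = rng.permutation(N)
                H[i * N + np.arange(N), j * N + perm] = 1
    return H

# Tiny protograph: 2 check nodes, 3 variable nodes.
base = np.array([[1, 1, 1],
                 [1, 1, 0]])
H = lift_protograph(base, N=4, seed=0)
print(H.shape)  # (8, 12)
```

Every lifted check node keeps the row degree of its protograph check (3 and 2 here), which is why density evolution on the protograph predicts the behavior of the whole lifted family.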

  2. Integrated Predictive Tools for Customizing Microstructure and Material Properties of Additively Manufactured Aerospace Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Fattebert, Jean-Luc; Gorti, Sarma B.

    Additive Manufacturing (AM) refers to a process by which digital three-dimensional (3-D) design data is converted to build up a component by depositing material layer-by-layer. United Technologies Corporation (UTC) is currently involved in fabrication and certification of several AM aerospace structural components made from aerospace materials. This is accomplished by using optimized process parameters determined through numerous design-of-experiments (DOE)-based studies. Certification of these components is broadly recognized as a significant challenge, with long lead times, very expensive new product development cycles and very high energy consumption. Because of these challenges, United Technologies Research Center (UTRC), together with UTC business units, have been developing and validating an advanced physics-based process model. The specific goal is to develop a physics-based framework of an AM process and reliably predict fatigue properties of built-up structures as based on detailed solidification microstructures. Microstructures are predicted using process control parameters including energy source power, scan velocity, deposition pattern, and powder properties. The multi-scale multi-physics model requires solution and coupling of governing physics that will allow prediction of the thermal field and enable solution at the microstructural scale. The state-of-the-art approach to solve these problems requires a huge computational framework, and this kind of resource is only available within academia and national laboratories. The project utilized the parallel phase-field codes at Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL), along with the high-performance computing (HPC) capabilities existing at the two labs, to demonstrate the simulation of multiple dendrite growth in three dimensions (3-D). The LLNL code AMPE was used to implement the UTRC phase field model that was previously developed for a model binary alloy, and the simulation results were compared against the UTRC simulation results, followed by extension of the UTRC model to simulate multiple dendrite growth in 3-D. The ORNL MEUMAPPS code was used to simulate dendritic growth in a model ternary alloy with the same equilibrium solidification range as the Ni-base alloy 718 using realistic model parameters, including thermodynamic integration with a CALPHAD-based model for the ternary alloy. Implementation of the UTRC model in AMPE met with several numerical and parametric issues that were resolved, and good comparison between the simulation results obtained by the two codes was demonstrated for two-dimensional (2-D) dendrites. 3-D dendrite growth was then demonstrated with the AMPE code using nondimensional parameters obtained in 2-D simulations. Multiple dendrite growth in 2-D and 3-D was demonstrated using ORNL's MEUMAPPS code using simple thermal boundary conditions. MEUMAPPS was then modified to incorporate the complex, time-dependent thermal boundary conditions obtained by UTRC's thermal modeling of single-track AM experiments to drive the phase field simulations. The results were in good agreement with UTRC's experimental measurements.

  3. Propellant Chemistry for CFD Applications

    NASA Technical Reports Server (NTRS)

    Farmer, R. C.; Anderson, P. G.; Cheng, Gary C.

    1996-01-01

    Current concepts for reusable launch vehicle design have created renewed interest in the use of RP-1 fuels for high pressure and tri-propellant propulsion systems. Such designs require the use of an analytical technology that accurately accounts for the effects of real fluid properties, combustion of large hydrocarbon fuel modules, and the possibility of soot formation. These effects are inadequately treated in current computational fluid dynamic (CFD) codes used for propulsion system analyses. The objective of this investigation is to provide an accurate analytical description of hydrocarbon combustion thermodynamics and kinetics that is sufficiently computationally efficient to be a practical design tool when used with CFD codes such as the FDNS code. A rigorous description of real fluid properties for RP-1 and its combustion products will be derived from the literature and from experiments conducted in this investigation. Upon the establishment of such a description, the fluid description will be simplified by using the minimum of empiricism necessary to maintain accurate combustion analyses and including such empirical models into an appropriate CFD code. An additional benefit of this approach is that the real fluid properties analysis simplifies the introduction of the effects of droplet sprays into the combustion model. Typical species compositions of RP-1 have been identified, surrogate fuels have been established for analyses, and combustion and sooting reaction kinetics models have been developed. Methods for predicting the necessary real fluid properties have been developed and essential experiments have been designed. Verification studies are in progress, and preliminary results from these studies will be presented. The approach has been determined to be feasible, and upon its completion the required methodology for accurate performance and heat transfer CFD analyses for high pressure, tri-propellant propulsion systems will be available.

  4. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in array to constitute a periodic coding metasurface to generate the required four-beam radiations with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing excellent performance of the automatic designs by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  5. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  6. Modeling of thermo-mechanical and irradiation behavior of mixed oxide fuel for sodium fast reactors

    NASA Astrophysics Data System (ADS)

    Karahan, Aydın; Buongiorno, Jacopo

    2010-01-01

    An engineering code to model the irradiation behavior of UO2-PuO2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named fuel engineering and structural analysis tool (FEAST-OXIDE). FEAST-OXIDE has several modules working in coupled form with an explicit numerical algorithm. These modules describe: (1) fission gas release and swelling, (2) fuel chemistry and restructuring, (3) temperature distribution, (4) fuel-clad chemical interaction and (5) fuel-clad mechanical analysis. Given the fuel pin geometry, composition and irradiation history, FEAST-OXIDE can analyze fuel and cladding thermo-mechanical behavior in both steady-state and design-basis transient scenarios. The code was written in the Fortran 90 programming language. The mechanical analysis module implements the LIFE algorithm. Fission gas release and swelling behavior is described by the OGRES and NEFIG models. However, the original OGRES model has been extended to include the effects of joint oxide gain (JOG) formation on fission gas release and swelling. A detailed fuel chemistry model has been included to describe the cesium radial migration and JOG formation, oxygen and plutonium radial distribution, and the axial migration of cesium. The fuel restructuring model includes the effects of as-fabricated porosity migration, irradiation-induced fuel densification, grain growth, hot pressing, and fuel cracking and relocation. Finally, a kinetics model is included to predict the clad wastage formation. FEAST-OXIDE predictions have been compared to the available FFTF, EBR-II and JOYO databases, as well as the LIFE-4 code predictions. The agreement was found to be satisfactory for steady-state and slow-ramp over-power accidents.

  7. Accuracy of Diagnosis Codes to Identify Febrile Young Infants Using Administrative Data

    PubMed Central

    Aronson, Paul L.; Williams, Derek J.; Thurm, Cary; Tieder, Joel S.; Alpern, Elizabeth R.; Nigrovic, Lise E.; Schondelmeyer, Amanda C.; Balamuth, Fran; Myers, Angela L.; McCulloh, Russell J.; Alessandrini, Evaline A.; Shah, Samir S.; Browning, Whitney L.; Hayes, Katie L.; Feldman, Elana A.; Neuman, Mark I.

    2015-01-01

    Background: Administrative data can be used to determine optimal management of febrile infants and aid clinical practice guideline development. Objective: Determine the most accurate International Classification of Diseases, 9th revision (ICD-9) diagnosis coding strategies for identification of febrile infants. Design: Retrospective cross-sectional study. Setting: Eight emergency departments in the Pediatric Health Information System. Patients: Infants age < 90 days evaluated between July 1, 2012 and June 30, 2013 were randomly selected for medical record review from one of four ICD-9 diagnosis code groups: 1) discharge diagnosis of fever, 2) admission diagnosis of fever without discharge diagnosis of fever, 3) discharge diagnosis of serious infection without diagnosis of fever, and 4) no diagnosis of fever or serious infection. Exposure: The ICD-9 diagnosis code groups were compared in four case-identification algorithms to a reference standard of fever ≥ 100.4°F documented in the medical record. Measurements: Algorithm predictive accuracy was measured using sensitivity, specificity, negative and positive predictive values. Results: Among 1790 medical records reviewed, 766 (42.8%) infants had fever. Discharge diagnosis of fever demonstrated high specificity (98.2%, 95% confidence interval [CI]: 97.8-98.6) but low sensitivity (53.2%, 95% CI: 50.0-56.4). A case-identification algorithm of admission or discharge diagnosis of fever exhibited higher sensitivity (71.1%, 95% CI: 68.2-74.0), similar specificity (97.7%, 95% CI: 97.3-98.1), and the highest positive predictive value (86.9%, 95% CI: 84.5-89.3). Conclusions: A case-identification strategy that includes admission or discharge diagnosis of fever should be considered for febrile infant studies using administrative data, though under-classification of patients is a potential limitation. PMID:26248691
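    The accuracy measures reported above are straightforward to compute from a 2x2 table of algorithm flag versus chart-documented fever. The counts below are invented for illustration; only the 766-of-1790 fever prevalence is taken from the abstract.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value (PPV)
    from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# Illustrative counts: suppose a code-based algorithm flags 450 of the
# 766 febrile infants (true positives) and 25 afebrile ones (false positives).
tp, fn = 450, 766 - 450
fp = 25
tn = (1790 - 766) - fp
stats = diagnostic_accuracy(tp, fp, fn, tn)
print({k: round(v, 3) for k, v in stats.items()})
# {'sensitivity': 0.587, 'specificity': 0.976, 'ppv': 0.947}
```

This pattern, high specificity and PPV with modest sensitivity, mirrors the trade-off the study found for discharge-diagnosis-based case identification.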

  8. When Content Matters: The Role of Processing Code in Tactile Display Design.

    PubMed

    Ferris, Thomas K; Sarter, Nadine

    2010-01-01

    The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.

  9. Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes

    NASA Technical Reports Server (NTRS)

    Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.

    2001-01-01

    The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. 
This future maintenance will be available from the authors of FLUKA as part of their continuing efforts to support the users of the FLUKA code within the particle physics community. In keeping with the spirit of developing an evolving physics code, we are planning as part of this project, to participate in the efforts to validate the core FLUKA physics in ground-based accelerator test runs. The emphasis of these test runs will be the physics of greatest interest in the simulation of the space radiation environment. Such a tool will be of great value to planners, designers and operators of future space missions, as well as for the design of the vehicles and habitats to be used on such missions. It will also be of aid to future experiments of various kinds that may be affected at some level by the ambient radiation environment, or in the analysis of hybrid experiment designs that have been discussed for space-based astronomy and astrophysics. The tool will be of value to the Life Sciences personnel involved in the prediction and measurement of radiation doses experienced by the crewmembers on such missions. In addition, the tool will be of great use to the planners of experiments to measure and evaluate the space radiation environment itself. It can likewise be useful in the analysis of safe havens, hazard mitigation plans, and NASA's call for new research in composites and to NASA engineers modeling the radiation exposure of electronic circuits. This code will provide an important complementary check on the predictions of analytic codes such as BRYNTRN/HZETRN that are presently used for many similar applications, and which have shortcomings that are more easily overcome with Monte Carlo type simulations. Finally, it is acknowledged that there are similar efforts based around the use of the GEANT4 Monte-Carlo transport code currently under development at CERN.
It is our intention to make our software modular and sufficiently flexible to allow the parallel use of either FLUKA or GEANT4 as the physics transport engine.

  10. Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.

    System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating their system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.

  11. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with Tram Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  13. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    proton PARMA PHITS-based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and...radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the...same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6

  14. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    proton PARMA PHITS-based Analytical Radiation Model in the Atmosphere PCAIRE Predictive Code for Aircrew Radiation Exposure PHITS Particle and Heavy...transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input...dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA

  15. Fast methods to numerically integrate the Reynolds equation for gas fluid films

    NASA Technical Reports Server (NTRS)

    Dimofte, Florin

    1992-01-01

    The alternating direction implicit (ADI) method is adopted, modified, and applied to the Reynolds equation for thin, gas fluid films. An efficient code is developed to predict both the steady-state and dynamic performance of an aerodynamic journal bearing. An alternative approach is shown for hybrid journal gas bearings by using Liebmann's iterative solution (LIS) for elliptic partial differential equations. The results are compared with known design criteria from experimental data. The developed methods show good accuracy and very short computer running time in comparison with methods based on matrix inversion. The computer codes need a small amount of memory and can be run on either personal computers or mainframe systems.
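    Liebmann's method is point Gauss-Seidel iteration for elliptic equations: sweep the grid, replacing each interior value with the average of its neighbors, until the largest change falls below a tolerance. A minimal sketch on Laplace's equation as a linear stand-in; the compressible Reynolds equation is nonlinear in the pressure-film-thickness product, so the actual bearing code wraps sweeps like this inside an outer linearization.

```python
import numpy as np

def liebmann(p, tol=1e-6, max_iter=10_000):
    """Liebmann (point Gauss-Seidel) iteration for Laplace's equation on a
    uniform grid; boundary values of p are held fixed."""
    for it in range(max_iter):
        delta = 0.0
        for i in range(1, p.shape[0] - 1):
            for j in range(1, p.shape[1] - 1):
                new = 0.25 * (p[i-1, j] + p[i+1, j] + p[i, j-1] + p[i, j+1])
                delta = max(delta, abs(new - p[i, j]))
                p[i, j] = new
        if delta < tol:
            return p, it + 1
    return p, max_iter

# Toy field: unit value on one edge, zero on the others.
p = np.zeros((20, 20))
p[0, :] = 1.0
p, iters = liebmann(p)
```

Because each sweep uses already-updated neighbors, Gauss-Seidel converges roughly twice as fast as Jacobi and needs only the single grid array, which matches the abstract's point about small memory requirements.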

  16. Thermal and orbital analysis of Earth monitoring Sun-synchronous space experiments

    NASA Technical Reports Server (NTRS)

    Killough, Brian D.

    1990-01-01

    The fundamentals of an Earth monitoring Sun-synchronous orbit are presented. A Sun-synchronous Orbit Analysis Program (SOAP) was developed to calculate orbital parameters for an entire year. The output from this program provides the required input data for the TRASYS thermal radiation computer code, which in turn computes the infrared, solar, and Earth albedo heat fluxes incident on a space experiment. Direct incident heat fluxes can be used as input to a generalized thermal analyzer program to size radiators and predict instrument operating temperatures. The SOAP computer code and its application to the thermal analysis methodology presented should prove useful to the thermal engineer during the design phases of Earth monitoring Sun-synchronous space experiments.
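    The defining condition of a Sun-synchronous orbit is that the nodal regression caused by Earth's oblateness (the J2 term) matches the Sun's apparent mean motion of about 0.9856 deg/day. A sketch of the standard inclination calculation for a circular orbit; this is textbook astrodynamics with standard constants, not SOAP's actual interface.

```python
import math

# Standard Earth constants (commonly used astrodynamics values).
MU = 398600.4418       # gravitational parameter, km^3/s^2
RE = 6378.137          # equatorial radius, km
J2 = 1.08262668e-3     # oblateness coefficient
OMEGA_SS = 2 * math.pi / (365.2422 * 86400)  # required nodal rate, rad/s

def sun_sync_inclination(alt_km):
    """Inclination (deg) giving Sun-synchronous nodal precession for a
    circular orbit at the given altitude, from
    dOmega/dt = -(3/2) * J2 * n * (RE/a)^2 * cos(i)."""
    a = RE + alt_km
    n = math.sqrt(MU / a ** 3)  # mean motion, rad/s
    cos_i = -OMEGA_SS / (1.5 * J2 * n * (RE / a) ** 2)
    return math.degrees(math.acos(cos_i))

print(round(sun_sync_inclination(700.0), 1))  # ~98.2 deg for a 700 km orbit
```

The required inclination is always slightly retrograde (just above 90 degrees), which is what keeps the orbit plane's local solar time fixed year-round and makes the constant-illumination thermal analysis of the abstract tractable.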

  17. Bayesian decision support for coding occupational injury data.

    PubMed

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them with the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of the SW and TW models, and various prediction strength thresholds, for autocoding and for filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurately and timely coded occupational injury data are useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
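    The single-word model described above can be sketched as a word-level naive Bayes classifier that returns the top-k candidate codes with a posterior "prediction strength". The narratives and codes below are invented toy data, not SOII records:

```python
import math
from collections import Counter, defaultdict

# Toy single-word (SW) naive Bayes coder with top-k output and prediction
# strength, in the spirit of the SW model described in the abstract.
train = [
    ("worker fell from ladder", "FALL"),
    ("slipped on wet floor and fell", "FALL"),
    ("hand caught in press machine", "CAUGHT"),
    ("finger caught in door", "CAUGHT"),
    ("struck by falling box", "STRUCK"),
]

prior = Counter(code for _, code in train)
word_counts = defaultdict(Counter)
vocab = set()
for text, code in train:
    for w in text.split():
        word_counts[code][w] += 1
        vocab.add(w)

def top_k(text, k=3):
    """Return the k most probable codes with normalized posteriors."""
    scores = {}
    for code in prior:
        total = sum(word_counts[code].values())
        logp = math.log(prior[code] / len(train))
        for w in text.split():
            # Laplace smoothing over the vocabulary
            logp += math.log((word_counts[code][w] + 1) / (total + len(vocab)))
        scores[code] = logp
    norm = sum(math.exp(s) for s in scores.values())
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [(c, math.exp(scores[c]) / norm) for c in ranked[:k]]

preds = top_k("fell from scaffold")
```

Thresholding the top posterior then decides between autocoding and routing the case to a human coder, which is the filtering idea the abstract describes.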

  18. Sensitivity analysis of tall buildings in Semarang, Indonesia due to fault earthquakes with maximum 7 Mw

    NASA Astrophysics Data System (ADS)

    Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian

    2017-11-01

    Faults are among the earthquake sources most dangerous to buildings. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. Following the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia in 2010 and 2016, the Lasem, Demak, and Semarang faults are the three earthquake sources closest to Semarang. Ground motion from these three sources should be taken into account in structural design and evaluation. Most tall buildings in Semarang, at least 40 meters high, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of a sensitivity analysis focused on predicting the deformation and inter-story drift of existing tall buildings within the city under fault earthquakes. The analysis was performed by conducting dynamic structural analysis of 8 (eight) tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw; they were used because recorded time-history data for these three faults are inadequate. The sensitivity of a building to earthquakes can be assessed by comparing the surface response spectrum calculated using the seismic code with the surface response spectrum calculated from the acceleration time histories of a specific earthquake event. If the code-based surface response spectrum is greater than the spectrum calculated from the acceleration time histories, the structure will be stable enough to resist the earthquake force.
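    The stability check described above reduces to comparing two surface response spectra ordinate by ordinate. A minimal sketch with made-up spectral values (in g) at a few periods (in seconds); real spectra would come from a site-response analysis and the seismic code maps:

```python
import numpy as np

# Illustrative spectra only -- not values from the paper or the Indonesian code.
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
sa_code = np.array([0.90, 1.00, 0.80, 0.45, 0.25])   # design-code spectrum
sa_event = np.array([0.70, 0.85, 0.60, 0.40, 0.20])  # from modified time histories

def code_envelopes_event(sa_code, sa_event):
    """True if the code spectrum is >= the event spectrum at every period,
    i.e., the abstract's criterion for the structure being stable enough."""
    return bool(np.all(sa_code >= sa_event))

ok = code_envelopes_event(sa_code, sa_event)
```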

  19. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
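    The pulse-train idea can be illustrated with a toy simulation: generate a Poisson (exponentially spaced) pulse train and apply a simplified shift-register-style coincidence gate (predelay omitted). For a purely random source the gated counts per trigger should approach rate times gate width. Rates and gate width below are arbitrary, not UWCC or RRFC parameters:

```python
import random
import bisect

random.seed(1)
rate = 50_000.0          # counts per second
gate = 4.0e-6            # 4 microsecond coincidence gate
n_pulses = 200_000

# Exponential inter-arrival times -> absolute pulse times
t = 0.0
times = []
for _ in range(n_pulses):
    t += random.expovariate(rate)
    times.append(t)

# For each trigger pulse, count subsequent pulses inside the gate
gated = 0
for i, ti in enumerate(times):
    hi = bisect.bisect_right(times, ti + gate)
    gated += hi - i - 1   # pulses strictly after the trigger, inside the gate

per_trigger = gated / n_pulses
expected = rate * gate    # purely random source: 0.2 for these settings
```

A correlated (multiplying) source would push the gated counts above this accidental baseline, which is what coincidence and multiplicity counting exploit.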

  20. Weighted bi-prediction for light field image coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2017-09-01

    Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and prospective approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance for HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
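    The core idea reduces to blending two predictor blocks with weights (w, 1-w) chosen to minimize the prediction error, rather than plain averaging (w = 0.5). A minimal sketch with synthetic blocks; in HEVC the predictors would come from the self-similarity search and the candidate weight set would be signaled or standardized:

```python
import numpy as np

# Weighted bi-prediction sketch: pick the weight from a small candidate set
# that minimizes SSE against the original block. Blocks are synthetic.
rng = np.random.default_rng(0)
orig = rng.standard_normal((8, 8))
p0 = orig + 0.1 * rng.standard_normal((8, 8))   # good predictor
p1 = orig + 0.8 * rng.standard_normal((8, 8))   # poor predictor

def best_weight(orig, p0, p1, candidates=(0.25, 0.375, 0.5, 0.625, 0.75)):
    """Return the candidate weight w minimizing ||orig - (w*p0 + (1-w)*p1)||^2."""
    def sse(w):
        return float(np.sum((orig - (w * p0 + (1 - w) * p1)) ** 2))
    return min(candidates, key=sse)

w = best_weight(orig, p0, p1)
```

Since p0 is the better predictor here, the search assigns it the largest available weight, illustrating why adaptive weights can beat the fixed 0.5/0.5 average.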

  1. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    NASA Technical Reports Server (NTRS)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  2. Enhancement and Extension of Porosity Model in the FDNS-500 Code to Provide Enhanced Simulations of Rocket Engine Components

    NASA Technical Reports Server (NTRS)

    Cheng, Gary

    2003-01-01

    In the past, the design of rocket engines has primarily relied on cold flow/hot fire tests and on empirical correlations developed from the database of previous designs. However, it is very costly to fabricate and test various hardware designs during the design cycle, and the empirical models become unreliable when designing advanced rocket engines whose operating conditions exceed the range of the database. The main goal of the 2nd Generation Reusable Launch Vehicle (GEN-II RLV) is to reduce the cost per payload and to extend the life of the hardware, which poses a great challenge to rocket engine design. Hence, understanding the flow characteristics in each engine component is critical to the engine design. In the last few decades, the methodology of computational fluid dynamics (CFD) has matured into a reliable tool for analyzing various engine components. Therefore, it is important for the CFD design tool to be able to properly simulate the hot flow environment near the liquid injector, and thus to accurately predict the heat load to the injector faceplate. However, to date it is still not feasible to conduct CFD simulations of the detailed flowfield in very complicated geometries, such as fluid flow and heat transfer in an injector assembly and through a porous plate, because resolving the detailed geometry requires enormous computer memory and power. The rigimesh (a sintered metal material), utilized to reduce the heat load to the faceplate, is one of the design concepts for the injector faceplate of the GEN-II RLV. In addition, the injector assembly is designed to distribute propellants into the combustion chamber of the liquid rocket engine. A porosity model thus becomes a necessity for the CFD code in order to efficiently simulate the flow and heat transfer in these porous media while maintaining good accuracy in describing the flow fields.
Currently, the FDNS (Finite Difference Navier-Stokes) code is one of the CFD codes most widely used by research engineers at NASA Marshall Space Flight Center (MSFC) to simulate various flow problems related to rocket engines. The objective of this research work during the 10-week summer faculty fellowship program was to 1) debug the framework of the porosity model in the current FDNS code, and 2) validate the porosity model by simulating flows through various porous media such as tube banks and porous plates.
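    A porosity model in a CFD code typically replaces the unresolved pore geometry with a Darcy plus Forchheimer momentum sink. A minimal sketch of that source term; the permeability K and inertial coefficient C2 below are made-up illustrative values, not rigimesh properties or FDNS inputs:

```python
# Darcy-Forchheimer pressure-gradient sketch for a porous zone:
#   dp/dx = (mu/K) * u + 0.5 * rho * C2 * u * |u|
# The first term is viscous (Darcy) loss, the second inertial (Forchheimer) loss.
def porous_pressure_gradient(u, mu, rho, K, C2):
    """Pressure gradient magnitude (Pa/m) across a porous zone."""
    return (mu / K) * u + 0.5 * rho * C2 * u * abs(u)

# Example: water-like fluid through a sintered plate at 2 m/s
# (all property values are illustrative assumptions)
dpdx = porous_pressure_gradient(u=2.0, mu=1.0e-3, rho=1000.0, K=1.0e-10, C2=1.0e4)
```

In a solver, this term is added to the momentum equations of cells flagged as porous, which is far cheaper than resolving the pores while preserving the bulk pressure drop.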

  3. National Underground Mines Inventory

    DTIC Science & Technology

    1983-10-01

    system is well designed to minimize water accumulation on the drift levels. In many areas, sufficient water has accumulated to make the use of boots a...four characters designate field office. 17-18 State Code Pic 99 FIPS code for state in which mine is located. 19-21 County Code Pic 999 FIPS code for... Designates a general product class based on SIC code. 28-29 Mine Type Pic 99 Metal/Nonmetal mine type code. Based on subunit operations code and canvass code

  4. Role of N-Methyl-D-Aspartate Receptors in Action-Based Predictive Coding Deficits in Schizophrenia.

    PubMed

    Kort, Naomi S; Ford, Judith M; Roach, Brian J; Gunduz-Bruce, Handan; Krystal, John H; Jaeger, Judith; Reinhart, Robert M G; Mathalon, Daniel H

    2017-03-15

    Recent theoretical models of schizophrenia posit that dysfunction of the neural mechanisms subserving predictive coding contributes to symptoms and cognitive deficits, and this dysfunction is further posited to result from N-methyl-D-aspartate glutamate receptor (NMDAR) hypofunction. Previously, by examining auditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding during vocalization is disrupted in schizophrenia. To test the hypothesized contribution of NMDAR hypofunction to this disruption, we examined the effects of the NMDAR antagonist, ketamine, on predictive coding during vocalization in healthy volunteers and compared them with the effects of schizophrenia. In two separate studies, the N1 component of the event-related potential elicited by speech sounds during vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppression during vocalization, a putative measure of auditory predictive coding. In the crossover study, 31 healthy volunteers completed two randomly ordered test days, a saline day and a ketamine day. Event-related potentials during the talk/listen task were obtained before infusion and during infusion on both days, and N1 amplitudes were compared across days. In the case-control study, N1 amplitudes from 34 schizophrenia patients and 33 healthy control volunteers were compared. N1 suppression to self-produced vocalizations was significantly and similarly diminished by ketamine (Cohen's d = 1.14) and schizophrenia (Cohen's d = .85). Disruption of NMDARs causes dysfunction in predictive coding during vocalization in a manner similar to the dysfunction observed in schizophrenia patients, consistent with the theorized contribution of NMDAR hypofunction to predictive coding deficits in schizophrenia. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  5. Cosmic-ray propagation with DRAGON2: I. numerical solver and astrophysical ingredients

    NASA Astrophysics Data System (ADS)

    Evoli, Carmelo; Gaggero, Daniele; Vittino, Andrea; Di Bernardo, Giuseppe; Di Mauro, Mattia; Ligorini, Arianna; Ullio, Piero; Grasso, Dario

    2017-02-01

    We present version 2 of the DRAGON code, designed for computing realistic predictions of the CR densities in the Galaxy. The code numerically solves the interstellar CR transport equation (including inhomogeneous and anisotropic diffusion, in both space and momentum, advective transport, and energy losses) under realistic conditions. The new version includes an updated numerical solver and several models for the astrophysical ingredients involved in the transport equation. Improvements in the accuracy of the numerical solution are demonstrated against analytical solutions and in reference diffusion scenarios. The novel features implemented in the code allow simulation of the diverse scenarios proposed to reproduce the most recent measurements of local and diffuse CR fluxes, going beyond the limitations of the homogeneous galactic transport paradigm. To this end, several applications using DRAGON2 are presented as well. This new version makes it easier for users to include their own physical models by means of a modular C++ structure.

  6. Chemical reacting flows

    NASA Technical Reports Server (NTRS)

    Mularz, Edward J.; Sockol, Peter M.

    1987-01-01

    Future aerospace propulsion concepts involve the combustion of liquid or gaseous fuels in a highly turbulent internal air stream. Accurate predictive computer codes which can simulate the fluid mechanics, chemistry, and turbulence-combustion interaction of these chemical reacting flows will be a new tool that is needed in the design of these future propulsion concepts. Experimental and code development research is being performed at Lewis to better understand chemical reacting flows, with the long-term goal of establishing these reliable computer codes. The approach to understanding chemical reacting flows is to look at separate, simple parts of this complex phenomenon as well as to study the full turbulent reacting flow process. As a result, research on the fluid mechanics associated with chemical reacting flows was initiated. The chemistry of fuel-air combustion is also being studied. Finally, the phenomenon of turbulence-combustion interaction is being investigated. This presentation will highlight research, both experimental and analytical, in each of these three major areas.

  7. Chemical reacting flows

    NASA Technical Reports Server (NTRS)

    Mularz, Edward J.; Sockol, Peter M.

    1990-01-01

    Future aerospace propulsion concepts involve the combustion of liquid or gaseous fuels in a highly turbulent internal airstream. Accurate predictive computer codes which can simulate the fluid mechanics, chemistry, and turbulence-combustion interaction of these chemical reacting flows will be a new tool that is needed in the design of these future propulsion concepts. Experimental and code development research is being performed at LeRC to better understand chemical reacting flows with the long-term goal of establishing these reliable computer codes. Our approach to understand chemical reacting flows is to look at separate, more simple parts of this complex phenomenon as well as to study the full turbulent reacting flow process. As a result, we are engaged in research on the fluid mechanics associated with chemical reacting flows. We are also studying the chemistry of fuel-air combustion. Finally, we are investigating the phenomenon of turbulence-combustion interaction. Research, both experimental and analytical, is highlighted in each of these three major areas.

  8. Computer code for the prediction of nozzle admittance

    NASA Technical Reports Server (NTRS)

    Nguyen, Thong V.

    1988-01-01

    A procedure which can accurately characterize injector designs for large thrust (0.5 to 1.5 million pounds), high pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-sectional combustion chamber is to be used to simulate the lower transverse frequency modes of the large scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess full-scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, based on existing theory, was developed to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is described in detail.

  9. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  10. Chemical nonequilibrium Navier-Stokes solutions for hypersonic flow over an ablating graphite nosetip

    NASA Technical Reports Server (NTRS)

    Chen, Y. K.; Henline, W. D.

    1993-01-01

    The general boundary conditions including mass and energy balances of chemically equilibrated or nonequilibrated gas adjacent to ablating surfaces have been derived. A computer procedure based on these conditions was developed and interfaced with the Navier-Stokes solver for predictions of the flow field, surface temperature, and surface ablation rates over re-entry space vehicles with ablating Thermal Protection Systems (TPS). The Navier-Stokes solver with general surface thermochemistry boundary conditions can predict more realistic solutions and provide useful information for the design of TPS. A test case with a proposed hypersonic test vehicle configuration and associated free stream conditions was developed. Solutions with various surface boundary conditions were obtained, and the effect of nonequilibrium gas as well as surface chemistry on surface heating and ablation rate were examined. The solutions of the GASP code with complete ablating surface conditions were compared with those of the ASC code. The direction of future work is also discussed.

  11. A Fatigue Life Prediction Model of Welded Joints under Combined Cyclic Loading

    NASA Astrophysics Data System (ADS)

    Goes, Keurrie C.; Camarao, Arnaldo F.; Pereira, Marcos Venicius S.; Ferreira Batalha, Gilmar

    2011-01-01

    A practical and robust methodology is developed to evaluate the fatigue life of seam welded joints subjected to combined cyclic loading. The fatigue analysis was conducted in a virtual environment. The FE stress results from each loading were imported into the fatigue code FE-Fatigue and combined to perform the fatigue life prediction using the S x N (stress x life) method. The measurement or modelling of the residual stresses resulting from the welding process is not part of this work. However, the thermal and metallurgical effects, such as distortions and residual stresses, were considered indirectly through fatigue curve corrections in the samples investigated. A tube-plate specimen was subjected to combined cyclic loading (bending and torsion) with constant amplitude. The virtual durability analysis result was calibrated based on these laboratory tests and design codes such as BS7608 and Eurocode 3. The feasibility and application of the proposed numerical-experimental methodology and its contributions to technical development are discussed. Major challenges associated with this modelling and improvement proposals are finally presented.
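    Design S-N curves for welded details (in the style of BS 7608) are commonly written as S**m * N = C, and damage from combined loading blocks is accumulated with Miner's rule. A minimal sketch; the constants m and C below are illustrative placeholders, not values from the standard or from this paper:

```python
# S x N fatigue-life sketch with Miner's rule for multiple loading blocks.
# m and C are made-up illustrative values, not BS 7608 class constants.
def cycles_to_failure(stress_range_mpa, m=3.0, C=1.0e12):
    """Predicted cycles N for a constant-amplitude stress range S (MPa): N = C / S**m."""
    return C / stress_range_mpa**m

def miner_damage(blocks, m=3.0, C=1.0e12):
    """blocks = [(stress_range_mpa, n_cycles), ...]; failure is predicted at damage >= 1."""
    return sum(n / cycles_to_failure(s, m, C) for s, n in blocks)

# Two hypothetical constant-amplitude blocks (e.g., bending- and torsion-dominated)
damage = miner_damage([(100.0, 2.0e5), (50.0, 1.0e6)])
```

The fatigue-curve corrections mentioned in the abstract would enter here as modified m and C values for the welded detail class.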

  12. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operating characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code was then programmed based on this calculation model. As an example, a two-staged flux trapping generator is simulated using this computer code. Good agreement is achieved between the simulation results and the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
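    The zero-dimensional circuit idea can be sketched as follows: the generator inductance L(t) is ramped down while the flux L*I is (imperfectly) conserved against resistive loss, so the load current multiplies. The linear L(t) ramp, resistance, and timescale below are invented illustrative numbers, not the paper's generator parameters:

```python
# Zero-dimensional FCG circuit sketch: d(L*I)/dt = -R*I, i.e.
# dI/dt = -(dL/dt + R) * I / L, integrated with forward Euler.
def simulate_fcg(L0=10.0e-6, L_load=0.5e-6, R=1.0e-3, I0=10.0e3,
                 burn_time=100.0e-6, dt=1.0e-8):
    """Return the load current (A) after the armature burn compresses
    L(t) linearly from L0 down to L_load over burn_time."""
    I, t = I0, 0.0
    dLdt = -(L0 - L_load) / burn_time
    while t < burn_time:
        L = L_load + (L0 - L_load) * (1.0 - t / burn_time)
        I += -(dLdt + R) * I / L * dt
        t += dt
    return I

I_final = simulate_fcg()
```

With these numbers the lossless limit would multiply the seed current by L0/L_load = 20; the resistive term reproduces the sub-ideal gain that the paper's flux conservation coefficients are calibrated to capture.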

  13. A Nonlinear Model for Fuel Atomization in Spray Combustion

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey (Technical Monitor); Ibrahim, Essam A.; Sree, Dave

    2003-01-01

    Most gas turbine combustion codes rely on ad hoc statistical assumptions regarding the outcome of fuel atomization processes. The modeling effort proposed in this project is aimed at developing a realistic model that produces accurate predictions of fuel atomization parameters. The model involves application of nonlinear stability theory to analyze the instability and subsequent disintegration of the liquid fuel sheet that is produced by fuel injection nozzles in gas turbine combustors. The fuel sheet is atomized into a multiplicity of small drops of large surface-area-to-volume ratio to enhance the evaporation rate and combustion performance. The proposed model will produce predictions of fuel sheet atomization parameters such as drop size, velocity, and orientation as well as sheet penetration depth, breakup time, and thickness. These parameters are essential for combustion simulation codes to perform a controlled and optimized design of gas turbine fuel injectors. Optimizing fuel injection processes is crucial to improving combustion efficiency and hence reducing fuel consumption and pollutant emissions.

  14. A 4.8 kbps code-excited linear predictive coder

    NASA Technical Reports Server (NTRS)

    Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.

    1988-01-01

    A secure voice system, STU-3, capable of providing end-to-end secure voice communications was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. While the performance of the present STU-3 processor is considered to be good, its response to nonspeech sounds such as whistles, coughs, and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems for the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented with a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. The problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware are considered.
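    Short-term linear prediction is the "LP" in both LPC-10 and CELP: coefficients a_k are fit so that x[n] is approximated by the weighted sum of its past samples. A minimal order-2 autocorrelation-method sketch with the Levinson-Durbin recursion, tested on a synthetic AR(1) signal rather than real speech:

```python
import random

# Autocorrelation-method LPC sketch (not the STU-3 implementation).
def autocorr(x, lag):
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations; return predictor coefficients a_k
    such that x[n] ~ sum_k a_k * x[n-k]."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(1, i))
        k = -(r[i] + acc) / e          # reflection coefficient
        a_new = a[:]
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        e *= (1.0 - k * k)             # residual prediction-error energy
    return [-c for c in a[1:]]

# Synthetic AR(1) "speech": x[n] = 0.9*x[n-1] + noise, so a_1 should be ~0.9
random.seed(0)
x = [0.0]
for _ in range(5000):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))
r = [autocorr(x, lag) for lag in range(3)]
coeffs = levinson_durbin(r, 2)
```

In a CELP coder this short-term predictor is combined with a codebook search for the excitation, which is where the "code-excited" part of the name comes from.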

  15. Numerical and Analytical Solutions of Hypersonic Interactions Involving Surface Property Discontinuities

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Inger, George R.

    1999-01-01

    The local viscous-inviscid interaction field generated by a wall temperature jump on a flat plate in supersonic flow and on the windside of a Reusable Launch Vehicle in hypersonic flow is studied in detail by both a Navier-Stokes numerical code and an analytical triple-deck model. Treatment of the rapid heat transfer changes both upstream and downstream of the jump is included. Closed form relationships derived from the triple-deck theory are presented. The analytically predicted pressure and heating variations including upstream influence are found to be in generally good agreement with the Computational Fluid Dynamic (CFD) predictions. These analyses not only clarify the interactive physics involved but also are useful in preliminary design of thermal protection systems and as an insertable module to improve CFD code efficiency when applied to such small-scale interaction problems. The analyses only require conditions at the wall and boundary-layer edge which are easily extracted from a baseline, constant wall temperature, CFD solution.

  16. Assessment of Reduced-Kinetics Mechanisms for Combustion of Jet Fuel in CFD Applications

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Kundu, Krihna P.; Yungster, Shaye J.

    2014-01-01

    A computational effort was undertaken to analyze the details of fluid flow in Lean-Direct Injection (LDI) combustors for next-generation LDI design. The National Combustor Code (NCC) was used to perform reacting flow computations on single-element LDI injector configurations. The feasibility of using a reduced chemical-kinetics approach, which optimizes the reaction rates and species to model the emissions characteristics typical of lean-burning gas-turbine combustors, was assessed. The assessments were performed with Reynolds-Averaged Navier-Stokes (RANS) and Time-Filtered Navier-Stokes (TFNS) time integration, using a Lagrangian spray model within the NCC code. The NCC predictions for EINOx and combustor exit temperature were compared with experimental data for two different single-element LDI injector configurations, with 60deg and 45deg axially swept swirler vanes. The effects of turbulence-chemistry interaction on the predicted flow in a typical LDI combustor were studied with detailed comparisons of NCC TFNS results with experimental data.

  17. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
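    Two of the three computational approaches discussed above can be sketched directly: the delta method and the nonparametric bootstrap, applied to the standard error of g(mu_hat) = exp(mu_hat) for a sample mean. The data are synthetic, and the Stata/LIMDEP syntax from the paper is not reproduced here:

```python
import random
import math
import statistics

# Delta method vs. bootstrap for the SE of a function of an estimated parameter.
random.seed(42)
data = [random.gauss(1.0, 0.5) for _ in range(400)]
n = len(data)
mu_hat = statistics.fmean(data)
se_mu = statistics.stdev(data) / math.sqrt(n)

# Delta method: se(g(mu_hat)) ~= |g'(mu_hat)| * se(mu_hat); here g'(mu) = exp(mu)
se_delta = math.exp(mu_hat) * se_mu

# Nonparametric bootstrap of the same quantity
boot = []
for _ in range(2000):
    resample = [random.choice(data) for _ in range(n)]
    boot.append(math.exp(statistics.fmean(resample)))
se_boot = statistics.stdev(boot)
```

The two estimates should agree closely for a smooth function like this one; Krinsky-Robb would instead draw parameter values from the estimated sampling distribution of mu_hat and evaluate g on each draw.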

  18. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  19. Design and Testing of a Blended Wing Body With Boundary Layer Ingestion Nacelles at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Pendergraft, Odis C., Jr.; Friedman, Douglas M.; Serrano, Leonel

    2005-01-01

    A knowledge-based aerodynamic design method coupled with an unstructured-grid Navier-Stokes flow solver was used to improve the propulsion/airframe integration for a Blended Wing Body with boundary-layer ingestion nacelles. A new zonal design capability significantly reduced the time required to achieve a successful design for each nacelle and the elevon between them. A wind tunnel model was built with interchangeable parts reflecting the baseline and redesigned configurations and tested in the National Transonic Facility (NTF). Most of the testing was done at the cruise design conditions (Mach number = 0.85, Reynolds number = 75 million). In general, the predicted improvements in forces and moments, as well as the changes in wing pressures between the baseline and the redesign, were confirmed by the wind tunnel results. The effectiveness of the elevons between the nacelles was also predicted surprisingly well, considering the crude modeling of the control surfaces in the flow code. A novel flow visualization technique, using pressure-sensitive paint in the cryogenic nitrogen environment of high-Reynolds number testing in the NTF, was also investigated.

  20. Comparative analysis of design codes for timber bridges in Canada, the United States, and Europe

    Treesearch

    James Wacker; James (Scott) Groenier

    2010-01-01

    The United States recently completed its transition from the allowable stress design code to the load and resistance factor design (LRFD) reliability-based code for the design of most highway bridges. To provide an international perspective on LRFD-based bridge codes, a comparative study is presented addressing the national codes of the United States, Canada, and...
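    The two design-check formats contrasted in this abstract differ in where the safety margin is applied. A minimal sketch of both checks follows; every number here is illustrative and is not taken from any of the national codes being compared.

    ```python
    # Schematic comparison of allowable stress design vs. LRFD.
    # All factor values below are illustrative, not from any actual code.

    def allowable_stress_ok(applied_stress, material_strength, safety_factor=2.0):
        """Allowable stress design: one global safety factor divides the
        material strength, and the unfactored stress must stay below it."""
        return applied_stress <= material_strength / safety_factor

    def lrfd_ok(load_effects, load_factors, nominal_resistance, phi=0.85):
        """LRFD: each load effect Q_i gets its own factor gamma_i, and the
        resistance R_n is reduced by phi:  sum(gamma_i * Q_i) <= phi * R_n."""
        factored_demand = sum(g * q for g, q in zip(load_factors, load_effects))
        return factored_demand <= phi * nominal_resistance

    # Hypothetical member check with dead and live load effects (same units
    # on both sides; values chosen only to make the arithmetic visible).
    print(allowable_stress_ok(applied_stress=8.0, material_strength=20.0))
    print(lrfd_ok(load_effects=[5.0, 3.0], load_factors=[1.25, 1.75],
                  nominal_resistance=15.0))  # 6.25 + 5.25 = 11.5 <= 12.75
    ```

    The reliability-based LRFD format lets different load types carry different uncertainties, which is the feature the comparative analysis examines across the national codes.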
