Sample records for high-speed computer program

  1. Computer program for analysis of high speed, single row, angular contact, spherical roller bearing, SASHBEAN. Volume 2: Mathematical formulation and analysis

    NASA Technical Reports Server (NTRS)

    Aggarwal, Arun K.

    1993-01-01

    Spherical roller bearings have typically been used in applications with speeds limited to about 5000 rpm and operation limited to less than about 0.25 million DN. However, spherical roller bearings are now being designed for high load and high speed applications, including aerospace applications. A computer program, SASHBEAN, was developed to provide an analytical tool to design, analyze, and predict the performance of high speed, single row, angular contact (including zero contact angle), spherical roller bearings. The material presented is the mathematical formulation and analytical methods used to develop the computer program SASHBEAN. For a given set of operating conditions, the program calculates the bearing ring deflections (axial and radial), roller deflections, contact areas and stresses, depth and magnitude of maximum shear stresses, axial thrust, rolling element and cage rotational speeds, lubrication parameters, fatigue lives, and rates of heat generation. Centrifugal forces and gyroscopic moments are fully considered. The program is also capable of performing steady-state and time-transient thermal analyses of the bearing system.
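
    The "DN" figure above is the standard bearing speed parameter: bore diameter in millimeters multiplied by shaft speed in rpm. A minimal sketch of the arithmetic, with the 50 mm bore chosen purely for illustration (it is not stated in the record):

    ```python
    # DN = bore diameter (mm) x shaft speed (rpm); the bore value is an
    # assumed example, not taken from the record.
    bore_mm = 50.0
    speed_rpm = 5000.0
    dn = bore_mm * speed_rpm
    print(f"DN = {dn:,.0f}")  # 250,000 -> the "0.25 million DN" regime cited above
    ```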

  2. Computer Analysis Of High-Speed Roller Bearings

    NASA Technical Reports Server (NTRS)

    Coe, H.

    1988-01-01

    High-speed cylindrical roller-bearing analysis program (CYBEAN) developed to compute behavior of cylindrical rolling-element bearings at high speeds and with misaligned shafts. With program, accurate assessment of geometry-induced roller preload possible for variety of outer-ring and housing configurations and loading conditions. Enables detailed examination of bearing performance and permits exploration of causes and consequences of bearing skew. Provides general capability for assessment of designs of bearings supporting main shafts of engines. Written in FORTRAN IV.

  3. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  4. An analysis for high speed propeller-nacelle aerodynamic performance prediction. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.

    1988-01-01

    A user's manual is presented for the computer program developed for the prediction of propeller-nacelle aerodynamic performance reported in "An Analysis for High Speed Propeller-Nacelle Aerodynamic Performance Prediction. Volume 1: Theory and Application." The manual describes the computer program's mode of operation, requirements, input structure, input data requirements, and the program output. In addition, it provides the user with documentation of the internal program structure and the software used in the computer program as it relates to the theory presented in Volume 1. Sample input data setups are provided along with selected printouts of the program output for one of the sample setups.

  5. Processing Device for High-Speed Execution of an Xrisc Computer Program

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and control execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provide the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values is loaded into the register and the set of output values is unloaded from the register in parallel with processing of the current calculation set.
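
    The overlap of input loading and output unloading with processing described above is a double-buffering pattern. A minimal software sketch of that rotation, under assumed names (the patent's actual register logic is hardware and is not reproduced here):

    ```python
    # Hedged sketch: while the current calculation set is processed, the next
    # inputs load and the previous outputs unload. Names are illustrative.
    def run(calculation_sets, compute):
        results, prev_out, current = [], None, None
        for incoming in list(calculation_sets) + [None]:
            if prev_out is not None:
                results.append(prev_out)                  # unload previous outputs
            prev_out = compute(current) if current is not None else None
            current = incoming                            # load next input set
        if prev_out is not None:
            results.append(prev_out)
        return results

    print(run([1, 2, 3], lambda x: x * x))  # [1, 4, 9]
    ```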

  6. A vectorization of the Hess McDonnell Douglas potential flow program NUED for the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Boney, L. R.; Smith, R. E., Jr.

    1979-01-01

    The computer program NUED for analyzing potential flow about arbitrary three-dimensional lifting bodies using the panel method was modified to use vector operations and run on the STAR-100 computer. The modified program, NUEDV, is characterized by a high speed of computation and the ability to approximate the body surface with a large number of panels. The new program shows that vector operations can be readily implemented in programs of this type to increase the computational speed on the STAR-100 computer. The virtual memory architecture of the STAR-100 facilitates the use of large numbers of panels to approximate the body surface.

  7. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs form a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in a few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls the whole system operation and performs some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor were also developed.
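
    A loose software analogy of the three-level hierarchy just described, with NumPy standing in for the hardware (thresholds and operations are invented for illustration): the PE array performs a pixel-parallel operation, the RP array a per-row reduction, and the RISC core the high-level decision.

    ```python
    import numpy as np

    # Hedged analogy only; this is not the chip's instruction set.
    image = np.random.randint(0, 256, (8, 8))      # image pixel array raw data
    binary = (image > 128).astype(np.uint8)        # PE array: pixel-parallel SIMD op
    row_sums = binary.sum(axis=1)                  # RP array: one row processor per row
    decision = "bright" if row_sums.sum() > 32 else "dark"  # RISC core: control logic
    print(row_sums, decision)
    ```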

  8. Preliminary structural sizing of a Mach 3.0 high-speed civil transport model

    NASA Technical Reports Server (NTRS)

    Blackburn, Charles L.

    1992-01-01

    An analysis has been performed pertaining to the structural resizing of a candidate Mach 3.0 High Speed Civil Transport (HSCT) conceptual design using a computer program called EZDESIT. EZDESIT is a computer program which integrates the PATRAN finite element modeling program with the COMET finite element analysis program for the purpose of calculating element sizes or cross-sectional dimensions. The purpose of the present report is to document the procedure used in accomplishing the preliminary structural sizing and to present the corresponding results.

  9. Aeronautics research and technology program and specific objectives

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Aeronautics research and technology program objectives in fluid and thermal physics, materials and structures, controls and guidance, human factors, multidisciplinary activities, computer science and applications, propulsion, rotorcraft, high speed aircraft, subsonic aircraft, and rotorcraft and high speed aircraft systems technology are addressed.

  10. Dynamic Test Program, Contact Power Collection for High Speed Tracked Vehicles

    DOT National Transportation Integrated Search

    1973-01-01

    A laboratory test program is defined for determining the dynamic characteristics of a contact power collection system for a high speed tracked vehicle. The use of a hybrid computer in conjunction with hydraulic exciters to simulate the expected dynami...

  11. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  12. A heterogeneous hierarchical architecture for real-time computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skroch, D.A.; Fornaro, R.J.

    The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems, and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H^2ART) and system software for program loading and interprocessor communication.

  13. A large-scale computer facility for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bailey, F. R.; Ballhaus, W. F., Jr.

    1985-01-01

    As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.

  14. NASA Aeronautics: Research and Technology Program Highlights

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This report contains numerous color illustrations to describe the NASA programs in aeronautics. The basic ideas involved are explained in brief paragraphs. The seven chapters deal with Subsonic aircraft, High-speed transport, High-performance military aircraft, Hypersonic/Transatmospheric vehicles, Critical disciplines, National facilities, and Organizations & installations. Some individual aircraft discussed are: the SR-71 aircraft, aerospace planes, the high-speed civil transport (HSCT), the X-29 forward-swept wing research aircraft, and the X-31 aircraft. Critical disciplines discussed are numerical aerodynamic simulation, computational fluid dynamics, computational structural dynamics, and new experimental testing techniques.

  15. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 3: Program correlation with full scale hardware tests

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Rosenlieb, J. W.; Dyba, G.

    1980-01-01

    The results of a series of full scale hardware tests comparing predictions of the SPHERBEAN computer program with measured data are presented. The SPHERBEAN program predicts the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings. The degree of correlation between performance predicted by SPHERBEAN and measured data is demonstrated. Experimental and calculated performance data are compared over a range of speeds up to 19,400 rpm (0.8 MDN) under pure radial, pure axial, and combined loads.

  16. Design and Operating Characteristics of High-Speed, Small-Bore Cylindrical-Roller Bearings

    NASA Technical Reports Server (NTRS)

    Pinel, Stanley I.; Signer, Hans R.; Zaretsky, Erwin V.

    2000-01-01

    The computer program SHABERTH was used to analyze 35-mm-bore cylindrical roller bearings designed and manufactured for high-speed turbomachinery applications. Parametric tests of the bearings were conducted on a high-speed, high-temperature bearing tester and the results were compared with the computer predictions. Bearings with a channeled inner ring were lubricated through the inner ring, while bearings with a channeled outer ring were lubricated with oil jets. Tests were run with and without outer-ring cooling. The predicted bearing life decreased with increasing speed because of increased contact stresses caused by centrifugal load. Lower temperatures, less roller skidding, and lower power losses were obtained with channeled inner rings. Power losses calculated by the SHABERTH computer program correlated reasonably well with the test results. The Parker formula for XCAV (used in SHABERTH as a measure of oil volume in the bearing cavity) needed to be adjusted to reflect the prevailing operating conditions. The XCAV formula will need to be further refined to reflect roller bearing lubrication, ring design, cage design, and location of the cage-controlling land.

  17. Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel

    NASA Technical Reports Server (NTRS)

    Fox, C. H., Jr.

    1980-01-01

    The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.

  18. The Impact of High-Speed Internet Connectivity at Home on Eighth-Grade Student Achievement

    ERIC Educational Resources Information Center

    Kingston, Kent J.

    2013-01-01

    In the fall of 2008, Westside Community Schools - District 66, in Omaha, Nebraska, implemented a one-to-one notebook computer take-home model for all eighth-grade students. The purpose of this study was to determine the effect of a required yearlong one-to-one notebook computer program supported by high-speed Internet connectivity at school on (a)…

  19. Promoting High-Performance Computing and Communications. A CBO Study.

    ERIC Educational Resources Information Center

    Webre, Philip

    In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…

  20. Users' manual for the Langley high speed propeller noise prediction program (DFP-ATP)

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Tarkenton, G. M.

    1989-01-01

    The use of the Dunn-Farassat-Padula Advanced Technology Propeller (DFP-ATP) noise prediction program which computes the periodic acoustic pressure signature and spectrum generated by propellers moving with supersonic helical tip speeds is described. The program has the capacity of predicting noise produced by a single-rotation propeller (SRP) or a counter-rotation propeller (CRP) system with steady or unsteady blade loading. The computational method is based on two theoretical formulations developed by Farassat. One formulation is appropriate for subsonic sources, and the other for transonic or supersonic sources. Detailed descriptions of user input, program output, and two test cases are presented, as well as brief discussions of the theoretical formulations and computational algorithms employed.

  1. Data Processing Aspects of MEDLARS

    PubMed Central

    Austin, Charles J.

    1964-01-01

    The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287

  2. DATA PROCESSING ASPECTS OF MEDLARS.

    PubMed

    AUSTIN, C J

    1964-01-01

    The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files.

  3. An analysis for high speed propeller-nacelle aerodynamic performance prediction. Volume 1: Theory and application

    NASA Technical Reports Server (NTRS)

    Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.

    1988-01-01

    A computer program, the Propeller Nacelle Aerodynamic Performance Prediction Analysis (PANPER), was developed for the prediction and analysis of the performance and airflow of propeller-nacelle configurations operating over a forward speed range inclusive of high speed flight typical of recent propfan designs. A propeller lifting line, wake program was combined with a compressible, viscous center body interaction program, originally developed for diffusers, to compute the propeller-nacelle flow field, blade loading distribution, propeller performance, and the nacelle forebody pressure and viscous drag distributions. The computer analysis is applicable to single and coaxial counterrotating propellers. The blade geometries can include spanwise variations in sweep, droop, taper, thickness, and airfoil section type. In the coaxial mode of operation the analysis can treat both equal and unequal blade number and rotational speeds on the propeller disks. The nacelle portion of the analysis can treat both free air and tunnel wall configurations including wall bleed. The analysis was applied to many different sets of flight conditions using selected aerodynamic modeling options. The influence of different propeller nacelle-tunnel wall configurations was studied. Comparisons with available test data for both single and coaxial propeller configurations are presented along with a discussion of the results.

  4. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Dyba, G. J.

    1980-01-01

    The user's guide for the SPHERBEAN computer program for prediction of the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings is presented. The material presented is structured to guide the user in the practical and correct implementation of SPHERBEAN. Input and output, guidelines for program use, and sample executions are detailed.

  5. Nearly Interactive Parabolized Navier-Stokes Solver for High Speed Forebody and Inlet Flows

    NASA Technical Reports Server (NTRS)

    Benson, Thomas J.; Liou, May-Fun; Jones, William H.; Trefny, Charles J.

    2009-01-01

    A system of computer programs is being developed for the preliminary design of high speed inlets and forebodies. The system comprises four functions: geometry definition, flow grid generation, flow solver, and graphics post-processor. The system runs on a dedicated personal computer using the Windows operating system and is controlled by graphical user interfaces written in MATLAB (The Mathworks, Inc.). The flow solver uses the Parabolized Navier-Stokes equations to compute millions of mesh points in several minutes. Sample two-dimensional and three-dimensional calculations are demonstrated in the paper.

  6. Computational Aspects of Heat Transfer in Structures

    NASA Technical Reports Server (NTRS)

    Adelman, H. M. (Compiler)

    1982-01-01

    Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.

  7. Synchronizing Photography For High-Speed-Engine Research

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1989-01-01

    Light flashes when shaft reaches predetermined angle. Synchronization system facilitates visualization of flow in high-speed internal-combustion engines. Designed for cinematography and holographic interferometry, system synchronizes camera and light source with predetermined rotational angle of engine shaft. 10-bit resolution of absolute optical shaft encoder adapted, and the 2^10 combinations of 10-bit binary data computed to corresponding angle values. Precomputed angle values programmed into EPROMs (erasable programmable read-only memories) for use as angle lookup table. Resolves shaft angle to within 0.35 degree at rotational speeds up to 73,240 revolutions per minute.
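
    The angle lookup amounts to precomputing one entry per 10-bit encoder code, so no arithmetic is needed when the shaft angle is read. A minimal sketch of building and using such a table (the EPROM word format is not specified in the brief, so plain degrees are used here):

    ```python
    # Precompute all 2**10 encoder-code-to-angle entries, as the brief describes
    # burning into EPROMs. 360/1024 = 0.3516 deg matches the ~0.35 degree
    # resolution quoted above.
    LUT = [code * 360.0 / 1024.0 for code in range(1024)]

    def shaft_angle(encoder_code: int) -> float:
        return LUT[encoder_code & 0x3FF]  # 10-bit table lookup, no runtime math

    print(shaft_angle(512))  # 180.0 degrees
    ```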

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busbey, A.B.

    Seismic Processing Workshop, a program by Parallel Geosciences of Austin, TX, is discussed in this column. The program is a high-speed, interactive seismic processing and computer analysis system for the Apple Macintosh II family of computers. Also reviewed in this column are three products from Wilkerson Associates of Champaign, IL. SubSide is an interactive program for basin subsidence analysis; MacFault and MacThrustRamp are programs for modeling faults.

  9. A high level language for a high performance computer

    NASA Technical Reports Server (NTRS)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  10. Digital optical computers at the optoelectronic computing systems center

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  11. Evolving binary classifiers through parallel computation of multiple fitness cases.

    PubMed

    Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni

    2005-06-01

    This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
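
    The "intrinsic parallelism of bitwise operators" exploited in the genetic-programming version is the trick of packing one fitness case per bit of a machine word, so a single bitwise expression evaluates many cases at once. A minimal sketch with an assumed two-input Boolean classifier (the paper's digit-recognition task is more elaborate):

    ```python
    # Pack 32 fitness cases into the bits of plain integers; one bitwise
    # expression then evaluates all 32 cases in parallel.
    MASK = (1 << 32) - 1

    def fitness(candidate, x1, x2, target):
        predictions = candidate(x1, x2) & MASK
        wrong = bin((predictions ^ target) & MASK).count("1")  # popcount of errors
        return 32 - wrong                                      # higher is better

    x1, x2 = 0x0F0F0F0F, 0x00FF00FF        # bit i holds the inputs of case i
    target = (x1 ^ x2) & MASK              # labels for all 32 cases
    print(fitness(lambda a, b: a ^ b, x1, x2, target))  # 32: all cases correct
    ```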

  12. Dynamic and thermal analysis of high speed tapered roller bearings under combined loading

    NASA Technical Reports Server (NTRS)

    Crecelius, W. J.; Milke, D. R.

    1973-01-01

    The development of a computer program capable of predicting the thermal and kinetic performance of high-speed tapered roller bearings operating with fluid lubrication under applied axial, radial and moment loading (five degrees of freedom) is detailed. Various methods of applying lubrication can be considered as well as changes in bearing internal geometry which occur as the bearing is brought to operating speeds, loads and temperatures.

  13. An FPGA-based High Speed Parallel Signal Processing System for Adaptive Optics Testbed

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, Y.; Yang, Y.

    In this paper a state-of-the-art FPGA (Field Programmable Gate Array) based high speed parallel signal processing system (SPS) for an adaptive optics (AO) testbed with a 1 kHz wavefront error (WFE) correction frequency is reported. The AO system consists of a Shack-Hartmann sensor (SHS), a deformable mirror (DM), a tip-tilt sensor (TTS), a tip-tilt mirror (TTM), and an FPGA-based high performance SPS to correct wavefront aberrations. The SHS is composed of 400 subapertures and the DM of 277 actuators in a Fried geometry, requiring an SPS with high-speed parallel computing capability. In this study, the target WFE correction speed is 1 kHz; therefore, it requires massive parallel computing capabilities as well as strict hard real-time constraints on measurements from sensors, matrix computation latency for correction algorithms, and output of control signals for actuators. To meet these requirements, an FPGA-based real-time SPS with parallel computing capabilities is proposed. In particular, the SPS is made up of a National Instruments (NI) real-time computer and five FPGA boards based on the state-of-the-art Xilinx Kintex 7 FPGA. Programming is done in NI's LabVIEW environment, providing flexibility when applying different algorithms for WFE correction. It also provides a faster programming and debugging environment than conventional ones. One of the five FPGAs is assigned to measure the TTS and calculate control signals for the TTM, while the other four receive the SHS signal, calculate slopes for each subaperture, and compute correction signals for the DM. With these parallel processing capabilities, the SPS achieves an overall closed-loop WFE correction speed of 1 kHz. System requirements, architecture, and implementation issues are described; furthermore, experimental results are also given.
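
    The per-subaperture slope computation and matrix-based correction the four FPGAs implement reduce, in outline, to a few linear-algebra steps. A NumPy sketch with the record's dimensions (400 subapertures, 277 actuators); the reconstructor matrix here is a random stand-in for the calibrated poke-matrix pseudo-inverse:

    ```python
    import numpy as np

    # Hedged sketch of one closed-loop cycle: SHS centroids -> slopes ->
    # matrix-vector reconstruction -> DM commands.
    n_sub, n_act = 400, 277
    rng = np.random.default_rng(0)
    centroids = rng.normal(0.0, 0.1, (n_sub, 2))   # (x, y) spot offset per subaperture
    slopes = centroids.reshape(-1)                 # 800 slope measurements
    R = rng.normal(0.0, 0.01, (n_act, 2 * n_sub))  # reconstructor (assumed values)
    dm_commands = -R @ slopes                      # correction signals for 277 actuators
    print(dm_commands.shape)                       # (277,)
    ```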

  14. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  15. Review of NASA's (National Aeronautics and Space Administration) Numerical Aerodynamic Simulation Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    NASA has planned a supercomputer for computational fluid dynamics research since the mid-1970's. With the approval of the Numerical Aerodynamic Simulation Program as a FY 1984 new start, Congress requested an assessment of the program's objectives, projected short- and long-term uses, program design, computer architecture, user needs, and handling of proprietary and classified information. Specifically requested was an examination of the merits of proceeding with multiple high speed processor (HSP) systems contrasted with a single high speed processor system. The panel found NASA's objectives and projected uses sound and the projected distribution of users as realistic as possible at this stage. The multiple-HSP approach, whereby new, more powerful state-of-the-art HSPs would be integrated into a flexible network, was judged to present major advantages over any single HSP system.

  16. PAGOSA physics manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weseloh, Wayne N.; Clancy, Sean P.; Painter, James W.

    2010-08-01

    PAGOSA is a computational fluid dynamics computer program developed at Los Alamos National Laboratory (LANL) for the study of high-speed compressible flow and high-rate material deformation. PAGOSA is a three-dimensional Eulerian finite difference code, solving problems with a wide variety of equations of state (EOSs), material strength, and explosive modeling options.

  17. Technical Evaluation of the RHS 200 for High Speed Ferry Applications and Coast Guard Missions

    DTIC Science & Technology

    1983-12-01

    considered separate hullborne and foilborne modes of operation. The TDAS computer was programmed to select the following calibration curve at transducer...such brief bursts of data. The computer used in the PSD analysis was not programmed to process and average consecutive spectra from a long term data...152 27 - Hydrostatic Computer Program Input (03-600 Version) .................. 158 28 - Operating and

  18. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  19. An analytical model for highly separated flow on airfoils at low speeds

    NASA Technical Reports Server (NTRS)

    Zumwalt, G. W.; Naik, S. N.

    1977-01-01

    A computer program was developed to solve the low speed flow around airfoils with highly separated flow. A new flow model included all of the major physical features in the separated region. Flow visualization tests were also made, which substantiated the validity of the model. The computation involves the matching of the potential flow, the boundary layer, and the flows in the separated regions. Head's entrainment theory was used for boundary layer calculations and Korst's jet mixing analysis was used in the separated regions. A free stagnation point aft of the airfoil and a standing vortex in the separated region were modeled and computed.

  20. An information retrieval system for research file data

    Treesearch

    Joan E. Lengel; John W. Koning

    1978-01-01

    Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....

  1. High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System

    PubMed Central

    Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram

    2014-01-01

    We have developed a graphics processor unit (GPU)-based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of the 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses the Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of the DOT programs were developed in this study: (1) a conventional C language program augmented by GPU CUDA and CULA routines (C GPU), and (2) a MATLAB program supported by the MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation time of the algorithm on the host CPU and the GPU system is presented for the C and MATLAB implementations. The forward computation uses the finite element method (FEM), and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time achieved for one iteration of the DOT reconstruction with 14610 elements is 0.52 seconds for the C-based GPU program for 2-plane measurements; the corresponding MATLAB-based GPU program took 0.86 seconds. The maximum reconstruction rate achieved is 2 frames per second. PMID:24891848
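
    The Broyden step credited above for part of the speedup replaces a full Jacobian recomputation with a rank-one update. A minimal NumPy sketch of the standard "good Broyden" formula with toy dimensions (not the 14610-element FEM problem):

    ```python
    import numpy as np

    # J_new = J + ((dy - J dx) dx^T) / (dx^T dx): update the Jacobian estimate
    # from one parameter step (dx) and the observed change in output (dy).
    def broyden_update(J, dx, dy):
        return J + np.outer(dy - J @ dx, dx) / (dx @ dx)

    J = np.eye(3)                          # toy initial Jacobian estimate
    dx = np.array([0.10, -0.20, 0.05])     # parameter step
    dy = np.array([0.12, -0.18, 0.04])     # observed forward-model change
    print(broyden_update(J, dx, dy))
    ```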

  2. High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System.

    PubMed

    Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram

    2014-01-01

    We have developed a graphics processor unit (GPU)-based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of the 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses the Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of the DOT programs were developed in this study: (1) a conventional C language program augmented by GPU CUDA and CULA routines (C GPU), and (2) a MATLAB program supported by the MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation time of the algorithm on the host CPU and the GPU system is presented for the C and MATLAB implementations. The forward computation uses the finite element method (FEM), and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time achieved for one iteration of the DOT reconstruction with 14610 elements is 0.52 seconds for the C-based GPU program for 2-plane measurements; the corresponding MATLAB-based GPU program took 0.86 seconds. The maximum reconstruction rate achieved is 2 frames per second.

  3. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    NASA Technical Reports Server (NTRS)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  4. Goldstone R/D High Speed Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Deutsch, L. J.; Jurgens, R. F.; Brokl, S. S.

    1984-01-01

    A digital data acquisition system that meets the requirements of several users (initially the planetary radar program) is planned for general use at Deep Space Station 14 (DSS 14). The system, now partially complete, is controlled by a VAX 11/780 computer that is programmed in high-level languages. A DEC Data Controller is included for moderate-speed data acquisition, low-speed data display, and a digital interface to special user-provided devices. The high-speed data acquisition is performed in devices that are being designed and built at JPL. Analog IF signals are converted to a digitized 50 MHz real signal. This signal is filtered and mixed digitally to baseband, after which its phase code (a PN sequence in the case of planetary radar) is removed. It may then be accumulated (or averaged) and fed into the VAX through an FPS 5210 array processor. Further data processing before entering the VAX is thus possible (computation and accumulation of the power spectra, for example). The system is to be located in the research and development pedestal at DSS 14 for easy access by researchers in radio astronomy as well as telemetry processing and antenna arraying.
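
    The digital chain described above (mix to baseband, remove the PN phase code, accumulate) can be sketched in a few NumPy lines; the sample rate, IF frequency, and chip sequence below are illustrative assumptions, not DSS 14 parameters:

    ```python
    import numpy as np

    # Hedged sketch: real IF samples -> complex baseband -> PN code removal ->
    # accumulation (averaging), mirroring the processing order described above.
    fs, f_if, n = 1.0e6, 2.5e5, 4096
    t = np.arange(n) / fs
    pn = np.where(np.random.default_rng(1).random(n) < 0.5, 1.0, -1.0)  # PN chips
    signal = pn * np.cos(2 * np.pi * f_if * t)           # phase-coded real IF signal
    baseband = signal * np.exp(-2j * np.pi * f_if * t)   # digital mix to baseband
    despread = baseband * pn                             # remove the phase code
    acc = despread.reshape(-1, 64).mean(axis=1)          # accumulate in blocks of 64
    print(acc[:3].real.round(3))                         # ~0.5 each after despreading
    ```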

  5. A computer program for the design and analysis of low-speed airfoils, supplement

    NASA Technical Reports Server (NTRS)

    Eppler, R.; Somers, D. M.

    1980-01-01

    Three new options were incorporated into an existing computer program for the design and analysis of low speed airfoils. These options permit the analysis of airfoils having variable chord (variable geometry), a boundary layer displacement iteration, and the analysis of the effect of single roughness elements. All three options are described in detail and are included in the FORTRAN IV computer program.

  6. Taxiing, Take-Off, and Landing Simulation of the High Speed Civil Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    1999-01-01

    The aircraft industry jointly with NASA is studying enabling technologies for higher speed, longer range aircraft configurations. Higher speeds, higher temperatures, and aerodynamics are driving these newer aircraft configurations towards long, slender, flexible fuselages. Aircraft response during ground operations, although often overlooked, is a concern due to the increased fuselage flexibility. This paper discusses modeling and simulation of the High Speed Civil Transport aircraft during taxiing, take-off, and landing. Finite element models of the airframe for various configurations are used and combined with nonlinear landing gear models to provide a simulation tool to study responses to different ground input conditions. A commercial computer simulation program is used to numerically integrate the equations of motion and to compute estimates of the responses using an existing runway profile. Results show aircraft responses exceeding safe acceptable human response levels.

  7. VMOMS — A computer code for finding moment solutions to the Grad-Shafranov equation

    NASA Astrophysics Data System (ADS)

    Lao, L. L.; Wieland, R. M.; Houlberg, W. A.; Hirshman, S. P.

    1982-08-01

    Title of program: VMOMS
    Catalogue number: ABSH
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)
    Computer: PDP-10/KL10; Installation: ORNL Fusion Energy Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830, USA
    Operating system: TOPS 10
    Programming language used: FORTRAN
    High speed storage required: 9000 words
    No. of bits in a word: 36
    Overlay structure: none
    Peripherals used: line printer, disk drive
    No. of cards in combined program and test deck: 2839
    Card punching code: ASCII

  8. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
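
    The explicit MacCormack scheme named above is a two-step predictor-corrector. A minimal sketch on 1D linear advection (VNAP2 itself solves the 2D time-dependent Navier-Stokes equations; only the time-stepping skeleton is shown):

    ```python
    import numpy as np

    # MacCormack step for u_t + a u_x = 0 on a periodic grid: the predictor
    # uses forward differences, the corrector backward differences.
    def maccormack_step(u, c):                  # c = a*dt/dx (CFL number)
        up = u - c * (np.roll(u, -1) - u)       # predictor
        return 0.5 * (u + up - c * (up - np.roll(up, 1)))  # corrector

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)         # Gaussian pulse
    for _ in range(100):
        u = maccormack_step(u, 0.5)
    print(round(float(u.max()), 3))             # pulse advects with little damping
    ```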

  9. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two-dimensional, time-dependent Navier-Stokes equations. Turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  10. Repeatable, accurate, and high speed multi-level programming of memristor 1T1R arrays for power efficient analog computing applications.

    PubMed

    Merced-Grafals, Emmanuelle J; Dávila, Noraica; Ge, Ning; Williams, R Stanley; Strachan, John Paul

    2016-09-09

    Beyond use as high density non-volatile memories, memristors have potential as synaptic components of neuromorphic systems. We investigated the suitability of tantalum oxide (TaOx) transistor-memristor (1T1R) arrays for such applications, particularly the ability to accurately, repeatedly, and rapidly reach arbitrary conductance states. Programming is performed by applying an adaptive pulsed algorithm that utilizes the transistor gate voltage to control the SET switching operation and increase programming speed of the 1T1R cells. We show the capability of programming 64 conductance levels with <0.5% average accuracy using 100 ns pulses and studied the trade-offs between programming speed and programming error. The algorithm is also utilized to program 16 conductance levels on a population of cells in the 1T1R array, showing robustness to cell-to-cell variability. In general, the proposed algorithm results in an approximately 10× improvement in programming speed over standard algorithms that do not use the transistor gate to control memristor switching. In addition, after only two programming pulses (an initialization pulse followed by a programming pulse), the resulting conductance values are within 12% of the target values in all cases. Finally, endurance of more than 10^6 cycles is shown through open-loop (single pulse) programming across multiple conductance levels using the optimized gate voltage of the transistor. These results are relevant for applications that require high-speed, accurate, and repeatable programming of the cells, such as in neural networks and analog data processing.
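
    The adaptive pulsed algorithm described above is, in outline, closed-loop program-and-verify: apply a SET pulse with the current gate voltage, read the conductance, and adapt the gate voltage until the target level is reached. A behavioral sketch with an invented device response (the paper's actual pulse parameters and TaOx physics are not modeled):

    ```python
    # Hedged program-and-verify sketch; the device model is a toy stand-in.
    def program_cell(read, pulse, target_g, tol=0.005, v_gate=0.9):
        for attempt in range(1, 33):
            pulse(v_gate)                            # one 100 ns SET pulse
            g = read()
            if abs(g - target_g) / target_g <= tol:  # verify against target level
                return attempt
            v_gate *= (target_g / g) ** 0.7          # adapt the gate voltage
        return None                                  # did not converge

    device = {"g": 0.0}
    def fake_pulse(vg):
        device["g"] = 0.06 * vg ** 1.5               # invented SET response curve

    print(program_cell(lambda: device["g"], fake_pulse, target_g=0.05))  # e.g. 2
    ```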

  11. The design and analysis of simple low speed flap systems with the aid of linearized theory computer programs

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.

    1985-01-01

    The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.

  12. High speed turboprop aeroacoustic study (counterrotation). Volume 2: Computer programs

    NASA Technical Reports Server (NTRS)

    Whitfield, C. E.; Mani, R.; Gliebe, P. R.

    1990-01-01

    The isolated counterrotating high speed turboprop noise prediction program developed and funded by GE Aircraft Engines was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in the NASA-Lewis 8 x 6 and 9 x 15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counter rotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program. The results were compared with data taken during the flight test of the B727/UDF (trademark) engine demonstrator aircraft.

  13. High speed turboprop aeroacoustic study (counterrotation). Volume 2: Computer programs

    NASA Astrophysics Data System (ADS)

    Whitfield, C. E.; Mani, R.; Gliebe, P. R.

    1990-07-01

    The isolated counterrotating high speed turboprop noise prediction program developed and funded by GE Aircraft Engines was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in the NASA-Lewis 8 x 6 and 9 x 15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counter rotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program. The results were compared with data taken during the flight test of the B727/UDF (trademark) engine demonstrator aircraft.

  14. The finite element method in low speed aerodynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1975-01-01

    The finite element procedure is shown to be of significant impact in design of the 'computational wind tunnel' for low speed aerodynamics. The uniformity of the mathematical differential equation description, for viscous and/or inviscid, multi-dimensional subsonic flows about practical aerodynamic system configurations, is utilized to establish the general form of the finite element algorithm. Numerical results for inviscid flow analysis, as well as viscous boundary layer, parabolic, and full Navier Stokes flow descriptions verify the capabilities and overall versatility of the fundamental algorithm for aerodynamics. The proven mathematical basis, coupled with the distinct user-orientation features of the computer program embodiment, indicate near-term evolution of a highly useful analytical design tool to support computational configuration studies in low speed aerodynamics.

  15. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  16. Validation of the solar heating and cooling high speed performance (HISPER) computer code

    NASA Technical Reports Server (NTRS)

    Wallace, D. B.

    1980-01-01

    Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. To validate the HISPER computer code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal basis and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.

  17. Computer program for analysis of high speed, single row, angular contact, spherical roller bearing, SASHBEAN. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Aggarwal, Arun K.

    1993-01-01

    The computer program SASHBEAN (Sikorsky Aircraft Spherical Roller High Speed Bearing Analysis) analyzes and predicts the operating characteristics of a Single Row, Angular Contact, Spherical Roller Bearing (SRACSRB). The program runs on an IBM or IBM-compatible personal computer and, for a given set of input data, analyzes the bearing design for its ring deflections (axial and radial), roller deflections, contact areas and stresses, induced axial thrust, rolling element and cage rotation speeds, lubrication parameters, fatigue lives, and amount of heat generated in the bearing. The dynamic loading of rollers due to centrifugal forces and gyroscopic moments, which becomes quite significant at high speeds, is fully considered in this analysis. For a known application and its parameters, the program is also capable of performing steady-state and time-transient thermal analyses of the bearing system. The steady-state analysis capability allows the user to estimate the expected steady-state temperature map in and around the bearing under normal operating conditions. On the other hand, the transient analysis feature provides the user a means to simulate the 'lost lubricant' condition and predict a time-temperature history of various critical points in the system. The bearing's 'time-to-failure' estimate may also be made from this (transient) analysis by considering the bearing as failed when a certain temperature limit is reached in the bearing components. The program is fully interactive and allows the user to get started and access most of its features with a minimum of training. For the most part, the program is menu driven, and adequate help messages are provided to guide a new user through various menu options and data input screens. All input data, both for mechanical and thermal analyses, are read through graphical input screens, thereby eliminating any need for a separate text editor/word processor to edit/create data files. Provision is also available to select and view the contents of output files on the monitor screen if no paper printouts are required. A separate volume (Volume 2) of this documentation describes, in detail, the underlying mathematical formulations, assumptions, and solution algorithms of this program.
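
    The time-transient "lost lubricant" estimate described above can be caricatured with a single lumped thermal node: integrate the heat balance until a failure temperature is crossed. All coefficients below are illustrative assumptions, not SASHBEAN's thermal network:

    ```python
    # Hedged sketch: dT/dt = (Q_gen - h*A*(T - T_amb)) / (m*c); the bearing is
    # declared failed when temp reaches a limit, giving a time-to-failure.
    def time_to_failure(q_gen=900.0,   # heat generation after lubricant loss, W
                        h_a=2.0,       # convective loss coefficient * area, W/K
                        t_amb=90.0, m_c=450.0, t_fail=260.0, dt=1.0):
        t, temp = 0.0, t_amb
        while temp < t_fail:
            temp += dt * (q_gen - h_a * (temp - t_amb)) / m_c
            t += dt
            if t > 3.6e5:              # steady state sits below the limit
                return None
        return t                       # seconds until the failure temperature

    print(time_to_failure())           # ~107 s with these assumed values
    ```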

  18. Advanced Multifunctional Materials for High Speed Combatant Hulls

    DTIC Science & Technology

    2015-11-25

    Final technical report for ONR grant N00014-14-1-0269, 'Advanced Multifunctional Materials for High Speed Combatant Hulls'. PI: Mark S. Mirotznik, Associate Professor, Department of Electrical and Computer Engineering, University of Delaware. Abstract: In this ONR funded project, investigators at the University of Delaware's Department of Electrical and Computer Engineering...

  19. Multiprocessor programming environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M.B.; Fornaro, R.

    Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high speed real time heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single processor systems.

  20. Impact of high-κ dielectric and metal nanoparticles in simultaneous enhancement of programming speed and retention time of nano-flash memory

    NASA Astrophysics Data System (ADS)

    Pavel, Akeed A.; Khan, Mehjabeen A.; Kirawanich, Phumin; Islam, N. E.

    2008-10-01

    A methodology to simulate memory structures with metal nanocrystal islands embedded as a floating gate in a high-κ dielectric material, for simultaneous enhancement of programming speed and retention time, is presented. The computational concept is based on a model for charge transport in nano-scaled structures presented earlier, where quantum mechanical tunneling is defined through a wave impedance analogous to that of transmission line theory. The effects of the substrate-tunnel dielectric conduction band offset and the metal work function on the tunneling current, which determines the programming speed and retention time, are demonstrated. Simulation results confirm that a high-κ dielectric material can increase programming current due to its lower conduction band offset with the substrate, and can be effectively integrated with suitable embedded metal nanocrystals having high work function for efficient data retention. A nano-memory cell designed with silver (Ag) nanocrystals embedded in Al2O3 has been compared with a similar structure consisting of Si nanocrystals in SiO2 to validate the concept.
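
    The sensitivity to conduction-band offset can be illustrated with a generic WKB tunneling estimate; this is a textbook approximation, not the wave-impedance model used in the paper, and the offsets and effective mass below are rough nominal values:

        import math

        HBAR = 1.054e-34   # J*s
        M_E = 9.109e-31    # electron rest mass, kg
        Q = 1.602e-19      # elementary charge, C

        def wkb_transmission(barrier_ev, thickness_nm, m_eff=0.4):
            """WKB tunneling probability for a rectangular barrier."""
            kappa = math.sqrt(2.0 * m_eff * M_E * barrier_ev * Q) / HBAR
            return math.exp(-2.0 * kappa * thickness_nm * 1e-9)

        # Approximate conduction-band offsets with Si: lower for Al2O3 than for SiO2,
        # hence the higher tunneling (programming) current through Al2O3.
        for name, offset_ev in [("Al2O3", 2.1), ("SiO2", 3.2)]:
            print(name, f"T = {wkb_transmission(offset_ev, 3.0):.3e}")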

  1. INFORM: An interactive data collection and display program with debugging capability

    NASA Technical Reports Server (NTRS)

    Cwynar, D. S.

    1980-01-01

    A computer program was developed to aid assembly-language programmers of mini- and microcomputers in solving the man-machine communication problems that arise when scaled integers are involved. In addition to producing displays of quasi-steady state values, INFORM provides an interactive mode for debugging programs, making program patches, and modifying the displays. Auxiliary routines SAMPLE and DATAO add dynamic data acquisition and high speed dynamic display capability to the program. Programming information and flow charts to aid in implementing INFORM on various machines, together with descriptions of all supportive software, are provided. Program modifications to satisfy the individual user's needs are considered.
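
    The scaled-integer display problem reduces to applying a scale factor and offset to each raw word sampled from the target machine; a minimal sketch with hypothetical scaling:

        def to_engineering_units(raw, scale, offset=0.0):
            """Convert a raw scaled integer into operator-readable units."""
            return raw * scale + offset

        # e.g. a 16-bit word representing 0-5000 rpm (hypothetical channel)
        raw_word = 0x4C20
        rpm = to_engineering_units(raw_word, scale=5000.0 / 65535.0)
        print(f"{raw_word:#06x} -> {rpm:.1f} rpm")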

  2. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
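
    DSSLIB itself is not listed here, but its design point, a serial call interface hiding parallel execution, survives in modern numerical libraries; with NumPy linked against a threaded BLAS, the caller's code stays strictly sequential:

        import numpy as np

        # The caller writes ordinary serial code; any parallelism lives inside
        # the library's matrix-multiply routine.
        a = np.random.rand(2000, 2000)
        b = np.random.rand(2000, 2000)
        c = a @ b   # dispatched to the (possibly multithreaded) BLAS
        print(c.shape)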

  3. The numerical simulation of a high-speed axial flow compressor

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Adamczyk, John J.

    1991-01-01

    The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent developments in compressor flow modeling and numerical simulation have the potential to provide needed information in a timely manner. The development of a computer program is described to solve the viscous form of the average-passage equation system for multistage turbomachinery. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics which can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution assessed.

  4. An ultrafast programmable electrical tester for enabling time-resolved, sub-nanosecond switching dynamics and programming of nanoscale memory devices.

    PubMed

    Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu

    2017-12-01

    Recent advancements in the commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed expressly for unravelling the time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. The setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transients within 200 ± 25 ps using time-resolved current-voltage measurements. All instruments in the system are synchronized using LabVIEW, which allows various programming characteristics, such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices, to be measured systematically using short voltage pulses with rise/fall times down to 1 ns and pulse widths down to 1.5 ns (full width at half maximum). Furthermore, the setup has demonstrated switching of Ag5In5Sb60Te30 (AIST) PCM devices within 250 ps, roughly one order of magnitude faster than previously reported. Hence, this novel electrical setup should be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.

  6. Measurement of fault latency in a digital avionic mini processor, part 2

    NASA Technical Reports Server (NTRS)

    Mcgough, J.; Swern, F.

    1983-01-01

    The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are described. Several earlier programs were reprogrammed, expanding the instruction set to capitalize on the full power of the BDX-930 computer. As a final demonstration of fault coverage, an extensive, 3-axis, high-performance flight control computation was added. The stages in the development of a CPU self-test program, emphasizing the relationship between fault coverage, speed, and quantity of instructions, were demonstrated.

  7. Providing a parallel and distributed capability for JMASS using SPEEDES

    NASA Astrophysics Data System (ADS)

    Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob

    2002-07-01

    The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing support for these services, JMASS can better address modeling domains requiring parallel, computationally intense calculations, such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.

  8. High speed cylindrical roller bearing analysis. SKF computer program CYBEAN. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Dyba, G. J.; Kleckner, R. J.

    1981-01-01

    CYBEAN (CYlindrical BEaring ANalysis) was created to detail radially loaded, aligned and misaligned cylindrical roller bearing performance under a variety of operating conditions. Emphasis was placed on detailing the effects of high speed, preload, and system thermal coupling. Roller tilt, skew, radial, circumferential, and axial displacement, as well as flange contact, were considered. Variable housing and flexible out-of-round outer ring geometries, and both steady state and time transient temperature calculations, were enabled. The complete range of elastohydrodynamic contact considerations, employing full and partial film conditions, was treated in the computation of raceway and flange contacts. The practical and correct implementation of CYBEAN is discussed. The capability to execute the program at four different levels of complexity was included. In addition, the program was updated to properly direct roller-to-raceway contact load vectors automatically in those cases where roller or ring profiles have small radii of curvature. Input and output architectures containing guidelines for use and two sample executions are detailed.

  9. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  10. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  11. Generation 1.5 High Speed Civil Transport (HSCT) Exhaust Nozzle Program

    NASA Technical Reports Server (NTRS)

    Thayer, E. B.; Gamble, E. J.; Guthrie, A. R.; Kehret, D. F.; Barber, T. J.; Hendricks, G. J.; Nagaraja, K. S.; Minardi, J. E.

    2004-01-01

    The objective of this program was to conduct an experimental and analytical evaluation of low noise exhaust nozzles suitable for future High-Speed Civil Transport (HSCT) aircraft. The experimental portion of the program involved parametric subscale performance model tests of mixer/ejector nozzles in the takeoff mode, and high-speed tests of mixer/ejectors converted to two-dimensional convergent-divergent (2-D/C-D), plug, and single expansion ramp nozzles (SERN) in the cruise mode. Mixer/ejector results show measured static thrust coefficients at secondary flow entrainment levels of 70 percent of primary flow. Results of the high-speed performance tests showed that relatively long, straight-wall, C-D nozzles could meet the supersonic cruise thrust coefficient goal of 0.982, but the plug, ramp, and shorter C-D nozzles required isentropic contours to reach the same level of performance. The computational fluid dynamic (CFD) study accurately predicted mixer/ejector pressure distributions and shock locations. Heat transfer studies showed that a combination of insulation and convective cooling was more effective than film cooling for nonafterburning, low-noise nozzles. The thrust augmentation study indicated potential benefits for use of ejector nozzles in the subsonic cruise mode if the ejector inlet contains a sonic throat plane.

  12. High Speed Cylindrical Roller Bearing Analysis, SKF Computer Program CYBEAN. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Pirvics, J.

    1978-01-01

    The CYBEAN (CYlindrical BEaring ANalysis) program was created to detail radially loaded, aligned and misaligned cylindrical roller bearing performance under a variety of operating conditions. The models and associated mathematics used within CYBEAN are described. The user is referred to the material for formulation assumptions and algorithm detail.

  13. A vectorization of the Jameson-Caughey NYU transonic swept-wing computer program FLO-22-V1 for the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.

    1978-01-01

    The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. High computational speed and an extended grid domain were characteristics of FLO-22-V1. The new program was not the optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it proved that vector operations can readily be implemented to increase the computation rate of the algorithm.
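
    The control-vector switch between centered and upwind differencing can be written in array form with a mask; a schematic NumPy fragment in which all variables are hypothetical:

        import numpy as np

        phi = np.random.rand(64)                 # potential along one grid line
        mach = np.random.uniform(0.5, 1.5, 64)   # local Mach number at each point

        central = np.zeros_like(phi)
        upwind = np.zeros_like(phi)
        central[1:-1] = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]   # stencil centered at i
        upwind[2:] = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]      # same stencil shifted upstream

        supersonic = mach > 1.0                  # the "control vector"
        d2phi = np.where(supersonic, upwind, central)
        print(d2phi[:8])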

  14. On The Export Control Of High Speed Imaging For Nuclear Weapons Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Scott Avery; Altherr, Michael Robert

    Since the Manhattan Project, the use of high-speed photography and its cousins, flash radiography [1] and schlieren photography, has been a technological proliferation concern. Indeed, like the supercomputer, the development of high-speed photography as we now know it essentially grew out of the nuclear weapons program at Los Alamos [2,3,4]. Naturally, during the course of the last 75 years the technology associated with computers and cameras has been export controlled by the United States and others to prevent both proliferation among non-P5 nations and technological parity among potential adversaries within the P5 nations. Here we revisit these issues as they relate to high-speed photographic technologies and make recommendations about how future restrictions, if any, should be guided.

  15. Reynolds Number Effects on Leading Edge Radius Variations of a Supersonic Transport at Transonic Conditions

    NASA Technical Reports Server (NTRS)

    Rivers, S. M. B.; Wahls, R. A.; Owens, L. R.

    2001-01-01

    A computational study focused on leading-edge radius effects and associated Reynolds number sensitivity for a High Speed Civil Transport configuration at transonic conditions was conducted as part of NASA's High Speed Research Program. The primary purposes were to assess the capabilities of computational fluid dynamics to predict Reynolds number effects for a range of leading-edge radius distributions on a second-generation supersonic transport configuration, and to evaluate the potential performance benefits of each at the transonic cruise condition. Five leading-edge radius distributions are described, and the potential performance benefit including the Reynolds number sensitivity for each is presented. Computational results for two leading-edge radius distributions are compared with experimental results acquired in the National Transonic Facility over a broad Reynolds number range.

  16. Lecture notes in economics and mathematical system. Volume 150: Supercritical wing sections 3

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Garabedian, P.; Korn, D.

    1977-01-01

    Application of computational fluid dynamics to the design and analysis of supercritical wing sections is discussed. Computer programs used to study the flight of modern aircraft at high subsonic speeds are listed and described. The cascades of shockless transonic airfoils that are expected to increase the efficiency of compressors and turbines are included.

  17. NASA information sciences and human factors program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Data Systems Program consists of research and technology devoted to controlling, processing, storing, manipulating, and analyzing space-derived data. The objectives of the program are to provide the technology advancements needed to enable affordable utilization of space-derived data, to increase substantially the capability for future missions of on-board processing and recording and to provide high-speed, high-volume computational systems that are anticipated for missions such as the evolutionary Space Station and Earth Observing System.

  18. High speed cylindrical roller bearing analysis, SKF computer program CYBEAN. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Pirvics, J.

    1978-01-01

    CYBEAN (CYlindrical BEaring ANalysis) was created to detail radially loaded, aligned and misaligned cylindrical roller bearing performance under a variety of operating conditions. Emphasis was placed on detailing the effects of high speed, preload, and system thermal coupling. Roller tilt, skew, radial, circumferential, and axial displacement, as well as flange contact, were considered. Variable housing and flexible out-of-round outer ring geometries, and both steady state and time transient temperature calculations, were enabled. The complete range of elastohydrodynamic contact considerations, employing full and partial film conditions, was treated in the computation of raceway and flange contacts. Input and output architectures containing guidelines for use and a sample execution are detailed.

  19. High-speed inlet research program and supporting analysis

    NASA Technical Reports Server (NTRS)

    Coltrin, Robert E.

    1990-01-01

    The technology challenges faced by the high speed inlet designer are discussed by describing the considerations that went into the design of the Mach 5 research inlet. It is shown that the emerging three-dimensional viscous computational fluid dynamics (CFD) flow codes, together with small scale experiments, can be used to guide larger scale full inlet systems research. In turn, the results of the large scale research, if the experiments are properly instrumented, can be used to validate or at least calibrate the CFD codes.

  20. Application of Multi-Frequency Modulation (MFM) for High-Speed Data Communications to a Voice Frequency Channel

    DTIC Science & Technology

    1990-06-01

    The reader is cautioned that computer programs developed in this research may not have been exercised for all cases of interest. This report discusses the application of Multi-Frequency Modulation (MFM) encoding formats for high-speed data communications over a voice frequency channel. Previous applications of these encoding formats were on industry standard computers (PC) over a 16-20 kHz channel.

  1. Design and Operating Characteristics of High-Speed, Small-Bore, Angular-Contact Ball Bearings

    NASA Technical Reports Server (NTRS)

    Pinel, Stanley I.; Signer, Hans R.; Zaretsky, Erwin V.

    1998-01-01

    The computer program SHABERTH was used to analyze 35-mm-bore, angular-contact ball bearings designed and manufactured for high-speed turbomachinery applications. Parametric tests of the bearings were conducted on a high-speed, high-temperature bearing tester, and the results were compared with the computer predictions. Four bearing and cage designs were studied. The bearings were lubricated either by jet lubrication or through the split inner ring, with and without outer-ring cooling. The predicted bearing life decreased with increasing speed because of increased operating contact stresses caused by changes in contact angle and centrifugal load. For thrust loads only, the difference in calculated life for the 24 deg. and 30 deg. contact-angle bearings was insignificant. However, for combined loading, the 24 deg. contact-angle bearing gave longer life. For split-inner-ring bearings, optimal operating conditions were obtained with a 24 deg. contact angle and an inner-ring, land-guided cage, using outer-ring cooling in conjunction with low lubricant flow rates. Lower temperature and power losses were obtained with a single-outer-ring, land-guided cage for the 24 deg. contact-angle bearing having a relieved inner ring and partially relieved outer ring. Inner-ring temperatures were independent of lubrication mode and cage design. In comparison with measured values, reasonably good engineering correlation was obtained using the computer program SHABERTH for predicted bearing power loss and for inner- and outer-ring temperatures. The Parker formula for XCAV (a measure of oil volume in the bearing cavity, used in SHABERTH) may need to be refined to reflect bearing lubrication mode, cage design, and location of the cage-controlling land.

  2. CNSFV code development, virtual zone Navier-Stokes computations of oscillating control surfaces and computational support of the laminar flow supersonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Klopfer, Goetz H.

    1993-01-01

    The work performed during the past year on this cooperative agreement covered two major areas and two lesser ones. The two major items included further development and validation of the Compressible Navier-Stokes Finite Volume (CNSFV) code and providing computational support for the Laminar Flow Supersonic Wind Tunnel (LFSWT). The two lesser items involve a Navier-Stokes simulation of an oscillating control surface at transonic speeds and improving the basic algorithm used in the CNSFV code for faster convergence rates and more robustness. The work done in all four areas is in support of the High Speed Research Program at NASA Ames Research Center.

  3. Optimization of Angular-Momentum Biases of Reaction Wheels

    NASA Technical Reports Server (NTRS)

    Lee, Clifford; Lee, Allan

    2008-01-01

    RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly)] is a computer program designed for computing angular momentum biases for the reaction wheels used to provide spacecraft pointing in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, to prevent operation of reaction wheels at unsafely high speeds while minimizing time in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization in which the maximum wheel speed limit is a hard constraint and the cost functional increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving a large quantity of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
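
    The structure of the optimization (hard upper speed limit, soft penalty below a low-speed threshold, Nelder-Mead search over initial biases) can be sketched as follows; the wheel-rate model and all limits are hypothetical placeholders:

        import numpy as np
        from scipy.optimize import minimize

        W_MAX, W_LOW = 2000.0, 300.0             # rpm limits (hypothetical)
        t = np.linspace(0.0, 1.0, 200)
        h_env = 700.0 * np.sin(2 * np.pi * t)    # momentum absorbed per wheel (hypothetical)

        def wheel_rates(bias):
            # momentum conservation: rate profile = initial bias + absorbed momentum
            return bias[:, None] + h_env[None, :]

        def cost(bias):
            w = np.abs(wheel_rates(bias))
            below = np.clip(W_LOW - w, 0.0, None).sum()         # soft low-speed penalty
            above = 1e6 * np.clip(w - W_MAX, 0.0, None).sum()   # hard max-speed limit
            return below + above

        res = minimize(cost, x0=np.array([500.0, 500.0, 500.0]), method="Nelder-Mead")
        print("optimized biases (rpm):", res.x)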

  4. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.

    1990-01-01

    This research involves the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers.

  5. Inventory-transportation integrated optimization for maintenance spare parts of high-speed trains

    PubMed Central

    Wang, Jiaxi; Wang, Huasheng; Wang, Zhongkai; Li, Jian; Lin, Ruixi; Xiao, Jie; Wu, Jianping

    2017-01-01

    This paper presents a 0–1 programming model aimed at obtaining the optimal inventory policy and transportation mode for maintenance spare parts of high-speed trains. To obtain the model parameters for occasionally-replaced spare parts, a demand estimation method based on the maintenance strategies of China’s high-speed railway system is proposed. In addition, we analyse the shortage time using PERT, and then calculate the unit time shortage cost from the viewpoint of train operation revenue. Finally, a real-world case study from Shanghai Depot is conducted to demonstrate our method. Computational results offer effective and efficient decision support for inventory managers. PMID:28472097
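
    As a toy illustration of the 0-1 structure, binary air-versus-rail choices per part trading transport cost against expected shortage cost, solved by brute-force enumeration with invented costs:

        from itertools import product

        # (holding cost, rail cost, air cost, shortage cost if the slow mode is used)
        data = [(40.0, 10.0, 60.0, 120.0),    # hypothetical part 1
                (15.0, 8.0, 45.0, 20.0),      # hypothetical part 2
                (25.0, 9.0, 50.0, 80.0)]      # hypothetical part 3

        best = None
        for x in product((0, 1), repeat=len(data)):   # x[i] = 1 -> ship part i by air
            total = sum(hold + (air if xi else rail + short)
                        for xi, (hold, rail, air, short) in zip(x, data))
            if best is None or total < best[0]:
                best = (total, x)
        print("minimum cost:", best[0], "air-shipment pattern:", best[1])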

  6. Genetic Parallel Programming: design and implementation.

    PubMed

    Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong

    2006-01-01

    This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
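
    A rough sketch of the evolutionary loop only, using a mutation-based hill climber over linear register-machine programs fitting x^2 + x; the instruction set and fitness cases are invented and far simpler than the MAP's:

        import random

        OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

        def run(program, x):
            reg = [x, 1.0, 0.0, 0.0]                  # r0 holds the input
            for op, dst, a, b in program:
                reg[dst] = OPS[op](reg[a], reg[b])
            return reg[3]                             # r3 is the output register

        def random_instr():
            return (random.randrange(len(OPS)), random.randrange(1, 4),
                    random.randrange(4), random.randrange(4))

        def fitness(program):                         # squared error on 11 cases
            return sum((run(program, x) - (x * x + x)) ** 2 for x in range(-5, 6))

        best = [random_instr() for _ in range(6)]
        best_fit = fitness(best)
        for _ in range(20000):                        # mutate one instruction at a time
            cand = list(best)
            cand[random.randrange(len(cand))] = random_instr()
            f = fitness(cand)
            if f <= best_fit:
                best, best_fit = cand, f
        print("residual error:", best_fit)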

  7. Combining high performance simulation, data acquisition, and graphics display computers

    NASA Technical Reports Server (NTRS)

    Hickman, Robert J.

    1989-01-01

    Issues involved in the continuing development of an advanced simulation complex are discussed. This approach provides the capability to perform the majority of tests on advanced systems non-destructively. The controlled test environments can be replicated to examine the response of the systems under test to alternative treatments of the system control design, or to test the function and qualification of specific hardware. Field tests verify that the elements simulated in the laboratories are sufficient. The digital computer is hosted by a Digital Equipment Corp. MicroVAX computer with an Aptec Computer Systems Model 24 I/O computer performing the communication function. An Applied Dynamics International AD100 performs the high speed simulation computing, and an Evans and Sutherland PS350 performs on-line graphics display. A Scientific Computer Systems SCS40 acts as a high performance FORTRAN program processor to support the complex by generating, from programs coded in FORTRAN, the numerous large files that are required for the real time processing. Four programming languages are involved in the process: FORTRAN, ADSIM, ADRIO, and STAPLE. FORTRAN is employed on the MicroVAX host to initialize and terminate the simulation runs on the system. The generation of the data files on the SCS40 also is performed with FORTRAN programs. ADSIM and ADRIO are used to program the processing elements of the AD100 and its IOCP processor. STAPLE is used to program the Aptec DIP and DIA processors.

  8. Measuring Speed Using a Computer--Several Techniques.

    ERIC Educational Resources Information Center

    Pearce, Jon M.

    1988-01-01

    Introduces three different techniques to facilitate the measurement of speed and the associated kinematics and dynamics using a computer. Discusses sensing techniques using optical or ultrasonic sensors, interfacing with a computer, software routines for the interfaces, and other applications. Provides circuit diagrams, pictures, and a program to…

  9. Reynolds Number Effects on a Supersonic Transport at Transonic Conditions

    NASA Technical Reports Server (NTRS)

    Wahls, R. N.; Owens, L. R.; Rivers, S. M. B.

    2001-01-01

    A High Speed Civil Transport configuration was tested in the National Transonic Facility at the NASA Langley Research Center as part of NASA's High Speed Research Program. The primary purposes of the tests were to assess Reynolds number scale effects and the high Reynolds number aerodynamic characteristics of a realistic, second generation supersonic transport while providing data for the assessment of computational methods. The tests included longitudinal and lateral/directional studies at low speed high-lift and transonic conditions across a range of Reynolds numbers from that available in conventional wind tunnels to near flight conditions. Results are presented which focus on both the Reynolds number and static aeroelastic sensitivities of longitudinal characteristics at Mach 0.90 for a configuration without an empennage.

  10. A General Method for Automatic Computation of Equilibrium Compositions and Theoretical Rocket Performance of Propellants

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Zeleznik, Frank J.; Huff, Vearl N.

    1959-01-01

    A general computer program for chemical equilibrium and rocket performance calculations was written for the IBM 650 computer with 2000 words of drum storage, 60 words of high-speed core storage, indexing registers, and floating point attachments. The program is capable of carrying out combustion and isentropic expansion calculations on a chemical system that may include as many as 10 different chemical elements, 30 reaction products, and 25 pressure ratios. In addition to the equilibrium composition, temperature, and pressure, the program calculates specific impulse, specific impulse in vacuum, characteristic velocity, thrust coefficient, area ratio, molecular weight, Mach number, specific heat, isentropic exponent, enthalpy, entropy, and several thermodynamic first derivatives.
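
    The listed outputs follow from standard ideal-rocket relations once the chamber state is known; a frozen-flow sketch with hypothetical propellant values, not the program's iterative equilibrium chemistry:

        import math

        G0 = 9.80665        # standard gravity, m/s^2
        R_UNIV = 8314.46    # universal gas constant, J/(kmol*K)

        def ideal_performance(t_c, p_c, p_e, gamma, mol_wt):
            """Frozen-flow exhaust velocity, Isp, and characteristic velocity."""
            r = R_UNIV / mol_wt
            v_e = math.sqrt(2.0 * gamma * r * t_c / (gamma - 1.0)
                            * (1.0 - (p_e / p_c) ** ((gamma - 1.0) / gamma)))
            c_star = (math.sqrt(r * t_c / gamma)
                      * ((gamma + 1.0) / 2.0) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))
            return v_e, v_e / G0, c_star

        v_e, isp, c_star = ideal_performance(t_c=3400.0, p_c=6.9e6,
                                             p_e=101325.0, gamma=1.2, mol_wt=22.0)
        print(f"v_e = {v_e:.0f} m/s, Isp = {isp:.0f} s, c* = {c_star:.0f} m/s")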

  11. High-speed multiple sequence alignment on a reconfigurable platform.

    PubMed

    Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf

    2006-01-01

    Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences by popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
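
    The kernel mapped onto the systolic array is an ordinary dynamic-programming distance between two sequences; in software the recurrence takes a few lines (unit edit costs assumed):

        def edit_distance(s, t):
            """Classic O(len(s)*len(t)) dynamic-programming distance."""
            prev = list(range(len(t) + 1))
            for i, a in enumerate(s, start=1):
                curr = [i]
                for j, b in enumerate(t, start=1):
                    curr.append(min(prev[j] + 1,                # deletion
                                    curr[j - 1] + 1,            # insertion
                                    prev[j - 1] + (a != b)))    # substitution
                prev = curr
            return prev[-1]

        print(edit_distance("GATTACA", "GCATGCU"))   # -> 4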

  12. Preliminary design of a high speed civil transport: The Opus 0-001

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Based on research into the technology and issues surrounding the design, development, and operation of a second generation High Speed Civil Transport (HSCT), the Opus 0-001 team completed the preliminary design of a sixty-passenger, three-engine aircraft. The design of this aircraft was performed using a computer program which the team wrote. This program automatically computed the geometric, aerodynamic, and performance characteristics of an aircraft whose preliminary geometry was specified. The Opus 0-001 aircraft was designed for a cruise Mach number of 2.2 and a range of 4,700 nautical miles, and its design was based on current or very near term technology. Its small size was a consequence of an emphasis on a profitable, low-cost program, capable of delivering tomorrow's passengers in style and comfort at prices that make it an attractive competitor to both current and future subsonic transport aircraft. Several hundred thousand cases of cruise Mach number, aircraft size, and cost breakdown were investigated to obtain costs and revenues from which profit was calculated. The projected unit flyaway cost was $92.0 million per aircraft.

  13. Configuration Aerodynamics: Past - Present - Future

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Agrawal, Shreekant; Bencze, Daniel P.; Kulfan, Robert M.; Wilson, Douglas L.

    1999-01-01

    The Configuration Aerodynamics (CA) element of the High Speed Research (HSR) program is managed by a joint NASA and Industry team, referred to as the Technology Integration Development (ITD) team. This team is responsible for the development of a broad range of technologies for improved aerodynamic performance and stability and control characteristics at subsonic to supersonic flight conditions. These objectives are pursued through the aggressive use of advanced experimental test techniques and state of the art computational methods. As the HSR program matures and transitions into the next phase the objectives of the Configuration Aerodynamics ITD are being refined to address the drag reduction needs and stability and control requirements of High Speed Civil Transport (HSCT) aircraft. In addition, the experimental and computational tools are being refined and improved to meet these challenges. The presentation will review the work performed within the Configuration Aerodynamics element in 1994 and 1995 and then discuss the plans for the 1996-1998 time period. The final portion of the presentation will review several observations of the HSR program and the design activity within Configuration Aerodynamics.

  14. Applied Computational Fluid Dynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  15. Bibliography on propulsion airframe integration technologies for high-speed civil transport applications, 1980-1991

    NASA Technical Reports Server (NTRS)

    Anderson, David J.; Mizukami, Masashi

    1993-01-01

    NASA has initiated the High Speed Research (HSR) program with the goal of developing technologies for a new-generation, economically viable, environmentally acceptable, supersonic transport (SST) called the High Speed Civil Transport (HSCT). A significant part of this effort is expected to be in multidisciplinary systems integration, such as propulsion airframe integration (PAI). In order to assimilate the knowledge database on PAI for SST-type aircraft, a bibliography on this subject was compiled. The bibliography contains over 1200 entries, full abstracts, and indexes. Related topics are also covered, such as the following: engine inlets, engine cycles, nozzles, existing supersonic cruise aircraft, noise issues, computational fluid dynamics, aerodynamics, and external interference. All identified documents from 1980 through early 1991 are included; this covers the latter part of the NASA Supersonic Cruise Research (SCR) program and the beginnings of the HSR program. In addition, some pre-1980 documents of significant merit or reference value are also included. The references were retrieved via a computerized literature search using the NASA RECON database system.

  16. A finite element approach for solution of the 3D Euler equations

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.

    1986-01-01

    Prediction of thermal deformations and stresses has prime importance in the design of the next generation of high speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended for three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high speed compressible flows.
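
    The Taylor-series-in-time discretization named above is also the idea behind the classical Lax-Wendroff scheme; a one-dimensional advection sketch of the same principle, in finite differences rather than the paper's finite elements:

        import numpy as np

        # Lax-Wendroff for u_t + c u_x = 0 from a second-order Taylor series in time
        n, c = 200, 1.0
        dx = 1.0 / n
        dt = 0.4 * dx / c              # CFL number 0.4
        nu = c * dt / dx
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)          # Gaussian pulse

        for _ in range(int(0.4 / dt)):               # advect the pulse to x ~ 0.7
            up, um = np.roll(u, -1), np.roll(u, 1)   # periodic neighbours
            u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
        print("peak now near x =", x[np.argmax(u)])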

  17. Data Handling and Communication

    NASA Astrophysics Data System (ADS)

    Hemmer, Frédéric; Innocenti, Pier Giorgio

    The following sections are included: * Introduction * Computing Clusters and Data Storage: The New Factory and Warehouse * Local Area Networks: Organizing Interconnection * High-Speed Worldwide Networking: Accelerating Protocols * Detector Simulation: Events Before the Event * Data Analysis and Programming Environment: Distilling Information * World Wide Web: Global Networking * References

  18. Small Business Innovation Research (SBIR) Program, FY 1994. Program Solicitation 94.1, Closing Date: 14 January 1994

    DTIC Science & Technology

    1994-01-01

    is to design and develop a diode laser and associated driver circuitry with high peak power, high pulse repetition frequency (PRF), and good beam...Computer modeling tools shall be used to design and optimize a breadboard model of a multi-terminal high speed ring bus for flight critical applications... design, fabricate, and test a fiber optic interface device which will improve coupling of high energy, pulsed lasers into commercial fiber optics at a

  19. Low temperature Grüneisen parameter of cubic ionic crystals

    NASA Astrophysics Data System (ADS)

    Batana, Alicia; Monard, María C.; Rosario Soriano, María

    1987-02-01

    Title of program: CAROLINA Catalogue number: AATG Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue) Computer: IBM/370, Model 158; Installation: Centro de Tecnología y Ciencia de Sistemas, Universidad de Buenos Aires Operating system: VM/370 Programming language used: FORTRAN High speed storage required: 3 kwords No. of bits in a word: 32 Peripherals used: disk IBM 3340/70 MB No. of lines in combined program and test deck: 447

  20. Local-Area-Network Simulator

    NASA Technical Reports Server (NTRS)

    Gibson, Jim; Jordan, Joe; Grant, Terry

    1990-01-01

    Local Area Network Extensible Simulator (LANES) computer program provides method for simulating performance of high-speed local-area-network (LAN) technology. Developed as design and analysis software tool for networking computers on board proposed Space Station. Load, network, link, and physical layers of layered network architecture all modeled. Mathematically models according to different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.

  1. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  2. Development of a Computing Cluster At the University of Richmond

    NASA Astrophysics Data System (ADS)

    Carbonneau, J.; Gilfoyle, G. P.; Bunn, E. F.

    2010-11-01

    The University of Richmond has developed a computing cluster to support the massive simulation and data analysis requirements for programs in intermediate-energy nuclear physics and cosmology. It is a 20-node, 240-core system running Red Hat Enterprise Linux 5. We have built and installed the physics software packages (Geant4, gemc, MADmap...) and developed shell and Perl scripts for running those programs on the remote nodes. The system has a theoretical processing peak of about 2500 GFLOPS. Testing with the High Performance Linpack (HPL) benchmarking program (one of the standard benchmarks used by the TOP500 list of fastest supercomputers) resulted in speeds of over 900 GFLOPS. The difference between the maximum and measured speeds is due to limitations in the communication speed among the nodes, creating a bottleneck for large memory problems: as HPL sends data between nodes, the gigabit Ethernet connection cannot keep up with the processing power. We will show how both the theoretical and actual performance of the cluster compares with other current and past clusters, as well as the cost per GFLOP. We will also examine the scaling of the performance when distributed to increasing numbers of nodes.
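
    The GFLOPS figures come from the HPL benchmark; a single-node estimate of the same kind can be made by timing a dense matrix multiply, which costs about 2n^3 floating-point operations:

        import time
        import numpy as np

        n = 2000
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)

        start = time.perf_counter()
        a @ b
        elapsed = time.perf_counter() - start
        print(f"approx. {2 * n**3 / elapsed / 1e9:.1f} GFLOP/s on this node")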

  3. New technology in turbine aerodynamics.

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.; Moffitt, T. P.

    1972-01-01

    Cursory review of some recent work that has been done in turbine aerodynamic research. Topics discussed include the aerodynamic effect of turbine coolant, high work-factor (ratio of stage work to square of blade speed) turbines, and computer methods for turbine design and performance prediction. Experimental cooled-turbine aerodynamics programs using two-dimensional cascades, full annular cascades, and cold rotating turbine stage tests are discussed with some typical results presented. Analytically predicted results for cooled blade performance are compared to experimental results. The problems and some of the current programs associated with the use of very high work factors for fan-drive turbines of high-bypass-ratio engines are discussed. Computer programs have been developed for turbine design-point performance, off-design performance, supersonic blade profile design, and the calculation of channel velocities for subsonic and transonic flowfields. The use of these programs for the design and analysis of axial and radial turbines is discussed.

  4. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  5. Proceedings of the Ship Control Systems Symposium (7th) Held in Bath, England on 24-27 September 1981. Volume 2

    DTIC Science & Technology

    1984-09-27

    more effectively structured and transportable simulation program modules and powerful support software, are already in place for current use. The early...incorporates the various limits and conditions described for the major acceleration categories. (14) Speed Loop: this module is executed when the shaft speed...available, high confidence models and modules. A great leverage is gained by using generally available general purpose computers and associated support

  6. MIDAS, prototype Multivariate Interactive Digital Analysis System, Phase 1. Volume 2: Diagnostic system

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Christenson, D.; Gordon, M.; Kistler, R.; Lampert, S.; Marshall, R.; Mclaughlin, R.

    1974-01-01

    The MIDAS System is a third-generation, fast, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in Phase I of the overall program are described. The system contains a minicomputer to control the various high-speed processing elements in the data path and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 2 x 10^5 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation. Diagnostic programs used to test MIDAS' operations are presented.
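
    The decision rule MIDAS implements in hardware is the standard multivariate-Gaussian maximum likelihood classifier: assign each pixel to the class with the largest log-likelihood. A software sketch with invented class statistics:

        import numpy as np

        def ml_classify(pixels, means, covs):
            """Assign each pixel to the class maximizing the Gaussian log-likelihood."""
            scores = []
            for mu, cov in zip(means, covs):
                inv = np.linalg.inv(cov)
                _, logdet = np.linalg.slogdet(cov)
                d = pixels - mu
                scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d)))
            return np.argmax(scores, axis=0)

        # two hypothetical classes in a 4-band spectral space
        means = [np.array([50.0, 60.0, 40.0, 30.0]), np.array([80.0, 90.0, 70.0, 65.0])]
        covs = [np.eye(4) * 25.0, np.eye(4) * 36.0]
        pixels = np.random.rand(10, 4) * 100.0
        print(ml_classify(pixels, means, covs))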

  7. FPGA Boot Loader and Scrubber

    NASA Technical Reports Server (NTRS)

    Wade, Randall S.; Jones, Bailey

    2009-01-01

    A computer program loads configuration code into a Xilinx field-programmable gate array (FPGA), reads back and verifies that code, reloads the code if an error is detected, and monitors the performance of the FPGA for errors in the presence of radiation. The program consists mainly of a set of VHDL files (wherein "VHDL" signifies "VHSIC Hardware Description Language" and "VHSIC" signifies "very-high-speed integrated circuit").
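
    The scrub cycle (read back, compare against the golden configuration, reload on mismatch) reduces to a simple loop; the hardware-access callables below are hypothetical placeholders, since the real program is VHDL acting on the configuration port:

        import time

        def scrub_loop(golden, read_config, write_config, cycles=10, period_s=1.0):
            """Verify the FPGA configuration each cycle; reload on any mismatch.

            read_config/write_config are hypothetical hardware-access callables.
            """
            for _ in range(cycles):
                if read_config() != golden:     # radiation-induced upset detected
                    write_config(golden)        # rewrite the known-good bitstream
                time.sleep(period_s)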

  8. Orthorectification by Using Gpgpu Method

    NASA Astrophysics Data System (ADS)

    Sahin, H.; Kulur, S.

    2012-07-01

    Thanks to the nature of graphics processing, newly released products offer highly parallel processing units with high memory bandwidth and computational power of more than a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with very fast computing capability and much higher memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid development of GPU programmability and capability has attracted the attention of researchers dealing with complex problems that require heavy calculation, giving rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful hardware that is cheap and affordable, so they have become an alternative to conventional processors; graphics chips, once fixed-function application hardware, have been transformed into modern, powerful, and programmable processors to meet general-purpose needs. The biggest obstacle is that graphics processing units use programming models unlike current programming methods: efficient GPU programming requires re-coding the algorithm with the limitations and structure of the graphics hardware in mind, and these multi-core processors cannot be programmed with traditional event-procedure methods. GPUs are especially effective when the same computing steps are repeated over many data elements and high accuracy is needed, providing results quickly and accurately, whereas CPUs, which perform one computation at a time under flow control, are slower. This study covers how general-purpose parallel programming and the computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded using the GPGPU method and the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was coded using the GPGPU method and CUDA; sample images of various sizes were processed and the results of the program evaluated. The GPGPU method is especially suited to repeating the same computations on highly dense data, finding the solution quickly and accurately.
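
    Projective rectification is a natural data-parallel kernel: every output pixel applies the same 3x3 homography independently, which is what makes the one-thread-per-pixel CUDA mapping effective. A NumPy sketch of the per-pixel mapping with a hypothetical matrix:

        import numpy as np

        h = np.array([[1.02, 0.05, 3.0],      # hypothetical 3x3 homography
                      [0.01, 0.98, -2.0],
                      [1e-5, 2e-5, 1.0]])

        rows, cols = 480, 640
        v, u = np.mgrid[0:rows, 0:cols]       # output pixel grid
        ones = np.ones_like(u)
        src = h @ np.stack([u.ravel(), v.ravel(), ones.ravel()])
        x, y = src[0] / src[2], src[1] / src[2]   # perspective divide
        print(x.reshape(rows, cols).shape)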

  9. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    PubMed

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-05-04

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether the effort invested in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x up to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.
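
    The "democratic" population-vector readout described above has a simple closed form: each neuron votes for its preferred direction with a weight given by its firing rate, and the decision is the angle of the rate-weighted vector sum. A minimal NumPy sketch with made-up rates (not the paper's simulation):

        import numpy as np

        n = 100                                       # neurons in the output population
        preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)  # preferred directions

        rng = np.random.default_rng(0)
        target = np.pi / 3                            # direction the network should report
        # Illustrative tuning: rates peak for neurons whose preference matches the target.
        rates = np.exp(np.cos(preferred - target)) + 0.1 * rng.random(n)

        # Population vector: rate-weighted sum of unit vectors, one independent "vote" each.
        decision = np.arctan2(rates @ np.sin(preferred), rates @ np.cos(preferred))
        print(f"decoded {decision:.3f} rad vs target {target:.3f} rad")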

  10. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  11. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one-transport-equation, or a two-transport-equation model. Interior grid points are computed using the explicit MacCormack scheme, with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference-plane characteristic scheme with the viscous terms treated as source terms. Several internal and internal/external flow calculations are presented.
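
    For reference, the explicit MacCormack scheme named above is a two-step predictor-corrector method. A minimal sketch for the 1-D linear advection equation (a toy stand-in for the Navier-Stokes system VNAP2 actually solves, with assumed periodic boundaries):

        import numpy as np

        nx, c = 200, 1.0
        dx = 1.0 / nx
        dt = 0.4 * dx / c                               # CFL-limited time step
        x = np.linspace(0.0, 1.0, nx)
        u = np.exp(-200 * (x - 0.3) ** 2)               # initial Gaussian pulse

        for _ in range(100):
            # Predictor: forward difference.
            up = u - c * dt / dx * (np.roll(u, -1) - u)
            # Corrector: backward difference on predicted values, then average.
            u = 0.5 * (u + up - c * dt / dx * (up - np.roll(up, 1)))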

  12. The development of an interim generalized gate logic software simulator

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Nemeroff, S.

    1985-01-01

    A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of IGGLOSS is its high-speed simulation: 9.5 x 10(exp 6) gates/CPU second for nonfaulted circuits and 4.4 x 10(exp 6) gates/CPU second for faulted circuits on a VAX 11/780 host computer.

  13. Investigations for Supersonic Transports at Transonic and Supersonic Conditions

    NASA Technical Reports Server (NTRS)

    Rivers, S. Melissa B.; Owens, Lewis R.; Wahls, Richard A.

    2007-01-01

    Several computational studies were conducted as part of NASA's High-Speed Research Program. Results of turbulence model comparisons from two studies on supersonic transport configurations performed during the program are given. The effects of grid topology and the representation of the actual wind tunnel model geometry are also investigated. Results are presented for both transonic conditions at Mach 0.90 and supersonic conditions at Mach 2.48. A feature of these two studies was the availability of higher Reynolds number wind tunnel data with which to compare the computational results. The transonic wind tunnel data was obtained in the National Transonic Facility at NASA Langley, and the supersonic data was obtained in the Boeing Polysonic Wind Tunnel. The computational data was acquired using a state-of-the-art Navier-Stokes flow solver with a wide range of turbulence models implemented. The results show that the computed forces compare reasonably well with the experimental data, with the Baldwin-Lomax with Degani-Schiff modifications and the Baldwin-Barth models showing the best agreement for the transonic conditions and the Spalart-Allmaras model showing the best agreement for the supersonic conditions. The transonic results were more sensitive to the choice of turbulence model than were the supersonic results.

  14. High performance computing and communications: Advancing the frontiers of information technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  15. shinyheatmap: Ultra fast low memory heatmap web interface for big data genomics.

    PubMed

    Khomtchouk, Bohdan B; Hennessy, James R; Wahlestedt, Claes

    2017-01-01

    Transcriptomics, metabolomics, metagenomics, and various other next-generation sequencing (-omics) fields are known for their production of large datasets, especially across single-cell sequencing studies. Visualizing such big data has posed technical challenges in biology, both in terms of available computational resources as well as programming acumen. Since heatmaps are used to depict high-dimensional numerical data as a colored grid of cells, efficiency and speed have often proven to be critical considerations in the process of successfully converting data into graphics. For example, rendering interactive heatmaps from large input datasets (e.g., 100k+ rows) has been computationally infeasible on both desktop computers and web browsers. In addition to memory requirements, programming skills and knowledge have frequently been barriers to entry for creating highly customizable heatmaps. We propose shinyheatmap: an advanced user-friendly heatmap software suite capable of efficiently creating highly customizable static and interactive biological heatmaps in a web browser. shinyheatmap is a low memory footprint program, making it particularly well-suited for the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions. Also, shinyheatmap features a built-in high performance web plug-in, fastheatmap, for rapidly plotting interactive heatmaps of datasets as large as 10(exp 5)-10(exp 7) rows within seconds, effectively shattering previous performance benchmarks of heatmap rendering speed. shinyheatmap is hosted online as a freely available web server with an intuitive graphical user interface: http://shinyheatmap.com. The methods are implemented in R, and are available as part of the shinyheatmap project at: https://github.com/Bohdan-Khomtchouk/shinyheatmap. Users can access fastheatmap directly from within the shinyheatmap web interface, and all source code has been made publicly available on Github: https://github.com/Bohdan-Khomtchouk/fastheatmap.
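
    One generic way to keep such plots tractable (a common technique, not necessarily shinyheatmap's internal method) is to aggregate blocks of rows down to roughly the screen's resolution before rendering, since a display has only a few thousand pixel rows anyway. A minimal Python sketch:

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        data = rng.normal(size=(200_000, 50))        # e.g., genes x samples

        # Aggregate blocks of rows down to ~1000 display rows before rendering.
        block = data.shape[0] // 1000
        small = data[: block * 1000].reshape(1000, block, -1).mean(axis=1)

        plt.imshow(small, aspect="auto", cmap="viridis")
        plt.colorbar(label="mean value (block-averaged)")
        plt.savefig("heatmap.png", dpi=150)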

  16. Microprocessor Control of Low Speed VSTOL Flight.

    DTIC Science & Technology

    1979-06-08

    ...Analog; IAS, Indicated Air Speed; I/O, Input/Output; KIAS, Knots Indicated Air Speed; NATOPS, Naval Air Training and Operating Procedures Standardization; SAS... computer programming necessary in the research, and contain, in the form of computer-generated time histories, the results of the project. ...of the aircraft causes airflow over the wings and therefore produces aerodynamic lift. As the transition progresses, wing-generated lift gradually...

  17. High-Speed Research: Sonic Boom, volume 2

    NASA Technical Reports Server (NTRS)

    Darden, Christine M. (Compiler)

    1992-01-01

    A High-Speed Sonic Boom Workshop was held at NASA Langley Research Center on February 25-27, 1992. The purpose of the workshop was to make presentations on current research activities and accomplishments and to assess progress in the area of sonic boom since the program was initiated in FY-90. Twenty-nine papers were presented during the 2-1/2 day workshop. Attendees included representatives from academia, industry, and government who are actively involved in sonic-boom research. Volume 2 contains papers related to low sonic-boom design and analysis using both linear theory and higher order computational fluid dynamics (CFD) methods.

  18. LES, DNS and RANS for the analysis of high-speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Adumitroaie, V.; Colucci, P. J.; Taulbee, D. B.; Givi, P.

    1995-01-01

    The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Aug. 1994 - 31 Jul. 1995, we have focused our efforts on two programs: (1) developments of explicit algebraic moment closures for statistical descriptions of compressible reacting flows and (2) development of Monte Carlo numerical methods for LES of chemically reacting flows.

  19. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
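
    The generalized delta rule mentioned above is the standard back-propagation weight update, delta-w = -eta * dE/dw, propagated layer by layer. A compact NumPy sketch for a two-layer network on toy data (a generic illustration, not the NNETS Transputer implementation):

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(100, 4))                # toy inputs
        Y = (X.sum(axis=1, keepdims=True) > 0) * 1.0 # toy targets

        W1, W2 = rng.normal(size=(4, 8)) * 0.5, rng.normal(size=(8, 1)) * 0.5
        sigmoid = lambda z: 1 / (1 + np.exp(-z))
        eta = 0.5

        for _ in range(500):
            h = sigmoid(X @ W1)                      # forward pass
            y = sigmoid(h @ W2)
            d2 = (y - Y) * y * (1 - y)               # output delta (generalized delta rule)
            d1 = (d2 @ W2.T) * h * (1 - h)           # back-propagated hidden delta
            W2 -= eta * h.T @ d2 / len(X)            # gradient-descent weight updates
            W1 -= eta * X.T @ d1 / len(X)

        print(f"final MSE: {np.mean((y - Y) ** 2):.4f}")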

  20. WTO — a deterministic approach to 4-fermion physics

    NASA Astrophysics Data System (ADS)

    Passarino, Giampiero

    1996-09-01

    The program WTO, which is designed for computing cross sections and other relevant observables in the e+e- annihilation into four fermions, is described. The various quantities are computed over both a completely inclusive experimental set-up and a realistic one, i.e. with cuts on the final state energies, final state angles, scattering angles and final state invariant masses. Initial state QED corrections are included by means of the structure function approach while final state QCD corrections are applicable in their naive formulation. A gauge restoring mechanism is included according to the Fermion-Loop scheme. The program structure is highly modular and particular care has been devoted to computing efficiency and speed.

  1. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.

    1991-01-01

    This research is involved with the implementations of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.

  2. Method to predict external store carriage characteristics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1988-01-01

    Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.

  3. Numerical studies of unsteady two dimensional subsonic flows using the ICE method. Ph.D. Thesis - Toledo Univ.

    NASA Technical Reports Server (NTRS)

    Wieber, P. R.

    1973-01-01

    A numerical program was developed to compute transient compressible and incompressible laminar flows in two dimensions with multicomponent mixing and chemical reaction. The algorithm used the Los Alamos Scientific Laboratory ICE (Implicit Continuous-Fluid Eulerian) method as its base. The program can compute both high and low speed compressible flows. The numerical program incorporating the stabilization techniques was quite successful in treating both old and new problems. Detailed calculations of coaxial flow very close to the entry plane were possible. The program treated complex flows such as the formation and downstream growth of a recirculation cell. An implicit solution of the species equation predicted mixing and reaction rates which compared favorably with the literature.

  4. Computer programs for thermodynamic and transport properties of hydrogen (tabcode-II)

    NASA Technical Reports Server (NTRS)

    Roder, H. M.; Mccarty, R. D.; Hall, W. J.

    1972-01-01

    The thermodynamic and transport properties of para and equilibrium hydrogen have been programmed into a series of computer routines. Input variables are the pairs pressure-temperature and pressure-enthalpy. The programs cover the range from 1 to 5000 psia, with temperatures from the triple point to 6000 R or enthalpies from minus 130 BTU/lb to 25,000 BTU/lb. Output variables are enthalpy or temperature, density, entropy, thermal conductivity, viscosity, specific heat at constant volume, the heat capacity ratio, and a heat transfer parameter. Property values on the liquid and vapor boundaries are conveniently obtained through two small routines. The programs achieve high speed by using linear interpolation in a grid of precomputed points which define the surface of the property returned.
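
    The speed technique named above, linear interpolation in a grid of precomputed points, is straightforward to sketch. Below is a minimal bilinear lookup on a hypothetical precomputed property surface (stand-in values, not the actual TABCODE-II tables):

        import numpy as np

        # Hypothetical precomputed grid: a property tabulated on (pressure, temperature).
        p_grid = np.linspace(1.0, 5000.0, 50)        # psia
        t_grid = np.linspace(25.0, 6000.0, 120)      # deg R
        table = np.sqrt(p_grid[:, None]) * np.log(t_grid[None, :])   # stand-in surface

        def lookup(p, t):
            """Bilinear interpolation of the tabulated property at (p, t)."""
            i = np.clip(np.searchsorted(p_grid, p) - 1, 0, p_grid.size - 2)
            j = np.clip(np.searchsorted(t_grid, t) - 1, 0, t_grid.size - 2)
            fp = (p - p_grid[i]) / (p_grid[i + 1] - p_grid[i])
            ft = (t - t_grid[j]) / (t_grid[j + 1] - t_grid[j])
            return ((1 - fp) * (1 - ft) * table[i, j] + fp * (1 - ft) * table[i + 1, j]
                    + (1 - fp) * ft * table[i, j + 1] + fp * ft * table[i + 1, j + 1])

        print(lookup(1500.0, 3000.0))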

  5. An interactive computer program for sizing spacecraft momentum storage devices

    NASA Technical Reports Server (NTRS)

    Wilcox, F. J., Jr.

    1980-01-01

    An interactive computer program was developed which computes the sizing requirements for nongimbaled reaction wheels, control moment gyros (CMG), and dual momentum control devices (DMCD) used in Earth-orbiting spacecraft. The program accepts as inputs the spacecraft's environmental disturbance torques, rotational inertias, maneuver rates, and orbital data. From these inputs, wheel weights are calculated for a range of radii and rotational speeds. The shape of the momentum wheel may be chosen to be either a hoop, a solid cylinder, or an annular cylinder. The program provides graphic output illustrating the trade-off potential between the weight, radius, and wheel speed. A number of the intermediate calculations, such as the X-, Y-, and Z-axis total momentum, the momentum absorption requirements for reaction wheels, CMG's, and DMCD's, and basic orbit analysis information, are also provided as program output.
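
    The core trade-off the program explores follows from H = I*omega, with the inertia depending on wheel shape: I = m*r^2 for a hoop, (1/2)*m*r^2 for a solid cylinder, and (1/2)*m*(ri^2 + ro^2) for an annular cylinder. A short sketch of the wheel mass required to store a given momentum (illustrative numbers, not the program's models):

        import numpy as np

        H = 200.0                                    # required momentum storage, N*m*s
        omega = np.linspace(100.0, 1000.0, 5)        # candidate wheel speeds, rad/s
        r = 0.3                                      # wheel radius, m

        # Mass needed so that I * omega = H, with I = k * m * r^2 for each shape.
        for shape, k in [("hoop", 1.0), ("solid cylinder", 0.5)]:
            m = H / (k * r**2 * omega)
            print(shape, np.round(m, 2), "kg")

        # Annular cylinder: I = 0.5 * m * (ri^2 + ro^2).
        ri, ro = 0.2, 0.3
        m_annular = H / (0.5 * (ri**2 + ro**2) * omega)
        print("annular cylinder", np.round(m_annular, 2), "kg")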

  6. Speeding Up Ecological and Evolutionary Computations in R; Essentials of High Performance Computing for Biologists

    PubMed Central

    Visser, Marco D.; McMahon, Sean M.; Merow, Cory; Dixon, Philip M.; Record, Sydne; Jongejans, Eelke

    2015-01-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1–S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research. PMID:25811842
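
    The profile-first workflow the authors advocate carries over directly to other languages; aprof itself is an R package, but a minimal Python analogue of the same bottleneck-hunting step looks like this:

        import cProfile
        import pstats

        def slow_simulation(n=200_000):
            # Deliberately naive loop; profiling should flag it as the bottleneck.
            total = 0.0
            for i in range(n):
                total += i ** 0.5
            return total

        cProfile.run("slow_simulation()", "sim.prof")
        stats = pstats.Stats("sim.prof")
        stats.sort_stats("cumulative").print_stats(5)   # show the top 5 hotspots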

  7. Assessment of computational issues associated with analysis of high-lift systems

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.; Jones, Kenneth M.; Waggoner, Edgar G.

    1992-01-01

    Thin-layer Navier-Stokes calculations for wing-fuselage configurations from subsonic to hypersonic flow regimes are now possible. However, efficient, accurate solutions using these codes for two- and three-dimensional high-lift systems have yet to be realized. A brief overview of salient experimental and computational research is presented. An assessment of the state of the art relative to high-lift system analysis, and identification of issues related to grid generation and flow physics which are crucial for computational success in this area, are also provided. Research that addresses some of these computational issues, in support of the high-lift elements of NASA's High Speed Research and Advanced Subsonic Transport Programs, is presented. Finally, fruitful areas of concentrated research are identified to accelerate overall progress for high-lift system analysis and design.

  8. FORTRAN 4 computer program for calculating critical speeds of rotating shafts

    NASA Technical Reports Server (NTRS)

    Trivisonno, R. J.

    1973-01-01

    A FORTRAN 4 computer program, written for the IBM DCS 7094/7044 computer, that calculates the critical speeds of rotating shafts is described. The shaft may include bearings, couplings, extra masses (nonshaft mass), and disks for the gyroscopic effect. Shear deflection is also taken into account, and provision is made in the program for sections of the shaft that are tapered. The boundary conditions at the ends of the shaft can be fixed (deflection and slope equal to zero) or free (shear and moment equal to zero). The fixed end condition enables the program to calculate the natural frequencies of cantilever beams. Instead of using the lumped-parameter method, the program uses continuous integration of the differential equations of beam flexure across different shaft sections. The advantages of this method over the usual lumped-parameter method are less data preparation and better approximation of the distribution of the mass of the shaft. A main feature of the program is the nature of the output. The Calcomp plotter is used to produce a drawing of the shaft with superimposed deflection curves at the critical speeds, together with all pertinent information related to the shaft.
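
    As a point of reference for what such a program computes, the bending critical speeds of a uniform shaft on simple supports have the closed form omega_n = (n*pi/L)^2 * sqrt(E*I/(rho*A)). A short sketch for an illustrative steel shaft (an analytic benchmark, not the program's method, which integrates the flexure equations across stepped sections):

        import numpy as np

        E, rho = 200e9, 7850.0                       # steel: Pa, kg/m^3
        L, d = 1.5, 0.05                             # shaft length and diameter, m
        I = np.pi * d**4 / 64                        # area moment of inertia
        A = np.pi * d**2 / 4                         # cross-sectional area

        n = np.arange(1, 4)                          # first three bending modes
        omega = (n * np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))   # rad/s
        print(np.round(omega / (2 * np.pi) * 60))    # critical speeds in rpm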

  9. Quiet High-Speed Fan

    NASA Technical Reports Server (NTRS)

    Lieber, Lysbeth; Repp, Russ; Weir, Donald S.

    1996-01-01

    A calibration of the acoustic and aerodynamic prediction methods was performed, and a baseline fan definition was established and evaluated, to support the Quiet High-Speed Fan program. A computational fluid dynamic analysis of the NASA QF-12 fan rotor, using the DAWES flow simulation program, was performed to demonstrate and verify the causes of the relatively poor aerodynamic performance observed during the fan test. In addition, the rotor flowfield characteristics were qualitatively compared to the acoustic measurements to identify the key acoustic characteristics of the flow. The V072 turbofan source noise prediction code was used to generate noise predictions for the TFE731-60 fan at three operating conditions, which were compared to experimental data. V072 results were also used in the Acoustic Radiation Code to generate far-field noise for the TFE731-60 nacelle at three speed points for the blade passage tone. A full 3-D viscous flow simulation of the current production TFE731-60 fan rotor was performed with the DAWES flow analysis program. The DAWES analysis was used to estimate the onset of multiple pure tone noise, based on predictions of inlet shock position as a function of the rotor tip speed. Finally, the TFE731-60 fan rotor wake structure predicted by the DAWES program was used to define a redesigned stator with the leading edge configured to minimize the acoustic effects of rotor wake/stator interaction, without appreciably degrading performance.

  10. Experimental and Computational Sonic Boom Assessment of Lockheed-Martin N+2 Low Boom Models

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Durston, Donald A.; Elmiligui, Alaa A.; Walker, Eric L.; Carter, Melissa B.

    2015-01-01

    Flight at speeds greater than the speed of sound is not permitted over land, primarily because of the noise and structural damage caused by sonic boom pressure waves of supersonic aircraft. Mitigation of sonic boom is a key focus area of the High Speed Project under NASA's Fundamental Aeronautics Program. The project is focusing on technologies to enable future civilian aircraft to fly efficiently with reduced sonic boom, engine and aircraft noise, and emissions. A major objective of the project is to improve both computational and experimental capabilities for design of low-boom, high-efficiency aircraft. NASA and industry partners are developing improved wind tunnel testing techniques and new pressure instrumentation to measure the weak sonic boom pressure signatures of modern vehicle concepts. In parallel, computational methods are being developed to provide rapid design and analysis of supersonic aircraft with improved meshing techniques that provide efficient, robust, and accurate on- and off-body pressures at several body lengths from vehicles with very low sonic boom overpressures. The maturity of these critical parallel efforts is necessary before low-boom flight can be demonstrated and commercial supersonic flight can be realized.

  11. NAS technical summaries. Numerical aerodynamic simulation program, March 1992 - February 1993

    NASA Technical Reports Server (NTRS)

    1994-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1992-93 operational year concluded with 399 high-speed processor projects and 91 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  12. Unstructured Grid Euler Method Assessment for Longitudinal and Lateral/Directional Aerodynamic Performance Analysis of the HSR Technology Concept Airplane at Supersonic Cruise Speed

    NASA Technical Reports Server (NTRS)

    Ghaffari, Farhad

    1999-01-01

    Unstructured grid Euler computations, performed at supersonic cruise speed, are presented for a High Speed Civil Transport (HSCT) configuration, designated as the Technology Concept Airplane (TCA) within the High Speed Research (HSR) Program. The numerical results are obtained for the complete TCA cruise configuration, which includes the wing, fuselage, empennage, diverters, and flow-through nacelles, at M(sub infinity) = 2.4 for a range of angles-of-attack and sideslip. Although all the present computations are performed for the complete TCA configuration, appropriate assumptions derived from the fundamental supersonic aerodynamic principles have been made to extract aerodynamic predictions to complement the experimental data obtained from a 1.675%-scaled truncated (aft fuselage/empennage components removed) TCA model. The validity of the computational results derived from the latter assumptions is thoroughly addressed and discussed in detail. The computed surface and off-surface flow characteristics are analyzed, and the pressure coefficient contours on the wing lower surface are shown to correlate reasonably well with the available pressure sensitive paint results, particularly for the complex flow structures around the nacelles. The predicted longitudinal and lateral/directional performance characteristics for the truncated TCA configuration are shown to correlate very well with the corresponding wind-tunnel data across the examined range of angles-of-attack and sideslip. The complementary computational results for the longitudinal and lateral/directional performance characteristics for the complete TCA configuration are also presented along with the aerodynamic effects due to empennage components. Results are also presented to assess the computational method performance, solution sensitivity to grid refinement, and solution convergence characteristics.

  13. Automatic braking system modification for the Advanced Transport Operating Systems (ATOPS) Transportation Systems Research Vehicle (TSRV)

    NASA Technical Reports Server (NTRS)

    Coogan, J. J.

    1986-01-01

    Modifications were designed for the B-737-100 Research Aircraft autobrake system hardware of the Advanced Transport Operating Systems (ATOPS) Program at Langley Research Center. These modifications will allow the on-board flight control computer to control the aircraft deceleration after landing to a continuously variable level for the purpose of executing automatic high speed turn-offs from the runway. A breadboard version of the proposed modifications was built and tested in simulated stopping conditions. Test results for various aircraft weights, turnoff speeds, winds, and runway conditions show that the turnoff speeds are achieved generally with errors less than 1 ft/sec.

  14. Verification and Quantification of Single Event Effects on High Speed SRAM in Terrestrial Environments

    NASA Technical Reports Server (NTRS)

    Huff, H.; You, Z.; Williams, T.; Nichols, T.; Attia, J.; Fogarty, T. N.; Kirby, K.; Wilkins, R.; Lawton, R.

    1998-01-01

    As integrated circuits become more sensitive to charged particles and neutrons, anomalous performance due to single event effects (SEE) is a concern and requires experimental verification and quantification. The Center for Applied Radiation Research (CARR) at Prairie View A&M University has developed experiments as a participant in the NASA ER-2 Flight Program, the APEX balloon flight program, and the Student Launch Program. Other high altitude and ground level experiments of interest to DoD and commercial applications are being developed. The experiment characterizes the SEE behavior of high speed and high density SRAMs. The system includes a PC-104 computer unit, an optical drive for storage, a test board with the components under test, and a latchup detection and reset unit. The test program will continuously monitor the stored checkerboard data pattern in the SRAM and record errors. Since both the computer and the optical drive contain integrated circuits, they are also vulnerable to radiation effects. A latchup detection unit with discrete components will monitor the test program and reset the system when necessary. The first results will be obtained from the NASA ER-2 flights, which are now planned to take place in early 1998 from Dryden Research Center in California. The series of flights, at altitudes up to 70,000 feet, and a variety of flight profiles should yield a distribution of conditions for correlating SEEs. SEE measurements will be performed from the time of aircraft power-up on the ground throughout the flight regime until system power-off after landing.

  15. CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1970-01-01

    The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.

  16. Modeling of rolling element bearing mechanics. Computer program user's manual

    NASA Technical Reports Server (NTRS)

    Greenhill, Lyn M.; Merchant, David H.

    1994-01-01

    This report provides the user's manual for the Rolling Element Bearing Analysis System (REBANS) analysis code, which determines the quasistatic response to external loads or displacements of three types of high-speed rolling element bearings: angular contact ball bearings, duplex angular contact ball bearings, and cylindrical roller bearings. The model includes the effects of bearing ring and support structure flexibility. It is comprised of two main programs: the Preprocessor for Bearing Analysis (PREBAN), which creates the input files for the main analysis program, and the Flexibility Enhanced Rolling Element Bearing Analysis (FEREBA), the main analysis program. This report addresses input instructions for, and features of, the computer codes. A companion report addresses the theoretical basis for the computer codes. REBANS extends the capabilities of the SHABERTH (Shaft and Bearing Thermal Analysis) code to include race and housing flexibility, including such effects as dead band and preload springs.

  17. Three Program Architecture for Design Optimization

    NASA Technical Reports Server (NTRS)

    Miura, Hirokazu; Olson, Lawrence E. (Technical Monitor)

    1998-01-01

    In this presentation, I review the historical perspective on the program architectures used to build design optimization capabilities based on mathematical programming and other numerical search techniques. It is straightforward to classify the program architectures into the three categories shown above. However, the relative importance of the three approaches has not been static; it changes dynamically as the capabilities of available computational resources increase. For example, we once considered that the direct coupling architecture would never be used for practical problems, but the availability of computer systems such as multi-processors has changed that view. I review the roles of the three architectures from historical, current, and future perspectives. There may also be some possibility for the emergence of hybrid architectures. I hope to provide some seeds for an active discussion of where we are heading in this very dynamic environment of high speed computing and communication.

  18. A System Approach to Navy Medical Education and Training. Appendix 5. Neuropsychiatric Technician.

    DTIC Science & Technology

    1974-08-31

    ...phrased behavioral statements. Through the use of special programs, task inventories are prepared, printouts for special purposes are created... the Response Guide (p. xiii) at the perforation, and use the correct side to respond to each task or instrument found on the following white pages... response data. They can be processed and manipulated only by high-speed computer capability using rigorously designed specialty programs. In addition to...

  19. First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop

    NASA Technical Reports Server (NTRS)

    Wood, Richard M. (Editor)

    1999-01-01

    This publication is a compilation of documents presented at the First NASA/Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.

  20. First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop. Pt. 2

    NASA Technical Reports Server (NTRS)

    Wood, Richard M. (Editor)

    1999-01-01

    This publication is a compilation of documents presented at the First NASA Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.

  1. First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop. Part 1

    NASA Technical Reports Server (NTRS)

    Wood, Richard M. (Editor)

    1999-01-01

    This publication is a compilation of documents presented at the First NASA/Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.

  2. The changing nature of spacecraft operations: From the Vikings of the 1970's to the great observatories of the 1990's and beyond

    NASA Technical Reports Server (NTRS)

    Ledbetter, Kenneth W.

    1992-01-01

    Four trends in spacecraft flight operations are discussed which will reduce overall program costs. These trends are the use of high-speed, highly reliable data communications systems for distributing operations functions to more convenient and cost-effective sites; the improved capability for remote operation of sensors; a continued rapid increase in memory and processing speed of flight qualified computer chips; and increasingly capable ground-based hardware and software systems, notably those augmented by artificial intelligence functions. Changes reflected by these trends are reviewed starting from the NASA Viking missions of the early 70s, when mission control was conducted at one location using expensive and cumbersome mainframe computers and communications equipment. In the 1980s, powerful desktop computers and modems enabled the Magellan project team to operate the spacecraft remotely. In the 1990s, the Hubble Space Telescope project uses multiple color screens and automated sequencing software on small computers. Given a projection of current capabilities, future control centers will be even more cost-effective.

  3. A performance comparison of the Cray-2 and the Cray X-MP

    NASA Technical Reports Server (NTRS)

    Schmickley, Ronald; Bailey, David H.

    1986-01-01

    A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied, under a variety of system load configurations, from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.

  4. A model predictive speed tracking control approach for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Chen, Huiyan; Xiong, Guangming

    2017-03-01

    This paper presents a novel speed tracking control approach based on a model predictive control (MPC) framework for autonomous ground vehicles. A switching algorithm without calibration is proposed to determine the drive or brake control. Combined with a simple inverse longitudinal vehicle model and adaptive regulation of MPC, this algorithm can make use of the engine brake torque for various driving conditions and avoid high frequency oscillations automatically. A simplified quadratic program (QP) solving algorithm is used to reduce the computational time, and the approach has been applied in a 16-bit microcontroller. The performance of the proposed approach is evaluated via simulations and vehicle tests, which were carried out in a range of speed-profile tracking tasks. With a well-designed system structure, high-precision speed control is achieved. The system is robust to model uncertainty and external disturbances, and yields a faster response with less overshoot than a PI controller.
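
    A minimal sketch of the receding-horizon idea behind such a controller, using an assumed first-order speed model v(k+1) = a*v(k) + b*u(k) and an unconstrained quadratic cost so the QP reduces to a single linear solve (illustrative numbers, not the authors' vehicle model or solver):

        import numpy as np

        a, b, N = 0.95, 0.1, 20                      # simple speed dynamics and horizon
        lam = 0.01                                   # control-effort weight

        # Prediction matrices: v_pred = Av * v0 + Bu @ u over the horizon.
        Av = a ** np.arange(1, N + 1)
        Bu = np.array([[a ** (i - j) * b if j <= i else 0.0
                        for j in range(N)] for i in range(N)])

        def mpc_step(v0, v_ref):
            """One receding-horizon step: min ||v_ref - v_pred||^2 + lam*||u||^2."""
            rhs = Bu.T @ (v_ref - Av * v0)
            u = np.linalg.solve(Bu.T @ Bu + lam * np.eye(N), rhs)
            return u[0]                              # apply only the first input

        # Track a step change in the reference speed.
        v, v_ref = 10.0, np.full(N, 15.0)
        for _ in range(40):
            v = a * v + b * mpc_step(v, v_ref)
        print(round(v, 3))                           # should approach 15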

  5. Development of high-speed rolling-element bearings. A historical and technical perspective

    NASA Technical Reports Server (NTRS)

    Zaretsky, E. V.

    1982-01-01

    Research on large-bore ball and roller bearings for aircraft engines is described. Tapered roller bearings and small-bore bearings are discussed. Temperature capabilities of rolling element bearings for aircraft engines have moved from 450 to 589 K (350 to 600 F) with increased reliability. High bearing speeds to 3 million DN can be achieved with a reliability exceeding that which was common in commercial aircraft. Capabilities of available bearing steels and lubricants were defined and established. Computer programs for the analysis and design of rolling element bearings were developed and experimentally verified. The reported work is a summary of NASA contributions to high performance engine and transmission bearing capabilities.

  6. Computer-aided design of antenna structures and components

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1976-01-01

    This paper discusses computer-aided design procedures for antenna reflector structures and related components. The primary design aid is a computer program that establishes cross sectional sizes of the structural members by an optimality criterion. Alternative types of deflection-dependent objectives can be selected for designs subject to constraints on structure weight. The computer program has a special-purpose formulation to design structures of the type frequently used for antenna construction. These structures, in common with many in other areas of application, are represented by analytical models that employ only the three translational degrees of freedom at each node. The special-purpose construction of the program, however, permits coding and data management simplifications that provide advantages in problem size and execution speed. Size and speed are essentially governed by the requirements of structural analysis and are relatively unaffected by the added requirements of design. Computation times to execute several design/analysis cycles are comparable to the times required by general-purpose programs for a single analysis cycle. Examples in the paper illustrate effective design improvement for structures with several thousand degrees of freedom and within reasonable computing times.

  7. The Julia programming language: the future of scientific computing

    NASA Astrophysics Data System (ADS)

    Gibson, John

    2017-11-01

    Julia is an innovative new open-source programming language for high-level, high-performance numerical computing. Julia combines the general-purpose breadth and extensibility of Python, the ease-of-use and numeric focus of Matlab, the speed of C and Fortran, and the metaprogramming power of Lisp. Julia uses type inference and just-in-time compilation to compile high-level user code to machine code on the fly. A rich set of numeric types and extensive numerical libraries are built-in. As a result, Julia is competitive with Matlab for interactive graphical exploration and with C and Fortran for high-performance computing. This talk interactively demonstrates Julia's numerical features and benchmarks Julia against C, C++, Fortran, Matlab, and Python on a spectral time-stepping algorithm for a 1d nonlinear partial differential equation. The Julia code is nearly as compact as Matlab and nearly as fast as Fortran. This material is based upon work supported by the National Science Foundation under Grant No. 1554149.

  8. Effects of different computer typing speeds on acceleration and peak contact pressure of the fingertips during computer typing.

    PubMed

    Yoo, Won-Gyu

    2015-01-01

    [Purpose] This study showed the effects of different computer typing speeds on the acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing-speed groups using an accelerometer and the CONFORMat system. [Results] The fingertip contact pressure was higher in the high typing speed group than in the low and medium typing speed groups. The fingertip acceleration was higher in the high typing speed group than in the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.

  9. New technology in turbine aerodynamics

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.; Moffitt, T. P.

    1972-01-01

    A cursory review is presented of some of the recent work that has been done in turbine aerodynamic research at NASA-Lewis Research Center. Topics discussed include the aerodynamic effect of turbine coolant, high work-factor (ratio of stage work to square of blade speed) turbines, and computer methods for turbine design and performance prediction. An extensive bibliography is included. Experimental cooled-turbine aerodynamics programs using two-dimensional cascades, full annular cascades, and cold rotating turbine stage tests are discussed with some typical results presented. Analytically predicted results for cooled blade performance are compared to experimental results. The problems and some of the current programs associated with the use of very high work factors for fan-drive turbines of high-bypass-ratio engines are discussed. Turbines currently being investigated make use of advanced blading concepts designed to maintain high efficiency under conditions of high aerodynamic loading. Computer programs have been developed for turbine design-point performance, off-design performance, supersonic blade profile design, and the calculation of channel velocities for subsonic and transonic flow fields. The use of these programs for the design and analysis of axial and radial turbines is discussed.

  10. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary: Program title: TRQS. Catalogue identifier: AEKA_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 18 134. No. of bytes in distributed program, including test data, etc.: 252 049. Distribution format: tar.gz. Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent. Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html). Catalogue identifier of previous version: AEKA_v1_0. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 118. Does the new version supersede the previous version?: Yes. Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to a source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows the presented package to be used without a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of random number generation, especially in the case of the on-line service, where it reduces the time needed to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudo-random numbers provided in Mathematica.
    Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10(exp 1), 10(exp 2), ..., 10(exp 7), the times required to generate the samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing the random numbers. Running time: Depends on the source of randomness used and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
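
    The core task the package automates, turning a stream of (quantum-generated) random numbers into random quantum states, is easy to sketch in any language. A minimal NumPy version draws a Haar-distributed random pure state by normalizing a complex Gaussian vector, with NumPy's pseudo-random generator standing in for the QRNG source:

        import numpy as np

        def random_pure_state(dim, rng=np.random.default_rng()):
            """Haar-random pure state: normalize a complex Gaussian vector."""
            z = rng.normal(size=dim) + 1j * rng.normal(size=dim)
            return z / np.linalg.norm(z)

        psi = random_pure_state(4)                   # a random two-qubit state
        rho = np.outer(psi, psi.conj())              # corresponding density matrix
        print(np.round(np.trace(rho).real, 6))       # trace 1, as required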

  11. Rotor design of high tip speed low loading transonic fan.

    NASA Technical Reports Server (NTRS)

    Erwin, J. R.; Vitale, N. G.

    1972-01-01

    This paper describes the design concepts, principles, and details of a high-tip-speed transonic rotor having low aerodynamic loading. The purpose of the NASA-sponsored investigation was to determine whether good efficiency and large stall margin could be obtained by designing a rotor to avoid the flow separation associated with strong normal shocks. Fully supersonic flow through the outboard region of the rotor, with compression accomplished by weak oblique shocks, was a major design concept employed. Computer programs were written and used to derive blade sections consistent from the all-supersonic tip region to the all-subsonic hub region. Preliminary test results indicate attainment of design pressure ratio and design flow at design speed, with about a 1.6-point decrement in efficiency and large stall margin.
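
    For context on why weak oblique shocks are attractive, the wave angle beta for a given upstream Mach number M and flow deflection theta follows the classical theta-beta-M relation tan(theta) = 2 cot(beta) (M^2 sin^2(beta) - 1) / (M^2 (gamma + cos 2beta) + 2), and the weak-shock root gives a far smaller pressure jump (hence loss) than a normal shock. A quick numerical sketch with illustrative conditions (not the rotor's design values):

        import numpy as np

        gamma, M, theta = 1.4, 1.4, np.radians(8.0)  # illustrative supersonic conditions

        def deflection(beta):
            """Flow deflection produced by an oblique shock at wave angle beta."""
            m2 = (M * np.sin(beta)) ** 2
            return np.arctan(2 / np.tan(beta) * (m2 - 1)
                             / (M**2 * (gamma + np.cos(2 * beta)) + 2))

        # Bisection for the weak-shock root, starting just above the Mach angle.
        lo, hi = np.arcsin(1 / M) + 1e-6, np.radians(64.0)
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if deflection(mid) < theta else (lo, mid)

        beta = 0.5 * (lo + hi)
        p_ratio = 1 + 2 * gamma / (gamma + 1) * ((M * np.sin(beta)) ** 2 - 1)
        print(f"beta = {np.degrees(beta):.1f} deg, p2/p1 = {p_ratio:.3f}")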

  12. Increasing the computational efficiency of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in speedups of 6.387 and 36.044 times compared with performance obtained from looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method, as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high-speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
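
    The flavor of the speedup is easy to reproduce outside MATLAB; the NumPy sketch below compares an explicit sliding-window loop against the equivalent vectorized library call (an illustration of the loop-versus-vectorization gap, not the paper's MATLAB benchmark):

        import time
        import numpy as np

        rng = np.random.default_rng(2)
        a, b = rng.normal(size=20_000), rng.normal(size=200)

        t0 = time.perf_counter()
        loop = np.array([np.sum(a[i:i + b.size] * b)          # explicit sliding loop
                         for i in range(a.size - b.size + 1)])
        t1 = time.perf_counter()
        vec = np.correlate(a, b, mode="valid")                # vectorized equivalent
        t2 = time.perf_counter()

        print(np.allclose(loop, vec), f"speedup ~{(t1 - t0) / (t2 - t1):.0f}x")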

  13. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Pirvics, J.

    1980-01-01

    The models and associated mathematics used within the SPHERBEAN computer program for prediction of the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings are presented. The analysis allows six degrees of freedom for each roller and three for each half of an optionally split cage. Roller skew, free lubricant, inertial loads, appropriate elastic and friction forces, and flexible outer ring are considered. Roller quasidynamic equilibrium is calculated for a bearing with up to 30 rollers per row, and distinct roller and flange geometries are specifiable. The user is referred to the material contained here for formulation assumptions and algorithm detail.

  14. FORTRAN program for calculating total efficiency - specific speed characteristics of centrifugal compressors

    NASA Technical Reports Server (NTRS)

    Galvas, M. R.

    1972-01-01

    A computer program for predicting design point specific speed - efficiency characteristics of centrifugal compressors is presented with instructions for its use. The method permits rapid selection of compressor geometry that yields maximum total efficiency for a particular application. A numerical example is included to demonstrate the selection procedure.
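
    For orientation, a minimal sketch of the dimensionless specific speed that such selection charts are built around, using the common definition Ns = ω√Q/(Δh_ad)^(3/4); the definition and the example numbers are our assumptions, not taken from the report:

        import math

        def specific_speed(omega_rad_s, q_m3_s, dh_ad_j_kg):
            # Dimensionless specific speed: rotor speed in rad/s, inlet volume
            # flow in m^3/s, adiabatic head in J/kg.
            return omega_rad_s * math.sqrt(q_m3_s) / dh_ad_j_kg ** 0.75

        # Example: 50,000 rpm impeller, 0.5 m^3/s inlet flow, 150 kJ/kg head.
        omega = 50_000 * 2 * math.pi / 60      # rpm -> rad/s
        ns = specific_speed(omega, 0.5, 150e3)
        # Centrifugal stages are commonly quoted to peak in efficiency
        # near Ns ~ 0.6-0.8.
        print(f"Ns = {ns:.3f}")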

  15. Modeling Compressibility Effects in High-Speed Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Sarkar, S.

    2004-01-01

    Man has strived to make objects fly faster, first from subsonic to supersonic and then to hypersonic speeds. Spacecraft and high-speed missiles routinely fly at hypersonic Mach numbers, M greater than 5. In defense applications, aircraft reach hypersonic speeds at high altitude, and so may civilian aircraft in the future. Hypersonic flight, while presenting opportunities, has formidable challenges that have spurred vigorous research and development, mainly by NASA and the Air Force in the USA. Although NASP, the premier hypersonic concept of the eighties and early nineties, did not lead to a flight demonstration, much basic research and technology development was possible. There is renewed interest in supersonic and hypersonic flight, with the HyTech program of the Air Force and the Hyper-X program at NASA being examples of current thrusts in the field. At high-subsonic to supersonic speeds, fluid compressibility becomes increasingly important in the turbulent boundary layers and shear layers associated with the flow around aerospace vehicles. Changes in the thermodynamic variables (density, temperature, and pressure) interact strongly with the underlying vortical, turbulent flow. The ensuing changes to the flow may be qualitative, such as shocks, which have no incompressible counterpart, or quantitative, such as the reduction of skin friction with Mach number, large heat transfer rates due to viscous heating, and the dramatic reduction of fuel/oxidant mixing at high convective Mach number. The peculiarities of compressible turbulence, so-called compressibility effects, have been reviewed by Fernholz and Finley. Predictions of aerodynamic performance in high-speed applications require accurate computational modeling of these "compressibility effects" on turbulence. During the course of the project we have made fundamental advances in modeling the pressure-strain correlation and developed a code to evaluate alternate turbulence models in the compressible shear layer.

  16. MIDAS, prototype Multivariate Interactive Digital Analysis System, phase 1. Volume 3: Wiring diagrams

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Christenson, D.; Gordon, M.; Kistler, R.; Lampert, S.; Marshall, R.; Mclaughlin, R.

    1974-01-01

    The Midas System is a third-generation, fast, multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS Program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in Phase I of the overall program are described. The system contains a mini-computer to control the various high-speed processing elements in the data path and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 2 x 100,000 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation. The MIDAS construction and wiring diagrams are given.

  17. A generalized theory for the design of contraction cones and other low speed ducts

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Bowen, J. T.

    1972-01-01

    A generalization of the Tsien method of contraction cone design is described. The design velocity distribution is expressed in such a form that the required high order derivatives can be obtained by recursion rather than by numerical or analytic differentiation. The method is applicable to the design of diffusers and converging-diverging ducts as well as contraction cones. The computer program is described and a FORTRAN listing of the program is provided.

  18. High speed civil transport aerodynamic optimization

    NASA Technical Reports Server (NTRS)

    Ryan, James S.

    1994-01-01

    This is a report of work in support of the Computational Aerosciences (CAS) element of the Federal HPCC program. Specifically, CFD and aerodynamic optimization are being performed on parallel computers. The long-range goal of this work is to facilitate teraflops-rate multidisciplinary optimization of aerospace vehicles. This year's work is targeted for application to the High Speed Civil Transport (HSCT), one of four CAS grand challenges identified in the HPCC FY 1995 Blue Book. This vehicle is to be a passenger aircraft, with the promise of cutting overseas flight time by more than half. To meet fuel economy, operational costs, environmental impact, noise production, and range requirements, improved design tools are required, and these tools must eventually integrate optimization, external aerodynamics, propulsion, structures, heat transfer, controls, and perhaps other disciplines. The fundamental goal of this project is to contribute to improved design tools for U.S. industry, and thus to the nation's economic competitiveness.

  19. High performance systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  20. The Department of Defense Very High Speed Integrated Circuit (VHSIC) Technology Availability Program Plan for the Committees on Armed Services United States Congress.

    DTIC Science & Technology

    1986-06-30

    features of computer aided design systems and statistical quality control procedures that are generic to chip sets and processes. RADIATION HARDNESS - The… [fragment of the report's acronym glossary:] PSP: Programmable Signal Processor; SSI: Small Scale Integration; TOW: Tube Launched, Optically Tracked, Wire Guided; TTL: Transistor-Transistor Logic

  1. Microcomputer software development facilities

    NASA Technical Reports Server (NTRS)

    Gorman, J. S.; Mathiasen, C.

    1980-01-01

    A more efficient and cost effective method for developing microcomputer software is to utilize a host computer with high-speed peripheral support. Application programs such as cross assemblers, loaders, and simulators are implemented in the host computer for each of the microcomputers for which software development is a requirement. The host computer is configured to operate in a time share mode for multiusers. The remote terminals, printers, and down loading capabilities provided are based on user requirements. With this configuration a user, either local or remote, can use the host computer for microcomputer software development. Once the software is developed (through the code and modular debug stage) it can be downloaded to the development system or emulator in a test area where hardware/software integration functions can proceed. The microcomputer software program sources reside in the host computer and can be edited, assembled, loaded, and then downloaded as required until the software development project has been completed.

  2. Multicore Challenges and Benefits for High Performance Scientific Computing

    DOE PAGES

    Nielsen, Ida M. B.; Janssen, Curtis L.

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
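
    As a loose single-machine analogue of the hybrid model described above (our sketch, not the authors' code), the following distributes row blocks of a matrix product across processes, while each process leans on NumPy's internally threaded BLAS for its block:

        import numpy as np
        from multiprocessing import Pool

        def block_matmul(args):
            # Each worker ("rank") multiplies its row block; the BLAS behind
            # NumPy may itself use several threads inside this call, giving
            # the second level of parallelism.
            a_block, b = args
            return a_block @ b

        if __name__ == "__main__":
            n, n_workers = 1024, 4
            a = np.random.rand(n, n)
            b = np.random.rand(n, n)
            blocks = np.array_split(a, n_workers, axis=0)   # one "message" per worker
            with Pool(n_workers) as pool:
                c_blocks = pool.map(block_matmul, [(blk, b) for blk in blocks])
            c = np.vstack(c_blocks)
            assert np.allclose(c, a @ b)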

  3. Democratic Population Decisions Result in Robust Policy-Gradient Learning: A Parametric Study with GPU Simulations

    PubMed Central

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-01-01

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and moreover architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a “non-democratic” mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons “vote” independently (“democratic”) for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x to 42x is provided versus optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated. PMID:21572529
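
    The "democratic" population-vector readout mentioned above has a compact form: each neuron votes with a unit vector at its preferred direction, weighted by its firing rate, and the decision is the direction of the resultant. A minimal sketch (ours, not the paper's GPU code):

        import numpy as np

        def population_vector(rates, preferred_angles):
            # Sum of unit vectors at each neuron's preferred angle, weighted
            # by its firing rate; the readout is the resultant's direction.
            x = np.sum(rates * np.cos(preferred_angles))
            y = np.sum(rates * np.sin(preferred_angles))
            return np.arctan2(y, x)

        n = 100
        prefs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        rates = np.exp(np.cos(prefs - np.pi / 3))   # activity bump near 60 deg
        print(np.degrees(population_vector(rates, prefs)))   # ~60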

  4. A high-speed brain speller using steady-state visual evoked potentials.

    PubMed

    Nakanishi, Masaki; Wang, Yijun; Wang, Yu-Te; Mitsukura, Yasue; Jung, Tzyy-Ping

    2014-09-01

    Implementing a complex spelling program using a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) remains a challenge due to difficulties in stimulus presentation and target identification. This study aims to explore the feasibility of mixed frequency and phase coding in building a high-speed SSVEP speller with a computer monitor. A frequency and phase approximation approach was developed to eliminate the limitation of the number of targets caused by the monitor refresh rate, resulting in a speller comprising 32 flickers specified by eight frequencies (8-15 Hz with a 1 Hz interval) and four phases (0°, 90°, 180°, and 270°). A multi-channel approach incorporating Canonical Correlation Analysis (CCA) and SSVEP training data was proposed for target identification. In a simulated online experiment, at a spelling rate of 40 characters per minute, the system obtained an averaged information transfer rate (ITR) of 166.91 bits/min across 13 subjects with a maximum individual ITR of 192.26 bits/min, the highest ITR ever reported in electroencephalogram (EEG)-based BCIs. The results of this study demonstrate great potential of a high-speed SSVEP-based BCI in real-life applications.
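
    A sketch of the standard CCA-based identification step for this kind of speller, together with the Wolpaw ITR formula behind rates like those quoted above; this is our own reconstruction using scikit-learn (the paper's method additionally incorporates SSVEP training data):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def make_reference(freq, phase, fs, n_samples, n_harmonics=2):
            # Sin/cos reference signals at the stimulus frequency and harmonics.
            t = np.arange(n_samples) / fs
            comps = []
            for h in range(1, n_harmonics + 1):
                comps.append(np.sin(2 * np.pi * h * freq * t + h * phase))
                comps.append(np.cos(2 * np.pi * h * freq * t + h * phase))
            return np.column_stack(comps)

        def classify(eeg, freqs, phases, fs):
            # eeg: (n_samples, n_channels). Pick the target whose references
            # yield the largest canonical correlation with the EEG epoch.
            scores = []
            for f, p in zip(freqs, phases):
                ref = make_reference(f, p, fs, eeg.shape[0])
                u, v = CCA(n_components=1).fit_transform(eeg, ref)
                scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
            return int(np.argmax(scores))

        def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
            # Wolpaw information transfer rate; assumes 0 < accuracy < 1.
            p, n = accuracy, n_targets
            bits = (np.log2(n) + p * np.log2(p)
                    + (1 - p) * np.log2((1 - p) / (n - 1)))
            return bits * 60.0 / seconds_per_selection

        # 32 targets at 40 selections/min (1.5 s each), 95% accuracy.
        print(itr_bits_per_min(32, 0.95, 1.5))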

  5. Reynolds Number Effects on a Supersonic Transport at Subsonic High-Lift Conditions (Invited)

    NASA Technical Reports Server (NTRS)

    Owens, L.R.; Wahls, R. A.

    2001-01-01

    A High Speed Civil Transport configuration was tested in the National Transonic Facility at the NASA Langley Research Center as part of NASA's High Speed Research Program. The primary purposes of the tests were to assess Reynolds number scale effects and high Reynolds number aerodynamic characteristics of a realistic, second generation supersonic transport while providing data for the assessment of computational methods. The tests included longitudinal and lateral/directional studies at transonic and low-speed, high-lift conditions across a range of Reynolds numbers from that available in conventional wind tunnels to near flight conditions. Results are presented which focus on Reynolds number and static aeroelastic sensitivities of longitudinal characteristics at Mach 0.30 for a configuration without an empennage. A fundamental change in flow-state occurred between Reynolds numbers of 30 and 40 million, characterized by significantly earlier inboard leading-edge separation at the higher Reynolds numbers. Force and moment levels change, but Reynolds number trends are consistent between the two states.

  6. Overview of mechanics of materials branch activities in the computational structures area

    NASA Technical Reports Server (NTRS)

    Poe, C. C., Jr.

    1992-01-01

    Base programs and system programs are discussed. The base programs include fundamental research of composites and metals for airframes leading to characterization of advanced materials, models of behavior, and methods for predicting damage tolerance. Results from the base programs support the systems programs, which change as NASA's missions change. The National Aerospace Plane (NASP), Advanced Composites Technology (ACT), Airframe Structural Integrity Program (Aging Aircraft), and High Speed Research (HSR) programs are currently being supported. Airframe durability is one of the key issues in each of these system programs. The base program has four major thrusts, which will be reviewed subsequently. Additionally, several technical highlights will be reviewed for each thrust.

  7. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g., for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), exploiting parallel architectures and the GPU, are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.

  8. Investigation of chemically-reacting supersonic internal flows

    NASA Technical Reports Server (NTRS)

    Chitsomboon, T.; Tiwari, S. N.

    1985-01-01

    This report covers work done on the research project Analysis and Computation of Internal Flow Field in a Scramjet Engine. The work is supported by the NASA Langley Research Center (Computational Methods Branch of the High-Speed Aerodynamics Division) through research grant NAG1-423. The governing equations of two-dimensional chemically-reacting flows are presented together with the global two-step chemistry model. The finite-difference algorithm used is illustrated and the method of circumventing the stiffness is discussed. The computer program developed is used to solve two model problems of a premixed chemically-reacting flow. The results obtained are physically reasonable.

  9. VNAP2: A Computer Program for Computation of Two-dimensional, Time-dependent, Compressible, Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Cline, M. C.

    1981-01-01

    A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two dimensional, time dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing length model, a one equation model, or the Jones-Launder two equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
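
    For readers unfamiliar with the interior-point scheme named above, here is a minimal sketch of the unsplit MacCormack predictor-corrector applied to 1-D linear advection, a toy stand-in for VNAP2's full Navier-Stokes system (ours, not the program's code):

        import numpy as np

        def maccormack_advection(u, c, dx, dt, n_steps):
            # Predictor uses forward differences, corrector backward ones;
            # averaging the two gives second-order accuracy in space and time.
            nu = c * dt / dx
            for _ in range(n_steps):
                up = u - nu * (np.roll(u, -1) - u)                 # predictor
                u = 0.5 * (u + up - nu * (up - np.roll(up, 1)))    # corrector
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.exp(-200 * (x - 0.3) ** 2)     # Gaussian pulse, periodic domain
        dx = x[1] - x[0]
        u = maccormack_advection(u0.copy(), c=1.0, dx=dx, dt=0.4 * dx, n_steps=500)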

  10. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  11. Enhancement of the Computer Lumber Grading Program to Support Polygonal Defects

    Treesearch

    Powsiri Klinkhachorn; R. Kathari; D. Yost; Philip A. Araman

    1993-01-01

    Computer grading of hardwood lumber promises to avoid regrading of the same lumber because of disagreements between the buyer and the seller. However, the first generation of computer programs for hardwood lumber grading simplifies the process by modeling defects on the board as rectangles. This speeds up the grading process but can inadvertently put a board into a lower...

  12. A MAP fixed-point, packing-unpacking routine for the IBM 7094 computer

    Treesearch

    Robert S. Helfman

    1966-01-01

    Two MAP (Macro Assembly Program) computer routines for packing and unpacking fixed point data are described. Use of these routines with Fortran IV Programs provides speedy access to quantities of data which far exceed the normal storage capacity of IBM 7000-series computers. Many problems that could not be attempted because of the slow access-speed of tape...

  13. An investigation of angular stiffness and damping coefficients of an axial spline coupling in high-speed rotating machinery

    NASA Technical Reports Server (NTRS)

    Ku, C.-P. Roger; Walton, James F., Jr.; Lund, Jorgen W.

    1994-01-01

    This paper provided an opportunity to quantify the angular stiffness and equivalent viscous damping coefficients of an axial spline coupling used in high-speed turbomachinery. A unique test methodology and data reduction procedures were developed. The bending moments and angular deflections transmitted across an axial spline coupling were measured while a nonrotating shaft was excited by an external shaker. A rotor dynamics computer program was used to simulate the test conditions and to correlate the angular stiffness and damping coefficients. In addition, sensitivity analyses were performed to show that the accuracy of the dynamic coefficients does not rely on the accuracy of the data reduction procedures.

  14. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  15. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.

  16. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  17. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.

  18. Proceedings from the conference on high speed computing: High speed computing and national security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirons, K.P.; Vigil, M.; Carlson, R.

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  19. Computer Output Microfilm and Library Catalogs.

    ERIC Educational Resources Information Center

    Meyer, Richard W.

    Early computers dealt with mathematical and scientific problems requiring very little input and not much output, so high-speed printing devices were not required. Today, with an increased variety of uses, high-speed printing is necessary, and Computer Output Microfilm (COM) devices have been created to meet this need. This indirect process can…

  20. High-Speed Recording of Test Data on Hard Disks

    NASA Technical Reports Server (NTRS)

    Lagarde, Paul M., Jr.; Newnan, Bruce

    2003-01-01

    Disk Recording System (DRS) is a systems-integration computer program for a direct-to-disk (DTD) high-speed data acquisition system (HDAS) that records rocket-engine test data. The HDAS consists partly of equipment originally designed for recording the data on tapes. The tape recorders were replaced with hard-disk drives, necessitating the development of DRS to provide an operating environment that ties two computers, a set of five DTD recorders, and signal-processing circuits from the original tape-recording version of the HDAS into one working system. DRS includes three subsystems: (1) one that generates a graphical user interface (GUI), on one of the computers, that serves as a main control panel; (2) one that generates a GUI, on the other computer, that serves as a remote control panel; and (3) a data-processing subsystem that performs tasks on the DTD recorders according to instructions sent from the main control panel. The software affords capabilities for dynamic configuration to record single or multiple channels from a remote source, remote starting and stopping of the recorders, indexing to prevent overwriting of data, and production of filtered frequency data from an original time-series data file.

  1. NAS Technical Summaries, March 1993 - February 1994

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA created the Numerical Aerodynamic Simulation (NAS) Program in 1987 to focus resources on solving critical problems in aeroscience and related disciplines by utilizing the power of the most advanced supercomputers available. The NAS Program provides scientists with the necessary computing power to solve today's most demanding computational fluid dynamics problems and serves as a pathfinder in integrating leading-edge supercomputing technologies, thus benefitting other supercomputer centers in government and industry. The 1993-94 operational year concluded with 448 high-speed processor projects and 95 parallel projects representing NASA, the Department of Defense, other government agencies, private industry, and universities. This document provides a glimpse at some of the significant scientific results for the year.

  2. Is random access memory random?

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
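
    The locality effect is easy to reproduce on a modern machine: traversing the same array along its contiguous axis and along its strided axis exercises the cache very differently. A quick sketch (ours; the magnitude of the gap varies with cache sizes and hardware):

        import time
        import numpy as np

        a = np.random.rand(4096, 4096)   # C-ordered: rows are contiguous in memory

        def time_it(fn):
            t0 = time.perf_counter()
            fn()
            return time.perf_counter() - t0

        # Row-wise pass touches memory sequentially (cache friendly).
        rows = time_it(lambda: sum(a[i, :].sum() for i in range(a.shape[0])))
        # Column-wise pass strides by the full row length (cache hostile).
        cols = time_it(lambda: sum(a[:, j].sum() for j in range(a.shape[1])))
        print(f"rows {rows:.3f} s   cols {cols:.3f} s")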

  3. Closha: bioinformatics workflow system for the analysis of massive sequencing data.

    PubMed

    Ko, GunHwan; Kim, Pan-Gyu; Yoon, Jongcheol; Han, Gukhee; Park, Seong-Jin; Song, Wangho; Lee, Byungwook

    2018-02-19

    While next-generation sequencing (NGS) costs have fallen in recent years, the cost and complexity of computation remain substantial obstacles to the use of NGS in bio-medical care and genomic research. The rapidly increasing amounts of data available from the new high-throughput methods have made data processing infeasible without automated pipelines. The integration of data and analytic resources into workflow systems provides a solution to the problem by simplifying the task of data analysis. To address this challenge, we developed a cloud-based workflow management system, Closha, to provide fast and cost-effective analysis of massive genomic data. We implemented complex workflows making optimal use of high-performance computing clusters. Closha allows users to create multi-step analyses using drag and drop functionality and to modify the parameters of pipeline tools. Users can also import the Galaxy pipelines into Closha. Closha is a hybrid system that enables users to use both analysis programs providing traditional tools and MapReduce-based big data analysis programs simultaneously in a single pipeline. Thus, the execution of analytics algorithms can be parallelized, speeding up the whole process. We also developed a high-speed data transmission solution, KoDS, to transmit a large amount of data at a fast rate. KoDS has a file transfer speed of up to 10 times that of normal FTP and HTTP. The computer hardware for Closha is 660 CPU cores and 800 TB of disk storage, enabling 500 jobs to run at the same time. Closha is a scalable, cost-effective, and publicly available web service for large-scale genomic data analysis. Closha supports the reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Closha provides a user-friendly interface to all genomic scientists to try to derive accurate results from NGS platform data. The Closha cloud server is freely available for use from http://closha.kobic.re.kr/ .

  4. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with lowest direct operating cost for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi typical mission. Generally, DOC increased with increasing speed and decreasing field length but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  5. Compact high-speed scanning lidar system

    NASA Astrophysics Data System (ADS)

    Dickinson, Cameron; Hussein, Marwan; Tripp, Jeff; Nimelman, Manny; Koujelev, Alexander

    2012-06-01

    The compact High Speed Scanning Lidar (HSSL) was designed to meet the requirements for a rover GN&C sensor. The eye-safe HSSL's fast scanning speed, low volume, and low power make it the ideal choice for a variety of real-time and non-real-time applications including: 3D Mapping; Vehicle Guidance and Navigation; Obstacle Detection; Orbiter Rendezvous; Spacecraft Landing / Hazard Avoidance. The HSSL comprises two main hardware units: Sensor Head and Control Unit. In a rover application, the Sensor Head mounts on the top of the rover while the Control Unit can be mounted on the rover deck or within its avionics bay. An Operator Computer is used to command the lidar and immediately display the acquired scan data. The innovative lidar design concept was a result of an extensive trade study conducted during the initial phase of an exploration rover program. The lidar utilizes an innovative scanner coupled with a compact fiber laser and high-speed timing electronics. Compared to existing compact lidar systems, distinguishing features of the HSSL include its high accuracy, high resolution, high refresh rate and large field of view. Other benefits of this design include the capability to quickly configure scan settings to fit various operational modes.

  6. EMTP; A powerful tool for analyzing power system transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, W.; Cotcher, D.; Ruiu, D.

    1990-07-01

    This paper reports on the electromagnetic transients program (EMTP), a general purpose computer program for simulating high-speed transient effects in electric power systems. The program features an extremely wide variety of modeling capabilities encompassing electromagnetic and electromechanical oscillations ranging in duration from microseconds to seconds. Examples of its use include switching and lightning surge analysis, insulation coordination, shaft torsional oscillations, ferroresonance, and HVDC converter control and operation. In the late 1960s Hermann Dommel developed the EMTP at Bonneville Power Administration (BPA), which considered the program to be the digital computer replacement for the transient network analyzer. The program initially comprised about 5000 lines of code, and was useful primarily for transmission line switching studies. As more uses for the program became apparent, BPA coordinated many improvements to the program. As the program grew in versatility and in size, it likewise became more unwieldy and difficult to use. One had to be an EMTP aficionado to take advantage of its capabilities.

  7. Big Data over a 100G network at Fermilab

    DOE PAGES

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo; ...

    2014-06-11

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, as a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this regularly resulted in peaks of data movement on the Wide Area Network (WAN) in and out of the laboratory of about 30 Gbit/s, and on the Local Area Network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.

  8. Big Data over a 100G network at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele; Mhashilkar, Parag; Kim, Hyunwoo

    As the need for Big Data in science becomes ever more relevant, networks around the world are upgrading their infrastructure to support high-speed interconnections. To support its mission, the high-energy physics community, as a pioneer in Big Data, has always relied on the Fermi National Accelerator Laboratory to be at the forefront of storage and data movement. This need was reiterated in recent years with the data-taking rate of the major LHC experiments reaching tens of petabytes per year. At Fermilab, this regularly resulted in peaks of data movement on the Wide Area Network (WAN) in and out of the laboratory of about 30 Gbit/s, and on the Local Area Network (LAN) between storage and computational farms of 160 Gbit/s. To address these ever increasing needs, as of this year Fermilab is connected to the Energy Sciences Network (ESnet) through a 100 Gb/s link. To understand the optimal system- and application-level configuration to interface computational systems with the new high-speed interconnect, Fermilab has deployed a Network Research & Development facility connected to the ESnet 100G Testbed. For the past two years, the High Throughput Data Program (HTDP) has been using the Testbed to identify gaps in data movement middleware [5] when transferring data at these high speeds. The program has published evaluations of technologies typically used in High Energy Physics, such as GridFTP [4], XrootD [9], and Squid [8]. Furthermore, this work presents the new R&D facility and the continuation of the evaluation program.

  9. Genometa--a fast and accurate classifier for short metagenomic shotgun reads.

    PubMed

    Davenport, Colin F; Neugebauer, Jens; Beckmann, Nils; Friedrich, Benedikt; Kameri, Burim; Kokott, Svea; Paetow, Malte; Siekmann, Björn; Wieding-Drewes, Matthias; Wienhöfer, Markus; Wolf, Stefan; Tümmler, Burkhard; Ahlers, Volker; Sprengel, Frauke

    2012-01-01

    Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer. The Genometa program, a step by step tutorial and Java source code are freely available from http://genomics1.mh-hannover.de/genometa/ and on http://code.google.com/p/genometa/. This program has been tested on Ubuntu Linux and Windows XP/7.

  10. High-speed assembly language (80386/80387) programming for laser spectra scan control and data acquisition providing improved resolution water vapor spectroscopy

    NASA Technical Reports Server (NTRS)

    Allen, Robert J.

    1988-01-01

    An assembly language program using the Intel 80386 CPU and 80387 math co-processor chips was written to increase the speed of data gathering and processing, and to provide control of a scanning CW ring dye laser system. This laser system is used in high resolution (better than 0.001 cm-1) water vapor spectroscopy experiments. Laser beam power is sensed at the input and output of white cells and the output of a Fabry-Perot. The assembly language subroutine is called from Basic; it acquires the data and performs various calculations at rates greater than 150 times faster than could be achieved by the higher level language. The widths of output control pulses generated in assembly language are 3 to 4 microseconds, compared with 2 to 3.7 milliseconds for those generated in Basic (about 500 to 1000 times faster). Included are a block diagram and brief description of the spectroscopy experiment, a flow diagram of the Basic and assembly language programs, a listing of the programs, scope photographs of the computer-generated 5-volt pulses used for control and timing analysis, and representative water spectrum curves obtained using these programs.

  11. High-Performance Computing: High-Speed Computer Networks in the United States, Europe, and Japan. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request from the Senate Committee on Commerce, Science, and Transportation, and from the House Committee on Science, Space, and Technology, for information on efforts to develop high-speed computer networks in the United States, Europe (limited to France, Germany, Italy, the Netherlands, and the United…

  12. High speed television camera system processes photographic film data for digital computer analysis

    NASA Technical Reports Server (NTRS)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  13. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  14. Networking via wireless bridge produces greater speed and flexibility, lowers cost.

    PubMed

    1998-10-01

    Wireless computer networking. Computer connectivity is essential in today's high-tech health care industry. But telephone lines aren't fast enough, and high-speed connections like T-1 lines are costly. Read about an Ohio community hospital that installed a wireless network "bridge" to connect buildings that are miles apart, creating a reliable high-speed link that costs one-tenth of a T-1 line.

  15. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In the mid-1980s, Kinetic Systems and Langley Research Center determined that high speed CAMAC (Computer Automated Measurement and Control) data acquisition systems could significantly improve Langley's ARTS (Advanced Real Time Simulation) system. The ARTS system supports flight simulation R&D, and the CAMAC equipment allowed 32 high performance simulators to be controlled by centrally located host computers. This technology broadened Kinetic Systems' capabilities and led to several commercial applications. One of them is General Atomics' fusion research program. Kinetic Systems equipment allows tokamak data to be acquired four to 15 times more rapidly. Ford Motor Company uses the same technology to control and monitor transmission testing facilities.

  16. An efficient 3-dim FFT for plane wave electronic structure calculations on massively parallel machines composed of multiprocessor nodes

    NASA Astrophysics Data System (ADS)

    Goedecker, Stefan; Boulet, Mireille; Deutsch, Thierry

    2003-08-01

    Three-dimensional Fast Fourier Transforms (FFTs) are the main computational task in plane wave electronic structure calculations. Obtaining high performance on a large number of processors is non-trivial on the latest generation of parallel computers, which consist of nodes made up of shared-memory multiprocessors. A non-dogmatic method for obtaining high performance for such 3-dim FFTs in a combined MPI/OpenMP programming paradigm will be presented. Exploiting the peculiarities of plane wave electronic structure calculations, speedups of up to 160 and speeds of up to 130 Gflops were obtained on 256 processors.

  17. Linear programming computational experience with onyx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atrek, E.

    1994-12-31

    ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.

  18. A computer program for helicopter rotor noise using Lowson's formula in the time domain

    NASA Technical Reports Server (NTRS)

    Parks, C. L.

    1975-01-01

    A computer program (D3910) was developed to calculate both the far field and near field acoustic pressure signature of a tilted rotor in hover or uniform forward speed. The analysis, carried out in the time domain, is based on Lowson's formulation of the acoustic field of a moving force. The digital computer program is described, including methods used in the calculations, a flow chart, program D3910 source listing, instructions for the user, and two test cases with input and output listings and output plots.

  19. Language Analysis Package (L.A.P.) Version I System Design.

    ERIC Educational Resources Information Center

    Porch, Ann

    To permit researchers to use the speed and versatility of the computer to process natural language text as well as numerical data without undergoing special training in programming or computer operations, a language analysis package has been developed, partially based on several existing programs. An overview of the design is provided and system…

  20. Grammaire: L'informatique a la rescousse (Grammar: Computer Technology to the Rescue).

    ERIC Educational Resources Information Center

    Malandain, Jean-Louis

    1990-01-01

    The use of computer software to teach grammatical constructions faster by developing good linguistic "reflexes" is described. The program has three levels: choice of gender determiner; impact of the initial letter of the word on the determiner's form; and placement of adjectives. The program also provides reinforcement for speed of…

  1. Improvement and speed optimization of numerical tsunami modelling program using OpenMP technology

    NASA Astrophysics Data System (ADS)

    Chernov, A.; Zaytsev, A.; Yalciner, A.; Kurkin, A.

    2009-04-01

    Currently, the basic problem of tsunami modeling is the low speed of calculations, which is unacceptable for operational warning services. Existing algorithms for numerical modeling of the hydrodynamic processes of tsunami waves were developed without taking advantage of the capabilities of modern computer hardware. Considerable acceleration of the calculations can be achieved with parallel algorithms. We discuss here a new approach to parallelizing a tsunami modeling code using OpenMP technology (for multiprocessor systems with shared memory). Nowadays, multiprocessor systems are easily accessible to everyone: the cost of using such systems is much lower than that of clusters, and multithreaded algorithms can be applied on the desktop computers of researchers. Another important advantage of this approach is the shared-memory mechanism; there is no need to send data over slow networks (for example, Ethernet). All memory is common to all computing processes, which yields almost linear scalability of the program. In the new version of NAMI DANCE, the OpenMP-based multithreading algorithm provides an 80% gain in speed over the single-threaded version on a dual-processor unit, and a 320% gain was attained on a four-core processor unit. Thus it was possible to considerably reduce the computation time on scientific workstations (desktops) without a complete rewrite of the program and user interfaces. Further modernization of the algorithms for preparing initial data and processing results using OpenMP appears reasonable. The final version of NAMI DANCE with the increased computational speed can be used not only for research purposes but also in real-time Tsunami Warning Systems.

  2. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.

  3. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.

  4. Low-Cost, High-Speed Back-End Processing System for High-Frequency Ultrasound B-Mode Imaging

    PubMed Central

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T.; Shung, K. Kirk

    2009-01-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution. PMID:19574160

  5. User manual for Streamtube Curvature Analysis: Analytical method for predicting the pressure distribution about a nacelle at transonic speeds, appendix

    NASA Technical Reports Server (NTRS)

    Keith, J. S.; Ferguson, D. R.; Heck, P. H.

    1973-01-01

    The computer program listing of Streamtube Curvature Analysis is presented. The listing includes explanatory statements and titles so that the program flow is readily discernible. The computer program listing is in CDC FORTRAN 2.3 source language form, except for three subroutines, GETIX, GETRLX, and SAVIX, which are in COMPOSE 1.1 language.

  6. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    Reconstructing gene networks by experimentally testing the possible interactions between genes is tedious, so it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is a promising solution, most popularly through the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source tool GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed; they show that our parallel approach can successfully infer networks with the desired behaviors while greatly reducing the computation time. Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks.
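
    The abstract does not spell out the update rules of its hybrid GA-PSO method, so the following is only a generic sketch of how the two heuristics are commonly combined: a PSO velocity/position update for the whole swarm, followed by GA-style crossover and mutation applied to the weaker half. The fitness function and all parameter values are placeholders, not the paper's.

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(x):
            # Placeholder for the network-inference error function.
            return -np.sum((x - 0.5) ** 2)

        def ga_pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, mut=0.1):
            # PSO phase: standard inertia + cognitive + social velocity update.
            n, d = pos.shape
            r1, r2 = rng.random((n, d)), rng.random((n, d))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            # GA phase: rebuild the worst half from crossovers of the best half.
            order = np.argsort([fitness(p) for p in pos])
            worst, best = order[: n // 2], order[n // 2 :]
            for i in worst:
                a, b = pos[rng.choice(best)], pos[rng.choice(best)]
                mask = rng.random(d) < 0.5                 # uniform crossover
                pos[i] = np.where(mask, a, b) + mut * rng.standard_normal(d)
            return pos, vel

        # A driver loop would refresh pbest/gbest from fitness() after each step.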

  7. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background Reconstructing gene networks by experimentally testing the possible interactions between genes is tedious, so it has become a trend to adopt an automated reverse-engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with an evolutionary algorithm, two important issues must be addressed: premature convergence and high computational cost. To tackle the former and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is a promising solution, most popularly through the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms that infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source tool GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed; they show that our parallel approach can successfully infer networks with the desired behaviors while greatly reducing the computation time. Conclusions Parallel population-based algorithms can effectively determine network parameters, and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks. PMID:24428926

  8. For operation of the Computer Software Management and Information Center (COSMIC)

    NASA Technical Reports Server (NTRS)

    Carmon, J. L.

    1983-01-01

    Computer programs for degaussing, magnetic field calculation, low speed wing flap systems aerodynamics, structural panel analysis, dynamic stress/strain data acquisition, allocation and network scheduling, and digital filters are discussed.

  9. The 1995 NASA High-Speed Research Program Sonic Boom Workshop. Volume 1

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1996-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Sonic Boom Workshop on September 12-13, 1995. The workshop was designed to bring together NASA's scientists and engineers and their counterparts in industry, other Government agencies, and academia working in the sonic boom element of NASA's High-Speed Research Program. Specific objectives of this workshop were to (1) report the progress and status of research in sonic boom propagation, acceptability, and design; (2) promote and disseminate this technology within the appropriate technical communities; (3) help promote synergy among the scientists working in the Program; and (4) identify technology pacing the development of viable reduced-boom High-Speed Civil Transport concepts. The Workshop included these sessions: Session 1 - Sonic Boom Propagation (Theoretical); Session 2 - Sonic Boom Propagation (Experimental); and Session 3 - Acceptability Studies - Human and Animal.

  10. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Aeroelasticity, which involves strong coupling of fluids, structures, and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and the Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. The HSCT can experience vortex-induced aeroelastic oscillations, whereas the AST can experience structural oscillations associated with transonic buffet. Both aircraft may experience a dip in flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high-fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures are needed. Computations using these high-fidelity equations require large computational resources in both memory and speed. Conventional supercomputers have reached their limitations in both memory and speed; as a result, parallel computers have evolved to overcome these limitations. This paper addresses the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers, including the special techniques needed to take advantage of the architecture of new parallel computers. Results are illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.

  11. HSCT4.0 Application: Software Requirements Specification

    NASA Technical Reports Server (NTRS)

    Salas, A. O.; Walsh, J. L.; Mason, B. H.; Weston, R. P.; Townsend, J. C.; Samareh, J. A.; Green, L. L.

    2001-01-01

    The software requirements for the High Performance Computing and Communications Program High Speed Civil Transport application project, referred to as HSCT4.0, are described. The objective of the HSCT4.0 application project is to demonstrate the application of high-performance computing techniques to the problem of multidisciplinary design optimization of a supersonic transport configuration, using high-fidelity analysis simulations. Descriptions of the various functions (and the relationships among them) that make up the multidisciplinary application as well as the constraints on the software design are provided. This document serves to establish an agreement between the suppliers and the customer as to what the HSCT4.0 application should do and provides to the software developers the information necessary to design and implement the system.

  12. Computer-Aided Design/Manufacturing (CAD/M) for High-Speed Interconnect.

    DTIC Science & Technology

    1981-10-01

    are frequency sensitive and hence lend themselves to frequency domain analysis. Most of the classical microwave analysis is handled in the frequency ... capability integrated into a time-domain analysis program. This approach allows determination of frequency-dependent transmission line (interconnect ... the items to consider in any interconnect study is that of the frequency range of interest. This determines whether the interconnections must be treated

  13. Hypersonic research engine/aerothermodynamic integration model: Experimental results. Volume 3: Mach 7 component integration and performance

    NASA Technical Reports Server (NTRS)

    Andrews, E. H., Jr.; Mackley, E. A.

    1976-01-01

    The NASA Hypersonic Research Engine Project was undertaken to design, develop, and construct a hypersonic research ramjet engine for high performance and to flight test the developed concept on the X-15-2A airplane over the speed range from Mach 3 to 8. Computer program results are presented here for the Mach 7 component integration and performance tests.

  14. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching Yuen

    1991-01-01

    A new Lagrangian formulation of the Euler equations is adopted for the calculation of 2-D supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite-volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, a better than six-times speed-up was achieved on an 8192-processor CM-2 over a single processor of a CRAY-2.
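
    The record above applies a first-order Godunov finite-volume scheme to the Euler equations in Lagrangian form; as a much-reduced illustration of the same scheme family, the sketch below applies first-order Godunov to linear advection (where the interface Riemann problem is solved exactly by upwinding) on a periodic 1-D grid. It is not the paper's method, only the simplest runnable instance of a Godunov scheme.

        import numpy as np

        def godunov_advection(u, a, dx, dt, steps):
            # First-order finite-volume Godunov scheme for u_t + a*u_x = 0 on a
            # periodic grid; the interface Riemann problem reduces to upwinding.
            # Stability requires the CFL condition |a|*dt/dx <= 1.
            for _ in range(steps):
                flux = a * (u if a >= 0 else np.roll(u, -1))   # interface fluxes
                u = u - dt / dx * (flux - np.roll(flux, 1))
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # pulse with discontinuities
        u1 = godunov_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.004, steps=100)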

  15. Parallel computing using a Lagrangian formulation

    NASA Technical Reports Server (NTRS)

    Liou, May-Fun; Loh, Ching-Yuen

    1992-01-01

    This paper adopts a new Lagrangian formulation of the Euler equations for the calculation of two-dimensional supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite-volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, we have achieved a better than six-times speed-up on an 8192-processor CM-2 over a single processor of a CRAY-2.

  16. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware, any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  17. Precise color images: a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems are used across a large field of science and engineering. Although high-speed camera systems have improved considerably, most applications use them only to obtain high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of a combustion flame, thermal plasma, or molten materials. Recent digital high-speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
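
    The paper's displacement-adjustment procedure is not reproduced here; a standard way to estimate the inter-sensor displacement it describes is FFT-based phase correlation, sketched below with numpy. This returns whole-pixel shifts only; reaching the 0.2-pixel accuracy quoted above would require a sub-pixel refinement step, e.g., interpolating around the correlation peak.

        import numpy as np

        def estimate_shift(ref, img):
            # Whole-pixel displacement of img relative to ref via phase correlation:
            # normalize the cross-power spectrum, inverse-transform, find the peak.
            cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Map peak coordinates into signed shifts (dy, dx).
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))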

  18. 76 FR 8397 - Environmental Impact Statement for the Chicago, IL to St. Louis, MO High Speed Rail Program Corridor

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-14

    ... the Chicago, IL to St. Louis, MO High Speed Rail Program Corridor AGENCY: Federal Railroad... (EIS) for the Chicago, IL to St. Louis, MO High Speed Rail Corridor Program in compliance with the... Joliet and St. Louis to support additional passenger trains. The EIS will consider increasing the number...

  19. H.R. 1757--High Performance Computing and High Speed Networking Applications Act of 1993. Hearings before the Subcommittee on Science of the Committee on Science, Space, and Technology. House of Representatives, One Hundred Third Congress, First Session (April 27, May 6, May 11, 1993).

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.

    This document contains the transcript of three hearings on the High Performance Computing and High Speed Networking Applications Act of 1993 (H.R. 1757). The hearings were designed to obtain specific suggestions for improvements to the legislation and for alternative or additional application areas that should be pursued. Testimony and prepared…

  20. Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.

    PubMed

    Skoneczny, Szymon

    2015-01-01

    The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and in the mathematical modelling of bioreactors. A process following double-substrate kinetics with substrate inhibition proceeding in a biofilm has not previously been modelled by means of cellular automata. Each process in the proposed model, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms, and biofilm detachment, is simulated in a discrete manner. It was shown that for a flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however, considerable focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using the OpenMP Application Programming Interface for the C++ programming language. Simulations of biofilm growth were performed on three high-performance computers, and the speed-up coefficients of the computer programs were compared. Both algorithms enabled a significant reduction of computation time, which is important, inter alia, in the modelling and simulation of bioreactor dynamics.
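
    As a toy illustration of the kind of discrete update such a model performs, the sketch below advances substrate and biomass fields one step: nearest-neighbour diffusion followed by growth with Haldane (substrate-inhibition) kinetics. It is a single-substrate, serial numpy reduction of the paper's double-substrate OpenMP/C++ model, with purely illustrative parameter values.

        import numpy as np

        def ca_step(substrate, biomass, D=0.2, mu_max=0.5, Ks=0.3, Ki=2.0, Y=0.5, dt=0.1):
            # Diffusion: discrete Laplacian over the four von Neumann neighbours
            # (periodic boundaries via roll; illustrative parameter values).
            lap = (np.roll(substrate, 1, 0) + np.roll(substrate, -1, 0) +
                   np.roll(substrate, 1, 1) + np.roll(substrate, -1, 1) - 4 * substrate)
            substrate = substrate + D * lap
            # Haldane kinetics: growth rate is inhibited at high substrate levels.
            mu = mu_max * substrate / (Ks + substrate + substrate ** 2 / Ki)
            substrate = np.maximum(substrate - mu * biomass * dt / Y, 0.0)
            return substrate, biomass * (1.0 + mu * dt)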

  1. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerfler, Douglas; Austin, Brian; Cook, Brandon

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  2. The determination of equivalent bearing loading for the BSMT that simulate SSME high pressure oxidizer turbopump conditions using the SHABERTH/SINDA computer programs

    NASA Technical Reports Server (NTRS)

    Mcdonald, Gary H.

    1987-01-01

    The MSFC bearing seal material tester (BSMT) can be used to evaluate the SSME high pressure oxygen turbopump (HPOTP) bearing performance. The four HPOTP bearings carry both imposed radial and axial loads: the radial loads are caused by the HPOTP's shaft, main impeller, preburner impeller, and turbine, while the axial loads are caused by the LOX coolant flow through the bearings. These loads, coupled with bearing geometry and operating speed, define the bearing contact angle, contact Hertz stress, and heat generation rates. The BSMT can operate at HPOTP shaft speeds and provide proper coolant flow rates, but can only apply an axial load. Due to the inability to operate the bearings in the BSMT with an applied radial load, it is important to develop an equivalency between the applied axial loads and the actual HPOTP loadings. A shaft-bearing-thermal computer code (SHABERTH/SINDA) is used to simulate the BSMT bearing-shaft geometry and thermal-fluid operating conditions.

  3. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for the near future. However, it is well known that there are major obstacles, i.e., the physical limits of feature-size reduction and the ever-increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature-size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic sizes. Quantum computing, quantum dot-based computing, DNA-based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduced feature size (and hence increased integration level), reduced power consumption, and increased switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11 to 10^12 devices per square centimeter), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA-based architectures for highly parallel and systolic computation of signal/image processing applications, such as the FFT and the Wavelet and Walsh-Hadamard Transforms.

  4. Circulation control propellers for general aviation, including a BASIC computer program

    NASA Technical Reports Server (NTRS)

    Taback, I.; Braslow, A. L.; Butterfield, A. J.

    1983-01-01

    The feasibility of replacing variable-pitch propeller mechanisms with circulation control (Coanda effect) propellers on general aviation airplanes was examined. The study used a specially developed computer program written in BASIC which could compare the aerodynamic performance of circulation control propellers with conventional propellers. The comparison of aerodynamic performance for circulation control, fixed-pitch, and variable-pitch propellers is based upon the requirements for a 1600 kg (3600 lb) single-engine general aviation aircraft. A circulation control propeller using a supercritical airfoil was shown to be feasible over a representative range of design conditions. At a design condition for high-speed cruise, all three types of propellers showed approximately the same performance. At low speed, the performance of the circulation control propeller exceeded the performance of a fixed-pitch propeller but did not match the performance available from a variable-pitch propeller. It appears feasible to consider circulation control propellers for single-engine aircraft or multiengine aircraft which have their propellers on a common axis (tractor/pusher). The economics of the replacement require a study for each specific airplane application.

  5. Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of the various modern methods based on the Probability Density Function (PDF) methods for providing closures for treating the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and also on LES of such flows.

  6. Improved Processing Speed: Online Computer-Based Cognitive Training in Older Adults

    ERIC Educational Resources Information Center

    Simpson, Tamara; Camfield, David; Pipingas, Andrew; Macpherson, Helen; Stough, Con

    2012-01-01

    In an increasingly aging population, a number of adults are concerned about declines in their cognitive abilities. Online computer-based cognitive training programs have been proposed as an accessible means by which the elderly may improve their cognitive abilities; yet, more research is needed in order to assess the efficacy of these programs. In…

  7. Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G

    NASA Technical Reports Server (NTRS)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1972-01-01

    New and versatile method has been developed to supplement or replace use of original CINDA thermal analyzer program in order to take advantage of improved systems software and machine speeds of third generation computers. CINDA-3G program options offer variety of methods for solution of thermal analog models presented in network format.

  8. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  9. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  10. Acceleration of Radiance for Lighting Simulation by Using Parallel Computing with OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Wangda; McNeil, Andrew; Wetter, Michael

    2011-09-06

    We report on the acceleration of annual daylighting simulations for fenestration systems in the Radiance ray-tracing program. The algorithm was optimized to reduce both the redundant data input/output operations and the floating-point operations. To further accelerate the simulation speed, the calculation for matrix multiplications was implemented using parallel computing on a graphics processing unit. We used OpenCL, which is a cross-platform parallel programming language. Numerical experiments show that the combination of the above measures can speed up the annual daylighting simulations 101.7 times or 28.6 times when the sky vector has 146 or 2306 elements, respectively.
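
    The core operation being accelerated here is a dense matrix multiplication: a daylight-coefficient matrix (sensor points x sky patches) times one sky vector per timestep. The numpy sketch below shows the shape of that computation for a 146-patch sky, as in the abstract; the sensor count and the random values are placeholders, and it is this kind of dense product that the authors move to the GPU via OpenCL.

        import numpy as np

        rng = np.random.default_rng(1)
        n_sensors, n_patches, n_hours = 1000, 146, 8760   # 146-patch sky per abstract
        dc = rng.random((n_sensors, n_patches))    # daylight-coefficient matrix
        sky = rng.random((n_patches, n_hours))     # one sky vector per hour
        illuminance = dc @ sky                     # whole year in one dense product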

  11. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
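
    DCBLAST's query-distribution idea can be sketched in a few lines: split the query FASTA into chunks and run one blastn process per chunk. The sketch below uses local worker processes rather than a cluster scheduler, assumes blastn (from BLAST+) is on the PATH and a preformatted database exists, and is an illustration of the approach, not DCBLAST itself.

        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        def read_fasta(path):
            # Yield (header, sequence) records from a FASTA file.
            header, seq = None, []
            with open(path) as fh:
                for line in fh:
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(seq)
                        header, seq = line.strip(), []
                    else:
                        seq.append(line.strip())
            if header is not None:
                yield header, "".join(seq)

        def blast_chunk(args):
            # Write one chunk of the query and run blastn on it (tabular output).
            records, db, idx = args
            chunk, out = f"chunk_{idx}.fa", f"chunk_{idx}.tsv"
            with open(chunk, "w") as fh:
                fh.writelines(f"{h}\n{s}\n" for h, s in records)
            subprocess.run(["blastn", "-query", chunk, "-db", db,
                            "-out", out, "-outfmt", "6"], check=True)
            return out

        def dc_blast(query_fasta, db, n_jobs=8):
            records = list(read_fasta(query_fasta))
            size = -(-len(records) // n_jobs)      # ceiling division
            chunks = [(records[i:i + size], db, k)
                      for k, i in enumerate(range(0, len(records), size))]
            with ProcessPoolExecutor(n_jobs) as pool:
                return list(pool.map(blast_chunk, chunks))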

  12. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  13. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results, in search of an optimal path, to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g., cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
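
    The paper's optimal checkpoint placement is not reproduced here, but the baseline idea it improves upon, storing every k-th row of the dynamic programming matrix and recomputing the rows in between on demand during backtrace, can be sketched as follows for a simple global-alignment scoring scheme (match +1, mismatch/gap -1; both the scheme and the uniform spacing are assumptions of this sketch).

        import numpy as np

        def next_rows(a, b, start, row_start, stop):
            # Recompute score rows start..stop (inclusive), given row `start`.
            rows = [row_start.copy()]
            for i in range(start + 1, stop + 1):
                prev, cur = rows[-1], np.empty(len(b) + 1)
                cur[0] = -float(i)                     # leading-gap penalty
                for j in range(1, len(b) + 1):
                    s = 1.0 if a[i - 1] == b[j - 1] else -1.0
                    cur[j] = max(prev[j - 1] + s, prev[j] - 1.0, cur[j - 1] - 1.0)
                rows.append(cur)
            return rows

        def forward_checkpoints(a, b, k):
            # Forward pass keeping only every k-th row: O(len(a)*len(b)/k) memory.
            row = np.arange(0, -(len(b) + 1), -1, dtype=float)
            cps = {0: row.copy()}
            for i in range(1, len(a) + 1):
                row = next_rows(a, b, i - 1, row, i)[-1]
                if i % k == 0:
                    cps[i] = row.copy()
            return cps

        def row_on_demand(a, b, cps, k, i):
            # During backtrace, rebuild row i from the nearest checkpoint below it.
            base = (i // k) * k
            return next_rows(a, b, base, cps[base], i)[i - base]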

  14. Computational Fluid Dynamics Program at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1989-01-01

    The Computational Fluid Dynamics (CFD) Program at NASA Ames Research Center is reviewed and discussed. The technical elements of the CFD Program are listed and briefly discussed. These elements include algorithm research, research and pilot code development, scientific visualization, advanced surface representation, volume grid generation, and numerical optimization. Next, the discipline of CFD is briefly discussed and related to other areas of research at NASA Ames, including experimental fluid dynamics, computer science research, computational chemistry, and numerical aerodynamic simulation. These areas combine with CFD to form a larger area of research, which might collectively be called computational technology. The ultimate goal of computational technology research at NASA Ames is to increase the physical understanding of the world in which we live, solve problems of national importance, and increase the technical capabilities of the aerospace community. Next, the major programs at NASA Ames that either use CFD technology or perform research in CFD are listed and discussed. Briefly, this list includes turbulence/transition physics and modeling, high-speed real gas flows, interdisciplinary research, turbomachinery demonstration computations, complete aircraft aerodynamics, rotorcraft applications, powered-lift flows, high-alpha flows, multiple-body aerodynamics, and incompressible flow applications. Some of the individual problems actively being worked on in each of these areas are listed to help define the breadth or extent of CFD involvement in each of these major programs. State-of-the-art examples of various CFD applications are presented to highlight most of these areas. The main emphasis of this portion of the presentation is on examples which will not otherwise be treated at this conference by the individual presentations. Finally, a list of principal current limitations and expected future directions is given.

  15. Application of advanced grid generation techniques for flow field computations about complex configurations

    NASA Technical Reports Server (NTRS)

    Kathong, Monchai; Tiwari, Surendra N.

    1988-01-01

    In the computation of flowfields about complex configurations, it is very difficult to construct a boundary-fitted coordinate system. An alternative approach is to use several grids at once, each generated independently. This procedure is called the multiple grids or zonal grids approach; its applications are investigated here. The method is conservative, providing conservation of fluxes at grid interfaces. The Euler equations are solved numerically on such grids for various configurations. The numerical scheme used is the finite-volume technique with a three-stage Runge-Kutta time integration. The code is vectorized and programmed to run on the CDC VPS-32 computer. Steady-state solutions of the Euler equations are presented and discussed. The solutions include: low-speed flow over a sphere, high-speed flow over a slender body, supersonic flow through a duct, and supersonic internal/external flow interaction for an aircraft configuration at various angles of attack. The results demonstrate that the multiple grids approach, along with the conservative interfacing, is capable of computing the flows about complex configurations where the use of a single grid system is not possible.

  16. Estimation of wing nonlinear aerodynamic characteristics at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.; Mack, R. J.

    1980-01-01

    A computational system for estimation of nonlinear aerodynamic characteristics of wings at supersonic speeds was developed and was incorporated in a computer program. This corrected linearized theory method accounts for nonlinearities in the variation of basic pressure loadings with local surface slopes, predicts the degree of attainment of theoretical leading edge thrust, and provides an estimate of detached leading edge vortex loadings that result when the theoretical thrust forces are not fully realized.

  17. Survivability of the Hardened Mobile Launcher When Attacked by a Hypothetical Rapidly Retargetable ICBM System.

    DTIC Science & Technology

    1986-03-01

    Contents include: Aimpoints; Overview; Random Movement of the HML; Computing Burst Locations and the HML's Final Location; Selecting the HML's Speed. ... described threat. The actual model used in this study is an MEASIC computer program, written and run on an Apple Macintosh computer. It is described in ... mechanics of the computer program that models the warheads' flight time sequence, it will be helpful to explain some of the elements of the sequence

  18. Polarization Imaging Apparatus

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin K.; Chen, Qiushui

    2010-01-01

    A polarization imaging apparatus has shown promise as a prototype of instruments for medical imaging with contrast greater than that achievable by use of non-polarized light. The underlying principles of design and operation are derived from observations that light interacts with tissue ultrastructures that affect reflectance, scattering, absorption, and polarization of light. The apparatus utilizes high-speed electro-optical components for generating light properties and acquiring polarization images through aligned polarizers. These components include phase retarders made of OptoCeramic (registered trademark) material - a ceramic that has a high electro-optical coefficient. The apparatus includes a computer running a program that implements a novel algorithm for controlling the phase retarders, capturing image data, and computing the Stokes polarization images. Potential applications include imaging of superficial cancers and other skin lesions, early detection of diseased cells, and microscopic analysis of tissues. The high imaging speed of this apparatus could be beneficial for observing live cells or tissues and could enable rapid identification of moving targets in astronomy and national defense. The apparatus could also be used as an analysis tool in materials research and industrial processing.
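
    The apparatus's own algorithm belongs to the paper, but assembling Stokes images from polarization-analyzed intensity frames follows textbook relations, sketched below: six intensity images (linear analyzer at 0, 45, 90, and 135 degrees plus right- and left-circular) combine into the four Stokes parameters and a degree-of-linear-polarization map.

        import numpy as np

        def stokes_images(I0, I45, I90, I135, Ircp, Ilcp):
            # Textbook Stokes assembly from six analyzed intensity images.
            S0 = I0 + I90                     # total intensity
            S1 = I0 - I90                     # horizontal vs. vertical linear
            S2 = I45 - I135                   # +45 vs. -45 degree linear
            S3 = Ircp - Ilcp                  # right vs. left circular
            dolp = np.hypot(S1, S2) / np.maximum(S0, 1e-12)   # degree of linear pol.
            return S0, S1, S2, S3, dolp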

  19. Safety of High Speed Ground Transportation Systems : Analytical Methodology for Safety Validation of Computer Controlled Subsystems : Volume 2. Development of a Safety Validation Methodology

    DOT National Transportation Integrated Search

    1995-01-01

    This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety-critical functions in high-speed rail or magnetic levitation ...

  20. Gas and particle motions in a rapidly decompressed flow

    NASA Astrophysics Data System (ADS)

    Johnson, Blair; Zunino, Heather; Adrian, Ronald; Clarke, Amanda

    2017-11-01

    To understand the behavior of a rapidly decompressed particle bed in response to a shock, an experimental study is performed in a cylindrical (D = 4.1 cm) glass vertical shock tube of a densely packed (ρ = 61%) particle bed. The bed is comprised of spherical glass particles, ranging from D50 = 44-297 μm between experiments. High-speed pressure sensors are incorporated to capture shock speeds and strengths. High-speed video and particle image velocimetry (PIV) measurements are collected to examine vertical and radial velocities of both the particles and gas to elucidate features of the shock wave and resultant expansion wave in the lateral center of the tube, away from boundaries. In addition to optically analyzing the front velocity of the rising particle bed, interaction between the particle and gas phases are investigated as the flow accelerates and the particle front becomes more dilute. Particle and gas interactions are also considered in exploring mechanisms through which turbulence develops in the flow. This work is supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science and Academic Alliance Program, under Contract No. DE-NA0002378.

  1. Experiments with microcomputer-based artificial intelligence environments

    USGS Publications Warehouse

    Summers, E.G.; MacDonald, R.A.

    1988-01-01

    The U.S. Geological Survey (USGS) has been experimenting with the use of relatively inexpensive microcomputers as artificial intelligence (AI) development environments. Several AI languages are available that perform fairly well on desk-top personal computers, as are low-to-medium cost expert system packages. Although the performance of these systems is respectable, their speed and capacity limitations make them questionable for the serious earth science applications foreseen by the USGS. The most capable artificial intelligence applications currently are concentrated on what is known as the "artificial intelligence computer," and include the Xerox D-series, Tektronix 4400 series, Symbolics 3600, VAX, LMI, and Texas Instruments Explorer. The artificial intelligence computer runs expert system shells and the Lisp, Prolog, and Smalltalk programming languages. However, these AI environments are expensive. Recently, inexpensive 32-bit hardware has become available for the IBM/AT microcomputer. USGS has acquired and recently completed Beta-testing of the Gold Hill Systems 80386 Hummingboard, which runs Common Lisp on an IBM/AT microcomputer. The Hummingboard appears to have the potential to overcome many of the speed/capacity limitations observed with AI applications on standard personal computers. USGS is a Beta-test site for the Gold Hill Systems GoldWorks expert system. GoldWorks combines some high-end expert system shell capabilities in a medium-cost package. This shell is developed in Common Lisp, runs on the 80386 Hummingboard, and provides some expert system features formerly available only on AI computers, including frame- and rule-based reasoning, an on-line tutorial, multiple inheritance, and object programming. © 1988 International Association for Mathematical Geology.

  2. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    NASA Astrophysics Data System (ADS)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

    After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information is becoming important to understand earthquake phenomena. At the same time, the quantity of seismic data has become enormous with the progress of high-accuracy observation networks; many parameters (e.g., positional information, origin time, magnitude, etc.) must be handled to display seismic information efficiently. Therefore, high-speed processing of data and image information is necessary to handle enormous amounts of seismic data. Recently, the GPU (Graphic Processing Unit) has been used as an acceleration tool for data processing and calculation in various study fields. This movement is called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing now provides a high-performance computing environment at a lower cost than before. Moreover, the use of GPUs has an advantage for visualization of the processed data, because the GPU was originally designed as an architecture for graphics processing. In GPU computing, the processed data is always stored in the video memory. Therefore, we can directly write drawing information to the VRAM on the video card by combining CUDA and a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU computing-based high-speed visualization and its feasibility for a high-speed visualization system for hypocenter data.

  3. 75 FR 16552 - High-Speed Intercity Passenger Rail (HSIPR) Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-01

    ..., energy savings from traffic diversions from other modes, employment of green building and manufacturing... selections for the High-Speed Intercity Passenger Rail (HSIPR) Program. This notice builds on the program...

  4. 1995 NASA High-Speed Research Program Sonic Boom Workshop. Volume 2; Configuration Design, Analysis, and Testing

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Sonic Boom Workshop on September 12-13, 1995. The workshop was designed to bring together NASA's scientists and engineers and their counterparts in industry, other Government agencies, and academia working in the sonic boom element of NASA's High-Speed Research Program. Specific objectives of this workshop were to: (1) report the progress and status of research in sonic boom propagation, acceptability, and design; (2) promote and disseminate this technology within the appropriate technical communities; (3) help promote synergy among the scientists working in the Program; and (4) identify technology pacing the development of viable reduced-boom High-Speed Civil Transport concepts. The Workshop was organized in four sessions: Session 1 - Sonic Boom Propagation (Theoretical); Session 2 - Sonic Boom Propagation (Experimental); Session 3 - Acceptability Studies (Human and Animal); and Session 4 - Configuration Design, Analysis, and Testing.

  5. Automatic Data Processing Equipment (ADPE) acquisition plan for the medical sciences

    NASA Technical Reports Server (NTRS)

    1979-01-01

    An effective mechanism for meeting the SLSD/MSD data handling/processing requirements for Shuttle is discussed. The ability to meet these requirements depends upon the availability of a general purpose high speed digital computer system. This system is expected to implement those data base management and processing functions required across all SLSD/MSD programs during training, laboratory operations/analysis, simulations, mission operations, and post mission analysis/reporting.

  6. Particle trajectory computer program for icing analysis of axisymmetric bodies

    NASA Technical Reports Server (NTRS)

    Frost, Walter; Chang, Ho-Pen; Kimble, Kenneth R.

    1982-01-01

    General aviation aircraft and helicopters exposed to an icing environment can accumulate ice resulting in a sharp increase in drag and reduction of maximum lift causing hazardous flight conditions. NASA Lewis Research Center (LeRC) is conducting a program to examine, with the aid of high-speed computer facilities, how the trajectories of particles contribute to the ice accumulation on airfoils and engine inlets. This study, as part of the NASA/LeRC research program, develops a computer program for the calculation of icing particle trajectories and impingement limits relative to axisymmetric bodies in the leeward-windward symmetry plane. The methodology employed in the current particle trajectory calculation is to integrate the governing equations of particle motion in a flow field computed by the Douglas axisymmetric potential flow program. The three-degrees-of-freedom (horizontal, vertical, and pitch) motion of the particle is considered. The particle is assumed to be acted upon by aerodynamic lift and drag forces, gravitational forces, and for nonspherical particles, aerodynamic moments. The particle momentum equation is integrated to determine the particle trajectory. Derivation of the governing equations and the method of their solution are described in Section 2.0. General features, as well as input/output instructions for the particle trajectory computer program, are described in Section 3.0. The details of the computer program are described in Section 4.0. Examples of the calculation of particle trajectories demonstrating application of the trajectory program to given axisymmetric inlet test cases are presented in Section 5.0. For the examples presented, the particles are treated as spherical water droplets. In Section 6.0, limitations of the program relative to excessive computer time and recommendations in this regard are discussed.
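
    As a much-simplified illustration of the trajectory integration described above, the sketch below advances a spherical water droplet through a prescribed flow field under Stokes drag and gravity using forward Euler steps. The real program uses a Douglas potential-flow solution and three degrees of freedom including pitch; here the flow field, droplet size, and step size are illustrative placeholders.

        import numpy as np

        def droplet_trajectory(x0, v0, flow, dt, steps, d=20e-6,
                               rho_p=1000.0, mu=1.8e-5, g=9.81):
            # Forward-Euler integration of a droplet under Stokes drag + gravity.
            # `flow(x)` returns the local air velocity at position x.
            m = rho_p * np.pi * d ** 3 / 6.0          # droplet mass
            x, v = np.array(x0, float), np.array(v0, float)
            path = [x.copy()]
            for _ in range(steps):
                drag = 3.0 * np.pi * mu * d * (flow(x) - v)   # Stokes drag (low Re)
                v = v + (drag / m + np.array([0.0, -g])) * dt
                x = x + v * dt
                path.append(x.copy())
            return np.array(path)

        # Hypothetical case: droplet released into a uniform 80 m/s freestream.
        traj = droplet_trajectory([0.0, 0.1], [80.0, 0.0],
                                  flow=lambda x: np.array([80.0, 0.0]),
                                  dt=1e-4, steps=500)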

  7. Information Management for a Large Multidisciplinary Project

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Randall, Donald P.; Cronin, Catherine K.

    1992-01-01

    In 1989, NASA's Langley Research Center (LaRC) initiated the High-Speed Airframe Integration Research (HiSAIR) Program to develop and demonstrate an integrated environment for high-speed aircraft design using advanced multidisciplinary analysis and optimization procedures. The major goals of this program were to evolve the interactions among disciplines and promote sharing of information, to provide a timely exchange of information among aeronautical disciplines, and to increase the awareness of the effects each discipline has upon other disciplines. LaRC historically has emphasized the advancement of analysis techniques. HiSAIR was founded to synthesize these advanced methods into a multidisciplinary design process emphasizing information feedback among disciplines and optimization. Crucial to the development of such an environment are the definition of the required data exchanges and the methodology for both recording the information and providing the exchanges in a timely manner. These requirements demand extensive use of data management techniques, graphic visualization, and interactive computing. HiSAIR represents the first attempt at LaRC to promote interdisciplinary information exchange on a large scale using advanced data management methodologies combined with state-of-the-art, scientific visualization techniques on graphics workstations in a distributed computing environment. The subject of this paper is the development of the data management system for HiSAIR.

  8. Symplectic molecular dynamics simulations on specially designed parallel computers.

    PubMed

    Borstnik, Urban; Janezic, Dusanka

    2005-01-01

    We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which treats high-frequency vibrational motion analytically and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that fewer steps are needed and each step takes less time, enabling fast MD simulations. We study the computational performance of MD simulations of molecular systems on specialized computers and provide a comparison with standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
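
    The SISM treats the high-frequency vibrational part of a molecular Hamiltonian analytically inside a split integrator; the one-dimensional sketch below conveys the idea under strong simplification: a half-kick from the slow force, an exact rotation for the stiff harmonic term, and another half-kick. The frequency, mass, and slow-force function are placeholders, not the method's actual normal-mode treatment.

        import numpy as np

        def sism_like_step(q, p, m, omega, slow_force, dt):
            # Half-kick from the slow (low-frequency) force: numerical part.
            p = p + 0.5 * dt * slow_force(q)
            # Exact analytic propagation of the stiff harmonic term over dt.
            c, s = np.cos(omega * dt), np.sin(omega * dt)
            q, p = q * c + (p / (m * omega)) * s, p * c - m * omega * q * s
            # Second half-kick closes the symmetric (symplectic) splitting.
            p = p + 0.5 * dt * slow_force(q)
            return q, p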

  9. Optical Design Using Small Dedicated Computers

    NASA Astrophysics Data System (ADS)

    Sinclair, Douglas C.

    1980-09-01

    Since the time of the 1975 International Lens Design Conference, we have developed a series of optical design programs for Hewlett-Packard desktop computers. The latest programs in the series, OSLO-25G and OSLO-45G, have most of the capabilities of general-purpose optical design programs, including optimization based on exact ray-trace data. The computational techniques used in the programs are similar to ones used in other programs, but the creative environment experienced by a designer working directly with these small dedicated systems is typically much different from that obtained with shared-computer systems. Some of the differences are due to the psychological factors associated with using a system having zero running cost, while others are due to the design of the program, which emphasizes graphical output and ease of use, as opposed to computational speed.

  10. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    PubMed

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
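
    To make the k-mer idea concrete, here is a minimal Python sketch of alignment-free distance estimation by k-mer counting; it is schematic and does not reproduce MUSCLE's exact distance formula.

      from collections import Counter

      def kmer_counts(seq, k=3):
          # Count overlapping k-letter words in a sequence.
          return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

      def kmer_distance(a, b, k=3):
          # Fraction of shared k-mers converted into a distance,
          # computed without any alignment of the two sequences.
          ca, cb = kmer_counts(a, k), kmer_counts(b, k)
          shared = sum(min(n, cb[kmer]) for kmer, n in ca.items())
          return 1.0 - shared / (min(len(a), len(b)) - k + 1)

      print(kmer_distance("MKVLAAGICK", "MKVLSAGICK"))   # small distance: similar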

  11. Three-dimensional vector modeling and restoration of flat finite wave tank radiometric measurements

    NASA Technical Reports Server (NTRS)

    Truman, W. M.; Balanis, C. A.

    1977-01-01

    The three-dimensional vector interaction between a microwave radiometer and a wave tank was modeled. Computer programs for predicting the response of the radiometer to the brightness temperature characteristics of the surroundings were developed, along with a computer program that can invert (restore) the radiometer measurements. It is shown that the computer programs can be used to simulate the viewing of large bodies of water, and are applicable to radiometer measurements received from satellites monitoring the ocean. The water temperature, salinity, and wind speed can be determined.

  12. Computer analysis speeds corrugated horn design

    NASA Technical Reports Server (NTRS)

    Loefer, G. R.; Newton, J. M.; Schuchardt, J. M.; Dees, J. W.

    1976-01-01

    A computer analysis program is developed for selecting the optimum flare angle and horn length of a corrugated horn design, the horn diameter, and the radiation pattern, before resorting to machining operations. The calculated antenna pattern is best suited to narrowband designs, and averaging of the E and H planes is recommended for wideband work. The program language used is BASIC. Some design examples are provided with representative data, printouts, and a rundown of the equations programmed.

  13. Differences in energy expenditure during high-speed versus standard-speed yoga: A randomized sequence crossover trial.

    PubMed

    Potiaumpai, Melanie; Martins, Maria Carolina Massoni; Rodriguez, Roberto; Mooney, Kiersten; Signorile, Joseph F

    2016-12-01

    To compare energy expenditure and volume of oxygen consumption and carbon dioxide production during a high-speed yoga and a standard-speed yoga program. Randomized repeated measures controlled trial. A laboratory of neuromuscular research and active aging. Sun-Salutation B was performed, for eight minutes, at a high speed and at a standard speed separately while oxygen consumption was recorded. Caloric expenditure was calculated using volume of oxygen consumption and carbon dioxide production. The main outcome measure was the difference in energy expenditure (kcal) between high-speed yoga (HSY) and standard-speed yoga (SSY). Significant differences were observed in energy expenditure between yoga speeds, with high-speed yoga producing significantly higher energy expenditure than standard-speed yoga (MD=18.55, SE=1.86, p<0.01). Significant differences were also seen between high-speed and standard-speed yoga for volume of oxygen consumed and carbon dioxide produced. High-speed yoga results in a significantly greater caloric expenditure than standard-speed yoga. High-speed yoga may be an effective alternative program for those targeting cardiometabolic markers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. User's guide for ENSAERO: A multidisciplinary program for fluid/structural/control interaction studies of aircraft (release 1)

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    1994-01-01

    Strong interactions can occur between the flow about an aerospace vehicle and its structural components resulting in several important aeroelastic phenomena. These aeroelastic phenomena can significantly influence the performance of the vehicle. At present, closed-form solutions are available for aeroelastic computations when flows are in either the linear subsonic or supersonic range. However, for aeroelasticity involving complex nonlinear flows with shock waves, vortices, flow separations, and aerodynamic heating, computational methods are still under development. These complex aeroelastic interactions can be dangerous and limit the performance of aircraft. Examples of these detrimental effects are aircraft with highly swept wings experiencing vortex-induced aeroelastic oscillations, transonic regime at which the flutter speed is low, aerothermoelastic loads that play a critical role in the design of high-speed vehicles, and flow separations that often lead to buffeting with undesirable structural oscillations. The simulation of these complex aeroelastic phenomena requires an integrated analysis of fluids and structures. This report presents a summary of the development, applications, and procedures to use the multidisciplinary computer code ENSAERO. This code is based on the Euler/Navier-Stokes flow equations and modal/finite-element structural equations.

  15. A cloud-based electronic medical record for scheduling, tracking, and documenting examinations and treatment of retinopathy of prematurity.

    PubMed

    Arnold, Robert W; Jacob, Jack; Matrix, Zinnia

    2012-01-01

    Screening by neonatologists and staging by ophthalmologists is a cost-effective intervention, but inadvertent missed examinations create a high liability. Paper tracking, bedside schedule reminders, and a computer scheduling and reminder program were compared for speed of input and retrospective missed examination rate. A neonatal intensive care unit (NICU) process was then programmed for cloud-based distribution for inpatient and outpatient retinopathy of prematurity monitoring. Over 11 years, 367 premature infants in one NICU were prospectively monitored. The initial paper system missed 11% of potential examinations, the Windows server-based system missed 2%, and the current cloud-based system missed 0% of potential inpatient and outpatient examinations. Computer input of examinations took the same or less time than paper recording. A computer application with a deliberate NICU process improved the proportion of eligible neonates getting their scheduled eye examinations in a timely manner. Copyright 2012, SLACK Incorporated.

  16. The design and implementation of CRT displays in the TCV real-time simulation

    NASA Technical Reports Server (NTRS)

    Leavitt, J. B.; Tariq, S. I.; Steinmetz, G. G.

    1975-01-01

    The design and application of computer graphics to the Terminal Configured Vehicle (TCV) program were described. A Boeing 737-100 series aircraft was modified with a second flight deck and several computers installed in the passenger cabin. One of the elements in support of the TCV program is a sophisticated simulation system developed to duplicate the operation of the aft flight deck. This facility consists of an aft flight deck simulator, equipped with realistic flight instrumentation, a CDC 6600 computer, and an Adage graphics terminal; this terminal presents to the simulator pilot displays similar to those used on the aircraft with equivalent man-machine interactions. These two displays form the primary flight instrumentation for the pilot and are dynamic images depicting critical flight information. The graphics terminal is a high speed interactive refresh-type graphics system. To support the cockpit display, two remote CRTs were wired in parallel with two of the Adage scopes.

  17. Separated flow over bodies of revolution using an unsteady discrete-vorticity cross wake. Part 2: Computer program description

    NASA Technical Reports Server (NTRS)

    Marshall, F. J.; Deffenbaugh, F. D.

    1974-01-01

    A method is developed to determine the flow field of a body of revolution in separated flow. The computer was used to integrate various solutions and solution properties of the sub-flow fields which made up the entire flow field without resorting to a finite difference solution to the complete Navier-Stokes equations. The technique entails the use of the unsteady cross flow analogy and a new solution to the two-dimensional unsteady separated flow problem based upon an unsteady, discrete-vorticity wake. Data for the forces and moments on aerodynamic bodies at low speeds and high angle of attack (outside the range of linear inviscid theories) such that the flow is substantially separated are produced which compare well with experimental data. In addition, three dimensional steady separated regions and wake vortex patterns are determined. The computer program developed to perform the numerical calculations is described.

  18. Propeller speed and phase sensor

    NASA Technical Reports Server (NTRS)

    Collopy, Paul D. (Inventor); Bennett, George W. (Inventor)

    1992-01-01

    A speed and phase sensor for counterrotating aircraft propellers is described. A toothed wheel is attached to each propeller, and the teeth trigger a sensor as they pass, producing a sequence of signals. From the sequence of signals, the rotational speed of each propeller is computed based on the time intervals between successive signals. The speed can be computed several times during one revolution, thus giving speed information which is highly up-to-date. Given that spacing between teeth may not be uniform, the signals produced may be nonuniform in time. Error coefficients are derived to correct for nonuniformities in the resulting signals, thus allowing accurate speed to be computed despite the spacing nonuniformities. Phase can be viewed as the relative rotational position of one propeller with respect to the other, but measured at a fixed time. Phase is computed from the signals.
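
    The speed arithmetic described above can be sketched briefly. The Python fragment below is illustrative only (the function and variable names are not from the patent); it shows how per-tooth spacing fractions act as correction coefficients so that accurate speeds are computed despite nonuniform tooth spacing.

      def speeds_rpm(timestamps, spacing_frac):
          # timestamps: trigger times (s) of successive tooth passages
          # spacing_frac[i]: fraction of one revolution from tooth i to tooth i+1;
          # these play the role of the error coefficients for nonuniform spacing
          out = []
          for i in range(len(timestamps) - 1):
              dt = timestamps[i + 1] - timestamps[i]
              frac = spacing_frac[i % len(spacing_frac)]
              out.append(frac / dt * 60.0)   # rev/s -> rev/min
          return out

      # Four teeth, one slightly offset; wheel spinning uniformly at 25 rev/s.
      teeth = [0.26, 0.24, 0.25, 0.25]
      times = [0.0, 0.0104, 0.0200, 0.0300, 0.0400, 0.0504]
      print(speeds_rpm(times, teeth))   # all entries 1500 rpm despite uneven teeth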

  19. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated in standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Structure and Utility of Blind Speed Intervals Associated with Doppler Measurements of Range Rate

    DTIC Science & Technology

    1993-02-01

    computer programming concepts of speed, memory, and data structures that can be exploited to fabricate efficient software realizations of two phase range...to the effect that it is possible to derive a reasonable unambiguous estimate of range rate from the measurement of the pulse-to-pulse phase shift in...the properties of the blind speed intervals generated by the base speeds involved in two measurement equations. Sections 12 through 18 make up the

  1. Multi-Objective Aerodynamic Optimization of the Streamlined Shape of High-Speed Trains Based on the Kriging Model.

    PubMed

    Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei

    2017-01-01

    Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
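
    As a rough illustration of the surrogate step, the Python sketch below fits Kriging (Gaussian process) models to two stand-in objectives and screens samples for non-dominated candidates; the analytic drag and lift functions are invented for the example, scikit-learn is assumed to be available, and nothing here reproduces the paper's actual design variables.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.uniform(0.0, 1.0, size=(40, 3))      # normalized shape variables
      drag = 0.2 + 0.5 * (X[:, 0] - 0.4) ** 2 + 0.1 * X[:, 1]   # stand-in for CFD drag
      lift = 0.1 + 0.3 * np.sin(3.0 * X[:, 2]) + 0.05 * X[:, 0] # stand-in for CFD lift

      models = {}
      for name, y in (("drag", drag), ("lift", lift)):
          gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
          # Cross-validation gauges surrogate quality, echoing the paper's use
          # of CV to construct the optimal Kriging model.
          print(name, "CV R^2:", cross_val_score(gp, X, y, cv=5).mean())
          models[name] = gp.fit(X, y)

      # Densely sample the cheap surrogate and keep the non-dominated designs.
      C = rng.uniform(0.0, 1.0, size=(2000, 3))
      d, l = models["drag"].predict(C), models["lift"].predict(C)
      pareto = [i for i in range(len(C))
                if not np.any((d <= d[i]) & (l <= l[i]) & ((d < d[i]) | (l < l[i])))]
      print(len(pareto), "non-dominated candidates")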

  2. An experimental investigation of a Mach 3.0 high-speed civil transport at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Hernandez, Gloria; Covell, Peter F.; Mcgraw, Marvin E., Jr.

    1993-01-01

    An experimental study was conducted to determine the aerodynamic characteristics of a proposed high speed civil transport. This configuration was designed to cruise at Mach 3.0 and sized to carry 250 passengers for 6500 n.mi. The configuration consists of a highly blended wing body and features a blunt parabolic nose planform, a highly swept inboard wing panel, a moderately swept outboard wing panel, and a curved wingtip. Wind tunnel tests were conducted in the Langley Unitary Plan Wind Tunnel on a 0.0098-scale model. Force, moment, and pressure data were obtained for Mach numbers ranging from 1.6 to 3.6 and at angles of attack ranging from -4 to 10 deg. Extensive flow visualization studies (vapor screen and oil flow) were obtained in the experimental program. Both linear and advanced computational fluid dynamics (CFD) theoretical comparisons are shown to assess the ability to predict forces, moments, and pressures on configurations of this type. In addition, an extrapolation of the wind tunnel data, based on empirical principles, to full-scale conditions is compared with the theoretical aerodynamic predictions.

  3. Mars rover local navigation and hazard avoidance

    NASA Technical Reports Server (NTRS)

    Wilcox, B. H.; Gennery, D. B.; Mishkin, A. H.

    1989-01-01

    A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.

  4. Mars Rover Local Navigation And Hazard Avoidance

    NASA Astrophysics Data System (ADS)

    Wilcox, B. H.; Gennery, D. B.; Mishkin, A. H.

    1989-03-01

    A Mars rover sample return mission has been proposed for the late 1990's. Due to the long speed-of-light delays between Earth and Mars, some autonomy on the rover is highly desirable. JPL has been conducting research in two possible modes of rover operation, Computer-Aided Remote Driving and Semiautonomous Navigation. A recently-completed research program used a half-scale testbed vehicle to explore several of the concepts in semiautonomous navigation. A new, full-scale vehicle with all computational and power resources on-board will be used in the coming year to demonstrate relatively fast semiautonomous navigation. The computational and power requirements for Mars rover local navigation and hazard avoidance are discussed.

  5. FORTRAN program for predicting off-design performance of radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Wasserbauer, C. A.; Glassman, A. J.

    1975-01-01

    The FORTRAN IV program uses a one-dimensional solution of flow conditions through the turbine along the mean streamline. The program inputs needed are the design-point requirements and turbine geometry. The output includes performance and velocity-diagram parameters over a range of speed and pressure ratio. Computed performance is compared with the experimental data from two radial-inflow turbines and with the performance calculated by a previous computer program. The flow equations, program listing, and input and output for a sample problem are given.

  6. Low latency and persistent data storage

    DOEpatents

    Fitch, Blake G; Franceschini, Michele M; Jagmohan, Ashish; Takken, Todd

    2014-11-04

    Persistent data storage is provided by a computer program product that includes computer program code configured for receiving a low latency store command that includes write data. The write data is written to a first memory device that is implemented by a nonvolatile solid-state memory technology characterized by a first access speed. It is acknowledged that the write data has been successfully written to the first memory device. The write data is written to a second memory device that is implemented by a volatile memory technology. At least a portion of the data in the first memory device is written to a third memory device when a predetermined amount of data has been accumulated in the first memory device. The third memory device is implemented by a nonvolatile solid-state memory technology characterized by a second access speed that is slower than the first access speed.
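
    A minimal sketch of the claimed write path follows, with in-memory lists standing in for the three devices; the class name and threshold policy are illustrative, not taken from the patent text.

      class TieredStore:
          def __init__(self, flush_threshold=4):
              self.fast_nvm = []     # first device: fast nonvolatile memory
              self.volatile = []     # second device: volatile memory copy
              self.slow_nvm = []     # third device: slower nonvolatile memory
              self.flush_threshold = flush_threshold

          def low_latency_store(self, data):
              self.fast_nvm.append(data)
              ack = True             # acknowledge once durable in fast NVM
              self.volatile.append(data)
              if len(self.fast_nvm) >= self.flush_threshold:
                  # migrate accumulated data to the slower nonvolatile device
                  self.slow_nvm.extend(self.fast_nvm)
                  self.fast_nvm.clear()
              return ack

      store = TieredStore()
      for block in ["a", "b", "c", "d", "e"]:
          store.low_latency_store(block)
      print(store.slow_nvm, store.fast_nvm)   # ['a','b','c','d'] ['e']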

  7. Guidelines for developing vectorizable computer programs

    NASA Technical Reports Server (NTRS)

    Miner, E. W.

    1982-01-01

    Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
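
    The core principle translates directly to any array-oriented setting. The short Python/NumPy sketch below, which is ours rather than the report's, contrasts a sweep with an iteration-to-iteration dependence (which resists vectorization) against a sweep re-expressed over independent elements (which vectorizes).

      import numpy as np

      u = np.linspace(0.0, 1.0, 10_000)

      # Recurrence form: each iteration reads a value written on the previous
      # iteration, so the loop cannot be executed in vector mode.
      v = u.copy()
      for i in range(1, len(v) - 1):
          v[i] = 0.5 * (v[i - 1] + v[i + 1])

      # Vectorizable form: every element depends only on the old array, so the
      # whole update can be issued as independent array operations.
      w = u.copy()
      w[1:-1] = 0.5 * (u[:-2] + u[2:])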

  8. Large-scale Advanced Prop-fan (LAP) high speed wind tunnel test report

    NASA Technical Reports Server (NTRS)

    Campbell, William A.; Wainauski, Harold S.; Arseneaux, Peter J.

    1988-01-01

    High Speed Wind Tunnel testing of the SR-7L Large Scale Advanced Prop-Fan (LAP) is reported. The LAP is a 2.74 meter (9.0 ft) diameter, 8-bladed tractor type rated for 4475 kW (6000 SHP) at 1698 rpm. It was designed and built by Hamilton Standard under contract to the NASA Lewis Research Center. The LAP employs thin swept blades to provide efficient propulsion at flight speeds up to Mach .85. Testing was conducted in the ONERA S1-MA Atmospheric Wind Tunnel in Modane, France. The test objectives were to confirm that the LAP is free from high speed classical flutter, determine the structural and aerodynamic response to angular inflow, measure blade surface pressures (static and dynamic) and evaluate the aerodynamic performance at various blade angles, rotational speeds and Mach numbers. The measured structural and aerodynamic performance of the LAP correlated well with analytical predictions thereby providing confidence in the computer prediction codes used for the design. There were no signs of classical flutter throughout all phases of the test up to and including the 0.84 maximum Mach number achieved. Steady and unsteady blade surface pressures were successfully measured for a wide range of Mach numbers, inflow angles, rotational speeds and blade angles. No barriers were discovered that would prevent proceeding with the PTA (Prop-Fan Test Assessment) Flight Test Program scheduled for early 1987.

  9. Simulation of a combined-cycle engine

    NASA Technical Reports Server (NTRS)

    Vangerpen, Jon

    1991-01-01

    A FORTRAN computer program was developed to simulate the performance of combined-cycle engines. These engines combine features of both gas turbines and reciprocating engines. The computer program can simulate both design point and off-design operation. Widely varying engine configurations can be evaluated for their power, performance, and efficiency as well as the influence of altitude and air speed. Although the program was developed to simulate aircraft engines, it can be used with equal success for stationary and automotive applications.

  10. Computations of non-reacting and reacting viscous blunt body flows, volume 2

    NASA Technical Reports Server (NTRS)

    Li, C. P.

    1973-01-01

    Computer programs for calculating the flow distribution in the nose region of a blunt body at arbitrary speed and altitude are discussed. The programs differ from each other in their ability to consider either thin shock or thick shock conditions and in the use of either ideal, equilibrium air, or nonequilibrium air chemistry. The application of the programs to analyzing the flow distribution around the nose of the shuttle orbiter during reentry is reported.

  11. NASA high performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1993-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects as well as summaries of individual research and development programs within each project.

  12. A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing

    NASA Astrophysics Data System (ADS)

    Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai

    2015-11-01

    High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, thus ultra-high speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high frequency illumination and the bucket detector needs nanosecond trigger precision, so the development of the synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high precision synchronization technique. The resulting system is around 14 times faster than the state of the art and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
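
    For readers new to the technique, the following Python sketch shows only the correlation arithmetic of computational ghost imaging, with random binary patterns and a simulated bucket detector; it does not model the paper's synchronization scheme or hardware, and all sizes are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      scene = np.zeros((16, 16)); scene[4:12, 6:10] = 1.0    # unknown object

      n = 4000
      patterns = rng.integers(0, 2, size=(n, 16, 16)).astype(float)
      bucket = (patterns * scene).sum(axis=(1, 2))           # photodiode readings

      # Ghost image: <B * P> - <B><P>, correlating bucket values with patterns.
      g = (bucket[:, None, None] * patterns).mean(axis=0) \
          - bucket.mean() * patterns.mean(axis=0)
      print(g.max(), g.min())   # large values where the object transmits light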

  13. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  14. Information Technology as the Paradigm High-Speed Management Support Tool: The Uses of Computer Mediated Communication, Virtual Realism, and Telepresence.

    ERIC Educational Resources Information Center

    Newby, Gregory B.

    Information technologies such as computer mediated communication (CMC), virtual reality, and telepresence can provide the communication flow required by high-speed management techniques that high-technology industries have adopted in response to changes in the climate of competition. Intra-corporate CMC might be used for a variety of purposes…

  15. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; home directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory, and all nodes are connected to the high-speed InfiniBand network.

  16. Tse computers. [ultrahigh speed optical processing for two dimensional binary image

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III

    1977-01-01

    An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.

  17. Worldwide Report, Arms Control.

    DTIC Science & Technology

    1985-07-30

    reports that a two-million watt laser will soon be tested at the missile testing range of White Sands, New Mexico, in accordance with the Pentagon's plan...European program dubbed "Eureka." The consortium will develop research in the area of software for high-speed computers, radar, electro-optics and...Excerpts] The PUERTO RICO LIBRE journal, published in New York by the Committee in Solidarity With Puerto Rico, has published an article in which the

  18. Data Acquisition Systems

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Technology developed during a joint research program with Langley and Kinetic Systems Corporation led to Kinetic Systems' production of a high speed Computer Automated Measurement and Control (CAMAC) data acquisition system. The study, which involved the use of CAMAC equipment applied to flight simulation, significantly improved the company's technical capability and produced new applications. With Digital Equipment Corporation, Kinetic Systems is marketing the system to government and private companies for flight simulation, fusion research, turbine testing, steelmaking, etc.

  19. Longitudinal Study of the Programs and the Organization of a Division of the Corps of Engineers.

    DTIC Science & Technology

    1984-05-01

    period to another as well as powerful high speed computers to expedite the analysis. Also, the abundance of completed studies of this type can be...and municipal water supply, irrigation, flood damage prevention, recreation, hydroelectric power generation and conservation of natural resources. The...inputs into outputs, they distribute the outputs, and they provide direct support to the other three functions. Emphasis is placed on the power of

  20. Analyses of track shift under high-speed vehicle-track interaction : safety of high speed ground transportation systems

    DOT National Transportation Integrated Search

    1997-06-01

    This report describes analysis tools to predict track shift under high-speed vehicle-track interaction. The analysis approach is based on two fundamental models developed (as part of this research); the first model computes the track lateral residua...

  1. Turbofan noise generation. Volume 2: Computer programs

    NASA Technical Reports Server (NTRS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-01-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  2. Turbofan noise generation. Volume 2: Computer programs

    NASA Astrophysics Data System (ADS)

    Ventres, C. S.; Theobald, M. A.; Mark, W. D.

    1982-07-01

    The use of a package of computer programs developed to calculate the in-duct acoustic modes excited by a fan/stator stage operating at subsonic tip speed is described. The following three noise source mechanisms are included: (1) sound generated by the rotor blades interacting with turbulence ingested into, or generated within, the inlet duct; (2) sound generated by the stator vanes interacting with the turbulent wakes of the rotor blades; and (3) sound generated by the stator vanes interacting with the velocity deficits in the mean wakes of the rotor blades. The computations for three different noise mechanisms are coded as three separate computer program packages. The computer codes are described by means of block diagrams, tables of data and variables, and example program executions; FORTRAN listings are included.

  3. Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.

    1992-01-01

    The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.

  4. Using MathCad to Evaluate Exact Integral Formulations of Spacecraft Orbital Heats for Primitive Surfaces at Any Orientation

    NASA Technical Reports Server (NTRS)

    Pinckney, John

    2010-01-01

    With the advent of high-speed computing, Monte Carlo ray-tracing techniques have become the preferred method for evaluating spacecraft orbital heats. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times, and high purchase cost. A general orbital heating integral is presented here that is accurate, fast, and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand, and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and spot checking Monte Carlo results.

  5. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities, and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  6. KSC ice/frost/debris assessment for space shuttle mission STS-29R

    NASA Technical Reports Server (NTRS)

    Stevenson, Charles G.; Katnik, Gregory N.; Higginbotham, Scott A.

    1989-01-01

    An ice/frost/debris assessment was conducted for Space Shuttle Mission STS-29R. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/frost conditions on the external tank are assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by an on-pad visual inspection. High speed photography is analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage. The ice/frost/debris conditions of Mission STS-29R and their effect on the Space Shuttle Program are documented.

  7. Ice/frost/debris assessment for space shuttle mission STS-26R

    NASA Technical Reports Server (NTRS)

    Stevenson, Charles G.; Katnik, Gregory N.; Higginbotham, Scott A.

    1988-01-01

    An Ice/Frost/Debris Assessment was conducted for Space Shuttle Mission STS-26R. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/Frost conditions are assessed by use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by an on-pad visual inspection. High speed photography is viewed after launch to identify ice/debris sources and evaluate potential vehicle damage. The Ice/Frost/Debris conditions of Mission STS-26R and their effect on the Space Shuttle Program are documented.

  8. Ice/frost/debris assessment for space shuttle mission STS-27R, December 2, 1988

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.

    1989-01-01

    An Ice/Frost/Debris assessment was conducted for Space Shuttle Mission STS-27R. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/frost conditions are assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by an on-pad visual inspection. High speed photography is viewed after launch to identify ice/debris sources and evaluate potential vehicle damage. The Ice/Frost/Debris conditions of Mission STS-27R and their effect on the Space Shuttle Program are documented.

  9. MIDAS, prototype Multivariate Interactive Digital Analysis System, phase 1. Volume 1: System description

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.

    1974-01-01

    The MIDAS System is described as a third-generation fast multispectral recognition system able to keep pace with the large quantity and high rates of data acquisition from present and projected sensors. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turnaround time and significant gains in throughput. The hardware and software are described. The system contains a mini-computer to control the various high-speed processing elements in the data path, and a classifier which implements an all-digital prototype multivariate-Gaussian maximum likelihood decision algorithm operating at 200,000 pixels/sec. Sufficient hardware was developed to perform signature extraction from computer-compatible tapes, compute classifier coefficients, control the classifier operation, and diagnose operation.
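
    The decision rule at the heart of such a classifier is compact. Below is a schematic Python version of per-pixel multivariate-Gaussian maximum likelihood classification with invented class statistics; MIDAS implemented this rule in dedicated digital hardware rather than software, so this is an analogy, not its design.

      import numpy as np

      def ml_classify(pixels, means, covs):
          # pixels: (n, bands); means[k]: (bands,); covs[k]: (bands, bands)
          scores = []
          for mu, cov in zip(means, covs):
              inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
              d = pixels - mu
              # log-likelihood up to a constant: -0.5*(log|C| + d^T C^-1 d)
              scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d)))
          return np.argmax(scores, axis=0)   # most likely class per pixel

      rng = np.random.default_rng(2)
      means = [np.array([10.0, 20.0, 30.0, 40.0]), np.array([30.0, 10.0, 50.0, 20.0])]
      covs = [np.eye(4) * 4.0, np.eye(4) * 9.0]
      pix = rng.normal(means[1], 3.0, size=(5, 4))   # pixels drawn near class 1
      print(ml_classify(pix, means, covs))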

  10. An atlas of monthly mean distributions of GEOSAT sea surface height, SSMI surface wind speed, AVHRR/2 sea surface temperature, and ECMWF surface wind components during 1988

    NASA Technical Reports Server (NTRS)

    Halpern, D.; Zlotnicki, V.; Newman, J.; Brown, O.; Wentz, F.

    1991-01-01

    Monthly mean global distributions for 1988 are presented with a common color scale and geographical map. Distributions are included for sea surface height variation estimated from GEOSAT; surface wind speed estimated from the Special Sensor Microwave Imager on the Defense Meteorological Satellite Program spacecraft; sea surface temperature estimated from the Advanced Very High Resolution Radiometer on NOAA spacecraft; and the Cartesian components of the 10m height wind vector computed by the European Center for Medium Range Weather Forecasting. Charts of monthly mean value, sampling distribution, and standard deviation value are displayed. Annual mean distributions are displayed.

  11. Arithmetic 400. A Computer Educational Program.

    ERIC Educational Resources Information Center

    Firestein, Laurie

    "ARITHMETIC 400" is the first of the next generation of educational programs designed to encourage thinking about arithmetic problems. Presented in video game format, performance is a measure of correctness, speed, accuracy, and fortune as well. Play presents a challenge to individuals at various skill levels. The program, run on an Apple…

  12. Computing at the speed limit (supercomputers)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhard, R.

    1982-07-01

    The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.

  13. Aerodynamic optimization of aircraft wings using a coupled VLM-2.5D RANS approach

    NASA Astrophysics Data System (ADS)

    Parenteau, Matthieu

    The design process of transonic civil aircraft is complex and requires strong governance to manage the various program development phases. There is a need in the community for numerical models in all disciplines that span the conceptual, preliminary, and detail design phases in a seamless fashion, so that choices made in each phase remain consistent with each other. The objective of this work is to develop an aerodynamic model suitable for conceptual multidisciplinary design optimization with low computational cost and sufficient fidelity to explore a large design space in the transonic and high-lift regimes. The physics-based reduced-order model is based on the inviscid Vortex Lattice Method (VLM), selected for its low computation time. Viscous effects are modeled with two-dimensional high-fidelity RANS calculations at various sections along the span and incorporated as an angle-of-attack correction inside the VLM. The viscous sectional data are calculated with infinite swept-wing conditions so that viscous crossflow effects are included, giving more accurate evaluations of the maximum lift coefficient and spanload. These viscous corrections are coupled through a modified alpha-coupling method for 2.5D RANS sectional data, stabilized in the post-stall region with artificial dissipation. The fidelity of the method is verified against 3D RANS flow solver solutions on the Bombardier Research Wing (BRW). Clean and high-lift configurations are investigated. The overall results show impressive precision of the VLM/2.5D RANS approach compared to 3D RANS solutions, with compute times on the order of seconds on a standard desktop computer. Finally, the aerodynamic solver is implemented in an optimization framework with a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer to explore the design space of aerodynamic wing planforms. Single-objective low-speed and high-speed optimizations are performed, along with composite-objective functions for combined low-speed and high-speed optimizations with high-lift configurations as well. Moreover, the VLM/2.5D approach is capable of capturing stall-cell phenomena, and this characteristic is used to define a new spanwise stall criterion to be introduced as an optimization constraint. The work concludes on the limitations of the method and possible avenues for further research.
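
    To convey the coupling idea in miniature, the Python sketch below applies an angle-of-attack correction at a single section until a linear "inviscid" lift matches a synthetic saturating viscous polar; the real method iterates such corrections at many spanwise stations inside the VLM using 2.5D RANS data, so every quantity here is a stand-in.

      import numpy as np

      a0 = 2.0 * np.pi                                 # inviscid lift slope (per rad)
      cl_visc = lambda a: 1.4 * np.tanh(a0 * a / 1.4)  # invented 2D viscous polar

      def coupled_cl(alpha, relax=0.3, iters=200):
          d = 0.0                                      # angle-of-attack correction
          for _ in range(iters):
              cl_inv = a0 * (alpha - d)                # "VLM" lift at corrected alpha
              # Drive the inviscid lift toward the viscous polar value;
              # under-relaxation keeps the fixed-point iteration stable.
              d += relax * (cl_inv - cl_visc(alpha)) / a0
          return cl_inv

      for deg in (4.0, 10.0, 16.0):
          a = np.radians(deg)
          print(deg, coupled_cl(a), cl_visc(a))        # coupled value tracks the polar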

  14. Civil propulsion technology for the next twenty-five years

    NASA Technical Reports Server (NTRS)

    Rosen, Robert; Facey, John R.

    1987-01-01

    The next twenty-five years will see major advances in civil propulsion technology that will result in completely new aircraft systems for domestic, international, commuter and high-speed transports. These aircraft will include advanced aerodynamic, structural, and avionic technologies resulting in major new system capabilities and economic improvements. Propulsion technologies will include high-speed turboprops in the near term, very high bypass ratio turbofans, high efficiency small engines and advanced cycles utilizing high temperature materials for high-speed propulsion. Key fundamental enabling technologies include increased temperature capability and advanced design methods. Increased temperature capability will be based on improved composite materials such as metal matrix, intermetallics, ceramics, and carbon/carbon as well as advanced heat transfer techniques. Advanced design methods will make use of advances in internal computational fluid mechanics, reacting flow computation, computational structural mechanics and computational chemistry. The combination of advanced enabling technologies, new propulsion concepts and advanced control approaches will provide major improvements in civil aircraft.

  15. MATLAB implementation of a dynamic clamp with bandwidth >125 kHz capable of generating INa at 37°C

    PubMed Central

    Clausen, Chris; Valiunas, Virginijus; Brink, Peter R.; Cohen, Ira S.

    2012-01-01

    We describe the construction of a dynamic clamp with bandwidth >125 kHz that utilizes a high performance, yet low cost, standard home/office PC interfaced with a high-speed (16 bit) data acquisition module. High bandwidth is achieved by exploiting recently available software advances (code-generation technology, optimized real-time kernel). Dynamic-clamp programs are constructed using Simulink, a visual programming language. Blocks for computation of membrane currents are written in the high-level MATLAB language; no programming in C is required. The instrument can be used in single- or dual-cell configurations, with the capability to modify programs while experiments are in progress. We describe an algorithm for computing the fast transient Na+ current (INa) in real time, and test its accuracy and stability using rate constants appropriate for 37°C. We then construct a program capable of supplying three currents to a cell preparation: INa, the hyperpolarizing-activated inward pacemaker current (If), and an inward-rectifier K+ current (IK1). The program corrects for the IR drop due to electrode current flow, and also records all voltages and currents. We tested this program on dual patch-clamped HEK293 cells where the dynamic clamp controls a current-clamp amplifier and a voltage-clamp amplifier controls membrane potential, and current-clamped HEK293 cells where the dynamic clamp produces spontaneous pacing behavior exhibiting Na+ spikes in otherwise passive cells. PMID:23224681
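
    As an illustration of the per-step current computation such a dynamic clamp performs, here is a minimal Python sketch using textbook Hodgkin-Huxley-style INa gating; the paper's 37°C rate constants, Simulink blocks, and real-time kernel are not reproduced, and all values below are generic.

      import numpy as np

      g_na, e_na = 120.0, 50.0        # max conductance (mS/cm^2), reversal (mV)

      def rates(v):
          # Classic squid-axon style rate constants (1/ms); illustrative only.
          am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
          bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
          ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
          bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
          return am, bm, ah, bh

      def ina_step(v, m, h, dt_ms):
          # One forward-Euler gate update plus the current evaluation: the work
          # a dynamic clamp must finish within every sample period.
          am, bm, ah, bh = rates(v)
          m += dt_ms * (am * (1.0 - m) - bm * m)
          h += dt_ms * (ah * (1.0 - h) - bh * h)
          return g_na * m**3 * h * (v - e_na), m, h

      v, m, h = -20.0, 0.05, 0.6      # a depolarized test potential
      for _ in range(100):
          i_na, m, h = ina_step(v, m, h, 0.008)   # 8 us step ~ a >125 kHz rate
      print(i_na)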

  16. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  17. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  18. Facilities | Integrated Energy Solutions | NREL

    Science.gov Websites

    High-Performance Computing Data Center: High-performance computing facilities at NREL provide high-speed computation for developing the strategies needed to optimize our entire energy system.

  19. Computer Program for Analysis of High Speed, Single Row, Angular Contact, Spherical Roller Bearing, SASHBEAN. Volume 2: Mathematical Formulation and Analysis

    DTIC Science & Technology

    1993-09-01

    conformity of roller and raceway spherical crowns, while lending the bearing its self-aligning ability and high load capacity, also results in relative sliding...thus in the negative direction. A positive axial load, acting along the X-axis, is also shown in Figure 1. Due to the self-aligning ability of a...

  20. Fault tolerant computer control for a Maglev transportation system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  1. Early MIMD experience on the CRAY X-MP

    NASA Astrophysics Data System (ADS)

    Rhoades, Clifford E.; Stevens, K. G.

    1985-07-01

    This paper describes some early experience with converting four physics simulation programs to the CRAY X-MP, a current Multiple Instruction, Multiple Data (MIMD) computer consisting of two processors, each with an architecture similar to that of the CRAY-1. As a multi-processor, the CRAY X-MP together with the high speed Solid-state Storage Device (SSD) is an ideal machine upon which to study MIMD algorithms for solving the equations of mathematical physics, because it is fast enough to run real problems. The computer programs used in this study are all FORTRAN versions of original production codes. They range in sophistication from a one-dimensional numerical simulation of collisionless plasma, to a two-dimensional hydrodynamics code with heat flow, to a couple of three-dimensional fluid dynamics codes with varying degrees of viscous modeling. Early research with a dual processor configuration has shown speed-ups ranging from 1.55 to 1.98. It has been observed that a few simple extensions to FORTRAN allow a typical programmer to achieve a remarkable level of efficiency. These extensions involve the concept of memory local to a concurrent subprogram and memory common to all concurrent subprograms.
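
    The local-versus-common memory distinction maps naturally onto modern threading. The Python sketch below is an analogy only (the original extensions were to FORTRAN): each worker accumulates into memory local to its concurrent subprogram, then merges into common memory under a lock.

      import threading

      common_total = 0.0
      lock = threading.Lock()

      def worker(chunk):
          local_sum = 0.0              # "local" memory: private to this worker
          for x in chunk:
              local_sum += x * x
          global common_total
          with lock:                   # "common" memory: shared by all workers
              common_total += local_sum

      data = list(range(10_000))
      threads = [threading.Thread(target=worker, args=(data[i::2],)) for i in range(2)]
      for t in threads: t.start()
      for t in threads: t.join()
      print(common_total)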

  2. High speed acquisition of multiparameter data using a Macintosh IIcx

    NASA Astrophysics Data System (ADS)

    Berno, Anthony; Vogel, John S.; Caffee, Marc

    1991-05-01

    Accelerator mass spectrometry systems based on > 3 MV tandem accelerators often use multianode ionization detectors and/or time-of-flight detectors to identify individual isotopes through multiparameter analysis. A Macintosh IIcx has been programmed to collect AMS data from a CAMAC-implemented analyzer and to display the histogrammed individual parameters and a double-parameter array. The computer-CAMAC connection is through a NuBus to CAMAC dataway interface which allows direct addressing to all functions and registers in the crate. Asynchronous data from the rare isotope are sorted into a CAMAC memory module by a list sequence controller. Isotope switching is controlled by a one-cycle timing generator. A rate-dependent amount of time is used to transfer the data from the memory module at the end of each timing cycle. The present configuration uses 10-75 ms for rates of 500-10000 cps. Parameter analysis occurs during the rest of the 520 ms data collection cycle. Completed measurements of the isotope concentrations of each sample are written to files which are compatible with standard Macintosh databases or other processing programs. The system is inexpensive and operates at speeds comparable to those obtainable using larger computers.

  3. Marshal Wrubel and the Electronic Computer as an Astronomical Instrument

    NASA Astrophysics Data System (ADS)

    Mutschlecner, J. P.; Olsen, K. H.

    1998-05-01

    In 1960, Marshal H. Wrubel, professor of astrophysics at Indiana University, published an influential review paper under the title, "The Electronic Computer as an Astronomical Instrument." This essay pointed out the enormous potential of the electronic computer as an instrument of observational and theoretical research in astronomy, illustrated programming concepts, and made specific recommendations for the increased use of computers in astronomy. He noted that, with a few scattered exceptions, computer use by the astronomical community had heretofore been "timid and sporadic." This situation was to improve dramatically in the next few years. By the late 1950s, general-purpose, high-speed, "mainframe" computers were just emerging from the experimental, developmental stage, but few were affordable by or available to academic and research institutions not closely associated with large industrial or national defense programs. Yet by 1960 Wrubel had spent a decade actively pioneering and promoting the imaginative application of electronic computation within the astronomical community. Astronomy upper-level undergraduate and graduate students at Indiana were introduced to computing, and Ph.D. candidates who he supervised applied computer techniques to problems in theoretical astrophysics. He wrote an early textbook on programming, taught programming classes, and helped establish and direct the Research Computing Center at Indiana, later named the Wrubel Computing Center in his honor. He and his students created a variety of algorithms and subroutines and exchanged these throughout the astronomical community by distributing the Astronomical Computation News Letter. Nationally as well as internationally, Wrubel actively cooperated with other groups interested in computing applications for theoretical astrophysics, often through his position as secretary of the IAU commission on Stellar Constitution.

  4. Program finds centrifugal compressor operating point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campos, M.C.M.M.; Rodrigues, P.S.B.

    1990-09-01

    This article presents the Scop program, a computational procedure developed in Fortran 77 to find the operating point of centrifugal compressors starting from performance curves. Characteristic or performance curves are traditionally employed by manufacturers to inform users about turbocompressor behavior. Usually, these curves have polytropic head, H, and the corresponding polytropic efficiency, η, plus rotation speed, N, and inlet volumetric flowrate, Q, as parameters. Two families of curves can be identified in such a plot: one provides head-flow relationships for several speeds, and the other consists of isoefficiency curves.
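
    The abstract does not spell out Scop's numerical procedure, but the classical way to find an operating point is to intersect the manufacturer's head-flow curve at the running speed with the system resistance curve. A small sketch under that assumption (the curve data and resistance coefficient are hypothetical):

        import numpy as np

        # Hypothetical performance-curve data at one rotation speed N:
        Q = np.array([1.0, 1.5, 2.0, 2.5, 3.0])      # inlet flow, m^3/s
        H = np.array([42.0, 40.5, 37.0, 31.0, 22.0])  # polytropic head, kJ/kg

        k = 4.0  # assumed system resistance coefficient, H_sys = k*Q^2

        def f(q):
            # Operating point where the head curve meets the system curve.
            return np.interp(q, Q, H) - k * q * q

        lo, hi = Q[0], Q[-1]          # f(lo) > 0 > f(hi) brackets the root
        for _ in range(60):           # bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        print(f"Q* = {mid:.3f}, H* = {np.interp(mid, Q, H):.2f}")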

  5. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time-consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need for examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
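
    The report does not give the network construction, but one common way to obtain a feedforward approximator "in a single iteration" is to fix a random hidden layer and solve for the output weights as a linear least-squares problem. A sketch under that assumption (the layer width and target function are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-1.0, 1.0, 200)[:, None]
        y = np.sin(np.pi * x).ravel()             # target function

        # Random hidden layer; only the output weights are solved for,
        # so "training" is a single linear least-squares solve.
        W = rng.normal(size=(1, 50))
        b = rng.normal(size=50)
        Hmat = np.tanh(x @ W + b)                 # hidden activations
        w_out, *_ = np.linalg.lstsq(Hmat, y, rcond=None)

        err = np.max(np.abs(Hmat @ w_out - y))
        print(f"max abs error: {err:.2e}")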

  6. 2008 13th Expeditionary Warfare Conference

    DTIC Science & Technology

    2008-10-23

    Ships 6 Joint High Speed Vessel (JHSV) • Program Capability – High speed lift ship capable of transporting cargo and personnel across intra... high-speed aluminum trimaran hullform that enables the ship to reach sustainable speeds of over 40 knots and range in excess of 3,500 nautical miles...advancing concepts for a very high-speed, manned submersible,

  7. MIDAS, prototype Multivariate Interactive Digital Analysis System for large area earth resources surveys. Volume 1: System description

    NASA Technical Reports Server (NTRS)

    Christenson, D.; Gordon, M.; Kistler, R.; Kriegler, F.; Lampert, S.; Marshall, R.; Mclaughlin, R.

    1977-01-01

    A third-generation, fast, low cost, multispectral recognition system (MIDAS) able to keep pace with the large quantity and high rates of data acquisition from large regions with present and projected sensors is described. The program can process a complete ERTS frame in forty seconds and provide a color map of sixteen constituent categories in a few minutes. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in the overall program are described. The system contains a midi-computer to control the various high speed processing elements in the data path, a preprocessor to condition data, and a classifier which implements an all-digital prototype multivariate Gaussian maximum likelihood or Bayesian decision algorithm. Sufficient software was developed to perform signature extraction, control the preprocessor, compute classifier coefficients, control the classifier operation, operate the color display and printer, and diagnose operation.
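
    A multivariate Gaussian maximum-likelihood classifier of the kind MIDAS implements in hardware can be sketched in a few lines; the class statistics below are hypothetical stand-ins for trained signatures:

        import numpy as np

        def gaussian_ml_classify(pixels, means, covs, priors=None):
            """Assign each pixel to the class with the largest Gaussian
            log-likelihood (optionally Bayesian, with class priors)."""
            scores = np.empty((pixels.shape[0], len(means)))
            for k in range(len(means)):
                d = pixels - means[k]
                inv = np.linalg.inv(covs[k])
                maha = np.einsum("ij,jk,ik->i", d, inv, d)  # Mahalanobis^2
                scores[:, k] = -0.5 * (maha + np.log(np.linalg.det(covs[k])))
                if priors is not None:
                    scores[:, k] += np.log(priors[k])
            return scores.argmax(axis=1)

        # Two hypothetical 4-band spectral classes:
        rng = np.random.default_rng(1)
        m = [np.zeros(4), np.full(4, 3.0)]
        c = [np.eye(4), 2.0 * np.eye(4)]
        pix = rng.normal(size=(10, 4)) + 3.0
        print(gaussian_ml_classify(pix, m, c))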

  8. Unified aeroacoustics analysis for high speed turboprop aerodynamics and noise. Volume 2: Development of theory for wing shielding

    NASA Technical Reports Server (NTRS)

    Amiet, R. K.

    1991-01-01

    A unified theory for aerodynamics and noise of advanced turboprops is presented. The theory and a computer code developed for evaluation of the shielding benefits that might be expected from an aircraft wing in a wing-mounted propeller installation are presented. Several computed directivity patterns are presented to demonstrate the theory. Recently, with the advent of the concept of using the wing of an aircraft for noise shielding, the case of diffraction by a surface in a flow has been given attention. The present analysis is based on the case of diffraction with no flow. By combining a Galilean and a Lorentz transform, the wave equation with a mean flow can be reduced to the ordinary wave equation. Allowance is also made in the analysis for the case of a swept wing; the same combination of Galilean and Lorentz transforms leads to a problem with no flow but a different sweep. The solution procedures for the cases of leading and trailing edges are basically the same. Two normalizations of the solution are given by the computer program. FORTRAN computer programs are presented with detailed documentation. The output from these programs compares favorably with the results of other investigators.
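
    One standard form of the combined Galilean-Lorentz transformation (the program's exact variables may differ): for uniform flow at Mach number M = U/c along x, the convected wave equation

        (1 - M^2) p_xx + p_yy + p_zz - (2M/c) p_xt - (1/c^2) p_tt = 0

    reduces to the ordinary wave equation p_ξξ + p_ηη + p_ζζ = (1/c^2) p_ττ under the change of variables

        β = (1 - M^2)^(1/2),   ξ = x/β,   η = y,   ζ = z,   τ = β t + (M/(c β)) x.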

  9. High-Speed Photography with Computer Control.

    ERIC Educational Resources Information Center

    Winters, Loren M.

    1991-01-01

    Describes the use of a microcomputer as an intervalometer for the control and timing of several flash units to photograph high-speed events. Applies this technology to study the oscillations of a stretched rubber band, the deceleration of high-speed projectiles in water, the splashes of milk drops, and the bursts of popcorn kernels. (MDH)

  10. High Performance Computing Meets Energy Efficiency - Continuum Magazine

    Science.gov Websites

    [Image caption: wind turbine simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL.] The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data

  11. GCF Mark IV development

    NASA Technical Reports Server (NTRS)

    Mortensen, L. O.

    1982-01-01

    The Mark IV ground communication facility (GCF), as it is implemented to support the network consolidation program, is reviewed. Changes in the GCF are made in the area of increased capacity. Common carrier circuits are the medium for data transfer. The message multiplexing in the Mark IV era differs from that of the Mark III era in that all multiplexing is done in a GCF computer under GCF software control, similar to the multiplexing currently done in the high speed data subsystem.

  12. High-speed pulse-shape generator, pulse multiplexer

    DOEpatents

    Burkhart, Scott C.

    2002-01-01

    The invention combines arbitrary amplitude high-speed pulses for precision pulse shaping for the National Ignition Facility (NIF). The circuitry combines arbitrary-height pulses which are generated by replicating scaled versions of a trigger pulse and summing them, delayed in time, on a pulse line. The combined electrical pulses are connected to an electro-optic modulator which modulates a laser beam. The circuit can also be adapted to combine multiple channels of high speed data into a single train of electrical pulses which generates the optical pulses for very high speed optical communication. The invention has application in laser pulse shaping for inertial confinement fusion, in optical data links for computers and telecommunications, and in laser pulse shaping for atomic excitation studies. The invention can be used to effect at least a 10× increase in all-fiber communication lines. It allows a greatly increased data transfer rate between high-performance computers. The invention is inexpensive enough to bring high-speed video and data services to homes through a super modem.
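
    The pulse-combining idea (replicate a trigger pulse, scale each replica, delay it, and sum on a pulse line) is easy to state numerically. A sketch with invented amplitudes, delays, and sample rate; the patent's circuit does this in analog hardware, not software:

        import numpy as np

        fs = 40e9                          # sample rate, 40 GS/s (assumed)
        t = np.arange(0, 5e-9, 1 / fs)     # 5 ns record
        trigger = np.exp(-((t - 0.2e-9) / 50e-12) ** 2)  # model trigger pulse

        # Desired staircase shape: amplitude a applied at delay d.
        amps   = [0.2, 0.5, 1.0, 0.7]                   # arbitrary heights
        delays = [0.0, 0.5e-9, 1.0e-9, 1.5e-9]          # delay-line taps

        shaped = np.zeros_like(t)
        for a, d in zip(amps, delays):
            # Scaled, time-delayed replica summed onto the pulse line.
            shaped += a * np.interp(t - d, t, trigger, left=0.0)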

  13. Weight and cost estimating relationships for heavy lift airships

    NASA Technical Reports Server (NTRS)

    Gray, D. W.

    1979-01-01

    Weight and cost estimating relationships, including additional parameters that influence the cost and performance of heavy-lift airships (HLA), are discussed. Inputs to a closed-loop computer program, consisting of useful load, forward speed, lift module positive or negative thrust, and rotors and propellers, are examined. Detail is given to the HLA cost and weight program (HLACW), which computes component weights, vehicle size, buoyancy lift, rotor and propeller thrust, and engine horsepower. This program solves the problem of interrelating the different aerostat, rotor, engine, and propeller sizes. Six sets of 'default parameters' are left for the operator to change during each computer run, enabling slight data manipulation without altering the program.

  14. Computational Model for Impact-Resisting Critical Thickness of High-Speed Machine Outer Protective Plate

    NASA Astrophysics Data System (ADS)

    Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei

    2018-05-01

    A blade or grinding-wheel segment of a hypervelocity grinding machine may be damaged by an excessive spindle rotation rate and fly out; as a projectile, its speed may severely endanger nearby personnel. A critical thickness model for the protective plate of the high-speed machine is studied in this paper. For ease of analysis, the shapes of the possible impact objects flying from the high-speed machine are simplified into sharp-nose, ball-nose, and flat-nose models, whose front-end shapes represent point, line, and surface contact, respectively. Impact analysis based on the Johnson-Cook (J-C) model is performed for low-carbon steel plates of different thicknesses. A computational model for the critical thickness of the protective plate of a high-speed machine is established from the damage characteristics of the thin plate, relating plate thickness to the mass, shape, size, and impact speed of the impacting object. An air cannon was used for impact tests, and the model's accuracy was validated. This model can guide selection of the thickness of a single-layer outer protective plate of a high-speed machine.

  15. Parallelization of interpolation, solar radiation and water flow simulation modules in GRASS GIS using OpenMP

    NASA Astrophysics Data System (ADS)

    Hofierka, Jaroslav; Lacko, Michal; Zubal, Stanislav

    2017-10-01

    In this paper, we describe the parallelization of three complex and computationally intensive modules of GRASS GIS using the OpenMP application programming interface for multi-core computers. These include the v.surf.rst module for spatial interpolation, the r.sun module for solar radiation modeling and the r.sim.water module for water flow simulation. We briefly describe the functionality of the modules and the parallelization approaches used in them. Our approach includes the analysis of each module's functionality, identification of source code segments suitable for parallelization, and application of OpenMP parallelization code to create efficient threads processing the subtasks. We document the efficiency of the solutions using airborne laser scanning data representing the land surface in the test area and derived high-resolution digital terrain model grids. We discuss the performance speed-up and parallelization efficiency depending on the number of processor threads. The study showed a substantial increase in computation speeds on a standard multi-core computer while maintaining the accuracy of results in comparison with the output from the original modules. The presented parallelization approach showed the simplicity and efficiency of parallelizing open-source GRASS GIS modules using OpenMP, leading to increased performance of this geospatial software on standard multi-core computers.

  16. Computer program for definition of transonic axial-flow compressor blade rows

    NASA Technical Reports Server (NTRS)

    Crouse, J. E.

    1975-01-01

    The particular type of blade element used has two segments whose centerlines and surfaces are described by a constant change of angle with path distance on a cone. The program is the result of a rework of an earlier program and gives major gains in accuracy, reliability, and speed. It also covers more steps of the overall compressor design procedure.

  17. A large high vacuum, high pumping speed space simulation chamber for electric propulsion

    NASA Technical Reports Server (NTRS)

    Grisnik, Stanley P.; Parkes, James E.

    1994-01-01

    Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.

  18. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  19. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  20. Bringing MapReduce Closer To Data With Active Drives

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.

    2017-12-01

    Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance, the decrease in processor size and power requirement combined with the increase in data intensive computing has created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, have been proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk allowing for computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general purpose computing with comparable results to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping comparable compute speeds as Hadoop.
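
    The subsetting and gridding algorithms mentioned fit the MapReduce model naturally: the map phase runs on each drive and emits only in-region records, so little data leaves the drive. A toy single-process sketch (the region bounds, grid resolution, and records are invented):

        from collections import defaultdict

        def map_phase(record):
            # Emit (grid cell, value) pairs; subsetting means emitting
            # nothing for records outside the region of interest.
            lat, lon, val = record
            if 30.0 <= lat <= 50.0 and -110.0 <= lon <= -90.0:
                yield (round(lat), round(lon)), val

        def reduce_phase(pairs):
            acc = defaultdict(lambda: [0.0, 0])
            for key, v in pairs:
                acc[key][0] += v
                acc[key][1] += 1
            return {k: s / n for k, (s, n) in acc.items()}  # gridded mean

        records = [(42.1, -101.3, 1.0), (43.0, -100.8, 3.0), (10.0, 0.0, 9.9)]
        pairs = (kv for rec in records for kv in map_phase(rec))
        print(reduce_phase(pairs))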

  1. 1999 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Hahne, David E. (Editor)

    1999-01-01

    The High-Speed Research Program sponsored the NASA High-Speed Research Program Aerodynamic Performance Review on February 8-12, 1999 in Anaheim, California. The review was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in areas of: Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization) and High-Lift. The review objectives were to: (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. The HSR AP Technical Review was held simultaneously with the annual review of the following airframe technology areas: Materials and Structures, Environmental Impact, Flight Deck, and Technology Integration. Thus, a fourth objective of the Review was to promote synergy between the Aerodynamic Performance technology area and the other technology areas within the airframe element of the HSR Program. This Volume 2/Part 1 publication presents the High-Lift Configuration Development session.

  2. National Special Education Alliance.

    ERIC Educational Resources Information Center

    Pressman, Harvey

    1987-01-01

    The article describes the National Special Education Alliance, a network of parent-led organizations seeking to speed the delivery of computer technology to the disabled. Discussed are program origins, starting a local center, charter members of the alliance, benefits of Alliance membership, and the Alliance's relationship with Apple computer. (DB)

  3. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.

  4. Design Tools for Evaluating Multiprocessor Programs

    DTIC Science & Technology

    1976-07-01

    than large uniprocessing machines, and 2. economies of scale in manufacturing. Perhaps the most compelling reason (possibly a consequence of the...speed, redundancy, (in)efficiency, resource utilization, and economies of the components. [Browne 73, Lehman 66] 6. How can the system be scheduled...measures are interesting about the computation? Some may be: speed, redundancy, (in)efficiency, resource utilization, and economies of the components

  5. Experimental and analytical investigations to improve low-speed performance and stability and control characteristics of supersonic cruise fighter vehicles

    NASA Technical Reports Server (NTRS)

    Graham, A. B.

    1977-01-01

    Small- and large-scale models of supersonic cruise fighter vehicles were used to determine the effectiveness of airframe/propulsion integration concepts for improved low-speed performance and stability and control characteristics. Computer programs were used for engine/airframe sizing studies to yield optimum vehicle performance.

  6. Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds

    NASA Technical Reports Server (NTRS)

    Boppe, Charles W.

    1987-01-01

    A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides high boundary point density/cost ratio. High resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data using four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.

  7. 49 CFR 213.305 - Designation of qualified individuals; general qualifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... college level engineering program, supplemented by special on the job training emphasizing the techniques... of high speed track provided by the employer or by a college level engineering program, supplemented... maintenance of high speed track provided by the employer or by a college level engineering program...

  8. 49 CFR 213.305 - Designation of qualified individuals; general qualifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... college level engineering program, supplemented by special on the job training emphasizing the techniques... of high speed track provided by the employer or by a college level engineering program, supplemented... maintenance of high speed track provided by the employer or by a college level engineering program...

  9. 49 CFR 213.305 - Designation of qualified individuals; general qualifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... college level engineering program, supplemented by special on the job training emphasizing the techniques... of high speed track provided by the employer or by a college level engineering program, supplemented... maintenance of high speed track provided by the employer or by a college level engineering program...

  10. Corrigendum to "Numerical dissipation control in high order shock-capturing schemes for LES of low speed flows" [J. Comput. Phys. 307 (2016) 189-202]

    NASA Astrophysics Data System (ADS)

    Kotov, D. V.; Yee, H. C.; Wray, A. A.; Sjögreen, Björn; Kritsuk, A. G.

    2018-01-01

    The authors regret the typographic errors made in equation (4), and the phrase missing after equation (4), in the article "Numerical dissipation control in high order shock-capturing schemes for LES of low speed flows" [J. Comput. Phys. 307 (2016) 189-202].

  11. Computational fluid dynamics research

    NASA Technical Reports Server (NTRS)

    Chandra, Suresh; Jones, Kenneth; Hassan, Hassan; Mcrae, David Scott

    1992-01-01

    The focus of research in the computational fluid dynamics (CFD) area is two fold: (1) to develop new approaches for turbulence modeling so that high speed compressible flows can be studied for applications to entry and re-entry flows; and (2) to perform research to improve CFD algorithm accuracy and efficiency for high speed flows. Research activities, faculty and student participation, publications, and financial information are outlined.

  12. Impact of VLSI/VHSIC on satellite on-board signal processing

    NASA Astrophysics Data System (ADS)

    Aanstoos, J. V.; Ruedger, W. H.; Snyder, W. E.; Kelly, W. L.

    Forecasted improvements in IC fabrication techniques, such as the use of X-ray lithography, are expected to yield submicron circuit feature sizes within the decade of the 1980s. As dimensions decrease, reliability, cost, speed, power consumption and density improvements will be realized which have a significant impact on the capabilities of onboard spacecraft signal processing functions. This will in turn result in increases of the intelligence that may be deployed on spaceborne remote sensing platforms. Among programs oriented toward such goals are the silicon-based Very High Speed Integrated Circuit (VHSIC) research sponsored by the U.S. Department of Defense, and efforts toward the development of GaAs devices which will compete with silicon VLSI technology for future applications. GaAs has an electron mobility which is five to six times that of silicon, and promises commensurate computation speed increases under low field conditions.

  13. DYNALIST II : A Computer Program for Stability and Dynamic Response Analysis of Rail Vehicle Systems : Volume 4. Revised User's Manual.

    DOT National Transportation Integrated Search

    1976-07-01

    The Federal Railroad Administration (FRA) is sponsoring research, development, and demonstration programs to provide improved safety, performance, speed, reliability, and maintainability of rail transportation systems at reduced life-cycle costs. A m...

  14. Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jian; Hamidouche, Khaled; Zheng, Jie

    2015-08-05

    Machine learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in large-scale environments with InfiniBand networks, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varied numbers of cores show that our design can maintain good scalability.
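
    For reference, the serial kernel that such designs distribute: k-NN classification is a distance computation plus a majority vote. The sketch below is plain NumPy, not the paper's MPI+OpenSHMEM code; the data and k are arbitrary:

        import numpy as np

        def knn_predict(train_x, train_y, query, k=3):
            # Squared distance from the query to every training point.
            d2 = ((train_x - query) ** 2).sum(axis=1)
            nearest = np.argpartition(d2, k)[:k]   # k smallest distances
            votes = np.bincount(train_y[nearest])
            return votes.argmax()                  # majority class

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 4))
        y = (X[:, 0] > 0).astype(int)
        print(knn_predict(X, y, rng.normal(size=4)))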

  15. A computer simulation of the transient response of a 4 cylinder Stirling engine with burner and air preheater in a vehicle

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1981-01-01

    A series of computer programs is presented, with full documentation, which simulates the transient behavior of a modern 4-cylinder Siemens-arrangement Stirling engine with burner and air preheater. Cold start, cranking, idling, acceleration through 3 gear changes, and steady-speed operation are simulated. Sample results and complete operating instructions are given. A full source code listing of all programs is included.

  16. 78 FR 28940 - Environmental Impact Statement for the Atlanta to Charlotte Portion of the Southeast High Speed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... the Atlanta to Charlotte Portion of the Southeast High Speed Rail Corridor AGENCY: Federal Rail... potential passenger rail improvements between Atlanta, GA and Charlotte, NC, along the Southeast High-Speed... federal High-Speed Intercity Passenger Rail (HSIPR) program and includes the development of a Passenger...

  17. An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment.

    PubMed

    Huang, Chien-Feng; Li, Hsu-Chih

    2017-01-01

    The advancement of information technology in financial applications nowadays has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in the highly dynamic environment where trading systems respond to others at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of activity in the financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim at developing feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks on the microscopic level. Our empirical results show that the proposed GA-based system is able to improve the accuracy of the prediction significantly for price movement, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications.

  18. Control Code for Bearingless Switched-Reluctance Motor

    NASA Technical Reports Server (NTRS)

    Morrison, Carlos R.

    2007-01-01

    A computer program has been devised for controlling a machine that is an integral combination of magnetic bearings and a switched-reluctance motor. The motor contains an eight-pole stator and a hybrid rotor, which has both (1) a circular lamination stack for levitation and (2) a six-pole lamination stack for rotation. The program computes drive and levitation currents for the stator windings with real-time feedback control. During normal operation, two of the four pairs of opposing stator poles (each pair at right angles to the other pair) levitate the rotor. The remaining two pairs of stator poles exert torque on the six-pole rotor lamination stack to produce rotation. This version is executable in a control-loop time of 40 μs on a Pentium (or equivalent) processor that operates at a clock speed of 400 MHz. The program can be expanded, by addition of logic blocks, to enable control of position along additional axes. The code enables adjustment of operational parameters (e.g., motor speed and stiffness, and damping parameters of magnetic bearings) through computer keyboard key presses.

  19. Optimisation of multiplet identifier processing on a PLAYSTATION® 3

    NASA Astrophysics Data System (ADS)

    Hattori, Masami; Mizuno, Takashi

    2010-02-01

    To enable high-performance computing (HPC) for applications with large datasets using a Sony® PLAYSTATION® 3 (PS3™) video game console, we configured a hybrid system consisting of a Windows® PC and a PS3™. To validate this system, we implemented the real-time multiplet identifier (RTMI) application, which identifies multiplets of microearthquakes in terms of the similarity of their waveforms. The cross-correlation computation, which is a core algorithm of the RTMI application, was optimised for the PS3™ platform, while the rest of the computation, including data input and output remained on the PC. With this configuration, the core part of the algorithm ran 69 times faster than the original program, accelerating total computation speed more than five times. As a result, the system processed up to 2100 total microseismic events, whereas the original implementation had a limit of 400 events. These results indicate that this system enables high-performance computing for large datasets using the PS3™, as long as data transfer time is negligible compared with computation time.
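
    The core algorithm, cross-correlation as a waveform-similarity measure, can be sketched briefly; this is a plain NumPy version, not the console-optimised kernel, and the test waveforms are synthetic:

        import numpy as np

        def max_normalized_xcorr(a, b):
            """Peak of the normalized cross-correlation of two waveforms;
            values near 1 indicate a multiplet (near-identical events)."""
            a = (a - a.mean()) / (a.std() * len(a))
            b = (b - b.mean()) / b.std()
            return np.correlate(a, b, mode="full").max()

        t = np.linspace(0.0, 1.0, 500)
        w1 = np.sin(40.0 * t) * np.exp(-3.0 * t)
        w2 = np.roll(w1, 25)                    # same event, shifted in time
        print(f"{max_normalized_xcorr(w1, w2):.3f}")   # close to 1.0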

  20. 1997 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 2; High Lift

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Aerodynamic Performance Workshop on February 25-28, 1997. The workshop was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High-Lift, Flight Controls, Supersonic Laminar Flow Control, and Sonic Boom Prediction. The workshop objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT Motion Simulator results were presented along with executive summaries for all the Aerodynamic Performance technology areas.

  1. Transient Simulation of Ram Accelerator Flowfields

    DTIC Science & Technology

    1993-01-01

    PROPULSIVE FLOWS WITH COMBUSTION CHEMISTRY, ADVANCED TURBULENCE MODELS... Reference: Dash, "Advanced Computational Models for Analyzing High-Speed...coupled, implicit manner. Near-wall effects have been dealt with via the low Reynolds number formulation of Chien, and the recent model of Rodi. High...July 1989. Dash, S.M., 'Advanced Computational Models for Analyzing High-Speed Propulsive Flowfields,' 1989 JANNAF Propulsion Meeting, CPIA Pub. 550

  2. Implementation of a High-Speed FPGA and DSP Based FFT Processor for Improving Strain Demodulation Performance in a Fiber-Optic-Based Sensing System

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    NASA's Aviation Safety and Security Program is pursuing research in on-board Structural Health Management (SHM) technologies for purposes of reducing or eliminating aircraft accidents due to system and component failures. Under this program, NASA Langley Research Center (LaRC) is developing a strain-based structural health-monitoring concept that incorporates a fiber optic-based measuring system for acquiring strain values. This fiber optic-based measuring system provides for the distribution of thousands of strain sensors embedded in a network of fiber optic cables. The resolution of strain value at each discrete sensor point requires a computationally demanding data reduction software process that, when hosted on a conventional processor, is not suitable for near real-time measurement. This report describes the development and integration of an alternative computing environment using dedicated computing hardware for performing the data reduction. Performance comparison between the existing and the hardware-based system is presented.
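
    The abstract does not detail the demodulation algorithm, but a representative per-sensor kernel of this kind is FFT peak-frequency estimation, the sort of computation that motivates FPGA/DSP offload. A sketch with an invented sample rate and test signal:

        import numpy as np

        fs = 10_000.0                              # sample rate, Hz (assumed)
        t = np.arange(2048) / fs
        signal = np.cos(2 * np.pi * 1234.5 * t)    # stand-in sensor signal

        spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        k = int(spec.argmax())                     # coarse peak bin
        # Parabolic interpolation between the peak bin and its neighbours
        # refines the estimate well below one bin of resolution.
        num = spec[k - 1] - spec[k + 1]
        den = spec[k - 1] - 2 * spec[k] + spec[k + 1]
        freq = (k + 0.5 * num / den) * fs / len(signal)
        print(f"{freq:.1f} Hz")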

  3. An improved computer program for calculating the theoretical performance parameters of a propeller type wind turbine. An appendix to the final report on feasibility of using wind power to pump irrigation water (Texas). [PROP Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barieau, R.E.

    1977-03-01

    The PROP program of Wilson and Lissaman has been modified by adding the Newton-Raphson method and a stepwise search method as options for the method of solution. In addition, an optimization method is included: twist angles, tip speed ratio, and the pitch angle may be varied to produce the maximum power coefficient. The computer program listing is presented along with sample input and output data. Further improvements to the program are discussed.
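
    As a reminder of the method added as a solution option, Newton-Raphson can locate the maximum of a power-coefficient curve by driving its derivative to zero. The curve below is a made-up stand-in for the program's blade-element computation:

        def cp(lam):
            # Hypothetical power coefficient versus tip speed ratio.
            return lam ** 2 * (7.0 - lam) / 100.0

        def deriv(f, x, h=1e-5):
            # Centered numerical derivative.
            return (f(x + h) - f(x - h)) / (2.0 * h)

        lam = 3.0                          # initial tip speed ratio guess
        for _ in range(20):
            g = deriv(cp, lam)             # maximum where dCp/dlam = 0
            step = g / deriv(lambda x: deriv(cp, x), lam)
            lam -= step                    # Newton-Raphson update
            if abs(step) < 1e-8:
                break
        print(lam, cp(lam))                # converges to lam = 14/3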

  4. Method for upgrading the performance at track transitions for high-speed service : next generation high-speed rail program

    DOT National Transportation Integrated Search

    2001-09-01

    High-speed trains in the speed range of 100 to 160 mph require tracks of nearly perfect geometry and mechanical uniformity, when subjected to moving wheel loads. Therefore, this report briefly describes the remedies being used by various railroads to...

  5. NASA High Performance Computing and Communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1994-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects, as well as summaries of early accomplishments and the significance, status, and plans for individual research and development programs within each project. Areas of emphasis include benchmarking, testbeds, software and simulation methods.

  6. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a computer vision camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
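
    A multi-scale Laplacian-of-Gaussian edge detector, the algorithm the camera implements in FPGA logic, looks like this in software form (the scales and test image are arbitrary):

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        rng = np.random.default_rng(0)
        image = rng.random((128, 128))        # stand-in for a camera frame

        # Convolve at several scales and mark sign changes (zero
        # crossings) of the response as candidate edges.
        for sigma in (1.0, 2.0, 4.0):
            log = gaussian_laplace(image, sigma)
            zc = np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
            print(sigma, int(zc.sum()), "horizontal zero crossings")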

  7. Analytical method for predicting the pressure distribution about a nacelle at transonic speeds

    NASA Technical Reports Server (NTRS)

    Keith, J. S.; Ferguson, D. R.; Merkle, C. L.; Heck, P. H.; Lahti, D. J.

    1973-01-01

    The formulation and development of a computer analysis for the calculation of streamlines and pressure distributions around two-dimensional (planar and axisymmetric) isolated nacelles at transonic speeds are described. The computerized flow field analysis is designed to predict the transonic flow around long and short high-bypass-ratio fan duct nacelles with inlet flows and with exhaust flows having appropriate aerothermodynamic properties. The flow field boundaries are located as far upstream and downstream as necessary to obtain minimum disturbances at the boundary. The far-field lateral flow field boundary is analytically defined to exactly represent free-flight conditions or solid wind tunnel wall effects. The inviscid solution technique is based on a Streamtube Curvature Analysis. The computer program utilizes an automatic grid refinement procedure and solves the flow field equations with a matrix relaxation technique. The boundary layer displacement effects and the onset of turbulent separation are included, based on the compressible turbulent boundary layer solution method of Stratford and Beavers and on the turbulent separation prediction method of Stratford.

  8. KSC ice/frost/debris assessment for Space Shuttle Mission STS-30R

    NASA Technical Reports Server (NTRS)

    Stevenson, Charles G.; Katnik, Gregory N.; Higginbotham, Scott A.

    1989-01-01

    An ice/frost/debris assessment was conducted for Space Shuttle Mission STS-30R. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/frost conditions on the external tank are assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by an on-pad visual inspection. High speed photography is analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage. The ice/frost/debris conditions of Mission STS-30R and their overall effect on the Space Shuttle Program is documented.

  9. Debris/Ice/TPS Assessment and Photographic Analysis for Shuttle Mission STS-39

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1991-01-01

    A Debris/Ice/TPS (thermal protection system) assessment and photographic analysis was conducted for Space Shuttle Mission STS-39. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the external tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography of launch was analyzed to identify ice/debris anomalies. The debris/ice/TPS conditions and photographic analysis of Mission STS-39, and their overall effect on the Space Shuttle Program are documented.

  10. Ways of achieving continuous service from computers

    NASA Technical Reports Server (NTRS)

    Quinn, M. J., Jr.

    1974-01-01

    This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.

  11. Solution of a large hydrodynamic problem using the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Howser, L. M.

    1976-01-01

    A representative hydrodynamics problem, the shock-initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speed-up of the STAR-100 over the CDC 6600 is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR-100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR-100.

  12. Comsat Antenna

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The antenna shown is the new, multiple-beam, Unattended Earth Terminal, located at COMSAT Laboratories in Clarksburg, Maryland. Seemingly simple, it is actually a complex structure capable of maintaining contact with several satellites simultaneously (conventional Earth station antennas communicate with only one satellite at a time). In developing the antenna, COMSAT Laboratories used NASTRAN, NASA's structural analysis computer program, together with BANDIT, a companion program. The computer programs were used to model several structural configurations and determine the most suitable. The speed and accuracy of the computerized design analysis afforded appreciable savings in time and money.

  13. SPIP: A computer program implementing the Interaction Picture method for simulation of light-wave propagation in optical fibre

    NASA Astrophysics Data System (ADS)

    Balac, Stéphane; Fernandez, Arnaud

    2016-02-01

    The computer program SPIP is aimed at solving the Generalized Non-Linear Schrödinger equation (GNLSE), involved in optics, e.g., in the modelling of light-wave propagation in an optical fibre, by the Interaction Picture method, a new efficient alternative to the Symmetric Split-Step method. In the SPIP program a dedicated, essentially cost-free adaptive step-size control based on an embedded fourth-order Runge-Kutta method is implemented in order to speed up the solution.
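
    SPIP's embedded Runge-Kutta pair is fourth order; the sketch below uses the lower-order Bogacki-Shampine 3(2) pair to show the same idea, a nearly cost-free local-error estimate driving the step size. The tolerance, initial step, and test equation are arbitrary:

        def bs23_step(f, t, y, h):
            # Bogacki-Shampine embedded 3(2) pair: one 3rd-order step plus
            # a 2nd-order companion; their difference estimates the error.
            k1 = f(t, y)
            k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
            k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
            y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9
            k4 = f(t + h, y3)
            y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)
            return y3, abs(y3 - y2)

        def integrate(f, t, y, t_end, tol=1e-8, h=1e-3):
            while t < t_end:
                h = min(h, t_end - t)
                y_new, err = bs23_step(f, t, y, h)
                if err <= tol:                 # accept the step
                    t, y = t + h, y_new
                # Grow or shrink the step from the error estimate.
                h *= min(2.0, max(0.1, 0.9 * (tol / (err + 1e-30)) ** (1 / 3)))
            return y

        print(integrate(lambda t, y: -y, 0.0, 1.0, 1.0))  # exp(-1) = 0.3679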

  14. F-16XL-2 Supersonic Laminar Flow Control Flight Test Experiment

    NASA Technical Reports Server (NTRS)

    Anders, Scott G.; Fischer, Michael C.

    1999-01-01

    The F-16XL-2 Supersonic Laminar Flow Control Flight Test Experiment was part of the NASA High-Speed Research Program. The goal of the experiment was to demonstrate extensive laminar flow, to validate computational fluid dynamics (CFD) codes and design methodology, and to establish laminar flow control design criteria. Topics include the flight test hardware and design, airplane modification, the pressure and suction distributions achieved, the laminar flow achieved, and the data analysis and code correlation.

  15. GASP- General Aviation Synthesis Program. Volume 3: Aerodynamics

    NASA Technical Reports Server (NTRS)

    Hague, D.

    1978-01-01

    Aerodynamics calculations are treated in routines which concern moments as they vary with flight conditions and attitude. The subroutines discussed: (1) compute component equivalent flat plate and wetted areas and profile drag; (2) print and plot low and high speed drag polars; (3) determine lift coefficient or angle of attack; (4) determine drag coefficient; (5) determine maximum lift coefficient and drag increment for various flap types and flap settings; and (6) determine required lift coefficient and drag coefficient in cruise flight.

  16. Scientific Visualization in High Speed Network Environments

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kutler, Paul (Technical Monitor)

    1997-01-01

    In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation Facility (NAS) at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.

  17. 1997 NASA High-Speed Research Program Aerodynamic Performance Workshop. Volume 1; Configuration Aerodynamics

    NASA Technical Reports Server (NTRS)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Aerodynamic Performance Workshop on February 25-28, 1997. The workshop was designed to bring together NASA and industry High-Speed Civil Transport (HSCT) Aerodynamic Performance technology development participants in areas of Configuration Aerodynamics (transonic and supersonic cruise drag prediction and minimization), High-Lift, Flight Controls, Supersonic Laminar Flow Control, and Sonic Boom Prediction. The workshop objectives were to (1) report the progress and status of HSCT aerodynamic performance technology development; (2) disseminate this technology within the appropriate technical communities; and (3) promote synergy among the scientists and engineers working on HSCT aerodynamics. In particular, single- and multi-point optimized HSCT configurations, HSCT high-lift system performance predictions, and HSCT Motion Simulator results were presented along with executive summaries for all the Aerodynamic Performance technology areas.

  18. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large integer arithmetic with the modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication - the core of the procedure. This was previously believed to be slow and have too high cost in terms of computing resources. Our tests show a significant gain in generation speed which is comparable with other fast, high quality random number generators. An additional feature is the fast skipping of generator states leading to a seeding scheme which guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3 Programming language: C++, C, Assembler
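
    The recurrence itself is compact once big-integer arithmetic is available natively, as in Python. The modulus below is the 576-bit value commonly associated with RANLUX's subtract-with-borrow recurrence (b = 2^24, r = 24, s = 10); the multiplier is a placeholder, not the generator's actual constant:

        # Linear congruential generator with a large modulus, in the
        # spirit of the RANLUX reformulation.
        M = 2**576 - 2**240 + 1          # modulus inferred from RANLUX
        A = 0x9E3779B97F4A7C15           # hypothetical multiplier

        def next_state(x):
            return (A * x) % M           # one big-integer modular multiply

        def skip(x, n):
            # Jump ahead n states in O(log n) multiplies, since
            # x_{k+n} = A**n * x_k (mod M); this supports seeding schemes
            # with guaranteed-disjoint sequences.
            return (pow(A, n, M) * x) % M

        x = 12345
        for _ in range(3):
            x = next_state(x)
        assert skip(12345, 3) == x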

  1. AGARD Index of Publications 1983-1985

    DTIC Science & Technology

    1987-06-01

    a high performance high speed General Aviation propeller the advent of the highly loaded program...distribution data at high speed and CLmax data at low speed are described. N83-3036# Saab-Scania, Linkoping (Sweden). A flight wing pressure survey which...also well with predictions based on wind tunnel data. flight at high speed and wind tunnel measurements on a half Reynolds Number and transition

  2. South Carolina southeast high speed rail corridor improvement study

    DOT National Transportation Integrated Search

    2001-02-01

    The Southeast Rail Corridor was originally designated as a high-speed corridor in Section 1010 of the Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991. More specifically, it involved the high-speed grade-crossing improvement program o...

  3. Evaluation of non-freeway rumble strips - phase II.

    DOT National Transportation Integrated Search

    2015-03-01

    MDOT's rumble strip program for two-lane high-speed rural highways was initiated in 2008 and continued through 2010. This program included implementation of centerline rumble strips (CLRS) on nearly 5,400 miles of two-lane high-speed roads that...

  4. Materials Test Program, Contact Power Collection for High Speed Tracked Vehicles

    DOT National Transportation Integrated Search

    1971-01-01

    A test program is defined for determining the failure modes and wear characteristics for brushes used to collect electrical power from the wayside for high speed tracked vehicles. Simulation of running conditions and the necessary instrumentation for...

  5. Experimental transonic steady state and unsteady pressure measurements on a supercritical wing during flutter and forced discrete frequency oscillations

    NASA Technical Reports Server (NTRS)

    Piette, Douglas S.; Cazier, Frank W., Jr.

    1989-01-01

    Present flutter analysis methods do not accurately predict the flutter speeds in the transonic flow region for wings with supercritical airfoils. Aerodynamic programs using computational fluid dynamic (CFD) methods are being developed, but these programs need to be verified before they can be used with confidence. A wind tunnel test was performed to obtain all types of data necessary for correlating with CFD programs to validate them for use on high-aspect-ratio wings. The data include steady state and unsteady aerodynamic measurements on a nominal-stiffness wing and on a wing with four times that stiffness. Data were obtained during forced oscillations and during flutter at several angles of attack, Mach numbers, and tunnel densities.

  6. IMAGE: A Design Integration Framework Applied to the High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.

    1993-01-01

    Effective design of the High Speed Civil Transport requires the systematic application of design resources throughout a product's life-cycle. Information obtained from the use of these resources is used for the decision-making processes of Concurrent Engineering. Integrated computing environments facilitate the acquisition, organization, and use of required information. State-of-the-art computing technologies provide the basis for the Intelligent Multi-disciplinary Aircraft Generation Environment (IMAGE) described in this paper. IMAGE builds upon existing agent technologies by adding a new component called a model. With the addition of a model, the agent can provide accountable resource utilization in the presence of increasing design fidelity. The development of a zeroth-order agent is used to illustrate agent fundamentals. Using a CATIA(TM)-based agent from previous work, a High Speed Civil Transport visualization system linking CATIA, FLOPS, and ASTROS will be shown. These examples illustrate the important role of the agent technologies used to implement IMAGE, and together they demonstrate that IMAGE can provide an integrated computing environment for the design of the High Speed Civil Transport.

  7. The Acoustic Model Evaluation Committee (AMEC) Reports. Volume 3. Evaluation of the RAYMODE X Propagation Loss Model. Book 1

    DTIC Science & Technology

    1982-09-01

    Fragmentary abstract snippets (two-column text garbled in extraction): ... run on the UNIVAC 1108 computer ... a single sound speed profile ... this model is in extensive fleet usage, supporting ... other RAYMODE versions were not ... explanations were sought for significant disparities. (U) In addition to a sound speed versus depth or temperature versus depth profile plus a constant salinity value, the program can access historical sound speed data. (U) Taken together, the two accuracy-assessment techniques, the Difference and FOM techniques, lead to ...

  8. Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker.

    PubMed

    Narita, Gaku; Watanabe, Yoshihiro; Ishikawa, Masatoshi

    2017-03-01

    Dynamic projection mapping for moving objects has attracted much attention in recent years. However, conventional approaches have faced some issues, such as the target objects being limited to rigid objects, and the limited moving speed of the targets. In this paper, we focus on dynamic projection mapping onto rapidly deforming non-rigid surfaces with a speed sufficiently high that a human does not perceive any misalignment between the target object and the projected images. In order to achieve such projection mapping, we need a high-speed technique for tracking non-rigid surfaces, which is still a challenging problem in the field of computer vision. We propose the Deformable Dot Cluster Marker (DDCM), a novel fiducial marker for high-speed tracking of non-rigid surfaces using a high-frame-rate camera. The DDCM has three performance advantages. First, it can be detected even when it is strongly deformed. Second, it realizes robust tracking even in the presence of external and self occlusions. Third, it allows millisecond-order computational speed. Using DDCM and a high-speed projector, we realized dynamic projection mapping onto a deformed sheet of paper and a T-shirt with a speed sufficiently high that the projected images appeared to be printed on the objects.

  9. Design of a high-speed digital processing element for parallel simulation

    NASA Technical Reports Server (NTRS)

    Milner, E. J.; Cwynar, D. S.

    1983-01-01

    A prototype of a custom-designed computer to be used as a processing element in a multiprocessor-based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real-time simulations are needed for closed-loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by prefetching the next instruction while the current one is executing, transporting data over high-speed data buses, and using state-of-the-art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom-designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.

  10. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, using digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to complete the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve the efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel onto the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish the vibration measurement of large-scale structures.
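
    For orientation, the sketch below shows the inverse compositional Lucas-Kanade idea for the simplest case, a pure-translation warp with nearest-pixel sampling; the paper's modified algorithm and its millisecond timings are not reproduced here.

        import numpy as np

        def ic_lk_translation(template, image, p, n_iters=50, tol=1e-3):
            # Inverse compositional Lucas-Kanade for a translation warp.
            # Gradients, steepest-descent images, and the Hessian depend only
            # on the template, so they are precomputed once -- the source of
            # the variant's speed advantage.
            h, w = template.shape
            gy, gx = np.gradient(template.astype(float))
            sd = np.stack([gx.ravel(), gy.ravel()], axis=1)
            H_inv = np.linalg.inv(sd.T @ sd)          # 2x2 Hessian inverse
            p = np.asarray(p, dtype=float)
            for _ in range(n_iters):
                x0, y0 = int(round(p[0])), int(round(p[1]))
                win = image[y0:y0 + h, x0:x0 + w]     # nearest-pixel warp;
                if win.shape != template.shape:       # real code interpolates
                    break
                dp = H_inv @ (sd.T @ (win - template).ravel())
                p -= dp                               # inverse composition
                if np.hypot(*dp) < tol:
                    break
            return p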

  11. Program to compute the positions of the aircraft and of the aircraft sensor footprints

    NASA Technical Reports Server (NTRS)

    Paris, J. F. (Principal Investigator)

    1982-01-01

    The positions of the ground track of the aircraft and of the aircraft sensor footprints, in particular the metric camera and the radar scatterometer on the C-130 aircraft, are estimated by a program called ACTRK. The program uses the altitude, speed, and attitude information contained in the radar scatterometer data files to calculate the positions. The ACTRK program is documented.

  12. Reference H Piloted Assessment (LaRC.1) Pilot Briefing Guide

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Raney, David L.; Hahne, David E.; Derry, Stephen D.; Glaab, Louis J.

    1999-01-01

    This document describes the purpose and method of an assessment of the Boeing Reference H High-Speed Civil Transport design, evaluated in the NASA Langley Research Center's Visual/Motion Simulator in January 1997. Six pilots were invited to perform approximately 60 different Mission Task Elements that represent most normal and emergency flight operations of concern to the High Speed Research program. The Reference H design represents a candidate configuration for a High-Speed Civil Transport, a second-generation supersonic civilian transport aircraft. The High-Speed Civil Transport is intended to be economically sound and environmentally safe while carrying passengers and cargo at supersonic speeds with a trans-Pacific range. This simulation study was designated "LaRC.1" for the purposes of planning, scheduling, and reporting within the Guidance and Flight Controls super-element of the High Speed Research program. The study was based upon the Cycle 3 release of the Reference H simulation model.

  13. Recent Advances and Issues in Computers. Oryx Frontiers of Science Series.

    ERIC Educational Resources Information Center

    Gay, Martin K.

    Discussing recent issues in computer science, this book contains 11 chapters covering: (1) developments that have the potential for changing the way computers operate, including microprocessors, mass storage systems, and computing environments; (2) the national computational grid for high-bandwidth, high-speed collaboration among scientists, and…

  14. Teaching Heat Exchanger Network Synthesis Using Interactive Microcomputer Graphics.

    ERIC Educational Resources Information Center

    Dixon, Anthony G.

    1987-01-01

    Describes the Heat Exchanger Network Synthesis (HENS) program used at Worcester Polytechnic Institute (Massachusetts) as an aid to teaching the energy integration step in process design. Focuses on the benefits of the computer graphics used in the program to increase the speed of generating and changing networks. (TW)

  15. Computational methods in the prediction of advanced subsonic and supersonic propeller induced noise: ASSPIN users' manual

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Tarkenton, G. M.

    1992-01-01

    This document describes the computational aspects of propeller noise prediction in the time domain and the use of the high-speed propeller noise prediction program ASSPIN (Advanced Subsonic and Supersonic Propeller Induced Noise). These formulations are valid in both the near and far fields. Two formulations are utilized by ASSPIN: (1) one is used for subsonic portions of the propeller blade; and (2) the second is used for transonic and supersonic regions on the blade. Switching between the two formulations is done automatically. ASSPIN incorporates advanced blade geometry and surface pressure modelling, adaptive observer time grid strategies, and enhanced numerical algorithms that result in reduced computational time. In addition, the ability to treat the nonaxial inflow case has been included.

  16. Runway exit designs for capacity improvement demonstrations. Phase 2: Computer model development

    NASA Technical Reports Server (NTRS)

    Trani, A. A.; Hobeika, A. G.; Kim, B. J.; Nunna, V.; Zhong, C.

    1992-01-01

    The development of a computer simulation/optimization model is described to: (1) estimate the optimal locations of existing and proposed runway turnoffs; and (2) estimate the geometric design requirements associated with newly developed high-speed turnoffs. The model, named REDIM 2.0, is a stand-alone application to be used by airport planners, designers, and researchers alike to estimate optimal turnoff locations. The main procedures implemented in the software package are described in detail, and possible applications are illustrated using six major runway scenarios. The main output of the computer program is the estimation of the weighted average runway occupancy time for a user-defined aircraft population. The location and geometric characteristics of each turnoff are also provided to the user.
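
    As a toy illustration of that main output (all numbers below are invented), the weighted average runway occupancy time is simply the mix-weighted mean of per-class occupancy times:

        # aircraft class -> (fleet mix fraction, runway occupancy time, s)
        fleet = {
            "small": (0.25, 45.0),
            "large": (0.60, 52.0),
            "heavy": (0.15, 61.0),
        }
        weighted_rot = sum(frac * rot for frac, rot in fleet.values())
        print(f"Weighted average ROT: {weighted_rot:.1f} s")   # 51.6 s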

  17. High-speed true random number generation based on paired memristors for security electronics

    NASA Astrophysics Data System (ADS)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    A true random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and the Internet of Things. Here we demonstrate a TRNG using the intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum-oxide-based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.
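
    A software caricature of the scheme (device physics replaced by a log-normal spread, which is an assumption) shows why the alternating read matters: comparing a mismatched pair gives biased bits, and swapping the comparison order every other cycle cancels the fixed offset.

        import math
        import random

        def read_off_state(mean):
            # Toy stand-in for one read of a memristor's off-state
            # resistance; the cycle-to-cycle spread is the entropy source.
            return random.lognormvariate(math.log(mean), 0.3)

        def trng_bits(n):
            bits = []
            for i in range(n):
                r1 = read_off_state(1.0e6)   # deliberately mismatched pair,
                r2 = read_off_state(0.9e6)   # so a raw comparison is biased
                bit = int(r1 > r2)
                if i % 2:                    # alternating read: invert the
                    bit ^= 1                 # comparison every other cycle
                bits.append(bit)
            return bits

        print(sum(trng_bits(100_000)) / 100_000)   # ~0.5 after debiasing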

  18. High-speed true random number generation based on paired memristors for security electronics.

    PubMed

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-10

    A true random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and the Internet of Things. Here we demonstrate a TRNG using the intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum-oxide-based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ∼30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.

  19. High-Speed Jet Noise Reduction NASA Perspective

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.; Handy, J. (Technical Monitor)

    2001-01-01

    History shows that the problem of high-speed jet noise reduction is difficult to solve. The good news is that high-performance military aircraft noise is dominated by a single source called 'jet noise' (commercial aircraft have several sources). The bad news is that this source has been the subject of research for the past 50 years and progress has been incremental. Major jet noise reduction has been achieved through changing the cycle of the engine to reduce the jet exit velocity. Smaller reductions have been achieved using suppression devices like mixing enhancement and acoustic liners. Significant jet noise reduction without any performance loss is probably not possible! Recent NASA noise reduction research programs include the High Speed Research Program, Advanced Subsonic Technology Noise Reduction Program, Aerospace Propulsion and Power Program - Fundamental Noise, and Quiet Aircraft Technology Program.

  20. Exploring Cells from the Inside out: New Tools for the Classroom

    ERIC Educational Resources Information Center

    Minogue, James; Jones, Gail; Broadwell, Bethany; Oppewal, Tom

    2006-01-01

    After the first observation of life under the microscope, it took two centuries of research before the "cell theory" was established. Luckily, today's teachers can take advantage of computer technology and speed up the discovery process in their classrooms. This article describes how computer-based instructional programs can be used to engage…

  1. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply the new method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Our new inverse modeling method is therefore a powerful tool for large-scale applications.
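
    The recycling idea can be seen in miniature with dense linear algebra (a sketch, not the MADS implementation): factor the normal equations once, then reuse the factorization for every damping parameter, just as the paper reuses one Krylov subspace across all damping parameters.

        import numpy as np

        def lm_trial_steps(J, r, lambdas):
            # Solve (J^T J + lambda I) dp = -J^T r for many lambdas while
            # factoring J^T J only once.
            Jtr = J.T @ r
            w, V = np.linalg.eigh(J.T @ J)     # one eigendecomposition
            return [V @ ((V.T @ -Jtr) / (w + lam)) for lam in lambdas]

        rng = np.random.default_rng(0)
        J = rng.normal(size=(100, 8))          # toy Jacobian
        r = rng.normal(size=100)               # toy residual
        steps = lm_trial_steps(J, r, [1e-2, 1e-1, 1.0, 10.0])
        print([round(float(np.linalg.norm(s)), 3) for s in steps])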

  2. In situ flash x-ray high-speed computed tomography for the quantitative analysis of highly dynamic processes

    NASA Astrophysics Data System (ADS)

    Moser, Stefan; Nau, Siegfried; Salk, Manfred; Thoma, Klaus

    2014-02-01

    The in situ investigation of dynamic events, ranging from car crashes to ballistics, is often key to understanding dynamic material behavior. In many cases the important processes and interactions happen on the scale of milliseconds to microseconds at speeds of 1000 m/s or more. Often, 3D information is necessary to fully capture and analyze all relevant effects, so high-speed 3D-visualization techniques are required for in situ analysis. 3D-capable optical high-speed methods are often impaired by luminous effects and dust, while flash x-ray based methods usually deliver only 2D data. In this paper, a novel 3D-capable flash x-ray based method, in situ flash x-ray high-speed computed tomography (HSCT), is presented. The method is capable of producing 3D reconstructions of high-speed processes from an undersampled dataset consisting of only a few (typically 3 to 6) x-ray projections. The major challenges are identified and discussed, and the chosen solution is outlined. The application is illustrated with an exemplary 1000 m/s high-speed impact event on the scale of microseconds. A quantitative analysis of the in situ measurement of the material fragments, with a 3D reconstruction at 1 mm voxel size, is presented and the results are discussed. The results show that the HSCT method allows gaining valuable visual and quantitative mechanical information for the understanding and interpretation of high-speed events.

  3. Hera - The HEASARC's New Data Analysis Service

    NASA Technical Reports Server (NTRS)

    Pence, William

    2006-01-01

    Hera is the new computer service provided by the HEASARC at the NASA Goddard Space Flight Center that enables qualified student and professional astronomical researchers to immediately begin analyzing scientific data from high-energy astrophysics missions. All the resources needed to do the data analysis are freely provided by Hera, including: * the latest version of the hundreds of scientific analysis programs in the HEASARC's HEASOFT package, as well as most of the programs in the Chandra CIAO package and the XMM-Newton SAS package; * high-speed access to the terabytes of data in the HEASARC's high-energy astrophysics Browse data archive; * a cluster of fast Linux workstations to run the software; * ample local disk space to temporarily store the data and results. Some of the many features and different modes of using Hera are illustrated in this poster presentation.

  4. Comparison of Communication Architectures and Network Topologies for Distributed Propulsion Controls (Preprint)

    DTIC Science & Technology

    2013-05-01

    Fragmentary abstract snippets (garbled in extraction): ... logic to perform control function computations ... connected to the Full Authority Digital Engine Control (FADEC) via a high-speed data communication bus ... the short-term distributed engine control configurations will be core ... data concentrator; and high-temperature electronics, a high-speed communication bus between the data concentrator and the control law processor (master FADEC).

  5. Visualization Techniques in Space and Atmospheric Sciences

    NASA Technical Reports Server (NTRS)

    Szuszczewicz, E. P. (Editor); Bredekamp, Joseph H. (Editor)

    1995-01-01

    Unprecedented volumes of data will be generated by research programs that investigate the Earth as a system and the origin of the universe, which will in turn require analysis and interpretation that will lead to meaningful scientific insight. Providing a widely distributed research community with the ability to access, manipulate, analyze, and visualize these complex, multidimensional data sets depends on a wide range of computer science and technology topics. Data storage and compression, data base management, computational methods and algorithms, artificial intelligence, telecommunications, and high-resolution display are just a few of the topics addressed. A unifying theme throughout the papers with regards to advanced data handling and visualization is the need for interactivity, speed, user-friendliness, and extensibility.

  6. Langley's Computational Efforts in Sonic-Boom Softening of the Boeing HSCT

    NASA Technical Reports Server (NTRS)

    Fouladi, Kamran

    1999-01-01

    NASA Langley's computational efforts in the sonic-boom softening of the Boeing high-speed civil transport are discussed in this paper. In these efforts, an optimization process using a higher order Euler method for analysis was employed to reduce the sonic boom of a baseline configuration through fuselage camber and wing dihedral modifications. Fuselage modifications did not provide any improvements, but the dihedral modifications were shown to be an important tool for the softening process. The study also included aerodynamic and sonic-boom analyses of the baseline and some of the proposed "softened" configurations. Comparisons of two Euler methodologies and two propagation programs for sonic-boom predictions are also discussed in the present paper.

  7. GPU-Accelerated Large-Scale Electronic Structure Theory on Titan with a First-Principles All-Electron Code

    NASA Astrophysics Data System (ADS)

    Huhn, William Paul; Lange, Björn; Yu, Victor; Blum, Volker; Lee, Seyong; Yoon, Mina

    Density-functional theory is well established as the dominant quantum-mechanical computational method in the materials community. Large, accurate simulations become very challenging on small to mid-scale computers and require high-performance compute platforms to succeed. GPU acceleration is one promising approach. In this talk, we present a first implementation of all-electron density-functional theory in the FHI-aims code for massively parallel GPU-based platforms. Special attention is paid to the update of the density and to the integration of the Hamiltonian and overlap matrices, realized in a domain decomposition scheme on non-uniform grids. The initial implementation scales well across nodes on ORNL's Titan Cray XK7 supercomputer (8 to 64 nodes, 16 MPI ranks/node) and, by utilizing the K20X Tesla GPU on each Titan node, shows an overall runtime speed-up of 1.4x, with the charge density update showing a speed-up of 2x. Further acceleration opportunities will be discussed. Work supported by the LDRD Program of ORNL, managed by UT-Battelle, LLC, for the U.S. DOE, and by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  8. A Computational Approach for Probabilistic Analysis of LS-DYNA Water Impact Simulations

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Mason, Brian H.; Lyle, Karen H.

    2010-01-01

    NASA's development of new concepts for the Crew Exploration Vehicle Orion presents many challenges similar to those worked in the sixties during the Apollo program. However, with improved modeling capabilities, new challenges arise. For example, the commercial code LS-DYNA, although widely used and accepted in the technical community, often involves high-dimensional, time-consuming, and computationally intensive simulations. Because of the computational cost, these tools are often used to evaluate specific conditions and rarely used for statistical analysis. The challenge is to capture what is learned from a limited number of LS-DYNA simulations in models that allow users to interpolate solutions at a fraction of the computational time. For this problem, response surface models are used to predict the system time responses to a water landing as a function of capsule speed, direction, attitude, water speed, and water direction. Furthermore, these models can also be used to ascertain the adequacy of the design in terms of probability measures. This paper presents a description of the LS-DYNA model, a brief summary of the response surface techniques, the analysis-of-variance approach used in the sensitivity studies, equations used to estimate impact parameters, results showing conditions that might cause injuries, and concluding remarks.
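
    The surrogate idea fits in a few lines (everything below is invented for illustration; the real study fits LS-DYNA outputs over five input variables): fit an inexpensive quadratic response surface to a handful of expensive runs, then evaluate new conditions at negligible cost.

        import numpy as np

        rng = np.random.default_rng(3)
        # 30 pretend "simulation runs" over (capsule speed m/s, pitch deg)
        X = rng.uniform([5.0, -10.0], [12.0, 10.0], size=(30, 2))
        y = 1.8 * X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2 + 4.0 * X[:, 0] \
            + rng.normal(0.0, 2.0, 30)       # noisy response, e.g. peak g

        def design(X):
            # quadratic basis: 1, s, p, s^2, s*p, p^2
            s, p = X[:, 0], X[:, 1]
            return np.column_stack(
                [np.ones_like(s), s, p, s * s, s * p, p * p])

        coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
        print(design(np.array([[9.0, 3.0]])) @ coef)   # cheap interpolation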

  9. A novel approach to multiple sequence alignment using hadoop data grids.

    PubMed

    Sudha Sadasivam, G; Baktavatchalam, G

    2010-01-01

    Multiple alignment of protein sequences helps to determine evolutionary linkage and to predict molecular structures. The factors to be considered while aligning multiple sequences are the speed and accuracy of alignment. Although dynamic programming algorithms produce accurate alignments, they are computationally intensive. In this paper we propose a time-efficient approach to sequence alignment that also produces quality alignments. The dynamic nature of the algorithm, coupled with the data and computational parallelism of Hadoop data grids, improves the accuracy and speed of sequence alignment. The principle of block splitting in Hadoop, coupled with its scalability, facilitates alignment of very large sequences.
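
    For context, the sketch below is the classic pairwise dynamic program (Needleman-Wunsch global alignment score); its O(len(a) x len(b)) cost per sequence pair is exactly what motivates distributing the work over a data grid. Scoring values are arbitrary.

        import numpy as np

        def nw_score(a, b, match=1, mismatch=-1, gap=-2):
            # Fill the DP table h[i][j] = best score aligning a[:i], b[:j].
            h = np.zeros((len(a) + 1, len(b) + 1))
            h[:, 0] = gap * np.arange(len(a) + 1)
            h[0, :] = gap * np.arange(len(b) + 1)
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    h[i, j] = max(h[i - 1, j - 1] + s,   # (mis)match
                                  h[i - 1, j] + gap,     # gap in b
                                  h[i, j - 1] + gap)     # gap in a
            return h[-1, -1]

        print(nw_score("GATTACA", "GCATGCU"))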

  10. Flutter analysis of swept-wing subsonic aircraft with parameter studies of composite wings

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Stein, M.

    1974-01-01

    A computer program is presented for the flutter analysis, including the effects of rigid-body roll, pitch, and plunge of swept-wing subsonic aircraft with a flexible fuselage and engines mounted on flexible pylons. The program utilizes a direct flutter solution in which the flutter determinant is derived by using finite differences, and the root locus branches of the determinant are searched for the lowest flutter speed. In addition, a preprocessing subroutine is included which evaluates the variable bending and twisting stiffness properties of the wing by using a laminated, balanced ply, filamentary composite plate theory. The program has been substantiated by comparisons with existing flutter solutions. The program has been applied to parameter studies which examine the effect of filament orientation upon the flutter behavior of wings belonging to the following three classes: wings having different angles of sweep, wings having different mass ratios, and wings having variable skin thicknesses. These studies demonstrated that the program can perform a complete parameter study in one computer run. The program is designed to detect abrupt changes in the lowest flutter speed and mode shape as the parameters are varied.

  11. Numerical solution of differential equations by artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1995-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANN's) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed by the author to mate the adaptability of the ANN with the speed and precision of the digital computer. This method has been successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
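
    The examples-free idea can be sketched on a toy problem (the ODE, network size, and finite-difference training below are all assumptions, not the author's method): build a trial solution that satisfies the boundary condition by construction and tune a small network so the ODE residual vanishes at collocation points.

        import numpy as np

        # Toy problem: u'(x) = -u(x), u(0) = 1 on [0, 2]; exact u = exp(-x).
        # Trial solution u(x) = 1 + x * N(x) satisfies u(0) = 1 by design.
        rng = np.random.default_rng(0)
        theta = rng.normal(scale=0.5, size=31)    # 10 hidden tanh units
        xs = np.linspace(0.0, 2.0, 40)            # collocation points

        def net(x, th):
            w, b, v = th[:10], th[10:20], th[20:30]
            return np.tanh(np.outer(x, w) + b) @ v + th[30]

        def trial(x, th):
            return 1.0 + x * net(x, th)

        def loss(th, h=1e-4):
            du = (trial(xs + h, th) - trial(xs - h, th)) / (2 * h)
            return np.mean((du + trial(xs, th)) ** 2)   # squared residual

        for _ in range(1500):                     # crude finite-difference
            g = np.zeros_like(theta)              # gradient descent
            for i in range(theta.size):
                e = np.zeros_like(theta)
                e[i] = 1e-5
                g[i] = (loss(theta + e) - loss(theta - e)) / 2e-5
            theta -= 0.05 * g

        print(np.max(np.abs(trial(xs, theta) - np.exp(-xs))))  # vs exact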

  12. Modification of the Douglas Neumann program to improve the efficiency of predicting component interference and high lift characteristics

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Grose, G. G.

    1978-01-01

    The Douglas Neumann method for low-speed potential flow on arbitrary three-dimensional lifting bodies was modified by substituting the combined source and doublet surface paneling based on Green's identity for the original source panels. Numerical studies show improved accuracy and stability for thin lifting surfaces, permitting reduced panel number for high-lift devices and supercritical airfoil sections. The accuracy of flow in concave corners is improved. A method of airfoil section design for a given pressure distribution, based on Green's identity, was demonstrated. The program uses panels on the body surface with constant source strength and parabolic distribution of doublet strength, and a doublet sheet on the wake. The program is written for the CDC CYBER 175 computer. Results of calculations are presented for isolated bodies, wings, wing-body combinations, and internal flow.

  13. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.

  14. The science of computing - Parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic component technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that 1 Gflop is the maximum operational speed for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6, or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, and COBOL rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.

  15. User's guide to PANCOR: A panel method program for interference assessment in slotted-wall wind tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, William B., Jr.

    1990-01-01

    Guidelines are presented for use of the computer program PANCOR to assess the interference due to tunnel walls and model support in a slotted wind tunnel test section at subsonic speeds. Input data requirements are described in detail and program output and general program usage are described. The program is written for effective automatic vectorization on a CDC CYBER 200 class vector processing system.

  16. Department of Physics' Involvement of the Impact Testing Project of the High Speed Civil Transport Program (HSCT)

    NASA Technical Reports Server (NTRS)

    VonMeerwall, Ernst D.

    1994-01-01

    The project involved the impact testing of a Kevlar-like woven polymer material, PBO. The purpose was to determine whether this material showed any promise as a lightweight replacement material for jet engine fan containment. The currently used metal fan containment designs carry a high drag penalty due to their weight. Projectiles were fired at samples of PBO by means of a 0.5-inch-diameter helium-powered gun. The initial plan was to encase the samples inside a purpose-built steel "hot box" for heating and ricochet containment. The research associate's responsibility was to develop the data acquisition programs and techniques necessary to determine accurately the impacting projectile's velocity. Beyond this, the research associate's duties included any physical computations, experimental design, and data analysis necessary.

  17. Critical Life Prediction Research on Boron-Enhanced Ti-6A1-4V

    DTIC Science & Technology

    2007-05-01

    Fragmentary abstract snippets (garbled in extraction): 2.4 High-Modulus Ti-6Al-2Fe-0.1Si-0.6B "T" Extrusions for the HSCT program. Ref. Government Contract No. NAS1-20220: High Speed Research-Airframe Technology report ... baseline Ti-6Al-4V (Government Contract No. NAS1-20220, High Speed Research-Airframe Technology Report, 1997: 3-5, 14, 26).

  18. Stability of mixing layers

    NASA Technical Reports Server (NTRS)

    Tam, Christopher; Krothapalli, A

    1993-01-01

    The research program for the first year of this project (see the original research proposal) consists of developing an explicit marching scheme for solving the parabolized stability equations (PSE). Performing mathematical analysis of the computational algorithm, including numerical stability analysis and the determination of the proper boundary conditions needed at the boundary of the computational domain, is implicit in the task. Before one can solve the parabolized stability equations for high-speed mixing layers, the mean flow must first be found. In the past, instability analysis of high-speed mixing layers has mostly been performed on mean flow profiles calculated by the boundary layer equations. In carrying out this project, it is believed that the boundary layer equations might not give an accurate enough nonparallel, nonlinear mean flow for parabolized stability analysis. A more accurate mean flow can, however, be found by solving the parabolized Navier-Stokes equations. The advantage of the parabolized Navier-Stokes equations is that their accuracy is consistent with the PSE method. Furthermore, the method of solution is similar. Hence, the major part of this year's effort has been devoted to the development of an explicit numerical marching scheme for the solution of the parabolized Navier-Stokes equations as applied to the high-speed mixing layer problem.

  19. An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment

    PubMed Central

    Li, Hsu-Chih

    2017-01-01

    The advancement of information technology in financial applications has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in a highly dynamic environment where trading systems respond to one another at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of activity in the financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim to develop feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks on the microscopic level. Our empirical results show that the proposed GA-based system is able to significantly improve the accuracy of price movement prediction, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications. PMID:28316618
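
    A generic GA skeleton of this kind (every detail below is invented for illustration: the momentum rule, the synthetic random-walk prices, and the fitness function) evolves a small rule population toward higher directional-prediction accuracy:

        import random

        random.seed(7)
        prices = [100.0]
        for _ in range(2000):              # synthetic drifting random walk
            prices.append(prices[-1] + random.gauss(0, 1) + 0.02)

        def fitness(ind):
            # Directional accuracy of a momentum rule (lookback, threshold).
            lb, th = ind
            hits = total = 0
            for t in range(lb, len(prices) - 1):
                pred = 1 if prices[t] - prices[t - lb] > th else -1
                actual = 1 if prices[t + 1] > prices[t] else -1
                hits += pred == actual
                total += 1
            return hits / total

        pop = [(random.randint(1, 50), random.uniform(-2, 2))
               for _ in range(30)]
        for gen in range(40):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]             # elitist selection
            pop = parents + [
                (max(1, p[0] + random.randint(-3, 3)),
                 p[1] + random.gauss(0, 0.2))      # mutation
                for p in random.choices(parents, k=20)
            ]
        print(max(pop, key=fitness), max(fitness(p) for p in pop))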

  20. Deriving quantitative dynamics information for proteins and RNAs using ROTDIF with a graphical user interface.

    PubMed

    Berlin, Konstantin; Longhini, Andrew; Dayie, T Kwaku; Fushman, David

    2013-12-01

    To facilitate rigorous analysis of molecular motions in proteins, DNA, and RNA, we present a new version of ROTDIF, a program for determining the overall rotational diffusion tensor from single- or multiple-field nuclear magnetic resonance relaxation data. We introduce four major features that expand the program's versatility and usability. The first feature is the ability to analyze, separately or together, (13)C and/or (15)N relaxation data collected at a single or multiple fields. A significant improvement in the accuracy compared to direct analysis of R2/R1 ratios, especially critical for analysis of (13)C relaxation data, is achieved by subtracting high-frequency contributions to relaxation rates. The second new feature is an improved method for computing the rotational diffusion tensor in the presence of biased errors, such as large conformational exchange contributions, that significantly enhances the accuracy of the computation. The third new feature is the integration of the domain alignment and docking module for relaxation-based structure determination of multi-domain systems. Finally, to improve accessibility to all the program features, we introduced a graphical user interface that simplifies and speeds up the analysis of the data. Written in Java, the new ROTDIF can run on virtually any computer platform. In addition, the new ROTDIF achieves an order of magnitude speedup over the previous version by implementing a more efficient deterministic minimization algorithm. We not only demonstrate the improvement in accuracy and speed of the new algorithm for synthetic and experimental (13)C and (15)N relaxation data for several proteins and nucleic acids, but also show that careful analysis required especially for characterizing RNA dynamics allowed us to uncover subtle conformational changes in RNA as a function of temperature that were opaque to previous analysis.
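
    As a point of reference for what ROTDIF automates, the isotropic-tumbling limit has a familiar back-of-envelope estimate from the 15N R2/R1 ratio (the rates below are made up, and the approximation neglects the same high-frequency contributions that the new ROTDIF subtracts explicitly):

        import math

        nu_N = 60.8e6        # 15N Larmor frequency at 600 MHz (1H), Hz
        R2, R1 = 12.0, 1.5   # made-up relaxation rates, 1/s
        # tau_c ~ sqrt(6 R2/R1 - 7) / (4 pi nu_N), isotropic approximation
        tau_c = math.sqrt(6.0 * R2 / R1 - 7.0) / (4.0 * math.pi * nu_N)
        print(f"tau_c ~ {tau_c * 1e9:.1f} ns")   # ~8.4 ns here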

  1. STAYLAM: A FORTRAN program for the suction transition analysis of a yawed wing laminar boundary layer

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1977-01-01

    A computer program called STAYLAM is presented for the computation of the compressible laminar boundary-layer flow over a yawed infinite wing including distributed suction. This program is restricted to the transonic speed range or less due to the approximate treatment of the compressibility effects. The prescribed suction distribution is permitted to change discontinuously along the chord measured perpendicular to the wing leading edge. Estimates of transition are made by considering leading edge contamination, cross flow instability, and instability of the Tollmien-Schlichting type. A program listing is given in addition to user instructions and a sample case.

  2. Computational fluid dynamics study of the variable-pitch split-blade fan concept

    NASA Technical Reports Server (NTRS)

    Kepler, C. E.; Elmquist, A. R.; Davis, R. L.

    1992-01-01

    A computational fluid dynamics study was conducted to evaluate the feasibility of the variable-pitch split-blade supersonic fan concept. This fan configuration was conceived as a means to enable a supersonic fan to switch from supersonic through-flow operation at high speeds to a conventional fan with subsonic inflow and outflow at low speeds. During this off-design, low-speed mode of operation, the fan would operate with a substantial static pressure rise across the blade row, like a conventional transonic fan; the front (variable-pitch) blade would be aligned with the incoming flow, and the aft blade would remain fixed in the position set by the supersonic design conditions. Because of these geometrical features, this low-speed configuration would inherently have a large amount of turning and, thereby, the potential for a large total pressure increase in a single stage. Such a high-turning blade configuration is prone to flow separation; it was hoped that the channeling of the flow between the blades would act like a slotted wing and help alleviate this problem. A total of 20 blade configurations representing various supersonic and transonic configurations were evaluated using a Navier-Stokes CFD program called ADAPTNS, chosen for its adaptive grid features. The flow fields generated by this computational procedure were processed by a data reduction program which calculated average flow properties and simulated fan performance. These results were employed to make quantitative comparisons and evaluations of blade performance. The supersonic split-blade configurations generated performance comparable to a single-blade supersonic through-flow fan configuration. Simulated rotor total pressure ratios of 2.5 or better were achieved for Mach 2.0 inflow conditions, with corresponding fan efficiencies of approximately 75 percent or better. The transonic split-blade configurations, with their large amounts of turning, achieved simulated total pressure ratios of 3.0 or better with subsonic inflow conditions. These configurations had large losses, fan efficiencies only in the 70-percent range, large separated regions, and low-velocity wakes. Additional turning and diffusion of this flow in a subsequent stator row would probably be very inefficient. The high total pressure ratios indicated by the rotor performance would be substantially reduced by the stators, and the stage efficiency would be substantially lower. Such performance makes this dual-mode fan concept less attractive than originally postulated.

  3. User's guide for NASCRIN: A vectorized code for calculating two-dimensional supersonic internal flow fields

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1984-01-01

    A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.

  4. First Annual High-Speed Research Workshop, part 1

    NASA Technical Reports Server (NTRS)

    Whitehead, Allen H., Jr. (Compiler)

    1992-01-01

    The workshop was presented to provide a national forum for the government, industry, and university participants in the program to present and discuss important technology issues related to the development of a commercially viable, environmentally compatible U.S. High Speed Civil Transport. The workshop sessions were organized around the major task elements in NASA's Phase 1 High Speed Research Program which basically addressed the environmental issues of atmospheric emissions, community noise, and sonic boom. This volume is divided into three sessions entitled: Plenary Session (which gives overviews from NASA, Boeing, Douglas, GE, and Pratt & Whitney on the HSCT program); Airframe Systems Studies; and Atmospheric Effects.

  5. Process Simulation of Gas Metal Arc Welding Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, Paul E.

    2005-09-06

    ARCWELDER is a Windows-based application that simulates gas metal arc welding (GMAW) of steel and aluminum. The software simulates the welding process in an accurate and efficient manner, provides menu items for process parameter selection, and includes a graphical user interface with the option to animate the process. The user enters the base and electrode material, open circuit voltage, wire diameter, wire feed speed, welding speed, and standoff distance. The program computes the size and shape of a square-groove or V-groove weld in the flat position. The program also computes the current, arc voltage, arc length, electrode extension, transfer of droplets, heat input, filler metal deposition, base metal dilution, and centerline cooling rate, in English or SI units. The simulation may be used to select welding parameters that lead to desired operating conditions.
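
    One of the reported quantities, heat input per unit length of weld, follows a standard arc-welding relation (the numbers and the 0.8 arc efficiency below are assumed typical values, not the program's defaults):

        voltage = 24.0          # arc voltage, V
        current = 250.0         # welding current, A
        travel_speed = 380.0    # welding (travel) speed, mm/min
        efficiency = 0.8        # fraction of arc power entering the work

        # kJ/mm = (V * A) W * 60 s/min / (1000 J/kJ * mm/min)
        heat_input = (efficiency * voltage * current * 60.0
                      / (1000.0 * travel_speed))
        print(f"Heat input: {heat_input:.2f} kJ/mm")   # 0.76 kJ/mm here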

  6. Common data buffer

    NASA Technical Reports Server (NTRS)

    Byrne, F.

    1981-01-01

    Time-shared interface speeds data processing in distributed computer network. Two-level high-speed scanning approach routes information to buffer, portion of which is reserved for series of "first-in, first-out" memory stacks. Buffer address structure and memory are protected from noise or failed components by error correcting code. System is applicable to any computer or processing language.

  7. 2007 Expeditionary Warfare Conference (12th)

    DTIC Science & Technology

    2007-10-25

    Fragmentary briefing-slide snippets (garbled in extraction): Joint High Speed Vessel (JHSV) today; program capability: a high-speed lift ship capable of transporting cargo and personnel across ... develop technologies that will improve the capability to transfer cargo between Sea Base platforms and provide for high-speed/heavy lift ... state actors for legitimacy and influence over the relevant population ... Joint High Speed Vessel; in-service amphibs; LCAC and ship-to-shore.

  8. High Speed Research Program Sonic Fatigue

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A. (Technical Monitor); Beier, Theodor H.; Heaton, Paul

    2005-01-01

    The objective of this sonic fatigue summary is to provide major findings and technical results of studies, initiated in 1994, to assess sonic fatigue behavior of structure that is being considered for the High Speed Civil Transport (HSCT). High Speed Research (HSR) program objectives in the area of sonic fatigue were to predict inlet, exhaust and boundary layer acoustic loads; measure high cycle fatigue data for materials developed during the HSR program; develop advanced sonic fatigue calculation methods to reduce required conservatism in airframe designs; develop damping techniques for sonic fatigue reduction where weight effective; develop wing and fuselage sonic fatigue design requirements; and perform sonic fatigue analyses on HSCT structural concepts to provide guidance to design teams. All goals were partially achieved, but none were completed due to the premature conclusion of the HSR program. A summary of major program findings and recommendations for continued effort are included in the report.

  9. Enhanced algorithms for stochastic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Alamuru S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting point for the stochastic solution and to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
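
    One way to read the approximation-function idea is as a control variate (the functions and distribution below are invented): compute the mean of the cheap piecewise-linear surrogate with heavy sampling, and estimate only the small, low-variance correction against the expensive function.

        import numpy as np

        rng = np.random.default_rng(1)

        def f(x):   # stand-in for an expensive recourse function
            return np.maximum(x - 1.0, 0.0) ** 1.5 + 0.1 * np.sin(x)

        def g(x):   # cheap piecewise-linear approximation of f
            return 1.2 * np.maximum(x - 1.0, 0.0)

        Eg = g(rng.normal(size=2_000_000)).mean()  # cheap: sample heavily
        xs = rng.normal(size=50_000)               # few "expensive" evals
        estimate = Eg + (f(xs) - g(xs)).mean()     # mean of g + correction
        print(estimate)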

  10. Traffic routing in a switched regenerative satellite. Volume 1, task 3: Traffic assignment

    NASA Astrophysics Data System (ADS)

    1982-12-01

    Time plan assignment in a multibeam SS-TDMA system is discussed. System features fixed by the designer, such as the number and speed of ground terminals installed in each station and the number and speed of satellite transponders serving each spot, are described. Linkage among terminals and transponders is also discussed, including the case of more than one transponder linked to one terminal. A procedure is proposed to achieve a switching plan with high efficiency, taking into account all system constraints, such as no breaking of bursts and harmonization of the two transmission rates. The algorithms implemented are: the Hungarian method; branch and bound; the INSERT heuristic; and the HOLE heuristic. Computer programs were developed, and a time plan for a European satellite system is produced.
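
    To show where the Hungarian method fits (traffic numbers invented; SciPy's assignment solver used as a stand-in), one switching-mode selection can be cast as an assignment problem: pair uplink and downlink beams so that a single transponder switch state carries as much traffic as possible.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        traffic = np.array([[7, 2, 0, 4],    # demand (time slots) from
                            [1, 9, 3, 0],    # uplink beam i to downlink
                            [0, 3, 8, 2],    # beam j
                            [5, 0, 2, 6]])
        up, down = linear_sum_assignment(-traffic)  # Hungarian, maximizing
        print(list(zip(up, down)), int(traffic[up, down].sum()))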

  11. LittleQuickWarp: an ultrafast image warping tool.

    PubMed

    Qu, Lei; Peng, Hanchuan

    2015-02-01

    Warping images into a standard coordinate space is critical for many image computing related tasks. However, for multi-dimensional and high-resolution images, an accurate warping operation itself is often very expensive in terms of computer memory and computational time. For high-throughput image analysis studies such as brain mapping projects, it is desirable to have high performance image warping tools that are compatible with common image analysis pipelines. In this article, we present LittleQuickWarp, a swift and memory efficient tool that boosts 3D image warping performance dramatically and at the same time has high warping quality similar to the widely used thin plate spline (TPS) warping. Compared to the TPS, LittleQuickWarp can improve the warping speed 2-5 times and reduce the memory consumption 6-20 times. We have implemented LittleQuickWarp as an Open Source plug-in program on top of the Vaa3D system (http://vaa3d.org). The source code and a brief tutorial can be found in the Vaa3D plugin source code repository.

  12. An energy efficient and high speed architecture for convolution computing based on binary resistive random access memory

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image-storage architecture offers better speed and device-consumption efficiency than the previous kernel-storage architecture. We further improve the architecture for high-accuracy and low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel-storage approach, the newly proposed architecture shows excellent performance, including: (1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; (2) a speed boost of more than 67 times; and (3) 71.4% energy saving.
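
    Functionally, the computation maps onto a crossbar as below (a NumPy sketch with made-up binary values, following the 28 x 28 image / ten 3 x 3 kernel example): kernels become columns of conductances, image patches become input vectors, and every kernel's dot product is produced in one array operation.

        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.integers(0, 2, (28, 28))       # binary-stored image
        kernels = rng.integers(0, 2, (10, 3, 3))   # ten binary 3x3 kernels

        G = kernels.reshape(10, 9).T               # 9 x 10 "conductances"
        patches = np.lib.stride_tricks.sliding_window_view(image, (3, 3))
        V = patches.reshape(-1, 9)                 # one row per 3x3 patch
        feature_maps = (V @ G).reshape(26, 26, 10) # all 10 maps at once
        print(feature_maps.shape)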

  13. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalizing infrared imaging systems, and the design of the phase plate is key to system performance. This paper applies ZEMAX externally compiled programs to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined on the basis of the consistency of the modulation transfer function (MTF), and the optimization is accelerated by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the powerful computing features of the mathematical software to find the optimal parameters of the phase mask; convergence is accelerated through a genetic algorithm (GA), and a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. The optimization of a rotationally symmetric phase mask and of a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times by inserting the rotationally symmetric phase mask, while with the cubic phase mask it can be increased to 10 times; the variation of the MTF decreases markedly; and the optimized systems operate over a temperature range of -40° to 60°. Results show that, owing to its externally compiled functions and DDE, this optimization method makes it more convenient to define unconventional optimization goals and to rapidly optimize optical systems with special properties, and it holds particular significance for the optimization of unconventional optical systems.
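
    A rough numerical sketch of the kind of MTF-consistency evaluation function described above, assuming a cubic phase mask and using a simple parameter scan in place of the genetic algorithm and the ZEMAX/DDE link (all values and names are illustrative, not the paper's):

      import numpy as np

      N = 128
      x = np.linspace(-1, 1, N)
      X, Y = np.meshgrid(x, x)
      pupil = (X**2 + Y**2 <= 1.0).astype(float)

      def mtf(alpha, defocus):
          # Cubic phase mask alpha*(x^3 + y^3) plus a defocus aberration term
          # standing in for the temperature-induced focus shift.
          phase = alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)
          psf = np.abs(np.fft.fft2(pupil * np.exp(1j * 2 * np.pi * phase))) ** 2
          otf = np.abs(np.fft.fft2(psf))
          return otf / otf.max()

      def merit(alpha, defocus_range=(-3.0, 0.0, 3.0)):
          # Evaluation function: penalize MTF inconsistency across defocus.
          curves = np.array([mtf(alpha, d) for d in defocus_range])
          return curves.std(axis=0).mean()

      best = min(np.linspace(0.0, 20.0, 41), key=merit)   # scan stands in for the GA
      print("best cubic mask strength:", best)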

  14. Debris/ice/TPS assessment and photographic analysis for Shuttle Mission STS-33R

    NASA Technical Reports Server (NTRS)

    Stevenson, Charles G.; Katnik, Gregory N.; Higginbotham, Scott A.

    1989-01-01

    A debris/ice/Thermal Protection System (TPS) assessment and photographic analysis was conducted for Space Shuttle Mission STS-33R. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/frost conditions on the external tank are assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography is analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/TPS conditions and photographic analysis of Mission STS-33R, and their overall effect on the Space Shuttle Program.

  15. Debris/ice/TPS assessment and photographic analysis for shuttle mission STS-31R

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1990-01-01

    A Debris/Ice/Thermal Protection System (TPS) assessment and photographic analysis was conducted for Space Shuttle Mission STS-31R. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/frost conditions on the External Tank are assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography is analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions and photographic analysis of Mission STS-31R are presented, along with their overall effect on the Space Shuttle Program.

  16. Debris/ice/tps Assessment and Integrated Photographic Analysis of Shuttle Mission STS-81

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Lin, Jill D.

    1997-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-81. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-81 and the resulting effect on the Space Shuttle Program.

  17. Debris/ice/tps Assessment and Integrated Photographic Analysis of Shuttle Mission STS-83

    NASA Technical Reports Server (NTRS)

    Lin, Jill D.; Katnik, Gregory N.

    1997-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-83. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-83 and the resulting effect on the Space Shuttle Program.

  18. Debris/ice/TPS assessment and integrated photographic analysis of Shuttle Mission STS-71

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Davis, J. Bradley

    1995-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-71. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanner data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-71 and the resulting effect on the Space Shuttle Program.

  19. Debris/Ice/TPS Assessment and Integrated Photographic Analysis of Shuttle Mission STS-102

    NASA Technical Reports Server (NTRS)

    Rivera, Jorge E.; Kelly, J. David (Technical Monitor)

    2001-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-102. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/thermal protection system conditions and integrated photographic analysis of Space Shuttle mission STS-102 and the resulting effect on the Space Shuttle Program.

  20. Debris/Ice/TPS Assessment and Integrated Photographic Analysis of Shuttle Mission STS-94

    NASA Technical Reports Server (NTRS)

    Bowen, Barry C.; Lin, Jill D.

    1997-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-94. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-94 and the resulting effect on the Space Shuttle Program.

  1. Debris/ice/tps Assessment and Integrated Photographic Analysis of Shuttle Mission STS-79

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Lin, Jill D.

    1996-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-79. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-79 and the resulting effect on the Space Shuttle Program.

  2. Debris/ice/TPS assessment and integrated photographic analysis of Shuttle mission STS-73

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Lin, Jill D.

    1995-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-73. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanner data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle Mission STS-73 and the resulting effect on the Space Shuttle Program.

  3. Debris/Ice/TPS Assessment and Photographic Analysis for Shuttle Mission STS-38

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott A.; Davis, J. Bradley

    1991-01-01

    A debris/ice/TPS assessment and photographic analysis was conducted for the Space Shuttle Mission STS-38. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the external tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions and photographic analysis of Mission STS-38, and their overall effect on the Space Shuttle Program are documented.

  4. Debris/ice/TPS assessment and integrated photographic analysis for Shuttle Mission STS-50

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott A.; Davis, J. Bradley; Katnik, Gregory N.

    1992-01-01

    A debris/ice/Thermal Protection System (TPS) assessment and integrated photographic analysis was conducted for Shuttle Mission STS-50. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the external tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions and integrated photographic analysis of Shuttle Mission STS-50, and the resulting effect on the Space Shuttle Program are documented.

  5. Debris/Ice/TPS Assessment and Integrated Photographic Analysis for Shuttle Mission STS-49

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1992-01-01

    A debris/ice/Thermal Protection System (TPS) assessment and integrated photographic analysis was conducted for Shuttle Mission STS-49. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. Debris/ice/TPS conditions and integrated photographic analysis of Shuttle Mission STS-49, and the resulting effect on the Space Shuttle Program are discussed.

  6. Debris/ice/TPS assessment and photographic analysis of shuttle mission STS-48

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott A.; Davis, J. Bradley

    1991-01-01

    A Debris/Ice/TPS assessment and photographic analysis was conducted for Space Shuttle Mission STS-48. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions and photographic analysis of Mission STS-48 are documented, along with their overall effect on the Space Shuttle Program.

  7. Debris/Ice/TPS Assessment and Photographic Analysis for Shuttle Mission STS-37

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1991-01-01

    A Debris/Ice/TPS assessment and photographic analysis was conducted for Space Shuttle Mission STS-37. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions and photographic analysis of Mission STS-37 are documented, along with their overall effect on the Space Shuttle Program.

  8. Debris/Ice/TPS assessment and integrated photographic analysis of Shuttle Mission STS-77

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Lin, Jill D. (Compiler)

    1996-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-77. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-77 and the resulting effect on the Space Shuttle Program.

  9. Debris/ice/TPS assessment and integrated photographic analysis of Shuttle Mission STS-70

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Davis, J. Bradley

    1995-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-70. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanner data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-70 and the resulting effect on the Space Shuttle Program.

  10. Debris/ice/TPS assessment and integrated photographic analysis for Shuttle Mission STS-51

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Davis, J. Bradley

    1993-01-01

    A debris/ice/thermal protection system (TPS) assessment and integrated photographic analysis was conducted for shuttle mission STS-51. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the external tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/TPS conditions and integrated photographic analysis of Shuttle mission STS-51 and the resulting effect on the Space Shuttle Program.

  11. Debris/ice/TPS assessment and integrated photographic analysis for Shuttle Mission STS-55

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Davis, J. Bradley

    1993-01-01

    A Debris/Ice/TPS assessment and integrated photographic analysis was conducted for Shuttle mission STS-55. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/Frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/TPS conditions and integrated photographic analysis of Shuttle mission STS-55, and the resulting effect on the Space Shuttle Program.

  12. Debris/ice/TPS assessment and photographic analysis for Shuttle Mission STS-36

    NASA Technical Reports Server (NTRS)

    Stevenson, Charles G.; Katnik, Gregory N.; Higginbotham, Scott A.

    1990-01-01

    A Debris/Ice/TPS (Thermal Protection System) assessment and photographic analysis was conducted for Space Shuttle Mission STS-36. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/frost conditions on the External Tank are assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography is analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions and photographic analysis of Mission STS-36, and their overall effect on the Space Shuttle Program are documented.

  13. Debris/ice/TPS assessment and integrated photographic analysis of Shuttle mission STS-69

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Davis, J. Bradley

    1995-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-69. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanner data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle Mission STS-69 and the resulting effect on the Space Shuttle Program.

  14. Debris/ice/TPS assessment and photographic analysis for Shuttle Mission STS-42

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1992-01-01

    A Debris/Ice/TPS (Thermal Protection System) assessment and photographic analysis was conducted for Shuttle Mission STS-42. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions are documented along with the photographic analysis of Mission STS-42, and their overall effect on the Space Shuttle Program.

  15. Debris/ice/TPS assessment and integrated photographic analysis for Shuttle Mission STS-52

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1992-01-01

    A debris/ice/Thermal Protection System (TPS) assessment and integrated photographic analysis was conducted for Shuttle Mission STS-52. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the external tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/TPS conditions and integrated photographic analysis of Shuttle Mission STS-52, and the resulting effect on the Space Shuttle Program.

  16. Debris/Ice/TPS Assessment and Integrated Photographic Analysis of Shuttle Mission STS-106

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Kelley, J. David (Technical Monitor)

    2000-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-106. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Space Shuttle mission STS-106 and the resulting effect on the Space Shuttle Program.

  17. Debris/ice/TPS assessment and photographic analysis for Shuttle Mission STS-34

    NASA Technical Reports Server (NTRS)

    Stevenson, Charles G.; Katnik, Gregory N.; Higginbotham, Scott A.

    1989-01-01

    A Debris/Ice/Thermal Protection System (TPS) assessment and photographic analysis was conducted for Space Shuttle Mission STS-34. Debris inspections of the flight elements and launch pad are performed before and after launch. Ice/frost conditions on the External Tank are assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography is analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The debris/ice/TPS conditions and photographic analysis of Mission STS-34, and their overall effect on the Space Shuttle Program are documented.

  18. Debris/Ice/TPS Assessment and Photographic Analysis for Shuttle Mission STS-41

    NASA Technical Reports Server (NTRS)

    Higginbotham, Scott A.; Davis, J. Bradley

    1990-01-01

    A Debris/Ice/Thermal Protection System (TPS) assessment and photographic analysis was conducted for Space Shuttle Mission STS-41. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. Documented here are the debris/ice/TPS conditions and photographic analysis of Mission STS-41, and their overall effect on the Space Shuttle Program.

  19. Debris/Ice/TPS assessment and integrated photographic analysis of shuttle mission STS-76

    NASA Technical Reports Server (NTRS)

    Lin, Jill D.

    1996-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-76. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-76 and the resulting effect on the Space Shuttle Program.

  20. Debris/ice/TPS assessment and integrated photographic analysis of Shuttle Mission STS-53

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1993-01-01

    A Debris/Ice/TPS assessment and integrated photographic analysis was conducted for Shuttle Mission STS-53. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/Frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/TPS conditions and integrated photographic analysis of Shuttle Mission STS-53, and the resulting effect on the Space Shuttle Program.

  1. Debris/ice/TPS assessment and integrated photographic analysis for Shuttle Mission STS-54

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1993-01-01

    A Debris/Ice/TPS assessment and integrated photographic analysis was conducted for Shuttle Mission STS-54. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/TPS conditions and integrated photographic analysis of Shuttle Mission STS-54, and the resulting effect on the Space Shuttle Program.

  2. Debris/Ice/TPS assessment and integrated photographic analysis for Shuttle Mission STS-61

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Davis, J. Bradley

    1994-01-01

    A debris/ice/thermal protection system (TPS) assessment and integrated photographic analysis was conducted for shuttle mission STS-61. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the external tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/TPS conditions and integrated photographic analysis of shuttle mission STS-61, and the resulting effect on the space shuttle program.

  3. Debris/Ice/TPS Assessment and Integrated Photographic Analysis of Shuttle Mission STS-72

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Bowen, Barry C.; Lin, Jill D.

    1996-01-01

    A debris/ice/thermal protection system assessment and integrated photographic analysis was conducted for Shuttle mission STS-72. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs and infrared scanned data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the ice/debris/thermal protection system conditions and integrated photographic analysis of Shuttle mission STS-72 and the resulting effect on the Space Shuttle Program.

  4. Debris/ice/TPS assessment and integrated photographic analysis for Shuttle mission STS-58

    NASA Technical Reports Server (NTRS)

    Davis, J. Bradley; Rivera, Jorge E.; Katnik, Gregory N.; Bowen, Barry C.; Speece, Robert F.; Rosado, Pedro J.

    1994-01-01

    A debris/ice/thermal protection system (TPS) assessment and integrated photographic analysis was conducted for Shuttle mission STS-58. Debris inspections of the flight elements and launch pad were performed before and after launch. Icing conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography of the launch was analyzed to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. The ice/debris/TPS conditions and integrated photographic analysis of Shuttle mission STS-58, and the resulting effect on the Space Shuttle Program are documented.

  5. Debris/ice/TPS assessment and integrated photographic analysis for Shuttle mission STS-47

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1992-01-01

    A debris/ice/TPS assessment and integrated photographic analysis was conducted for Shuttle Mission STS-47. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and evaluate potential vehicle damage and/or in-flight anomalies. This report documents the debris/ice/TPS conditions and integrated photographic analysis of Shuttle Mission STS-47, and the resulting effect on the Space Shuttle Program.

  6. VORCOR: A computer program for calculating characteristics of wings with edge vortex separation by using a vortex-filament and-core model

    NASA Technical Reports Server (NTRS)

    Pao, J. L.; Mehrotra, S. C.; Lan, C. E.

    1982-01-01

    A computer code based on an improved vortex filament/vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separations is developed. The code is applicable to cambered wings, straked wings, or wings with leading-edge vortex flaps at subsonic speeds. The prediction of lifting pressure distribution and the computer time are improved by using a pair of concentrated vortex cores above the wing surface. The main features of this computer program are: (1) arbitrary camber shape may be defined, and an option for exactly defining leading-edge flap geometry is also provided; and (2) the side-edge vortex system is incorporated.
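
    The building block of such a vortex-filament model is the Biot-Savart induced velocity of a straight filament segment; a minimal sketch with a finite-core cutoff to suppress the singularity (our notation and cutoff convention, not VORCOR's):

      import numpy as np

      def filament_velocity(p, a, b, gamma, core=1e-3):
          # Biot-Savart velocity induced at point p by a straight vortex
          # filament from a to b with circulation gamma (Katz & Plotkin form);
          # the finite core suppresses the singularity on the filament itself.
          r1, r2 = p - a, p - b
          cross = np.cross(r1, r2)
          d2 = cross @ cross
          if d2 < core ** 2:
              return np.zeros(3)
          n1, n2 = np.linalg.norm(r1), np.linalg.norm(r2)
          return gamma / (4.0 * np.pi) * cross / d2 * ((r1 - r2) @ (r1 / n1 - r2 / n2))

      # Example: velocity induced at a point above a unit filament segment.
      print(filament_velocity(np.array([0.5, 0.0, 1.0]),
                              np.array([0.0, 0.0, 0.0]),
                              np.array([1.0, 0.0, 0.0]), gamma=1.0))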

  7. High-Speed Photonic Reservoir Computing Using a Time-Delay-Based Architecture: Million Words per Second Classification

    NASA Astrophysics Data System (ADS)

    Larger, Laurent; Baylón-Fuentes, Antonio; Martinenghi, Romain; Udaltsov, Vladimir S.; Chembo, Yanne K.; Jacquot, Maxime

    2017-01-01

    Reservoir computing, originally referred to as an echo state network or a liquid state machine, is a brain-inspired paradigm for processing temporal information. It involves learning a "read-out" interpretation for the nonlinear transients developed by a high-dimensional dynamical system when the latter is excited by the information signal to be processed. This computational paradigm is derived from recurrent neural network and machine learning techniques. It has recently been implemented in photonic hardware, which opens the path to ultrafast brain-inspired computing. We report on a novel implementation involving an electro-optic phase-delay dynamics designed with off-the-shelf optoelectronic telecom devices, thus providing the targeted wide bandwidth. Computational efficiency is demonstrated experimentally with speech-recognition tasks. State-of-the-art speed performance reaches one million words per second, with very low word error rate. In addition to record-speed processing, our investigations have revealed computing-efficiency improvements through yet-unexplored temporal-information-processing techniques, such as simultaneous multisample injection and pitched sampling at the read-out compared to information "write-in".
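
    A toy software sketch of the read-out training principle (the electro-optic phase-delay hardware is of course not modeled): a fixed random dynamical system is driven by the input, and only a linear read-out is learned, here by ridge regression on a hypothetical delay-recall task.

      import numpy as np

      rng = np.random.default_rng(1)

      def reservoir_states(u, n=200, leak=0.3):
          # Fixed random "reservoir": only the read-out below is trained.
          Win = rng.uniform(-1, 1, n)
          W = rng.normal(0, 1, (n, n)) * 0.05
          x, X = np.zeros(n), []
          for ut in u:
              x = (1 - leak) * x + leak * np.tanh(Win * ut + W @ x)
              X.append(x.copy())
          return np.array(X)

      u = rng.uniform(-1, 1, 1000)
      y = np.roll(u, 2)                     # toy task: recall the input 2 steps back
      X = reservoir_states(u)
      ridge = 1e-6
      Wout = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
      print("train MSE:", np.mean((X @ Wout - y) ** 2))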

  8. A cooperative program to stimulate student involvement through the MIT Undergraduate Research Opportunity Program

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Flow characteristics in the low speed Wright Brothers Wind Tunnel were studied. Calculations to check the precision of the tunnel were conducted. A program for generating computational grids around an airfoil was developed and compared with the wind tunnel model. The low Reynolds number flow phenomenon of periodic vortex shedding in a wake was also studied by applying a hot-wire anemometer.

  9. Efficient Computation of Separation-Compliant Speed Advisories for Air Traffic Arriving in Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.

    2012-01-01

    A class of problems in air traffic management asks for a scheduling algorithm that supplies the air traffic services authority not only with a schedule of arrivals and departures, but also with speed advisories. Since advisories must be finite, a scheduling algorithm must ultimately produce a finite data set, hence must either start with a purely discrete model or involve a discretization of a continuous one. The former choice, often preferred for intuitive clarity, naturally leads to mixed-integer programs, hindering proofs of correctness and computational cost bounds (crucial for real-time operations). In this paper, a hybrid control system is used to model air traffic scheduling, capturing both the discrete and continuous aspects. This framework is applied to a class of problems called the Fully Routed Nominal Problem. We prove a number of geometric results on feasible schedules and use these results to formulate an algorithm that attempts to compute a collective speed advisory, is effectively finite, and has computational cost polynomial in the number of aircraft. This work is a first step toward optimization and models refined with more realistic detail.

  10. Supercomputer implementation of finite element algorithms for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.

    1986-01-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed, and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure to predicting 2D viscous and 3D inviscid flows is demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes.
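
    As a brief sketch of the Taylor-Galerkin construction (our notation, for a one-dimensional conservation law; the paper's multidimensional forms follow the same pattern), a second-order Taylor expansion in time is written first, and the time derivatives are exchanged for space derivatives through the governing equation before the Galerkin weighted residual statement is applied:

      u^{n+1} = u^{n}
              + \Delta t \left.\frac{\partial u}{\partial t}\right|^{n}
              + \frac{\Delta t^{2}}{2} \left.\frac{\partial^{2} u}{\partial t^{2}}\right|^{n},
      \qquad
      \frac{\partial u}{\partial t} = -\frac{\partial F}{\partial x},
      \qquad
      \frac{\partial^{2} u}{\partial t^{2}} = \frac{\partial}{\partial x}\left( A\,\frac{\partial F}{\partial x} \right),
      \quad A = \frac{\partial F}{\partial u}.

    The substitution for the second time derivative is what introduces the stabilizing, dissipation-like term once the Galerkin weighting is applied.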

  11. Design optimization of high-speed proprotor aircraft

    NASA Technical Reports Server (NTRS)

    Schleicher, David R.; Phillips, James D.; Carbajal, Kevin B.

    1993-01-01

    NASA's high-speed rotorcraft (HSRC) studies have the objective of investigating technology for vehicles that have both low downwash velocities and forward flight speed capability of up to 450 knots. This paper investigates a tilt rotor, a tilt wing, and a folding tilt rotor designed for a civil transport mission. Baseline aircraft models using current technology are developed for each configuration using a vertical/short takeoff and landing (V/STOL) aircraft design synthesis computer program to generate converged vehicle designs. Sensitivity studies and numerical optimization are used to illustrate each configuration's key design tradeoffs and constraints. Minimization of the gross takeoff weight is used as the optimization objective function. Several advanced technologies are chosen, and their relative impact on future configurational development is discussed. Finally, the impact of maximum cruise speed on vehicle figures of merit (gross weight, productivity, and direct operating cost) is analyzed. The three most important conclusions from the study are payload ratios for these aircraft will be commensurate with current fixed-wing commuter aircraft; future tilt rotors and tilt wings will be significantly lighter, more productive, and cheaper than competing folding tilt rotors; and the most promising technologies are an advanced-technology proprotor for both tilt rotor and tilt wing and advanced structural materials for the folding tilt rotor.

  12. Conjugate gradient optimization programs for shuttle reentry

    NASA Technical Reports Server (NTRS)

    Powers, W. F.; Jacobson, R. A.; Leonard, D. A.

    1972-01-01

    Two computer programs for shuttle reentry trajectory optimization are listed and described. Both programs use the conjugate gradient method as the optimization procedure. The Phase 1 Program is developed in Cartesian coordinates for a rotating spherical earth, and crossrange, downrange, maximum deceleration, total heating, and terminal speed, altitude, and flight path angle are included in the performance index. The programs make extensive use of subroutines so that they may be easily adapted to other atmospheric trajectory optimization problems.
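
    A compact sketch of the conjugate gradient procedure these programs employ, in the Fletcher-Reeves form with a simple backtracking line search; it is a generic sketch run on a toy quadratic, not the shuttle performance index:

      import numpy as np

      def conjugate_gradient(f, grad, x0, iters=100, tol=1e-8):
          # Fletcher-Reeves nonlinear conjugate gradient with Armijo backtracking.
          x, g = x0.astype(float), grad(x0)
          d = -g
          for _ in range(iters):
              if g @ d >= 0:          # safeguard: restart on a non-descent direction
                  d = -g
              t, fx = 1.0, f(x)
              while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
                  t *= 0.5            # backtracking line search
              x = x + t * d
              g_new = grad(x)
              if np.linalg.norm(g_new) < tol:
                  break
              d = -g_new + (g_new @ g_new) / (g @ g) * d   # Fletcher-Reeves beta
              g = g_new
          return x

      # Toy quadratic objective (stand-in for the weighted reentry cost terms).
      H = np.diag([1.0, 10.0])
      print(conjugate_gradient(lambda x: x @ H @ x, lambda x: 2 * H @ x,
                               np.array([3.0, 2.0])))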

  13. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to complete vibration measurements in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at as many as 1000 frames per second. To process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
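
    A minimal sketch of NCC template matching with the kind of local search the paper describes: scoring only a small window around the previous match instead of the whole frame (subpixel refinement and the camera interface are omitted; function names are ours):

      import numpy as np

      def ncc(patch, template):
          # Zero-normalized cross-correlation of two equal-size arrays.
          p = patch - patch.mean()
          t = template - template.mean()
          denom = np.sqrt((p * p).sum() * (t * t).sum())
          return (p * t).sum() / denom if denom > 0 else 0.0

      def track(frame, template, prev_xy, radius=8):
          # Local search: evaluate NCC only near the previous match location.
          h, w = template.shape
          x0, y0 = prev_xy
          best, best_xy = -2.0, prev_xy
          for y in range(max(0, y0 - radius), min(frame.shape[0] - h, y0 + radius) + 1):
              for x in range(max(0, x0 - radius), min(frame.shape[1] - w, x0 + radius) + 1):
                  score = ncc(frame[y:y+h, x:x+w], template)
                  if score > best:
                      best, best_xy = score, (x, y)
          return best_xy, best

      # Usage: template = frame0[y:y+h, x:x+w]; track(frame1, template, (x, y))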

  14. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses; thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.

  15. Logic programming to predict cell fate patterns and retrodict genotypes in organogenesis.

    PubMed

    Hall, Benjamin A; Jackson, Ethan; Hajnal, Alex; Fisher, Jasmin

    2014-09-06

    Caenorhabditis elegans vulval development is a paradigm system for understanding cell differentiation in the process of organogenesis. Through temporal and spatial controls, the fate pattern of six cells is determined by the competition of the LET-23 and the Notch signalling pathways. Modelling cell fate determination in vulval development using state-based models, coupled with formal analysis techniques, has been established as a powerful approach in predicting the outcome of combinations of mutations. However, computing the outcomes of complex and highly concurrent models can become prohibitive. Here, we show how logic programs derived from state machines describing the differentiation of C. elegans vulval precursor cells can increase the speed of prediction by four orders of magnitude relative to previous approaches. Moreover, this increase in speed allows us to infer, or 'retrodict', compatible genomes from cell fate patterns. We exploit this technique to predict highly variable cell fate patterns resulting from dig-1 reduced-function mutations and let-23 mosaics. In addition to the new insights offered, we propose our technique as a platform for aiding the design and analysis of experimental data. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  16. Supersonic reacting internal flowfields

    NASA Astrophysics Data System (ADS)

    Drummond, J. P.

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flowfields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  17. Supersonic reacting internal flow fields

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1989-01-01

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems, are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion are then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  18. The Use of a Computer-Based Reading Rate Development Program on Pre-University Intermediate Level ESL Learners' Reading Speeds

    ERIC Educational Resources Information Center

    Haupt, John

    2015-01-01

    Improving L2 learners' reading fluency has been identified by leading L2 reading researchers as an important aspect of L2 reading instruction (Grabe, 2004, 2009; Nation, 2009). A number of studies have been conducted on the use of paper-based fluency development methods on ESL and EFL students' reading speeds and showed positive results (Al-Homoud…

  19. High Speed White Dwarf Asteroseismology with the Herty Hall Cluster

    NASA Astrophysics Data System (ADS)

    Gray, Aaron; Kim, A.

    2012-01-01

    Asteroseismology is the process of using observed oscillations of stars to infer their interior structure. In high speed asteroseismology, we accomplish this by quickly computing hundreds of thousands of models to match the observed period spectra. Each model takes five to ten seconds to run on a single processor. Therefore, we use a cluster of sixteen Dell workstations with dual-core processors. The computers use the Ubuntu operating system and Apache Hadoop software to manage workloads.
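
    The workload is a large grid of independent model runs, which maps naturally onto a pool of workers; a hypothetical single-machine sketch of the same pattern that Hadoop distributes across the cluster (the model function and grid below are invented stand-ins):

      from multiprocessing import Pool

      def run_model(params):
          # Hypothetical stand-in for a 5-10 second pulsation model run that
          # returns a misfit between computed and observed period spectra.
          mass, teff = params
          return (mass - 0.60) ** 2 + ((teff - 11500.0) / 1000.0) ** 2

      grid = [(m, t) for m in (0.55, 0.60, 0.65)
                     for t in range(11000, 12001, 100)]

      if __name__ == "__main__":
          with Pool() as pool:                 # one model per worker core
              scores = pool.map(run_model, grid)
          print(min(zip(scores, grid)))        # best-fitting model in the grid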

  20. Options for Accelerating Economic Recovery after Nuclear Attack. Volume 3

    DTIC Science & Technology

    1979-07-01

    speed of data processing. It really ought to be possible to program computers with likely locations of needs, and then locations of able-bodied people...that a number of existing programs and institutions were implemented when public concerns over the risk of nuclear war were considerably higher...natural disasters are funded as programs if such programs would also be appropriate to the post-nuclear attack situation. This logic has a compelling

  1. Lubrication of optimized-design tapered-roller bearings to 2.4 million DN

    NASA Technical Reports Server (NTRS)

    Parker, R. J.; Pinel, S. I.; Signer, Hans R.

    1980-01-01

    The performance of 120.65 mm (4.75 in.) bore, high-speed design tapered roller bearings was investigated at shaft speeds to 20,000 rpm (2.4 million DN) under combined thrust and radial load. The test bearing design was computer optimized for high speed operation. Temperature distribution and bearing heat generation were determined as functions of shaft speed, radial and thrust loads, lubricant flow rate, and lubricant inlet temperature. The high-speed design tapered roller bearing operated successfully at shaft speeds up to 20,000 rpm under heavy thrust and radial loads. Bearing temperatures and heat generation with the high-speed design bearing were significantly less than those of a modified standard bearing tested previously. Cup cooling was effective in decreasing the high cup temperatures to levels equal to the cone temperature.

  2. Debris/ice/TPS assessment and photographic analysis for Shuttle Mission STS-43

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, James Bradley

    1991-01-01

    A debris/ice/Thermal Protection System (TPS) assessment and photographic analysis was conducted for Space Shuttle Mission STS-43. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice/frost conditions on the External Tank (ET) were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice/debris sources and to evaluate potential vehicle damage and/or in-flight anomalies.

  3. Debris/Ice/TPS Assessment and Photographic Analysis for Shuttle Mission STS-40

    NASA Technical Reports Server (NTRS)

    Katnik, Gregory N.; Higginbotham, Scott A.; Davis, J. Bradley

    1991-01-01

    A debris/ice/Thermal Protection System (TPS) assessment and photographic analysis for Space Shuttle Mission STS-40 was conducted. Debris inspections of the flight elements and launch pad were performed before and after launch. Ice and frost conditions on the External Tank were assessed by the use of computer programs, nomographs, and infrared scanner data during cryogenic loading of the vehicle, followed by on-pad visual inspection. High speed photography was analyzed after launch to identify ice and debris sources and to evaluate potential vehicle damage and/or in-flight anomalies.

  4. The F-18 systems research aircraft facility

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.

    1992-01-01

    To help ensure that new aerospace initiatives rapidly transition to competitive U.S. technologies, NASA Dryden Flight Research Facility has dedicated a systems research aircraft facility. The primary goal is to accelerate the transition of new aerospace technologies to commercial, military, and space vehicles. Key technologies include more-electric aircraft concepts, fly-by-light systems, flush airdata systems, and advanced computer architectures. Future aircraft that will benefit are the high-speed civil transport and the National AeroSpace Plane. This paper describes the systems research aircraft flight research vehicle and outlines near-term programs.

  5. Design and Test of Fan/Nacelle Models Quiet High-Speed Fan

    NASA Technical Reports Server (NTRS)

    Miller, Christopher J. (Technical Monitor); Weir, Donald

    2003-01-01

    The Quiet High-Speed Fan program is a cooperative effort between Honeywell Engines & Systems (formerly AlliedSignal Engines & Systems) and the NASA Glenn Research Center. Engines & Systems has designed an advanced high-speed fan that will be tested on the Ultra High Bypass Propulsion Simulator in the NASA Glenn 9 x 15 foot wind tunnel, currently scheduled for the second quarter of 2000. An Engines & Systems modern fan design will be used as a baseline. A nacelle model is provided that is characteristic of a typical, modern regional aircraft nacelle and meets all of the program test objectives.

  6. High-Speed Digital Scan Converter for High-Frequency Ultrasound Sector Scanners

    PubMed Central

    Chang, Jin Ho; Yen, Jesse T.; Shung, K. Kirk

    2008-01-01

    This paper presents a high-speed digital scan converter (DSC) capable of providing more than 400 images per second, which is necessary to examine the activities of the mouse heart, whose rate is 5–10 beats per second. To achieve the desired high-speed performance in a cost-effective manner, the DSC adopts a linear interpolation algorithm in which the two samples nearest to each object pixel of a monitor are selected and only angular interpolation is performed. Through computer simulation with the Field II program, its accuracy was investigated by comparison with bilinear interpolation, known as the best algorithm in terms of accuracy and processing speed. The simulation results show that the linear interpolation algorithm is capable of providing acceptable image quality, meaning that the difference between the root mean square error (RMSE) values of the linear and bilinear interpolation algorithms is below 1%, if the sample rate of the envelope samples is at least four times higher than the Nyquist rate for the baseband component of the echo signals. The designed DSC was implemented with a single FPGA (Stratix EP1S60F1020C6, Altera Corporation, San Jose, CA) on a DSC board that is part of a high-speed ultrasound imaging system developed. The temporal and spatial resolutions of the implemented DSC were evaluated by examining its maximum processing time, with a time stamp indicating when an image is completely formed, and by wire phantom testing, respectively. The experimental results show that the implemented DSC is capable of providing images at a rate of 400 images per second with negligible processing error. PMID:18430449
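
    A sketch of the angular-only interpolation the paper describes: each display pixel is mapped to polar coordinates, the nearest radial sample is taken on the two neighbouring scan lines, and the two are blended linearly in angle. Uniform scan-line spacing is assumed, and the FPGA pipeline and timing are not modeled; all names are ours.

      import numpy as np

      def scan_convert(env, angles, r_max, out_n=256):
          # env[i, j]: envelope sample j along scan line i of a sector scan;
          # angles: the (uniformly spaced) scan-line angles from the centre line.
          n_lines, n_samples = env.shape
          img = np.zeros((out_n, out_n))
          ys, xs = np.mgrid[0:out_n, 0:out_n]
          x = (xs - out_n / 2) / (out_n / 2) * r_max
          y = ys / out_n * r_max                       # depth axis
          r = np.hypot(x, y)
          th = np.arctan2(x, y)                        # angle from the centre line
          f = (th - angles[0]) / (angles[-1] - angles[0]) * (n_lines - 1)
          valid = (r < r_max) & (f >= 0) & (f <= n_lines - 1)
          i0 = np.clip(np.floor(f).astype(int), 0, n_lines - 2)
          w = f - i0                                   # angular interpolation weight
          j = np.clip((r / r_max * (n_samples - 1)).round().astype(int),
                      0, n_samples - 1)                # nearest radial sample
          img[valid] = ((1 - w) * env[i0, j] + w * env[i0 + 1, j])[valid]
          return img

      # Example: 64 scan lines over a 90-degree sector, 512 samples per line.
      env = np.random.rand(64, 512)
      img = scan_convert(env, np.linspace(-np.pi / 4, np.pi / 4, 64), r_max=60.0)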

  7. Vascular surgical data registries for small computers.

    PubMed

    Kaufman, J L; Rosenberg, N

    1984-08-01

    Recent designs for computer-based vascular surgical registries and clinical data bases have employed large centralized systems with formal programming and mass storage. Small computers, of the types created for office use or for word processing, now have sufficient speed and memory capacity to allow construction of decentralized office-based registries. Using a standardized dictionary of terms and a method of data organization adapted to word processing, we have created a new vascular surgery data registry, "VASREG." Data files are organized without programming, and a limited number of powerful logical statements in English are used for sorting. The capacity is 25,000 records with current inexpensive memory technology. VASREG is adaptable to computers made by a variety of manufacturers, and interface programs are available for converting the word-processor-formatted registry data into forms suitable for analysis by programs written in a standard programming language. This is a low-cost clinical data registry available to any physician. With a standardized dictionary, preparation of regional and national statistical summaries may be facilitated.

  8. A comparison of Wortmann airfoil computer-generated lift and drag polars with flight and wind tunnel results

    NASA Technical Reports Server (NTRS)

    Bowers, A. H.; Sim, A. G.

    1984-01-01

    Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared with both wind tunnel and flight test results. Excellent correlation was shown to exist between computations and flight results except when separated flow regimes were encountered. Smoothness of the input coordinates to the PROFILE computer program was found to be essential for obtaining accurate comparisons of drag polars or transition location with either the flight or wind tunnel results.

  9. Preliminary compressor design study for an advanced multistage axial flow compressor

    NASA Technical Reports Server (NTRS)

    Marman, H. V.; Marchant, R. D.

    1976-01-01

    An optimum, axial-flow, high-pressure-ratio compressor for a turbofan engine was defined for commercial subsonic transport service starting in the late 1980's. Projected 1985 technologies were used and applied to compressors with an 18:1 pressure ratio having 6 to 12 stages. A matrix of 49 compressors was developed by statistical techniques. The compressors were evaluated by means of computer programs in terms of various airline economic figures of merit, such as return on investment and direct operating cost. The optimum configuration was determined to be a high-speed, 8-stage compressor with an average blading aspect ratio of 1.15.

  10. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing, since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview of and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message-passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
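
    The point-to-point measurements underlying such comparisons follow a common pattern: time many round trips of a fixed-size message and average. A minimal sketch over a raw BSD-style socket is shown below; the payload size and iteration count are arbitrary assumptions, and a CORBA or PVM layer would be timed with the same loop through its own API.

        # Round-trip latency over a raw socket, the kind of low-level baseline
        # against which higher-level layers (CORBA, PVM) are compared.
        import socket
        import threading
        import time

        MSG = b"x" * 1024          # 1 KB payload (assumed size)
        N = 1000                   # round trips to average over

        def echo_server(sock):
            # Echo every message back to the sender until the peer closes.
            while True:
                data = sock.recv(len(MSG))
                if not data:
                    break
                sock.sendall(data)

        a, b = socket.socketpair()  # stand-in for a network connection
        threading.Thread(target=echo_server, args=(b,), daemon=True).start()

        t0 = time.perf_counter()
        for _ in range(N):
            a.sendall(MSG)
            received = 0
            while received < len(MSG):
                received += len(a.recv(len(MSG) - received))
        t1 = time.perf_counter()
        a.close()

        print(f"mean round-trip time: {1e6 * (t1 - t0) / N:.1f} microseconds")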

  11. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and the resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds-Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion of the turbine (Rotor 1, Stator 2, and Rotor 2). The 3-D computational results yield the same efficiency-versus-speed trends predicted by the meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  12. Technology needs for high speed rotorcraft (3)

    NASA Technical Reports Server (NTRS)

    Detore, Jack; Conway, Scott

    1991-01-01

    The spectrum of vertical takeoff and landing (VTOL) aircraft is examined to determine which are most likely to achieve high subsonic cruise speeds while retaining hover qualities similar to a helicopter's. Two civil mission profiles are considered: a 600-n.mi. mission for a 15-passenger and a 30-passenger payload. Applying current technology, only the 15- and 30-passenger tiltfold aircraft are capable of attaining the 450-knot design goal. The two tiltfold aircraft at 450 knots and a 30-passenger tiltrotor at 375 knots were further developed for the Task II technology analysis. A program called High-Speed Total Envelope Proprotor (HI-STEP), based on the tiltrotor concept, is recommended to address several of the identified technology needs; its goals are to investigate propulsive efficiency, maneuver loads, and aeroelastic stability. A program called Tiltfold System (TFS) is recommended based on the tiltfold concept. A task is identified to resolve the best design speed from productivity and demand considerations, based on the technology that emerges from the recommended programs. Programs currently in progress that may meet the other technology needs include the Integrated High Performance Turbine Engine Technology (IHPTET) program (NASA Lewis) and the Advanced Structural Concepts Program funded through NASA Langley.

  13. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that, until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications with irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural-network-based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  14. DONBOL: A computer program for predicting axisymmetric nozzle afterbody pressure distributions and drag at subsonic speeds

    NASA Technical Reports Server (NTRS)

    Putnam, L. E.

    1979-01-01

    A Neumann solution for inviscid external flow was coupled to a modified Reshotko-Tucker integral boundary-layer technique, the control-volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer algorithm implementing this procedure was written and is called DONBOL. A description of the computer program and a guide to its use are given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies which have the jet exhaust flow simulated by solid bodies. For nozzle configurations which have the jet exhaust simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency results because the method neglects the effects of jet plume entrainment. The method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.
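
    The viscous-inviscid iteration can be pictured as an under-relaxed fixed-point loop. The sketch below uses scalar placeholder "solvers", not DONBOL's actual Neumann and boundary-layer procedures; the relaxation factor and tolerance are assumed values.

        # Under-relaxed viscous-inviscid coupling: iterate the two solutions
        # until the displacement thickness stops changing.
        def inviscid_solution(displacement_thickness):
            # Placeholder: pressure responds to the effective body shape.
            return 1.0 - 0.5 * displacement_thickness

        def boundary_layer_solution(pressure):
            # Placeholder: boundary layer grows as pressure falls.
            return 0.1 * (2.0 - pressure)

        omega = 0.5                 # under-relaxation factor (assumed)
        delta_star = 0.0            # initial displacement thickness
        for it in range(100):
            p = inviscid_solution(delta_star)
            new_delta = boundary_layer_solution(p)
            if abs(new_delta - delta_star) < 1e-10:
                print(f"converged after {it} iterations: p = {p:.6f}")
                break
            delta_star += omega * (new_delta - delta_star)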

  15. Numerical solution of the Navier-Stokes equations by discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Krasnov, M. M.; Kuchugov, P. A.; E Ladonkina, M.; E Lutsky, A.; Tishkin, V. F.

    2017-02-01

    Detailed unstructured grids and numerical methods of high accuracy are frequently used in the numerical simulation of gasdynamic flows in regions with complex geometry. The Galerkin method with discontinuous basis functions, or Discontinuous Galerkin Method (DGM), works well for such problems. This approach offers a number of advantages inherent to both finite-element and finite-difference approximations. Moreover, the present paper shows that DGM schemes can be viewed as an extension of Godunov's method to piecewise-polynomial functions. As is known, DGM involves significant computational complexity, which raises the question of making the most effective use of all the computational capacity available. In order to speed up the calculations, an operator programming method was applied in creating the computational module. This approach makes possible a compact encoding of mathematical formulas and facilitates the porting of programs to parallel architectures, such as NVidia CUDA and Intel Xeon Phi. With the software package based on DGM, numerical simulations of supersonic flow past solid bodies have been carried out. The numerical results are in good agreement with the experimental ones.
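
    The remark that DGM extends Godunov's method can be made concrete: with a piecewise-constant basis, DG for linear advection reduces to the first-order upwind (Godunov) scheme. A minimal sketch, with grid size, CFL number, and initial data chosen arbitrarily:

        # First-order upwind/Godunov scheme: the piecewise-constant (p=0) limit
        # of DG for linear advection u_t + a*u_x = 0 on a periodic grid.
        import numpy as np

        a, nx, cfl = 1.0, 200, 0.5           # wave speed, cells, CFL (assumed)
        dx = 1.0 / nx
        dt = cfl * dx / a
        x = (np.arange(nx) + 0.5) * dx
        u = np.exp(-200.0 * (x - 0.5) ** 2)  # cell averages of a Gaussian pulse

        t = 0.0
        while t < 0.25:
            # The Godunov flux for a > 0 is the upwind (left) state; updating
            # cell averages with it is exactly DG with a constant basis.
            u -= (a * dt / dx) * (u - np.roll(u, 1))
            t += dt

        print(f"peak after advection: {u.max():.3f} (diffused from 1.0 by upwinding)")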

  16. Overview of the Cranked-Arrow Wing Aerodynamics Project International

    NASA Technical Reports Server (NTRS)

    Obara, Clifford J.; Lamar, John E.

    2008-01-01

    This paper provides a brief history of the F-16XL-1 aircraft, its role in the High Speed Research program, and how it evolved into the Cranked Arrow Wing Aerodynamics Project. Various flight, wind-tunnel and Computational Fluid Dynamics data sets were generated as part of the project. These unique and open flight datasets for surface pressures, boundary-layer profiles and skin-friction distributions, along with surface flow data, are described, and sample data comparisons are given. This is followed by a description of how the project was internationalized to become the Cranked Arrow Wing Aerodynamics Project International, and concludes with an introduction to the results of a four-year computational predictive study of data collected at flight conditions by participating researchers.

  17. Quantitative analysis of defects in silicon. Silicon sheet growth development for the large area silicon sheet task of the low-cost solar array project

    NASA Technical Reports Server (NTRS)

    Natesh, R.; Smith, J. M.; Bruce, T.; Oidwai, H. A.

    1980-01-01

    One hundred seventy-four silicon sheet samples were analyzed for twin boundary density, dislocation pit density, and grain boundary length. Procedures were developed for the quantitative analysis of the twin boundary and dislocation pit densities using a QTM-720 Quantitative Image Analyzing system. The QTM-720 system was upgraded with the addition of a PDP 11/03 mini-computer with a dual floppy disc drive, a DECwriter high-speed printer, and a field-image feature interface module. Three versions of a computer program that controls data acquisition and analysis on the QTM-720 were written. Procedures for chemical polishing and etching were also developed.

  18. Inflight IFR procedures simulator

    NASA Technical Reports Server (NTRS)

    Parker, L. C. (Inventor)

    1984-01-01

    An inflight IFR procedures simulator for generating signals and commands to conventional instruments provided in an airplane is described. The simulator includes a signal synthesizer which, upon being activated, generates predetermined simulated signals corresponding to signals normally received from remote sources. A computer is connected to the signal synthesizer and causes it to produce simulated signals in response to programs fed into the computer. A switching network is connected to the signal synthesizer, the antenna of the aircraft, and the navigational instruments and communication devices, for selectively connecting the instruments and devices to the synthesizer and disconnecting the antenna from the navigational instruments and communication devices. Pressure transducers are connected to the altimeter and speed indicator to supply electrical signals to the computer indicating the altitude and speed of the aircraft. A compass is connected to supply electrical signals to the computer indicating the heading of the airplane. The computer, upon receiving the signals from the pressure transducers and compass, computes the signals that are fed to the signal synthesizer, which in turn generates the simulated navigational signals.
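
    The transducer-to-computer step amounts to inverting standard pressure relations. Below is a minimal sketch using the ISA pressure-altitude formula and the low-speed dynamic-pressure relation; the constants are standard, but the sample readings are made up and nothing here is taken from the patent.

        # Convert pressure readings into altitude and indicated airspeed.
        # ISA sea-level constants; the sample readings are hypothetical.
        import math

        P0 = 101325.0      # sea-level static pressure, Pa
        RHO0 = 1.225       # sea-level air density, kg/m^3

        def pressure_altitude_m(static_pa):
            # Inverted ISA troposphere relation: h = (T0/L)*(1 - (P/P0)^(R*L/g))
            return (288.15 / 0.0065) * (1.0 - (static_pa / P0) ** 0.190263)

        def indicated_airspeed_ms(total_pa, static_pa):
            # Low-speed approximation: q = total - static = 0.5*rho0*v^2
            q = max(total_pa - static_pa, 0.0)
            return math.sqrt(2.0 * q / RHO0)

        static, total = 79500.0, 81200.0   # hypothetical transducer readings, Pa
        print(f"altitude ~ {pressure_altitude_m(static):.0f} m, "
              f"IAS ~ {indicated_airspeed_ms(total, static):.1f} m/s")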

  19. A distributed version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.; Curlett, Brian P.

    1993-01-01

    Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
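
    The master-worker pattern behind this kind of distribution is easy to sketch. PVM itself is a C/Fortran message-passing library; in the sketch below Python's multiprocessing stands in for it, and run_engine_case is a hypothetical placeholder for one NEPP engine-cycle evaluation.

        # Master-worker distribution of independent engine cases; granularity
        # matters, since each case must amortize its communication overhead.
        from multiprocessing import Pool

        def run_engine_case(case):
            # Hypothetical placeholder for one NEPP engine-cycle evaluation.
            power_setting, altitude = case
            return power_setting * 0.9 - altitude * 1e-4   # made-up figure of merit

        if __name__ == "__main__":
            cases = [(p, h) for p in (0.8, 0.9, 1.0) for h in (0.0, 5000.0, 11000.0)]
            with Pool(processes=4) as pool:
                results = pool.map(run_engine_case, cases)  # master farms out cases
            for case, r in zip(cases, results):
                print(case, "->", round(r, 4))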

  20. Computer program for design analysis of radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1976-01-01

    A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.

  1. A comparison of computer-generated lift and drag polars for a Wortmann airfoil to flight and wind tunnel results

    NASA Technical Reports Server (NTRS)

    Bowers, A. H.; Sandlin, D. R.

    1984-01-01

    Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared to both wind tunnel and flight results. Excellent correlation is shown to exist between computations and flight results except when separated flow regimes were encountered. Wind tunnel transition locations are shown to agree with computed predictions. Smoothness of the input coordinates to the PROFILE airfoil analysis computer program was found to be essential to obtain accurate comparisons of drag polars or transition location to either the flight or wind tunnel results.

  2. Human Response to Simulated Low-Intensity Sonic Booms

    NASA Technical Reports Server (NTRS)

    Sullivan, Brenda M.

    2004-01-01

    NASA's High Speed Research (HSR) program in the 1990s was intended to develop a technology base for a future High-Speed Civil Transport (HSCT). As part of this program, the NASA Langley Research Center sonic boom simulator (SBS) was built and used for a series of tests on subjective response to sonic booms. At the end of the HSR program, an HSCT was deemed impractical, but since then interest in supersonic flight has reawakened, this time focusing on a smaller aircraft suitable for a business jet. In response to this interest, the Langley sonic boom simulator has been refurbished. The upgraded computer-controlled playback system is based on an SGI O2 computer, in place of the previous DEC MicroVAX. Because the frequency response of the booth is not flat, an equalization filter is required. Owing to the changes made during the renovation (new loudspeakers), the previous equalization filter no longer performed as well as before, so a new equalization filter has been designed. Booms to be presented in the booth are preprocessed using the filter. When the preprocessed signals are played in the booth and measured with a microphone, the results are very similar to the intended shapes. Signals with short rise times and sharp "corners" are observed to have a small amount of "ringing" in the response. During the HSR program a considerable number of subjective tests were completed in the SBS; a summary of that research is given in Leatherwood et al. (Individual reports are available at http://techreports.larc.nasa.gov/ltrs/ltrs.html.) Topics of study included shaped sonic booms, asymmetrical booms, realistic (recorded) boom waveforms, and indoor and outdoor boom shapes, among other factors. One conclusion of that research was that a loudness metric, like the Stevens Perceived Level (PL), predicted human reaction much more accurately than overpressure or unweighted sound pressure level. Structural vibration and rattle were not included in these studies.
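
    Equalization filters of this general kind are often obtained by inverting the measured frequency response with regularization, so that deep notches are not boosted into instability. The sketch below works through the idea on a synthetic booth response; the response, regularization constant, and signal parameters are all stand-ins, not SBS data.

        # Regularized inverse (equalization) filter: invert a measured frequency
        # response, then pre-filter signals so the delivered output matches the
        # intended boom shape. The "measured" response here is synthetic.
        import numpy as np

        n = 1024
        fs = 8192.0                                  # sample rate, Hz (assumed)
        f = np.fft.rfftfreq(n, 1.0 / fs)

        # Stand-in booth response: high-frequency roll-off plus a dip.
        H = 1.0 / (1.0 + 1j * f / 2000.0)
        H *= 1.0 - 0.6 * np.exp(-((f - 800.0) / 100.0) ** 2)

        eps = 1e-2                                   # regularization (assumed)
        H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)  # avoids blow-up at notches

        boom = np.zeros(n)                           # idealized N-wave, 50 ms long
        ramp = np.linspace(1.0, -1.0, int(0.05 * fs))
        boom[100:100 + ramp.size] = ramp

        pre = np.fft.irfft(np.fft.rfft(boom) * H_inv, n)   # preprocessed signal
        out = np.fft.irfft(np.fft.rfft(pre) * H, n)        # what the booth delivers
        print(f"peak error vs. intended shape: {np.max(np.abs(out - boom)):.3f}")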

  3. Numerical Simulation of High-Speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Jaberi, F. A.; Colucci, P. J.; James, S.; Givi, P.

    1996-01-01

    The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES) methods for computational analysis of high-speed reacting turbulent flows. We have just completed the first year of Phase 3 of this research.

  4. Effect of Speed on Tire-Soil Interaction and Development of Towed Pneumatic Tire-Soil Model

    DTIC Science & Technology

    1974-10-01

    Experiments with rigid wheels were performed by several researchers under laboratory conditions (Refs. 20 through 22) using the flash X-ray technique. The remainder of the record is table-of-contents residue from the source report: Towed Tire-Soil Model; Conclusions and Recommendations; References; Velocity Fields; Appendix B - Computer Program Chart for Computation of Tire Performance with ...

  5. Research on an optoelectronic measurement system of dynamic envelope measurement for China Railway high-speed train

    NASA Astrophysics Data System (ADS)

    Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun

    2018-01-01

    Dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains. To date, no digital measurement system has been available to solve this problem. This paper develops an optoelectronic measurement system based on monocular digital cameras and presents research on the measurement theory, visual target design, calibration algorithm design, and software implementation. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software, and a terminal computer. The system has advantages such as a large measurement volume, a high degree of automation, strong anti-interference ability, noise rejection, and real-time measurement. In this paper, we address key technologies such as the transfer, storage, and processing of high-resolution digital images from multiple cameras. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error of the system is within 0.12 mm over the whole workspace. These experiments verify the soundness of the system design and the correctness, precision, and effectiveness of the associated methods.

  6. Design of a Fatigue Detection System for High-Speed Trains Based on Driver Vigilance Using a Wireless Wearable EEG.

    PubMed

    Zhang, Xiaoliang; Li, Jiali; Liu, Yugang; Zhang, Zutao; Wang, Zhuojun; Luo, Dianyuan; Zhou, Xiang; Zhu, Miankuan; Salman, Waleed; Hu, Guangdi; Wang, Chunbai

    2017-03-01

    The vigilance of the driver is important for railway safety, despite not being included in the safety management system (SMS) for high-speed train safety. In this paper, a novel fatigue detection system for high-speed train safety, based on monitoring train driver vigilance using a wireless wearable electroencephalograph (EEG), is presented. The system is designed to detect whether the driver is drowsy. The proposed system consists of three main parts: (1) wireless wearable EEG collection; (2) train driver vigilance detection; and (3) an early warning device for the train driver. In the first part, an 8-channel wireless wearable brain-computer interface (BCI) device acquires the locomotive driver's EEG signal comfortably under high-speed train-driving conditions. The recorded data are transmitted to a personal computer (PC) via Bluetooth. In the second part, a support vector machine (SVM) classification algorithm determines the vigilance level from the EEG power spectral density (PSD) extracted with the Fast Fourier transform (FFT). Finally, an early warning device begins to work if fatigue is detected. The simulation and test results demonstrate the feasibility of the proposed fatigue detection system for high-speed train safety.
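
    The processing chain in the second part (PSD features into an SVM) can be sketched compactly. Synthetic one-channel signals stand in for the 8-channel wearable data below, and the sampling rate, band edges, and simulated drowsiness effect are assumptions.

        # Band-power features from the EEG power spectral density feed an SVM.
        import numpy as np
        from scipy.signal import welch
        from sklearn.svm import SVC

        FS = 256                              # sampling rate, Hz (assumed)
        BANDS = [(4, 8), (8, 13), (13, 30)]   # theta, alpha, beta

        def band_powers(eeg_1ch):
            freqs, psd = welch(eeg_1ch, fs=FS, nperseg=FS * 2)
            return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS]

        rng = np.random.default_rng(0)

        def fake_epoch(drowsy):
            # Drowsiness simulated as stronger low-frequency (theta) content.
            t = np.arange(10 * FS) / FS
            amp = 3.0 if drowsy else 0.5
            return (amp * np.sin(2 * np.pi * 6 * t)   # theta component
                    + np.sin(2 * np.pi * 20 * t)      # beta component
                    + rng.standard_normal(t.size))

        X = [band_powers(fake_epoch(d)) for d in ([0] * 50 + [1] * 50)]
        y = [0] * 50 + [1] * 50
        clf = SVC(kernel="rbf").fit(X, y)
        print("predicted label:", clf.predict([band_powers(fake_epoch(1))])[0])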

  7. Array processor architecture

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors normally operating quite independently of each other in a multiprocessing fashion. For data-dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel-processing fashion on the next instruction. Even when functioning in the parallel-processing mode, however, the processors are not run in lockstep; each executes its own copy of the program individually unless or until another overall processor-array synchronization instruction is issued.
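
    The finish-before-proceed coordination described here is what modern APIs call a barrier. A minimal sketch with Python threads standing in for the array processors (the worker count and workloads are arbitrary):

        # Barrier synchronization: every worker finishes the current program
        # branch before any proceeds, yet between barriers each runs freely.
        import threading
        import random
        import time

        N = 4
        barrier = threading.Barrier(N)

        def processor(pid):
            for step in range(3):
                time.sleep(random.uniform(0.01, 0.05))   # independent execution
                print(f"processor {pid} finished step {step}")
                barrier.wait()                            # wait for the whole array
                if pid == 0:
                    print(f"--- all processors passed step {step} ---")

        threads = [threading.Thread(target=processor, args=(i,)) for i in range(N)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()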

  8. EX6AFS: A data acquisition system for high-speed dispersive EXAFS measurements implemented using object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Jennings, Guy; Lee, Peter L.

    1995-02-01

    In this paper we describe the design and implementation of a computerized data-acquisition system for high-speed energy-dispersive EXAFS experiments on the X6A beamline at the National Synchrotron Light Source. The acquisition system drives the stepper motors used to move the components of the experimental setup and controls the readout of the EXAFS spectra. The system runs on a Macintosh IIfx computer and is written entirely in the object-oriented language C++. Large segments of the system are implemented by means of commercial class libraries, specifically the MacApp application framework from Apple, the Rogue Wave class library, and the Hierarchical Data Format (HDF) library from the National Center for Supercomputing Applications. This reduces the amount of code that must be written and enhances reliability. The system makes use of several advanced features of C++: multiple inheritance allows the code to be decomposed into independent software components, and exception handling allows the system to be much more reliable in the event of unexpected errors. Object-oriented techniques allow the program to be extended easily as new requirements develop. All sections of the program related to a particular concept are located in a small set of source files. The program will also be used as a prototype for future software development plans for the Basic Energy Science Synchrotron Radiation Center Collaborative Access Team beamlines being designed and built at the Advanced Photon Source.

  9. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iteration for a large number of unknown model parameters, and it provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
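
    The flavor of such randomized decompositions can be shown in a few lines: project the matrix onto a random subspace, orthonormalize, and recover an approximate SVD (the Halko-Martinsson-Tropp construction). The sizes and target rank below are arbitrary assumptions, and this is a generic sketch rather than the authors' Julia implementation.

        # Randomized SVD: random projection, QR, then a small dense SVD.
        import numpy as np

        def randomized_svd(A, rank, oversample=10, rng=None):
            rng = rng or np.random.default_rng(0)
            Omega = rng.standard_normal((A.shape[1], rank + oversample))
            Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for range(A)
            U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

        rng = np.random.default_rng(1)
        A = rng.standard_normal((2000, 30)) @ rng.standard_normal((30, 2000))
        U, s, Vt = randomized_svd(A, rank=30, rng=rng)
        err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
        print(f"relative reconstruction error: {err:.2e}")  # tiny for a rank-30 A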

  10. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    PubMed

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of the knee and hip joints using a computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points, with an arithmetic mean of 0.16 points, for flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value to 0.24 error points. For excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation; the sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining the exact angles were found to be insufficient tools for improving the quality of judging.
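
    The frame-by-frame angle determination reduces to computing the angle at a joint from three digitized points. A minimal sketch; the coordinates are made-up pixel positions, not data from the study.

        # Flexion angle at a joint from three digitized 2-D points
        # (e.g., hip-knee-ankle).
        import numpy as np

        def joint_angle_deg(a, b, c):
            """Interior angle at vertex b formed by points a-b-c, in degrees."""
            u = np.asarray(a, float) - b
            v = np.asarray(c, float) - b
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        hip, knee, ankle = (310, 140), (335, 260), (330, 390)  # hypothetical pixels
        angle = joint_angle_deg(hip, knee, ankle)
        print(f"knee angle: {angle:.1f} deg; flexion: {180.0 - angle:.1f} deg")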

  11. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high-resolution model. The resulting simplified models have provable bounds on error compared to the high-resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error, and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud-resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results are presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led to work, currently underway, to analyse a cloud-resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
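
    The simplify-then-bound idea can be illustrated on a toy scale: replace a function by a low-order polynomial and bound the discarded remainder analytically. The sketch below is only an illustration of the principle, not iGen's actual code analysis; the domain half-width is an assumed value.

        # Toy model simplification with a provable error bound: approximate
        # exp(x) on [-a, a] by a quadratic and bound the error with the
        # Lagrange remainder |R_2| <= max|f'''| * a^3 / 3!.
        import math

        a = 0.1                                   # domain half-width (assumed)

        def full_model(x):
            return math.exp(x)

        def simplified_model(x):
            return 1.0 + x + 0.5 * x * x          # degree-2 Taylor polynomial

        # f'''(x) = exp(x), maximized at x = a, so the bound is exp(a)*a^3/6.
        proved_bound = math.exp(a) * a ** 3 / 6.0

        worst_seen = max(abs(full_model(x) - simplified_model(x))
                         for x in (i * a / 1000.0 for i in range(-1000, 1001)))
        print(f"proved bound: {proved_bound:.2e}, worst sampled: {worst_seen:.2e}")
        assert worst_seen <= proved_bound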

  12. The Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, D. M.; Littman, M. G.

    1986-01-01

    The Navier-Stokes computer (NSC) has been developed for solving problems in fluid mechanics involving complex flow simulations that require more speed and capacity than provided by current and proposed Class VI supercomputers. The machine is a parallel processing supercomputer with several new architectural elements which can be programmed to address a wide range of problems meeting the following criteria: (1) the problem is numerically intensive, and (2) the code makes use of long vectors. A simulation of two-dimensional nonsteady viscous flows is presented to illustrate the architecture, programming, and some of the capabilities of the NSC.

  13. Measure the Earth's Radius and the Speed of Light with Simple and Inexpensive Computer-Based Experiments

    ERIC Educational Resources Information Center

    Martin, Michael J.

    2004-01-01

    With new and inexpensive computer-based methods, measuring the speed of light and the Earth's radius--historically difficult endeavors--can be simple enough to be tackled by high school and college students working in labs that have limited budgets. In this article, the author describes two methods of estimating the Earth's radius using two…

  14. High-speed extended-term time-domain simulation for online cascading analysis of power system

    NASA Astrophysics Data System (ADS)

    Fu, Chuan

    A high-speed extended-term (HSET) time domain simulator (TDS), intended to become part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) the ability to simulate both fast and slow dynamics 1-3 hours ahead, (iii) rigorous protection-system modeling, (iv) intelligence for corrective-action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses higher-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable-step-size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS and compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, an expanded 8775-bus system, and the PJM 13029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation per step). With the new stiffness detection method proposed herein, the stiffness can be captured. An expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale, with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events divides the whole simulation along the time axis through a simulated sequence of cascading events. Of the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimal communication time is needed.
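
    One ingredient named above, an A-stable implicit integrator with a Newton solve per step, can be sketched with the trapezoidal rule on a scalar stiff test problem (HH4 would replace the update formula but keep the same machinery). The stiffness parameter, step size, and test equation are assumptions.

        # A-stable trapezoidal rule with a Newton solve per step, on the stiff
        # scalar test problem y' = lam*(y - cos t).
        import math

        lam = -50.0                        # stiffness parameter (assumed)

        def f(t, y):
            return lam * (y - math.cos(t))

        def df_dy(t, y):
            return lam

        def trapezoidal_step(t, y, h):
            # Solve y1 = y + h/2*(f(t,y) + f(t+h,y1)) for y1 by Newton iteration.
            y1 = y + h * f(t, y)           # explicit Euler predictor
            for _ in range(10):
                g = y1 - y - 0.5 * h * (f(t, y) + f(t + h, y1))
                dg = 1.0 - 0.5 * h * df_dy(t + h, y1)
                step = g / dg
                y1 -= step
                if abs(step) < 1e-12:
                    break
            return y1

        t, y, h = 0.0, 1.5, 0.05           # step large enough to destabilize
        while t < 2.0 - 1e-9:              # explicit Euler at this h
            y = trapezoidal_step(t, y, h)
            t += h
        print(f"y(2.0) ~ {y:.5f} (pulled toward cos t ~ {math.cos(2.0):.5f})")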

  15. National High-Performance Computing and Networking Act. Report To Accompany S. 343, Senate, 102d Congess, 1st Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Energy and Natural Resources.

    The purpose of the bill (S. 343), as reported by the Senate Committee on Energy and Natural Resources, is to establish a federal commitment to the advancement of high-performance computing, improve interagency planning and coordination of federal high-performance computing and networking activities, authorize a national high-speed computer…

  16. Efficient calculation of general Voigt profiles

    NASA Astrophysics Data System (ADS)

    Cope, D.; Khoury, R.; Lovett, R. J.

    1988-02-01

    An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed-dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error-control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
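
    For the ordinary Voigt special case, a compact route is the Faddeeva function w(z), since V(x; sigma, gamma) = Re[w((x + i*gamma)/(sigma*sqrt(2)))] / (sigma*sqrt(2*pi)). A minimal sketch using scipy's wofz, with arbitrary parameters; this is the standard specialized method, not the paper's OIL-profile algorithm.

        # Voigt profile via the Faddeeva function.
        import numpy as np
        from scipy.special import wofz

        def voigt(x, sigma, gamma):
            # sigma: Gaussian std. dev.; gamma: Lorentzian half-width.
            z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

        x = np.linspace(-5, 5, 11)
        print(voigt(x, sigma=1.0, gamma=0.5))
        # Limiting checks: gamma -> 0 recovers a Gaussian, sigma -> 0 a Lorentzian.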

  17. Amoeba-inspired nanoarchitectonic computing implemented using electrical Brownian ratchets.

    PubMed

    Aono, M; Kasai, S; Kim, S-J; Wakabayashi, M; Miwa, H; Naruse, M

    2015-06-12

    In this study, we extracted the essential spatiotemporal dynamics that allow an amoeboid organism to solve a computationally demanding problem and adapt to its environment, thereby proposing a nature-inspired nanoarchitectonic computing system, which we implemented using a network of nanowire devices called 'electrical Brownian ratchets (EBRs)'. By utilizing the fluctuations generated from thermal energy in nanowire devices, we used our system to solve the satisfiability problem (SAT), a highly complex combinatorial problem related to a wide variety of practical applications. We evaluated the dependency of the solution search speed on its exploration parameter, which characterizes the fluctuation intensity of EBRs, using a simulation model of our system called 'AmoebaSAT-Brownian'. We found that AmoebaSAT-Brownian enhanced the solution searching speed dramatically when we imposed some constraints on the fluctuations in its time series, and that it outperformed a well-known stochastic local search method. These results suggest a new computing paradigm, which may allow high-speed problem solving to be implemented by interacting nanoscale devices with low power consumption.
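
    As a reference point, the kind of stochastic local search used for comparison can be sketched as a WalkSAT-style solver, where random flips play roughly the role that thermal fluctuations play in the EBR hardware. The instance, noise probability, and flip budget below are made up, and this is a generic baseline, not AmoebaSAT-Brownian itself.

        # WalkSAT-style stochastic local search for SAT: repair a random
        # unsatisfied clause by a random or greedy variable flip.
        import random

        def walksat(clauses, n_vars, p_noise=0.5, max_flips=10_000, seed=0):
            rng = random.Random(seed)
            assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
            sat = lambda lit: assign[abs(lit)] == (lit > 0)
            for _ in range(max_flips):
                unsat = [c for c in clauses if not any(sat(l) for l in c)]
                if not unsat:
                    return assign
                clause = rng.choice(unsat)
                if rng.random() < p_noise:
                    var = abs(rng.choice(clause))       # random-walk move
                else:
                    # Greedy move: flip the variable leaving fewest clauses unsat.
                    def broken(v):
                        assign[v] = not assign[v]
                        n = sum(not any(sat(l) for l in c) for c in clauses)
                        assign[v] = not assign[v]
                        return n
                    var = min((abs(l) for l in clause), key=broken)
                assign[var] = not assign[var]
            return None

        # (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x2 or x3)
        clauses = [(1, 2), (-1, 3), (-2, -3), (2, 3)]
        print("solution (index 0 unused):", walksat(clauses, n_vars=3))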

  18. An atlas of monthly mean distributions of SSMI surface wind speed, ARGOS buoy drift, AVHRR/2 sea surface temperature, and ECMWF surface wind components during 1990

    NASA Technical Reports Server (NTRS)

    Halpern, D.; Knauss, W.; Brown, O.; Wentz, F.

    1993-01-01

    The following monthly mean global distributions for 1990 are presented with a common color scale and geographical map: 10-m height wind speed estimated from the Special Sensor Microwave Imager (SSMI) on a United States (US) Air Force Defense Meteorological Satellite Program (DMSP) spacecraft; sea surface temperature estimated from the advanced very high resolution radiometer (AVHRR/2) on a U.S. National Oceanic and Atmospheric Administration (NOAA) spacecraft; Cartesian components of free-drifting buoys which are tracked by the ARGOS navigation system on NOAA satellites; and Cartesian components of the 10-m height wind vector computed by the European Center for Medium-Range Weather Forecasting (ECMWF). Charts of monthly mean value, sampling distribution, and standard deviation values are displayed. Annual mean distributions are displayed.

  19. An atlas of monthly mean distributions of SSMI surface wind speed, ARGOS buoy drift, AVHRR/2 sea surface temperature, and ECMWF surface wind components during 1991

    NASA Technical Reports Server (NTRS)

    Halpern, D.; Knauss, W.; Brown, O.; Wentz, F.

    1993-01-01

    The following monthly mean global distributions for 1991 are presented with a common color scale and geographical map: 10-m height wind speed estimated from the Special Sensor Microwave Imager (SSMI) on a United States Air Force Defense Meteorological Satellite Program (DMSP) spacecraft; sea surface temperature estimated from the advanced very high resolution radiometer (AVHRR/2) on a U.S. National Oceanic and Atmospheric Administration (NOAA) spacecraft; Cartesian components of free-drifting buoys which are tracked by the ARGOS navigation system on NOAA satellites; and Cartesian components of the 10-m height wind vector computed by the European Center for Medium-Range Weather Forecasting (ECMWF). Charts of monthly mean value, sampling distribution, and standard deviation value are displayed. Annual mean distributions are displayed.

  20. Computational Aerodynamic Simulations of a 1484 ft/sec Tip Speed Quiet High-Speed Fan System Model for Acoustic Methods Assessment and Development

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.

    2014-01-01

    Computational aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance. At that location a significant flow separation is present. The region of local flow recirculation extends through a mixing plane, however, which for the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.
