Yoo, Won-Gyu
2015-01-01
[Purpose] This study examined the effects of different computer typing speeds on the acceleration and peak contact pressure of the fingertips during computer typing. [Subjects] Twenty-one male computer workers voluntarily consented to participate in this study. They consisted of 7 workers who could type 200-300 characters/minute, 7 workers who could type 300-400 characters/minute, and 7 workers who could type 400-500 characters/minute. [Methods] The acceleration and peak contact pressure of the fingertips were measured for the different typing speed groups using an accelerometer and the CONFORMat system. [Results] Fingertip contact pressure was higher in the high typing speed group than in the low and medium typing speed groups. Fingertip acceleration was also higher in the high typing speed group than in the low and medium typing speed groups. [Conclusion] The results of the present study indicate that a fast typing speed causes continuous pressure stress to be applied to the fingers, thereby creating pain in the fingers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil, M.B.
1995-03-01
This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.
Proceedings from the conference on high speed computing: High speed computing and national security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirons, K.P.; Vigil, M.; Carlson, R.
1997-07-01
This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.
Computer Output Microfilm and Library Catalogs.
ERIC Educational Resources Information Center
Meyer, Richard W.
Early computers dealt with mathematical and scientific problems requiring very little input and not much output; therefore, high-speed printing devices were not required. Today, with an increased variety of uses, high-speed printing is necessary, and Computer Output Microfilm (COM) devices have been created to meet this need. This indirect process can…
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Information Management and Technology Div.
This report was prepared in response to a request from the Senate Committee on Commerce, Science, and Transportation, and from the House Committee on Science, Space, and Technology, for information on efforts to develop high-speed computer networks in the United States, Europe (limited to France, Germany, Italy, the Netherlands, and the United…
High speed television camera system processes photographic film data for digital computer analysis
NASA Technical Reports Server (NTRS)
Habbal, N. A.
1970-01-01
Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.
Networking via wireless bridge produces greater speed and flexibility, lowers cost.
1998-10-01
Wireless computer networking. Computer connectivity is essential in today's high-tech health care industry. But telephone lines aren't fast enough, and high-speed connections like T-1 lines are costly. Read about an Ohio community hospital that installed a wireless network "bridge" to connect buildings that are miles apart, creating a reliable high-speed link that costs one-tenth as much as a T-1 line.
Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers
NASA Technical Reports Server (NTRS)
Guruswamy, Guru; VanDalsem, William (Technical Monitor)
1994-01-01
Aeroelasticity, which involves strong coupling of fluids, structures, and controls, is an important element in designing an aircraft. Computational aeroelasticity using low fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and the Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. The HSCT can experience vortex-induced aeroelastic oscillations, whereas the AST can experience structural oscillations associated with transonic buffet. Both aircraft may experience a dip in flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures are needed. Computations using these high fidelity equations require large computational resources in both memory and speed. Conventional supercomputers have reached their limitations in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high resolution finite-element structural equations.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
This document contains the transcript of three hearings on the High Performance Computing and High Speed Networking Applications Act of 1993 (H.R. 1757). The hearings were designed to obtain specific suggestions for improvements to the legislation and alternative or additional application areas that should be pursued. Testimony and prepared…
Computer Analysis Of High-Speed Roller Bearings
NASA Technical Reports Server (NTRS)
Coe, H.
1988-01-01
High-speed cylindrical roller-bearing analysis program (CYBEAN) developed to compute behavior of cylindrical rolling-element bearings at high speeds and with misaligned shafts. With program, accurate assessment of geometry-induced roller preload possible for variety of outer-ring and housing configurations and loading conditions. Enables detailed examination of bearing performance and permits exploration of causes and consequences of bearing skew. Provides general capability for assessment of designs of bearings supporting main shafts of engines. Written in FORTRAN IV.
DOT National Transportation Integrated Search
1995-01-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety-critical functions in high-speed rail or magnetic levitation ...
Data Processing Aspects of MEDLARS
Austin, Charles J.
1964-01-01
The speed and volume requirements of MEDLARS necessitate the use of high-speed data processing equipment, including paper-tape typewriters, a digital computer, and a special device for producing photo-composed output. Input to the system is of three types: variable source data, including citations from the literature and search requests; changes to such master files as the medical subject headings list and the journal record file; and operating instructions such as computer programs and procedures for machine operators. MEDLARS builds two major stores of data on magnetic tape. The Processed Citation File includes bibliographic citations in expanded form for high-quality printing at periodic intervals. The Compressed Citation File is a coded, time-sequential citation store which is used for high-speed searching against demand request input. Major design considerations include converting variable-length, alphanumeric data to mechanical form quickly and accurately; serial searching by the computer within a reasonable period of time; high-speed printing that must be of graphic quality; and efficient maintenance of various complex computer files. PMID:14119287
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RP array is a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. Together, the PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing
NASA Astrophysics Data System (ADS)
Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.
2014-12-01
After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become important for understanding earthquake phenomena. At the same time, the quantity of seismic data has become enormous with the progress of high-accuracy observation networks; many parameters (e.g., positional information, origin time, magnitude) must be handled to display the seismic information efficiently. Therefore, high-speed processing of data and image information is necessary to handle enormous amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study. This movement is called GPGPU (General Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly. GPU computing provides a high-performance computing environment at a lower cost than before. Moreover, the use of a GPU has an advantage for visualization of processed data, because the GPU was originally designed for graphics processing. In GPU computing, the processed data are always stored in video memory. Therefore, we can write drawing information directly to the VRAM on the video card by combining CUDA and a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU-based high-speed visualization and its feasibility for a high-speed visualization system for hypocenter data.
Propeller speed and phase sensor
NASA Technical Reports Server (NTRS)
Collopy, Paul D. (Inventor); Bennett, George W. (Inventor)
1992-01-01
A speed and phase sensor for counterrotating aircraft propellers is described. A toothed wheel is attached to each propeller, and the teeth trigger a sensor as they pass, producing a sequence of signals. From the sequence of signals, the rotational speed of each propeller is computed based on the time intervals between successive signals. The speed can be computed several times during one revolution, thus giving speed information which is highly up to date. Given that spacing between teeth may not be uniform, the signals produced may be nonuniform in time. Error coefficients are derived to correct for nonuniformities in the resulting signals, thus allowing accurate speed to be computed despite the spacing nonuniformities. Phase can be viewed as the relative rotational position of one propeller with respect to the other, measured at a fixed time. Phase is computed from the signals.
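A minimal numeric sketch of the per-tooth speed computation described in the abstract, assuming a hypothetical eight-tooth wheel; the spacing-correction coefficients here are illustrative stand-ins for the patent's derived error coefficients:

```python
import numpy as np

def tooth_speeds(timestamps, n_teeth=8, spacing_correction=None):
    """Estimate rotational speed (rev/min) from tooth-passage timestamps.

    spacing_correction: fraction of a revolution spanned by each tooth gap
    (defaults to a uniform 1/n_teeth); corrects machining nonuniformity.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    if spacing_correction is None:
        spacing_correction = np.full(n_teeth, 1.0 / n_teeth)
    dt = np.diff(timestamps)  # time between successive tooth passages
    # Interval i covers spacing_correction[i % n_teeth] of one revolution,
    # so the speed estimate is refreshed n_teeth times per revolution.
    frac = np.array([spacing_correction[i % n_teeth] for i in range(len(dt))])
    return 60.0 * frac / dt   # rev/min, one estimate per tooth interval

# Example: a wheel at 2000 rpm with slightly nonuniform tooth spacing.
true_rpm = 2000.0
gaps = np.array([1.0, 1.0, 1.05, 1.0, 1.0, 0.95, 1.0, 1.0]) / 8.0
gaps /= gaps.sum()
times = np.concatenate([[0.0], np.cumsum(np.tile(gaps, 3) / (true_rpm / 60.0))])
print(tooth_speeds(times, spacing_correction=gaps))  # ~2000 rpm throughout
```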
Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei
2017-01-01
Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
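As a rough illustration of the surrogate strategy (not the authors' code), the sketch below fits a Kriging model with scikit-learn and uses cross-validation to select the correlation kernel; the "drag" response is a synthetic stand-in for CFD results, and a lift surrogate would be built the same way:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 4))   # 4 shape design variables (toy)
drag = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=60)

# Cross-validation chooses the Kriging correlation model, mirroring the
# paper's use of CV to construct the optimal Kriging surrogate.
candidates = [RBF(length_scale=0.3), Matern(length_scale=0.3, nu=1.5)]
scores = [cross_val_score(GaussianProcessRegressor(kernel=k, normalize_y=True),
                          X, drag, cv=5,
                          scoring="neg_mean_squared_error").mean()
          for k in candidates]
best = candidates[int(np.argmax(scores))]
surrogate = GaussianProcessRegressor(kernel=best, normalize_y=True).fit(X, drag)

# The cheap surrogate then stands in for CFD during optimization.
x_new = rng.uniform(0.0, 1.0, size=(1, 4))
mean, std = surrogate.predict(x_new, return_std=True)
print(f"predicted drag {mean[0]:.3f} +/- {std[0]:.3f}")
```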
A self-synchronized high speed computational ghost imaging system: A leap towards dynamic capturing
NASA Astrophysics Data System (ADS)
Suo, Jinli; Bian, Liheng; Xiao, Yudong; Wang, Yongjin; Zhang, Lei; Dai, Qionghai
2015-11-01
High quality computational ghost imaging needs to acquire a large number of correlated measurements between the to-be-imaged scene and different reference patterns, so ultra-high-speed data acquisition is of crucial importance in real applications. To raise the acquisition efficiency, this paper reports a high speed computational ghost imaging system using a 20 kHz spatial light modulator together with a 2 MHz photodiode. Technically, the synchronization between such high frequency illumination and the bucket detector needs nanosecond trigger precision, so the development of the synchronization module is quite challenging. To handle this problem, we propose a simple and effective computational self-synchronization scheme by building a general mathematical model and introducing a high precision synchronization technique. The resulting acquisition is around 14 times faster than the state of the art and takes an important step towards ghost imaging of dynamic scenes. Besides, the proposed scheme is a general approach with high flexibility for readily incorporating other illuminators and detectors.
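The correlation step at the core of computational ghost imaging can be sketched in a few lines; the scene size, pattern count, and binary patterns below are toy assumptions, not the authors' 20 kHz hardware settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 6000                  # 32x32 scene, 6000 modulation patterns

scene = np.zeros((n, n))
scene[8:24, 12:20] = 1.0                                       # toy object
patterns = rng.integers(0, 2, size=(m, n * n)).astype(float)   # SLM patterns

# Bucket detector: one scalar per pattern, the total light from the scene.
bucket = patterns @ scene.ravel()

# Ghost image: correlate bucket fluctuations with the known patterns.
g = (bucket - bucket.mean()) @ (patterns - patterns.mean(axis=0)) / m
image = g.reshape(n, n)
print(np.unravel_index(image.argmax(), image.shape))  # peaks inside the object
```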
ERIC Educational Resources Information Center
Newby, Gregory B.
Information technologies such as computer mediated communication (CMC), virtual reality, and telepresence can provide the communication flow required by high-speed management techniques that high-technology industries have adopted in response to changes in the climate of competition. Intra-corporate CMC might be used for a variety of purposes…
Peregrine System Configuration | High-Performance Computing | NREL
Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory and are connected to the high-speed InfiniBand network.
Tse computers. [ultrahigh speed optical processing for two dimensional binary image
NASA Technical Reports Server (NTRS)
Schaefer, D. H.; Strong, J. P., III
1977-01-01
An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.
DOT National Transportation Integrated Search
1997-06-01
This report describes analysis tools to predict shift under high-speed vehicle-track interaction. The analysis approach is based on two fundamental models developed (as part of this research); the first model computes the track lateral residua...
NASA Technical Reports Server (NTRS)
Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.
1992-01-01
The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.
NASA Technical Reports Server (NTRS)
Aggarwal, Arun K.
1993-01-01
Spherical roller bearings have typically been used in applications with speeds limited to about 5000 rpm and operation limited to less than about 0.25 million DN. However, spherical roller bearings are now being designed for high load and high speed applications, including aerospace applications. A computer program, SASHBEAN, was developed to provide an analytical tool to design, analyze, and predict the performance of high speed, single row, angular contact (including zero contact angle) spherical roller bearings. The material presented is the mathematical formulation and analytical methods used to develop the computer program SASHBEAN. For a given set of operating conditions, the program calculates the bearing ring deflections (axial and radial), roller deflections, contact area stresses, depth and magnitude of maximum shear stresses, axial thrust, rolling element and cage rotational speeds, lubrication parameters, fatigue lives, and rates of heat generation. Centrifugal forces and gyroscopic moments are fully considered. The program is also capable of performing steady-state and time-transient thermal analyses of the bearing system.
Civil propulsion technology for the next twenty-five years
NASA Technical Reports Server (NTRS)
Rosen, Robert; Facey, John R.
1987-01-01
The next twenty-five years will see major advances in civil propulsion technology that will result in completely new aircraft systems for domestic, international, commuter and high-speed transports. These aircraft will include advanced aerodynamic, structural, and avionic technologies resulting in major new system capabilities and economic improvements. Propulsion technologies will include high-speed turboprops in the near term, very high bypass ratio turbofans, high efficiency small engines and advanced cycles utilizing high temperature materials for high-speed propulsion. Key fundamental enabling technologies include increased temperature capability and advanced design methods. Increased temperature capability will be based on improved composite materials such as metal matrix, intermetallics, ceramics, and carbon/carbon as well as advanced heat transfer techniques. Advanced design methods will make use of advances in internal computational fluid mechanics, reacting flow computation, computational structural mechanics and computational chemistry. The combination of advanced enabling technologies, new propulsion concepts and advanced control approaches will provide major improvements in civil aircraft.
High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects
NASA Technical Reports Server (NTRS)
Schutt-Aine, Jose E.
1996-01-01
The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.
Facilities | Integrated Energy Solutions | NREL
strategies needed to optimize our entire energy system. High-Performance Computing Data Center: high-performance computing facilities at NREL provide high-speed…
Fault tolerant computer control for a Maglev transportation system
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George
1994-01-01
Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.
High-Speed Photography with Computer Control.
ERIC Educational Resources Information Center
Winters, Loren M.
1991-01-01
Describes the use of a microcomputer as an intervalometer for the control and timing of several flash units to photograph high-speed events. Applies this technology to study the oscillations of a stretched rubber band, the deceleration of high-speed projectiles in water, the splashes of milk drops, and the bursts of popcorn kernels. (MDH)
High Performance Computing Meets Energy Efficiency - Continuum Magazine | NREL
[Figure: wind turbine simulation by Patrick J. Moriarty and Matthew J. Churchfield, NREL] The new High Performance Computing Data Center at the National Renewable Energy Laboratory (NREL) hosts high-speed, high-volume data…
High-speed pulse-shape generator, pulse multiplexer
Burkhart, Scott C.
2002-01-01
The invention combines arbitrary amplitude high-speed pulses for precision pulse shaping for the National Ignition Facility (NIF). The circuitry combines arbitrary height pulses which are generated by replicating scaled versions of a trigger pulse and summing them, delayed in time, on a pulse line. The combined electrical pulses are connected to an electro-optic modulator which modulates a laser beam. The circuit can also be adapted to combine multiple channels of high speed data into a single train of electrical pulses which generates the optical pulses for very high speed optical communication. The invention has application in laser pulse shaping for inertial confinement fusion, in optical data links for computers and telecommunications, and in laser pulse shaping for atomic excitation studies. The invention can be used to effect at least a 10× increase in all-fiber communication lines. It allows a greatly increased data transfer rate between high-performance computers. The invention is inexpensive enough to bring high-speed video and data services to homes through a super modem.
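A minimal sketch of the pulse-combining principle (scaled, time-delayed replicas of a trigger pulse summed on a pulse line); the Gaussian trigger shape, amplitudes, and delays are illustrative assumptions, not the patent's circuit values:

```python
import numpy as np

def shaped_pulse(t, trigger, amplitudes, delays):
    """Sum scaled, delayed replicas of a trigger pulse (the shaping idea)."""
    return sum(a * trigger(t - d) for a, d in zip(amplitudes, delays))

# Illustrative Gaussian trigger pulse, ~100 ps wide.
trigger = lambda t: np.exp(-0.5 * (t / 100e-12) ** 2)

t = np.linspace(0, 3e-9, 1000)
amps = [0.2, 0.5, 1.0, 0.7]                 # arbitrary per-replica heights
delays = [0.4e-9, 0.9e-9, 1.4e-9, 1.9e-9]   # staggered delays on the pulse line
envelope = shaped_pulse(t, trigger, amps, delays)
print(envelope.max())                       # peak of the shaped envelope
```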
NASA Astrophysics Data System (ADS)
Wu, Huaying; Wang, Li Zhong; Wang, Yantao; Yuan, Xiaolei
2018-05-01
The blade or surface grinding blade of a hypervelocity grinding wheel may be damaged by an excessively high rotation rate of the machine spindle and then fly out. Its speed as a projectile may severely endanger nearby personnel. A critical thickness model for the protective plate of a high-speed machine is studied in this paper. For ease of analysis, the shapes of the possible impact objects flying from the high-speed machine are simplified to a sharp-nose model, a ball-nose model, and a flat-nose model, whose front-end shapes represent point, line, and surface contact, respectively. Impact analysis based on the Johnson-Cook (J-C) model is performed for low-carbon steel plates of different thicknesses. A computational model for the critical thickness of the protective plate of a high-speed machine is established according to the damage characteristics of the thin plate, relating plate thickness to the mass, shape, size, and impact speed of the impact object. An air cannon is used for impact testing, and the model accuracy is validated. This model can guide selection of the thickness of the single-layer outer protective plate of a high-speed machine.
A large high vacuum, high pumping speed space simulation chamber for electric propulsion
NASA Technical Reports Server (NTRS)
Grisnik, Stanley P.; Parkes, James E.
1994-01-01
Testing high power electric propulsion devices poses unique requirements on space simulation facilities. Very high pumping speeds are required to maintain high vacuum levels while handling large volumes of exhaust products. These pumping speeds are significantly higher than those available in most existing vacuum facilities. There is also a requirement for relatively large vacuum chamber dimensions to minimize facility wall/thruster plume interactions and to accommodate far field plume diagnostic measurements. A 4.57 m (15 ft) diameter by 19.2 m (63 ft) long vacuum chamber at NASA Lewis Research Center is described. The chamber utilizes oil diffusion pumps in combination with cryopanels to achieve high vacuum pumping speeds at high vacuum levels. The facility is computer controlled for all phases of operation from start-up, through testing, to shutdown. The computer control system increases the utilization of the facility and reduces the manpower requirements needed for facility operations.
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
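A CPU-side sketch of the batching idea, assuming a hypothetical per-GPU pipeline reduced here to an FFT along depth, with a process pool standing in for the GPUs; in practice the batch size would be tuned to GPU memory as the paper describes:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_batch(batch):
    """Stand-in for the per-GPU A-scan pipeline (depth FFT magnitude here)."""
    return np.abs(np.fft.fft(batch, axis=1))

def process_volume(ascans, n_workers=4, batch_ascans=250):
    """Split A-scans into fixed-size batches and assign them to workers,
    mimicking the paper's per-GPU task assignment."""
    batches = [ascans[i:i + batch_ascans]
               for i in range(0, len(ascans), batch_ascans)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.vstack(list(pool.map(process_batch, batches)))

if __name__ == "__main__":
    ascans = np.random.rand(1000, 256)   # toy: 1000 A-scans, 256 samples deep
    volume = process_volume(ascans, n_workers=2)
    print(volume.shape)                  # (1000, 256), reassembled in order
```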
Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds
NASA Technical Reports Server (NTRS)
Boppe, Charles W.
1987-01-01
A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides high boundary point density/cost ratio. High resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data using four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.
NASA Astrophysics Data System (ADS)
Kotov, D. V.; Yee, H. C.; Wray, A. A.; Sjögreen, Björn; Kritsuk, A. G.
2018-01-01
The authors regret the typographic errors made in equation (4), and the missing phrase after equation (4), in the article "Numerical dissipation control in high order shock-capturing schemes for LES of low speed flows" [J. Comput. Phys. 307 (2016) 189-202].
Computational fluid dynamics research
NASA Technical Reports Server (NTRS)
Chandra, Suresh; Jones, Kenneth; Hassan, Hassan; Mcrae, David Scott
1992-01-01
The focus of research in the computational fluid dynamics (CFD) area is twofold: (1) to develop new approaches for turbulence modeling so that high speed compressible flows can be studied for applications to entry and re-entry flows; and (2) to perform research to improve CFD algorithm accuracy and efficiency for high speed flows. Research activities, faculty and student participation, publications, and financial information are outlined.
An Evolutionary Method for Financial Forecasting in Microscopic High-Speed Trading Environment.
Huang, Chien-Feng; Li, Hsu-Chih
2017-01-01
The advancement of information technology in financial applications has led to fast market-driven events that prompt flash decision-making and actions issued by computer algorithms. As a result, today's markets experience intense activity in a highly dynamic environment where trading systems respond to one another at a much faster pace than before. This new breed of technology involves the implementation of high-speed trading strategies which generate a significant portion of the activity in financial markets and present researchers with a wealth of information not available in traditional low-speed trading environments. In this study, we aim at developing feasible computational intelligence methodologies, particularly genetic algorithms (GA), to shed light on high-speed trading research using price data of stocks on the microscopic level. Our empirical results show that the proposed GA-based system is able to improve the accuracy of the prediction significantly for price movement, and we expect this GA-based methodology to advance the current state of research for high-speed trading and other relevant financial applications.
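A toy sketch of the GA approach, assuming a simple linear up/down predictor over lagged returns with truncation selection, uniform crossover, and Gaussian mutation; it illustrates the evolutionary loop only, not the authors' trading system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy tick-level returns; real work would use microscopic stock price data.
returns = rng.normal(0, 1e-4, size=5000)
X = np.lib.stride_tricks.sliding_window_view(returns[:-1], 5)  # 5 lagged returns
y = (returns[5:] > 0).astype(int)                              # next-tick direction

def accuracy(w):
    """Fitness: accuracy of sign(w . lags) as an up/down predictor."""
    pred = (X @ w > 0).astype(int)
    return (pred == y).mean()

pop = rng.normal(size=(40, 5))               # population of weight vectors
for gen in range(30):
    fit = np.array([accuracy(w) for w in pop])
    parents = pop[np.argsort(fit)[-20:]]     # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(5) < 0.5                            # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, 5))  # mutation
    pop = np.vstack([parents, children])
print("best accuracy:", max(accuracy(w) for w in pop))
```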
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
Transient Simulation of Ram Accelerator Flowfields
1993-01-01
Addresses propulsive flows with combustion chemistry and advanced turbulence models. Near-wall effects have been dealt with via the low Reynolds number formulation of Chien and the recent model of Rodi. Reference: Dash, S.M., "Advanced Computational Models for Analyzing High-Speed Propulsive Flowfields," JANNAF Propulsion Meeting, CPIA Pub. 550.
Ways of achieving continuous service from computers
NASA Technical Reports Server (NTRS)
Quinn, M. J., Jr.
1974-01-01
This paper outlines the methods used in the real-time computer complex to keep computers operating. Methods include selectover, high-speed restart, and low-speed restart. The hardware and software needed to implement these methods is discussed as well as the system recovery facility, alternate device support, and timeout. In general, methods developed while supporting the Gemini, Apollo, and Skylab space missions are presented.
The Impact of High-Speed Internet Connectivity at Home on Eighth-Grade Student Achievement
ERIC Educational Resources Information Center
Kingston, Kent J.
2013-01-01
In the fall of 2008 Westside Community Schools - District 66, in Omaha, Nebraska implemented a one-to-one notebook computer take home model for all eighth-grade students. The purpose of this study was to determine the effect of a required yearlong one-to-one notebook computer program supported by high-speed Internet connectivity at school on (a)…
Scientific Visualization in High Speed Network Environments
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kutler, Paul (Technical Monitor)
1997-01-01
In several cases, new visualization techniques have vastly increased the researcher's ability to analyze and comprehend data. Similarly, the role of networks in providing an efficient supercomputing environment has become more critical and continues to grow at a faster rate than the increase in the processing capabilities of supercomputers. A close relationship between scientific visualization and high-speed networks in providing an important link to support efficient supercomputing is identified. The two technologies are driven by the increasing complexity and volume of supercomputer data. The interaction of scientific visualization and high-speed networks in a Computational Fluid Dynamics simulation/visualization environment is described. Current capabilities supported by high-speed networks, supercomputers, and high-performance graphics workstations at the Numerical Aerodynamic Simulation (NAS) Facility at NASA Ames Research Center are described. Applied research in providing a supercomputer visualization environment to support future computational requirements is summarized.
High performance computing and communications program
NASA Technical Reports Server (NTRS)
Holcomb, Lee
1992-01-01
A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.
Design and Operating Characteristics of High-Speed, Small-Bore Cylindrical-Roller Bearings
NASA Technical Reports Server (NTRS)
Pinel, Stanley, I.; Signer, Hans R.; Zaretsky, Erwin V.
2000-01-01
The computer program SHABERTH was used to analyze 35-mm-bore cylindrical roller bearings designed and manufactured for high-speed turbomachinery applications. Parametric tests of the bearings were conducted on a high-speed, high-temperature bearing tester and the results were compared with the computer predictions. Bearings with a channeled inner ring were lubricated through the inner ring, while bearings with a channeled outer ring were lubricated with oil jets. Tests were run with and without outer-ring cooling. The predicted bearing life decreased with increasing speed because of increased contact stresses caused by centrifugal load. Lower temperatures, less roller skidding, and lower power losses were obtained with channeled inner rings. Power losses calculated by the SHABERTH computer program correlated reasonably well with the test results. The Parker formula for XCAV (used in SHABERTH as a measure of oil volume in the bearing cavity) needed to be adjusted to reflect the prevailing operating conditions. The XCAV formula will need to be further refined to reflect roller bearing lubrication, ring design, cage design, and location of the cage-controlling land.
IMAGE: A Design Integration Framework Applied to the High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1993-01-01
Effective design of the High Speed Civil Transport requires the systematic application of design resources throughout a product's life-cycle. Information obtained from the use of these resources is used for the decision-making processes of Concurrent Engineering. Integrated computing environments facilitate the acquisition, organization, and use of required information. State-of-the-art computing technologies provide the basis for the Intelligent Multi-disciplinary Aircraft Generation Environment (IMAGE) described in this paper. IMAGE builds upon existing agent technologies by adding a new component called a model. With the addition of a model, the agent can provide accountable resource utilization in the presence of increasing design fidelity. The development of a zeroth-order agent is used to illustrate agent fundamentals. Using a CATIA(TM)-based agent from previous work, a High Speed Civil Transport visualization system linking CATIA, FLOPS, and ASTROS will be shown. These examples illustrate the important role of the agent technologies used to implement IMAGE, and together they demonstrate that IMAGE can provide an integrated computing environment for the design of the High Speed Civil Transport.
Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker.
Narita, Gaku; Watanabe, Yoshihiro; Ishikawa, Masatoshi
2017-03-01
Dynamic projection mapping for moving objects has attracted much attention in recent years. However, conventional approaches have faced some issues, such as the target objects being limited to rigid objects, and the limited moving speed of the targets. In this paper, we focus on dynamic projection mapping onto rapidly deforming non-rigid surfaces with a speed sufficiently high that a human does not perceive any misalignment between the target object and the projected images. In order to achieve such projection mapping, we need a high-speed technique for tracking non-rigid surfaces, which is still a challenging problem in the field of computer vision. We propose the Deformable Dot Cluster Marker (DDCM), a novel fiducial marker for high-speed tracking of non-rigid surfaces using a high-frame-rate camera. The DDCM has three performance advantages. First, it can be detected even when it is strongly deformed. Second, it realizes robust tracking even in the presence of external and self occlusions. Third, it allows millisecond-order computational speed. Using DDCM and a high-speed projector, we realized dynamic projection mapping onto a deformed sheet of paper and a T-shirt with a speed sufficiently high that the projected images appeared to be printed on the objects.
Design of a high-speed digital processing element for parallel simulation
NASA Technical Reports Server (NTRS)
Milner, E. J.; Cwynar, D. S.
1983-01-01
A prototype of a custom designed computer to be used as a processing element in a multiprocessor based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real time simulations are needed for closed loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by: prefetching the next instruction while the current one is executing, transporting data using high speed data busses, and using state of the art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.
NASA Astrophysics Data System (ADS)
Guo, Jie; Zhu, Chang'an
2016-01-01
The development of optics and computer technologies enables the application of vision-based techniques, which use digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and does not introduce additional mass. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a speed of hundreds of frames per second. To process the captured images in the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel onto the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish vibration measurement of large-scale structures.
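A sketch of the inverse compositional Lucas-Kanade idea for a pure-translation warp: the steepest-descent images and Hessian depend only on the template, so they are computed once and each frame costs little. The smoothing, patch size, and test shift are illustrative assumptions; the authors' specific modification is not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def ic_lk_translation(image, template, p, iters=50):
    """Inverse compositional Lucas-Kanade for a translation warp p = (dy, dx).

    Steepest-descent images and Hessian depend only on the template, so they
    are precomputed -- the source of the method's per-frame speed."""
    gy, gx = np.gradient(template)
    sd = np.stack([gy.ravel(), gx.ravel()], axis=1)  # steepest-descent images
    H_inv = np.linalg.inv(sd.T @ sd)                 # 2x2 Hessian, inverted once
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    for _ in range(iters):
        warped = map_coordinates(image, [ys + p[0], xs + p[1]], order=1)
        dp = H_inv @ (sd.T @ (warped - template).ravel())
        p = p - dp                # compose the inverse of the increment
        if np.hypot(*dp) < 1e-4:
            break
    return p

# Toy test: recover a (3, -2) pixel shift of a smooth random texture.
rng = np.random.default_rng(3)
frame0 = gaussian_filter(rng.random((100, 100)), 3)
template = frame0[40:60, 50:70]
frame1 = np.roll(frame0, shift=(3, -2), axis=(0, 1))
print(ic_lk_translation(frame1, template, p=np.array([40.0, 50.0])))
# expected: approximately [43. 48.]
```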
Recent Advances and Issues in Computers. Oryx Frontiers of Science Series.
ERIC Educational Resources Information Center
Gay, Martin K.
Discussing recent issues in computer science, this book contains 11 chapters covering: (1) developments that have the potential for changing the way computers operate, including microprocessors, mass storage systems, and computing environments; (2) the national computational grid for high-bandwidth, high-speed collaboration among scientists, and…
NASA Astrophysics Data System (ADS)
Moser, Stefan; Nau, Siegfried; Salk, Manfred; Thoma, Klaus
2014-02-01
The in situ investigation of dynamic events, ranging from car crashes to ballistics, is often key to the understanding of dynamic material behavior. In many cases the important processes and interactions happen on the scale of milliseconds to microseconds, at speeds of 1000 m/s or more. Often, 3D information is necessary to fully capture and analyze all relevant effects; high-speed 3D-visualization techniques are thus required for in situ analysis. 3D-capable optical high-speed methods are often impaired by luminous effects and dust, while flash x-ray based methods usually deliver only 2D data. In this paper, a novel 3D-capable flash x-ray based method, in situ flash x-ray high-speed computed tomography, is presented. The method is capable of producing 3D reconstructions of high-speed processes based on an undersampled dataset consisting of only a few (typically 3 to 6) x-ray projections. The major challenges are identified and discussed, and the chosen solution is outlined. The method is illustrated with an exemplary application to a 1000 m/s high-speed impact event on the scale of microseconds. A quantitative analysis of the in situ measurement of the material fragments, with a 3D reconstruction at 1 mm voxel size, is presented and the results are discussed. The results show that the HSCT method allows gaining valuable visual and quantitative mechanical information for the understanding and interpretation of high-speed events.
2013-05-01
Control logic performs control function computations and is connected to the Full Authority Digital Engine Control (FADEC) via a high-speed data communication bus. The short-term distributed engine control configurations will be core… a data concentrator; high-temperature electronics; and a high-speed communication bus between the data concentrator and the control law processor (master FADEC).
Synchronizing Photography For High-Speed-Engine Research
NASA Technical Reports Server (NTRS)
Chun, K. S.
1989-01-01
Light flashes when shaft reaches predetermined angle. Synchronization system facilitates visualization of flow in high-speed internal-combustion engines. Designed for cinematography and holographic interferometry, system synchronizes camera and light source with predetermined rotational angle of engine shaft. 10-bit resolution of absolute optical shaft encoder adapted, and 2^10 (1,024) combinations of 10-bit binary data computed to corresponding angle values. Precomputed angle values programmed into EPROMs (erasable programmable read-only memories) for use as angle lookup table. Resolves shaft angle to within 0.35 degree at rotational speeds up to 73,240 revolutions per minute.
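The angle lookup table is straightforward to sketch; the 1,024-entry table and ~0.35 degree resolution follow from the 10-bit encoder, while the trigger tolerance below is an assumption:

```python
import numpy as np

# 10-bit absolute encoder: 1024 codes per revolution -> 360/1024 = 0.3516 deg
# resolution, consistent with the stated 0.35-degree figure.
angle_table = np.arange(1024) * (360.0 / 1024)   # the EPROM lookup table

def flash_trigger(code, target_deg, tol_deg=360.0 / 1024):
    """Fire the flash when the encoder code maps to the preset shaft angle."""
    return abs(angle_table[code] - target_deg) < tol_deg

print(angle_table[512])            # 180.0 degrees
print(flash_trigger(512, 180.0))   # True: flash fires at this code
```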
Nearly Interactive Parabolized Navier-Stokes Solver for High Speed Forebody and Inlet Flows
NASA Technical Reports Server (NTRS)
Benson, Thomas J.; Liou, May-Fun; Jones, William H.; Trefny, Charles J.
2009-01-01
A system of computer programs is being developed for the preliminary design of high speed inlets and forebodies. The system comprises four functions: geometry definition, flow grid generation, flow solver, and graphics post-processor. The system runs on a dedicated personal computer using the Windows operating system and is controlled by graphical user interfaces written in MATLAB (The Mathworks, Inc.). The flow solver uses the Parabolized Navier-Stokes equations to compute millions of mesh points in several minutes. Sample two-dimensional and three-dimensional calculations are demonstrated in the paper.
The science of computing - Parallel computation
NASA Technical Reports Server (NTRS)
Denning, P. J.
1985-01-01
Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic component technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6, or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, and COBOL rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.
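A minimal sketch of the idea of splitting one computation into parts processed simultaneously, here a partial-sum decomposition of the series for pi^2/6 spread across worker processes; the chunking scheme is illustrative:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_sum(args):
    """One processor's share of the series pi^2/6 = sum over k of 1/k^2."""
    start, stop = args
    k = np.arange(start, stop, dtype=float)
    return float(np.sum(1.0 / k ** 2))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    bounds = np.linspace(1, n + 1, workers + 1, dtype=int)
    chunks = list(zip(bounds[:-1], bounds[1:]))   # one slice per processor
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine partial results
    print(total, np.pi ** 2 / 6)
```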
NASA Technical Reports Server (NTRS)
Byrne, F.
1981-01-01
Time-shared interface speeds data processing in distributed computer network. Two-level high-speed scanning approach routes information to buffer, portion of which is reserved for series of "first-in, first-out" memory stacks. Buffer address structure and memory are protected from noise or failed components by error correcting code. System is applicable to any computer or processing language.
An FPGA-based High Speed Parallel Signal Processing System for Adaptive Optics Testbed
NASA Astrophysics Data System (ADS)
Kim, H.; Choi, Y.; Yang, Y.
In this paper a state-of-the-art FPGA (Field Programmable Gate Array) based high speed parallel signal processing system (SPS) for an adaptive optics (AO) testbed with a 1 kHz wavefront error (WFE) correction frequency is reported. The AO system consists of a Shack-Hartmann sensor (SHS), a deformable mirror (DM), a tip-tilt sensor (TTS), a tip-tilt mirror (TTM), and an FPGA-based high performance SPS to correct wavefront aberrations. The SHS is composed of 400 subapertures and the DM of 277 actuators in Fried geometry, requiring an SPS with high-speed parallel computing capability. In this study, the target WFE correction speed is 1 kHz; this requires massive parallel computing capabilities as well as strict hard real-time constraints on measurements from sensors, on matrix computation latency for the correction algorithms, and on output of control signals for actuators. To meet these requirements, an FPGA-based real-time SPS with parallel computing capabilities is proposed. In particular, the SPS is made up of a National Instruments (NI) real-time computer and five FPGA boards based on the state-of-the-art Xilinx Kintex-7 FPGA. Programming is done in NI's LabVIEW environment, providing flexibility when applying different algorithms for WFE correction. It also provides a faster programming and debugging environment compared to conventional ones. One of the five FPGAs is assigned to measure the TTS and calculate control signals for the TTM, while the remaining four are used to receive the SHS signal, calculate slopes for each subaperture, and compute the correction signal for the DM. With these parallel processing capabilities of the SPS, an overall closed-loop WFE correction speed of 1 kHz has been achieved. System requirements, architecture, and implementation issues are described, and experimental results are given.
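A numeric sketch of the per-subaperture slope computation and the matrix-vector control step that the FPGA boards parallelize; the 8x8 spot images and the random reconstruction matrix are stand-ins, while the 400-subaperture and 277-actuator dimensions come from the abstract:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sub, n_act = 400, 277                  # SHS and DM sizes from the abstract

def centroid_slopes(spots):
    """Per-subaperture centroid of each 8x8 spot image -> x/y slopes."""
    ys, xs = np.mgrid[0:8, 0:8]
    w = spots.sum(axis=(1, 2))
    cy = (spots * ys).sum(axis=(1, 2)) / w - 3.5   # offset from spot center
    cx = (spots * xs).sum(axis=(1, 2)) / w - 3.5
    return np.concatenate([cy, cx])                # 800 slope values

R = rng.normal(size=(n_act, 2 * n_sub)) * 0.01     # stand-in reconstructor
spots = rng.random((n_sub, 8, 8))                  # toy SHS frame
slopes = centroid_slopes(spots)
dm_cmd = R @ slopes                                # 277 actuator commands
print(dm_cmd.shape)
```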
Promoting High-Performance Computing and Communications. A CBO Study.
ERIC Educational Resources Information Center
Webre, Philip
In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…
NASA Astrophysics Data System (ADS)
Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2018-04-01
In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image storage architecture offers better speed and device-consumption efficiency than the previous kernel storage architecture. We further improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a speed boost of more than 67 times; and 3) 71.4% energy savings.
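A software sketch of the image-storage idea: binary pixels held as conductances, the kernel applied as read voltages, and each output read as a summed column current. The LRS/HRS values and read voltage are assumptions, and negative kernel entries would need differential lines in real hardware:

```python
import numpy as np

rng = np.random.default_rng(5)

# Binary image stored as RRAM conductances: LRS ~ 1e-4 S for '1',
# HRS ~ 1e-6 S for '0' (assumed values).
image = rng.integers(0, 2, size=(28, 28))
G = np.where(image == 1, 1e-4, 1e-6)             # conductance map (siemens)

kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def rram_convolve(G, kernel, v_read=0.1):
    """Each output: kernel entries applied as read voltages to a 3x3 patch;
    the bit line sums currents I = sum(G * V) (Kirchhoff's current law)."""
    h, w = G.shape
    out = np.zeros((h - 2, w - 2))
    V = kernel * v_read                           # voltages encode the kernel
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(G[i:i + 3, j:j + 3] * V)  # summed column current
    return out

print(rram_convolve(G, kernel).shape)             # (26, 26) feature map
```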
NASA Astrophysics Data System (ADS)
Larger, Laurent; Baylón-Fuentes, Antonio; Martinenghi, Romain; Udaltsov, Vladimir S.; Chembo, Yanne K.; Jacquot, Maxime
2017-01-01
Reservoir computing, originally referred to as an echo state network or a liquid state machine, is a brain-inspired paradigm for processing temporal information. It involves learning a "read-out" interpretation of the nonlinear transients developed by a high-dimensional dynamical system when the latter is excited by the information signal to be processed. This novel computational paradigm is derived from recurrent neural network and machine learning techniques. It has recently been implemented in photonic hardware for a dynamical system, which opens the path to ultrafast brain-inspired computing. We report on a novel implementation involving an electro-optic phase-delay dynamics designed with off-the-shelf optoelectronic telecom devices, thus providing the targeted wide bandwidth. Computational efficiency is demonstrated experimentally with speech-recognition tasks. State-of-the-art speed performance reaches one million words per second, with a very low word error rate. Additionally, beyond record-speed processing, our investigations have revealed computing-efficiency improvements through as-yet-unexplored temporal-information-processing techniques, such as simultaneous multisample injection and pitched sampling at the read-out compared to information "write-in".
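A minimal echo state network sketch showing the two defining ingredients, a fixed random reservoir and a trained linear read-out; the delay task, reservoir size, and ridge regularization are toy assumptions unrelated to the photonic implementation:

```python
import numpy as np

rng = np.random.default_rng(6)
n_res, T = 200, 2000

# Input: a scalar signal; toy task: reproduce it delayed by 5 steps (a
# stand-in for the read-out training used in the speech experiments).
u = rng.uniform(-1, 1, size=T)
target = np.roll(u, 5)

W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1: echo state

x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):                               # nonlinear transient dynamics
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Learn only the linear read-out (ridge regression); the reservoir is fixed.
A, b = states[100:], target[100:]                # discard warm-up transient
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)
print("train NRMSE:", np.sqrt(np.mean((A @ W_out - b) ** 2)) / b.std())
```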
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to complete vibration measurements in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. To process the captured images in the computer, a normalized cross-correlation (NCC) template-tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm accomplishes a displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
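A minimal sketch of one displacement extraction in this style, using OpenCV's normalized cross-correlation restricted to a local search window plus a three-point parabolic fit for subpixel accuracy (window sizes and names are illustrative; the paper's modified local search is more elaborate):

```python
import cv2

def track_ncc(frame, template, search_tl, search_sz):
    """Locate `template` inside a local search window of `frame`;
    returns the match position with subpixel refinement."""
    x0, y0 = search_tl
    roi = frame[y0:y0 + search_sz, x0:x0 + search_sz]
    score = cv2.matchTemplate(roi, template, cv2.TM_CCORR_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(score)     # integer-pixel NCC peak

    def subpix(s, p):
        if 0 < p < len(s) - 1:                   # 3-point parabolic peak fit
            a, b, c = s[p - 1], s[p], s[p + 1]
            denom = a - 2 * b + c
            if denom != 0:
                return p + 0.5 * (a - c) / denom
        return float(p)

    return x0 + subpix(score[py, :], px), y0 + subpix(score[:, px], py)
```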
High Speed White Dwarf Asteroseismology with the Herty Hall Cluster
NASA Astrophysics Data System (ADS)
Gray, Aaron; Kim, A.
2012-01-01
Asteroseismology is the process of using the observed oscillations of stars to infer their interior structure. In high-speed asteroseismology, we accomplish this by quickly computing hundreds of thousands of models to match the observed period spectra. Each model takes five to ten seconds to run on a single processor. Therefore, we use a cluster of sixteen Dell workstations with dual-core processors. The computers use the Ubuntu operating system and Apache Hadoop software to manage workloads.
Lubrication of optimized-design tapered-roller bearings to 2.4 million DN
NASA Technical Reports Server (NTRS)
Parker, R. J.; Pinel, S. I.; Signer, Hans R.
1980-01-01
The performance of 120.65 mm (4.75 in.) bore, high-speed-design tapered roller bearings was investigated at shaft speeds to 20,000 rpm (2.4 million DN) under combined thrust and radial load. The test bearing design was computer optimized for high-speed operation. Temperature distribution and bearing heat generation were determined as a function of shaft speed, radial and thrust loads, lubricant flow rates, and lubricant inlet temperature. The high-speed-design tapered roller bearing operated successfully at shaft speeds up to 20,000 rpm under heavy thrust and radial loads. Bearing temperatures and heat generation with the high-speed-design bearing were significantly less than those of a modified standard bearing tested previously. Cup cooling was effective in decreasing the high cup temperatures to levels equal to the cone temperature.
Aeronautics research and technology program and specific objectives
NASA Technical Reports Server (NTRS)
1981-01-01
Aeronautics research and technology program objectives in fluid and thermal physics, materials and structures, controls and guidance, human factors, multidisciplinary activities, computer science and applications, propulsion, rotorcraft, high speed aircraft, subsonic aircraft, and rotorcraft and high speed aircraft systems technology are addressed.
Dynamic Test Program, Contact Power Collection for High Speed Tracked Vehicles
DOT National Transportation Integrated Search
1973-01-01
A laboratory test program is defined for determining the dynamic characteristics of a contact power collection system for a high speed tracked vehicle. The use of a hybrid computer in conjunction with hydraulic exciters to simulate the expected dynami...
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2011-01-01
The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aero-challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion (Rotor 1, Stator 2, and Rotor 2) of the turbine. The 3-D computational results yield the same efficiency versus speed trends predicted by meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.
Processing Device for High-Speed Execution of an Xrisc Computer Program
NASA Technical Reports Server (NTRS)
Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)
2016-01-01
A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and control execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provide the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values is loaded into the register and the set of output values is unloaded from the register in parallel with processing of the current calculation set.
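The register scheme amounts to double buffering: while the current calculation set is processed, the next input set is loaded and the previous output set is unloaded. A host-side Python analogue of that overlap (all callables here are invented for the sketch; the patent does this with hardware registers and a sequencer):

```python
from concurrent.futures import ThreadPoolExecutor

def process_stream(blocks, load, compute, unload):
    """Overlap loading of the next set and unloading of the previous
    set with computation of the current set."""
    with ThreadPoolExecutor(max_workers=2) as io:
        pending = io.submit(load, blocks[0])     # prefetch first input set
        prev_out = None
        for i in range(len(blocks)):
            current = pending.result()           # current calculation set ready
            if i + 1 < len(blocks):
                pending = io.submit(load, blocks[i + 1])   # subsequent set
            if prev_out is not None:
                io.submit(unload, prev_out)      # previous calculation set
            prev_out = compute(current)          # runs while I/O proceeds
        io.submit(unload, prev_out)              # flush the final output set
```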
NASA Aeronautics: Research and Technology Program Highlights
NASA Technical Reports Server (NTRS)
1990-01-01
This report contains numerous color illustrations to describe the NASA programs in aeronautics. The basic ideas involved are explained in brief paragraphs. The seven chapters deal with subsonic aircraft, high-speed transport, high-performance military aircraft, hypersonic/transatmospheric vehicles, critical disciplines, national facilities, and organizations and installations. Some individual aircraft discussed are: the SR-71 aircraft, aerospace planes, the high-speed civil transport (HSCT), the X-29 forward-swept wing research aircraft, and the X-31 aircraft. Critical disciplines discussed are numerical aerodynamic simulation, computational fluid dynamics, computational structural dynamics, and new experimental testing techniques.
Lindberg, D A; Humphreys, B L
1995-01-01
The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116
Numerical Simulation of High-Speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Jaberi, F. A.; Colucci, P. J.; James, S.; Givi, P.
1996-01-01
The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES) methods for computational analysis of high-speed reacting turbulent flows. We have just completed the first year of Phase 3 of this research.
Zhang, Xiaoliang; Li, Jiali; Liu, Yugang; Zhang, Zutao; Wang, Zhuojun; Luo, Dianyuan; Zhou, Xiang; Zhu, Miankuan; Salman, Waleed; Hu, Guangdi; Wang, Chunbai
2017-03-01
The vigilance of the driver is important for railway safety, despite not being included in the safety management system (SMS) for high-speed train safety. In this paper, a novel fatigue detection system for high-speed train safety based on monitoring train driver vigilance using a wireless wearable electroencephalograph (EEG) is presented. The system is designed to detect whether the driver is drowsy. The proposed system consists of three main parts: (1) wireless wearable EEG collection; (2) train driver vigilance detection; and (3) an early warning device for the train driver. In the first part, an 8-channel wireless wearable brain-computer interface (BCI) device acquires the locomotive driver's brain EEG signal comfortably under high-speed train-driving conditions. The recorded data are transmitted to a personal computer (PC) via Bluetooth. In the second part, a support vector machine (SVM) classification algorithm determines the vigilance level, using the fast Fourier transform (FFT) to extract the EEG power spectral density (PSD). In addition, an early warning device begins to work if fatigue is detected. The simulation and test results demonstrate the feasibility of the proposed fatigue detection system for high-speed train safety.
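The detection step reduces to band-power features from the FFT-based PSD fed to an SVM. A compact sketch with NumPy and scikit-learn (the sampling rate, band edges, epoch length, and labels are assumptions; the paper's exact feature set may differ):

```python
import numpy as np
from sklearn.svm import SVC

FS = 256                                   # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def psd_features(eeg):
    """Band powers from the FFT power spectrum of one EEG epoch;
    eeg is a (channels, samples) array from the 8-channel headset."""
    spec = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / FS)
    feats = [spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)           # 8 channels x 3 bands = 24 features

def train_detector(epochs, labels):
    """epochs: (n, 8, samples) raw EEG; labels: 0 = alert, 1 = drowsy."""
    X = np.array([psd_features(e) for e in epochs])
    return SVC(kernel="rbf").fit(X, labels)
```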
A vectorization of the Hess McDonnell Douglas potential flow program NUED for the STAR-100 computer
NASA Technical Reports Server (NTRS)
Boney, L. R.; Smith, R. E., Jr.
1979-01-01
The computer program NUED, which analyzes potential flow about arbitrary three-dimensional lifting bodies using the panel method, was modified to use vector operations and run on the STAR-100 computer. The modified program, NUEDV, is characterized by a high computation speed and the ability to approximate the body surface with a large number of panels. The new program shows that vector operations can be readily implemented in programs of this type to increase computational speed on the STAR-100 computer. The virtual memory architecture of the STAR-100 facilitates the use of large numbers of panels to approximate the body surface.
Internal fluid mechanics research on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.
1988-01-01
The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.
ERIC Educational Resources Information Center
Martin, Michael J.
2004-01-01
With new and inexpensive computer-based methods, measuring the speed of light and the Earth's radius--historically difficult endeavors--can be simple enough to be tackled by high school and college students working in labs that have limited budgets. In this article, the author describes two methods of estimating the Earth's radius using two…
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Energy and Natural Resources.
The purpose of the bill (S. 343), as reported by the Senate Committee on Energy and Natural Resources, is to establish a federal commitment to the advancement of high-performance computing, improve interagency planning and coordination of federal high-performance computing and networking activities, authorize a national high-speed computer…
A large-scale computer facility for computational aerodynamics
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Ballhaus, W. F., Jr.
1985-01-01
As a result of advances related to the combination of computer system technology and numerical modeling, computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. NASA has, therefore, initiated the Numerical Aerodynamic Simulation (NAS) Program with the objective to provide a basis for further advances in the modeling of aerodynamic flowfields. The Program is concerned with the development of a leading-edge, large-scale computer facility. This facility is to be made available to Government agencies, industry, and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. Attention is given to the requirements for computational aerodynamics, the principal specific goals of the NAS Program, the high-speed processor subsystem, the workstation subsystem, the support processing subsystem, the graphics subsystem, the mass storage subsystem, the long-haul communication subsystem, the high-speed data-network subsystem, and software.
Amoeba-inspired nanoarchitectonic computing implemented using electrical Brownian ratchets.
Aono, M; Kasai, S; Kim, S-J; Wakabayashi, M; Miwa, H; Naruse, M
2015-06-12
In this study, we extracted the essential spatiotemporal dynamics that allow an amoeboid organism to solve a computationally demanding problem and adapt to its environment, thereby proposing a nature-inspired nanoarchitectonic computing system, which we implemented using a network of nanowire devices called 'electrical Brownian ratchets (EBRs)'. By utilizing the fluctuations generated from thermal energy in nanowire devices, we used our system to solve the satisfiability problem, which is a highly complex combinatorial problem related to a wide variety of practical applications. We evaluated the dependency of the solution search speed on its exploration parameter, which characterizes the fluctuation intensity of EBRs, using a simulation model of our system called 'AmoebaSAT-Brownian'. We found that AmoebaSAT-Brownian enhanced the solution searching speed dramatically when we imposed some constraints on the fluctuations in its time series and it outperformed a well-known stochastic local search method. These results suggest a new computing paradigm, which may allow high-speed problem solving to be implemented by interacting nanoscale devices with low power consumption.
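For reference, the flavor of stochastic local search used as the comparison baseline can be captured in a few lines (a generic WalkSAT-style solver; this is not AmoebaSAT-Brownian, whose constrained fluctuations are the paper's contribution):

```python
import random

def walksat(clauses, n_vars, max_flips=100_000, p=0.5):
    """Clauses are lists of nonzero ints: k means variable k is true,
    -k that it is false. Returns a satisfying assignment or None."""
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda cl: any((lit > 0) == assign[abs(lit)] for lit in cl)

    def breaks(v):                  # clauses broken by flipping v
        assign[v] = not assign[v]
        n = sum(not sat(cl) for cl in clauses if v in map(abs, cl))
        assign[v] = not assign[v]
        return n

    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return assign
        cl = random.choice(unsat)
        if random.random() < p:     # random move: the "fluctuation"
            v = abs(random.choice(cl))
        else:                       # greedy move: fewest newly broken clauses
            v = min((abs(lit) for lit in cl), key=breaks)
        assign[v] = not assign[v]
    return None
```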
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational aerodynamic simulations of a 1484 ft/sec tip-speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed, and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane (FEGV) flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance. At that location a significant flow separation is present. The region of local flow recirculation extends through a mixing plane, however, and the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
High-performance computing demands both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has three sets of basic gates. Series of reciprocal transmission lines are placed between the gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of RQL is area: to distribute the power supply properly, splitters are required, and these occupy a large area. Distributed arithmetic implements vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the terms are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
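A software sketch of the distributed-arithmetic idea: precompute a look-up table of all partial sums of the constant vector, then shift-accumulate one bit-slice of the variable inputs per step, with the two's-complement sign bit handled by subtraction. The bit width and values are arbitrary; in RQL hardware the LUT and accumulation would be realized in logic, not Python.

```python
def da_dot(consts, xs_bits, bits=8):
    """Multiplier-free inner product y = sum(c_k * x_k) via distributed
    arithmetic. xs_bits are raw two's-complement bit patterns of the
    variable inputs; consts is the constant vector baked into the LUT."""
    K = len(consts)
    # LUT[i] = sum of the constants selected by the bits of i.
    lut = [sum(c for c, bit in zip(consts, format(i, f"0{K}b")[::-1])
               if bit == "1")
           for i in range(2 ** K)]
    acc = 0
    for j in range(bits):                     # one bit-slice per step
        addr = sum(((x >> j) & 1) << k for k, x in enumerate(xs_bits))
        term = lut[addr] << j                 # shift-accumulate
        acc += -term if j == bits - 1 else term   # sign bit weight is -2^(B-1)
    return acc

# Check against a direct multiply-accumulate:
consts = [3, -5, 7, 2]
xs = [12, -7, 3, -128]                        # 8-bit two's-complement values
assert da_dot(consts, [x & 0xFF for x in xs]) == sum(c * x for c, x in zip(consts, xs))
```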
Taxiing, Take-Off, and Landing Simulation of the High Speed Civil Transport Aircraft
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.
1999-01-01
The aircraft industry jointly with NASA is studying enabling technologies for higher speed, longer range aircraft configurations. Higher speeds, higher temperatures, and aerodynamics are driving these newer aircraft configurations towards long, slender, flexible fuselages. Aircraft response during ground operations, although often overlooked, is a concern due to the increased fuselage flexibility. This paper discusses modeling and simulation of the High Speed Civil Transport aircraft during taxiing, take-off, and landing. Finite element models of the airframe for various configurations are used and combined with nonlinear landing gear models to provide a simulation tool to study responses to different ground input conditions. A commercial computer simulation program is used to numerically integrate the equations of motion and to compute estimates of the responses using an existing runway profile. Results show aircraft responses exceeding safe acceptable human response levels.
An analytical model for highly separated flow on airfoils at low speeds
NASA Technical Reports Server (NTRS)
Zunnalt, G. W.; Naik, S. N.
1977-01-01
A computer program was developed to solve the low speed flow around airfoils with highly separated flow. A new flow model included all of the major physical features in the separated region. Flow visualization tests were also made, which substantiated the validity of the model. The computation involves the matching of the potential flow, the boundary layer, and the flows in the separated regions. Head's entrainment theory was used for boundary layer calculations and Korst's jet mixing analysis was used in the separated regions. A free stagnation point aft of the airfoil and a standing vortex in the separated region were modelled and computed.
NASA Technical Reports Server (NTRS)
Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.
1988-01-01
A user's manual is presented for the computer program developed for the prediction of propeller-nacelle aerodynamic performance reported in "An Analysis for High Speed Propeller-Nacelle Aerodynamic Performance Prediction: Volume 1 - Theory and Application." The manual describes the computer program's mode of operation, requirements, input structure, input data requirements, and output. In addition, it provides the user with documentation of the internal program structure and the software used in the computer program as it relates to the theory presented in Volume 1. Sample input data setups are provided along with selected printout of the program output for one of the sample setups.
2015-09-01
...OAT) and laser-induced ultrasound tomography (LUT) to obtain coregistered maps of tissue optical absorption and speed of sound, displayed within the... Ultrasound computed tomography (UST) can provide high-resolution anatomical images of breast lesions based on three complementary acoustic properties (speed of sound...
Dynamic and thermal analysis of high speed tapered roller bearings under combined loading
NASA Technical Reports Server (NTRS)
Crecelius, W. J.; Milke, D. R.
1973-01-01
The development of a computer program capable of predicting the thermal and kinetic performance of high-speed tapered roller bearings operating with fluid lubrication under applied axial, radial and moment loading (five degrees of freedom) is detailed. Various methods of applying lubrication can be considered as well as changes in bearing internal geometry which occur as the bearing is brought to operating speeds, loads and temperatures.
Ultra high speed image processing techniques. [electronic packaging techniques
NASA Technical Reports Server (NTRS)
Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.
1981-01-01
Packaging techniques for ultra high speed image processing were developed. These techniques involve a signal feedthrough technique through LSI/VLSI sapphire substrates, which allows the stacking of LSI/VLSI circuit substrates in a three-dimensional package with greatly reduced lengths of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big-data processing and is often time consuming. In order to speed up the ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses interrogation-window projections instead of the window's two-dimensional field of luminous intensity. This simplification accelerates the ZNCC computation by up to 28.8 times compared with direct ZNCC calculation, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow, and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
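The core simplification fits in a few lines of NumPy: correlate one-dimensional projections of the interrogation windows instead of their full two-dimensional intensity fields. The averaging of row and column scores below is an assumption; the paper's combination rule may differ.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length 1-D signals."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def zncc_projection(win, ref):
    """Score two interrogation windows via their projections: O(N) data
    per window instead of the O(N^2) full field."""
    score_x = zncc(win.sum(axis=0), ref.sum(axis=0))   # column projections
    score_y = zncc(win.sum(axis=1), ref.sum(axis=1))   # row projections
    return 0.5 * (score_x + score_y)
```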
Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)
NASA Technical Reports Server (NTRS)
Dalton, Shelly D.; Daley, Philip C.
1988-01-01
As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scaler, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Information Management and Technology Div.
This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…
Advanced Computing for Science.
ERIC Educational Resources Information Center
Hut, Piet; Sussman, Gerald Jay
1987-01-01
Discusses some of the contributions that high-speed computing is making to the study of science. Emphasizes the use of computers in exploring complicated systems without the simplification required in traditional methods of observation and experimentation. Provides examples of computer assisted investigations in astronomy and physics. (TW)
Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
2014-01-01
Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.
Providing a parallel and distributed capability for JMASS using SPEEDES
NASA Astrophysics Data System (ADS)
Valinski, Maria; Driscoll, Jonathan; McGraw, Robert M.; Meyer, Bob
2002-07-01
The Joint Modeling And Simulation System (JMASS) is a Tri-Service simulation environment that supports engineering and engagement-level simulations. As JMASS is expanded to support other Tri-Service domains, the current set of modeling services must be expanded for High Performance Computing (HPC) applications by adding support for advanced time-management algorithms, parallel and distributed topologies, and high speed communications. By providing support for these services, JMASS can better address modeling domains requiring computationally intense parallel calculations, such as clutter, vulnerability, and lethality calculations, and underwater-based scenarios. A risk reduction effort implementing some HPC services for JMASS using the SPEEDES (Synchronous Parallel Environment for Emulation and Discrete Event Simulation) Simulation Framework has recently concluded. As an artifact of the JMASS-SPEEDES integration, not only can HPC functionality be brought to the JMASS program through SPEEDES, but an additional HLA-based capability can be demonstrated that further addresses interoperability issues. The JMASS-SPEEDES integration provided a means of adding HLA capability to preexisting JMASS scenarios through an implementation of the standard JMASS port communication mechanism that allows players to communicate.
A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks
NASA Technical Reports Server (NTRS)
Cui, Zhenqian
1999-01-01
With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.
NASA Technical Reports Server (NTRS)
Jacklin, S. A.; Leyland, J. A.; Warmbrodt, W.
1985-01-01
Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, online graphics, and file management. This paper discusses five global design considerations which are useful to integrate array processor, multimicroprocessor, and host computer system architectures into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the nonreal-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration is briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind tunnel environment, the controller architecture can generally be applied to a wide range of automatic control applications.
Computational Aspects of Heat Transfer in Structures
NASA Technical Reports Server (NTRS)
Adelman, H. M. (Compiler)
1982-01-01
Techniques for the computation of heat transfer and associated phenomena in complex structures are examined with an emphasis on reentry flight vehicle structures. Analysis methods, computer programs, thermal analysis of large space structures and high speed vehicles, and the impact of computer systems are addressed.
Preliminary structural sizing of a Mach 3.0 high-speed civil transport model
NASA Technical Reports Server (NTRS)
Blackburn, Charles L.
1992-01-01
An analysis has been performed pertaining to the structural resizing of a candidate Mach 3.0 High Speed Civil Transport (HSCT) conceptual design using a computer program called EZDESIT. EZDESIT is a computer program which integrates the PATRAN finite element modeling program to the COMET finite element analysis program for the purpose of calculating element sizes or cross sectional dimensions. The purpose of the present report is to document the procedure used in accomplishing the preliminary structural sizing and to present the corresponding results.
A heterogeneous hierarchical architecture for real-time computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skroch, D.A.; Fornaro, R.J.
The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems, and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.
Real-time data reduction capabilities at the Langley 7 by 10 foot high speed tunnel
NASA Technical Reports Server (NTRS)
Fox, C. H., Jr.
1980-01-01
The 7 by 10 foot high speed tunnel performs a wide range of tests employing a variety of model installation methods. To support the reduction of static data from this facility, a generalized wind tunnel data reduction program had been developed for use on the Langley central computer complex. The capabilities of a version of this generalized program adapted for real time use on a dedicated on-site computer are discussed. The input specifications, instructions for the console operator, and full descriptions of the algorithms are included.
GPU Particle Tracking and MHD Simulations with Greatly Enhanced Computational Speed
NASA Astrophysics Data System (ADS)
Ziemba, T.; O'Donnell, D.; Carscadden, J.; Cash, M.; Winglee, R.; Harnett, E.
2008-12-01
GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude greater computing speed than CPU-based systems, for less cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. 3-D particle tracking and MHD codes have been developed using NVIDIA's CUDA and have demonstrated speedups of nearly a factor of 20 over equivalent CPU versions of the codes. Such a speedup enables new applications, including real-time running of radiation belt simulations and real-time running of global magnetospheric simulations, both of which could provide important space weather prediction tools.
Davis, J P; Akella, S; Waddell, P H
2004-01-01
Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists and statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized; one example is Felsenstein's PHYLIP code, written in C, for UPGMA and neighbor-joining algorithms. However, the ability to process more than a few tens of taxa in a reasonable amount of time using conventional computers has not yielded a satisfactory speedup in data processing, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA algorithm execution by a factor of a hundred relative to PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used to accelerate phylogenetics algorithm performance not only on the desktop, but also on larger, high-performance computing engines, thus enabling the high-speed processing of data sets involving thousands of taxa.
Development of embedded real-time and high-speed vision platform
NASA Astrophysics Data System (ADS)
Ouyang, Zhenxing; Dong, Yimin; Yang, Hua
2015-12-01
Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms rely on a personal computer (PC) for human-computer interaction, and its large size makes such platforms unsuitable for compact systems. Therefore, this paper develops an embedded real-time, high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a combined DSP and FPGA board is developed, implementing image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. The resulting ER-HVP Vision, measuring 320 mm x 250 mm x 87 mm, is therefore considerably more compact. Experimental results are also given, indicating that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible under the operation of this newly developed vision platform.
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool, wherein RWA signifies Reaction Wheel Assembly] is a computer program designed for computing angular-momentum biases for the reaction wheels used to point a spacecraft in the various directions required for scientific observations. RBOT is currently deployed to support the Cassini mission, to prevent operation of the reaction wheels at unsafely high speeds while minimizing time in the undesirable low-speed range, where elasto-hydrodynamic lubrication films in the bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization in which the maximum wheel speed is a hard constraint and a cost functional increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving a large quantity of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases, based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
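The structure of the optimization is easy to reproduce with SciPy's Nelder-Mead implementation; the speed limits, the toy wheel-rate profile, and the penalty weights below are illustrative stand-ins for RBOT's mission data and cost functional.

```python
import numpy as np
from scipy.optimize import minimize

W_MAX, W_LOW = 2000.0, 300.0          # rpm limits (illustrative values)
rng = np.random.default_rng(0)
# Per-wheel rate history that the attitude profile would induce for zero
# initial bias (toy data; RBOT derives this from momentum conservation).
profile = rng.normal(scale=600.0, size=(4, 500))

def cost(bias):
    """Penalize overspeed as a hard constraint and low-speed dwell softly,
    mirroring the formulation described in the abstract."""
    w = np.abs(bias[:, None] + profile)          # biases offset the profile
    overspeed = 1e6 * np.clip(w - W_MAX, 0.0, None).sum()   # hard constraint
    low_dwell = np.clip(W_LOW - w, 0.0, None).sum()         # soft cost
    return overspeed + low_dwell

res = minimize(cost, x0=np.full(4, 800.0), method="Nelder-Mead")
best_biases = res.x
```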
Mathematical and Computational Challenges in Population Biology and Ecosystems Science
NASA Technical Reports Server (NTRS)
Levin, Simon A.; Grenfell, Bryan; Hastings, Alan; Perelson, Alan S.
1997-01-01
Mathematical and computational approaches provide powerful tools in the study of problems in population biology and ecosystems science. The subject has a rich history intertwined with the development of statistics and dynamical systems theory, but recent analytical advances, coupled with the enhanced potential of high-speed computation, have opened up new vistas and presented new challenges. Key challenges involve ways to deal with the collective dynamics of heterogeneous ensembles of individuals, and to scale from small spatial regions to large ones. The central issues (understanding how detail at one scale makes its signature felt at other scales, and how to relate phenomena across scales) cut across scientific disciplines and go to the heart of algorithmic development of approaches to high-speed computation. Examples are given from ecology, genetics, epidemiology, and immunology.
Robb, P; Pawlowski, B
1990-05-01
The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to supercomputers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark, which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast as or faster than mainframe computers in compute-bound situations.
The finite element method in low speed aerodynamics
NASA Technical Reports Server (NTRS)
Baker, A. J.; Manhardt, P. D.
1975-01-01
The finite element procedure is shown to be of significant impact in the design of the 'computational wind tunnel' for low speed aerodynamics. The uniformity of the mathematical differential equation description, for viscous and/or inviscid, multi-dimensional subsonic flows about practical aerodynamic system configurations, is utilized to establish the general form of the finite element algorithm. Numerical results for inviscid flow analysis, as well as viscous boundary layer, parabolic, and full Navier-Stokes flow descriptions, verify the capabilities and overall versatility of the fundamental algorithm for aerodynamics. The proven mathematical basis, coupled with the distinct user-orientation features of the computer program embodiment, indicates near-term evolution of a highly useful analytical design tool to support computational configuration studies in low speed aerodynamics.
Investigation of television transmission using adaptive delta modulation principles
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1976-01-01
The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Computer simulations of different delta modulators were studied in order to find a satisfactory delta modulator. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and the algorithm was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were tested, along with several error correction algorithms, via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. Delta modulators were also investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit shift registers as well as a high speed delta modulator. The other schemes involved the computer simulation of two-dimensional delta modulator algorithms.
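A minimal adaptive delta modulator of the kind studied: one bit per sample, with the step size grown on consecutive equal bits (slope overload) and shrunk on alternations (granular noise). The adaptation constants are illustrative, not those of the reported hardware.

```python
import numpy as np

def adm_encode(x, d_min=0.01, d_max=1.0, k=1.5):
    """Encode samples x to a +/-1 bit stream with adaptive step size."""
    bits, est, step, prev = [], 0.0, d_min, 0
    for sample in x:
        b = 1 if sample >= est else -1
        step = min(step * k, d_max) if b == prev else max(step / k, d_min)
        est += b * step                    # staircase approximation of x
        bits.append(b)
        prev = b
    return bits

def adm_decode(bits, d_min=0.01, d_max=1.0, k=1.5):
    """Decoder reproduces the same step adaptation from the bits alone."""
    est, step, prev, out = 0.0, d_min, 0, []
    for b in bits:
        step = min(step * k, d_max) if b == prev else max(step / k, d_min)
        est += b * step
        out.append(est)
        prev = b
    return np.array(out)

t = np.linspace(0, 1, 2000)
recon = adm_decode(adm_encode(np.sin(2 * np.pi * 5 * t)))
```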
Analysis of optical route in a micro high-speed magneto-optic switch
NASA Astrophysics Data System (ADS)
Weng, Zihua; Yang, Guoguang; Huang, Yuanqing; Chen, Zhimin; Zhu, Yun; Wu, Jinming; Lin, Shufen; Mo, Weiping
2005-02-01
A novel micro high-speed 2x2 magneto-optic switch and its optical route, for use in high-speed all-optical communication networks, are designed and analyzed in this paper. The study of the micro high-speed magneto-optic switch mainly involves the optical route design and the high-speed control technique. The optical route design covers the polarization design of the optical route in the switch, the performance analysis and material selection of the magneto-optic crystal, and the magnetic path design in the Faraday rotator. The research on the high-speed control technique involves the study of a nanosecond pulse generator, the high-speed magnetic field and its control technique, etc. High-speed current transients from the nanosecond pulse generator are used to switch the magnetization of the magneto-optic crystal, which propagates a 1550 nm optical beam. The optical route design schemes and the electronic circuits of the high-speed control technique are both simulated on a computer and tested experimentally. The experimental results show that the nanosecond pulse generator can output pulses with a rising-edge time of 3~35 ns, a voltage amplitude of 10~90 V, and a pulse width of 10~100 ns. Under the control of a single-chip CPU, the optical beam can be switched stably, and the switching time is currently less than 1 μs.
Technological Enhancements for Personal Computers
1992-03-01
...quicker order processing, shortening the time required to obtain critical spare parts. Customer service and spare parts tracking are facilitated by... cards speed up order processing and filing. Bar code readers speed inventory control processing. ...Many units with high mobility...
Real-time image reconstruction and display system for MRI using a high-speed personal computer.
Haishi, T; Kose, K
1998-09-01
A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at a 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms, and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases a high-speed PC would be the best choice for an image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.
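The reconstruction step itself is a 128 x 128 complex two-dimensional inverse FFT; the sketch below shows that core in NumPy (random k-space stands in for acquired data, and the optimized Pentium-era code is of course not reproduced):

```python
import numpy as np

def reconstruct(kspace):
    """Magnitude image from one fully sampled Cartesian k-space matrix:
    shift DC to the corner, apply the 2-D inverse FFT, shift back."""
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)

kspace = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
image = reconstruct(kspace)     # ready to scale and blit to the display window
```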
NASA Astrophysics Data System (ADS)
Mejia, C.; Badran, F.; Bentamy, A.; Crepon, M.; Thiria, S.; Tran, N.
1999-05-01
We have computed two geophysical model functions (one for the vertical and one for the horizontal polarization) for the NASA scatterometer (NSCAT) by using neural networks. These neural network geophysical model functions (NNGMFs) were estimated with NSCAT scatterometer σ0 measurements collocated with European Centre for Medium-Range Weather Forecasts analyzed wind vectors during the period January 15 to April 15, 1997. We performed a Student's t test showing that the NNGMFs estimate the NSCAT σ0 with a confidence level of 95%. Analysis of the results shows that the mean NSCAT signal depends on the incidence angle and the wind speed and presents the classical biharmonic modulation with respect to the wind azimuth. NSCAT σ0 increases with wind speed and presents a well-marked change at around 7 m/s. The upwind-downwind amplitude is higher for the horizontal-polarization signal than for the vertical polarization, indicating that the use of horizontal polarization can give additional information for wind retrieval. Comparisons of the σ0 computed by the NNGMFs against the NSCAT-measured σ0 show a quite low rms error, except at low wind speeds. We also computed two specific neural networks for estimating the variance associated with these GMFs. The variances are analyzed with respect to geophysical parameters. This led us to compute the geophysical signal-to-noise ratio, i.e., Kp. The Kp values are quite high at low wind speed and decrease at high wind speed. At constant wind speed the highest Kp values are at crosswind directions, showing that the crosswind values are the most difficult to estimate. These neural networks can be expressed as analytical functions, and FORTRAN subroutines can be provided.
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
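A toy analogue of the idea in NumPy: coordinate descent on a regularized least-squares cost, visiting first the coordinates with the largest pending updates. NH-ICD's actual cost model and pixel-selection criterion for CT are more elaborate.

```python
import numpy as np

def nh_icd(A, y, lam=0.1, sweeps=20):
    """Greedy coordinate descent for ||y - Ax||^2 + lam*||x||^2, updating
    the most-needed coordinates first (non-homogeneous update order)."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()                                   # residual y - A x
    col_sq = (A * A).sum(axis=0) + lam             # per-coordinate curvature
    for _ in range(sweeps):
        # Exact 1-D minimizer for every coordinate at the current residual.
        deltas = (A.T @ r - lam * x) / col_sq
        order = np.argsort(-np.abs(deltas))        # largest changes first
        for j in order[: max(1, n // 4)]:          # focus on the top quarter
            d = (A[:, j] @ r - lam * x[j]) / col_sq[j]
            x[j] += d
            r -= d * A[:, j]                       # keep residual consistent
    return x
```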
Digital intermediate frequency QAM modulator using parallel processing
Pao, Hsueh-Yuan [Livermore, CA]; Tran, Binh-Nien [San Ramon, CA]
2008-05-27
The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple, low-cost method to implement a high-speed digital IF modulator using field-programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up tables (LUTs). The high-speed input data stream is processed in parallel using the corresponding LUTs, which reduces the required per-path processing speed and allows the use of low-cost FPGAs.
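The multiplier-free principle in miniature: every symbol's modulated carrier segment is precomputed into a table, so run-time modulation is pure look-up and concatenation. QPSK, the samples-per-symbol count, and one carrier cycle per symbol are assumptions of this sketch, not the patent's parameters.

```python
import numpy as np

SPS = 8                                       # IF samples per symbol (assumed)
QPSK = {0: 1 + 1j, 1: -1 + 1j, 2: -1 - 1j, 3: 1 - 1j}

# Pre-compute every modulated carrier segment: Re[(I + jQ) * e^{jwn}]
# = I*cos(wn) - Q*sin(wn), one ROM row per symbol.
n = np.arange(SPS)
carrier = np.exp(2j * np.pi * n / SPS)        # one carrier cycle per symbol
LUT = {s: np.real(c * carrier) for s, c in QPSK.items()}

def modulate(symbols):
    """Digital IF modulation by table look-up only; segments could be
    produced in parallel streams, as in the FPGA architecture."""
    return np.concatenate([LUT[s] for s in symbols])

waveform = modulate([0, 3, 1, 2, 2, 0])       # 6 symbols -> 48 IF samples
```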
NASA Technical Reports Server (NTRS)
1975-01-01
The feasibility of employing thin-film heat-flux gages was studied as a method of defining boundary layer characteristics at supersonic speeds in a high speed blowdown wind tunnel. Flow visualization techniques (using oil) were employed. Tabulated data (computer printouts), a test facility description, and photographs of test equipment are given.
Optical Computers and Space Technology
NASA Technical Reports Server (NTRS)
Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela
1995-01-01
The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to overcome the extreme limitations that conventional electronic logic circuits impose on the growth in speed and complexity of computation. The new optical technology has increased the demand for high-quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high-quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.
Defense Acquisitions Acronyms and Terms
2012-12-01
...Computer-Aided Design; CADD Computer-Aided Design and Drafting; CAE Component Acquisition Executive, Computer-Aided Engineering; CAIV Cost As an...; ...Radiation to Ordnance; HFE Human Factors Engineering; HHA Health Hazard Assessment; HNA Host-Nation Approval; HNS Host-Nation Support; HOL High-Order...; ...Engineering Change Proposal; VHSIC Very High Speed Integrated Circuit; VLSI Very Large Scale Integration; VOC Volatile Organic Compound; WAN Wide...
NASA Technical Reports Server (NTRS)
Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.
1988-01-01
A computer program, the Propeller Nacelle Aerodynamic Performance Prediction Analysis (PANPER), was developed for the prediction and analysis of the performance and airflow of propeller-nacelle configurations operating over a forward speed range inclusive of high speed flight typical of recent propfan designs. A propeller lifting line, wake program was combined with a compressible, viscous center body interaction program, originally developed for diffusers, to compute the propeller-nacelle flow field, blade loading distribution, propeller performance, and the nacelle forebody pressure and viscous drag distributions. The computer analysis is applicable to single and coaxial counterrotating propellers. The blade geometries can include spanwise variations in sweep, droop, taper, thickness, and airfoil section type. In the coaxial mode of operation the analysis can treat both equal and unequal blade number and rotational speeds on the propeller disks. The nacelle portion of the analysis can treat both free air and tunnel wall configurations including wall bleed. The analysis was applied to many different sets of flight conditions using selected aerodynamic modeling options. The influence of different propeller nacelle-tunnel wall configurations was studied. Comparisons with available test data for both single and coaxial propeller configurations are presented along with a discussion of the results.
System and Method for High-Speed Data Recording
NASA Technical Reports Server (NTRS)
Taveniku, Mikael B. (Inventor)
2017-01-01
A system and method for high speed data recording includes a control computer and a disk pack unit. Each disk pack is housed within a shell that provides handling and protection. The disk pack unit provides cooling of the disks and connections for power and disk signaling. A standard connection is provided between the control computer and the disk pack unit. The disk pack units are self-sufficient and able to connect to any computer. Multiple disk packs can be connected to the system simultaneously, so that one disk pack can be active while one or more others are inactive. To control power surges, the power to each disk pack is controlled programmatically for the group of disks in the pack.
NASA Technical Reports Server (NTRS)
Oliver, A. B.; Lillard, R. P.; Blaisdell, G. A.; Lyrintizis, A. S.
2006-01-01
The capability of the OVERFLOW code to accurately compute high-speed turbulent boundary layers and turbulent shock-boundary layer interactions is being evaluated. Configurations being investigated include a Mach 2.87 flat plate to compare experimental velocity profiles and boundary layer growth, a Mach 6 flat plate to compare experimental surface heat transfer, a direct numerical simulation (DNS) at Mach 2.25 for turbulent quantities, and several Mach 3 compression ramps to compare computations of shock-boundary layer interactions with experimental laser Doppler velocimetry (LDV) data and hot-wire data. The present paper outlines the study and presents preliminary results for two of the flat plate cases and two small-angle compression corner test cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weseloh, Wayne N.; Clancy, Sean P.; Painter, James W.
2010-08-01
PAGOSA is a computational fluid dynamics computer program developed at Los Alamos National Laboratory (LANL) for the study of high-speed compressible flow and high-rate material deformation. PAGOSA is a three-dimensional Eulerian finite difference code, solving problems with a wide variety of equations of state (EOSs), material strength, and explosive modeling options.
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent
2012-01-01
Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications, a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first generation of FDF models, namely the scalar filtered mass density function (SFMDF), is being implemented into VULCAN, a production-quality RAS and LES solver widely used for design of high speed propulsion flowpaths. This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow a reduction in the total number of computational cells without loss in accuracy; implementing a first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrizations, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, the flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.
High-speed linear optics quantum computing using active feed-forward.
Prevedel, Robert; Walther, Philip; Tiefenbacher, Felix; Böhi, Pascal; Kaltenbaek, Rainer; Jennewein, Thomas; Zeilinger, Anton
2007-01-04
As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements resulting in probabilistic gate operations. In one-way quantum computation, the random quantum measurement error can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, the individual computational step (in our case the individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states.
Comparison of microrings and microdisks for high-speed optical modulation in silicon photonics
NASA Astrophysics Data System (ADS)
Ying, Zhoufeng; Wang, Zheng; Zhao, Zheng; Dhar, Shounak; Pan, David Z.; Soref, Richard; Chen, Ray T.
2018-03-01
The past several decades have witnessed the gradual transition from electrical to optical interconnects, ranging from long-haul telecommunication to chip-to-chip interconnects. As a key component in integrated optical interconnects and high-performance computing, optical modulators have matured considerably in recent years, including ultrahigh-speed microring and microdisk modulators. In this paper, microring and microdisk modulators are compared in terms of dimensions, static and dynamic power consumption, and fabrication tolerance. The results show that microdisks have advantages over microrings in these respects, which offers guidance for the chip-level design of high-density integrated systems for optical interconnects and optical computing.
Integrated Computer-Aided Drafting Instruction (ICADI).
ERIC Educational Resources Information Center
Chen, C. Y.; McCampbell, David H.
Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…
NASA Technical Reports Server (NTRS)
Rebbechi, Brian; Forrester, B. David; Oswald, Fred B.; Townsend, Dennis P.
1992-01-01
A comparison was made between computer model predictions of gear dynamic behavior and experimental results. The experimental data were derived from the NASA gear noise rig, which was used to record dynamic tooth loads and vibration. The experimental results were compared with predictions from the DSTO Aeronautical Research Laboratory's gear dynamics code for a matrix of 28 load-speed points. At high torque the peak dynamic load predictions agree with the experimental results with an average error of 5 percent over the speed range 800 to 6000 rpm. Tooth separation (or bounce), which was observed in the experimental data for light-torque, high-speed conditions, was simulated by the computer model. The model was also successful in simulating the degree of load sharing between gear teeth in the multiple-tooth-contact region.
Advanced turboprop noise prediction based on recent theoretical results
NASA Technical Reports Server (NTRS)
Farassat, F.; Padula, S. L.; Dunn, M. H.
1987-01-01
The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study addressing accuracy and speed of execution is also presented. The code is tested against an earlier Langley code, and considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.
Parallel Guessing: A Strategy for High-Speed Computation
1984-09-19
for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful... from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or
NASA Technical Reports Server (NTRS)
Ghaffari, Farhad
1999-01-01
Unstructured grid Euler computations, performed at supersonic cruise speed, are presented for a High Speed Civil Transport (HSCT) configuration, designated as the Technology Concept Airplane (TCA) within the High Speed Research (HSR) Program. The numerical results are obtained for the complete TCA cruise configuration, which includes the wing, fuselage, empennage, diverters, and flow-through nacelles, at M (sub infinity) = 2.4 for a range of angles of attack and sideslip. Although all the present computations are performed for the complete TCA configuration, appropriate assumptions derived from fundamental supersonic aerodynamic principles have been made to extract aerodynamic predictions to complement the experimental data obtained from a 1.675%-scaled truncated (aft fuselage/empennage components removed) TCA model. The validity of the computational results derived from the latter assumptions is thoroughly addressed and discussed in detail. The computed surface and off-surface flow characteristics are analyzed, and the pressure coefficient contours on the wing lower surface are shown to correlate reasonably well with the available pressure sensitive paint results, particularly for the complex flow structures around the nacelles. The predicted longitudinal and lateral/directional performance characteristics for the truncated TCA configuration are shown to correlate very well with the corresponding wind-tunnel data across the examined range of angles of attack and sideslip. The complementary computational results for the longitudinal and lateral/directional performance characteristics for the complete TCA configuration are also presented, along with the aerodynamic effects due to the empennage components. Results are also presented to assess the computational method performance, solution sensitivity to grid refinement, and solution convergence characteristics.
Rating of Dynamic Coefficient for Simple Beam Bridge Design on High-Speed Railways
NASA Astrophysics Data System (ADS)
Diachenko, Leonid; Benin, Andrey; Smirnov, Vladimir; Diachenko, Anastasia
2018-06-01
The aim of the work is to improve the methodology for the dynamic computation of simple beam spans under the impact of high-speed trains. Mathematical simulation utilizing numerical and analytical methods of structural mechanics is used in the research. The article analyses parameters of the effect of high-speed trains on simple beam spanning bridge structures and suggests a technique for determining the dynamic index applied to the live load. The reliability of the proposed methodology is confirmed by results of numerical simulation of high-speed train passage over spans at different speeds. The proposed algorithm of dynamic computation is based on a connection between the maximum acceleration of the span in the resonance mode of vibration and the main factors of the stress-strain state. The methodology allows determining maximum and also minimum values of the main forces in the construction, which makes it possible to perform endurance tests. It is noted that the dynamic additions for the components of the stress-strain state (bending moments, transverse force, and vertical deflections) are different. This condition necessitates a differentiated approach to the evaluation of dynamic coefficients when performing design verification for limit state groups I and II. The practical importance: the methodology for determining dynamic coefficients allows performing the dynamic calculation and determining the main forces in simple beam spans without numerical simulation or direct dynamic analysis, which significantly reduces the labour costs of design.
On The Export Control Of High Speed Imaging For Nuclear Weapons Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, Scott Avery; Altherr, Michael Robert
Since the Manhattan Project, the use of high-speed photography and its cousins, flash radiography and schlieren photography, has been a technological proliferation concern. Indeed, like the supercomputer, the development of high-speed photography as we now know it essentially grew out of the nuclear weapons program at Los Alamos. Naturally, during the course of the last 75 years the technology associated with computers and cameras has been export controlled by the United States and others to prevent both proliferation among non-P5 nations and technological parity among potential adversaries among P5 nations. Here we revisit these issues as they relate to high-speed photographic technologies and make recommendations about how future restrictions, if any, should be guided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES executes quickly but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle) technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique with execution times faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing computational time while giving reasonably accurate estimates. The method is most appropriate for transportation networks with high variation in speed, such as signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
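To make the dynamic time warping (DTW) similarity measure used by HC-DTW concrete, a minimal sketch in Python follows; the hierarchical clustering step and the EPA-MOVES coupling are not reproduced, and the two speed profiles are invented examples.

def dtw_distance(a, b):
    """O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Two second-by-second speed profiles (m/s) with the same shape but shifted timing;
# DTW returns 0.0 because it aligns the timing, unlike a pointwise Euclidean distance.
print(dtw_distance([0, 5, 10, 10, 5, 0], [0, 0, 5, 10, 5, 0]))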
Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO
Zhang, Chaozhu; Han, Jinan; Li, Ke
2014-01-01
The numerically controlled oscillator (NCO) has wide application in radar, digital receivers, and software radio systems. This paper first introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, it proposes a hybrid CORDIC algorithm based on phase rotation estimation for application in an NCO. By estimating the direction of part of the phase rotation, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the NCO is simulated and implemented using the Quartus II and ModelSim software tools. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. The design is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
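For reference, a minimal Python sketch of the conventional CORDIC rotation mode that the paper builds on (not the authors' hybrid phase-rotation-estimation variant):

import math

def cordic_sin_cos(angle, iterations=16):
    """Rotate (1, 0) by `angle` (radians, |angle| < ~1.74) with shift-and-add steps."""
    # Precomputed elementary angles atan(2^-i) and the cumulative scaling constant K.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # direction of this micro-rotation
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return y * K, x * K                      # (sin, cos)

print(cordic_sin_cos(0.5))  # approximately (0.4794, 0.8776)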
NASA Technical Reports Server (NTRS)
Rudy, David H.; Kumar, Ajay; Thomas, James L.; Gnoffo, Peter A.; Chakravarthy, Sukumar R.
1988-01-01
A comparative study was made using 4 different computer codes for solving the compressible Navier-Stokes equations. Three different test problems were used, each of which has features typical of high speed internal flow problems of practical importance in the design and analysis of propulsion systems for advanced hypersonic vehicles. These problems are the supersonic flow between two walls, one of which contains a 10 deg compression ramp, the flow through a hypersonic inlet, and the flow in a 3-D corner formed by the intersection of two symmetric wedges. Three of the computer codes use similar recently developed implicit upwind differencing technology, while the fourth uses a well established explicit method. The computed results were compared with experimental data where available.
NASA Technical Reports Server (NTRS)
Rivers, S. M. B.; Wahls, R. A.; Owens, L. R.
2001-01-01
A computational study focused on leading-edge radius effects and associated Reynolds number sensitivity for a High Speed Civil Transport configuration at transonic conditions was conducted as part of NASA's High Speed Research Program. The primary purposes were to assess the capabilities of computational fluid dynamics to predict Reynolds number effects for a range of leading-edge radius distributions on a second-generation supersonic transport configuration, and to evaluate the potential performance benefits of each at the transonic cruise condition. Five leading-edge radius distributions are described, and the potential performance benefit including the Reynolds number sensitivity for each is presented. Computational results for two leading-edge radius distributions are compared with experimental results acquired in the National Transonic Facility over a broad Reynolds number range.
An assessment and application of turbulence models for hypersonic flows
NASA Technical Reports Server (NTRS)
Coakley, T. J.; Viegas, J. R.; Huang, P. G.; Rubesin, M. W.
1990-01-01
The current approach to the accurate computation of complex high-speed flows is to solve the Reynolds-averaged Navier-Stokes equations using finite difference methods. An integral part of this approach consists of the development and application of mathematical turbulence models, which are necessary for predicting the aerothermodynamic loads on the vehicle and the performance of the propulsion plant. Computations of several high speed turbulent flows using various turbulence models are described, and the models are evaluated by comparing computations with the results of experimental measurements. The cases investigated include flows over insulated and cooled flat plates with Mach numbers ranging from 2 to 8 and wall temperature ratios ranging from 0.2 to 1.0. The turbulence models investigated include zero-equation, two-equation, and Reynolds-stress transport models.
Real-time fuzzy inference based robot path planning
NASA Technical Reports Server (NTRS)
Pacini, Peter J.; Teichrow, Jon S.
1990-01-01
This project addresses the problem of adaptive trajectory generation for a robot arm. Conventional trajectory generation involves computing a path in real time to minimize a performance measure such as expended energy. This method can be computationally intensive, and it may yield poor results if the trajectory is weakly constrained. Typically some implicit constraints are known, but cannot be encoded analytically. The alternative approach used here is to formulate domain-specific knowledge, including implicit and ill-defined constraints, in terms of fuzzy rules. These rules utilize linguistic terms to relate input variables to output variables. Since the fuzzy rulebase is determined off-line, only high-level, computationally light processing is required in real time. Potential applications for adaptive trajectory generation include missile guidance and various sophisticated robot control tasks, such as automotive assembly, high speed electrical parts insertion, stepper alignment, and motion control for high speed parcel transfer systems.
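To illustrate the off-line fuzzy rulebase idea, a minimal sketch of triangular membership functions with weighted-average defuzzification follows; the rules and numbers are invented for illustration and are not the project's rulebase.

def tri(x, a, b, c):
    """Triangular membership function peaking at b (degenerate ends act as shoulders)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def joint_speed(distance_to_goal):
    # Rules: IF distance is SMALL THEN speed is LOW; IF distance is LARGE THEN speed is HIGH.
    w_small = tri(distance_to_goal, 0.0, 0.0, 0.5)
    w_large = tri(distance_to_goal, 0.2, 1.0, 1.0)
    low, high = 0.1, 1.0                      # rule output speeds (rad/s)
    if w_small + w_large == 0.0:
        return low
    return (w_small * low + w_large * high) / (w_small + w_large)

print(joint_speed(0.1), joint_speed(0.8))    # near goal -> slow; far -> fast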
PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
2014-01-01
Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when a small number of features is required. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, thereby reducing the complexity of those stages. To improve the computational time, a novel parallel architecture was employed to exploit the parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and parallel PCA, respectively.
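A minimal sketch of the EM-for-PCA idea the abstract builds on (Roweis-style alternating least squares, which avoids forming the full covariance matrix and its eigendecomposition); this is illustrative only, not the paper's parallel architecture:

import numpy as np

def em_pca(Y, k, iters=100):
    """Y: (d, n) zero-mean data; returns a (d, k) orthonormal basis of the top-k subspace."""
    d, n = Y.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, k))
    for _ in range(iters):
        X = np.linalg.solve(W.T @ W, W.T @ Y)    # E-step: latent coordinates
        W = Y @ X.T @ np.linalg.inv(X @ X.T)     # M-step: updated basis
    Q, _ = np.linalg.qr(W)                       # orthonormalize the spanned subspace
    return Q

Y = np.random.default_rng(1).standard_normal((50, 200))
Y -= Y.mean(axis=1, keepdims=True)
print(em_pca(Y, 5).shape)  # (50, 5)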
A History of High-Performance Computing
NASA Technical Reports Server (NTRS)
2006-01-01
Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.
Tools for 3D scientific visualization in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY2, and the high speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation; tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during its execution. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. The relative advantages of the various methods are also described for cases where the speed of computation is an important consideration.
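A minimal illustration of why the fourth-order scheme pays off for high-accuracy calculations: second- versus fourth-order central differences on a smooth test function. This is a generic demonstration, not the paper's electromagnet solver.

import math

def d2(f, x, h):   # second-order central first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d4(f, x, h):   # fourth-order central first derivative
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

exact = math.cos(1.0)
for h in (1e-1, 1e-2, 1e-3):
    print(h, abs(d2(math.sin, 1.0, h) - exact), abs(d4(math.sin, 1.0, h) - exact))
# The fourth-order error falls like h^4, versus h^2 for the second-order scheme.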
DOT National Transportation Integrated Search
1995-09-01
This report describes the development of a methodology designed to assure that a sufficiently high level of safety is achieved and maintained in computer-based systems which perform safety critical functions in high-speed rail or magnetic levitation ...
High Speed Computing, LANs, and WAMs
NASA Technical Reports Server (NTRS)
Bergman, Larry A.; Monacos, Steve
1994-01-01
Optical fiber networks may one day offer capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing, as illustrated by the CASA gigabit testbed, and then explores future all-optical network architectures that offer increased capacity, a level of service better matched to a given application, high fault tolerance, and dynamic reconfigurability.
The application of high-speed cinematography for the quantitative analysis of equine locomotion.
Fredricson, I; Drevemo, S; Dalin, G; Hjertën, G; Björne, K
1980-04-01
Locomotive disorders constitute a serious problem in horse racing which will only be rectified by a better understanding of the causative factors associated with disturbances of gait. This study describes a system for the quantitative analysis of the locomotion of horses at speed. The method is based on high-speed cinematography with a semi-automatic system of analysis of the films. The recordings are made with a 16 mm high-speed camera run at 500 frames per second (fps) and the films are analysed by special film-reading equipment and a mini-computer. The time and linear gait variables are presented in tabular form and the angles and trajectories of the joints and body segments are presented graphically.
High Resolution Wind Direction and Speed Information for Support of Fire Operations
B.W. Butler; J.M. Forthofer; M.A. Finney; L.S. Bradshaw; R. Stratton
2006-01-01
Computational Fluid Dynamics (CFD) technology has been used to model wind speed and direction in mountainous terrain at a relatively high resolution compared to other readily available technologies. The process, termed "gridded wind," is not a forecast, but rather represents a method for calculating the influence of terrain on general wind flows. Gridded wind simulations...
Finite Element Analysis in Concurrent Processing: Computational Issues
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett
2004-01-01
The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill-suited for concurrent processing because null entries fill in, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
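A minimal sketch of the displacement-method idea: minimize the total potential energy 0.5*u'Ku - f'u by conjugate gradients, which requires only matrix-vector products and so avoids factorization fill-in. Illustrative only; the paper's formulations and constraints are not reproduced.

import numpy as np

def energy_minimize(K, f, tol=1e-10, max_iter=1000):
    u = np.zeros_like(f)
    r = f - K @ u                  # residual = negative gradient of the energy
    p = r.copy()
    for _ in range(max_iter):
        Kp = K @ p
        alpha = (r @ r) / (p @ Kp)
        u += alpha * p
        r_new = r - alpha * Kp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return u

K = np.array([[4.0, 1.0], [1.0, 3.0]])   # a tiny SPD "stiffness" matrix
f = np.array([1.0, 2.0])
print(energy_minimize(K, f))             # matches np.linalg.solve(K, f)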
ERIC Educational Resources Information Center
Hakerem, Gita; And Others
The Water and Molecular Networks (WAMNet) Project uses graduate-student-written Reduced Instruction Set Computing (RISC) computer simulations of the molecular structure of water to help high school students learn about the nature of water. This study examined: (1) preconceptions concerning the molecular structure of water common among high…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Maxine D.; Leigh, Jason
2014-02-17
The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.
Parallelism in Manipulator Dynamics. Revision.
1983-12-01
computing the motor torques required to drive a lower-pair kinematic chain (e.g., a typical manipulator arm in free motion, or a mechanical leg in the... computations, and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using special-purpose... DYNAMICS by RICHARD HAROLD LATHROP. ABSTRACT: This paper addresses the problem of efficiently computing the motor torques required to drive a lower-pair
Seeing the forest for the trees: Networked workstations as a parallel processing computer
NASA Technical Reports Server (NTRS)
Breen, J. O.; Meleedy, D. M.
1992-01-01
Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms for selecting nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.
Design applications for supercomputers
NASA Technical Reports Server (NTRS)
Studerus, C. J.
1987-01-01
The complexity of codes for solutions of real aerodynamic problems has progressed from simple two-dimensional models to three-dimensional inviscid and viscous models. As the algorithms used in the codes increased in accuracy, speed, and robustness, the codes were steadily incorporated into standard design processes. The highly sophisticated codes, which provide solutions to truly complex flows, require computers with large memory and high computational speed. The advent of high-speed supercomputers, which makes the solution of these complex flows more practical, permits the introduction of the codes into the design system at an earlier stage. Results are presented for several codes that either have already been introduced into the design process or are rapidly becoming part of it. The codes fall into the areas of turbomachinery aerodynamics and hypersonic propulsion. In the former category, results are presented for three-dimensional inviscid and viscous flows through nozzle and unducted fan bladerows. In the latter category, results are presented for two-dimensional inviscid and viscous flows for hypersonic vehicle forebodies and engine inlets.
High Speed Computational Ghost Imaging via Spatial Sweeping
NASA Astrophysics Data System (ADS)
Wang, Yuwang; Liu, Yang; Suo, Jinli; Situ, Guohai; Qiao, Chang; Dai, Qionghai
2017-03-01
Computational ghost imaging (CGI) achieves single-pixel imaging by using a Spatial Light Modulator (SLM) to generate structured illuminations for spatially resolved information encoding. The imaging speed of CGI is limited by the modulation frequency of available SLMs, which holds back its practical application. This paper proposes to bypass this limitation by trading off the SLM's redundant spatial resolution for multiplication of the modulation frequency. Specifically, a pair of galvanic mirrors sweeping across the high resolution SLM multiplies the modulation frequency within the spatial resolution gap between the SLM and the final reconstruction. A proof-of-principle setup with two mid-range galvanic mirrors achieves ghost imaging as fast as 42 Hz at 80 × 80-pixel resolution, 5 times faster than the state of the art, and holds potential for a further order-of-magnitude multiplication through hardware upgrades. Our approach brings a significant improvement in the imaging speed of ghost imaging and pushes ghost imaging towards practical applications.
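A minimal sketch of the computational ghost imaging principle stated above: correlate the single-pixel (bucket) intensities with the known illumination patterns. The galvanic-mirror sweeping scheme itself is not modeled here.

import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 4000                       # image is N x N, M random patterns
scene = np.zeros((N, N)); scene[4:12, 6:10] = 1.0

patterns = rng.random((M, N, N))      # structured illuminations from the SLM
bucket = (patterns * scene).sum(axis=(1, 2))   # single-pixel detector readings

# Ensemble correlation <(B - <B>) * P> recovers the scene up to scale and offset.
recon = np.tensordot(bucket - bucket.mean(), patterns, axes=(0, 0)) / M
print(np.corrcoef(recon.ravel(), scene.ravel())[0, 1])  # close to 1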
Computer Graphics in Research: Some State -of-the-Art Systems
ERIC Educational Resources Information Center
Reddy, R.; And Others
1975-01-01
A description is given of the structure and functional characteristics of three types of interactive computer graphic systems, developed by the Department of Computer Science at Carnegie-Mellon; a high-speed programmable display capable of displaying 50,000 short vectors, flicker free; a shaded-color video display for the display of gray-scale…
An information retrieval system for research file data
Joan E. Lengel; John W. Koning
1978-01-01
Research file data have been successfully retrieved at the Forest Products Laboratory through a high-speed cross-referencing system involving the computer program FAMULUS as modified by the Madison Academic Computing Center at the University of Wisconsin. The method of data input, transfer to computer storage, system utilization, and effectiveness are discussed....
Soviet Cybernetics Review, Vol. 3, No. 9, September 1969.
ERIC Educational Resources Information Center
Holland, Wade B., Ed.
The issue features articles and photographs of computers displayed at the Automation-69 Exhibition in Moscow, especially the Mir-1 and Ruta-110. Also discussed are the Doza analog computer for radiological dosage; 'on-the-fly' output printers; other ways to increase computer speed and productivity; and the planned ultra-high-energy 1000-Bev…
Detonation Propagation in Slabs and Axisymmetric Rate Sticks
NASA Astrophysics Data System (ADS)
Romick, Christopher; Aslam, Tariq
Insensitive high explosives (IHEs) have many benefits; however, they exhibit longer reaction zones than more conventional high explosives (HEs). This makes IHEs less ideal explosives and more susceptible to edge effects as well as other performance degradation issues, with a resulting reduction in the detonation speed within the explosive. Many HE computational models (e.g., WSD, SURF, CREST) have shock-dependent reaction rates. This dependency places a high value on having an accurate shock speed. In the common practice of shock-capturing, there is ambiguity in the shock state due to smoothing of the shock front. Moreover, obtaining an accurate shock speed with shock-capturing becomes prohibitively computationally expensive in multiple dimensions. The use of shock-fitting removes the ambiguity of the shock state, as the shock is one of the boundaries. As such, the resolution required for a given error in the detonation speed is lower than with shock-capturing. This allows further insight into performance degradation. A two-dimensional shock-fitting scheme has been developed for unconfined slabs and rate sticks of HE. The HE modeling is accomplished with the Euler equations, utilizing several models with single-step irreversible kinetics in slab and rate stick geometries. Department of Energy - LANL.
Goldstone R/D High Speed Data Acquisition System
NASA Technical Reports Server (NTRS)
Deutsch, L. J.; Jurgens, R. F.; Brokl, S. S.
1984-01-01
A digital data acquisition system that meets the requirements of several users (initially the planetary radar program) is planned for general use at Deep Space Station 14 (DSS 14). The system, now partially complete, is controlled by a VAX 11/780 computer that is programmed in high level languages. A DEC Data Controller is included for moderate-speed data acquisition, low speed data display, and a digital interface to special user-provided devices. The high-speed data acquisition is performed in devices being designed and built at JPL. Analog IF signals are converted to a digitized 50 MHz real signal. This signal is filtered and mixed digitally to baseband, after which its phase code (a PN sequence in the case of planetary radar) is removed. It may then be accumulated (or averaged) and fed into the VAX through an FPS 5210 array processor. Further data processing before entering the VAX is thus possible (computation and accumulation of power spectra, for example). The system is to be located in the research and development pedestal at DSS 14 for easy access by researchers in radio astronomy as well as telemetry processing and antenna arraying.
Butera, R J; Wilson, C G; Delnegro, C A; Smith, J C
2001-12-01
We present a novel approach to implementing the dynamic-clamp protocol (Sharp et al., 1993), commonly used in neurophysiology and cardiac electrophysiology experiments. Our approach is based on real-time extensions to the Linux operating system. Conventional PC-based approaches have typically utilized single-cycle computational rates of 10 kHz or slower. In this paper, we demonstrate reliable cycle-to-cycle rates as fast as 50 kHz. Our system, which we call model reference current injection (MRCI, pronounced merci), is also capable of episodic logging of internal state variables and interactive manipulation of model parameters. The limiting factor in achieving high speeds was not processor speed or model complexity, but cycle jitter inherent in the CPU/motherboard performance. We demonstrate these high speeds and flexibility with two examples: 1) adding action-potential ionic currents to a mammalian neuron under whole-cell patch clamp and 2) altering a cell's intrinsic dynamics via MRCI while simultaneously coupling it via artificial synapses to an internal computational model cell. These higher rates greatly extend the applicability of this technique to the study of fast electrophysiological currents, such as fast A-type currents and fast excitatory/inhibitory synapses.
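A minimal sketch of one dynamic-clamp cycle: read the membrane voltage, update a model conductance, and compute the current I = g*(V - E) to inject. The gate model and constants below are invented for illustration, and real hardware I/O and real-time scheduling are omitted.

import math

def gate_update(m, V, dt, tau=0.005, v_half=-0.040, k=0.005):
    """Relax a first-order activation gate toward its voltage-dependent target."""
    m_inf = 1.0 / (1.0 + math.exp(-(V - v_half) / k))
    return m + dt * (m_inf - m) / tau

def dynamic_clamp_step(V, m, dt, g_max=10e-9, E_rev=0.0):
    m = gate_update(m, V, dt)
    I = g_max * m * (V - E_rev)       # current to inject this cycle
    return I, m

# One simulated cycle at 50 kHz (dt = 20 microseconds), V in volts:
I, m = dynamic_clamp_step(V=-0.050, m=0.1, dt=20e-6)
print(I, m)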
Cascaded VLSI neural network architecture for on-line learning
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)
1992-01-01
High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation-intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application-specific coprocessor for solving real world problems at extremely high data rates.
Household computer and Internet access: The digital divide in a pediatric clinic population
Carroll, Aaron E.; Rivara, Frederick P.; Ebel, Beth; Zimmerman, Frederick J.; Christakis, Dimitri A.
2005-01-01
Past studies have noted a digital divide, or inequality in computer and Internet access related to socioeconomic class. This study sought to measure how many households in a pediatric primary care outpatient clinic had household access to computers and the Internet, and whether this access differed by socio-economic status or other demographic information. We conducted a phone survey of a population-based sample of parents with children ages 0 to 11 years old. Analyses assessed predictors of having home access to a computer, the Internet, and high-speed Internet service. Overall, 88.9% of all households owned a personal computer, and 81.4% of all households had Internet access. Among households with Internet access, 48.3% had high speed Internet at home. There were statistically significant associations between parental income or education and home computer ownership and Internet access. However, the impact of this difference was lessened by the fact that over 60% of families with annual household income of $10,000–$25,000, and nearly 70% of families with only a high-school education had Internet access at home. While income and education remain significant predictors of household computer and internet access, many patients and families at all economic levels have access, and might benefit from health promotion interventions using these modalities. PMID:16779012
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2017-01-01
Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
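For reference, a minimal sketch of the soft-threshold filtering step that such CS-based schemes rely on; the full TDM-STF/OSTR pipeline is not reproduced.

import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([-0.8, -0.05, 0.0, 0.3, 1.2])
print(soft_threshold(x, 0.1))   # approximately [-0.7, 0., 0., 0.2, 1.1]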
NASA Technical Reports Server (NTRS)
Farassat, F.; Dunn, M. H.; Padula, S. L.
1986-01-01
The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study addressing accuracy and speed of execution is also presented. The code is tested against an earlier Langley code, and considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers, as well as the ability to perform analog operations at much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and speed required for an equivalent throughput, as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
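A minimal numeric illustration of the digital partitioning idea: split operands into low-precision digits, form the digit-plane products (the part an analog optical processor would perform), and recombine them with radix weights. Illustrative only.

import numpy as np

def partitioned_matvec(A, x, base=16, digits=2):
    """Exact integer matrix-vector product assembled from digit-plane products."""
    total = np.zeros(A.shape[0], dtype=np.int64)
    for i in range(digits):                 # digit planes of A
        Ai = (A // base**i) % base
        for j in range(digits):             # digit planes of x
            xj = (x // base**j) % base
            total += (Ai @ xj) * base**(i + j)   # low-precision partial product
    return total

A = np.array([[250, 37], [12, 199]], dtype=np.int64)
x = np.array([123, 45], dtype=np.int64)
print(partitioned_matvec(A, x), A @ x)      # identical results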
NASA Astrophysics Data System (ADS)
Ohene-Kwofie, Daniel; Otoo, Ekow
2015-10-01
The ATLAS detector, operated at the Large Hadron Collider (LHC), records proton-proton collisions at CERN every 50 ns, resulting in a sustained data flow of up to PB/s. The upgraded Tile Calorimeter of the ATLAS experiment will sustain about 5 PB/s of digital throughput. These massive data rates require extremely fast data capture and processing. Although there has been a steady increase in the processing speed of CPUs/GPGPUs assembled for high performance computing, the rate of data input and output, even under parallel I/O, has not kept up with the general increase in computing speeds. The problem, then, is whether one can implement an I/O subsystem infrastructure capable of meeting the computational speeds of advanced computing systems at the petascale and exascale level. We propose a system architecture that leverages the Partitioned Global Address Space (PGAS) model of computing to maintain an in-memory data store for the Processing Unit (PU) of the upgraded electronics of the Tile Calorimeter, which is proposed to be used as a high throughput general purpose co-processor to the sROD of the upgraded Tile Calorimeter. The physical memories of the PUs are aggregated into a large global logical address space using RDMA-capable interconnects such as PCI-Express to enhance data processing throughput.
Darwin's bee-trap: The kinetics of Catasetum, a new world orchid.
Nicholson, Charles C; Bales, James W; Palmer-Fortune, Joyce E; Nicholson, Robert G
2008-01-01
The orchid genus Catasetum employs a hair-trigger-activated pollen release mechanism, which forcibly attaches pollen sacs onto foraging insects in the New World tropics. This remarkable adaptation was studied extensively by Charles Darwin, who termed the rapid response "sensitiveness." Using high speed video cameras with a frame rate of 1000 fps, this rapid release was filmed, and from the subsequent footage velocity, speed, acceleration, force, and kinetic energy were computed.
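A minimal sketch of how such kinematic quantities can be extracted from digitized high-speed footage: central differences on frame-by-frame positions at a known frame rate. The positions below are invented for illustration.

fps = 1000.0                               # camera frame rate used in the study
x = [0.000, 0.002, 0.008, 0.018, 0.032]    # position (m) in successive frames

# Central-difference velocity and acceleration; dt between frames is 1/fps.
v = [(x[i+1] - x[i-1]) * fps / 2.0 for i in range(1, len(x) - 1)]
a = [(v[i+1] - v[i-1]) * fps / 2.0 for i in range(1, len(v) - 1)]
print(v)   # m/s at interior frames
print(a)   # m/s^2 at the central frame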
NASA Technical Reports Server (NTRS)
Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.
1991-01-01
A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages, with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.
Applied Computational Fluid Dynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1994-01-01
The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.
NASA Technical Reports Server (NTRS)
Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)
1975-01-01
The author has identified the following significant results. It was found that the high-speed man-machine interaction capability is a distinct advantage of the Image 100; however, the small size of the digital computer in the system is a definite limitation. The system can be highly useful in an analysis mode in which it complements a large general purpose computer. The Image 100 was found to be extremely valuable in the analysis of aircraft MSS data, where the spatial resolution begins to approach photographic quality and the analyst can exercise interpretation judgments and readily interact with the machine.
Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows
NASA Technical Reports Server (NTRS)
Baker, A. J.; Freels, J. D.
1989-01-01
A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.
High speed cylindrical roller bearing analysis, SKF computer program CYBEAN. Volume 2: User's manual
NASA Technical Reports Server (NTRS)
Kleckner, R. J.; Pirvics, J.
1978-01-01
The CYBEAN (Cylindrical Bearing Analysis) computer program was created to model radially loaded, aligned and misaligned cylindrical roller bearing performance under a variety of operating conditions. Emphasis was placed on detailing the effects of high speed, preload, and system thermal coupling. Roller tilt, skew, radial, circumferential, and axial displacement, as well as flange contact, were considered. Variable housing and flexible out-of-round outer ring geometries were permitted, and both steady-state and time-transient temperature calculations were enabled. The complete range of elastohydrodynamic contact considerations, employing full and partial film conditions, was treated in the computation of raceway and flange contacts. Input and output architectures containing guidelines for use and a sample execution are detailed.
SANs and Large Scale Data Migration at the NASA Center for Computational Sciences
NASA Technical Reports Server (NTRS)
Salmon, Ellen M.
2004-01-01
Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.
High-Speed On-Board Data Processing for Science Instruments
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace
2014-01-01
Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented in this paper. The project is called High-Speed On-Board Data Processing for Science Instruments (HOPS) and focuses on a high-speed, scalable data processing platform for three particular National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS utilizes advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is that it enables high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on the data available when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them in an FPGA-based data processing board. A general introduction to these facets is also the purpose of this paper.
NASA Technical Reports Server (NTRS)
Kleckner, R. J.; Rosenlieb, J. W.; Dyba, G.
1980-01-01
The results of a series of full-scale hardware tests comparing predictions of the SPHERBEAN computer program with measured data are presented. The SPHERBEAN program predicts the thermomechanical performance characteristics of high-speed lubricated double-row spherical roller bearings. The degree of correlation between performance predicted by SPHERBEAN and measured data is demonstrated. Experimental and calculated performance data are compared over a range of speeds up to 19,400 rpm (0.8 MDN) under pure radial, pure axial, and combined loads.
Generalization of low pressure, gas-liquid, metastable sound speed to high pressures
NASA Technical Reports Server (NTRS)
Bursik, J. W.; Hall, R. M.
1981-01-01
A theory is developed for isentropic metastable sound propagation in high pressure gas-liquid mixtures. Without simplification, it also correctly predicts the minimum speed for low pressure air-water measurements where other authors are forced to postulate isothermal propagation. This is accomplished by a mixture heat capacity ratio which automatically adjusts from its single phase values to approximately the isothermal value of unity needed for the minimum speed. Computations are made for the pure components parahydrogen and nitrogen, with emphasis on the latter. With simplifying assumptions, the theory reduces to a well known approximate formula limited to low pressure.
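For context, the "well known approximate formula limited to low pressure" referred to above is commonly written, for a homogeneous bubbly mixture, as Wood's equation; it is given here as background and is not quoted from the paper itself:

```latex
\frac{1}{\rho_{m} a_{m}^{2}} = \frac{\alpha}{\rho_{g} a_{g}^{2}} + \frac{1-\alpha}{\rho_{l} a_{l}^{2}},
\qquad \rho_{m} = \alpha \rho_{g} + (1-\alpha)\rho_{l}
```

Here alpha is the gas void fraction and the subscripts g and l denote the gas and liquid phases; the mixture sound speed passes through a pronounced minimum near alpha = 0.5, which is the low-pressure minimum the theory reproduces.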
Computational Challenges of Viscous Incompressible Flows
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin; Kim, Chang Sung
2004-01-01
Over the past thirty years, numerical methods and simulation tools for incompressible flows have been advanced as a subset of the computational fluid dynamics (CFD) discipline. Although incompressible flows are encountered in many areas of engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to the rather stringent requirements for predicting aerodynamic performance characteristics of flight vehicles, while flow devices involving low-speed or incompressible flow could be reasonably well designed without resorting to accurate numerical simulations. As flow devices are required to be more sophisticated and highly efficient, CFD tools become increasingly important in fluid engineering for incompressible and low-speed flow. This paper reviews some of the successes made possible by advances in computational technologies during the same period, and discusses some of the current challenges faced in computing incompressible flows.
Capability of GPGPU for Faster Thermal Analysis Used in Data Assimilation
NASA Astrophysics Data System (ADS)
Takaki, Ryoji; Akita, Takeshi; Shima, Eiji
A thermal mathematical model plays an important role in operations on orbit as well as in spacecraft thermal design. The thermal mathematical model has some uncertain thermal characteristic parameters, such as thermal contact resistances between components and effective emittances of multilayer insulation (MLI) blankets, which degrade the efficiency and accuracy of the model. A particle filter, one of the sequential data assimilation methods, has been applied to construct spacecraft thermal mathematical models. This method conducts a large number of ensemble computations, which require substantial computational power. Recently, General Purpose computing on Graphics Processing Units (GPGPU) has attracted attention in high-performance computing. Therefore, GPGPU is applied here to increase the computational speed of the thermal analysis used in the particle filter. This paper shows the speed-up results obtained by using GPGPU, as well as the method of applying it.
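As an illustration of the kind of ensemble computation such a particle filter offloads to the GPU, the sketch below advances a batch of lumped-parameter thermal models, one per particle, in parallel with CuPy. It is a minimal stand-in, not the authors' code: the node count, capacitances, couplings, and time step are all hypothetical.

```python
# Illustrative sketch (not the authors' code): advancing a particle-filter
# ensemble of lumped-parameter thermal models in parallel on the GPU with
# CuPy. Node counts, capacitances, couplings and time step are hypothetical.
import cupy as cp

n_particles, n_nodes = 1024, 64
T = cp.full((n_particles, n_nodes), 290.0)            # node temperatures [K]
C = cp.full((n_particles, n_nodes), 500.0)            # heat capacities [J/K]
# uncertain conductive couplings: one matrix per particle (randomly sampled)
G = cp.random.uniform(0.1, 1.0, (n_particles, n_nodes, n_nodes))
Q = cp.zeros((n_particles, n_nodes)); Q[:, 0] = 10.0  # heater power [W]
dt = 1.0                                              # time step [s]

for _ in range(100):
    # net conductive heat flow into each node: sum_j G_ij * (T_j - T_i),
    # evaluated for all particles at once as batched matrix products
    flux = cp.einsum('pij,pj->pi', G, T) - T * G.sum(axis=2)
    T += dt * (flux + Q) / C
```

Because every particle's update is the same arithmetic applied to different data, the whole ensemble advances in a few batched kernel launches, which is where the GPGPU speed-up comes from.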
NASA Astrophysics Data System (ADS)
Ford, Eric B.; Dindar, Saleh; Peters, Jorg
2015-08-01
The realism of astrophysical simulations and statistical analyses of astronomical data is set by the available computational resources. Thus, astronomers and astrophysicists are constantly pushing the limits of computational capabilities. For decades, astronomers benefited from massive improvements in computational power that were driven primarily by increasing clock speeds and required relatively little attention to details of the computational hardware. For nearly a decade, increases in computational capabilities have come primarily from increasing the degree of parallelism, rather than increasing clock speeds. Further increases in computational capabilities will likely be led by many-core architectures such as Graphical Processing Units (GPUs) and Intel Xeon Phi. Successfully harnessing these new architectures requires significantly more understanding of the hardware architecture, cache hierarchy, compiler capabilities and network characteristics. I will provide an astronomer's overview of the opportunities and challenges provided by modern many-core architectures and elastic cloud computing. The primary goal is to help an astronomical audience understand what types of problems are likely to yield more than order of magnitude speed-ups and which problems are unlikely to parallelize sufficiently efficiently to be worth the development time and/or costs. I will draw on my experience leading a team in developing the Swarm-NG library for parallel integration of large ensembles of small n-body systems on GPUs, as well as several smaller software projects. I will share lessons learned from collaborating with computer scientists, including both technical and soft skills. Finally, I will discuss the challenges of training the next generation of astronomers to be proficient in this new era of high-performance computing, drawing on experience teaching a graduate class on High-Performance Scientific Computing for Astrophysics and organizing a 2014 advanced summer school on Bayesian Computing for Astronomical Data Analysis with support of the Penn State Center for Astrostatistics and Institute for CyberScience.
A computer-vision-based rotating speed estimation method for motor bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Wang, Xiaoxian; Guo, Jie; Lu, Siliang; Shen, Changqing; He, Qingbo
2017-06-01
Diagnosis of motor bearing faults under variable speed remains a challenging problem. In this study, a new computer-vision-based order tracking method is proposed to address it. First, a video recorded by a high-speed camera is analyzed with the speeded-up robust feature extraction and matching algorithm to obtain the instantaneous rotating speed (IRS) of the motor. Subsequently, an audio signal recorded by a microphone is equi-angle resampled for order tracking in accordance with the IRS curve, through which the time-domain signal is transferred to an angular-domain one. The envelope order spectrum is then calculated to determine the fault characteristic order, and finally the bearing fault pattern is determined. The effectiveness and robustness of the proposed method are verified with two brushless direct-current motor test rigs, in which two defective bearings and a healthy bearing are tested separately. This study provides a new noninvasive measurement approach that simultaneously avoids the installation of a tachometer and overcomes the disadvantages of tacholess order tracking methods for motor bearing fault diagnosis under variable speed.
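The equi-angle resampling step is the heart of any order-tracking scheme. The sketch below shows a generic NumPy version, not the authors' implementation; `t`, `irs_hz`, and `audio` are assumed inputs holding the sample times, the camera-derived instantaneous rotating speed in revolutions per second, and the microphone signal.

```python
# Generic sketch of the equi-angle resampling step of order tracking.
# Inputs (assumed): t [s], irs_hz [rev/s] sampled at t, and the audio signal.
import numpy as np

def equi_angle_resample(t, irs_hz, audio, samples_per_rev=256):
    # cumulative shaft angle in revolutions, integrated from the IRS curve
    rev = np.concatenate(([0.0], np.cumsum(np.diff(t) * irs_hz[:-1])))
    # uniform angular grid: samples_per_rev points per revolution
    rev_uniform = np.arange(0.0, rev[-1], 1.0 / samples_per_rev)
    # times at which those angles are reached, then resample the audio there
    t_uniform = np.interp(rev_uniform, rev, t)
    return np.interp(t_uniform, t, audio)
```

The resampled signal is uniform in shaft angle rather than in time, so a subsequent FFT yields an order spectrum in which speed-dependent fault frequencies collapse onto fixed orders.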
NASA Technical Reports Server (NTRS)
Schilling, D. L.; Oh, S. J.; Thau, F.
1975-01-01
Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.
Software Accelerates Computing Time for Complex Math
NASA Technical Reports Server (NTRS)
2014-01-01
Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.
Linear static structural and vibration analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.
1993-01-01
Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
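For readers unfamiliar with the three tasks listed, the sketch below reproduces them serially with SciPy as a small-scale stand-in for the parallel SHPC kernels; the matrix is a hypothetical 1-D chain stiffness, not a High-Speed Civil Transport model.

```python
# Serial analogue of the three tasks named above (assembly, solve, eigen-
# analysis), using SciPy as a stand-in for the parallel SHPC kernels.
# All values are illustrative only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000                                      # degrees of freedom (hypothetical)
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], [-1, 0, 1], format='csc')  # 1-D chain stiffness
f = np.ones(n)                                # load vector

u = spla.spsolve(K, f)                        # static solution of K u = f
vals, vecs = spla.eigsh(K, k=5, which='SM')   # five lowest vibration modes
```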
Speed and path control for conflict-free flight in high air traffic demand in terminal airspace
NASA Astrophysics Data System (ADS)
Rezaei, Ali
To accommodate the growing air traffic demand, flights will need to be planned and navigated with a much higher level of precision than today's aircraft flight path. The Next Generation Air Transportation System (NextGen) stands to benefit significantly in safety and efficiency from such movement of aircraft along precisely defined paths. Air Traffic Operations (ATO) relying on such precision--the Precision Air Traffic Operations or PATO--are the foundation of the high throughput capacity envisioned for future airports. In PATO, the preferred method is to manage the air traffic by assigning a speed profile to each aircraft in a given fleet in a given airspace (in practice known as speed control). In this research, an algorithm has been developed, set in the context of a Hybrid Control System (HCS) model, that determines whether a speed control solution exists for a given fleet of aircraft in a given airspace and, if so, computes this solution as a collective speed profile that assures separation if executed without deviation. Uncertainties such as weather are not considered, but the algorithm can be modified to include them. The algorithm first computes all feasible sequences (i.e., all sequences that allow the given fleet of aircraft to reach destinations without violating the FAA's separation requirement) by looking at all pairs of aircraft. Then, the most likely sequence is determined and the speed control solution is constructed by a backward trajectory generation, starting with the aircraft last out and proceeding to the first out. This computation can be done for different sequences in parallel, which helps to reduce the computation time. If such a solution does not exist, then the algorithm calculates a minimal path modification (known as path control) that will allow separation-compliant speed control. We will also prove that the algorithm will modify the path without creating a new separation violation. The new path will be generated by adding new waypoints in the airspace. As a byproduct, instead of minimal path modification, one can use the aircraft arrival time schedule to generate the sequence in which the aircraft reach their destinations.
NASA Astrophysics Data System (ADS)
Hao, Qiushi; Shen, Yi; Wang, Yan; Zhang, Xin
2018-01-01
Nondestructive testing (NDT) of rails has traditionally been carried out intermittently, which severely restricts detection efficiency given the rapid development of high-speed railways. It is therefore necessary to put forward a dynamic rail defect detection method for rail health monitoring. Acoustic emission (AE), a practical real-time detection technology, takes advantage of the dynamic AE signal emitted by plastic deformation of the material. The detection capability of AE for rail defects has been verified owing to its sensitivity and dynamic merits, but its application under normal train service conditions has been impeded by synchronous background noises, which are directly linked to the wheel speed. In this paper, experiments on a wheel-rail rolling rig are performed to investigate defect AE signals at varying speeds. A dynamic denoising method based on the Kalman filter is proposed, and its detection effectiveness and flexibility are demonstrated by theory and computational results. Moreover, after comparative analysis of the modelling precision at different speeds, it is predicted that the method is also applicable to high-speed conditions beyond those tested.
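A minimal scalar Kalman filter of the kind such a denoiser builds on is sketched below. The process and measurement covariances `q` and `r` are assumed constants here; in a speed-adaptive setting like the paper's, the noise statistics would be tied to the wheel speed.

```python
# Minimal scalar Kalman filter with a random-walk signal model (a generic
# sketch, not the authors' speed-adaptive variant). q and r are assumed
# process and measurement noise covariances.
import numpy as np

def kalman_denoise(z, q=1e-4, r=1e-1):
    x, p = z[0], 1.0                 # state estimate and its variance
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                    # predict (random-walk signal model)
        g = p / (p + r)              # Kalman gain
        x = x + g * (zk - x)         # update with measurement zk
        p = (1.0 - g) * p
        out[k] = x
    return out
```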
Integrating Computer Architectures into the Design of High-Performance Controllers
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.; Leyland, Jane A.; Warmbrodt, William
1986-01-01
Modern control systems must typically perform real-time identification and control, as well as coordinate a host of other activities related to user interaction, on-line graphics, and file management. This paper discusses five global design considerations that are useful to integrate array processor, multimicroprocessor, and host computer system architecture into versatile, high-speed controllers. Such controllers are capable of very high control throughput, and can maintain constant interaction with the non-real-time or user environment. As an application example, the architecture of a high-speed, closed-loop controller used to actively control helicopter vibration will be briefly discussed. Although this system has been designed for use as the controller for real-time rotorcraft dynamics and control studies in a wind-tunnel environment, the control architecture can generally be applied to a wide range of automatic control applications.
Parallel fuzzy connected image segmentation on GPU
Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.
2011-01-01
Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA’s Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. Methods: In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on GPU. A dramatic improvement in speed for both tasks is achieved as a result. Results: Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 24.4x, 18.1x, and 10.3x, correspondingly, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, correspondingly, for the three data sets. Conclusions: The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on the NVIDIA GPUs, which are far more cost- and speed-effective than both cluster of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set. PMID:21859037
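To make task (i) concrete, the sketch below computes a common form of fuzzy affinity, a Gaussian of the intensity difference between face-adjacent voxels, on the CPU with NumPy; the paper implements the equivalent step as CUDA kernels, and `sigma` is an assumed homogeneity parameter.

```python
# Sketch of task (i), the fuzzy affinity computation, in NumPy on the CPU
# (the paper implements the equivalent step as CUDA kernels). A common
# affinity choice is a Gaussian of the intensity difference between
# face-adjacent voxels; sigma is an assumed homogeneity parameter.
import numpy as np

def face_affinities(vol, sigma=10.0):
    vol = vol.astype(np.float64)
    affs = []
    for axis in range(vol.ndim):
        front = np.moveaxis(vol, axis, 0)[:-1]   # voxel c
        back = np.moveaxis(vol, axis, 0)[1:]     # its face neighbor d
        affs.append(np.exp(-((front - back) ** 2) / (2.0 * sigma ** 2)))
    return affs   # one affinity array per adjacency direction
```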
Kok, H P; de Greef, M; Bel, A; Crezee, J
2009-08-01
In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution temperature matrices are unknown and temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set to the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent an unacceptable exceeding of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. Maximum exceeding of the constraint in a hot spot was 0.24-0.34 degree C; average maximum exceeding over all five patients was 0.09-0.21 degree C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8, without large deviations from the conventional method.
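A rough sketch of the grouping idea, under stated assumptions rather than the paper's exact criteria: each element's precomputed Hermitian temperature matrix yields a principal eigenvector whose phases describe the applicator setting that heats that element best, and elements with matching rounded phase signatures are pooled into one group.

```python
# Illustrative sketch of element grouping (assumed criteria, not the paper's):
# each tissue element e has a precomputed Hermitian "temperature matrix" T_e
# mapping antenna phase/amplitude settings to heating; the eigenvector of the
# largest eigenvalue gives the setting that heats e most effectively, and
# elements with similar (rounded) eigenvector phases are pooled together.
import numpy as np

def group_elements(T_matrices, phase_tol=0.2):
    groups = {}
    for idx, T_e in enumerate(T_matrices):
        vals, vecs = np.linalg.eigh(T_e)               # ascending eigenvalues
        lead = vecs[:, -1]                             # principal eigenvector
        lead = lead * np.exp(-1j * np.angle(lead[0]))  # remove global phase
        key = tuple(np.round(np.angle(lead) / phase_tol).astype(int))
        groups.setdefault(key, []).append(idx)
    return list(groups.values())
```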
NASA Technical Reports Server (NTRS)
Jansen, Mark; Montague, Gerald; Provenza, Andrew; Palazzolo, Alan
2004-01-01
Closed-loop operation of a single, high-temperature magnetic radial bearing to 30,000 RPM (2.25 million DN) and 540 C (1000 F) is discussed. Also, high-temperature, fault-tolerant operation of the three-axis system is examined. A novel hydrostatic backup bearing system was employed to attain high-speed, high-temperature, lubrication-free support of the entire rotor system. The hydrostatic bearings were made of a high-lubricity material and acted as journal-type backup bearings. New high-temperature displacement sensors were successfully employed to monitor shaft position throughout the entire temperature range and are described in this paper. Control of the system was accomplished through a stand-alone, high-speed computer controller, which ran both the fault-tolerant PID and active vibration control algorithms.
Advanced Multifunctional Materials for High Speed Combatant Hulls
2015-11-25
Grant N00014-14-1-0269; PI: Mark S. Mirotznik, Associate Professor, Department of Electrical and Computer Engineering, University of Delaware. Final technical report abstract: In this ONR-funded project, investigators at the University of Delaware's Department of Electrical and Computer Engineering …
1985-12-01
… Office of Scientific Research and Air Force Space Division are sponsoring research for the development of a high-speed DFT processor. This DFT … to the arithmetic circuitry through a master/slave … Since the TSP is an NP-complete problem, many mathematicians, operations researchers, computer scientists, and the like have proposed heuristic …
MQW Optical Feedback Modulators And Phase Shifters
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
1995-01-01
Laser diodes equipped with the proposed multiple-quantum-well (MQW) optical feedback modulators prove useful in a variety of analog and digital optical-communication applications, including fiber-optic signal-distribution networks and high-speed, low-crosstalk interconnections among supercomputers or very-high-speed integrated circuits. The development exploits an accompanying electro-optical aspect of the quantum-confined Stark effect (QCSE) - the variation of the index of refraction with applied electric field. It also exploits the sensitivity of laser diodes to optical feedback. The approach is the reverse of the prior approach.
NASA Technical Reports Server (NTRS)
Strong, J. P., III
1973-01-01
Tse computers have the potential of operating four or five orders of magnitude faster than present digital computers. The computers of the new design use binary images as their basic computational entity. The word 'tse' is the transliteration of the Chinese word for 'pictograph character.' Tse computers are large collections of devices that perform logical operations on binary images. The operations on binary images are to be performed over the entire image simultaneously.
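NumPy's vectorized bitwise operations give a convenient software analogue of tse-style whole-image logic; the sketch below applies AND, OR, NOT, and a one-pixel shift to entire binary images at once. It is an illustration of the concept only, not a model of the hardware devices described.

```python
# Software analogue of tse-style whole-image logic: NumPy bitwise operations
# apply a gate to every pixel of a binary image simultaneously (an
# illustration of the concept, not a model of the actual tse devices).
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 2, (512, 512), dtype=np.uint8)   # binary image operand A
b = rng.integers(0, 2, (512, 512), dtype=np.uint8)   # binary image operand B

and_img = a & b                    # image-wide logical AND
or_img = a | b                     # image-wide logical OR
not_img = 1 - a                    # image-wide logical NOT
shifted = np.roll(a, 1, axis=1)    # one-pixel shift, a neighborhood primitive
```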
NASA Technical Reports Server (NTRS)
Klopfer, Goetz H.
1993-01-01
The work performed during the past year on this cooperative agreement covered two major areas and two lesser ones. The two major items included further development and validation of the Compressible Navier-Stokes Finite Volume (CNSFV) code and providing computational support for the Laminar Flow Supersonic Wind Tunnel (LFSWT). The two lesser items involve a Navier-Stokes simulation of an oscillating control surface at transonic speeds and improving the basic algorithm used in the CNSFV code for faster convergence rates and more robustness. The work done in all four areas is in support of the High Speed Research Program at NASA Ames Research Center.
NASA Technical Reports Server (NTRS)
Leonard, A.
1980-01-01
Three recent simulations of turbulent shear flow bounded by a wall using the Illiac computer are reported. These are: (1) vibrating-ribbon experiments; (2) a study of the evolution of a spot-like disturbance in a laminar boundary layer; and (3) an investigation of turbulent channel flow. A number of persistent flow structures were observed, including streamwise and vertical vorticity distributions near the wall, low-speed and high-speed streaks, and local regions of intense vertical velocity. The role of these structures in, for example, the growth or maintenance of turbulence is discussed. The problem of representing the large range of turbulent scales in a computer simulation is also discussed.
NASA Astrophysics Data System (ADS)
Saghafian, Amirreza; Pitsch, Heinz
2012-11-01
A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on the presumed subgrid PDFs of mixture fraction and progress variable, beta function and delta function respectively, which are assessed using DNS databases. The flamelet equation budget is also computed to verify the validity of CFPV method for high-speed flows.
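The presumed-PDF step can be sketched compactly: given the resolved mean and variance of mixture fraction, filter a tabulated flamelet quantity against the corresponding beta distribution. The code below is a generic illustration; the lookup `table` on grid `z_grid` is an assumed input, not the authors' chemistry table.

```python
# Generic sketch of presumed beta-PDF filtering in mixture fraction Z.
# Assumed inputs: `table` holds a flamelet quantity phi(Z) tabulated on
# `z_grid`, which should exclude the exact endpoints 0 and 1 where the
# beta PDF can diverge.
import numpy as np
from scipy.stats import beta

def beta_pdf_filter(table, z_grid, z_mean, z_var):
    # map the resolved mean and variance to the beta shape parameters a, b
    gamma = z_mean * (1.0 - z_mean) / max(z_var, 1e-12) - 1.0
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    w = beta.pdf(z_grid, a, b)              # presumed subgrid PDF of Z
    return (w * table).sum() / w.sum()      # simple quadrature, uniform grid
```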
Evaluation of a continuous-rotation, high-speed scanning protocol for micro-computed tomography.
Kerl, Hans Ulrich; Isaza, Cristina T; Boll, Hanne; Schambach, Sebastian J; Nolte, Ingo S; Groden, Christoph; Brockmann, Marc A
2011-01-01
Micro-computed tomography is used frequently in preclinical in vivo research. Limiting factors are radiation dose and long scan times. The purpose of the study was to compare a standard step-and-shoot to a continuous-rotation, high-speed scanning protocol. Micro-computed tomography of a lead grid phantom and a rat femur was performed using a step-and-shoot and a continuous-rotation protocol. Detail discriminability and image quality were assessed by 3 radiologists. The signal-to-noise ratio and the modulation transfer function were calculated, and volumetric analyses of the femur were performed. The radiation dose of the scan protocols was measured using thermoluminescence dosimeters. The 40-second continuous-rotation protocol allowed a detail discriminability comparable to the step-and-shoot protocol at significantly lower radiation doses. No marked differences in volumetric or qualitative analyses were observed. Continuous-rotation micro-computed tomography significantly reduces scanning time and radiation dose without relevantly reducing image quality compared with a normal step-and-shoot protocol.
Research in the design of high-performance reconfigurable systems
NASA Technical Reports Server (NTRS)
Mcewan, S. D.; Spry, A. J.
1985-01-01
Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1994-01-01
A full Navier-Stokes analysis was performed to evaluate the performance of the subsonic diffuser of a NASA Lewis Research Center 70/30 mixed-compression bifurcated supersonic inlet for high speed civil transport application. The PARC3D code was used in the present study. The computations were also performed when approximately 2.5 percent of the engine mass flow was allowed to bypass through the engine bypass doors. The computational results were compared with the available experimental data which consisted of detailed Mach number and total pressure distribution along the entire length of the subsonic diffuser. The total pressure recovery, flow distortion, and crossflow velocity at the engine face were also calculated. The computed surface ramp and cowl pressure distributions were compared with experiments. Overall, the computational results compared well with experimental data. The present CFD analysis demonstrated that the bypass flow improves the total pressure recovery and lessens flow distortions at the engine face.
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power needed to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.
Development of a high-specific-speed centrifugal compressor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodgers, C.
1997-07-01
This paper describes the development of a subscale single-stage centrifugal compressor with a dimensionless specific speed (Ns) of 1.8, originally designed for full-size application as a high volume flow, low pressure ratio, gas booster compressor. The specific stage is noteworthy in that it provides a benchmark representing the performance potential of very high-specific-speed compressors, of which limited information is found in the open literature. Stage and component test performance characteristics are presented together with traverse results at the impeller exit. Traverse test results were compared with recent CFD computational predictions for an exploratory analytical calibration of a very high-specific-speed impeller geometry. The tested subscale (0.583) compressor essentially satisfied design performance expectations with an overall stage efficiency of 74%, including excessive exit casing losses. It was estimated that stage efficiency could be increased to 81% with exit casing losses halved.
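For context, one common convention for the dimensionless specific speed quoted above is given below; this is an assumption for the reader's orientation, as the paper may use a slightly different normalization:

```latex
N_{s} = \frac{\omega \sqrt{Q}}{\left(\Delta h_{s}\right)^{3/4}}
```

where omega is the rotor angular speed, Q the inlet volume flow rate, and Delta h_s the isentropic enthalpy rise; values near 1.8 correspond to high-flow, low-head impellers.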
The evolution of methods for noise prediction of high speed rotors and propellers in the time domain
NASA Technical Reports Server (NTRS)
Farassat, F.
1986-01-01
Linear wave equation models which have been used over the years at NASA Langley for describing noise emissions from high speed rotating blades are summarized. The noise sources are assumed to lie on a moving surface, and analysis of the situation has been based on the Ffowcs Williams-Hawkings (FW-H) equation. Although the equation accounts for two surface and one volume source, the NASA analyses have considered only the surface terms. Several variations on the FW-H model are delineated for various types of applications, noting the computational benefits of removing the frequency dependence of the calculations. Formulations are also provided for compact and noncompact sources, and features of Long's subsonic integral equation and Farassat's high speed integral equation are discussed. The selection of subsonic or high speed models is dependent on the Mach number of the blade surface where the source is located.
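The FW-H equation referred to above is commonly written in generalized-function form as:

```latex
\Box^{2} p'(\mathbf{x},t) =
  \frac{\partial}{\partial t}\left[\rho_{0} v_{n}\,\delta(f)\right]
  - \frac{\partial}{\partial x_{i}}\left[\ell_{i}\,\delta(f)\right]
  + \frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\left[T_{ij}\,H(f)\right]
```

where f = 0 defines the moving blade surface, v_n is the surface normal velocity, l_i the local surface loading, T_ij the Lighthill stress tensor, and delta and H the Dirac delta and Heaviside functions. The first two (thickness and loading) terms are the surface sources retained in the NASA analyses; the quadrupole term is the neglected volume source.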
Reynolds Number Effects on a Supersonic Transport at Transonic Conditions
NASA Technical Reports Server (NTRS)
Wahls, R. N.; Owens, L. R.; Rivers, S. M. B.
2001-01-01
A High Speed Civil Transport configuration was tested in the National Transonic Facility at the NASA Langley Research Center as part of NASA's High Speed Research Program. The primary purposes of the tests were to assess Reynolds number scale effects and the high Reynolds number aerodynamic characteristics of a realistic, second generation supersonic transport while providing data for the assessment of computational methods. The tests included longitudinal and lateral/directional studies at low speed high-lift and transonic conditions across a range of Reynolds numbers from that available in conventional wind tunnels to near flight conditions. Results are presented which focus on both the Reynolds number and static aeroelastic sensitivities of longitudinal characteristics at Mach 0.90 for a configuration without an empennage.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Madnia, Cyrus K.; Steinberger, Craig J.
1990-01-01
This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program to extend the present capabilities of this method was initiated for the treatment of chemically reacting flows. In the DNS efforts, the focus is on detailed investigations of the effects of compressibility, heat release, and non-equilibrium kinetics modeling in high speed reacting flows. Emphasis was on the simulations of simple flows, namely homogeneous compressible flows and temporally developing high speed mixing layers.
Inventory-transportation integrated optimization for maintenance spare parts of high-speed trains
Wang, Jiaxi; Wang, Huasheng; Wang, Zhongkai; Li, Jian; Lin, Ruixi; Xiao, Jie; Wu, Jianping
2017-01-01
This paper presents a 0–1 programming model aimed at obtaining the optimal inventory policy and transportation mode for maintenance spare parts of high-speed trains. To obtain the model parameters for occasionally-replaced spare parts, a demand estimation method based on the maintenance strategies of China's high-speed railway system is proposed. In addition, we analyse the shortage time using PERT, and then calculate the unit-time shortage cost from the viewpoint of train operation revenue. Finally, a real-world case study from Shanghai Depot is conducted to demonstrate our method. Computational results offer effective and efficient decision support for inventory managers. PMID:28472097
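A toy 0-1 program in the same spirit, with hypothetical parts, modes, and costs rather than the paper's data or objective, can be written with the PuLP modeler:

```python
# Toy 0-1 model (hypothetical data, not the paper's model): for each spare
# part choose one transport mode, minimizing transport plus expected
# shortage cost, subject to exactly one mode per part.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

parts = ["brake_pad", "gearbox_seal"]
modes = ["rail", "road", "air"]
cost = {("brake_pad", "rail"): 3, ("brake_pad", "road"): 5, ("brake_pad", "air"): 9,
        ("gearbox_seal", "rail"): 4, ("gearbox_seal", "road"): 6, ("gearbox_seal", "air"): 11}
shortage = {("brake_pad", "rail"): 8, ("brake_pad", "road"): 4, ("brake_pad", "air"): 1,
            ("gearbox_seal", "rail"): 10, ("gearbox_seal", "road"): 5, ("gearbox_seal", "air"): 1}

prob = LpProblem("spare_parts", LpMinimize)
x = LpVariable.dicts("use", (parts, modes), cat=LpBinary)
prob += lpSum((cost[p, m] + shortage[p, m]) * x[p][m] for p in parts for m in modes)
for p in parts:
    prob += lpSum(x[p][m] for m in modes) == 1   # exactly one mode per part
prob.solve()
```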
Computational Analysis of a Wing Designed for the X-57 Distributed Electric Propulsion Aircraft
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Viken, Jeffrey K.; Viken, Sally A.; Carter, Melissa B.; Wiese, Michael R.; Farr, Norma L.
2017-01-01
A computational study of the wing for the distributed electric propulsion X-57 Maxwell airplane configuration at cruise and takeoff/landing conditions was completed. Two unstructured-mesh, Navier-Stokes computational fluid dynamics methods, FUN3D and USM3D, were used to predict the wing performance. The goal of the X-57 wing and distributed electric propulsion system design was to meet or exceed the required lift coefficient of 3.95 for a stall speed of 58 knots, with a cruise speed of 150 knots at an altitude of 8,000 ft. The X-57 Maxwell airplane has a small, high-aspect-ratio cruise wing designed for a high cruise lift coefficient (0.75) at an angle of attack of 0 deg. The cruise propulsors at the wingtip rotate counter to the wingtip vortex and reduce induced drag by 7.5 percent at an angle of attack of 0.6 deg. The unblown maximum lift coefficient of the high-lift wing (with the 30 deg flap setting) is 2.439. The stall speed goal performance metric was confirmed with a blown wing computed effective lift coefficient of 4.202. The lift augmentation from the high-lift, distributed electric propulsion system is 1.7. The predicted cruise wing drag coefficient of 0.02191 is 0.00076 above the drag allotted for the wing in the original estimate. However, the predicted drag overage for the wing would only use 10.1 percent of the original estimated drag margin, which is 0.00749.
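The stall-speed goal and the lift-coefficient requirement are tied together by the standard relation below, quoted here for context with symbols that are not taken from the paper:

```latex
V_{\mathrm{stall}} = \sqrt{\frac{2W}{\rho\, S\, C_{L,\mathrm{max}}}}
```

For fixed weight W, wing area S, and air density rho, reaching a 58-knot stall speed with a deliberately small wing forces the maximum lift coefficient up toward the 3.95 requirement, which is what the blown high-lift system provides.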
Imaging Performance of Quantitative Transmission Ultrasound
Lenox, Mark W.; Wiskin, James; Lewis, Matthew A.; Darrouzet, Stephen; Borup, David; Hsieh, Scott
2015-01-01
Quantitative Transmission Ultrasound (QTUS) is a tomographic transmission ultrasound modality that is capable of generating 3D speed-of-sound maps of objects in the field of view. It performs this measurement by propagating a plane wave through the medium from a transmitter on one side of a water tank to a high resolution receiver on the opposite side. This information is then used via inverse scattering to compute a speed map. In addition, the presence of reflection transducers allows the creation of a high resolution, spatially compounded reflection map that is natively coregistered to the speed map. A prototype QTUS system was evaluated for measurement and geometric accuracy as well as for the ability to correctly determine speed of sound. PMID:26604918
Langley Symposium on Aerodynamics, volume 1
NASA Technical Reports Server (NTRS)
Stack, Sharon H. (Compiler)
1986-01-01
The purpose of this work was to present current work and results of the Langley Aeronautics Directorate covering the areas of computational fluid dynamics, viscous flows, airfoil aerodynamics, propulsion integration, test techniques, and low-speed, high-speed, and transonic aerodynamics. The following sessions are included in this volume: theoretical aerodynamics, test techniques, fluid physics, and viscous drag reduction.
Information Systems at Enterprise. Design of Secure Network of Enterprise
NASA Astrophysics Data System (ADS)
Saigushev, N. Y.; Mikhailova, U. V.; Vedeneeva, O. A.; Tsaran, A. A.
2018-05-01
No enterprise or company can do without designing its own corporate network in today's information society. Such a network accelerates and facilitates the work of employees at every level, but it also poses a big threat to the company's confidential information. Beyond data theft by attackers, there are plenty of information threats posed by modern malware. In this regard, the security of corporate networks is an important component of modern computer-security practice for any enterprise. This article describes the design of a protected corporate network that gives the computers on the network access to the Internet and provides interoperability with the branch office. High-speed Internet access is provided through the use of high-speed access channels and load balancing between devices. The security of the designed network is achieved through the use of VLAN technology as well as access lists and an AAA server.
NASA Astrophysics Data System (ADS)
Chen, Dongyue; Lin, Jianhui; Li, Yanping
2018-06-01
Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem in the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a large enough ensemble number to reduce the residue noise, and hence the computational cost is high. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently at lower computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
A Real Time Controller For Applications In Smart Structures
NASA Astrophysics Data System (ADS)
Ahrens, Christian P.; Claus, Richard O.
1990-02-01
Research in smart structures, especially in the area of vibration suppression, has warranted the investigation of advanced computing environments. Limited real-time PC computing power has constrained the development of high-order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of intelligent structure monitoring by way of modal domain sensing for vibration control. It is compared to a PC AT based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of 15 million instructions per second (MIPS) continuous performance and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high-speed operation and short development cycles. The system allows for implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.
Ultrahigh-speed X-ray imaging of hypervelocity projectiles
NASA Astrophysics Data System (ADS)
Miller, Stuart; Singh, Bipin; Cool, Steven; Entine, Gerald; Campbell, Larry; Bishel, Ron; Rushing, Rick; Nagarkar, Vivek V.
2011-08-01
High-speed X-ray imaging is an extremely important modality for healthcare, industrial, military and research applications such as medical computed tomography, non-destructive testing, imaging in-flight projectiles, characterizing exploding ordnance, and analyzing ballistic impacts. We report on the development of a modular, ultrahigh-speed, high-resolution digital X-ray imaging system with large active imaging area and microsecond time resolution, capable of acquiring at a rate of up to 150,000 frames per second. The system is based on a high-resolution, high-efficiency, and fast-decay scintillator screen optically coupled to an ultra-fast image-intensified CCD camera designed for ballistic impact studies and hypervelocity projectile imaging. A specially designed multi-anode, high-fluence X-ray source with 50 ns pulse duration provides a sequence of blur-free images of hypervelocity projectiles traveling at speeds exceeding 8 km/s (18,000 miles/h). This paper will discuss the design, performance, and high frame rate imaging capability of the system.
Experimental and Computational Sonic Boom Assessment of Lockheed-Martin N+2 Low Boom Models
NASA Technical Reports Server (NTRS)
Cliff, Susan E.; Durston, Donald A.; Elmiligui, Alaa A.; Walker, Eric L.; Carter, Melissa B.
2015-01-01
Flight at speeds greater than the speed of sound is not permitted over land, primarily because of the noise and structural damage caused by sonic boom pressure waves of supersonic aircraft. Mitigation of sonic boom is a key focus area of the High Speed Project under NASA's Fundamental Aeronautics Program. The project is focusing on technologies to enable future civilian aircraft to fly efficiently with reduced sonic boom, engine and aircraft noise, and emissions. A major objective of the project is to improve both computational and experimental capabilities for design of low-boom, high-efficiency aircraft. NASA and industry partners are developing improved wind tunnel testing techniques and new pressure instrumentation to measure the weak sonic boom pressure signatures of modern vehicle concepts. In parallel, computational methods are being developed to provide rapid design and analysis of supersonic aircraft with improved meshing techniques that provide efficient, robust, and accurate on- and off-body pressures at several body lengths from vehicles with very low sonic boom overpressures. The maturity of these critical parallel efforts is necessary before low-boom flight can be demonstrated and commercial supersonic flight can be realized.
The effect of interference on delta modulation encoded video signals
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1979-01-01
The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Computer simulations of different delta modulators were studied in order to find a satisfactory design. After a suitable delta modulator algorithm was found via computer simulation, it was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error correction algorithms were tested via computer simulation. A very high speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. The final area of investigation concerned finding delta modulators which could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme to be investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit-long shift registers as well as a high speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.
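A minimal linear (fixed-step) delta modulator of the kind studied is sketched below; the adaptive variants such studies compare are not reproduced. The encoder transmits one bit per sample, the sign of the tracking error, and the decoder integrates the received bits.

```python
# Minimal fixed-step linear delta modulator (a generic sketch, not the
# hardware design described above): 1 bit per sample, integrating decoder.
import numpy as np

def dm_encode(x, step=0.05):
    est, bits = 0.0, np.empty(len(x), dtype=np.int8)
    for k, xk in enumerate(x):
        bits[k] = 1 if xk >= est else -1   # transmit sign of the error only
        est += step * bits[k]              # tracking integrator at the encoder
    return bits

def dm_decode(bits, step=0.05):
    return np.cumsum(step * bits.astype(float))  # matching integrator

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 5 * t)
x_hat = dm_decode(dm_encode(x))            # reconstructed approximation of x
```

With `step` too small the decoder exhibits the classic slope-overload distortion; too large, granular noise. This trade-off is what motivates the adaptive algorithms.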
Design and Operating Characteristics of High-Speed, Small-Bore, Angular-Contact Ball Bearings
NASA Technical Reports Server (NTRS)
Pinel, Stanley I.; Signer, Hans R.; Zaretsky, Erwin V.
1998-01-01
The computer program SHABERTH was used to analyze 35-mm-bore, angular-contact ball bearings designed and manufactured for high-speed turbomachinery applications. Parametric tests of the bearings were conducted on a high-speed, high-temperature bearing tester and were compared with the computer predictions. Four bearing and cage designs were studied. The bearings were lubricated either by jet lubrication or through the split inner ring with and without outer-ring cooling. The predicted bearing life decreased with increasing speed because of increased operating contact stresses caused by changes in contact angle and centrifugal load. For thrust loads only, the difference in calculated life for the 24 deg. and 30 deg. contact-angle bearings was insignificant. However, for combined loading, the 24 deg. contact-angle bearing gave longer life. For split-inner-ring bearings, optimal operating conditions were obtained with a 24 deg. contact angle and an inner-ring, land-guided cage, using outer-ring cooling in conjunction with low lubricant flow rates. Lower temperature and power losses were obtained with a single-outer-ring, land-guided cage for the 24 deg. contact-angle bearing having a relieved inner ring and partially relieved outer ring. Inner-ring temperatures were independent of lubrication mode and cage design. In comparison with measured values, reasonably good engineering correlation was obtained using the computer program SHABERTH for predicted bearing power loss and for inner- and outer-ring temperatures. The Parker formula for XCAV (used in SHABERTH, a measure of oil volume in the bearing cavity) may need to be refined to reflect bearing lubrication mode, cage design, and location of cage-controlling land.
User Interface Developed for Controls/CFD Interdisciplinary Research
NASA Technical Reports Server (NTRS)
1996-01-01
The NASA Lewis Research Center, in conjunction with the University of Akron, is developing analytical methods and software tools to create a cross-discipline "bridge" between controls and computational fluid dynamics (CFD) technologies. Traditionally, the controls analyst has used simulations based on large lumping techniques to generate low-order linear models convenient for designing propulsion system controls. For complex, high-speed vehicles such as the High Speed Civil Transport (HSCT), simulations based on CFD methods are required to capture the relevant flow physics. The use of CFD should also help reduce the development time and costs associated with experimentally tuning the control system. The initial application for this research is the High Speed Civil Transport inlet control problem. A major aspect of this research is the development of a controls/CFD interface for non-CFD experts, to facilitate the interactive operation of CFD simulations and the extraction of reduced-order, time-accurate models from CFD results. A distributed computing approach for implementing the interface is being explored. Software being developed as part of the Integrated CFD and Experiments (ICE) project provides the basis for the operating environment, including run-time displays and information (data base) management. Message-passing software is used to communicate between the ICE system and the CFD simulation, which can reside on distributed, parallel computing systems. Initially, the one-dimensional Large-Perturbation Inlet (LAPIN) code is being used to simulate a High Speed Civil Transport type inlet. LAPIN can model real supersonic inlet features, including bleeds, bypasses, and variable geometry, such as translating or variable-ramp-angle centerbodies. Work is in progress to use parallel versions of the multidimensional NPARC code.
NASA Technical Reports Server (NTRS)
Kemeny, Sabrina E.
1994-01-01
Electronic and optoelectronic hardware implementations of highly parallel computing architectures address several ill-defined and/or computation-intensive problems not easily solved by conventional computing techniques. The concurrent processing architectures developed are derived from a variety of advanced computing paradigms including neural network models, fuzzy logic, and cellular automata. Hardware implementation technologies range from state-of-the-art digital/analog custom-VLSI to advanced optoelectronic devices such as computer-generated holograms and e-beam fabricated Dammann gratings. JPL's concurrent processing devices group has developed a broad technology base in hardware implementable parallel algorithms, low-power and high-speed VLSI designs and building block VLSI chips, leading to application-specific high-performance embeddable processors. Application areas include high throughput map-data classification using feedforward neural networks, a terrain based tactical movement planner using cellular automata, resource optimization (weapon-target assignment) using a multidimensional feedback network with lateral inhibition, and classification of rocks using an inner-product scheme on thematic mapper data. In addition to addressing specific functional needs of DOD and NASA, the JPL-developed concurrent processing device technology is also being customized for a variety of commercial applications (in collaboration with industrial partners), and is being transferred to U.S. industries. This viewgraph presentation focuses on two application-specific processors which solve the computation intensive tasks of resource allocation (weapon-target assignment) and terrain based tactical movement planning using two extremely different topologies. Resource allocation is implemented as an asynchronous analog competitive assignment architecture inspired by the Hopfield network. Hardware realization leads to a two to four order of magnitude speed-up over conventional techniques and enables multiple assignments (many to many), not achievable with standard statistical approaches. Tactical movement planning (finding the best path from A to B) is accomplished with a digital two-dimensional concurrent processor array. By exploiting the natural parallel decomposition of the problem in silicon, a four order of magnitude speed-up over optimized software approaches has been demonstrated.
NASA Astrophysics Data System (ADS)
Benjanirat, Sarun
Next generation horizontal-axis wind turbines (HAWTs) will operate at very high wind speeds. Existing engineering approaches for modeling the flow phenomena are based on blade element theory, and cannot adequately account for 3-D separated, unsteady flow effects. Therefore, researchers around the world are beginning to model these flows using first principles-based computational fluid dynamics (CFD) approaches. In this study, an existing first principles-based Navier-Stokes approach is being enhanced to model HAWTs at high wind speeds. The enhancements include improved grid topology, implicit time-marching algorithms, and advanced turbulence models. The advanced turbulence models include the Spalart-Allmaras one-equation model, k-epsilon, k-omega and Shear Stress Transport (k-omega SST) models. These models are also integrated with detached eddy simulation (DES) models. Results are presented for a range of wind speeds, for a configuration termed National Renewable Energy Laboratory Phase VI rotor, tested at NASA Ames Research Center. Grid sensitivity studies are also presented. Additionally, effects of existing transition models on the predictions are assessed. Data presented include power/torque production, radial distribution of normal and tangential pressure forces, root bending moments, and surface pressure fields. Good agreement was obtained between the predictions and experiments for most of the conditions, particularly with the Spalart-Allmaras-DES model.
Advanced Architectures for Astrophysical Supercomputing
NASA Astrophysics Data System (ADS)
Barsdell, B. R.; Barnes, D. G.; Fluke, C. J.
2010-12-01
Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of O(100×) in general-purpose computation - performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.
ERIC Educational Resources Information Center
Ybarra, Gary A.; Collins, Leslie M.; Huettel, Lisa G.; Brown, April S.; Coonley, Kip D.; Massoud, Hisham Z.; Board, John A.; Cummer, Steven A.; Choudhury, Romit Roy; Gustafson, Michael R.; Jokerst, Nan M.; Brooke, Martin A.; Willett, Rebecca M.; Kim, Jungsang; Absher, Martha S.
2011-01-01
The field of electrical and computer engineering has evolved significantly in the past two decades. This evolution has broadened the field of ECE, and subfields have seen deep penetration into very specialized areas. Remarkable devices and systems arising from innovative processes, exotic materials, high speed computer simulations, and complex…
Application of X-ray micro-computed tomography on high-speed cavitating diesel fuel flows
NASA Astrophysics Data System (ADS)
Mitroglou, N.; Lorenzi, M.; Santini, M.; Gavaises, M.
2016-11-01
The flow inside a purpose-built enlarged single-orifice nozzle replica is quantified using time-averaged X-ray micro-computed tomography (micro-CT) and high-speed shadowgraphy. Results have been obtained at Reynolds and cavitation numbers similar to those of real-size injectors. Good agreement for the cavitation extent inside the orifice is found between the micro-CT and the corresponding temporal-mean 2D cavitation image captured by the high-speed camera. However, the internal 3D structure of the developing cavitation cloud reveals a hollow vapour cloud ring formed at the hole entrance and extending only along the lower part of the hole due to the asymmetric flow entry. Moreover, the cavitation volume fraction exhibits a significant gradient along the orifice volume. The cavitation number and the needle valve lift appear to be the most influential operating parameters, while the Reynolds number has only a small effect over the range of values tested. Overall, the study demonstrates that micro-CT can be a reliable tool for characterizing cavitation in nozzle orifices operating under nominally steady-state conditions.
Computing Temperatures in Optically Thick Protoplanetary Disks
NASA Technical Reports Server (NTRS)
Capuder, Lawrence F., Jr.
2011-01-01
We worked with a Monte Carlo radiative transfer code to simulate the transfer of energy through protoplanetary disks, where planet formation occurs. The code tracks photons from the star into the disk, through scattering, absorption, and re-emission, until they escape to infinity. High optical depths in the disk interior dominate the computation time because it takes a photon packet many interactions to get out of the region. Regions of high optical depth also receive few photons and therefore do not have well-estimated temperatures. We applied a modified random walk (MRW) approximation to treat high optical depths and speed up the Monte Carlo calculations. The MRW is implemented by calculating the average number of interactions a photon packet will undergo while diffusing within a single cell of the spatial grid and then updating the packet position, packet frequencies, and local radiation absorption rate appropriately. The MRW approximation was then tested for accuracy and speed against the original code. We determined that MRW provides accurate answers to Monte Carlo radiative transfer simulations. The speed gained from using MRW is shown to be proportional to the disk mass.
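A toy calculation shows why high optical depths dominate: the number of scatterings needed to random-walk out of a region grows roughly as the square of its optical depth, and that per-event work is exactly what the MRW collapses into a single diffusion jump. This sketch is illustrative only, not the production code's algorithm.

```python
import math
import random

def steps_to_escape(tau_slab):
    """Count isotropic scatterings until a photon packet leaves a
    plane-parallel slab of total optical depth tau_slab, starting
    from the midplane."""
    tau_pos = tau_slab / 2.0
    steps = 0
    while 0.0 <= tau_pos <= tau_slab:
        mu = random.uniform(-1.0, 1.0)             # direction cosine
        step = -math.log(1.0 - random.random())    # optical path to next event
        tau_pos += mu * step
        steps += 1
    return steps

for tau in (1, 5, 20, 50):
    mean = sum(steps_to_escape(tau) for _ in range(1000)) / 1000
    print(f"tau = {tau:3d}: mean scatterings ~ {mean:8.0f}")
```

The roughly quadratic growth in the printed counts is the cost the MRW removes by replacing many small scatterings with one statistically equivalent jump to the cell boundary.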
CrossTalk: The Journal of Defense Software Engineering. Volume 18, Number 6
2005-06-01
Bollinger, The MITRE Corporation. Progress brings new dangers: Powerful home computers, inexpensive high-speed Internet access, telecommuting, and software...been using for years. Sure enough, GIANT was able to finish removing the hard-core spyware. At some point, my Internet security package had been... The sad truth is that if you do nothing more than attach a Windows PC to the Internet over a high-speed line, it will be subjected to the first
A finite element approach for solution of the 3D Euler equations
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Ramakrishnan, R.; Dechaumphai, P.
1986-01-01
Prediction of thermal deformations and stresses is of prime importance in the design of the next generation of high-speed flight vehicles. Aerothermal load computations for complex three-dimensional shapes necessitate the development of procedures to solve the full Navier-Stokes equations. This paper details the development of a three-dimensional inviscid flow approach which can be extended to three-dimensional viscous flows. A finite element formulation, based on a Taylor series expansion in time, is employed to solve the compressible Euler equations. Model generation and results display are done using a commercially available program, PATRAN, and vectorizing strategies are incorporated to ensure computational efficiency. Sample problems are presented to demonstrate the validity of the approach for analyzing high-speed compressible flows.
The NASA High Speed ASE Project: Computational Analyses of a Low-Boom Supersonic Configuration
NASA Technical Reports Server (NTRS)
Silva, Walter A.; DeLaGarza, Antonio; Zink, Scott; Bounajem, Elias G.; Johnson, Christopher; Buonanno, Michael; Sanetrik, Mark D.; Yoo, Seung Y.; Kopasakis, George; Christhilf, David M.
2014-01-01
A summary of NASA's High Speed Aeroservoelasticity (ASE) project is provided with a focus on a low-boom supersonic configuration developed by Lockheed-Martin and referred to as the N+2 configuration. The summary includes details of the computational models developed to date including a linear finite element model (FEM), linear unsteady aerodynamic models, structured and unstructured CFD grids, and discussion of the FEM development including sizing and structural constraints applied to the N+2 configuration. Linear results obtained to date include linear mode shapes and linear flutter boundaries. In addition to the tasks associated with the N+2 configuration, a summary of the work involving the development of AeroPropulsoServoElasticity (APSE) models is also discussed.
Computation of turbulent high speed mixing layers using a two-equation turbulence model
NASA Technical Reports Server (NTRS)
Narayan, J. R.; Sekar, B.
1991-01-01
A two-equation turbulence model was extended to be applicable to compressible flows. A compressibility correction based on modeling the dilatational terms in the Reynolds stress equations was included in the model. The model is used in conjunction with the SPARK code for the computation of high-speed mixing layers. The observed trend of decreasing growth rate with increasing convective Mach number in compressible mixing layers is well predicted by the model. The predictions agree well with the experimental data and the results from a compressible Reynolds stress model. The present model appears to be well suited for the study of compressible free shear flows. Preliminary results obtained for the reacting mixing layers are included.
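The abstract does not state which dilatational closure was adopted; one widely used correction of this type (due to Sarkar and co-workers) augments the solenoidal dissipation rate with a term quadratic in the turbulent Mach number, shown here as a representative form rather than the paper's exact model:

```latex
\varepsilon = \varepsilon_s \left( 1 + \alpha_1 M_t^2 \right),
\qquad
M_t = \frac{\sqrt{2k}}{a},
```

where \varepsilon_s is the solenoidal dissipation, k the turbulence kinetic energy, a the local speed of sound, and \alpha_1 an O(1) calibration constant. Because M_t grows with convective Mach number, the extra dissipation suppresses the predicted mixing-layer growth rate, reproducing the experimental trend the abstract describes.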
Computation of UH-60A Airloads Using CFD/CSD Coupling on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Lee-Rausch, Elizabeth M.
2011-01-01
An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids is used to compute the rotor airloads on the UH-60A helicopter at high-speed and high thrust conditions. The flow solver is coupled to a rotorcraft comprehensive code in order to account for trim and aeroelastic deflections. Simulations are performed both with and without the fuselage, and the effects of grid resolution, temporal resolution and turbulence model are examined. Computed airloads are compared to flight data.
Department of Defense In-House RDT and E Activities: Management Analysis Report for Fiscal Year 1993
1994-11-01
A worldwide unique lab because it houses a high-speed modeling and simulation system, a prototype...E Division, San Diego, CA: High Performance Computing Laboratory providing a wide range of advanced computer systems for the scientific investigation...Machines CM-200 and a 256-node Thinking Machines CM-5. The CM-5 is in a very large memory, high-performance (32 Gbytes, >40 GFlops) configuration,
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpov, A. S.
2013-01-15
A computer procedure for simulating magnetization-controlled dc shunt reactors is described, which enables the electromagnetic transients in electric power systems to be calculated. It is shown that, by taking technically simple measures in the control system, one can obtain high-speed reactors sufficient for many purposes, and dispense with the use of high-power devices for compensating higher harmonic components.
High-Speed Systolic Array Testbed.
1987-10-01
applications since the concept was introduced by H.T. Kung in 1978. This highly parallel architecture of nearest-neighbor data communication and...must be addressed. For instance, should bit-serial or bit-parallel computation be utilized? Does the dynamic range of the candidate applications or...numerical stability of the algorithms used require computations in fixed-point and integer format, or the architecturally more complex and slower floating
The 512-channel correlator controller
NASA Technical Reports Server (NTRS)
Brokl, S. S.
1976-01-01
A high-speed correlator for radio and radar observations was developed and a controller was designed so that the correlator could run automatically without computer intervention. The correlator controller assumes the role of bus master and keeps track of data and properly interrupts the computer at the end of the observation.
THE CLIMATIC AND HYDROLOGIC FACTORS AFFECTING THE REDISTRIBUTION OF SR-90
leaching solution present and the chemical and cation exchange properties of the soil solution; a mathematical model of movement was established...manual for using high-speed computers to compute the factors of the daily water balance was prepared; the influence of the soil solution in
NASA CST aids U.S. industry [computational structures technology]
NASA Technical Reports Server (NTRS)
Housner, Jerry M.; Pinson, Larry D.
1993-01-01
The effect of NASA's Computational Structures Technology (CST) research on aerospace vehicle design and operation is discussed. The application of this research to a proposed version of a high-speed civil transport, to composite structures in aerospace, to the study of crack growth, and to resolving field problems is addressed.
NASA Technical Reports Server (NTRS)
Fleming, David P.; Poplawski, J. V.
2002-01-01
Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus, an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for nonlinear speed- and load-dependent bearing stiffness) as compared to the conventional use of average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running 5-degree-of-freedom computer code able to calculate high-speed rolling-element bearing load-displacement data for radial and angular-contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
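The coupling pattern can be sketched in a few lines. This is not the COBRA-AHS/ARDS interface, just a minimal 1-DOF illustration of why the bearing force must be re-evaluated at every transient step; the Hertzian 3/2-power law below is the classic point-contact form, and the coefficient k_h is a hypothetical stand-in for the speed- and load-dependent data a bearing code would supply.

```python
import math

def bearing_force(x, k_h=2.0e9):
    """Hertzian ball-bearing restoring force: |F| = k_h * |x|**1.5.
    k_h (N/m^1.5) is hypothetical; a bearing code would return
    speed- and load-dependent values instead of this constant."""
    return -math.copysign(k_h * abs(x) ** 1.5, x)

m, dt = 12.0, 1.0e-6              # rotor mass (kg), time step (s)
x, v = 1.0e-5, 0.0                # initial deflection (m) and velocity (m/s)
for _ in range(5000):             # semi-implicit Euler transient march
    v += bearing_force(x) / m * dt    # nonlinear force, fresh every step
    x += v * dt
print(f"x = {x:.3e} m, v = {v:.3e} m/s")
```

Replacing bearing_force with a fixed average stiffness (F = -k*x) changes both the frequency and the amplitude of the computed transient, which is the discrepancy the paper quantifies.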
Visser, Marco D.; McMahon, Sean M.; Merow, Cory; Dixon, Philip M.; Record, Sydne; Jongejans, Eelke
2015-01-01
Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1–S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research. PMID:25811842
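The workflow the authors describe for R (profile first, then optimize or parallelize the proven bottleneck) looks the same in any language. As a hedged analogue, not the aprof package itself, here is the equivalent first step in Python with the standard-library profiler:

```python
import cProfile
import pstats

def slow_sum(n):
    """Deliberately inefficient Python-level accumulation loop."""
    total = 0.0
    for i in range(n):
        total += i * 0.5
    return total

def analysis():
    return [slow_sum(200_000) for _ in range(50)]

# Profile the run, then rank functions by cumulative time; aprof plays
# the analogous role for R and additionally estimates the best-case
# payoff of optimizing each line (Amdahl's law).
cProfile.run("analysis()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```

Only after a report like this shows where the time actually goes is it worth rewriting a bottleneck in vectorized form or distributing it across cores.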
Impact of Increased Football Field Width on Player High-Speed Collision Rate.
Joseph, Jacob R; Khalsa, Siri S; Smith, Brandon W; Park, Paul
2017-07-01
High-acceleration head impact is a known risk for mild traumatic brain injury (mTBI) based on studies using helmet accelerometry. In football, offensive and defensive players are at higher risk of mTBI due to the increased speed of play. Studies of other collision sports suggest that an increased playing surface size may contribute to reductions in high-speed collisions. We hypothesized that wider football fields lead to a decreased rate of high-speed collisions. A computer football game simulation was developed using MATLAB. Four wide receivers were matched against 7 defensive players. Each offensive player was randomized to one of 5 typical routes on each play. The ball was thrown 3 seconds into the play; ball flight time was 2 seconds. Defensive players were delayed 0.5 second before reacting to ball release. A high-speed collision was defined as the receiver converging with a defensive player within 0.5 second of catching the ball. The simulation counted high-speed collisions for 1 team/season (65 plays/game for 16 games/season = 1040 plays/season) averaged over 10 seasons, and was validated against existing data using the standard field width (53.3 yards). Field width was then increased in 1-yard intervals up to 58.3 yards. Using the standard field width, 188 ± 4 high-speed collisions were seen per team per season (18% of plays). When the field width increased by 3 yards, the high-speed collision rate decreased to 135 ± 3 per team per season (a 28% decrease; P < 0.0001). Even small increases in football field width can lead to a substantial decline in high-speed collisions, with the potential to reduce instances of mTBI in football players. Copyright © 2017 Elsevier Inc. All rights reserved.
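The paper's MATLAB simulation is not reproduced here, but its Monte Carlo structure is easy to convey. The sketch below is a deliberately simplified stand-in: player kinematics are reduced to random lateral positions at the catch, and the 2-yard proximity threshold is an illustrative choice, not the study's criterion.

```python
import random

def season_collisions(field_width, plays=1040, trials=10, defenders=7):
    """Toy Monte Carlo: wider fields spread defenders laterally, so
    fewer plays end with a defender close enough to the receiver at
    the catch to count as a high-speed collision."""
    totals = []
    for _ in range(trials):
        hits = 0
        for _ in range(plays):
            receiver = random.uniform(0.0, field_width)
            nearest = min(abs(random.uniform(0.0, field_width) - receiver)
                          for _ in range(defenders))
            if nearest < 2.0:          # illustrative collision threshold (yd)
                hits += 1
        totals.append(hits)
    return sum(totals) / trials

for width in (53.3, 56.3, 58.3):
    print(f"width {width:.1f} yd: ~{season_collisions(width):.0f} "
          "high-speed collisions/season")
```

Even this crude model shows the qualitative effect the authors report: collision counts fall monotonically as the same number of defenders is spread over a wider field.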
Users' manual for the Langley high speed propeller noise prediction program (DFP-ATP)
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Tarkenton, G. M.
1989-01-01
The use of the Dunn-Farassat-Padula Advanced Technology Propeller (DFP-ATP) noise prediction program which computes the periodic acoustic pressure signature and spectrum generated by propellers moving with supersonic helical tip speeds is described. The program has the capacity of predicting noise produced by a single-rotation propeller (SRP) or a counter-rotation propeller (CRP) system with steady or unsteady blade loading. The computational method is based on two theoretical formulations developed by Farassat. One formulation is appropriate for subsonic sources, and the other for transonic or supersonic sources. Detailed descriptions of user input, program output, and two test cases are presented, as well as brief discussions of the theoretical formulations and computational algorithms employed.
Aerodynamic heating and surface temperatures on vehicles for computer-aided design studies
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.; Kania, L. A.; Chitty, A.
1983-01-01
A computer subprogram has been developed to calculate aerodynamic and radiative heating rates and to determine surface temperatures by integrating the heating rates along the trajectory of a vehicle. Convective heating rates are calculated by applying the axisymmetric analogue to inviscid surface streamlines and using relatively simple techniques to calculate laminar, transitional, or turbulent heating rates. Options are provided for the selection of gas model, transition criterion, turbulent heating method, Reynolds analogy factor, and entropy-layer swallowing effects. Heating rates are compared to experimental data, and the time history of surface temperatures is given for a high-speed trajectory. The computer subprogram was developed for preliminary design and mission analysis where parametric studies are needed at all speeds.
Electromagnetic phenomena analysis in brushless DC motor with speed control using PWM method
NASA Astrophysics Data System (ADS)
Ciurys, Marek Pawel
2017-12-01
A field-circuit model of a brushless DC motor with speed control using the PWM method was developed. Waveforms of the electrical and mechanical quantities of the designed motor, with a high-pressure vane pump built into its rotor, were computed. An analysis of electromagnetic phenomena in the system: single-phase AC network - converter - BLDC motor was carried out.
ERIC Educational Resources Information Center
East, Martin; King, Chris
2012-01-01
In the listening component of the IELTS examination candidates hear the input once, delivered at "normal" speed. This format for listening can be problematic for test takers who often perceive normal speed input to be too fast for effective comprehension. The study reported here investigated whether using computer software to slow down…
High-speed real-time animated displays on the ADAGE (trademark) RDS 3000 raster graphics system
NASA Technical Reports Server (NTRS)
Kahlbaum, William M., Jr.; Ownbey, Katrina L.
1989-01-01
Techniques which may be used to increase the animation update rate of real-time computer raster graphic displays are discussed. They were developed on the ADAGE RDS 3000 graphics system in support of the Advanced Concepts Simulator at the NASA Langley Research Center. These techniques involve the use of a special-purpose parallel processor for high-speed character generation. The description of the parallel processor includes the Barrel Shifter, which is part of the hardware and is the key to the high-speed character rendition. The final result of this total effort was a fourfold increase in the update rate of an existing primary flight display, from 4 to 16 frames per second.
Controller and interface module for the High-Speed Data Acquisition System correlator/accumulator
NASA Technical Reports Server (NTRS)
Brokl, S. S.
1985-01-01
One complex channel of the High-Speed Data Acquisition System (a subsystem used in the Goldstone solar system radar), consisting of two correlator modules and one accumulator module, is operated by the controller and interface module. Interfaces are provided to the VAX UNIBUS for computer control, monitoring, and testing of the controller and correlator/accumulator. The correlator and accumulator modules controlled by this module are the key digital signal processing elements of the Goldstone High-Speed Data Acquisition System. This fully programmable unit provides a wide variety of correlation and filtering functions operating on a three-megaword/second data flow. Data flow to the VAX is by way of the I/O port of an FPS 5210 array processor.
System architecture of a gallium arsenide one-gigahertz digital IC tester
NASA Technical Reports Server (NTRS)
Fouts, Douglas J.; Johnson, John M.; Butner, Steven E.; Long, Stephen I.
1987-01-01
The design of a 1-GHz digital integrated circuit tester for the evaluation of custom GaAs chips and subsystems is discussed. Technology-related problems affecting the design of a GaAs computer are discussed, with emphasis on the problems introduced by long printed-circuit-board interconnect. High-speed interface modules provide a link between the low-speed microprocessor and the chip under test. Memory-multiplexer and memory-shift-register architectures for the storage of test vectors are described, in addition to an architecture for local data storage consisting of a long chain of GaAs shift registers. The tester is constructed around a VME system card cage and backplane, and very little high-speed interconnect exists between boards. The tester has a three-part self-test consisting of a CPU board confidence test, a main memory confidence test, and a high-speed interface module functional test.
ERIC Educational Resources Information Center
Kurzweil, Raymond C.
1994-01-01
Summarizes recent advances in computer simulation and "reverse engineering" technologies, highlighting the Human Genome Project to scan the human genetic code; artificial retina chips to copy the human retina's neural organization; high-speed, high-resolution Magnetic Resonance Imaging scanners; and the virtual book. Discusses…
Ultrascale collaborative visualization using a display-rich global cyberinfrastructure.
Jeong, Byungil; Leigh, Jason; Johnson, Andrew; Renambot, Luc; Brown, Maxine; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung
2010-01-01
The scalable adaptive graphics environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.
Chen, Lei; Chen, Hongkun; Yang, Jun; Shu, Zhengyu; He, Huiwen; Shu, Xin
2016-01-01
The modified flux-coupling-type superconducting fault current limiter (SFCL) is a highly efficient electrical auxiliary device whose basic function is to suppress short-circuit currents by controlling the magnetic path through a high-speed switch. In this paper, the high-speed switch is based on an electromagnetic repulsion mechanism, and its conceptual design is carried out to promote the application of the modified SFCL. For the switch, which consists of a mobile copper disc and two fixed opening and closing coils, the computational method for the electromagnetic force is discussed, and the dynamic mathematical model, including the circuit equation, the magnetic field equation, and the mechanical motion equation, is theoretically derived. From the mathematical modeling and the calculation of characteristic parameters, a feasible design scheme is presented, and the high-speed switch's response time can be less than 0.5 ms. With the modified SFCL equipped with this high-speed switch, its application in a 10 kV micro-grid system with multiple renewable energy sources is assessed in MATLAB. The simulations confirm the SFCL's performance.
High-speed inlet research program and supporting analysis
NASA Technical Reports Server (NTRS)
Coltrin, Robert E.
1990-01-01
The technology challenges faced by the high-speed inlet designer are discussed by describing the considerations that went into the design of the Mach 5 research inlet. It is shown that the emerging three-dimensional viscous computational fluid dynamics (CFD) flow codes, together with small-scale experiments, can be used to guide larger-scale full inlet systems research. In turn, the results of the large-scale research, if properly instrumented, can be used to validate or at least calibrate the CFD codes.
Instantaneous Assessment Of Athletic Performance Using High Speed Video
NASA Astrophysics Data System (ADS)
Hubbard, Mont; Alaways, LeRoy W.
1988-02-01
We describe the use of high-speed video to provide quantitative assessment of motion in athletic performance. Besides the normal requirement for accuracy, an essential feature is that the information be provided rapidly enough so that it may serve as valuable feedback in the learning process. The general considerations which must be addressed in the development of such a computer-based system are discussed. These ideas are illustrated through the description of a prototype system which has been designed for the javelin throw.
The Workstation Approach to Laboratory Computing
Crosby, P.A.; Malachowski, G.C.; Hall, B.R.; Stevens, V.; Gunn, B.J.; Hudson, S.; Schlosser, D.
1985-01-01
There is a need for a Laboratory Workstation which specifically addresses the problems associated with computing in the scientific laboratory. A workstation based on the IBM PC architecture is described, including a front-end data acquisition system which communicates with a host computer via a high-speed communications link, a new graphics display controller with hardware window management and window scrolling, and an integrated software package.
NASA Astrophysics Data System (ADS)
Makino, Junichiro
2002-12-01
We give an overview of our GRAvity PipE (GRAPE) project to develop special-purpose computers for astrophysical N-body simulations. The basic idea of GRAPE is to attach a custom-built computer, dedicated to the calculation of gravitational interactions between particles, to a general-purpose programmable computer. With this hybrid architecture, we can achieve both a wide range of applications and very high peak performance. Our newest machine, GRAPE-6, achieved a peak speed of 32 Tflops, and sustained performance of 11.55 Tflops, for a total budget of about 4 million USD. We also discuss the relative advantages of special-purpose and general-purpose computers and the future of high-performance computing for science and technology.
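The kernel GRAPE hardwires is the all-pairs gravitational sum. A minimal software rendering of that same O(N²) kernel (with the usual softening length to avoid singularities, in units with G = 1) looks like this:

```python
import numpy as np

def accelerations(pos, mass, eps=1.0e-3):
    """Direct-summation gravitational accelerations, O(N^2).
    eps is a softening length that caps the force at small separations."""
    acc = np.zeros_like(pos)
    for i in range(pos.shape[0]):
        dx = pos - pos[i]                          # separations to all bodies
        r2 = (dx * dx).sum(axis=1) + eps * eps     # softened squared distance
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                            # exclude self-interaction
        acc[i] = (mass[:, None] * dx * inv_r3[:, None]).sum(axis=0)
    return acc

pos = np.random.rand(256, 3)
mass = np.full(256, 1.0 / 256)
print(accelerations(pos, mass)[0])
```

Because this sum is simple, regular, and dominates the runtime of direct N-body codes, it is an ideal candidate for a fixed-function pipeline, which is exactly the trade GRAPE makes: generality stays on the host, and the inner loop lives in silicon.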
Yokohama, Noriya
2013-07-01
This report describes the design of architectures and performance measurements of a parallel computing environment using Monte Carlo simulation for particle therapy, running on a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed approximately 28 times the speed of a single-thread architecture, combined with improved stability. A study of methods for optimizing the system operations also indicated lower cost.
High-performance computing in image registration
NASA Astrophysics Data System (ADS)
Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro
2012-10-01
Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric, and temporal resolution. In many applications the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g., for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming, and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes, and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and the GPU are presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images, and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented on.
High-speed multiple sequence alignment on a reconfigurable platform.
Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf
2006-01-01
Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences with popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
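The dynamic-programming recurrence the systolic array evaluates is the standard pairwise alignment table. As a sketch of what each processing element computes (using plain edit distance in place of the tool's scoring scheme), note that the anti-diagonals of this table are mutually independent, which is what lets a linear array of PEs fill one anti-diagonal per clock:

```python
def edit_distance(a, b):
    """O(len(a)*len(b)) dynamic-programming distance between two
    sequences; a systolic array evaluates the cells of each
    anti-diagonal of this table in parallel."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # (mis)match
        prev = cur
    return prev[-1]

# Pairwise distance matrix of the kind that feeds a progressive
# aligner's guide tree:
seqs = ["GATTACA", "GACTATA", "ATTACA"]
print([[edit_distance(s, t) for t in seqs] for s in seqs])
```

With one processing element per character of the query, the quadratic table fills in linear time, which is where the FPGA's runtime savings come from.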
Multifidelity, multidisciplinary optimization of turbomachines with shock interaction
NASA Astrophysics Data System (ADS)
Joly, Michael Marie
Research on high-speed air-breathing propulsion aims at developing aircraft with antipodal range and space access. Before reaching high speed at high altitude, the flight vehicle needs to accelerate from takeoff to scramjet takeover. Air turbo rocket engines combine turbojet and rocket engine cycles to provide the necessary thrust in the so-called low-speed regime. Challenges related to the turbomachinery components are multidisciplinary, since both the high-compression-ratio compressor and the powering high-pressure turbine operate in the transonic regime, in compact environments with strong shock interactions. In addition, light weight is vital to avoid hindering the scramjet operation. Recent progress in evolutionary computing provides aerospace engineers with robust and efficient optimization algorithms to address concurrent objectives. The present work investigates Multidisciplinary Design Optimization (MDO) of innovative transonic turbomachinery components. Inter-stage aerodynamic shock interactions in turbomachines are known to generate high-cycle fatigue on the rotor blades, compromising their structural integrity. A soft-computing strategy is proposed to mitigate the vane downstream distortion, and is shown to successfully attenuate the unsteady forcing on the rotor of a high-pressure turbine. Counter-rotation offers promising prospects for reducing the weight of the machine, with fewer stages and increased load per row. An integrated approach based on increasing levels of fidelity and aero-structural coupling is then presented and allows achieving a highly loaded compact counter-rotating compressor.
NASA Technical Reports Server (NTRS)
Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.
2000-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.
In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High-data-rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed are used to compute pitot-static pressure errors over a range of airspeeds. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
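The inventors' output-error formulation operates on pressures; as a hedged, simplified analogue in the airspeed domain, the essence is a regression of the difference between a GPS-derived truth airspeed and the indicated airspeed onto a low-order error model. All data and coefficients below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
v_true = np.linspace(60.0, 180.0, 40)                 # GPS-derived truth, m/s
v_ind = v_true + 2.0 + 0.015 * v_true \
        + rng.normal(0.0, 0.5, v_true.size)           # biased indication

# Fit  error = c0 + c1 * v_ind  by linear least squares.
A = np.column_stack([np.ones_like(v_ind), v_ind])
coef, *_ = np.linalg.lstsq(A, v_ind - v_true, rcond=None)
print(f"bias ~ {coef[0]:.2f} m/s, scale error ~ {coef[1]:.4f}")
```

The actual system generalizes this in two ways: the model is fit globally over the whole airspeed envelope at once, and the optimizer reports confidence intervals on the coefficients rather than point estimates alone.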
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem, 181.6 seconds on a CONVEX computer, was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time for the transport problem, with 17,000 degrees of freedom, was 23 seconds on the Cray Y-MP using an average of 3.63 processors.
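The structure being optimized is easy to see in a bare Lanczos iteration. The sketch below is the textbook symmetric Lanczos recurrence (with no reorthogonalization, and applied directly to A rather than to the shifted-and-inverted operator a structural eigensolver would use), so the A @ v products stand in for the factorization and solve steps named above:

```python
import numpy as np

def lanczos_ritz(A, m, seed=0):
    """m-step symmetric Lanczos: build the tridiagonal T whose
    eigenvalues (Ritz values) approximate A's extreme eigenvalues.
    The dominant cost is the matrix-vector product A @ v."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = rng.standard_normal(n)
    V[:, 0] = v / np.linalg.norm(v)
    w = A @ V[:, 0]
    alpha[0] = V[:, 0] @ w
    w -= alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = A @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

A = np.random.rand(300, 300); A = (A + A.T) / 2.0
print(lanczos_ritz(A, 30)[-3:])          # largest Ritz values
print(np.linalg.eigvalsh(A)[-3:])        # reference eigenvalues
```

Since each iteration is dominated by one matrix-vector product plus a handful of vector operations, vectorizing the sparse storage and solve kernels, as the paper does, accelerates the whole eigensolution almost proportionally.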
NASA Technical Reports Server (NTRS)
Ledbetter, Kenneth W.
1992-01-01
Four trends in spacecraft flight operations are discussed which will reduce overall program costs. These trends are the use of high-speed, highly reliable data communications systems for distributing operations functions to more convenient and cost-effective sites; the improved capability for remote operation of sensors; a continued rapid increase in memory and processing speed of flight qualified computer chips; and increasingly capable ground-based hardware and software systems, notably those augmented by artificial intelligence functions. Changes reflected by these trends are reviewed starting from the NASA Viking missions of the early 70s, when mission control was conducted at one location using expensive and cumbersome mainframe computers and communications equipment. In the 1980s, powerful desktop computers and modems enabled the Magellan project team to operate the spacecraft remotely. In the 1990s, the Hubble Space Telescope project uses multiple color screens and automated sequencing software on small computers. Given a projection of current capabilities, future control centers will be even more cost-effective.
A high-speed linear algebra library with automatic parallelism
NASA Technical Reports Server (NTRS)
Boucher, Michael L.
1994-01-01
Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited, even though there are numerous computationally demanding programs that would significantly benefit from the application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.
2010-01-01
service) High assurance software Distributed network-based battle management High performance computing supporting uniform and nonuniform memory...VNIR, MWIR, and LWIR high-resolution systems Wideband SAR systems RF and laser data links High-speed, high-power photodetector characteriza- tion...Antimonide (InSb) imaging system Long-wave infrared ( LWIR ) quantum well IR photodetector (QWIP) imaging system Research and Development Services
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-07
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
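The master/worker MPI pattern the authors describe generalizes beyond EGS5. As a hedged miniature (a Monte Carlo pi estimate standing in for particle transport, using mpi4py rather than the paper's tooling), the decomposition looks like this:

```python
# Run with, e.g.:  mpiexec -n 4 python mc_split.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TOTAL_HISTORIES = 1_000_000
local_n = TOTAL_HISTORIES // size
random.seed(rank)                     # independent random stream per worker

# Each worker simulates its share of histories independently...
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
           for _ in range(local_n))

# ...and the master aggregates the partial tallies.
total = comm.reduce(hits, op=MPI.SUM, root=0)
if rank == 0:
    print("pi ~", 4.0 * total / (local_n * size))
```

Because the histories are independent, runtime scales inversely with node count until fixed startup and aggregation overheads dominate, which is consistent with the substantial but sub-linear 47× speed-up on 100 nodes reported above.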
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To resolve this computational bottleneck we converted our optimization algorithm to run on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphics computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models of ion channel kinetics on a single, inexpensive desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
Scalable Multiprocessor for High-Speed Computing in Space
NASA Technical Reports Server (NTRS)
Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard
2004-01-01
A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard real-time applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at hundreds of pulses per second, each pulse requiring millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with the analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
High speed propeller performance and noise predictions at takeoff/landing conditions
NASA Technical Reports Server (NTRS)
Nallasamy, M.; Woodward, R. P.; Groeneweg, J. F.
1988-01-01
The performance and noise of a high speed SR-7A model propeller under takeoff/landing conditions are considered. The blade loading distributions are obtained by solving the three-dimensional Euler equations and the sound pressure levels are computed using a time domain approach. At the nominal takeoff operating point, the blade sections near the hub are lightly or negatively loaded. The chordwise loading distributions are distinctly different from those of cruise conditions. The noise of the SR-7A model propeller at takeoff is dominated by the loading noise, similar to that at cruise conditions. The waveforms of the acoustic pressure signature are nearly sinusoidal in the plane of the propeller. The computed directivity of the blade passing frequency tone agrees fairly well with the data at nominal takeoff blade angle.
Role of optical computers in aeronautical control applications
NASA Technical Reports Server (NTRS)
Baumbick, R. J.
1981-01-01
The role that optical computers can play in aircraft control is examined. Optical computers have the potential to provide the required high-speed capability, especially for matrix-matrix operations, and the potential for handling nonlinear simulations in real time. They are also more compatible with fiber-optic signal transmission. Optics also permit the use of passive sensors to measure process variables, so that no electrical energy need be supplied to the sensor. Complex interfacing between optical sensors and the optical computer is avoided if the optical sensor outputs can be processed directly by the optical computer.
The Mark III Hypercube-Ensemble Computers
NASA Technical Reports Server (NTRS)
Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe
1988-01-01
The Mark III Hypercube concept has been applied in the development of a series of increasingly powerful computers. The processor at each node of a Mark III Hypercube ensemble is a specialized computer containing three subprocessors and shared main memory. The ensemble solves a problem quickly by simultaneously processing part of it at each node and passing the combined results to a host computer. Disciplines benefiting from the speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.
NASA Astrophysics Data System (ADS)
Yoshino, Ken-ichiro; Fujiwara, Mikio; Nakata, Kensuke; Sumiya, Tatsuya; Sasaki, Toshihiko; Takeoka, Masahiro; Sasaki, Masahide; Tajima, Akio; Koashi, Masato; Tomita, Akihisa
2018-03-01
Quantum key distribution (QKD) allows two distant parties to share secret keys with proven security even in the presence of an eavesdropper with unbounded computational power. Recently, GHz-clock decoy QKD systems have been realized by employing ultrafast optical communication devices. However, the security loopholes of high-speed systems have not been fully explored yet. Here we point out a security loophole at the transmitter of GHz-clock QKD, which is a common problem in high-speed QKD systems using practical bandwidth-limited devices. We experimentally observe inter-pulse intensity correlation and modulation-pattern-dependent intensity deviation in a practical high-speed QKD system. Such correlation violates the assumption of most security theories. We also provide a countermeasure which does not require significant changes to hardware and can generate keys secure over 100 km of fiber transmission. Our countermeasure is simple, effective, and applicable to a wide range of high-speed QKD systems, and thus paves the way to realizing ultrafast and security-certified commercial QKD systems.
Software Developed for Analyzing High- Speed Rolling-Element Bearings
NASA Technical Reports Server (NTRS)
Fleming, David P.
2005-01-01
COBRA-AHS (Computer Optimized Ball & Roller Bearing Analysis--Advanced High Speed, J.V. Poplawski & Associates, Bethlehem, PA) is used for the design and analysis of rolling element bearings operating at high speeds under complex mechanical and thermal loading. The code estimates bearing fatigue life by calculating three-dimensional subsurface stress fields developed within the bearing raceways. It provides a state-of-the-art interactive design environment for bearing engineers within a single easy-to-use design-analysis package. The code analyzes flexible or rigid shaft systems containing up to five bearings acted upon by radial, thrust, and moment loads in 5 degrees of freedom. Bearing types include high-speed ball, cylindrical roller, and tapered roller bearings. COBRA-AHS is the first major upgrade in 30 years of such commercially available bearing software. The upgrade was developed under a Small Business Innovation Research contract from the NASA Glenn Research Center, and incorporates the results of 30 years of NASA and industry bearing research and technology.
Numerical simulation on a straight-bladed vertical axis wind turbine with auxiliary blade
NASA Astrophysics Data System (ADS)
Li, Y.; Zheng, Y. F.; Feng, F.; He, Q. B.; Wang, N. X.
2016-08-01
To improve the starting performance of the straight-bladed vertical axis wind turbine (SB-VAWT) at low wind speeds, and its output characteristics at high wind speeds, a flexible, scalable auxiliary vane mechanism was designed and installed in the rotor of an SB-VAWT in this study. This new vertical axis wind turbine is a kind of lift-drag combination wind turbine. At low rotational speeds the flexible blade expands, and the driving force of the wind turbine comes mainly from drag; at higher speeds the flexible blade retracts, and the driving force comes primarily from lift. To study the effects of the flexible, scalable auxiliary module on the performance of the SB-VAWT and to find its best parameters, computational fluid dynamics (CFD) calculations were carried out. The results show that the flexible, scalable blades automatically expand and retract with rotational speed. The moment coefficient at low tip speed ratios increased substantially, and it also improved at high tip speed ratios within certain ranges.
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The platform shows several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage, and high-speed data transmission, which enable it to meet the constraints of real-time direction-of-arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is measured. Based on these timing statistics, we present a new parallel processing strategy that distributes the task of DOA estimation across the cores of the platform. Experimental results demonstrate that the processing capability of the platform meets the real-time constraint of DOA estimation.
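The MUSIC algorithm itself is compact enough to sketch. Below is the standard complex-valued MUSIC pseudospectrum for a uniform linear array (the paper's real-valued variant transforms the covariance matrix first to cut the arithmetic roughly in half); the array geometry and noise level are illustrative:

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for a ULA with spacing d (wavelengths).
    X is (sensors, snapshots) complex baseband data."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    w, v = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = v[:, : m - n_sources]               # noise subspace
    k = np.arange(m)[:, None]
    A = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(grid))[None, :])
    p = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid, p

# Two sources at -20 and +35 degrees, 8-element ULA, 200 snapshots:
m, snaps = 8, 200
rng = np.random.default_rng(0)
ang = np.deg2rad([-20.0, 35.0])
A = np.exp(2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(ang)[None, :])
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
X = A @ S + 0.1 * (rng.standard_normal((m, snaps))
                   + 1j * rng.standard_normal((m, snaps)))
g, p = music_spectrum(X, 2)
peaks = sorted((i for i in range(1, len(p) - 1)
                if p[i] >= p[i - 1] and p[i] > p[i + 1]),
               key=lambda i: p[i])[-2:]
print(sorted(g[i] for i in peaks))           # ~ [-20.0, 35.0]
```

The eigendecomposition and the grid search over steering vectors are the two steps that dominate the timing statistics, and they parallelize naturally across DSP cores.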
QPSO-Based Adaptive DNA Computing Algorithm
Karakose, Mehmet; Cigdem, Ugur
2013-01-01
DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly applied to optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted toward the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm uses QPSO for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the baseline DNA computing algorithm. PMID:23935409
High Speed Jet Noise Prediction Using Large Eddy Simulation
NASA Technical Reports Server (NTRS)
Lele, Sanjiva K.
2002-01-01
Current methods for predicting the noise of high-speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets, and the changes associated with forward flight, within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present-day noise predictions.
Advanced digital signal processing for short haul optical fiber transmission beyond 100G
NASA Astrophysics Data System (ADS)
Kikuchi, Nobuhiko
2017-01-01
A significant increase in intra- and inter-data-center traffic is expected from the rapid spread of various network applications such as SNS, IoT, mobile, and cloud computing, and the need for ultra-high-speed and cost-effective short- to medium-reach optical fiber links beyond 100 Gbit/s is growing. Such high-speed links typically use multilevel modulation to lower the signaling speed, which in turn faces serious challenges in the form of a limited loss budget and waveform-distortion tolerance. One promising technique to overcome them is the use of advanced digital signal processing (DSP), and we review various DSP applications for short- to medium-reach links.
A high-speed BCI based on code modulation VEP
NASA Astrophysics Data System (ADS)
Bin, Guangyu; Gao, Xiaorong; Wang, Yijun; Li, Yun; Hong, Bo; Gao, Shangkai
2011-04-01
Recently, electroencephalogram-based brain-computer interfaces (BCIs) have attracted much attention in the fields of neural engineering and rehabilitation due to their noninvasiveness. However, the low communication speed of current BCI systems greatly limits their practical application. In this paper, we present a high-speed BCI based on code modulation of visual evoked potentials (c-VEP). Thirty-two target stimuli were modulated by a time-shifted binary pseudorandom sequence. A multichannel identification method based on canonical correlation analysis (CCA) was used for target identification. The online system achieved an average information transfer rate (ITR) of 108 ± 12 bits/min across five subjects, with a maximum ITR of 123 bits/min for a single subject.
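The time-shifted code design makes target identification essentially a matched-filter problem. The sketch below shows that core idea with a single synthetic channel and plain correlation; the published system instead learns spatial filters over many EEG channels with CCA, so this is a pedagogical reduction, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 64) * 2 - 1              # +/-1 pseudorandom sequence
n_targets = 32
# Each target flickers with the same code, circularly shifted in time:
templates = np.stack([np.roll(code, 2 * i) for i in range(n_targets)])

def identify(trial):
    """Return the index of the template best correlated with the trial."""
    return int(np.argmax(templates @ trial))

true_target = 17
trial = templates[true_target] + 0.8 * rng.standard_normal(code.size)
print(identify(trial))                             # -> 17 at this noise level
```

Because all 32 classes share one underlying sequence, a single training template suffices, and the decision is one inner product per target, which is what makes a high ITR achievable within short trial windows.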
Method and apparatus for digitally based high speed x-ray spectrometer
Warburton, W.K.; Hubbard, B.
1997-11-04
A high-speed, digitally based signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a "hardwired" processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self-calibrating as well. The same processor also handles the interface to an external control computer. 19 figs.
High-Speed Current dq PI Controller for Vector Controlled PMSM Drive
Reaz, Mamun Bin Ibne; Rahman, Labonnah Farzana; Chang, Tae Gyu
2014-01-01
A high-speed current controller for a vector-controlled permanent magnet synchronous motor (PMSM) is presented. The controller is developed with a modular design for faster calculation and uses a fixed-point proportional-integral (PI) method for improved accuracy. The current dq controller is usually implemented on a digital signal processor (DSP) based computer. However, DSP-based solutions are reaching their physical limits of a few microseconds, and they suffer from high implementation cost. In this research, the overall controller is realized in a field-programmable gate array (FPGA). FPGA implementation of the control algorithm significantly trims the execution time and so helps guarantee stable motor operation. An Agilent 16821A logic analyzer is employed to validate the implemented design in the FPGA. Experimental results indicate that the proposed current dq PI controller needs only 50 ns of execution time at a 40 MHz clock, the shortest computational cycle reported to date. PMID:24574913
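As an illustration of the fixed-point PI arithmetic such an FPGA implementation performs, a sketch in Q15 integer math follows; the gains and scaling are invented values, not the paper's:

    # Illustrative Q15 fixed-point PI step (hypothetical gains).
    Q = 15                      # Q15: value = integer / 2**15
    KP = int(0.8 * 2**Q)        # proportional gain (assumed)
    KI = int(0.05 * 2**Q)       # integral gain (assumed)
    LIMIT = 2**Q - 1            # saturate at +/- full scale

    integ = 0
    def pi_step(ref, meas):
        """ref, meas: Q15 integers. Returns saturated Q15 control output."""
        global integ
        err = ref - meas
        integ = max(-LIMIT, min(LIMIT, integ + (KI * err >> Q)))
        out = (KP * err >> Q) + integ
        return max(-LIMIT, min(LIMIT, out))

    print(pi_step(int(0.5 * 2**Q), 0))   # first step toward a 0.5 p.u. reference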
Dark-count-less photon-counting x-ray computed tomography system using a YAP-MPPC detector
NASA Astrophysics Data System (ADS)
Sato, Eiichi; Sato, Yuich; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2012-10-01
A highly sensitive X-ray computed tomography (CT) system is useful for decreasing the absorbed dose to patients, and a dark-count-less photon-counting CT system was developed. X-ray photons are detected using a YAP(Ce) [cerium-doped yttrium aluminum perovskite] single-crystal scintillator and an MPPC (multipixel photon counter). Photocurrents are amplified by a high-speed current-voltage amplifier, and smoothed event pulses from an integrator are sent to a high-speed comparator. Then, logical pulses are produced by the comparator and are counted by a counter card. Tomography is accomplished by repeated linear scans and rotations of an object, and projection curves of the object are obtained by the linear scan. The image contrast of the gadolinium medium fell slightly with increasing lower-level voltage (Vl) of the comparator. The dark count rate was 0 cps, and the count rate for the CT was approximately 250 kcps.
Method to predict external store carriage characteristics at transonic speeds
NASA Technical Reports Server (NTRS)
Rosen, Bruce S.
1988-01-01
Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.
Validation of the solar heating and cooling high speed performance (HISPER) computer code
NASA Technical Reports Server (NTRS)
Wallace, D. B.
1980-01-01
Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER computer code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal basis and a monthly basis. Several parameters (such as infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.
Simulations of High Speed Fragment Trajectories
NASA Astrophysics Data System (ADS)
Yeh, Peter; Attaway, Stephen; Arunajatesan, Srinivasan; Fisher, Travis
2017-11-01
Flying shrapnel from an explosion is capable of traveling at supersonic speeds and over distances much farther than expected due to aerodynamic interactions. Predicting the trajectories and stable tumbling modes of arbitrarily shaped fragments is a fundamental problem applicable to range safety calculations, damage assessment, and military technology. Traditional approaches rely on characterizing fragment flight using a single drag coefficient, which may be inaccurate for fragments with large aspect ratios. In our work we develop a procedure to simulate trajectories of arbitrarily shaped fragments with higher fidelity using high-performance computing. We employ a two-step approach in which the force and moment coefficients are first computed as a function of orientation using compressible computational fluid dynamics. The force and moment data are then input into a six-degree-of-freedom rigid body dynamics solver to integrate trajectories in time. Results of these high-fidelity simulations allow us to further understand the flight dynamics and tumbling modes of a single fragment. Furthermore, we use these results to determine the validity and uncertainty of inexpensive methods such as the single drag coefficient model.
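For contrast, the traditional single-drag-coefficient model the abstract mentions can be sketched as a point-mass trajectory with constant Cd integrated by RK4 (all numbers illustrative):

    # Minimal sketch of the single-drag-coefficient model: a point mass
    # with constant Cd under gravity, integrated with classical RK4.
    import numpy as np

    rho, Cd, A, m, g = 1.2, 1.0, 1e-3, 0.01, 9.81   # air, drag, area, mass

    def deriv(state):
        pos, vel = state[:3], state[3:]
        drag = -0.5 * rho * Cd * A * np.linalg.norm(vel) * vel / m
        return np.concatenate([vel, drag + np.array([0.0, 0.0, -g])])

    state = np.array([0.0, 0.0, 1.0, 400.0, 0.0, 100.0])  # fast, shallow launch
    dt = 1e-3
    while state[2] > 0.0:                   # integrate until ground impact
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * dt * k1)
        k3 = deriv(state + 0.5 * dt * k2)
        k4 = deriv(state + dt * k3)
        state += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    print("range (m):", state[0])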
Analysis of Low-Speed Stall Aerodynamics of a Business Jets Wing Using STAR-CCM+
NASA Technical Reports Server (NTRS)
Bui, Trong
2016-01-01
Reynolds-Averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) analysis was conducted to study the low-speed stall aerodynamics of a GIII aircraft's swept wing modified with (1) a laminar-flow wing glove, or (2) a seamless flap. The stall aerodynamics of these two different wing configurations were analyzed and compared with the unmodified baseline wing for low-speed flight. The STAR-CCM+ polyhedral unstructured CFD code was first validated for wing stall predictions using the wing-body geometry from the First AIAA CFD High-Lift Prediction Workshop.
NASA Technical Reports Server (NTRS)
Banks, Daniel W.; Laflin, Brenda E. Gile; Kemmerly, Guy T.; Campbell, Bryan A.
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
A distributed system for fast alignment of next-generation sequencing data.
Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D
2010-12-01
We developed a scalable distributed computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to those of a microarray sample. Results indicate that the distributed alignment system achieves an approximately linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
Technical Evaluation of the RHS 200 for High Speed Ferry Applications and Coast Guard Missions
1983-12-01
NASA Astrophysics Data System (ADS)
Wang, Wenyun; Guo, Yingfu
2008-12-01
Phase-shifting methods for 3-D shape measurement have long been employed in optical metrology for their speed and accuracy. For real-time, accurate 3-D shape measurement, a four-step phase-shifting algorithm, which has the advantage of symmetry, is a good choice; however, its measurement error is sensitive to any fringe image errors caused by various sources, such as motion blur. To alleviate this problem, a fast two-plus-one phase-shifting algorithm is proposed in this paper. This kind of technology will benefit many applications such as medical imaging, gaming, animation, computer vision, computer graphics, etc.
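The underlying phase recovery can be illustrated on synthetic fringes: the standard four-step formula and the two-plus-one variant (two 90-degree-shifted fringe images plus one flat image). This sketch assumes the textbook form of both algorithms:

    # Hedged sketch: four-step and two-plus-one phase recovery on
    # synthetic fringes; both should return the true phase (wrapped).
    import numpy as np

    phi = np.linspace(0, 4 * np.pi, 256)        # "true" phase ramp
    Ip, Ipp = 2.0, 1.0                          # average and modulation

    # Four-step: shifts of 0, 90, 180, 270 degrees.
    I1, I2, I3, I4 = (Ip + Ipp * np.cos(phi + k * np.pi / 2) for k in range(4))
    phi4 = np.arctan2(I4 - I2, I1 - I3)

    # Two-plus-one: sin image, cos image, and a flat image.
    Ia, Ib, Ic = Ip + Ipp * np.sin(phi), Ip + Ipp * np.cos(phi), Ip * np.ones_like(phi)
    phi21 = np.arctan2(Ia - Ic, Ib - Ic)

    # Wrapped differences from the true phase should vanish:
    print(np.allclose(np.angle(np.exp(1j * (phi4 - phi))), 0, atol=1e-9),
          np.allclose(np.angle(np.exp(1j * (phi21 - phi))), 0, atol=1e-9))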
Imaging photomultiplier array with integrated amplifiers and high-speed USB interface
NASA Astrophysics Data System (ADS)
Blacksell, M.; Wach, J.; Anderson, D.; Howard, J.; Collis, S. M.; Blackwell, B. D.; Andruczyk, D.; James, B. W.
2008-10-01
Multianode photomultiplier tube (PMT) arrays are finding application as convenient high-speed light-sensitive devices for plasma imaging. This paper describes the development of a USB-based "plug-n-play" 16-channel PMT camera with 16-bit simultaneous acquisition of 16 signal channels at rates up to 2 MS/s per channel. The preamplifiers and digital hardware are packaged in a compact housing which incorporates magnetic shielding, on-board generation of the high-voltage PMT bias, an optical filter mount and slits, and an F-mount lens adaptor. Triggering, timing, and acquisition are handled by four field-programmable gate arrays (FPGAs) under instruction from a master FPGA controlled by a computer with a LabVIEW interface. We present technical design details and specifications and illustrate performance with high-speed images obtained on the H-1 heliac at the ANU.
Speed control for a mobile robot
NASA Astrophysics Data System (ADS)
Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. The speed control of the traction motor is essential for safe operation of a mobile robot. The challenges of autonomous operation of a vehicle require safe, runaway-free and collision-free operation. A mobile robot test-bed has been constructed using a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller. The traction motor is controlled via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outside course with positive results. This design is a prototype, and suggestions for improvements are also given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.
Effects of Inlet Distortion on Aeromechanical Stability of a Forward-Swept High-Speed Fan
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.
2011-01-01
Concerns regarding noise, propulsive efficiency, and fuel burn are inspiring aircraft designs wherein the propulsive turbomachines are partially (or fully) embedded within the airframe; such designs present serious concerns with regard to the aerodynamic and aeromechanical performance of the compression system in response to inlet distortion. Separately, a forward-swept high-speed fan was developed to address noise concerns of modern podded turbofans; however, this fan encounters aeroelastic instability (flutter) as it approaches stall. A three-dimensional, unsteady, Navier-Stokes computational fluid dynamics code is applied to analyze and corroborate fan performance with clean inlet flow. This code, already validated in its application to assess aerodynamic damping of vibrating blades at various flow conditions, is modified and then applied in a computational study to preliminarily assess the effects of inlet distortion on the aeroelastic stability of the fan. Computational engineering application and implementation issues are discussed, followed by an investigation into the aeroelastic behavior of the fan with clean and distorted inlets.
A review of high-speed, convective, heat-transfer computation methods
NASA Technical Reports Server (NTRS)
Tauber, Michael E.
1989-01-01
The objective of this report is to provide useful engineering formulations and to instill a modest degree of physical understanding of the phenomena governing convective aerodynamic heating at high flight speeds. Some physical insight is not only essential to the application of the information presented here, but also to the effective use of computer codes which may be available to the reader. A discussion is given of cold-wall, laminar boundary layer heating. A brief presentation of the complex boundary layer transition phenomenon follows. Next, cold-wall turbulent boundary layer heating is discussed. This topic is followed by a brief coverage of separated flow-region and shock-interaction heating. A review of heat protection methods follows, including the influence of mass addition on laminar and turbulent boundary layers. Next is a discussion of finite-difference computer codes and a comparison of some results from these codes. An extensive list of references is also provided from sources such as the various AIAA journals and NASA reports which are available in the open literature.
Silicon microdisk-based full adders for optical computing.
Ying, Zhoufeng; Wang, Zheng; Zhao, Zheng; Dhar, Shounak; Pan, David Z; Soref, Richard; Chen, Ray T
2018-03-01
Due to the projected saturation of Moore's law, as well as the drastically increasing demand for bandwidth at lower power consumption, silicon photonics has emerged as one of the most promising alternatives, attracting lasting interest due to the accessibility and maturity of ultra-compact passive and active integrated photonic components. In this Letter, we demonstrate a ripple-carry electro-optic 2-bit full adder using microdisks, which replaces the core part of an electrical full adder with optical counterparts and uses light to carry signals from one bit to the next with high bandwidth and low power consumption per bit. All control signals of the operands are applied simultaneously within each clock cycle. Thus, the severe latency issue that accumulates as the size of the full adder increases can be circumvented, allowing for an improvement in computing speed and a reduction in power consumption. This approach paves the way for future high-speed optical computing systems in the post-Moore's-law era.
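The Boolean structure of a ripple-carry adder, which the photonic design implements with microdisks, can be sketched in ordinary logic (generic logic only, not the paper's device model):

    # Plain logic-level model of a 2-bit ripple-carry full adder; the
    # carry "propagates" bit to bit the way the light signal does.
    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def ripple_add(a_bits, b_bits):
        """LSB-first bit lists; returns (sum bits, final carry)."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    print(ripple_add([1, 1], [1, 0]))   # 3 + 1 = 4 -> sum bits [0, 0], carry 1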
Fast distributed large-pixel-count hologram computation using a GPU cluster.
Pan, Yuechao; Xu, Xuewu; Liang, Xinan
2013-09-10
Large-pixel-count holograms are an essential part of large-size holographic three-dimensional (3D) display, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed-improvement techniques such as shared memory on GPU, GPU-level adaptive load balancing, and node-level load distribution. Using these speed-improvement techniques on the GPU cluster, we have achieved a 71.4-fold computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from a 186M-pixel hologram computed in 8.82 s with all of the above three speed-improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increase the computation speed of large-pixel-count holograms for large-size holographic display.
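The per-pixel workload being distributed can be illustrated with a tiny point-source Fresnel hologram accumulation (sizes and parameters illustrative; the cluster-level load balancing is not reproduced here):

    # Sketch of the underlying kernel: accumulate one spherical wave per
    # object point over all hologram pixels, then keep the phase.
    import numpy as np

    wavelength = 633e-9
    k = 2 * np.pi / wavelength
    pitch = 8e-6                                 # hologram pixel pitch

    ny, nx = 256, 256                            # "large pixel count" in miniature
    y, x = np.mgrid[:ny, :nx] * pitch
    points = [(1e-3, 1e-3, 0.2), (1.5e-3, 0.8e-3, 0.25)]  # (x, y, z) object points

    field = np.zeros((ny, nx), dtype=complex)
    for px, py, pz in points:                    # one spherical wave per point
        r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r

    hologram = np.angle(field)                   # phase-only hologram
    print(hologram.shape, hologram.dtype)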
NASA Astrophysics Data System (ADS)
Lorenzi, M.; Mitroglou, N.; Santini, M.; Gavaises, M.
2017-03-01
An experimental technique for the estimation of the temporally averaged vapour volume fraction within high-speed cavitating flow orifices is presented. The scientific instrument is designed to employ X-ray micro computed tomography (microCT) as a quantitative 3D measuring technique applied to custom-designed, large-scale, orifice-type flow channels made from polyether-ether-ketone (PEEK). The attenuation of the ionising electromagnetic radiation by the fluid under examination depends on its local density; the transmitted radiation through the cavitation volume is compared to the incident radiation, and combination of radiographs from a sufficient number of angles leads to the reconstruction of attenuation coefficients versus spatial position. This results in a 3D volume fraction distribution measurement of the developing multiphase flow. The experimental results obtained are compared against high-speed shadowgraph visualisation images obtained in an optically transparent nozzle with identical injection geometry; comparison between the temporal mean image and the microCT reconstruction shows excellent agreement. At the same time, the real 3D internal channel geometry (possibly eroded) has been measured and compared to the nominal manufacturing CAD drawing of the test nozzle.
A high-speed brain speller using steady-state visual evoked potentials.
Nakanishi, Masaki; Wang, Yijun; Wang, Yu-Te; Mitsukura, Yasue; Jung, Tzyy-Ping
2014-09-01
Implementing a complex spelling program using a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) remains a challenge due to difficulties in stimulus presentation and target identification. This study aims to explore the feasibility of mixed frequency and phase coding in building a high-speed SSVEP speller with a computer monitor. A frequency and phase approximation approach was developed to eliminate the limitation of the number of targets caused by the monitor refresh rate, resulting in a speller comprising 32 flickers specified by eight frequencies (8-15 Hz with a 1 Hz interval) and four phases (0°, 90°, 180°, and 270°). A multi-channel approach incorporating Canonical Correlation Analysis (CCA) and SSVEP training data was proposed for target identification. In a simulated online experiment, at a spelling rate of 40 characters per minute, the system obtained an averaged information transfer rate (ITR) of 166.91 bits/min across 13 subjects with a maximum individual ITR of 192.26 bits/min, the highest ITR ever reported in electroencephalogram (EEG)-based BCIs. The results of this study demonstrate great potential of a high-speed SSVEP-based BCI in real-life applications.
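The reported ITR can be checked against the standard Wolpaw formula; the accuracy value below is an illustrative assumption, chosen to roughly reproduce the reported average:

    # Worked check of the ITR using the standard Wolpaw formula with
    # N = 32 targets and the paper's 40 selections/min rate.
    from math import log2

    def itr_bits_per_min(n_targets, accuracy, selections_per_min):
        p, n = accuracy, n_targets
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
        return bits * selections_per_min

    # At 40 selections/min, ~92% accuracy (assumed) gives roughly the
    # reported ~167 bits/min:
    print(round(itr_bits_per_min(32, 0.92, 40), 1))   # ~168.1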
Modeling of High Speed Reacting Flows: Established Practices and Future Challenges
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2004-01-01
Computational fluid dynamics (CFD) has proven to be an invaluable tool for the design and analysis of high-speed propulsion devices. Massively parallel computing, together with the maturation of robust CFD codes, has made it possible to perform simulations of complete engine flowpaths. Steady-state Reynolds-Averaged Navier-Stokes simulations are now routinely used in the scramjet engine development cycle to determine optimal fuel injector arrangements, investigate trends noted during testing, and extract various measures of engine efficiency. Unfortunately, the turbulence and combustion models used in these codes have not changed significantly over the past decade. Hence, the CFD practitioner must often rely heavily on existing measurements (at similar flow conditions) to calibrate model coefficients on a case-by-case basis. This paper provides an overview of the modeled equations typically employed by commercial-quality CFD codes for high-speed combustion applications. Careful attention is given to the approximations employed for each of the unclosed terms in the averaged equation set. The salient features (and shortcomings) of common models used to close these terms are covered in detail, and several academic efforts aimed at addressing these shortcomings are discussed.
Optical computing and image processing using photorefractive gallium arsenide
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen; Liu, Duncan T. H.
1990-01-01
Recent experimental results on matrix-vector multiplication and multiple four-wave mixing using GaAs are presented. Attention is given to a simple concept of using two overlapping holograms in GaAs to do two matrix-vector multiplication processes operating in parallel with a common input vector. This concept can be used to construct high-speed, high-capacity, reconfigurable interconnection and multiplexing modules, important for optical computing and neural-network applications.
Digital optical computers at the optoelectronic computing systems center
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1991-01-01
The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.
Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds
NASA Astrophysics Data System (ADS)
Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.
In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures currently are undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.
Aerodynamic optimization of aircraft wings using a coupled VLM-2.5D RANS approach
NASA Astrophysics Data System (ADS)
Parenteau, Matthieu
The design process of transonic civil aircraft is complex and requires strong governance to manage the various program development phases. There is a need in the community for numerical models in all disciplines that span the conceptual, preliminary and detail design phases in a seamless fashion, so that choices made in each phase remain consistent with each other. The objective of this work is to develop an aerodynamic model suitable for conceptual multidisciplinary design optimization with low computational cost and sufficient fidelity to explore a large design space in the transonic and high-lift regimes. The physics-based reduced-order model is based on the inviscid Vortex Lattice Method (VLM), selected for its low computation time. Viscous effects are modeled with two-dimensional high-fidelity RANS calculations at various sections along the span and incorporated as an angle-of-attack correction inside the VLM. The viscous sectional data are calculated with infinite swept-wing conditions to allow viscous crossflow effects to be included for more accurate maximum lift coefficient and spanload evaluations. These viscous corrections are coupled through a modified alpha-coupling method for 2.5D RANS sectional data, stabilized in the post-stall region with artificial dissipation. The fidelity of the method is verified against 3D RANS flow-solver solutions on the Bombardier Research Wing (BRW). Clean and high-lift configurations are investigated. The overall results show impressive precision of the VLM/2.5D RANS approach compared to 3D RANS solutions, with compute times on the order of seconds on a standard desktop computer. Finally, the aerodynamic solver is implemented in an optimization framework with a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer to explore the design space of aerodynamic wing planforms. Single-objective low-speed and high-speed optimizations are performed, along with composite-objective functions for combined low-speed and high-speed optimizations with high-lift configurations as well. Moreover, the VLM/2.5D approach is capable of capturing stall-cell phenomena, and this characteristic is used to define a new spanwise stall criterion to be introduced as an optimization constraint. The work concludes on the limitations of the method and possible avenues for further research.
High-performance computing on GPUs for resistivity logging of oil and gas wells
NASA Astrophysics Data System (ADS)
Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.
2017-10-01
We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm using NVIDIA CUDA technology and computing libraries were made, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) or the graphics processing unit (GPU). The calculation time is analyzed depending on the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
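A minimal CPU-side sketch of the solver pattern described (factor once, solve many right-hand sides) using standard SciPy calls; the CUDA path itself is not reproduced:

    # Factor a symmetric positive definite stand-in for the FEM matrix
    # once with Cholesky, then reuse the factor for cheap solves.
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n))
    A = A @ A.T + n * np.eye(n)          # SPD stand-in for the FEM matrix

    factor = cho_factor(A)               # O(n^3) once
    for _ in range(3):                   # cheap O(n^2) solves afterwards
        b = rng.standard_normal(n)
        x = cho_solve(factor, b)
    print(np.allclose(A @ x, b))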
Lecture notes in economics and mathematical system. Volume 150: Supercritical wing sections 3
NASA Technical Reports Server (NTRS)
Bauer, F.; Garabedian, P.; Korn, D.
1977-01-01
Application of computational fluid dynamics to the design and analysis of supercritical wing sections is discussed. Computer programs used to study the flight of modern aircraft at high subsonic speeds are listed and described. The cascades of shockless transonic airfoils that are expected to increase the efficiency of compressors and turbines are included.
Variety and evolution of American endoscopic image management and recording systems.
Korman, L Y
1996-03-01
The rapid evolution of computing technology has and will continue to alter the practice of gastroenterology and gastrointestinal endoscopy. Development of communication standards for text, images, and security systems will be necessary for medicine to take advantage of high-speed computing and communications. Professional societies can have an important role in guiding the development process.
Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 2: User's manual
NASA Technical Reports Server (NTRS)
Kleckner, R. J.; Dyba, G. J.
1980-01-01
The user's guide for the SPHERBEAN computer program for prediction of the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings is presented. The material presented is structured to guide the user in the practical and correct implementation of SPHERBEAN. Input and output, guidelines for program use, and sample executions are detailed.
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis
NASA Astrophysics Data System (ADS)
Park, Sunyoung; Ishii, Miaki
2018-06-01
A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is only sensitive to shear wave speed but not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computation resources, and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.
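The core relation such an analysis exploits can be sketched from the classical free-surface result that the apparent P-wave incidence angle depends only on the shear speed and the ray parameter (i_app = 2 arcsin(p Vs)); the numbers below are illustrative, not the paper's:

    # Hedged sketch: invert the classical free-surface polarization
    # relation to get near-surface Vs from a measured P polarization.
    import numpy as np

    def vs_from_p_polarization(i_app_deg, ray_parameter):
        """i_app_deg: apparent P incidence angle; ray_parameter in s/km."""
        return np.sin(np.radians(i_app_deg) / 2.0) / ray_parameter

    # Example: a 30-degree apparent angle with p = 0.06 s/km (teleseismic P)
    print(vs_from_p_polarization(30.0, 0.06), "km/s")   # ~4.3 km/s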
Assessment of computational issues associated with analysis of high-lift systems
NASA Technical Reports Server (NTRS)
Balasubramanian, R.; Jones, Kenneth M.; Waggoner, Edgar G.
1992-01-01
Thin-layer Navier-Stokes calculations for wing-fuselage configurations from subsonic to hypersonic flow regimes are now possible. However, efficient, accurate solutions for using these codes for two- and three-dimensional high-lift systems have yet to be realized. A brief overview of salient experimental and computational research is presented. An assessment of the state-of-the-art relative to high-lift system analysis and identification of issues related to grid generation and flow physics which are crucial for computational success in this area are also provided. Research in support of the high-lift elements of NASA's High Speed Research and Advanced Subsonic Transport Programs which addresses some of the computational issues is presented. Finally, fruitful areas of concentrated research are identified to accelerate overall progress for high lift system analysis and design.
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power, a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
Yakami, Masahiro; Yamamoto, Akira; Yanagisawa, Morio; Sekiguchi, Hiroyuki; Kubo, Takeshi; Togashi, Kaori
2013-06-01
The purpose of this study is to verify objectively the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communications in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4% of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate.
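The half-refresh-rate rule reported can be checked with back-of-envelope arithmetic (60 Hz is an assumed, typical refresh rate, not a value from the study):

    # A display refreshing at R Hz shows at most R distinct frames/s;
    # the study found dropping once paging exceeded R/2 slices/s.
    refresh_hz = 60                      # assumed typical LCD refresh
    max_safe_paging = refresh_hz / 2     # slices per second
    print(max_safe_paging, "slices/s")   # faster paging risked dropped slices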
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
2012-04-03
Shock initiation in plastic-bonded explosives (PBX) is due to hot spots. Current reactive burn models are based, at least heuristically, on the ignition and growth concept. The ignition phase occurs when a small localized region of high temperature (or hot spot) burns on a fast time scale. This is followed by a growth phase in which a reactive front spreads out from the hot spot. Propagating reactive fronts are deflagration waves. A key question is the deflagration speed in a PBX compressed and heated by the shock wave that generated the hot spot. Here, the ODEs for a steady deflagration wave profile in a compressible fluid are derived, along with the needed thermodynamic quantities of realistic equations of state corresponding to the reactants and products of a PBX. The properties of the wave profile equations are analyzed and an algorithm is derived for computing the deflagration speed. As an illustrative example, the algorithm is applied to compute the deflagration speed in shock-compressed PBX 9501 as a function of shock pressure. The calculated deflagration speed, even at the CJ pressure, is low compared to the detonation speed. The implications of this are briefly discussed.
Aeroelastic Deflection of NURBS Geometry
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1998-01-01
The purpose of this paper is to present an algorithm for using NonUniform Rational B-Spline (NURBS) representation in an aeroelastic loop. The algorithm is based on creating a least-squares NURBS surface representing the aeroelastic deflection. The resulting NURBS surfaces are used to update either the original Computer-Aided Design (CAD) model, the Computational Structural Mechanics (CSM) grid, or the Computational Fluid Dynamics (CFD) grid. Results are presented for a generic High-Speed Civil Transport (HSCT).
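As a rough stand-in for the least-squares surface fit described, a smoothing bicubic B-spline (a NURBS with unit weights, effectively) can be fitted to scattered deflection data with SciPy; the data here are mock values, not the paper's:

    # Fit a least-squares bicubic B-spline to scattered "deflection"
    # samples, then evaluate it on a structured CFD-like grid.
    import numpy as np
    from scipy.interpolate import bisplrep, bisplev

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, 400), rng.uniform(0, 1, 400)
    z = 0.05 * np.sin(np.pi * x) * y**2          # mock aeroelastic deflection

    tck = bisplrep(x, y, z, kx=3, ky=3, s=0.01)  # smoothing bicubic fit
    grid = np.linspace(0, 1, 50)
    deflection = bisplev(grid, grid, tck)        # resampled on a regular grid
    print(deflection.shape)                       # (50, 50)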
Adaptive-optics optical coherence tomography processing using a graphics processing unit.
Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T
2014-01-01
Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.
Multiscale Space-Time Computational Methods for Fluid-Structure Interactions
2015-09-13
prescribed fully or partially, is from an actual locust, extracted from high-speed, multi-camera video recordings of the locust in a wind tunnel. We use... With creative methods for coupling the fluid and structure, we can increase the scope and efficiency of the FSI modeling. Multiscale methods, which now... play an important role in computational mathematics, can also increase the accuracy and efficiency of the computer modeling techniques. The main
Radiology: "killer app" for next generation networks?
McNeill, Kevin M
2004-03-01
The core principles of digital radiology were well developed by the end of the 1980s. During the following decade, tremendous improvements in computer technology enabled realization of those principles at an affordable cost. In this decade, work can focus on highly distributed radiology in the context of the integrated health care enterprise. Over the same period, computer networking has evolved from a relatively obscure field used by a small number of researchers across low-speed serial links to a pervasive technology that affects nearly all facets of society. Development directions in network technology will ultimately provide end-to-end data paths with speeds that match or exceed the speeds of data paths within the local network and even within workstations. This article describes key developments in Next Generation Networks, potential obstacles, and scenarios in which digital radiology can become a "killer app" that helps to drive deployment of new network infrastructure.
DOT National Transportation Integrated Search
2000-03-09
The Texas Department of Transportation's (TxDOT) "smart highway" project, called TransGuide, scheduled to go on line in 1995 in San Antonio, utilizes high-speed computer technology to help drivers anticipate traffic conditions-- in an effort to inc...
Large Scale Traffic Simulations
DOT National Transportation Integrated Search
1997-01-01
Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...
Novel wavelength diversity technique for high-speed atmospheric turbulence compensation
NASA Astrophysics Data System (ADS)
Arrasmith, William W.; Sullivan, Sean F.
2010-04-01
The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. Parallel processing capabilities are finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.
Readout Electronics for the Central Drift Chamber of the Belle-II Detector
NASA Astrophysics Data System (ADS)
Uchida, Tomohisa; Taniguchi, Takashi; Ikeno, Masahiro; Iwasaki, Yoshihito; Saito, Masatoshi; Shimazaki, Shoichi; Tanaka, Manobu M.; Taniguchi, Nanae; Uno, Shoji
2015-08-01
We have developed readout electronics for the central drift chamber (CDC) of the Belle-II detector. The space near the endplate of the CDC for installation of the electronics was limited by the detector structure. Due to the large amounts of data generated by the CDC, a high-speed data link, with a greater than one-gigabit transfer rate, was required to transfer the data to a back-end computer. A new readout module was required to satisfy these requirements. This module processes 48 signals from the CDC, converts them to digital data, and transfers the data directly to the computer. All functions that transfer digital data via the high-speed link were implemented on the single module. We have measured its electrical characteristics and confirmed that the results satisfy the requirements of the Belle-II experiment.
NASA Technical Reports Server (NTRS)
1984-01-01
NASA has planned a supercomputer for computational fluid dynamics research since the mid-1970's. With the approval of the Numerical Aerodynamic Simulation Program as a FY 1984 new start, Congress requested an assessment of the program's objectives, projected short- and long-term uses, program design, computer architecture, user needs, and handling of proprietary and classified information. Specifically requested was an examination of the merits of proceeding with multiple high-speed processor (HSP) systems contrasted with a single high-speed processor system. The panel found NASA's objectives and projected uses sound and the projected distribution of users as realistic as possible at this stage. The multiple-HSP approach, whereby new, more powerful state-of-the-art HSPs would be integrated into a flexible network, was judged to present major advantages over any single-HSP system.
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA is aimed to provide a metacomputing platform for large-scale distributed computations, by hiding the intricacies of a highly heterogeneous environment and yet maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG, and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
Woo, Kevin L; Rieucau, Guillaume; Burke, Darren
2017-02-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors, on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
NASA Astrophysics Data System (ADS)
Kelkar, Nikhal; Samu, Tayib; Hall, Ernest L.
1997-09-01
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The controller incorporates a fuzzy logic approach for steering and speed control, a neuro-fuzzy approach for ultrasound sensing (not discussed in this paper) and an overall expert system. The advantages of a modular system are related to portability and transportability, i.e. any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. This cart has full speed control with guidance provided by a vision system and obstacle avoidance using ultrasonic sensors. The speed and steering fuzzy logic controller is supervised by a 486 computer through a multi-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. This micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable independent system in which high-speed computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected by a vision tracking device that transmits the X, Y coordinates of the lane marker to the control computer. Simulation and testing of these systems yielded promising results. This design, in its modularity, creates a portable autonomous fuzzy logic controller applicable to any mobile vehicle with only minor adaptations.
Overview of Variable-Speed Power-Turbine Research
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2011-01-01
The vertical take-off and landing (VTOL) and high-speed cruise capability of the NASA Large Civil Tilt-Rotor (LCTR) notional vehicle is envisaged to enable increased throughput in the national airspace. A key challenge of the LCTR is the requirement to vary the main rotor speed from 100% at take-off to near 50% at cruise, as required to minimize mission fuel burn. The variable-speed power-turbine (VSPT), driving a fixed-gear-ratio transmission, provides one approach for effecting this wide speed variation. The key aerodynamic and rotordynamic challenges of the VSPT were described in the FAP Conference presentation. The challenges include maintaining high turbine efficiency at high work factor, wide (60 deg.) incidence variation in all blade rows due to the speed variation, and operation at low Reynolds numbers (with transitional flow). The PT-shaft of the VSPT must be designed for safe operation in the wide speed range required, and therefore poses challenges associated with rotordynamics. These technical challenges drive research activities underway at NASA. An overview of the NASA SRW VSPT research activities was provided. These activities included conceptual and preliminary aero and mechanical (rotordynamics) design of the VSPT for the LCTR application, experimental and computational research supporting the development of incidence-tolerant blading, and steps toward component-level testing of a variable-speed power-turbine of relevance to the LCTR application.
Development and numerical analysis of low specific speed mixed-flow pump
NASA Astrophysics Data System (ADS)
Li, H. F.; Huo, Y. W.; Pan, Z. B.; Zhou, W. C.; He, M. H.
2012-11-01
With the development of cities, the market prospects for mixed-flow pumps with large flow rate and high head are promising. KSB Shanghai Pump Co., Ltd. decided to develop a low-specific-speed mixed-flow pump to meet these market requirements. Based on centrifugal pump and axial-flow pump models, and aiming at the characteristics of large flow rate and high head, a new type of guide-vane mixed-flow pump was designed. The computational fluid dynamics method was adopted to analyze the internal flow of the new model and predict its performance. The time-averaged Navier-Stokes equations were closed with the SST k-ω turbulence model to capture the internal flow through guide vanes of large curvature. The multi-reference frame (MRF) method was used to handle the coupling of the rotating impeller and the static guide vane, and the SIMPLEC method was adopted to achieve the coupled solution of velocity and pressure. The computational results show strong flow impingement on the vane leading edges and strong flow separation at the trailing edges of the guide vanes at different working conditions, both of which degrade the performance of the pump. Based on these results, optimizations were carried out to reduce the leading-edge impingement and the trailing-edge separation. The optimized model was simulated and its performance predicted; the computational results show that the impingement and separation were eliminated. The optimized pump has a wide high-efficiency range and meets the original design objective. The newly designed mixed-flow pump is now being built as a model, and its experimental performance will be available soon.
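Since the pump type is set by its specific speed, a one-line check gives useful context. The sketch below evaluates the common metric convention n_q = n·√Q / H^0.75; the operating point is hypothetical, not taken from the paper.

    import math

    def pump_specific_speed(n_rpm, Q_m3s, H_m):
        # Common metric convention: n in rpm, Q in m^3/s, H in m.
        return n_rpm * math.sqrt(Q_m3s) / H_m ** 0.75

    # Hypothetical large-flow, high-head duty point:
    print(round(pump_specific_speed(1450, 1.2, 25.0)))  # ~142, mixed-flow territory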
High Performance Computing Software Applications for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.
The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demand. Among other options, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations caused by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law equations. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
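Since the speed-up assessment rests on Amdahl's law, the governing relation is worth stating explicitly. A minimal evaluation sketch follows; the 90% parallel fraction is an illustrative value, not one measured in the paper.

    def amdahl_speedup(p, n):
        # Amdahl's law: S(n) = 1 / ((1 - p) + p / n), where p is the
        # parallelizable fraction of the work and n the number of workers.
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 8, 32):
        print(n, round(amdahl_speedup(0.9, n), 2))  # 1.82, 4.71, 7.8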
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
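The essence of source encoding is to collapse the per-source data misfits into a single randomly weighted misfit whose gradient is unbiased, so one stochastic gradient step costs roughly one "super-source" simulation instead of one per source. The toy sketch below substitutes random linear operators for the acoustic wave equation; all problem sizes, the Rademacher encoding, and the step size are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n_src, n_par, n_obs = 4, 20, 40
    A = [rng.normal(size=(n_obs, n_par)) for _ in range(n_src)]  # stand-in forward models
    c_true = rng.normal(size=n_par)          # "sound speed" parameters
    d = [A_k @ c_true for A_k in A]          # noiseless per-source data

    c, step = np.zeros(n_par), 1e-3
    for _ in range(2000):
        w = rng.choice([-1.0, 1.0], size=n_src)        # random encoding vector
        A_enc = sum(wk * Ak for wk, Ak in zip(w, A))   # encoded "super-source"
        d_enc = sum(wk * dk for wk, dk in zip(w, d))
        c -= step * A_enc.T @ (A_enc @ c - d_enc)      # stochastic gradient step

    print(np.linalg.norm(c - c_true) / np.linalg.norm(c_true))  # ~0, recovers c_true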
Precise color images: a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High speed imaging systems have been used in many fields of science and engineering. Although high speed camera systems have been improved to high performance, most of their applications are only to capture high speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasmas, and molten materials. Recent digital high speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high speed video camera system with three intensified sensors and a cubic prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 x 64 pixels and 4,500 pps at 256 x 256 pixels, with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid state memory. In order to obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement between images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, the digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
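The abstract does not spell out the displacement-adjustment algorithm, but a standard way to estimate inter-sensor shift at the pixel level is FFT phase correlation; subpixel refinement, which the authors achieve to 0.2 pixels, would be layered on top. A sketch under that assumption:

    import numpy as np

    def estimate_shift(a, b):
        # Return (dy, dx) such that b is approximately np.roll(a, (dy, dx)).
        cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
        cross /= np.abs(cross) + 1e-12          # keep phase only
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        dy -= corr.shape[0] if dy > corr.shape[0] // 2 else 0
        dx -= corr.shape[1] if dx > corr.shape[1] // 2 else 0
        return dy, dx

    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    print(estimate_shift(img, np.roll(img, (3, -2), axis=(0, 1))))  # (3, -2)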
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2003-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least the x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
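In software terms, the patent's arithmetic is the familiar intensity-weighted mean: coordinate inner products accumulated over a window, followed by one division. A numerical equivalent of the windowed computation:

    import numpy as np

    def windowed_centroid(img, r0, r1, c0, c1):
        # Inner products of coordinates with intensity, then a divide:
        # the same arithmetic the on-chip circuit performs in analog form.
        win = img[r0:r1, c0:c1].astype(float)
        total = win.sum()
        y = np.arange(r0, r1) @ win.sum(axis=1) / total
        x = np.arange(c0, c1) @ win.sum(axis=0) / total
        return y, x

    img = np.zeros((32, 32))
    img[10, 14] = 5.0                            # a single bright spot
    print(windowed_centroid(img, 5, 20, 8, 20))  # (10.0, 14.0)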
High-speed on-chip windowed centroiding using photodiode-based CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor); Sun, Chao (Inventor); Yang, Guang (Inventor); Cunningham, Thomas J. (Inventor); Hancock, Bruce (Inventor)
2004-01-01
A centroid computation system is disclosed. The system has an imager array, a switching network, computation elements, and a divider circuit. The imager array has columns and rows of pixels. The switching network is adapted to receive pixel signals from the imager array. The plurality of computation elements operates to compute inner products for at least the x and y centroids. The plurality of computation elements has only passive elements to provide inner products of the pixel signals from the switching network. The divider circuit is adapted to receive the inner products and compute the x and y centroids.
Numerical computation of viscous flow around bodies and wings moving at supersonic speeds
NASA Technical Reports Server (NTRS)
Tannehill, J. C.
1984-01-01
Research in aerodynamics is discussed. The development of equilibrium air curve fits; computation of hypersonic rarefied leading edge flows; computation of 2-D and 3-D blunt body laminar flows with an impinging shock; development of a two-dimensional or axisymmetric real gas blunt body code; a study of an over-relaxation procedure for the MacCormack finite-difference scheme; computation of 2-D blunt body turbulent flows with an impinging shock; computation of supersonic viscous flow over delta wings at high angles of attack; and computation of the Space Shuttle Orbiter flowfield are discussed.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, Peyman; Madnia, C. K.; Steinberger, C. J.; Tsai, A.
1991-01-01
This research is involved with the implementation of advanced computational schemes based on large eddy simulations (LES) and direct numerical simulations (DNS) to study the phenomenon of mixing and its coupling with chemical reactions in compressible turbulent flows. In the efforts related to LES, a research program was initiated to extend the present capabilities of this method for the treatment of chemically reacting flows, whereas in the DNS efforts, the focus was on detailed investigations of the effects of compressibility, heat release, and nonequilibrium kinetics modeling in high speed reacting flows. The efforts to date were primarily focused on simulations of simple flows, namely, homogeneous compressible flows and temporally developing high speed mixing layers. A summary of the accomplishments is provided.
Cryogenic, high speed, turbopump bearing cooling requirements
NASA Technical Reports Server (NTRS)
Dolan, Fred J.; Gibson, Howard G.; Cannon, James L.; Cody, Joe C.
1988-01-01
Although the Space Shuttle Main Engine (SSME) has repeatedly demonstrated the capability to perform during launch, the High Pressure Oxidizer Turbopump (HPOTP) main shaft bearings have not met their 7.5 hour life requirement. A tester is being employed to provide the capability of subjecting full scale bearings and seals to speeds, loads, propellants, temperatures, and pressures which simulate engine operating conditions. The tester design permits much more elaborate instrumentation and diagnostics than could be accommodated in an SSME turbopump. Tests were made to demonstrate the capabilities of the facility and the device, to verify the instrumentation in its operating environment, and to establish a performance baseline for the flight-type SSME HPOTP turbine bearing design. Bearing performance data from tests are being utilized to generate: (1) a high speed, cryogenic turbopump bearing computer mechanical model, and (2) a much improved, very detailed thermal model to better understand bearing internal operating conditions. Parametric tests were also made to determine the effects of speed, axial loads, coolant flow rate, and surface finish degradation on bearing performance.
Computer modeling design of a frame pier for a high-speed railway project
NASA Astrophysics Data System (ADS)
Shi, Jing-xian; Fan, Jiang
2018-03-01
In this paper, a double-line pier on a high-speed railway in China is taken as an example. The dimensions of each part are first drawn up. The pre-stressed strand design for the crossbeam is carried out, and the ordinary reinforcement is configured for the concrete piers. Combined with the bridge structure analysis software Midas Civil and BSAS, the frame pier is modeled and calculated. The results show that the beam and pier-column section sizes are reasonable; that the pre-stressed design of the crossbeam with 17-7V5 high-strength, low-relaxation steel strand can meet the carrying-capacity requirements of a high-speed railway; and that the main reinforcement of the pier shaft, HRB400 bars of 28 mm diameter arranged in a ring around the pier, can satisfy the eccentric-compression strength, stiffness, and stability requirements as well as the requirements of seismic design.
Solid-state NMR imaging system
Gopalsami, Nachappa; Dieckman, Stephen L.; Ellingson, William A.
1992-01-01
An apparatus for use with a solid-state NMR spectrometer includes a special imaging probe with linear, high-field-strength gradient fields and high-power broadband RF coils, a back-projection method for data acquisition and image reconstruction, and a real-time pulse programmer that allows a conventional computer to run complex high speed pulse sequences.
Naval Research Laboratory Fact Book 2012
2012-11-01
Topics listed include: distributed network-based battle management; high performance computing supporting uniform and nonuniform memory access with single and multithreaded...; hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power... hyperspectral imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services Division.
Cyberdyn supercomputer - a tool for imaging geodynamic processes
NASA Astrophysics Data System (ADS)
Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita
2014-05-01
More and more physical processes that develop within the deep interior of our planet, yet have significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to get deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, employing only a fraction (20%) of the computing power. After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the tectonic and geodynamic evolution of the East Carpathians, including the Neogene magmatic activity and the intriguing intermediate-depth seismicity within the so-called Vrancea zone. The CFD code used for numerical modelling is CitcomS, a widely employed open source package specifically developed for earth sciences. Several preliminary 3D geodynamic models simulating an assumed subduction or the effect of a mantle plume will be presented and discussed.
Holographic memory for high-density data storage and high-speed pattern recognition
NASA Astrophysics Data System (ADS)
Gu, Claire
2002-09-01
As computers and the internet become faster and faster, more and more information is transmitted, received, and stored every day. The demand for high density and fast access time data storage is pushing scientists and engineers to explore all possible approaches, including magnetic, mechanical, optical, etc. Optical data storage has already demonstrated its potential in the competition against other storage technologies. CD and DVD are showing their advantages in the computer and entertainment market. What motivated the use of optical waves to store and access information is the same as the motivation for optical communication. Light, or an optical wave, has an enormous capacity (or bandwidth) to carry information because of its short wavelength and parallel nature. In optical storage, there are two types of mechanism, namely localized and holographic memories. What gives holographic data storage an advantage over localized bit storage is the natural ability to read the stored information in parallel, thereby meeting the demand for fast access. Another unique feature that makes holographic data storage attractive is that it is capable of performing associative recall at an incomparable speed. Therefore, volume holographic memory is particularly suitable for high-density data storage and high-speed pattern recognition. In this paper, we review previous work on volume holographic memories and discuss the challenges for this technology to become a reality.
A model predictive speed tracking control approach for autonomous ground vehicles
NASA Astrophysics Data System (ADS)
Zhu, Min; Chen, Huiyan; Xiong, Guangming
2017-03-01
This paper presents a novel speed tracking control approach based on a model predictive control (MPC) framework for autonomous ground vehicles. A switching algorithm without calibration is proposed to determine the drive or brake control. Combined with a simple inverse longitudinal vehicle model and adaptive regulation of the MPC, this algorithm can make use of the engine brake torque for various driving conditions and avoid high frequency oscillations automatically. A simplified quadratic program (QP) solving algorithm is used to reduce the computational time, and the approach has been implemented on a 16-bit microcontroller. The performance of the proposed approach is evaluated via simulations and vehicle tests, which were carried out in a range of speed-profile tracking tasks. With a well-designed system structure, high-precision speed control is achieved. The system is robust to model uncertainty and external disturbances, and yields a faster response with less overshoot than a PI controller.
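The paper's QP and vehicle model are not reproduced in the abstract; as a stand-in, the sketch below implements receding-horizon speed tracking for an assumed discrete first-order longitudinal model, with the unconstrained QP solved as a linear system. Model coefficients, horizon, and weights are all illustrative.

    import numpy as np

    a, b = 0.95, 0.10       # assumed model: v[k+1] = a*v[k] + b*u[k]
    N, lam = 10, 0.01       # horizon length and input-effort weight

    # Prediction over the horizon: v_pred = F*v0 + G @ u.
    F = np.array([a ** (i + 1) for i in range(N)])
    G = np.array([[a ** (i - j) * b if j <= i else 0.0
                   for j in range(N)] for i in range(N)])

    def mpc_step(v0, v_ref):
        # Minimize ||F*v0 + G@u - v_ref||^2 + lam*||u||^2 (unconstrained QP).
        H = G.T @ G + lam * np.eye(N)
        u = np.linalg.solve(H, -G.T @ (F * v0 - v_ref))
        return u[0]          # receding horizon: apply only the first input

    v = 0.0
    for _ in range(60):
        v = a * v + b * mpc_step(v, v_ref=15.0)
    print(round(v, 2))       # settles close to the 15 m/s reference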
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Shin, Bangho; Kim, Chan Hyeong; Furuta, Takuya
2018-05-04
In this study, the multi-threading performance of the Geant4, MCNP6, and PHITS codes was evaluated as a function of the number of threads (N) and the complexity of the tetrahedral-mesh phantom. For this, three tetrahedral-mesh phantoms of varying complexity (simple, moderately complex, and highly complex) were prepared and implemented in the three different Monte Carlo codes, in photon and neutron transport simulations. Subsequently, for each case, the initialization time, calculation time, and memory usage were measured as a function of the number of threads used in the simulation. It was found that for all codes, the initialization time significantly increased with the complexity of the phantom, but not with the number of threads. Geant4 exhibited much longer initialization time than the other codes, especially for the complex phantom (MRCP). The improvement of computation speed due to the use of a multi-threaded code was calculated as the speed-up factor, the ratio of the computation speed on a multi-threaded code to the computation speed on a single-threaded code. Geant4 showed the best multi-threading performance among the codes considered in this study, with the speed-up factor almost linearly increasing with the number of threads, reaching ~30 when N = 40. PHITS and MCNP6 showed a much smaller increase of the speed-up factor with the number of threads. For PHITS, the speed-up factors were low when N = 40. For MCNP6, the increase of the speed-up factors was better, but they were still less than ~10 when N = 40. As for memory usage, Geant4 was found to use more memory than the other codes. In addition, compared to that of the other codes, the memory usage of Geant4 more rapidly increased with the number of threads, reaching as high as ~74 GB when N = 40 for the complex phantom (MRCP). It is notable that compared to that of the other codes, the memory usage of PHITS was much lower, regardless of both the complexity of the phantom and the number of threads, hardly increasing with the number of threads for the MRCP.
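For reference, the study's speed-up factor reduces to a ratio of wall-clock times. A one-liner, with purely illustrative timings:

    def speedup(t_single_s, t_multi_s):
        # Speed-up factor: S(N) = T(1) / T(N).
        return t_single_s / t_multi_s

    print(speedup(3600.0, 120.0))  # 30.0, comparable to Geant4's scaling at N = 40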
First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop
NASA Technical Reports Server (NTRS)
Wood, Richard M. (Editor)
1999-01-01
This publication is a compilation of documents presented at the First NASA/Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.
The numerical simulation of a high-speed axial flow compressor
NASA Technical Reports Server (NTRS)
Mulac, Richard A.; Adamczyk, John J.
1991-01-01
The advancement of high-speed axial-flow multistage compressors is impeded by a lack of detailed flow-field information. Recent developments in compressor flow modeling and numerical simulation have the potential to provide needed information in a timely manner. The development of a computer program to solve the viscous form of the average-passage equation system for multistage turbomachinery is described. Programming issues such as in-core versus out-of-core data storage and CPU utilization (parallelization, vectorization, and chaining) are addressed. Code performance is evaluated through the simulation of the first four stages of a five-stage, high-speed, axial-flow compressor. The second part addresses the flow physics which can be obtained from the numerical simulation. In particular, an examination of the endwall flow structure is made, and its impact on blockage distribution assessed.
Light Weight MP3 Watermarking Method for Mobile Terminals
NASA Astrophysics Data System (ADS)
Takagi, Koichi; Sakazawa, Shigeyuki; Takishima, Yasuhiro
This paper proposes a novel MP3 watermarking method which is applicable to a mobile terminal with limited computational resources. Considering that in most cases the embedded information is copyright information or metadata, which should be extracted before playing back the audio contents, the watermark detection process should be executed at high speed. However, when conventional methods are used with a mobile terminal, it takes a considerable amount of time to detect a digital watermark. This paper focuses on scalefactor manipulation to enable high speed watermark embedding/detection for MP3 audio, and also proposes a manipulation method that adaptively minimizes audio quality degradation. Evaluation tests showed that the proposed method is capable of embedding 3 bits/frame of information without degrading audio quality and of detecting it at very high speed. Finally, this paper describes application examples for authentication with a digital signature.
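The abstract leaves out the exact manipulation rule, so the toy below only illustrates the general idea of hiding bits in scalefactor values, here via their parity; real methods, including the paper's, must bound the induced distortion adaptively. The scalefactor values and the embedding rule are assumptions, not the authors' scheme.

    def embed(scalefactors, bits):
        # Toy rule: force each scalefactor's parity to match one payload bit.
        out = list(scalefactors)
        for i, bit in enumerate(bits):
            if out[i] % 2 != bit:
                out[i] += 1
        return out

    def extract(scalefactors, n_bits):
        return [sf % 2 for sf in scalefactors[:n_bits]]

    sfs = [41, 37, 52, 48, 33, 60]            # hypothetical per-band scalefactors
    print(extract(embed(sfs, [1, 0, 1]), 3))  # [1, 0, 1]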
First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop. Pt. 2
NASA Technical Reports Server (NTRS)
Wood, Richard M. (Editor)
1999-01-01
This publication is a compilation of documents presented at the First NASA Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.
First NASA/Industry High-Speed Research Configuration Aerodynamics Workshop. Part 1
NASA Technical Reports Server (NTRS)
Wood, Richard M. (Editor)
1999-01-01
This publication is a compilation of documents presented at the First NASA/Industry High Speed Research Configuration Aerodynamics Workshop held on February 27-29, 1996 at NASA Langley Research Center. The purpose of the workshop was to bring together the broad spectrum of aerodynamicists, engineers, and scientists working within the Configuration Aerodynamics element of the HSR Program to collectively evaluate the technology status and to define the needs within Computational Fluid Dynamics (CFD) Analysis Methodology, Aerodynamic Shape Design, Propulsion/Airframe Integration (PAI), Aerodynamic Performance, and Stability and Control (S&C) to support the development of an economically viable High Speed Civil Transport (HSCT) aircraft. To meet these objectives, papers were presented by representatives from NASA Langley, Ames, and Lewis Research Centers; Boeing, McDonnell Douglas, Northrop-Grumman, Lockheed-Martin, Vigyan, Analytical Services, Dynacs, and RIACS.
Computational Analyses of the LIMX TBCC Inlet High-Speed Flowpath
NASA Technical Reports Server (NTRS)
Dippold, Vance F., III
2012-01-01
Reynolds-Averaged Navier-Stokes (RANS) simulations were performed for the high-speed flowpath and isolator of a dual-flowpath Turbine-Based Combined-Cycle (TBCC) inlet using the Wind-US code. The RANS simulations were performed in preparation for the Large-scale Inlet for Mode Transition (LIMX) model tests in the NASA Glenn Research Center (GRC) 10- by 10-ft Supersonic Wind Tunnel. The LIMX inlet has a low-speed flowpath that is coupled to a turbine engine and a high-speed flowpath designed to be coupled to a Dual-Mode Scramjet (DMSJ) combustor. These RANS simulations were conducted at a simulated freestream Mach number of 4.0, which is the nominal Mach number for the planned wind tunnel testing with the LIMX model. For the simulation results presented in this paper, the back pressure, cowl angles, and freestream Mach number were each varied to assess the performance and robustness of the high-speed inlet and isolator. Under simulated wind tunnel conditions at maximum inlet mass flow rates, the high-speed flowpath pressure rise was found to be greater than a factor of four. Furthermore, at a simulated freestream Mach number of 4.0, the high-speed flowpath and isolator remained stable for a freestream Mach number dropping as much as 0.1 below the design point. The RANS simulations indicate the yet-untested high-speed inlet and isolator flowpath should operate as designed. The RANS simulation results also provided important insight to researchers as they developed test plans for the LIMX experiment in GRC's 10- by 10-ft Supersonic Wind Tunnel.
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
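The explicit MacCormack scheme at the heart of VNAP2's interior-point update is a two-step predictor-corrector; a one-dimensional linear-advection sketch shows its shape (VNAP2 itself solves the 2D Navier-Stokes equations with characteristic boundary conditions, so this is only the kernel of the idea, with an assumed periodic domain).

    import numpy as np

    nx, c = 200, 1.0
    dx = 1.0 / nx
    dt = 0.8 * dx / c                     # CFL-limited time step
    x = np.arange(nx) * dx
    u = np.exp(-200.0 * (x - 0.3) ** 2)   # smooth pulse on a periodic domain

    lam = c * dt / dx
    for _ in range(100):
        up = u - lam * (np.roll(u, -1) - u)               # predictor: forward difference
        u = 0.5 * (u + up - lam * (up - np.roll(up, 1)))  # corrector: backward difference

    print(round(float(u.max()), 3))       # peak stays near 1.0: little numerical damping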
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two dimensional, time dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack Scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows, is described. The program solves the two dimensional, time dependent Navier-Stokes equations. Turbulence is modeled with either a mixing length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
High-speed prediction of crystal structures for organic molecules
NASA Astrophysics Data System (ADS)
Obata, Shigeaki; Goto, Hitoshi
2015-02-01
We developed a master-worker type parallel algorithm for allocating the tasks of crystal structure optimization to distributed compute nodes, in order to improve the performance of simulations for crystal structure prediction. The performance experiments were carried out on the TUT-ADSIM supercomputer system (HITACHI HA8000-tc/HT210). The experimental results show that our parallel algorithm achieved speed-ups of 214 and 179 times using 256 processor cores for the crystal structure optimizations in predictions of the crystal structures of 3-aza-bicyclo(3.3.1)nonane-2,4-dione and 2-diazo-3,5-cyclohexadiene-1-one, respectively. We expect that this parallel algorithm can reduce the computational cost of any crystal structure prediction.
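A master-worker allocation of independent structure optimizations maps naturally onto a process pool: the master hands each candidate structure to the next free worker and gathers energies as they complete. A minimal Python sketch, with a dummy task standing in for the real relaxations (the authors' scheduler and node layout are not reproduced here):

    import multiprocessing as mp

    def optimize_structure(candidate_id):
        # Stand-in for one crystal-structure optimization.
        x = float(candidate_id) + 1.0
        for _ in range(100_000):          # dummy work
            x = (x * 1.0000001 + 1.0) % 7.0
        return candidate_id, x            # (structure id, "energy")

    if __name__ == "__main__":
        with mp.Pool(processes=8) as pool:   # 8 worker processes
            results = dict(pool.imap_unordered(optimize_structure, range(64)))
        print(len(results), "structures relaxed")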
Radio Synthesis Imaging - A High Performance Computing and Communications Project
NASA Astrophysics Data System (ADS)
Crutcher, Richard M.
The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.
NASA Technical Reports Server (NTRS)
Walton, William C., Jr.
1960-01-01
This paper reports the findings of an investigation of a finite-difference method directly applicable to calculating static or simple harmonic flexures of solid plates and potentially useful in other problems of structural analysis. The method, which was proposed in a doctoral thesis by John C. Houbolt, is based on linear theory and incorporates the principle of minimum potential energy. Full realization of its advantages requires use of high-speed computing equipment. After a review of Houbolt's method, results of some applications are presented and discussed. The applications consisted of calculations of the natural modes and frequencies of several uniform-thickness cantilever plates and, as a special case of interest, calculations of the modes and frequencies of the uniform free-free beam. Computed frequencies and nodal patterns for the first five or six modes of each plate are compared with existing experiments, and those for one plate are compared with another approximate theory. Beam computations are compared with exact theory. On the basis of the comparisons it is concluded that the method is accurate and general in predicting plate flexures, and additional applications are suggested. An appendix is devoted to computing procedures which evolved in the course of the applications and which facilitate use of the method in conjunction with high-speed computing equipment.
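To make the flavor of such finite-difference modal calculations concrete, the sketch below computes natural frequencies of a uniform beam and checks them against exact theory. It uses the simply supported case, whose boundary conditions fall out of the difference operator for free; the cantilever and free-free cases treated in the paper need additional boundary equations, so this is a simplification, not Houbolt's procedure.

    import numpy as np

    # EI*w'''' = rhoA*omega^2*w, simply supported: w = w'' = 0 at both ends.
    n, L, EI, rhoA = 200, 1.0, 1.0, 1.0
    h = L / (n + 1)
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h ** 2
    K = D2 @ D2                            # fourth-difference operator
    lam = np.sort(np.linalg.eigvalsh(K))   # eigenvalues approximate (k*pi/L)^4
    for k in (1, 2, 3):
        fd = np.sqrt(EI / rhoA * lam[k - 1])
        exact = np.sqrt(EI / rhoA) * (k * np.pi / L) ** 2
        print(k, round(fd, 3), round(exact, 3))   # FD vs exact frequencies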
NASA Astrophysics Data System (ADS)
Watanabe, Koji; Matsuno, Kenichi
This paper presents a new method for simulating flows driven by a body traveling with neither restriction on its motion nor a limit on the size of the region. In the present method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves through physical space without any limit on region size. Since the whole grid of the computational domain moves according to the movement of the body, the flow solver has to be constructed on a moving grid system, and it is important for the solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. For this purpose, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method makes it possible to simulate a flow driven by any kind of body motion, in a region of any size, while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field driven by the car at the hairpin curve is demonstrated in detail. The results show the promising features of the method.
Computational Structures Technology for Airframes and Propulsion Systems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K. (Compiler); Housner, Jerrold M. (Compiler); Starnes, James H., Jr. (Compiler); Hopkins, Dale A. (Compiler); Chamis, Christos C. (Compiler)
1992-01-01
This conference publication contains the presentations and discussions from the joint University of Virginia (UVA)/NASA Workshops. The presentations included NASA Headquarters perspectives on High Speed Civil Transport (HSCT), goals and objectives of the UVA Center for Computational Structures Technology (CST), NASA and Air Force CST activities, CST activities for airframes and propulsion systems in industry, and CST activities at Sandia National Laboratory.
Special-purpose computer for holography HORN-2
NASA Astrophysics Data System (ADS)
Ito, Tomoyoshi; Eldeib, Hesham; Yoshida, Kenji; Takahashi, Shinya; Yabe, Takashi; Kunugi, Tomoaki
1996-01-01
We designed and built a special-purpose computer for holography, HORN-2 (HOlographic ReconstructioN). HORN-2 calculates light intensity at a high speed of 0.3 Gflops per board with single (32-bit floating-point) precision. The cost of one board is 500,000 Japanese yen (5,000 US dollars). We built three boards; operating them in parallel, we obtain about 1 Gflops.
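The quantity a HORN board accelerates is the accumulation, over every object point, of a Fresnel-zone contribution at every hologram pixel. A numpy sketch of that accumulation follows; the wavelength, pixel pitch, and object points are hypothetical, and the board evaluates the same kind of sum in fixed-function hardware rather than software.

    import numpy as np

    wavelength = 633e-9                # wavelength in m (hypothetical)
    pitch, n = 10e-6, 256              # pixel pitch and hologram size
    ys, xs = np.mgrid[0:n, 0:n] * pitch

    # Object points: (x, y, z, amplitude), all hypothetical.
    points = [(1.0e-3, 1.2e-3, 0.10, 1.0),
              (1.5e-3, 0.8e-3, 0.12, 0.7)]

    I = np.zeros((n, n))
    for px, py, pz, amp in points:
        r2 = (xs - px) ** 2 + (ys - py) ** 2
        I += amp * np.cos(np.pi * r2 / (wavelength * pz))  # Fresnel-zone term
    print(I.shape)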
High-Speed Recording of Test Data on Hard Disks
NASA Technical Reports Server (NTRS)
Lagarde, Paul M., Jr.; Newnan, Bruce
2003-01-01
Disk Recording System (DRS) is a systems-integration computer program for a direct-to-disk (DTD) high-speed data acquisition system (HDAS) that records rocket-engine test data. The HDAS consists partly of equipment originally designed for recording the data on tapes. The tape recorders were replaced with hard-disk drives, necessitating the development of DRS to provide an operating environment that ties two computers, a set of five DTD recorders, and signal-processing circuits from the original tape-recording version of the HDAS into one working system. DRS includes three subsystems: (1) one that generates a graphical user interface (GUI), on one of the computers, that serves as a main control panel; (2) one that generates a GUI, on the other computer, that serves as a remote control panel; and (3) a data-processing subsystem that performs tasks on the DTD recorders according to instructions sent from the main control panel. The software affords capabilities for dynamic configuration to record single or multiple channels from a remote source, remote starting and stopping of the recorders, indexing to prevent overwriting of data, and production of filtered frequency data from an original time-series data file.
NASA Astrophysics Data System (ADS)
Shukla, Krishna Dayal; Saxena, Nishant; Manivannan, Anbarasu
2017-12-01
Recent advancements in the commercialization of high-speed non-volatile electronic memories, including phase change memory (PCM), have shown potential not only for advanced data storage but also for novel computing concepts. However, an in-depth understanding of ultrafast electrical switching dynamics is a key challenge for defining the ultimate speed of nanoscale memory devices, and it demands an unconventional electrical setup specifically capable of handling extremely fast electrical pulses. In the present work, an ultrafast programmable electrical tester (PET) setup has been developed expressly for unravelling the time-resolved electrical switching dynamics and programming characteristics of nanoscale memory devices at the picosecond (ps) time scale. This setup consists of novel high-frequency contact boards carefully designed to capture extremely fast switching transients within 200 ± 25 ps using time-resolved current-voltage measurements. All the instruments in the system are synchronized using LabVIEW, which helps to achieve various programming characteristics such as voltage-dependent transient parameters, read/write operations, and endurance tests of memory devices systematically, using short voltage pulses with parameters down to 1 ns rise/fall time and 1.5 ns pulse width (full width at half maximum). Furthermore, the setup has successfully demonstrated switching of Ag5In5Sb60Te30 (AIST) PCM devices within a strikingly fast 250 ps, one order of magnitude faster. Hence, this novel electrical setup will be immensely helpful for realizing the ultimate speed limits of various high-speed memory technologies for future computing.
Fluid flow dynamics in MAS systems
NASA Astrophysics Data System (ADS)
Wilhelm, Dirk; Purea, Armin; Engelke, Frank
2015-08-01
The turbine system and the radial bearing of a high performance magic angle spinning (MAS) probe with a 1.3 mm rotor diameter have been analyzed for spinning rates up to 67 kHz. We focused mainly on the fluid flow properties of the MAS system. Therefore, computational fluid dynamics (CFD) simulations and fluid measurements of the turbine and the radial bearings have been performed. CFD simulation and measurement results for the 1.3 mm MAS rotor system show relatively low efficiency (about 25%) compared to standard turbomachines outside the realm of MAS. However, MAS turbines are mainly optimized for speed and stability rather than efficiency. We have compared MAS systems with rotor diameters of 1.3-7 mm, converted to dimensionless values, with classical turbomachinery systems, showing that the operating parameters (rotor diameter, inlet mass flow, spinning rate) are in the favorable range. This dimensionless analysis also supports radial turbines for low speed MAS probes and diagonal turbines for high speed MAS probes. Consequently, a change from Pelton-type MAS turbines to diagonal turbines might be worth considering for high speed applications. CFD simulations of the radial bearings have been compared with basic theoretical values, which propose considerably smaller frictional losses. The discrepancies might be due to the simple linear flow profile employed in the theoretical model. Frictional losses generated inside the radial bearings result in undesired heat-up of the rotor. The rotor surface temperature distribution computed by the CFD simulations shows a large temperature gradient over the rotor.
Sonic boom predictions using a modified Euler code
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1992-01-01
The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.
NASA Technical Reports Server (NTRS)
Tan, Choon-Sooi; Suder, Kenneth (Technical Monitor)
2003-01-01
A framework for an effective computational methodology for characterizing the stability and the impact of distortion in high-speed multi-stage compressors is being developed. The methodology consists of using a few isolated-blade-row Navier-Stokes solutions for each blade row to construct a body force database. The purpose of the body force database is to replace each blade row in a multi-stage compressor by a body force distribution that produces the same pressure rise and flow turning. To do this, each body force database is generated in such a way that it can respond to changes in local flow conditions. Once the database is generated, no further Navier-Stokes computations are necessary. The process is repeated for every blade row in the multi-stage compressor. The body forces are then embedded as source terms in an Euler solver. The method is developed to have the capability to compute the performance in a flow that has radial as well as circumferential non-uniformity with a length scale larger than a blade pitch; thus it can potentially be used to characterize the stability of a compressor under design. It is these two latter features, as well as the accompanying procedure to obtain the body force representation, that distinguish the present methodology from the streamline curvature method. The overall computational procedures have been developed. A dimensional analysis was carried out to determine the local flow conditions for parameterizing the magnitudes of the local body force representation of blade rows. An Euler solver was modified to embed the body forces as source terms. The results from the dimensional analysis show that the body forces can be parameterized in terms of the two relative flow angles, the relative Mach number, and the Reynolds number. For flow in a high-speed transonic blade row, they can be parameterized in terms of the local relative Mach number alone.
High speed turboprop aeroacoustic study (counterrotation). Volume 2: Computer programs
NASA Technical Reports Server (NTRS)
Whitfield, C. E.; Mani, R.; Gliebe, P. R.
1990-01-01
The isolated counterrotating high speed turboprop noise prediction program developed and funded by GE Aircraft Engines was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in the NASA-Lewis 8 x 6 and 9 x 15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counter rotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program. The results were compared with data taken during the flight test of the B727/UDF (trademark) engine demonstrator aircraft.
High speed turboprop aeroacoustic study (counterrotation). Volume 2: Computer programs
NASA Astrophysics Data System (ADS)
Whitfield, C. E.; Mani, R.; Gliebe, P. R.
1990-07-01
The isolated counterrotating high speed turboprop noise prediction program developed and funded by GE Aircraft Engines was compared with model data taken in the GE Aircraft Engines Cell 41 anechoic facility, the Boeing Transonic Wind Tunnel, and in the NASA-Lewis 8 x 6 and 9 x 15 wind tunnels. The predictions show good agreement with measured data under both low and high speed simulated flight conditions. The installation effect model developed for single rotation, high speed turboprops was extended to include counter rotation. The additional effect of mounting a pylon upstream of the forward rotor was included in the flow field modeling. A nontraditional mechanism concerning the acoustic radiation from a propeller at angle of attack was investigated. Predictions made using this approach show results that are in much closer agreement with measurement over a range of operating conditions than those obtained via traditional fluctuating force methods. The isolated rotors and installation effects models were combined into a single prediction program. The results were compared with data taken during the flight test of the B727/UDF (trademark) engine demonstrator aircraft.
Reynolds Number Effects on a Supersonic Transport at Subsonic High-Lift Conditions (Invited)
NASA Technical Reports Server (NTRS)
Owens, L.R.; Wahls, R. A.
2001-01-01
A High Speed Civil Transport configuration was tested in the National Transonic Facility at the NASA Langley Research Center as part of NASA's High Speed Research Program. The primary purposes of the tests were to assess Reynolds number scale effects and high Reynolds number aerodynamic characteristics of a realistic, second generation supersonic transport while providing data for the assessment of computational methods. The tests included longitudinal and lateral/directional studies at transonic and low-speed, high-lift conditions across a range of Reynolds numbers from that available in conventional wind tunnels to near flight conditions. Results are presented which focus on Reynolds number and static aeroelastic sensitivities of longitudinal characteristics at Mach 0.30 for a configuration without an empennage. A fundamental change in flow-state occurred between Reynolds numbers of 30 to 40 million, which is characterized by significantly earlier inboard leading-edge separation at the high Reynolds numbers. Force and moment levels change but Reynolds number trends are consistent between the two states.
Compact high-speed scanning lidar system
NASA Astrophysics Data System (ADS)
Dickinson, Cameron; Hussein, Marwan; Tripp, Jeff; Nimelman, Manny; Koujelev, Alexander
2012-06-01
The compact High Speed Scanning Lidar (HSSL) was designed to meet the requirements for a rover GN&C sensor. The eye-safe HSSL's fast scanning speed, low volume, and low power make it the ideal choice for a variety of real-time and non-real-time applications, including 3D mapping, vehicle guidance and navigation, obstacle detection, orbiter rendezvous, and spacecraft landing / hazard avoidance. The HSSL comprises two main hardware units: the Sensor Head and the Control Unit. In a rover application, the Sensor Head mounts on the top of the rover while the Control Unit can be mounted on the rover deck or within its avionics bay. An Operator Computer is used to command the lidar and immediately display the acquired scan data. The innovative lidar design concept was the result of an extensive trade study conducted during the initial phase of an exploration rover program. The lidar utilizes an innovative scanner coupled with a compact fiber laser and high-speed timing electronics. Compared to existing compact lidar systems, distinguishing features of the HSSL include its high accuracy, high resolution, high refresh rate, and large field of view. Other benefits of this design include the capability to quickly configure scan settings to fit various operational modes.
Making Cloud Computing Available For Researchers and Innovators (Invited)
NASA Astrophysics Data System (ADS)
Winsor, R.
2010-12-01
High Performance Computing (HPC) facilities exist in most academic institutions but are almost invariably over-subscribed. Access is allocated based on academic merit, the only practical method of assigning valuable finite compute resources. Cloud computing, on the other hand, and particularly commercial clouds, draws flexibly on an almost limitless resource as long as the user has sufficient funds to pay the bill. How can the commercial cloud model be applied to scientific computing? Is there a case to be made for a publicly available research cloud, and how would it be structured? This talk will explore these themes and describe how Cybera, a not-for-profit non-governmental organization in Alberta, Canada, aims to leverage its high speed research and education network to provide cloud computing facilities for a much wider user base.
NASA Astrophysics Data System (ADS)
Crowell, Andrew Rippetoe
This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges has obstructed the development of such vehicles. These technical challenges are partially due both to the inability to accurately test scaled vehicles in wind tunnels and to the time-intensive nature of high-fidelity computational modeling, particularly for the fluid, which is modeled using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, ranging from simple two-dimensional flow over flat surfaces to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two-pronged approach is found to exhibit balanced performance in terms of accuracy and computational expense, relative to several existing approaches. This approach enables CFD-based loads to be implemented into long-duration fluid-thermal-structural simulations.
Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.
2010-01-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is not always possible in general. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040-atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792
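The core of the coarse-graining side of this combination lends itself to a compact illustration. The following toy sketch is a hedged simplification, not the authors' ALPB/HCP code: the potential at a point uses exact charges for nearby groups but a single aggregate charge per distant group. The grouping, distance threshold, and bare 1/r kernel are all illustrative assumptions.

```python
# Toy sketch of the hierarchical-charge-partitioning idea (an illustration,
# not the authors' ALPB/HCP implementation): the potential at a point uses
# exact charges for nearby groups but one aggregate charge per distant group.
# Grouping, threshold, and the bare 1/r kernel are illustrative assumptions.
import numpy as np

def group_potential(point, charges, positions, groups, threshold=15.0):
    """Approximate Coulomb potential at `point` (arbitrary units)."""
    phi = 0.0
    for idx in groups:                      # idx: indices of one charge group
        center = positions[idx].mean(axis=0)
        if np.linalg.norm(point - center) > threshold:
            # Far group: a single effective charge at the group centroid.
            phi += charges[idx].sum() / np.linalg.norm(point - center)
        else:
            # Near group: exact sum over individual charges.
            r = np.linalg.norm(positions[idx] - point, axis=1)
            phi += np.sum(charges[idx] / r)
    return phi

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(5000, 3))      # toy "atom" coordinates
q = rng.choice([-1.0, 1.0], size=5000)             # toy partial charges
groups = np.array_split(np.arange(5000), 250)      # 250 groups of 20 atoms
print(group_potential(np.array([50.0, 50.0, 90.0]), q, pos, groups))
```

Because each group's contribution is independent, this loop is also the natural unit of GPU parallelization, which is why the two speed-up techniques compose well here.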
Cellulose Nanofiber Composite Substrates for Flexible Electronics
Ronald Sabo; Jung-Hun Seo; Zhenqiang Ma
2012-01-01
Flexible electronics have a large number of potential applications, including malleable displays and wearable computers. Current research into high-speed, flexible electronic substrates employs plastics as the flexible substrate, but these plastics typically have drawbacks, such as high thermal expansion coefficients. Transparent films made from...
Chapter 2.3 Cellulose Nanofibril Composite Substrates for Flexible Electronics
Ronald Sabo; Jung-Hun Seo; Zhenqiang Ma
2013-01-01
Flexible electronics have a large number of potential applications, including malleable displays and wearable computers. Current research into high-speed, flexible electronic substrates uses plastics for the flexible substrate, but these plastics typically have drawbacks, such as high thermal expansion coefficients. Transparent films made from cellulose...
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculation, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
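The coarse-to-fine pattern behind such multiresolution schemes can be sketched compactly. The following 1-D toy is a hedged analogue, not the authors' GPU code: a few Landweber/SIRT-style iterations run on a downsampled problem, and the upsampled estimate is then refined at full resolution. The blur operator, pooling factor, and iteration counts are illustrative.

```python
# Minimal coarse-to-fine iterative reconstruction sketch (a 1-D toy analogue
# of the multiresolution idea, not the paper's implementation).
import numpy as np

def landweber(A, b, x0, iters=50, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
    x = x0.copy()
    for _ in range(iters):
        x += step * A.T @ (b - A @ x)            # gradient step on ||Ax - b||^2
    return x

rng = np.random.default_rng(1)
n = 256
A = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) < 4   # toy blur operator
A = A / A.sum(axis=1, keepdims=True)
x_true = (rng.random(n) < 0.05).astype(float)
b = A @ x_true

# Coarse level: average pooling by a factor of 4 keeps the problem small.
D = np.kron(np.eye(n // 4), np.full((1, 4), 0.25))              # downsampler
A_c = A @ D.T                                                   # coarse operator
x_c = landweber(A_c, b, np.zeros(n // 4))                       # cheap coarse solve
x = landweber(A, b, D.T @ x_c, iters=20)                        # refine upsampled guess
print("residual:", np.linalg.norm(A @ x - b))
```

Warm-starting the fine level from the coarse solution is what lets the expensive full-resolution stage get away with far fewer iterations, mirroring the memory and speed savings the abstract describes.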
On the proper use of the reduced speed of light approximation
Gnedin, Nickolay Y.
2016-12-07
I show that the Reduced Speed of Light (RSL) approximation, when used properly (i.e. as originally designed - only for the local sources but not for the cosmic background), remains a highly accurate numerical method for modeling cosmic reionization. Simulated ionization and star formation histories from the "Cosmic Reionization On Computers" (CROC) project are insensitive to the adopted value of the reduced speed of light for as long as that value does not fall below about 10% of the true speed of light. Here, a recent claim of the failure of the RSL approximation in the Illustris reionization model appears to be due to the effective speed of light being reduced in the equation for the cosmic background too, and, hence, illustrates the importance of maintaining the correct speed of light in modeling the cosmic background.
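The distinction the abstract draws can be written out schematically. The equations below are a hedged sketch of the RSL modification, not the paper's exact discretization: radiation from local sources is transported with a reduced speed, while the spatially uniform background must keep the true speed of light.

```latex
% Schematic sketch of the RSL modification (not the paper's discretization).
% Local-source radiative transfer uses the reduced speed \hat{c} = f c:
\[
  \frac{1}{\hat{c}} \frac{\partial I_\nu}{\partial t}
    + \hat{n}\cdot\nabla I_\nu
    = -\kappa_\nu I_\nu + S_\nu^{\mathrm{loc}},
  \qquad \hat{c} = f\,c, \quad f \gtrsim 0.1 .
\]
% The uniform cosmic background, by contrast, must evolve with the true c:
\[
  \frac{1}{c} \frac{d \bar{J}_\nu}{d t}
    = -\bar{\kappa}_\nu \bar{J}_\nu + \bar{S}_\nu^{\mathrm{bg}} .
\]
```

Applying the factor f to the background equation as well is precisely the misuse the abstract identifies in the Illustris comparison.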
Numerical Simulation of the Oscillations in a Mixer: An Internal Aeroacoustic Feedback System
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Loh, Ching Y.
2004-01-01
The space-time conservation element and solution element method is employed to numerically study the acoustic feedback system in a high temperature, high speed wind tunnel mixer. The computation captures the self-sustained feedback loop between reflecting Mach waves and the shear layer. This feedback loop results in violent instabilities that are suspected of causing damage to some tunnel components. The computed frequency is in good agreement with the available experimental data. The physical phenomena are explained based on the numerical results.
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
Parallel computing in experimental mechanics and optical measurement: A review (II)
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Kemao, Qian
2018-05-01
With advantages such as non-destructiveness, high sensitivity, and high accuracy, optical techniques have been successfully applied to measuring various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computation burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high-speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, including digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.
Evolving binary classifiers through parallel computation of multiple fitness cases.
Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni
2005-06-01
This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computational efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly, using parallel computation, in the case of cellular programming, or implicitly, by taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures, in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
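The "intrinsic parallelism of bitwise operators" mentioned above has a simple concrete form: packing one fitness case per bit lets a word-wide logic expression evaluate 64 cases at once. The sketch below is an illustrative reconstruction under that assumption, not the paper's system; the features, target function, and candidate expression are invented for the demonstration.

```python
# Bitwise-parallel fitness evaluation: 64 fitness cases packed into one word,
# so one pass of AND/OR/XOR evaluates a Boolean classifier on all of them.
# (Illustrative toy, not the paper's genetic-programming system.)
import random

N_CASES = 64                       # one 64-bit word = 64 fitness cases
random.seed(0)

# Each feature is a 64-bit column: bit i = value of that feature in case i.
f0 = random.getrandbits(N_CASES)
f1 = random.getrandbits(N_CASES)
target = (f0 & f1) ^ f1            # ground-truth labels, one bit per case

def evaluate(candidate, features):
    """Apply a candidate expression to all 64 cases in parallel."""
    return candidate(*features)

mask = (1 << N_CASES) - 1
cand = lambda a, b: ((a | b) & ~(a ^ b)) & mask   # one evolved candidate
pred = evaluate(cand, (f0, f1))
errors = bin(pred ^ target).count("1")  # Hamming distance = misclassified cases
print(f"fitness: {N_CASES - errors}/{N_CASES} cases correct")
```

On a 64-bit machine this gives an implicit 64-way speed-up with no threads at all, which is the runtime optimization the genetic-programming variant relies on.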
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy efficiently combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to explore a likelihood function, posterior distribution, or χ²-surface. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo, and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
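The precompute-then-explore pattern is easy to sketch. The toy below is a hedged reconstruction of the idea under stated assumptions (a Gaussian stand-in likelihood, a regular precomputation grid, and a plain Metropolis sampler), not the APPLe code: expensive likelihood evaluations are farmed out in parallel, a cheap interpolating surrogate is fit, and the inherently serial MCMC chain then runs against the surrogate.

```python
# Hedged sketch of parallel precomputation + learned surrogate + serial MCMC.
# The likelihood, grid, bounds, and proposal scale are all assumptions.
import numpy as np
from multiprocessing import Pool
from scipy.interpolate import RegularGridInterpolator

def expensive_loglike(theta):              # stand-in for a costly simulation
    x, y = theta
    return -0.5 * ((x - 1.0) ** 2 + (y + 0.5) ** 2)

xs = np.linspace(-4.0, 4.0, 41)
ys = np.linspace(-4.0, 4.0, 41)
grid = [(x, y) for x in xs for y in ys]

if __name__ == "__main__":
    with Pool() as pool:                   # trivially parallel precomputation
        vals = pool.map(expensive_loglike, grid)
    surrogate = RegularGridInterpolator(
        (xs, ys), np.array(vals).reshape(len(xs), len(ys)))

    rng = np.random.default_rng(2)
    theta, chain = np.zeros(2), []
    for _ in range(5000):                  # Metropolis on the cheap surrogate
        prop = theta + 0.3 * rng.standard_normal(2)
        if np.all(np.abs(prop) < 4.0):
            delta = (surrogate(prop) - surrogate(theta)).item()
            if np.log(rng.random()) < delta:
                theta = prop
        chain.append(theta.copy())
    print(np.mean(chain, axis=0))          # should approach (1.0, -0.5)
```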
Bibliography on Ground Vehicle Communications & Control : A KWIC Index
DOT National Transportation Integrated Search
1971-08-01
This bibliography, on the subject of communication and control of ground vehicles, covers the fields of land-mobile communication, computer-aided traffic control, communication with high speed ground vehicles, and radio frequency noise. Emphasis is pl...
Big data analytics to aid developing livable communities.
DOT National Transportation Integrated Search
2015-12-31
In transportation, ubiquitous deployment of low-cost sensors combined with powerful computer hardware and high-speed networks makes big data available. USDOT defines big data research in transportation as a number of advanced techniques applied to...
28-Bit serial word simulator/monitor
NASA Technical Reports Server (NTRS)
Durbin, J. W.
1979-01-01
Modular interface unit transfers data at high speeds along four channels. Device expedites variable-word-length communication between computers. Operation eases exchange of bit information by automatically reformatting coded input data and status information to match requirements of output.
Advanced Aerodynamic Design of Passive Porosity Control Effectors
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Viken, Sally A.; Wood, Richard M.; Bauer, Steven X. S.
2001-01-01
This paper describes aerodynamic design work aimed at developing a passive porosity control effector system for a generic tailless fighter aircraft. As part of this work, a computational design tool was developed and used to lay out passive porosity effector systems for longitudinal and lateral-directional control at a low-speed, high angle of attack condition. Aerodynamic analysis was conducted using the NASA Langley computational fluid dynamics code USM3D, in conjunction with a newly formulated surface boundary condition for passive porosity. Results indicate that passive porosity effectors can provide maneuver control increments that equal and exceed those of conventional aerodynamic effectors for low-speed, high-alpha flight, with control levels that are a linear function of porous area. This work demonstrates the tremendous potential of passive porosity to yield simple control effector systems that have no external moving parts and will preserve an aircraft's fixed outer mold line.
High speed civil transport aerodynamic optimization
NASA Technical Reports Server (NTRS)
Ryan, James S.
1994-01-01
This is a report of work in support of the Computational Aerosciences (CAS) element of the Federal HPCC program. Specifically, CFD and aerodynamic optimization are being performed on parallel computers. The long-range goal of this work is to facilitate teraflops-rate multidisciplinary optimization of aerospace vehicles. This year's work is targeted for application to the High Speed Civil Transport (HSCT), one of four CAS grand challenges identified in the HPCC FY 1995 Blue Book. This vehicle is to be a passenger aircraft, with the promise of cutting overseas flight time by more than half. To meet fuel economy, operational costs, environmental impact, noise production, and range requirements, improved design tools are required, and these tools must eventually integrate optimization, external aerodynamics, propulsion, structures, heat transfer, controls, and perhaps other disciplines. The fundamental goal of this project is to contribute to improved design tools for U.S. industry, and thus to the nation's economic competitiveness.
Theoretical and experimental comparison of vapor cavitation in dynamically loaded journal bearings
NASA Technical Reports Server (NTRS)
Brewe, D. E.; Hamrock, B. J.; Jacobson, B. A.
1985-01-01
Vapor cavitation for a submerged journal bearing under dynamically loaded conditions was investigated. The observation of vapor cavitation in the laboratory was done by high-speed photography. It was found that vapor cavitation occurs when the tensile stress applied to the oil exceeded the tensile strength of the oil or the binding of the oil to the surface. The theoretical solution to the Reynolds equation is determined numerically using a moving boundary algorithm. This algorithm conserves mass throughout the computational domain, including the region of cavitation and its boundaries. An alternating direction implicit (ADI) method is used to effect the time march. A rotor undergoing circular whirl was studied. Predicted cavitation behavior was analyzed by three-dimensional computer graphic movies. The formation, growth, and collapse of the bubble in response to the dynamic conditions is shown. For the same conditions of dynamic loading, the cavitation bubble was studied in the laboratory using high-speed photography.
NASA Technical Reports Server (NTRS)
Hung, R. J.; Tsao, Y. D.; Leslie, Fred W.; Hong, B. B.
1988-01-01
The time-dependent evolution of the free-surface profile (bubble shape) for a cylindrical container partially filled with a Newtonian fluid of constant density, rotating about its axis of symmetry, has been studied. Numerical computations of the dynamics of bubble shapes have been carried out for the following situations: (1) linear functions of spin-up and spin-down in low and microgravity environments, (2) linear functions of increasing and decreasing gravity environment at high and low rotating cylinder speeds, (3) step functions of spin-up and spin-down in a low gravity environment, and (4) sinusoidal oscillation of the gravity environment at high and low rotating cylinder speeds. The initial condition for the bubble profiles was adopted from the steady-state formulations, for which computer algorithms were developed by Hung and Leslie (1988) and Hung et al. (1988).
Application of CFD to the analysis and design of high-speed inlets
NASA Technical Reports Server (NTRS)
Rose, William C.
1995-01-01
Over the past seven years, efforts under the present Grant have been aimed at applying modern Computational Fluid Dynamics (CFD) to the design of high-speed engine inlets. In this report, a review of previous design capabilities (prior to the advent of functioning CFD) is presented, and the NASA 'Mach 5 inlet' design is given as the premier example of the historical approach to inlet design. The philosophy used in the Mach 5 inlet design was carried forward in the present study, in which CFD was used to design a new Mach 10 inlet. An example of an inlet redesign is also shown. These latter efforts were carried out using today's state-of-the-art, full computational fluid dynamics codes applied in an iterative man-in-the-loop technique. The potential usefulness of an automated machine design capability using an optimizer code is also discussed.
Validation of a RANS transition model using a high-order weighted compact nonlinear scheme
NASA Astrophysics Data System (ADS)
Tu, GuoHua; Deng, XiaoGang; Mao, MeiLiang
2013-04-01
A modified transition model is given based on the shear stress transport (SST) turbulence model and an intermittency transport equation. The energy gradient term in the original model is replaced by the flow strain rate to save computational cost. The model employs local variables only, so it can be conveniently implemented in modern computational fluid dynamics codes. The fifth-order weighted compact nonlinear scheme and the fourth-order staggered scheme are applied to discretize the governing equations for the purpose of minimizing discretization errors, so as to mitigate the confusion between numerical errors and transition model errors. The high-order package is compared with a second-order TVD method on simulating the transitional flow over a flat plate. Numerical results indicate that the high-order package gives better grid convergence properties than the second-order method. Validation of the transition model is performed for transitional flows ranging from low speed to hypersonic speed.
Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure
NASA Astrophysics Data System (ADS)
Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei
2011-09-01
Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
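The master/worker distribution described above maps naturally onto a few lines of MPI. The sketch below is a hedged reconstruction (mpi4py and the toy "dose" kernel are assumptions, not the authors' EGS5 scripts): each node simulates an independent share of the 1 million histories, and the partial results are reduced onto the master node.

```python
# Hedged sketch of embarrassingly parallel Monte Carlo with MPI reduction.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 1_000_000                        # histories, as in the paper's test
n_local = N_TOTAL // size                  # even split (remainder ignored here)

rng = np.random.default_rng(seed=rank)     # independent stream per node

def simulate(n, rng):
    """Toy 'dose deposition': random depths binned into a 1-D profile."""
    depths = rng.exponential(scale=5.0, size=n)
    dose, _ = np.histogram(depths, bins=100, range=(0.0, 50.0))
    return dose.astype(np.float64)

local_dose = simulate(n_local, rng)
total_dose = np.zeros_like(local_dose)
comm.Reduce(local_dose, total_dose, op=MPI.SUM, root=0)   # aggregate on master

if rank == 0:
    print("peak bin:", total_dose.argmax(), "histories:", n_local * size)
```

Launched with, say, mpiexec -n 100, ideal scaling would give 154.8 min / 100 ≈ 1.5 min per run; the observed 3.3 min corresponds to the reported 47× speed-up, with the difference attributable to start-up and aggregation overhead.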
Precision electronic speed controller for an alternating-current motor
Bolie, V.W.
A high precision controller for an alternating-current multi-phase electrical motor that is subject to a large inertial load. The controller was developed for controlling, in a neutron chopper system, a heavy spinning rotor that must be rotated in phase-locked synchronism with a reference pulse train that is representative of an ac power supply signal having a meandering line frequency. The controller includes a shaft revolution sensor which provides a feedback pulse train representative of the actual speed of the motor. An internal digital timing signal generator provides a reference signal which is compared with the feedback signal in a computing unit to provide a motor control signal. The motor control signal is a weighted linear sum of a speed error voltage, a phase error voltage, and a drift error voltage, each of which is computed anew with each revolution of the motor shaft. The speed error signal is generated by a novel vernier-logic circuit which is drift-free and highly sensitive to small speed changes. The phase error is also computed by digital logic, with adjustable sensitivity around a 0 mid-scale value. The drift error signal, generated by long-term counting of the phase error, is used to compensate for any slow changes in the average friction drag on the motor. An auxiliary drift-byte status sensor prevents any disruptive overflow or underflow of the drift-error counter. An adjustable clocked-delay unit is inserted between the controller and the source of the reference pulse train to permit phase alignment of the rotor to any desired offset angle. The stator windings of the motor are driven by two amplifiers which are provided with input signals having the proper quadrature relationship by an exciter unit consisting of a voltage-controlled oscillator, a binary counter, a pair of read-only memories, and a pair of digital-to-analog converters.
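The once-per-revolution control law is simple enough to state in code. The sketch below is an illustrative reconstruction, not the patented circuit: the gains, units, and drift accumulator are assumptions chosen to mirror the abstract's description of a weighted linear sum of speed, phase, and drift errors recomputed each shaft revolution.

```python
# Illustrative once-per-revolution control law (a sketch, not the patent).
class SpinController:
    def __init__(self, target_period, k_speed=1.0, k_phase=0.5, k_drift=0.01):
        self.target_period = target_period      # desired time per revolution
        self.k_speed, self.k_phase, self.k_drift = k_speed, k_phase, k_drift
        self.drift = 0.0                        # long-term phase-error counter

    def update(self, measured_period, phase_error):
        """Called once per shaft revolution; returns the control command."""
        speed_error = self.target_period - measured_period
        self.drift += self.k_drift * phase_error   # compensates slow friction drift
        return (self.k_speed * speed_error
                + self.k_phase * phase_error
                + self.drift)

ctl = SpinController(target_period=1 / 60.0)        # 60 rev/s rotor
print(ctl.update(measured_period=1 / 59.5, phase_error=0.002))
```

The drift term plays the same role as the patent's long-term phase-error counter: it supplies a slowly adapting bias so the speed and phase gains do not have to fight changes in average friction drag.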
High-Speed Research: Sonic Boom, volume 2
NASA Technical Reports Server (NTRS)
Darden, Christine M. (Compiler)
1992-01-01
A High-Speed Sonic Boom Workshop was held at NASA Langley Research Center on February 25-27, 1992. The purpose of the workshop was to make presentations on current research activities and accomplishments and to assess progress in the area of sonic boom since the program was initiated in FY-90. Twenty-nine papers were presented during the 2-1/2 day workshop. Attendees included representatives from academia, industry, and government who are actively involved in sonic-boom research. Volume 2 contains papers related to low sonic-boom design and analysis using both linear theory and higher order computational fluid dynamics (CFD) methods.
High-Speed Observer: Automated Streak Detection in SSME Plumes
NASA Technical Reports Server (NTRS)
Rieckoff, T. J.; Covan, M.; O'Farrell, J. M.
2001-01-01
A high frame rate digital video camera installed on test stands at Stennis Space Center has been used to capture images of Space Shuttle main engine plumes during test. These plume images are processed in real time to detect and differentiate anomalous plume events occurring during a time interval on the order of 5 msec. Such speed yields near instantaneous availability of information concerning the state of the hardware. This information can be monitored by the test conductor or by other computer systems, such as the integrated health monitoring system processors, for possible test shutdown before occurrence of a catastrophic engine failure.
LES, DNS and RANS for the analysis of high-speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Adumitroaie, V.; Colucci, P. J.; Taulbee, D. B.; Givi, P.
1995-01-01
The purpose of this research is to continue our efforts in advancing the state of knowledge in large eddy simulation (LES), direct numerical simulation (DNS), and Reynolds averaged Navier Stokes (RANS) methods for the computational analysis of high-speed reacting turbulent flows. In the second phase of this work, covering the period 1 Aug. 1994 - 31 Jul. 1995, we have focused our efforts on two programs: (1) developments of explicit algebraic moment closures for statistical descriptions of compressible reacting flows and (2) development of Monte Carlo numerical methods for LES of chemically reacting flows.
Alpha Control - A new Concept in SPM Control
NASA Astrophysics Data System (ADS)
Spizig, P.; Sanchen, D.; Volswinkler, G.; Ibach, W.; Koenen, J.
2006-03-01
Controlling modern Scanning Probe Microscopes demands highly sophisticated electronics. While flexibility and powerful computing capability are of great importance in facilitating the variety of measurement modes, extremely low noise is also a necessity. Accordingly, modern SPM controller designs are based on digital electronics to overcome the drawbacks of analog designs. While today's SPM controllers are based on DSPs or microprocessors and often still incorporate analog parts, we are now introducing a completely new approach: using a Field Programmable Gate Array (FPGA) to implement the digital control tasks allows unrivalled data processing speed by computing all tasks in parallel within a single chip. Time-consuming task switching between data acquisition, digital filtering, scanning, and the computing of feedback signals can be completely avoided. Together with a star topology to avoid any bus limitations in accessing the variety of ADCs and DACs, this design guarantees for the first time an entirely deterministic timing capability in the nanosecond regime for all tasks. This becomes especially useful for external experiments which must be synchronized with the scan, or for high speed scans that require not only closed-loop control of the scanner but also dynamic correction of the scan movement. Delicate samples additionally benefit from extremely high sample rates, allowing highly resolved signals and low noise levels.
Numerical simulation of liquid jet impact on a rigid wall
NASA Astrophysics Data System (ADS)
Aganin, A. A.; Guseva, T. S.
2016-11-01
Basic points of a numerical technique for computing high-speed liquid jet impact on a rigid wall are presented. In the technique the flows of the liquid and the surrounding gas are governed by the equations of gas dynamics in the density, velocity, and pressure, which are integrated by the CIP-CUP method on dynamically adaptive grids without explicitly tracking the gas-liquid interface. The efficiency of the technique is demonstrated by the results of computing the problems of impact of the liquid cone and the liquid wedge on a wall in the mode with the shockwave touching the wall by its edge. Numerical solutions of these problems are compared with the analytical solution of the problem of impact of the plane liquid flow on a wall. Applicability of the technique to the problems of the high-speed liquid jet impact on a wall is illustrated by the results of computing a problem of impact of a cylindrical liquid jet with the hemispherical end on a wall covered by a layer of the same liquid.
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open and closed loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed while accuracy improved by up to 5 decimal places.
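The structural trick is that in normal-mode coordinates the state matrix is block-diagonal, so each frequency point reduces to a sparse solve that scales linearly with model size. The sketch below is a generic illustration under that assumption (random modal data, a single input and output), not the paper's formulation.

```python
# Generic sparse frequency-response sketch: H(jw) = C (jw*I - A)^(-1) B,
# where A is block-diagonal in normal-mode coordinates (2x2 block per mode).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n_modes = 703                                   # as in the paper's example
rng = np.random.default_rng(3)
omega_n = rng.uniform(1.0, 500.0, n_modes)      # modal frequencies (rad/s)
zeta = 0.01                                     # uniform modal damping (assumed)

# One 2x2 block per mode: [[0, 1], [-w^2, -2*zeta*w]]
blocks = [np.array([[0.0, 1.0], [-w**2, -2.0 * zeta * w]]) for w in omega_n]
A = sp.block_diag(blocks, format="csc")
b = rng.standard_normal(2 * n_modes)            # input influence vector B
C = rng.standard_normal(2 * n_modes)            # output row vector

def frf(omega):
    """One frequency point: a sparse solve instead of a dense inversion."""
    M = (1j * omega) * sp.identity(2 * n_modes, format="csc") - A
    return C @ spla.spsolve(M, b)               # cheap: A is block-diagonal

H = np.array([frf(w) for w in np.linspace(0.1, 100.0, 400)])
print("peak |H|:", np.abs(H).max())
```

Because the factorization touches only the 2×2 blocks, the cost per frequency point grows linearly in the number of modes, which is the scaling advantage the abstract reports over full-matrix techniques.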
Rotor design of high tip speed low loading transonic fan.
NASA Technical Reports Server (NTRS)
Erwin, J. R.; Vitale, N. G.
1972-01-01
This paper describes the design concepts, principles and details of a high tip speed transonic rotor having low aerodynamic loading. The purpose of the NASA sponsored investigation was to determine whether good efficiency and large stall margin could be obtained by designing a rotor to avoid flow separation associated with strong normal shocks. Fully supersonic flow through the outboard region of the rotor with compression accomplished by weak oblique shocks were major design concepts employed. Computer programs were written and used to derive blade sections consistent from the all-supersonic tip region to the all-subsonic hub region. Preliminary test results indicate attainment of design pressure ratio and design flow at design speed with about a 1.6 point decrement in efficiency and large stall margin.
High-speed GPU-based finite element simulations for NDT
NASA Astrophysics Data System (ADS)
Huthwaite, P.; Shi, F.; Van Pamel, A.; Lowe, M. J. S.
2015-03-01
The finite element method solved with explicit time increments is a general approach which can be applied to many ultrasound problems. It is widely used as a powerful tool within NDE for developing and testing inspection techniques, and can also be used in inversion processes. However, the solution technique is computationally intensive, requiring many calculations to be performed for each simulation, so traditionally speed has been an issue. For maximum speed, an implementation of the method, called Pogo [Huthwaite, J. Comp. Phys. 2014, doi: 10.1016/j.jcp.2013.10.017], has been developed to run on graphics cards, exploiting the highly parallelisable nature of the algorithm. Pogo typically demonstrates speed improvements of 60-90x over commercial CPU alternatives. Pogo is applied to three NDE examples, where the speed improvements are important: guided wave tomography, where a full 3D simulation must be run for each source transducer and every different defect size; scattering from rough cracks, where many simulations need to be run to build up a statistical model of the behaviour; and ultrasound propagation within coarse-grained materials where the mesh must be highly refined and many different cases run.
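The kernel that makes this method so amenable to GPUs is the explicit update itself: with a lumped (diagonal) mass matrix, every nodal update is independent of every other. The 1-D toy below sketches that structure under stated assumptions (a steel-like bar and a CFL-limited step); it illustrates explicit time marching in general, not Pogo itself.

```python
# Minimal explicit time-marching kernel (1-D toy): with a lumped mass matrix
# each nodal update is independent -> one GPU thread per node in practice.
import numpy as np

n = 2000                                   # nodes in a 1-D bar
E, rho, dx = 210e9, 7800.0, 1e-3           # steel-like material, element size
c = np.sqrt(E / rho)                       # wave speed
dt = 0.8 * dx / c                          # CFL-stable time step

k = E / dx                                 # element stiffness
m = rho * dx                               # lumped nodal mass
u_prev = np.zeros(n)
u = np.zeros(n)
u[0] = 1e-9                                # impulsive excitation at one end

for _ in range(3000):
    f_int = np.zeros(n)                    # internal forces from neighbours
    strain_force = k * np.diff(u)
    f_int[:-1] += strain_force
    f_int[1:] -= strain_force
    # Central-difference update; every entry independent of the others.
    u_next = 2 * u - u_prev + (dt ** 2 / m) * f_int
    u_prev, u = u, u_next

print("peak displacement:", np.abs(u).max())
```

In 3-D the neighbour stencil is richer, but the pattern is the same: gather element forces, then update every node independently, which is why the algorithm parallelizes so cleanly across thousands of GPU threads.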
Some aspects of the aeroacoustics of high-speed jets
NASA Technical Reports Server (NTRS)
Lighthill, James
1993-01-01
Some of the background to contemporary jet aeroacoustics is addressed. Then scaling laws for noise generation by low-Mach-number airflows and by turbulence convected at 'not so low' Mach number are reviewed. These laws take into account the influence of Doppler effects associated with the convection of aeroacoustic sources. Next, a uniformly valid Doppler-effect approximation exhibits the transition, with increasing Mach number of convection, from compact-source radiation at low Mach numbers to a statistical assemblage of conical shock waves radiated by eddies convected at supersonic speed. In jets, for example, supersonic eddy convection is typically found for jet exit speeds exceeding twice the atmospheric speed of sound. The lecture continues by describing a new dynamical theory of the nonlinear propagation of such statistically random assemblages of conical shock waves. It is shown, both by a general theoretical analysis and by an illustrative computational study, how their propagation is dominated by a characteristic 'bunching' process. That process, associated with a tendency for shock waves that have already formed unions with other shock waves to acquire an increased proneness to form further unions, acts so as to enhance the high-frequency part of the spectrum of noise emission from jets at these high exit speeds.
1984-09-27
...more effectively structured and transportable simulation program modules and powerful support software are already in place for current use. The early... incorporates the various limits and conditions described for the major acceleration categories. (14) Speed Loop: this module is executed when the shaft speed... available, high-confidence models and modules. A great leverage is gained by using generally available general-purpose computers and associated support...
On the generalized VIP time integral methodology for transient thermal problems
NASA Technical Reports Server (NTRS)
Mei, Youping; Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong
1993-01-01
The paper describes the development and applicability of a generalized VIrtual-Pulse (VIP) time integral method of computation for thermal problems. Unlike past approaches for general heat transfer computations, and with the advent of high speed computing technology and the importance of parallel computation for efficient use of computing environments, a major motivation for the developments described in this paper is the need for explicit computational procedures with improved accuracy and stability characteristics. As a consequence, a new and effective VIP methodology is described which inherits these improved characteristics. Numerical illustrative examples are provided to demonstrate the developments and validate the results obtained for thermal problems.
NASA Technical Reports Server (NTRS)
Byrne, F. (Inventor)
1981-01-01
A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are achieved in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
Parallelization of the Coupled Earthquake Model
NASA Technical Reports Server (NTRS)
Block, Gary; Li, P. Peggy; Song, Yuhe T.
2007-01-01
This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, tsunami prediction over the Internet had never been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.
Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.
Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk
2009-07-01
For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.
Rieucau, Guillaume; Burke, Darren
2017-01-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species. PMID:29491965
Berker, Yannick; Karp, Joel S; Schulz, Volkmar
2017-09-01
The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a previously proposed two-branch back-projection (2BP) were evaluated. MLGA was combined with the Armijo step size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step size allowed 10-fold larger step sizes than native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform, convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, the results suggest that data subsets are a promising way of improving MLGA performance.
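The two ingredients that worked best here, Armijo backtracking and cycling through data subsets, combine naturally, as the generic sketch below shows. A toy Gaussian log-likelihood and an arbitrary subset count stand in for the PET objective; this is an illustration of the technique, not the authors' reconstruction code.

```python
# Generic gradient ascent with the Armijo backtracking rule and ordered
# data subsets (toy objective, not the paper's PET reconstruction).
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(loc=2.0, scale=1.0, size=4096)     # stand-in measurements
subsets = np.array_split(data, 8)                     # 8 data subsets

def loglike(theta, d):
    return -0.5 * np.sum((d - theta) ** 2)

def grad(theta, d):
    return np.sum(d - theta)

theta, step0 = 0.0, 1.0
for it in range(20):
    d = subsets[it % len(subsets)]                    # cycle through subsets
    g = grad(theta, d)
    step, f0 = step0, loglike(theta, d)
    # Armijo rule: shrink the step until a sufficient-increase test passes.
    while loglike(theta + step * g, d) < f0 + 1e-4 * step * g * g:
        step *= 0.5
    theta += step * g

print("estimate:", theta)                             # approaches 2.0
```

Each subset update costs only a fraction of a full-data gradient, which is why the speed-up scales with the number of subsets without increasing the total computational burden.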
High Speed Oblivious Random Access Memory (HS-ORAM)
2015-09-01
Bryan Parno, "Non-interactive verifiable computing: Outsourcing computation to untrusted workers", 30th International Cryptology Conference, pp. 465... ...secure outsourced data access protocols. HS-ORAM deploys a number of server-side software components running inside tamper-proof secure coprocessors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busbey, A.B.
Seismic Processing Workshop, a program by Parallel Geosciences of Austin, TX, is discussed in this column. The program is a high-speed, interactive seismic processing and computer analysis system for the Apple Macintosh II family of computers. Also reviewed in this column are three products from Wilkerson Associates of Champaign, IL. SubSide is an interactive program for basin subsidence analysis; MacFault and MacThrustRamp are programs for modeling faults.
NASA Technical Reports Server (NTRS)
Gibson, Jim; Jordan, Joe; Grant, Terry
1990-01-01
Local Area Network Extensible Simulator (LANES) computer program provides method for simulating performance of high-speed local-area-network (LAN) technology. Developed as design and analysis software tool for networking computers on board proposed Space Station. Load, network, link, and physical layers of layered network architecture all modeled. Models networks mathematically according to different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.
High Speed Internet Access Using Cellular Infrastructure
2004-09-01
[Recovered figure captions: Figure 50, initial connection from laptop to mobile phone via Bluetooth; Figure 51, mobile phone discovered by the laptop.] ...tailor them to their needs. Other hardware improvements include Infrared and Bluetooth capabilities for external connectivity with a computer... A mobile phone can communicate with a computer through a phone's vendor-specific cable, Infrared, or Bluetooth; the latter two depend on both devices...
A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks
NASA Technical Reports Server (NTRS)
Cui, Zhenqian
1999-01-01
In this thesis, we analyze various factors that affect quality of service (QoS) communication in high-speed, packet-switching sub-networks. We hypothesize that sub-network-wide bandwidth reservation and guaranteed CPU processing power at endpoint systems for handling data traffic are indispensable to achieving hard end-to-end quality of service. Different bandwidth reservation strategies, traffic characterization schemes, and scheduling algorithms affect the network resources and CPU usage as well as the extent to which QoS can be achieved. In order to analyze those factors, we design and implement a communication layer. Our experimental analysis supports our research hypothesis. The Resource ReSerVation Protocol (RSVP) is designed to realize resource reservation. Our analysis of RSVP shows that using RSVP alone is insufficient to provide hard end-to-end quality of service in a high-speed sub-network. Analysis of the IEEE 802.1p protocol also supports the research hypothesis.
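Traffic characterization is what makes a bandwidth reservation enforceable, and the standard scheme is a token bucket. The sketch below is a generic illustration (the rate and depth values are assumptions, and the thesis may use a different characterizer): a flow conforming to rate r and bucket depth b never sends more than r·t + b bytes in any interval of length t, which is what allows the sub-network to commit a reservation to it.

```python
# Illustrative token-bucket traffic characterizer (generic sketch; parameter
# values are assumptions, not taken from the thesis).
class TokenBucket:
    def __init__(self, rate_bps, depth_bytes):
        self.rate = rate_bps / 8.0       # token refill rate, bytes/sec
        self.depth = depth_bytes         # maximum burst size, bytes
        self.tokens = depth_bytes
        self.t_last = 0.0

    def conforms(self, t_now, packet_bytes):
        """True if the packet may be sent at time t_now."""
        self.tokens = min(self.depth,
                          self.tokens + (t_now - self.t_last) * self.rate)
        self.t_last = t_now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                     # non-conformant: queue, mark, or drop

tb = TokenBucket(rate_bps=10_000_000, depth_bytes=15_000)   # 10 Mb/s flow
print([tb.conforms(t, 1500) for t in (0.0, 0.001, 0.002)])
```

The same (r, b) pair then drives the scheduler's admission test at each switch, tying the characterization scheme directly to the bandwidth reservation strategies the thesis compares.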
Design of a dataway processor for a parallel image signal processing system
NASA Astrophysics Data System (ADS)
Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu
1995-04-01
Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON'. The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.
Real-time interactive 3D computer stereography for recreational applications
NASA Astrophysics Data System (ADS)
Miyazawa, Atsushi; Ishii, Motonaga; Okuzawa, Kazunori; Sakamoto, Ryuuichi
2008-02-01
As the calculation costs of 3D computer stereography increase, its low-cost, high-speed implementation requires effective distribution of computing resources. In this paper, we attempt to re-classify 3D display technologies on the basis of human 3D perception, in order to determine what level of presence or reality is required in recreational video game systems. We then discuss the design and implementation of stereography systems in two categories of the new classification.
Hyperswitch Network For Hypercube Computer
NASA Technical Reports Server (NTRS)
Chow, Edward; Madan, Herbert; Peterson, John
1989-01-01
Data-driven dynamic switching enables high speed data transfer. Proposed hyperswitch network based on mixed static and dynamic topologies. Routing header modified in response to congestion or faults encountered as path established. Static topology meets requirement if nodes have switching elements that perform necessary routing header revisions dynamically. Hypercube topology now being implemented with switching element in each computer node aimed at designing very-richly-interconnected multicomputer system. Interconnection network connects great number of small computer nodes, using fixed hypercube topology, characterized by point-to-point links between nodes.
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
Scalable Optical-Fiber Communication Networks
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Peterson, John C.
1993-01-01
Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.
NASA Astrophysics Data System (ADS)
Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Zeise, Frederick F.
1993-07-01
A second generation digital optical computer (DOC II) has been developed which utilizes a RISC based operating system as its host. This 32 bit, high performance (12.8 GByte/sec) computing platform demonstrates a number of basic principles that are inherent to parallel free space optical interconnects, such as speed (up to 10^12 bit operations per second) and low power (1.2 fJ per bit). Although DOC II is a general purpose machine, special purpose applications have been developed and are currently being evaluated on the optical platform.
Computation of Large Turbulence Structures and Noise of Supersonic Jets
NASA Technical Reports Server (NTRS)
Tam, Christopher
1996-01-01
Our research effort concentrated on obtaining an understanding of the generation mechanisms and the prediction of the three components of supersonic jet noise. In addition, we also developed a computational method for calculating the mean flow of turbulent high-speed jets. Below is a short description of the highlights of our contributions in each of these areas: (a) Broadband shock associated noise, (b) Turbulent mixing noise, (c) Screech tones and impingement tones, (d) Computation of the mean flow of turbulent jets.
The development of an interim generalized gate logic software simulator
NASA Technical Reports Server (NTRS)
Mcgough, J. G.; Nemeroff, S.
1985-01-01
A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of IGGLOSS is its high-speed simulation: 9.5 x 10^6 gates/CPU sec for nonfaulted circuits and 4.4 x 10^6 gates/CPU sec for faulted circuits on a VAX 11/780 host computer.
2017-07-01
[Report fragment; only table-of-contents headings are recoverable: Mount Relative Displacement Calculations; Computational Methods; Relative Displacement and Mitigation Ratio; Predicted SDOF Responses; Figure 11, Predicted Relative Displacement for 3 Hz 30% Damped Mount.]
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed method realizes a design with the best tradeoff between economy and safety while requiring about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of the metallic structures of cranes.
Measuring Speed Using a Computer--Several Techniques.
ERIC Educational Resources Information Center
Pearce, Jon M.
1988-01-01
Introduces three different techniques to facilitate the measurement of speed and the associated kinematics and dynamics using a computer. Discusses sensing techniques using optical or ultrasonic sensors, interfacing with a computer, software routines for the interfaces, and other applications. Provides circuit diagrams, pictures, and a program to…
Hardware accelerator for molecular dynamics: MDGRAPE-2
NASA Astrophysics Data System (ADS)
Susukita, Ryutaro; Ebisuzaki, Toshikazu; Elmegreen, Bruce G.; Furusawa, Hideaki; Kato, Kenya; Kawai, Atsushi; Kobayashi, Yoshinao; Koishi, Takahiro; McNiven, Geoffrey D.; Narumi, Tetsu; Yasuoka, Kenji
2003-10-01
We developed MDGRAPE-2, a hardware accelerator that calculates forces at high speed in molecular dynamics (MD) simulations. MDGRAPE-2 is connected to a PC or a workstation as an extension board. The sustained performance of one MDGRAPE-2 board is 15 Gflops, roughly equivalent to the peak performance of the fastest supercomputer processing element. One board is able to calculate all forces between 10,000 particles in 0.28 s (i.e., 310,000 time steps per day). If 16 boards are connected to one computer and operated in parallel, this calculation speed becomes ~10 times faster. In addition to MD, MDGRAPE-2 can be applied to gravitational N-body simulations, the vortex method, and smoothed particle hydrodynamics in computational fluid dynamics.
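The computational kernel such boards accelerate is the all-pairs force sum, which scales as O(N^2) and dominates the cost of direct MD and N-body runs. A plain numpy stand-in (not the MDGRAPE-2 firmware, and with a generic softened 1/r^2 force law assumed for illustration) looks like:

    import numpy as np

    def pairwise_forces(pos, softening=1e-3):
        """All-pairs softened 1/r^2 forces, the O(N^2) kernel that boards
        like MDGRAPE-2 offload from the host computer."""
        diff = pos[:, None, :] - pos[None, :, :]          # (N, N, 3) separations
        r2 = (diff ** 2).sum(-1) + softening ** 2         # softened squared distances
        np.fill_diagonal(r2, np.inf)                      # no self-interaction
        return (diff / r2[..., None] ** 1.5).sum(axis=1)  # sum over all partners

    forces = pairwise_forces(np.random.rand(1000, 3))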
Computational adaptive optics for broadband optical interferometric tomography of biological tissue
NASA Astrophysics Data System (ADS)
Boppart, Stephen A.
2015-03-01
High-resolution real-time tomography of biological tissues is important for many areas of biological investigations and medical applications. Cellular-level optical tomography, however, has been challenging because of the compromise between transverse imaging resolution and depth-of-field, the system and sample aberrations that may be present, and the low imaging sensitivity deep in scattering tissues. The use of computed optical imaging techniques has the potential to address several of these long-standing limitations and challenges. Two related techniques are interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO). Through three-dimensional Fourier-domain resampling, in combination with high-speed OCT, ISAM can be used to achieve high-resolution in vivo tomography with enhanced depth sensitivity over a depth-of-field extended by more than an order of magnitude, in real time. Subsequently, aberration correction with CAO can be performed in a tomogram, rather than to the optical beam of a broadband optical interferometry system. Based on principles of Fourier optics, aberration correction with CAO is performed on a virtual pupil using Zernike polynomials, offering the potential to augment or even replace the more complicated and expensive adaptive optics hardware with algorithms implemented on a standard desktop computer. Interferometric tomographic reconstructions are characterized with tissue phantoms containing sub-resolution scattering particles, and in both ex vivo and in vivo biological tissue. This review will collectively establish the foundation for high-speed volumetric cellular-level optical interferometric tomography in living tissues.
Lin, Liqiang; Zeng, Xiaowei
2015-01-01
The focus of this work is to investigate spall fracture in polycrystalline materials under high-speed impact loading by using an atomistic-based interfacial zone model. We illustrate that for polycrystalline materials, increases in the potential energy ratio between grain boundaries and grains could cause a fracture transition from intergranular to transgranular mode. We also found that the spall strength increases when there is a fracture transition from intergranular to transgranular. In addition, analysis of grain size, crystal lattice orientation, and impact speed reveals that the spall strength increases as grain size or impact speed increases. PMID:26435546
NASA Technical Reports Server (NTRS)
Coogan, J. J.
1986-01-01
Modifications were designed for the B-737-100 Research Aircraft autobrake system hardware of the Advanced Transport Operating Systems (ATOPS) Program at Langley Research Center. These modifications will allow the on-board flight control computer to control the aircraft deceleration after landing to a continuously variable level for the purpose of executing automatic high-speed turnoffs from the runway. A breadboard version of the proposed modifications was built and tested in simulated stopping conditions. Test results for various aircraft weights, turnoff speeds, winds, and runway conditions show that the turnoff speeds are generally achieved with errors of less than 1 ft/sec.
Fully integrated sub 100ps photon counting platform
NASA Astrophysics Data System (ADS)
Buckley, S. J.; Bellis, S. J.; Rosinger, P.; Jackson, J. C.
2007-02-01
Current state-of-the-art high-resolution counting modules, specifically designed for high-timing-resolution applications, are largely based on a computer card format. This has tended to result in a costly solution that is restricted to the computer it resides in. We describe a four-channel timing module that interfaces to a computer via a USB port and operates with a resolution of less than 100 picoseconds. The core design of the system is an advanced field programmable gate array (FPGA) interfacing to a precision time interval measurement module, a mass memory block, and a high-speed USB 2.0 serial data port. The FPGA design allows the module to operate in a number of modes, allowing both continuous recording of photon events (time-tagging) and repetitive time binning. In time-tag mode the system reports, for each photon event, the high-resolution time along with the chronological time (macro time) and the channel ID. The time-tags are uploaded in real time to a host computer via a high-speed USB port, allowing continuous storage to computer memory of up to 4 million photons per second. In time-bin mode, binning is carried out with count rates up to 10 million photons per second. Each curve resides in a block of 128,000 time-bins, each with a resolution programmable down to less than 100 picoseconds. Each bin has a limit of 65535 hits, allowing autonomous curve recording until a bin reaches the maximum count or the system is commanded to halt. Due to the large memory storage, several curves/experiments can be stored in the system prior to uploading to the host computer for analysis. This makes the module ideal for integration into high-timing-resolution applications such as laser ranging and fluorescence lifetime imaging using techniques such as time-correlated single photon counting (TCSPC).
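As a sketch of what a host program might do with the uploaded stream, the fragment below folds time-tags into 100 ps bins relative to a repetitive sync period, which is the essence of the time-bin mode described above. The tag format, sync rate, and channel count are assumptions for illustration only.

    import numpy as np

    # Hypothetical time-tag stream: macro times (ns) plus a channel ID per photon.
    rng = np.random.default_rng(0)
    tags_ns = np.sort(rng.uniform(0, 1e9, size=100_000))   # photon arrival times
    channel = rng.integers(0, 4, size=tags_ns.size)        # 4-channel module

    # Repetitive binning against an assumed 10 MHz sync: fold arrivals into a
    # 100 ns window of 1000 bins (100 ps each) to build a decay curve.
    period_ns, n_bins = 100.0, 1000
    micro = np.mod(tags_ns[channel == 0], period_ns)       # time within each period
    curve, edges = np.histogram(micro, bins=n_bins, range=(0, period_ns))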
NASA Astrophysics Data System (ADS)
Jiang, Yuning; Kang, Jinfeng; Wang, Xinan
2017-03-01
Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today’s electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
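For reference, the classification that the crossbar performs in analog can be sketched in a few lines of conventional code. The Python stand-in below computes exact Euclidean distances and a majority vote rather than the conductance-weighted current sums of the RRAM array; the toy data stand in for MNIST.

    import numpy as np

    def knn_classify(train_x, train_y, query, k=5):
        """Software reference for k-nearest-neighbor classification."""
        d = np.linalg.norm(train_x - query, axis=1)   # distance to every example
        nearest = np.argsort(d)[:k]                   # indices of k closest
        votes = np.bincount(train_y[nearest])         # majority vote on labels
        return votes.argmax()

    # Toy stand-in for MNIST: 784-dimensional vectors, 10 classes.
    rng = np.random.default_rng(1)
    X = rng.random((6000, 784))
    y = rng.integers(0, 10, 6000)
    print(knn_classify(X, y, X[42], k=1))   # with k=1, recovers the label of example 42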
Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V
2010-06-01
Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed-up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone.
Parallel-vector unsymmetric Eigen-Solver on high performance computers
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Jiangning, Qin
1993-01-01
The popular QR algorithm for solving all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases indicate that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed method increases as the problem size increases.
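For readers unfamiliar with the Hessenberg step, a serial Householder-reflection version (a textbook formulation, not the paper's parallel-vector algorithm) can be sketched as follows; each column update is a dense matrix operation, which is exactly what vector and parallel hardware accelerates.

    import numpy as np

    def hessenberg(A):
        """Reduce A to upper Hessenberg form via Householder similarity
        transforms, preserving the eigenvalues."""
        H = A.astype(float).copy()
        n = H.shape[0]
        for k in range(n - 2):
            x = H[k + 1:, k]
            v = x.copy()
            v[0] += np.copysign(np.linalg.norm(x), x[0])
            norm = np.linalg.norm(v)
            if norm == 0:
                continue
            v /= norm
            # Apply P = I - 2 v v^T from the left and the right.
            H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
            H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
        return H

    A = np.random.rand(6, 6)
    H = hessenberg(A)
    print(np.allclose(np.tril(H, -2), 0))   # True: zeros below the subdiagonal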
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code, since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Parallel peak pruning for scalable SMP contour tree computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, Hamish A.; Weber, Gunther H.; Sewell, Christopher M.
As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed up in OpenMP and up to 50x speed up in NVIDIA Thrust.
Integrated semiconductor-magnetic random access memory system
NASA Technical Reports Server (NTRS)
Katti, Romney R. (Inventor); Blaes, Brent R. (Inventor)
2001-01-01
The present disclosure describes a non-volatile magnetic random access memory (RAM) system having a semiconductor control circuit and a magnetic array element. The integrated magnetic RAM system uses a CMOS control circuit to read and write data magnetoresistively. The system provides a fast-access, non-volatile, radiation-hard, high-density RAM for high speed computing.
Architectural Aspects of Grid Computing and its Global Prospects for E-Science Community
NASA Astrophysics Data System (ADS)
Ahmad, Mushtaq
2008-05-01
The paper reviews the architectural aspects of Grid Computing for the e-Science community, for scientific research and business/commercial collaboration beyond physical boundaries. Grid Computing provides all the needed facilities: hardware, software, communication interfaces, high-speed internet, safe authentication, and a secure environment for collaboration on research projects around the globe. It provides a very fast compute engine for scientific and engineering research projects and business/commercial applications that are heavily compute-intensive and/or require enormous amounts of data. It also makes possible the use of very advanced methodologies, simulation models, expert systems, and the treasure of knowledge available around the globe under the umbrella of knowledge sharing. Thus it helps realize the dream of a global village for the benefit of the e-Science community across the globe.
Atoche, Alejandro Castillo; Castillo, Javier Vázquez
2012-01-01
A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964
National research and education network
NASA Technical Reports Server (NTRS)
Villasenor, Tony
1991-01-01
Some goals of this network are as follows: extend U.S. technological leadership in high performance computing and computer communications; provide wide dissemination and application of the technologies, both to speed the pace of innovation and to serve the national economy, national security, education, and the global environment; and spur gains in U.S. productivity and industrial competitiveness by making high performance computing and networking technologies an integral part of the design and production process. Strategies for achieving these goals are as follows: support solutions to important scientific and technical challenges through a vigorous R and D effort; reduce the uncertainties to industry for R and D and use of this technology through increased cooperation between government, industry, and universities, and by the continued use of government and government-funded facilities as a prototype user for early commercial HPCC products; and support the underlying research, network, and computational infrastructures on which U.S. high performance computing technology is based.
NASA Astrophysics Data System (ADS)
Wimer, N. T.; Mackoweicki, A. S.; Poludnenko, A. Y.; Hoffman, C.; Daily, J. W.; Rieker, G. B.; Hamlington, P.
2017-12-01
Results are presented from a joint computational and experimental research effort focused on understanding and characterizing wildland fire spread at small scales (roughly 1 mm to 1 m) using direct numerical simulations (DNS) with chemical kinetics mechanisms that have been calibrated using data from high-speed laser diagnostics. The simulations are intended to directly resolve, with high physical accuracy, all small-scale fluid dynamic and chemical processes relevant to wildland fire spread. The high fidelity of the simulations is enabled by the calibration and validation of DNS sub-models using data from high-speed laser diagnostics. These diagnostics have the capability to measure temperature and chemical species concentrations, and are used here to characterize evaporation and pyrolysis processes in wildland fuels subjected to an external radiation source. The chemical kinetics code CHEMKIN-PRO is used to study and reduce complex reaction mechanisms for water removal, pyrolysis, and gas phase combustion during solid biomass burning. Simulations are then presented for a gaseous pool fire coupled with the resulting multi-step chemical reaction mechanisms, and the results are connected to the fundamental structure and spread of wildland fires. It is anticipated that the combined computational and experimental approach of this research effort will provide unprecedented access to information about chemical species, temperature, and turbulence during the entire pyrolysis, evaporation, ignition, and combustion process, thereby permitting more complete understanding of the physics that must be represented by coarse-scale numerical models of wildland fire spread.
Membrane-based actuation for high-speed single molecule force spectroscopy studies using AFM.
Sarangapani, Krishna; Torun, Hamdi; Finkler, Ofer; Zhu, Cheng; Degertekin, Levent
2010-07-01
Atomic force microscopy (AFM)-based dynamic force spectroscopy of single molecular interactions involves characterizing unbinding/unfolding force distributions over a range of pulling speeds. Owing to their size and stiffness, AFM cantilevers are adversely affected by hydrodynamic forces, especially at pulling speeds >10 μm/s, when the viscous drag becomes comparable to the unbinding/unfolding forces. To circumvent these adverse effects, we have fabricated polymer-based membranes capable of actuating commercial AFM cantilevers at speeds ≥100 μm/s with minimal viscous drag effects. We have used FLUENT, a computational fluid dynamics (CFD) software package, to simulate high-speed pulling and fast actuation of AFM cantilevers and membranes in different experimental configurations. The simulation results support the experimental findings on a variety of commercial AFM cantilevers and predict significant reduction in drag forces when membrane actuators are used. Unbinding force experiments involving human antibodies using these membranes demonstrate that it is possible to achieve bond loading rates ≥10^6 pN/s, an order of magnitude greater than that reported with commercial AFM cantilevers and systems.
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
2012-01-01
The design-point and off-design performance of an embedded 1.5-stage portion of a variable-speed power turbine (VSPT) was assessed using Reynolds-Averaged Navier-Stokes (RANS) analyses with mixing-planes and sector-periodic, unsteady RANS analyses. The VSPT provides one means by which to effect the nearly 50 percent main-rotor speed change required for the NASA Large Civil Tilt-Rotor (LCTR) application. The change in VSPT shaft speed during the LCTR mission results in blade-row incidence angle changes as high as 55°. Negative incidence levels of this magnitude at takeoff operation give rise to a vortical flow structure in the pressure-side cove of a high-turn rotor that transports low-momentum flow toward the casing endwall. The intent of the effort was to assess the impact of the unsteadiness of blade-row interaction on the time-mean flow and, specifically, to identify potential departures from the predicted trend of efficiency with shaft-speed change of the meanline and 3-D RANS/mixing-plane analyses used for design.
Wilson, J Adam; Williams, Justin C
2009-01-01
The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
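A CPU-side sketch of the first two stages of this chain is shown below: a spatial filter applied as a matrix-matrix multiply, followed by a per-channel power spectral density. Welch's method is substituted here for the paper's autoregressive estimator as a simplification, and the common-average-reference filter and 1 kHz sampling rate are illustrative choices.

    import numpy as np
    from scipy.signal import welch

    def process_block(raw, spatial_filter, fs=1000):
        """One pass of the BCI chain: spatial filtering (the matrix-matrix
        multiply offloaded to the GPU in the paper), then per-channel PSDs."""
        filtered = spatial_filter @ raw               # (out_channels, samples)
        f, psd = welch(filtered, fs=fs, nperseg=128, axis=1)
        return f, psd                                 # features for the classifier

    raw = np.random.randn(1000, 250)                  # 1000 channels, 250 ms at 1 kHz
    car = np.eye(1000) - 1.0 / 1000                   # common-average-reference filter
    f, psd = process_block(raw, car)

In the GPU version described above, the matrix multiply and the spectral estimation are precisely the steps that parallelize well across CUDA threads, which is where the ~35x improvement comes from.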
Implementation of High Speed Distributed Data Acquisition System
NASA Astrophysics Data System (ADS)
Raju, Anju P.; Sekhar, Ambika
2012-09-01
This paper introduces a high-speed distributed data acquisition system based on a field programmable gate array (FPGA). The aim is to develop a "distributed" data acquisition interface. The development of instruments such as personal computers and engineering workstations based on "standard" platforms is the motivation behind this effort. Using standard platforms as the controlling unit allows independence in hardware from a particular vendor and hardware platform. The distributed approach also has advantages from a functional point of view: acquisition resources become available to multiple instruments, and the acquisition front-end can be physically remote from the rest of the instrument. The high-speed data acquisition system transmits data to a remote computer system through an Ethernet interface. Data are acquired through 16 analog input channels; the inputs are multiplexed and digitized, and the data are then stored in a 1K buffer for each input channel. The main control unit in this design is a 16-bit processor implemented in the FPGA. This processor is used to set up and initialize the data source and the Ethernet controller, as well as to control the flow of data from the memory element to the NIC. Using this processor, the different configuration registers in the Ethernet controller can be initialized and controlled easily. The data packets are then sent to the remote PC through the Ethernet interface. The main advantages of using an FPGA as the standard platform are its flexibility, low power consumption, short design duration, fast time to market, programmability, and high density. The main advantages of the AX88796 Ethernet controller over others are its non-PCI interface, its embedded SRAM in which the transmit and reception buffers are located, and its high-performance SRAM-like interface. The paper describes the implementation of the distributed data acquisition system on the FPGA in VHDL. The main advantages of this system are high accuracy, high speed, and real-time monitoring.
Development of high-speed rolling-element bearings. A historical and technical perspective
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.
1982-01-01
Research on large-bore ball and roller bearings for aircraft engines is described. Tapered roller bearings and small-bore bearings are discussed. Temperature capabilities of rolling element bearings for aircraft engines have moved from 450 to 589 K (350 to 600 F) with increased reliability. High bearing speeds to 3 million DN can be achieved with a reliability exceeding that which was common in commercial aircraft. Capabilities of available bearing steels and lubricants were defined and established. Computer programs for the analysis and design of rolling element bearings were developed and experimentally verified. The reported work is a summary of NASA contributions to high performance engine and transmission bearing capabilities.
NASA Astrophysics Data System (ADS)
Britt, S.; Tsynkov, S.; Turkel, E.
2018-02-01
We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.
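A standard way such a scheme arises (shown here as a hedged illustration, not necessarily the authors' exact discretization) is a Numerov-type weighting in time, which is fourth-order accurate and leaves a modified Helmholtz operator acting on the new time level:

    \[
      \frac{u^{n+1} - 2u^{n} + u^{n-1}}{\Delta t^{2}}
        = \frac{c^{2}}{12}\,\Delta\left(u^{n+1} + 10\,u^{n} + u^{n-1}\right),
    \]
    \[
      \left(I - \frac{c^{2}\,\Delta t^{2}}{12}\,\Delta\right) u^{n+1}
        = 2u^{n} - u^{n-1}
        + \frac{c^{2}\,\Delta t^{2}}{12}\,\Delta\left(10\,u^{n} + u^{n-1}\right).
    \]

Each time step therefore reduces to solving a modified Helmholtz problem for u^{n+1}, which is the elliptic solve that the method of difference potentials handles with fourth-order accuracy on the nonconforming domain.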
NASA Astrophysics Data System (ADS)
Wen, Sy-Bor; Bhaskar, Arun; Zhang, Hongjie
2018-07-01
A scanning digital lithography system using a computer-controlled digital spatial light modulator, a spatial filter, an infinity-corrected optical microscope, and a high-precision translation stage is proposed and examined. By utilizing the spatial filter to limit the orders of diffraction modes for light delivered from the spatial light modulator, we are able to achieve diffraction-limited, deep-submicron spatial resolution with the scanning digital lithography system using standard one-inch optical components at reasonable prices. Raster scanning of this system using a high-speed, high-precision x-y translation stage and a piezo mount to adjust the focal position of the objective lens in real time allows large-area, submicron-resolved patterning at high speed (compared with e-beam lithography). It is determined in this study that, to achieve high-quality stitching of lithography patterns with raster scanning, a high-resolution rotation stage is required to ensure that the x and y directions of the projected pattern coincide with the x and y translation directions of the nanometer-precision x-y translation stage.
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high-speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments, and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, the different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency of up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes, with the number of elements ranging from 11,451 for the Barth4 mesh to 30,269 for the Barth5 mesh. Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application entails an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom, a consequence of the complexity of the various components of the airfoils, which requires fine-grain meshing for accuracy. Additional information is contained in the original.
Fluid flow dynamics in MAS systems.
Wilhelm, Dirk; Purea, Armin; Engelke, Frank
2015-08-01
The turbine system and the radial bearing of a high performance magic angle spinning (MAS) probe with a 1.3 mm rotor diameter have been analyzed for spinning rates up to 67 kHz. We focused mainly on the fluid flow properties of the MAS system; therefore, computational fluid dynamics (CFD) simulations and fluid measurements of the turbine and the radial bearings have been performed. CFD simulation and measurement results of the 1.3 mm MAS rotor system show relatively low efficiency (about 25%) compared to standard turbo machines outside the realm of MAS. However, MAS turbines are mainly optimized for speed and stability instead of efficiency. We have compared MAS systems with rotor diameters of 1.3-7 mm, converted to dimensionless values, with classical turbomachinery systems, showing that the operation parameters (rotor diameter, inlet mass flow, spinning rate) are in the favorable range. This dimensionless analysis also supports radial turbines for low-speed MAS probes and diagonal turbines for high-speed MAS probes. Consequently, a change from Pelton-type MAS turbines to diagonal turbines might be worth considering for high-speed applications. CFD simulations of the radial bearings have been compared with basic theoretical values, suggesting considerably smaller frictional loss values. The discrepancies might be due to the simple linear flow profile employed for the theoretical model. Frictional losses generated inside the radial bearings result in undesired heat-up of the rotor. The rotor surface temperature distribution computed by CFD simulations shows a large temperature gradient over the rotor.
2010-03-04
[Search-result fragment on superconducting qubits; recoverable text: the charge qubit omits the inductance; nanostructured NbN superconducting nanowire detectors have achieved high efficiency and photon-number resolution; superconducting qubit circuits resemble classical high-speed integrated circuits and can be readily fabricated using existing technologies.]
Koike-Akino, Toshiaki; Duan, Chunjie; Parsons, Kieran; Kojima, Keisuke; Yoshida, Tsuyoshi; Sugihara, Takashi; Mizuochi, Takashi
2012-07-02
Fiber nonlinearity has become a major limiting factor in realizing ultra-high-speed optical communications. We propose a fractionally-spaced equalizer which exploits trained high-order statistics to deal with data-pattern-dependent nonlinear impairments in fiber-optic communications. Computer simulation reveals that the proposed 3-tap equalizer improves the Q-factor by more than 2 dB for long-haul transmissions over a 5,230 km distance at a 40 Gbps data rate. We also demonstrate that the joint use of digital backpropagation (DBP) and the proposed equalizer offers an additional 1-2 dB performance improvement due to the channel-shortening gain. Performance in high-speed transmissions at 100 Gbps and beyond is evaluated as well.
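The structure of such an equalizer can be sketched with a conventional adaptation rule. The fragment below uses a 3-tap, 2-samples-per-symbol filter trained by LMS, which stands in for the paper's higher-order-statistics training; all parameters (step size, symbol rate, noise level) are illustrative.

    import numpy as np

    def lms_equalizer(rx, training, taps=3, mu=0.01, sps=2):
        """Fractionally spaced (sps samples/symbol) FIR equalizer adapted by
        LMS against a known training sequence."""
        rx = np.concatenate([np.asarray(rx, complex), np.zeros(taps, complex)])
        w = np.zeros(taps, dtype=complex)
        w[taps // 2] = 1.0                          # center-spike initialization
        out = np.empty(len(training), dtype=complex)
        for n in range(len(training)):
            x = rx[n * sps : n * sps + taps][::-1]  # newest sample first
            out[n] = w @ x                          # equalizer output
            err = training[n] - out[n]
            w += mu * err * np.conj(x)              # LMS tap update
        return w, out

    rng = np.random.default_rng(0)
    sym = rng.choice([-1.0, 1.0], 200).astype(complex)      # BPSK training symbols
    rx = np.repeat(sym, 2) + 0.05 * rng.standard_normal(400)
    w, out = lms_equalizer(rx, sym)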
Development of High-Speed Fluorescent X-Ray Micro-Computed Tomography
NASA Astrophysics Data System (ADS)
Takeda, T.; Tsuchiya, Y.; Kuroe, T.; Zeniya, T.; Wu, J.; Lwin, Thet-Thet; Yashiro, T.; Yuasa, T.; Hyodo, K.; Matsumura, K.; Dilmanian, F. A.; Itai, Y.; Akatsuka, T.
2004-05-01
A high-speed fluorescent x-ray CT (FXCT) system using monochromatic synchrotron x rays was developed to detect very low concentrations of medium-Z elements for biomedical use. The system is equipped with two types of high-purity germanium detectors, together with fast electronics and software. Preliminary images of a 10 mm diameter plastic phantom containing channels filled with iodine solutions of different concentrations showed a minimum detection level of 0.002 mg I/ml at an in-plane spatial resolution of 100 μm. Furthermore, the acquisition time was reduced by about half compared with the previous system. The results indicate that FXCT is a highly sensitive imaging modality capable of detecting very low concentrations of iodine, and that the method has potential in biomedical applications.
NASA Astrophysics Data System (ADS)
Tongchitpakdee, Chanin
With the advantage of modern high-speed computers, there has been increased interest in the use of first-principles based computational approaches for the aerodynamic modeling of horizontal axis wind turbines (HAWT). Since these approaches are based on the laws of conservation (mass, momentum, and energy), they can capture much of the physics in great detail. The ability to accurately predict the airloads and power output can greatly aid designers in tailoring the aerodynamic and aeroelastic features of the configuration. First-principles based analyses are also valuable for developing active means (e.g., circulation control) and passive means (e.g., Gurney flaps) of reducing unsteady blade loads, mitigating stall, and efficiently capturing wind energy, leading to more electrical power generation. In the present study, the aerodynamic performance of a wind turbine rotor equipped with circulation enhancement technology (trailing edge blowing or Gurney flaps) is investigated using a three-dimensional unsteady viscous flow analysis. The National Renewable Energy Laboratory (NREL) Phase VI horizontal axis wind turbine is chosen as the baseline configuration. Prior to its use in exploring these concepts, the flow solver is validated with the experimental data for the baseline case under yawed flow conditions. Results presented include radial distributions of normal and tangential forces, shaft torque, root flap moment, surface pressure distributions at selected radial locations, and power output. Good agreement is obtained for a range of wind speeds and yaw angles where the flow is attached. At high wind speeds, however, where the flow is fully separated, it was found that the fundamental assumptions behind the present methodology break down for the baseline turbulence model (Spalart-Allmaras model), giving less accurate results. With the implementation of an advanced turbulence model, the Spalart-Allmaras Detached Eddy Simulation (SA-DES), the accuracy of the results at high wind speeds is improved. Results for the circulation enhancement concepts show that, at low wind speed (attached flow) conditions, a Coanda jet at the trailing edge of the rotor blade is effective at increasing circulation, resulting in an increase of lift and the chordwise thrust force. This leads to increased net power generation compared to the baseline configuration for moderate blowing coefficients. The effects of jet slot height and pulsed jets are also investigated in this study. A passive Gurney flap was found to increase the bound circulation and produce increased power in a manner similar to the Coanda jet. At high wind speeds, where the flow is separated, both the Coanda jet and the Gurney flap become ineffective. Leading-edge blowing, by contrast, is found to be beneficial in increasing power generation at high wind speeds. The effect of Gurney flap angle is also studied; it has a significant influence on power generation, with higher power output obtained at higher flap angles.
NASA Technical Reports Server (NTRS)
Edwards, John W.; Malone, John B.
1992-01-01
The current status of computational methods for unsteady aerodynamics and aeroelasticity is reviewed. The key features of challenging aeroelastic applications are discussed in terms of the flowfield state: low-angle high speed flows and high-angle vortex-dominated flows. The critical role played by viscous effects in determining aeroelastic stability for conditions of incipient flow separation is stressed. The need for a variety of flow modeling tools, from linear formulations to implementations of the Navier-Stokes equations, is emphasized. Estimates of computer run times for flutter calculations using several computational methods are given. Applications of these methods for unsteady aerodynamic and transonic flutter calculations for airfoils, wings, and configurations are summarized. Finally, recommendations are made concerning future research directions.
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Fang, Jun; Kurbatskii, Konstantin A.
1996-01-01
A set of nonhomogeneous radiation and outflow conditions which automatically generate prescribed incoming acoustic or vorticity waves and, at the same time, are transparent to outgoing sound waves produced internally in a finite computation domain is proposed. This type of boundary condition is needed for the numerical solution of many exterior aeroacoustics problems. In computational aeroacoustics, the computation scheme must be as nondispersive and nondissipative as possible. It must also support waves with wave speeds which are nearly the same as those of the original linearized Euler equations. To meet these requirements, a high-order/large-stencil scheme is necessary. The proposed nonhomogeneous radiation and outflow boundary conditions are designed primarily for use in conjunction with such high-order/large-stencil finite difference schemes.
A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter
NASA Technical Reports Server (NTRS)
Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe
1996-01-01
Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Rose, W. C.
1973-01-01
The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.
Automated drafting system uses computer techniques
NASA Technical Reports Server (NTRS)
Millenson, D. H.
1966-01-01
The automated drafting system produces schematic and block diagrams from the design engineer's freehand sketches. The system codes conventional drafting symbols and their coordinate locations on standard size drawings for entry on tapes that are used to drive a high speed photocomposition machine.
Computational nanoelectronics towards: design, analysis, synthesis, and fundamental limits
NASA Technical Reports Server (NTRS)
Klimeck, G.
2003-01-01
This seminar will review the development of a comprehensive nanoelectronic modeling tool (NEMO 1-D and NEMO 3-D) and its application to high-speed electronics (resonant tunneling diodes) and IR detectors and lasers (quantum dots and 1-D heterostructures).
NASA Astrophysics Data System (ADS)
Gruber, Karin; Serafin, Stefano; Grubišić, Vanda; Dorninger, Manfred; Zauner, Rudolf; Fink, Martin
2014-05-01
A crucial step in planning new wind farms is the estimation of the amount of wind energy that can be harvested at possible target sites. Wind resource assessment traditionally entails the deployment of masts equipped for wind speed measurements at several heights for a reasonably long period of time. Simplified linear models of atmospheric flow are then used for a spatial extrapolation of point measurements to a wide area. While linear models have been successfully applied in wind resource assessment on plains and offshore, their reliability in complex terrain is generally poor. This represents a major limitation to wind resource assessment in Austria, where high-altitude locations are being considered for new plant sites, given the higher frequency of sustained winds at such sites. The limitations of linear models stem from two key assumptions in their formulation, neutral stratification and attached boundary-layer flow, both of which often break down in complex terrain. Consequently, an accurate modeling of near-surface flow over mountains requires the adoption of a NWP model with high horizontal and vertical resolution. This study explores the wind potential of a site in Styria in the North-Eastern Alps. The WRF model is used for simulations with a maximum horizontal resolution of 800 m. Three nested computational domains are defined, with the innermost one encompassing a stretch of the relatively broad Enns Valley, flanked by the main crest of the Alps in the south and the Nördliche Kalkalpen of similar height in the north. In addition to the simulation results, we use data from fourteen 10-m wind measurement sites (of which 7 are located within valleys and 5 near mountain tops) and from 2 masts with anemometers at several heights (at hillside locations) in an area of 1600 km² around the target site. The potential for wind energy production is assessed using the mean wind speed and turbulence intensity at hub height. The capacity factor is also evaluated, considering the frequency of wind speeds between cut-in and cut-out speed and of winds with a low vertical velocity component only. Wind turbines do not turn on at wind speeds below the cut-in speed, and they are disconnected from the generator at wind speeds above the cut-out speed or at inclination angles of the wind vector greater than 8°. All of these parameters were computed at each model grid point in the innermost domain in order to map their spatial variability. The results show that in complex terrain the annual mean wind speed at hub height is not sufficient to predict the capacity factor of a turbine: vertical wind speed and the frequency of horizontal wind speeds outside the cut-in to cut-out range contribute substantially to a reduction of the energy harvest, and locally high turbulence may considerably raise the building costs.
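The capacity-factor bookkeeping described here is straightforward to express in code. The sketch below uses a generic piecewise power curve (cubic ramp between cut-in and rated speed) and synthetic Weibull-distributed wind speeds, since the study's turbine parameters and measured series are not given; all numbers are illustrative assumptions.

    import numpy as np

    def capacity_factor(v, cut_in=3.0, rated=12.0, cut_out=25.0):
        """Capacity factor from a hub-height wind-speed time series:
        mean produced power divided by rated power."""
        v = np.asarray(v, float)
        p = np.zeros_like(v)                          # no output below cut-in
        ramp = (v >= cut_in) & (v < rated)
        p[ramp] = (v[ramp] ** 3 - cut_in ** 3) / (rated ** 3 - cut_in ** 3)
        p[(v >= rated) & (v <= cut_out)] = 1.0        # rated output; 0 above cut-out
        return p.mean()

    # One year of hourly wind speeds, Weibull shape 2, scale 8 m/s (assumed).
    speeds = 8.0 * np.random.default_rng(0).weibull(2.0, size=8760)
    print(f"capacity factor: {capacity_factor(speeds):.2f}")

A fuller version along the study's lines would also zero the output for time steps with a large vertical wind component or high turbulence intensity, which is exactly why the mean wind speed alone overestimates the harvest in complex terrain.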
High-Speed Computation of the Kleene Star in Max-Plus Algebraic System Using a Cell Broadband Engine
NASA Astrophysics Data System (ADS)
Goto, Hiroyuki
This research addresses a high-speed computation method for the Kleene star of the weighted adjacency matrix in a max-plus algebraic system. We focus on systems whose precedence constraints are represented by a directed acyclic graph and implement it on a Cell Broadband Engine™ (CBE) processor. Since the resulting matrix gives the longest travel times between two adjacent nodes, it is often utilized in scheduling problem solvers for a class of discrete event systems. This research, in particular, attempts to achieve a speedup by using two approaches: parallelization and SIMDization (Single Instruction, Multiple Data), both of which can be accomplished by a CBE processor. The former refers to a parallel computation using multiple cores, while the latter is a method whereby multiple elements are computed by a single instruction. Using the implementation on a Sony PlayStation 3™ equipped with a CBE processor, we found that the SIMDization is effective regardless of the system's size and the number of processor cores used. We also found that the scalability of using multiple cores is remarkable especially for systems with a large number of nodes. In a numerical experiment where the number of nodes is 2000, we achieved a speedup of 20 times compared with the method without the above techniques.
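In conventional (non-CBE) code, the computation in question follows directly from the definitions: the max-plus product replaces (+, ×) with (max, +), and the Kleene star accumulates matrix powers, yielding the longest travel times between nodes of a DAG. A plain numpy sketch, without the parallelization or SIMDization discussed above:

    import numpy as np

    NEG_INF = -np.inf   # the "zero" element of max-plus algebra

    def maxplus_mul(A, B):
        """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
        return np.max(A[:, :, None] + B[None, :, :], axis=1)

    def kleene_star(A):
        """A* = I (+) A (+) A^2 (+) ...; entry (i, j) is the longest path
        weight from i to j. Terminates at power n-1 for an n-node DAG."""
        n = A.shape[0]
        I = np.full((n, n), NEG_INF)
        np.fill_diagonal(I, 0.0)                 # max-plus identity matrix
        star, power = I.copy(), I.copy()
        for _ in range(n - 1):
            power = maxplus_mul(power, A)
            star = np.maximum(star, power)
        return star

    # Weighted adjacency matrix of a small DAG; weights are travel times.
    A = np.full((3, 3), NEG_INF)
    A[0, 1], A[1, 2], A[0, 2] = 2.0, 3.0, 4.0
    print(kleene_star(A))    # entry (0, 2) is 5.0, the longest route 0->1->2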
Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation
NASA Astrophysics Data System (ADS)
Zhou, XueFei
2018-04-01
With the development of computer technology, the applications of machine learning are more and more extensive, and machine learning is providing endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs). CNN is one of the most common algorithms in image recognition, and it is significant for every scholar interested in this field to understand its theory and structure. CNN is mainly used in computer identification, especially in voice, text recognition, and other aspects of application. It utilizes a hierarchical structure with different layers to accelerate computing speed. In addition, the greatest features of CNNs are weight sharing and dimension reduction, which consolidate the high effectiveness and efficiency of CNNs with good computing speed and error rate. With the help of other learning algorithms, CNNs can be used in several scenarios for machine learning, especially for deep learning. Following a general introduction to the background and the core solution, CNN, this paper focuses on summarizing how Gradient Descent and Backpropagation work, and how they contribute to the high performance of CNNs. Some practical applications are also discussed in the following parts. The last section presents the conclusion and some perspectives for future work.
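The mechanics of gradient descent and backpropagation summarized here fit in a short script. The sketch below trains a tiny fully connected network on a toy problem; it uses the same chain-rule machinery CNNs rely on, minus the convolutional layers and weight sharing, and every parameter choice is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # XOR-like labels

    W1, b1 = rng.standard_normal((2, 8)) * 0.5, np.zeros(8)
    W2, b2 = rng.standard_normal((8, 1)) * 0.5, np.zeros(1)
    lr = 0.5

    for step in range(2000):
        # Forward pass.
        h = np.tanh(X @ W1 + b1)                    # hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
        # Backward pass (chain rule); cross-entropy + sigmoid gives (p - y).
        dz2 = (p - y) / len(X)
        dW2, db2 = h.T @ dz2, dz2.sum(0)
        dh = dz2 @ W2.T
        dz1 = dh * (1.0 - h ** 2)                   # derivative of tanh
        dW1, db1 = X.T @ dz1, dz1.sum(0)
        # Gradient descent update.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("training accuracy:", ((p > 0.5) == y).mean())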
High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System
Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram
2014-01-01
We have developed a graphics processor unit (GPU-) based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of DOT programs are developed in this study: (1) conventional C language program augmented by GPU CUDA and CULA routines (C GPU), (2) MATLAB program supported by MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation time of the algorithm on host CPU and the GPU system is presented for C and Matlab implementations. The forward computation uses finite element method (FEM) and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time, so achieved for one iteration of the DOT reconstruction for 14610 elements, is 0.52 seconds for a C based GPU program for 2-plane measurements. The corresponding MATLAB based GPU program took 0.86 seconds. The maximum number of reconstructed frames so achieved is 2 frames per second. PMID:24891848
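The Broyden step mentioned in (1) avoids recomputing the Jacobian from scratch at every iteration. Its generic rank-one form, sketched below, is a textbook illustration rather than the paper's DOT-specific code:

    import numpy as np

    def broyden_update(J, dx, df):
        """Broyden's rank-one Jacobian update:
            J_new = J + ((df - J dx) dx^T) / (dx^T dx)
        so that J_new satisfies the secant condition J_new dx = df."""
        r = df - J @ dx
        return J + np.outer(r, dx) / (dx @ dx)

    # Toy use: track the Jacobian of f(x) = [x0^2, x0*x1] as x moves.
    f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
    x0, x1 = np.array([1.0, 2.0]), np.array([1.1, 1.9])
    J0 = np.array([[2.0, 0.0], [2.0, 1.0]])     # exact Jacobian at x0
    J1 = broyden_update(J0, x1 - x0, f(x1) - f(x0))

The appeal in an iterative reconstruction loop is that the update costs one matrix-vector product and one outer product, instead of a full forward-model differentiation.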
High-performance biocomputing for simulating the spread of contagion over large contact networks
2012-01-01
Background: Many important biological problems can be modeled as contagion diffusion processes over interaction networks. This article shows how the EpiSimdemics interaction-based simulation system can be applied to the general contagion diffusion problem. Two specific problems, computational epidemiology and human immune system modeling, are given as examples. We then show how the graphics processing unit (GPU) within each compute node of a cluster can effectively be used to speed up the execution of these types of problems. Results: We show that a single GPU can accelerate the EpiSimdemics computation kernel by a factor of 6 and the entire application by a factor of 3.3, compared to the execution time on a single core. When 8 CPU cores and 2 GPU devices are utilized, the speed-up of the computational kernel increases to 9.5. When combined with effective techniques for inter-node communication, excellent scalability can be achieved without significant loss of accuracy in the results. Conclusions: We show that interaction-based simulation systems can be used to model disparate and highly relevant problems in biology. We also show that offloading some of the work to GPUs in distributed interaction-based simulations can be an effective way to achieve increased intra-node efficiency. PMID:22537298
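The contagion-diffusion kernel that EpiSimdemics parallelizes can be sketched serially in a few lines. The following SIR-on-a-network example is a generic illustration with made-up parameters and a random contact network, not the EpiSimdemics model itself:

    import numpy as np

    def simulate_sir(adj, beta=0.05, gamma=0.2, seed_node=0, steps=100):
        """Discrete-time SIR contagion over a contact network given as an
        adjacency list; returns the infected count per time step."""
        rng = np.random.default_rng(0)
        n = len(adj)
        state = np.zeros(n, dtype=int)             # 0 = S, 1 = I, 2 = R
        state[seed_node] = 1
        history = []
        for _ in range(steps):
            infected = np.flatnonzero(state == 1)
            for i in infected:
                for j in adj[i]:                   # each contact may transmit
                    if state[j] == 0 and rng.random() < beta:
                        state[j] = 1
            # Previously infected nodes recover with probability gamma.
            state[infected[rng.random(len(infected)) < gamma]] = 2
            history.append((state == 1).sum())
        return history

    # Random contact network: 500 people, ~8 contacts each (illustrative).
    rng = np.random.default_rng(1)
    adj = [rng.integers(0, 500, 8).tolist() for _ in range(500)]
    print(max(simulate_sir(adj)))                  # peak number infected

The inner loop over infected nodes and their contacts is embarrassingly parallel per time step, which is why this class of simulation maps well onto GPUs.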
Investigations for Supersonic Transports at Transonic and Supersonic Conditions
NASA Technical Reports Server (NTRS)
Rivers, S. Melissa B.; Owens, Lewis R.; Wahls, Richard A.
2007-01-01
Several computational studies were conducted as part of NASA's High-Speed Research Program. Results of turbulence model comparisons from two studies on supersonic transport configurations performed during the program are given. The effects of grid topology and the representation of the actual wind tunnel model geometry are also investigated. Results are presented for both transonic conditions at Mach 0.90 and supersonic conditions at Mach 2.48. A feature of these two studies was the availability of higher Reynolds number wind tunnel data with which to compare the computational results. The transonic wind tunnel data was obtained in the National Transonic Facility at NASA Langley, and the supersonic data was obtained in the Boeing Polysonic Wind Tunnel. The computational data was acquired using a state-of-the-art Navier-Stokes flow solver with a wide range of turbulence models implemented. The results show that the computed forces compare reasonably well with the experimental data, with the Baldwin-Lomax model with Degani-Schiff modifications and the Baldwin-Barth model showing the best agreement at the transonic conditions and the Spalart-Allmaras model showing the best agreement at the supersonic conditions. The transonic results were more sensitive to the choice of turbulence model than were the supersonic results.
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events; (ii) the ability to simulate both fast and slow dynamics 1-3 hours in advance; (iii) rigorous protection-system modeling; (iv) intelligence for corrective action identification, storage, and fast retrieval; and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the algebraic-differential equations describing power system dynamics, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the PJM 13029-bus system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation), and the new stiffness detection method proposed herein captures the stiffness. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale, using the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). A further strategy partitions the whole simulation along the time axis through the simulated sequence of cascading events. Among the strategies proposed, partitioning by cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimal communication time is needed.
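The "Very Dishonest Newton" idea, reusing one LU factorization of the iteration matrix across many steps and Newton iterations, can be sketched for a simple ODE with the trapezoidal rule. This is a generic illustration, not the HSET-TDS implementation (which would use HH4 and SuperLU on the full power-system DAE); the refactorization interval and tolerances are arbitrary.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def trapezoidal_vdhn(f, jac, x0, h, n_steps, refactor_every=20, tol=1e-10):
        """Trapezoidal integration of x' = f(x) with a frozen-Jacobian
        (Very Dishonest) Newton solver."""
        x = np.array(x0, float)
        n = x.size
        out = [x.copy()]
        for step in range(n_steps):
            if step % refactor_every == 0:
                # Refactor the iteration matrix only occasionally.
                lu = lu_factor(np.eye(n) - 0.5 * h * jac(x))
            x_new = x + h * f(x)                   # explicit predictor
            for _ in range(50):                    # Newton with reused LU
                res = x_new - x - 0.5 * h * (f(x) + f(x_new))
                if np.linalg.norm(res) < tol:
                    break
                x_new -= lu_solve(lu, res)
            x = x_new
            out.append(x.copy())
        return np.array(out)

    # Damped oscillator x'' + 0.1 x' + x = 0 as a first-order system.
    A = np.array([[0.0, 1.0], [-1.0, -0.1]])
    traj = trapezoidal_vdhn(lambda x: A @ x, lambda x: A, [1.0, 0.0], 0.05, 400)

The trade the thesis exploits is clear even here: each skipped factorization saves an O(n^3) solve setup at the cost of a few extra cheap Newton iterations.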
Use of Transition Modeling to Enable the Computation of Losses for Variable-Speed Power Turbine
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2012-01-01
To investigate the penalties associated with using a variable-speed power turbine (VSPT) in a rotorcraft capable of vertical takeoff and landing, various analysis tools are required. Such tools must be able to model the flow accurately within the operating envelope of the VSPT. For power turbines, this envelope is characterized by low Reynolds numbers and a wide range of incidence angles, both positive and negative, owing to the variation in shaft speed at relatively fixed corrected flows. The flow in the turbine passage is expected to be transitional and separated at high incidence. The turbulence model of Walters and Leylek was implemented in the NASA Glenn-HT code to enable more accurate analysis of such flows. Two-dimensional heat transfer predictions of flat-plate flow, and two- and three-dimensional heat transfer predictions on a turbine blade, were performed and are reported herein. Heat transfer computations were performed because heat transfer is a good marker for transition. The final goal is to be able to compute the aerodynamic losses. Armed with the new transition model, total pressure losses for three-dimensional flow of an Energy Efficient Engine (E3) tip-section cascade were computed for a range of incidence angles, in anticipation of the experimental data. The results obtained form a loss bucket for the chosen blade.
Component-based model to predict aerodynamic noise from high-speed train pantographs
NASA Astrophysics Data System (ADS)
Latorre Iglesias, E.; Thompson, D. J.; Smith, M. G.
2017-04-01
At the typical speeds of modern high-speed trains, the aerodynamic noise produced by the airflow over the pantograph is a significant source of noise. Although numerical models can be used to predict it, they are still very computationally intensive. A semi-empirical component-based model is proposed for predicting the aerodynamic noise from train pantographs. The pantograph is approximated as an assembly of cylinders and bars with particular cross-sections. An empirical database is used to obtain the coefficients of the model to account for various factors: incident flow speed, diameter, cross-sectional shape, yaw angle, rounded edges, length-to-width ratio, incoming turbulence, and directivity. The overall noise from the pantograph is obtained as the incoherent sum of the predicted noise from the different pantograph struts. The model is validated using available wind tunnel noise measurements of two full-size pantographs. The results show the potential of the semi-empirical model as a rapid tool for predicting aerodynamic noise from train pantographs.
A Strassen-Newton algorithm for high-speed parallelizable matrix inversion
NASA Technical Reports Server (NTRS)
Bailey, David H.; Ferguson, Helaman R. P.
1988-01-01
Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
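As a sketch of the underlying iteration (not the authors' Cray-2 code), the following Python implements the matrix Newton step X ← X(2I − AX). The classical starting guess X₀ = Aᵀ/(‖A‖₁‖A‖∞) guarantees convergence, and each sweep costs exactly two matrix multiplications, which is where a Strassen multiply can be substituted for additional parallel speed.

    import numpy as np

    def newton_inverse(A, iters=50):
        # Newton iteration for A^{-1}: X <- X (2I - A X).
        # Convergent start: X0 = A^T / (||A||_1 * ||A||_inf).
        n = A.shape[0]
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        I2 = 2.0 * np.eye(n)
        for _ in range(iters):
            X = X @ (I2 - A @ X)   # two multiplies per sweep
        return X

The iteration converges quadratically once the residual is small, which is consistent with the abstract's trade-off between raw speed and numerical stability.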
Real-time computational photon-counting LiDAR
NASA Astrophysics Data System (ADS)
Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles
2018-03-01
The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.
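The direct time-of-flight arithmetic behind the millimeter-accuracy claim is simple enough to state inline; the snippet below (illustrative numbers, not from the paper) shows that resolving 1 mm in depth requires timing electronics that can resolve a round-trip difference of roughly 6.7 ps.

    C = 299_792_458.0  # speed of light, m/s

    def depth_from_tof(dt):
        # Direct time of flight: the pulse travels out and back.
        return C * dt / 2.0

    print(depth_from_tof(20e-9))   # 20 ns round trip -> ~3.0 m range
    print(2 * 1e-3 / C)            # 1 mm of depth -> ~6.67e-12 s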
NASA Astrophysics Data System (ADS)
Kao, Jui-Hsiang; Tseng, Po-Yuan
2018-01-01
The objective of this paper is to describe the application of CFD (computational fluid dynamics) technology to the matching of turbine blades and generator so as to increase the efficiency of a vertical-axis wind turbine (VAWT). A VAWT is treated as the study case here. The SST (shear-stress transport) k-ω turbulence model with the SIMPLE algorithm in transient state is applied to solve the T (torque)-N (r/min) curves of the turbine blades at different wind speeds. The T-N curves of the generator in different CV (constant-voltage) modes are measured. The T-N curves of the turbine blades at different wind speeds can then be matched against the T-N curves of the generator in different CV modes to find the optimal CV mode. Once the optimal CV mode is selected, the characteristics of the operating points, such as tip-speed ratio, revolutions per minute, blade torque, and efficiency, can be identified. The results show that, if the two systems are matched well, the final output power at a high wind speed of 9-10 m/s is increased by 15%.
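A minimal sketch of the matching step, assuming both torque-speed curves have been sampled on a common rpm grid (the function and variable names are hypothetical, not from the paper): the steady operating point is where the blade torque curve crosses the generator load curve, and the shaft power there is P = 2πNT/60.

    import numpy as np

    def operating_point(n_rpm, t_blade, t_gen):
        # Locate the sign change of the torque difference and
        # interpolate linearly; assumes the curves cross on the grid.
        d = t_blade - t_gen
        k = int(np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0])
        frac = d[k] / (d[k] - d[k + 1])
        n_op = n_rpm[k] + frac * (n_rpm[k + 1] - n_rpm[k])
        t_op = float(np.interp(n_op, n_rpm, t_blade))
        p_op = 2.0 * np.pi * n_op * t_op / 60.0   # shaft power, W
        return n_op, t_op, p_op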
Belle II grid computing: An overview of the distributed data management system.
NASA Astrophysics Data System (ADS)
Bansal, Vikas; Schram, Malachi; Belle Collaboration, II
2017-01-01
The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab⁻¹ of e⁺e⁻ collision data, about 50 times larger than the data set of the Belle experiment. The computing requirements of Belle II are comparable to those of a Run I LHC experiment. Computing at this scale requires efficient use of the compute grids in North America, Asia, and Europe, and will take advantage of upgrades to the high-speed global network. We present the architecture of data flow and data handling as part of the Belle II computing infrastructure.
A feasibility study of damage detection in beams using high-speed camera (Conference Presentation)
NASA Astrophysics Data System (ADS)
Wan, Chao; Yuan, Fuh-Gwo
2017-04-01
In this paper, a method for damage detection in beam structures using a high-speed camera is presented. Traditional methods of damage detection in structures typically involve contact sensors (e.g., piezoelectric sensors or accelerometers) or non-contact sensors (e.g., laser vibrometers), which can be costly and time-consuming when inspecting an entire structure. With the popularity of digital cameras and the development of computer vision technology, video cameras offer a viable measurement capability, including higher spatial resolution, remote sensing, and low cost. In this study, a damage detection method based on a high-speed camera is proposed. The setup comprises a high-speed camera and a line laser, which together capture the out-of-plane displacement of a cantilever beam. The cantilever beam, containing an artificial crack, was excited, and the vibration was recorded by the camera. A methodology called motion magnification, which can amplify subtle motions in a video, is used for modal identification of the beam. A finite element model was used to validate the proposed method. Applications of this methodology and challenges for future work are discussed.
Generation 1.5 High Speed Civil Transport (HSCT) Exhaust Nozzle Program
NASA Technical Reports Server (NTRS)
Thayer, E. B.; Gamble, E. J.; Guthrie, A. R.; Kehret, D. F.; Barber, T. J.; Hendricks, G. J.; Nagaraja, K. S.; Minardi, J. E.
2004-01-01
The objective of this program was to conduct an experimental and analytical evaluation of low-noise exhaust nozzles suitable for future High-Speed Civil Transport (HSCT) aircraft. The experimental portion of the program involved parametric subscale performance model tests of mixer/ejector nozzles in the takeoff mode, and high-speed tests of mixer/ejectors converted to two-dimensional convergent-divergent (2-D/C-D), plug, and single-expansion-ramp nozzles (SERN) in the cruise mode. Mixer/ejector results show measured static thrust coefficients at secondary-flow entrainment levels of 70 percent of primary flow. Results of the high-speed performance tests showed that relatively long, straight-wall C-D nozzles could meet the supersonic cruise thrust coefficient goal of 0.982, but the plug, ramp, and shorter C-D nozzles required isentropic contours to reach the same level of performance. The computational fluid dynamics (CFD) study accurately predicted mixer/ejector pressure distributions and shock locations. Heat transfer studies showed that a combination of insulation and convective cooling was more effective than film cooling for non-afterburning, low-noise nozzles. The thrust augmentation study indicated potential benefits from using ejector nozzles in the subsonic cruise mode if the ejector inlet contains a sonic throat plane.
Reducing the stochasticity of crystal nucleation to enable subnanosecond memory writing
NASA Astrophysics Data System (ADS)
Rao, Feng; Ding, Keyuan; Zhou, Yuxing; Zheng, Yonghui; Xia, Mengjiao; Lv, Shilong; Song, Zhitang; Feng, Songlin; Ronneberger, Ider; Mazzarello, Riccardo; Zhang, Wei; Ma, Evan
2017-12-01
Operation speed is a key challenge in phase-change random-access memory (PCRAM) technology, especially for achieving subnanosecond high-speed cache memory. Commercialized PCRAM products are limited by writing speeds of tens of nanoseconds, originating from the stochastic crystal nucleation during the crystallization of amorphous germanium antimony telluride (Ge2Sb2Te5). Here, we demonstrate an alloying strategy to speed up the crystallization kinetics. The scandium antimony telluride (Sc0.2Sb2Te3) compound that we designed allows a writing speed of only 700 picoseconds without preprogramming in a large conventional PCRAM device. This ultrafast crystallization stems from the reduced stochasticity of nucleation through geometrically matched and robust scandium telluride (ScTe) chemical bonds that stabilize crystal precursors in the amorphous state. Controlling nucleation through alloy design paves the way for the development of cache-type PCRAM technology to boost the working efficiency of computing systems.
The Need for Optical Means as an Alternative for Electronic Computing
NASA Technical Reports Server (NTRS)
Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Demand for faster computers is growing rapidly to keep pace with the rapid growth of the Internet, space communications, and the robotics industry. Unfortunately, very-large-scale integration technology is approaching fundamental limits beyond which devices become unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide a way out of the severe limitations imposed by conventional electronics on the speed and complexity of today's computations. This paper demonstrates two ultrafast all-optical logic gates and a high-density storage medium, which are essential components for building a future optical computer.
NASA Astrophysics Data System (ADS)
Hyun, Jae-Sang; Li, Beiwen; Zhang, Song
2017-07-01
This paper presents our research findings on high-speed, high-accuracy three-dimensional shape measurement using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using DLP projection devices: direct projection of computer-generated 8-bit sinusoidal patterns (the sinusoidal method) and the creation of sinusoidal patterns by defocusing binary patterns (the binary defocusing method). The paper mainly examines their performance in high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: a commercially available inexpensive projector and a DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method when a sufficient number of phase-shifted fringe patterns can be used.
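Both fringe generation methods ultimately feed the same N-step phase-shifting recovery; a compact Python sketch (my own illustration, not the authors' code) for patterns I_n = A + B·cos(φ + 2πn/N) is:

    import numpy as np

    def wrapped_phase(images):
        # Standard N-step phase shifting: images is an (N, H, W) stack
        # of fringe images I_n = A + B cos(phi + 2*pi*n/N), N >= 3.
        N = images.shape[0]
        n = np.arange(N).reshape(-1, 1, 1)
        s = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
        c = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
        return np.arctan2(-s, c)   # wrapped phase in (-pi, pi]

The recovered phase is wrapped; a separate unwrapping step converts it to absolute depth.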
Wing Leading Edge Concepts for Noise Reduction
NASA Technical Reports Server (NTRS)
Shmilovich, Arvin; Yadlin, Yoram; Pitera, David M.
2010-01-01
This study focuses on the development of wing leading edge concepts for noise reduction during high-lift operations, without compromising landing stall speeds, stall characteristics, or cruise performance. High-lift geometries, which can be obtained by conventional mechanical systems or morphing structures, have been considered. A systematic aerodynamic analysis procedure was used to arrive at several promising configurations. The aerodynamic design of new wing leading edge shapes is obtained from a robust computational fluid dynamics procedure. Acoustic benefits are qualitatively established through evaluation of the computed flow fields.
Engine speed control apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishii, M.; Miyazaki, M.; Nakamura, N.
1986-11-04
This patent describes an engine speed control apparatus. The system comprises an actuator for adjusting engine speed, a first unit for computing a desired engine speed, a second unit for detecting the actual engine speed, and a third unit for detecting the difference between the outputs of the first and second units. The system also includes a fourth unit for computing a control pulse width for the actuator in accordance with the output of the third unit, a fifth unit for generating a control signal, a sixth unit for driving the actuator in response to the output of the fifth unit, and a seventh unit for computing an optimal halt time to interrupt the driving of the actuator. The actuator is driven intermittently in conformity with the control pulse width and the halt time.
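Paraphrased as a Python sketch (the proportional gain and halt time are invented placeholders, not values from the patent), one control cycle of the described apparatus might look like:

    def control_cycle(n_desired, n_actual, k_p=0.5, halt_ms=20.0):
        # Units 1-3: desired speed, actual speed, and their difference.
        error = n_desired - n_actual
        # Unit 4: control pulse width derived from the speed difference
        # (a simple proportional law is assumed here).
        pulse_ms = k_p * abs(error)
        direction = +1 if error > 0 else -1
        # Unit 7: halt time between pulses, so the actuator is driven
        # intermittently rather than continuously.
        return direction, pulse_ms, halt_ms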
Direct Measurement of the Speed of Sound Using a Microphone and a Speaker
ERIC Educational Resources Information Center
Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.
2014-01-01
We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
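The analysis reduces to a straight-line fit of distance against time of flight; with illustrative numbers (not the authors' data), the slope recovers the expected ~343 m/s at room temperature:

    import numpy as np

    d = np.array([0.4, 0.6, 0.8, 1.0, 1.2])                # meters
    t = np.array([1.17, 1.75, 2.33, 2.92, 3.50]) * 1e-3    # seconds
    speed, intercept = np.polyfit(t, d, 1)                 # slope = c
    print(f"speed of sound ~ {speed:.0f} m/s")             # ~343 m/s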
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-22
Notice of intent to cancel Technical Standard Order (TSO)-C68a, Airborne Automatic Dead Reckoning Computer Equipment Utilizing Aircraft Heading and Doppler Ground Speed and Drift Angle.
NASA Astrophysics Data System (ADS)
Anderson, Kevin; Lin, Jun T.; Wong, Alexander J.
2017-11-01
Research findings from an experimental and numerical investigation of windage losses in the small annular air-gap region between the stator and rotor of a high-speed electric motor are presented herein. The experimental setup is used to measure the windage losses in the motor empirically by measuring torque and rotational speed. The motor rotor spins at roughly 30,000 rpm, and the rotor sets up windage losses on the order of 100 W. Axial air flow of 200 L/min is used to cool the motor, setting up a pseudo Taylor-Couette-Poiseuille type of flow. Details of the experimental test apparatus, instrumentation, and data acquisition are given. Experimental data for spin-down (both actively and passively cooled) and for calibration of bearing windage losses are discussed. A computational fluid dynamics (CFD) model is developed and used to predict the torque-speed curve and windage losses in the motor. The CFD model is correlated with the experimental data and is also used to predict the formation of Taylor-Couette cells in the small-gap region of the high-speed motor. Results for windage losses, spin-down time constant, bearing losses, and motor torque versus cooling-air mass flow rate and rotational speed are presented in this study.
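The headline numbers are consistent with the basic loss relation P = τω; a quick consistency check (my arithmetic, using only the reported orders of magnitude) gives the implied drag torque on the rotor:

    import numpy as np

    def windage_power(torque_nm, rpm):
        # Windage loss P = tau * omega, with omega in rad/s.
        return torque_nm * rpm * 2.0 * np.pi / 60.0

    # ~100 W of windage at ~30,000 rpm implies ~32 mN*m of drag torque:
    tau = 100.0 / (30000.0 * 2.0 * np.pi / 60.0)
    print(f"implied drag torque ~ {tau * 1e3:.0f} mN*m")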
Large-scale Advanced Prop-fan (LAP) high speed wind tunnel test report
NASA Technical Reports Server (NTRS)
Campbell, William A.; Wainauski, Harold S.; Arseneaux, Peter J.
1988-01-01
High-speed wind tunnel testing of the SR-7L Large-Scale Advanced Prop-Fan (LAP) is reported. The LAP is a 2.74 m (9.0 ft) diameter, 8-bladed tractor-type propfan rated at 4475 kW (6000 shp) at 1698 rpm. It was designed and built by Hamilton Standard under contract to the NASA Lewis Research Center. The LAP employs thin swept blades to provide efficient propulsion at flight speeds up to Mach 0.85. Testing was conducted in the ONERA S1-MA atmospheric wind tunnel in Modane, France. The test objectives were to confirm that the LAP is free from high-speed classical flutter, determine the structural and aerodynamic response to angular inflow, measure blade surface pressures (static and dynamic), and evaluate aerodynamic performance at various blade angles, rotational speeds, and Mach numbers. The measured structural and aerodynamic performance of the LAP correlated well with analytical predictions, providing confidence in the computer prediction codes used for the design. There were no signs of classical flutter throughout all phases of the test, up to and including the maximum Mach number of 0.84. Steady and unsteady blade surface pressures were successfully measured for a wide range of Mach numbers, inflow angles, rotational speeds, and blade angles. No barriers were discovered that would prevent proceeding with the PTA (Prop-Fan Test Assessment) flight test program scheduled for early 1987.
Sturm, Alexandra; Rozenman, Michelle; Piacentini, John C; McGough, James J; Loo, Sandra K; McCracken, James T
2018-03-20
Predictors of math achievement in attention-deficit/hyperactivity disorder (ADHD) are not well known. To address this gap in the literature, we examined how individual differences in neurocognitive functioning domains predict math computation in a cross-sectional sample of youth with ADHD. Gender and anxiety symptoms were explored as potential moderators. The sample consisted of 281 youth (aged 8-15 years) diagnosed with ADHD. Neurocognitive tasks assessed auditory-verbal working memory, visuospatial working memory, and processing speed. Auditory-verbal working memory significantly predicted math computation. A three-way interaction revealed that, at low levels of anxious perfectionism, slower processing speed predicted poorer math computation for boys compared with girls. These findings indicate the uniquely predictive value of auditory-verbal working memory and processing speed for math computation, and their differential moderation. They provide preliminary support for the idea that gender and anxious perfectionism may influence the relationship between neurocognitive functioning and academic achievement.
NASA Technical Reports Server (NTRS)
Harrison, D. C.; Sandler, H.; Miller, H. A.
1975-01-01
The present collection of papers outlines advances in ultrasonography, scintigraphy, and the commercialization of medical technology as applied to cardiovascular diagnosis in research and clinical practice. Particular attention is given to instrumentation, image processing, and display. As necessary concomitants of mathematical analysis, recently improved magnetic recording methods using tape or disks, together with large-capacity high-speed computers, are coming into use. Major topics include Doppler ultrasonic techniques, high-speed cineradiography, three-dimensional imaging of the myocardium with isotopes, sector-scanning echocardiography, and commercialization of the echocardioscope. Individual items are announced in this issue.
True logarithmic amplification of frequency clock in SS-OCT for calibration
Liu, Bin; Azimi, Ehsan; Brezinski, Mark E.
2011-01-01
In swept-source optical coherence tomography (SS-OCT), imprecise signal calibration prevents optimal imaging of biological tissues such as the coronary artery. This work demonstrates an approach that uses a true logarithmic amplifier to precondition the clock signal, minimizing noise and phase errors for optimal calibration. The method was validated and tested with a high-speed SS-OCT system. The experimental results demonstrate its superior ability to optimize calibration and improve imaging performance. In particular, this hardware-based approach is suitable for real-time calibration in a high-speed system where computation time is constrained. PMID:21698036
Tokuhisa, Atsushi; Arai, Junya; Joti, Yasumasa; Ohno, Yoshiyuki; Kameyama, Toyohisa; Yamamoto, Keiji; Hatanaka, Masayuki; Gerofi, Balazs; Shimada, Akio; Kurokawa, Motoyoshi; Shoji, Fumiyoshi; Okada, Kensuke; Sugimoto, Takashi; Yamaga, Mitsuhiro; Tanaka, Ryotaro; Yokokawa, Mitsuo; Hori, Atsushi; Ishikawa, Yutaka; Hatsui, Takaki; Go, Nobuhiro
2013-11-01
Single-particle coherent X-ray diffraction imaging using an X-ray free-electron laser has the potential to reveal the three-dimensional structure of a biological supra-molecule at sub-nanometer resolution. In order to realize this method, it is necessary to analyze as many as 1 × 10⁶ noisy X-ray diffraction patterns, each for an unknown random target orientation. To cope with the severe quantum noise, patterns need to be classified according to their similarities, and similar patterns averaged to improve the signal-to-noise ratio. A high-speed scalable scheme has been developed to carry out this classification on the K computer, a 10 PFLOPS supercomputer at the RIKEN Advanced Institute for Computational Science. It is designed to work on a real-time basis alongside the experimental diffraction pattern collection at the X-ray free-electron laser facility SACLA, so that the classification results can be fed back to optimize experimental parameters during the experiment. The present status of our effort developing the system, and a result of its application to a set of simulated diffraction patterns, are reported. About 1 × 10⁶ diffraction patterns were successfully classified by running 255 separate 1 h jobs in 385-node mode.
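As a much-simplified stand-in for the scalable scheme described (the real K computer implementation is parallel and far more careful), the following Python shows the shape of similarity-based classification and averaging by normalized correlation; the threshold is an invented placeholder:

    import numpy as np

    def classify_and_average(patterns, threshold=0.9):
        # Greedy single-pass classification: each flattened pattern
        # joins the first class whose accumulated template it matches
        # by normalized correlation; class averages raise the SNR.
        sums, counts = [], []
        for p in patterns:
            v = p.ravel() / np.linalg.norm(p)
            for i, s in enumerate(sums):
                if np.dot(v, s / np.linalg.norm(s)) > threshold:
                    sums[i] += v
                    counts[i] += 1
                    break
            else:
                sums.append(v.copy())
                counts.append(1)
        return [s / c for s, c in zip(sums, counts)]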
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned into a "camera head" containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.
NASA Astrophysics Data System (ADS)
Limmer, Steffen; Fey, Dietmar
2013-07-01
Thin-film computations are often a time-consuming task in optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. Significant speed-ups can be achieved, and we investigate the circumstances under which the best speed-up values can be expected. To that end, we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up, and the runtime behavior depending on the input data, are examined.
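The kernel being accelerated is, in essence, the characteristic-matrix recurrence of thin-film optics, which parallelizes naturally over wavelengths and thickness variants; a CPU reference version in Python (normal incidence, non-absorbing entrance medium; my own sketch, not the authors' code) is:

    import numpy as np

    def reflectance(n_layers, d_layers, n_in, n_sub, wavelength):
        # Characteristic-matrix method: multiply one 2x2 matrix per
        # layer, then form the amplitude reflectance from (B, C).
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / wavelength  # phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        r = (n_in * B - C) / (n_in * B + C)
        return abs(r) ** 2

    # Quarter-wave MgF2 (n = 1.38) on glass at 550 nm: R drops to ~1.3%
    print(reflectance([1.38], [550.0 / (4 * 1.38)], 1.0, 1.52, 550.0))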
Multiple running speed signals in medial entorhinal cortex
Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.
2016-01-01
Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460
Electro-Optic Data Acquisition and Processing.
Methods for the analysis of electro-optic relaxation data are discussed. Emphasis is on numerical methods using high-speed computers. A data acquisition system using a minicomputer for data manipulation is described. The relationship of the results obtained here to other possible uses is given. (Author)
Data Handling and Communication
NASA Astrophysics Data System (ADS)
Hemmer, Frédéric; Innocenti, Pier Giorgio
The following sections are included: * Introduction * Computing Clusters and Data Storage: The New Factory and Warehouse * Local Area Networks: Organizing Interconnection * High-Speed Worldwide Networking: Accelerating Protocols * Detector Simulation: Events Before the Event * Data Analysis and Programming Environment: Distilling Information * World Wide Web: Global Networking * References
The detectability of supernovae against elliptical galactic disks.
NASA Astrophysics Data System (ADS)
Pearce, E. C.
A 75 cm telescope has been automated with a Prime 300 minicomputer to search approximately 250 galaxies per hour for young supernovae. The high-speed star-location and comparison algorithms used in the Digitized Astronomy Supernova Search (DASS) system are described.
Transistor analogs of emergent iono-neuronal dynamics.
Rachmuth, Guy; Poon, Chi-Sang
2008-06-01
Neuromorphic analog metal-oxide-silicon (MOS) transistor circuits promise compact, low-power, and high-speed emulations of iono-neuronal dynamics, orders of magnitude faster than digital simulation. However, their inherently limited input voltage dynamic range, and the trade-offs against power consumption and silicon die area, make them highly sensitive to transistor mismatch arising from fabrication inaccuracy, device noise, and other nonidealities. This limitation precludes robust analog very-large-scale integration (aVLSI) implementation of emergent iono-neuronal computations beyond simple spiking with limited ion-channel dynamics. Here we present versatile neuromorphic analog building-block circuits that afford near-maximum voltage dynamic range while operating in the low-power MOS weak-inversion regime, which is ideal for aVLSI implementation and implantable biomimetic device applications. The fabricated microchip allowed robust realization of dynamic iono-neuronal computations such as coincidence detection of presynaptic spikes or of pre- and postsynaptic activities. As a critical performance benchmark, the high-speed and highly interactive on-chip iono-neuronal simulation capability enabled our prompt discovery of a minimal model of chaotic pacemaker bursting, an emergent iono-neuronal behavior of fundamental biological significance that has hitherto defied experimental testing and computational exploration via conventional digital or analog simulations. These compact and power-efficient transistor analogs of emergent iono-neuronal dynamics open new avenues for next-generation neuromorphic, neuroprosthetic, and brain-machine interface applications.