Science.gov

Sample records for adaptable high-performance on-board

  1. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine and typically operates efficiently up to a different level of scaling on each machine. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high-end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO method used by each grouping of data in an application, so that end users can use the IO methods that exhibit the best performance given both their IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate, on petascale machines, alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, such as that provided by MPI-IO or POSIX IO.
Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
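    The per-group method selection described in (1) can be sketched as a simple dispatch table; the method names and registry below are illustrative assumptions, not the real ADIOS API:

```python
# Hypothetical sketch of per-group IO method selection, loosely mimicking
# the ADIOS idea of binding each data grouping to its own transport method.
# The function names and the registry below are illustrative, not real ADIOS calls.

def write_posix(group, data):
    return f"POSIX wrote {len(data)} vars for '{group}'"

def write_mpi_collective(group, data):
    return f"MPI collective wrote {len(data)} vars for '{group}'"

# Each data group is mapped to the method that performed best on this machine.
IO_METHODS = {
    "restart":     write_posix,            # independent IO often wins at scale
    "diagnostics": write_mpi_collective,   # small, shared output
}

def write_group(group, data):
    """Dispatch a data group to its configured IO method."""
    return IO_METHODS[group](group, data)

result = write_group("restart", {"density": [1.0], "energy": [2.0]})
```

    Swapping the best-performing method for a group then changes one registry entry, not the application code.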

  2. Adaptive approach for on-board impedance parameters and voltage estimation of lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Farmann, Alexander; Waag, Wladislaw; Sauer, Dirk Uwe

    2015-12-01

    Robust algorithms using reduced-order equivalent circuit models (ECMs) for accurate and reliable estimation of battery states are becoming more popular in various applications. In this study, a novel adaptive, self-learning heuristic algorithm for on-board impedance parameter and voltage estimation of lithium-ion batteries (LIBs) in electric vehicles is introduced. The presented approach is verified using LIBs of different chemistries (NMC/C, NMC/LTO, LFP/C) at different aging states. An impedance-based reduced-order ECM incorporating an ohmic resistance and a combination of a constant phase element and a resistance (a so-called ZARC element) is employed; existing algorithms in vehicles are much more limited in the complexity of the ECMs. The algorithm is validated using seven days of real vehicle data with high temperature variation, including very low temperatures (from -20 °C to +30 °C), at different Depths-of-Discharge (DoDs). Two possibilities for approximating both ZARC elements on-board with a finite number of RC elements are shown, and the results of the voltage estimation are compared. Moreover, the current dependence of the charge-transfer resistance is considered by employing the Butler-Volmer equation. The achieved results indicate that both models yield almost the same grade of accuracy.
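    The current dependence of the charge-transfer resistance mentioned above can be illustrated with the symmetric Butler-Volmer relation; the exchange current and temperature below are assumed demonstration values, not parameters from the paper:

```python
import math

# Illustrative sketch of the current dependence of the charge-transfer
# resistance via the Butler-Volmer equation (symmetric case, alpha = 0.5),
# where the overpotential is eta = (2RT/F) * asinh(I / (2*i0)).
# The exchange current i0 and temperature are assumed values for demonstration.

R_GAS = 8.314      # J/(mol K)
FARADAY = 96485.0  # C/mol

def charge_transfer_resistance(i_amp, i0=0.5, temp_k=298.15):
    """Differential R_ct = d(eta)/dI for the symmetric Butler-Volmer law."""
    k = 2.0 * R_GAS * temp_k / FARADAY
    return k / math.sqrt(i_amp**2 + (2.0 * i0) ** 2)

# At small currents R_ct approaches its linear (ohmic) limit k/(2*i0);
# at large currents it drops, as Butler-Volmer kinetics predict.
r_small = charge_transfer_resistance(0.01)
r_large = charge_transfer_resistance(50.0)
```

    This is why a fixed charge-transfer resistance underestimates the voltage response at low currents and overestimates it under high load.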

  3. On-board multispectral classification study. Volume 2: Supplementary tasks. [adaptive control

    NASA Technical Reports Server (NTRS)

    Ewalt, D.

    1979-01-01

    The operational tasks of the onboard multispectral classification study were defined. These tasks include: sensing characteristics for future space applications; information adaptive systems architectural approaches; data set selection criteria; and onboard functional requirements for interfacing with global positioning satellites.

  4. Intelligent adaptive nonlinear flight control for a high performance aircraft with neural networks.

    PubMed

    Savran, Aydogan; Tasaltin, Ramazan; Becerikli, Yasar

    2006-04-01

    This paper describes the development of a neural network (NN) based adaptive flight control system for a high performance aircraft. The main contribution of this work is that the proposed control system is able to compensate for system uncertainties, adapt to changes in flight conditions, and accommodate system failures. The underlying study can be considered in two phases. The objective of the first phase is to model the dynamic behavior of a nonlinear F-16 model using NNs. Therefore, an NN-based adaptive identification model is developed for the three angular rates of the aircraft. An on-line training procedure is developed to adapt to changes in the system dynamics and improve the identification accuracy. In this procedure, a first-in first-out stack is used to store a certain history of the input-output data. The training is performed over the whole data in the stack at every stage. To speed up the convergence rate and enhance the accuracy of the on-line learning, the Levenberg-Marquardt optimization method with a trust-region approach is adapted to train the NNs. The objective of the second phase is to develop intelligent flight controllers. An NN-based adaptive PID control scheme composed of an emulator NN, an estimator NN, and a discrete-time PID controller is developed. The emulator NN is used to calculate the system Jacobian required to train the estimator NN. The estimator NN, which is trained on-line by propagating the output error through the emulator, is used to adjust the PID gains. The NN-based adaptive PID control system is applied to control the three angular rates of the nonlinear F-16 model. The body-axis pitch, roll, and yaw rates are fed back via the PID controllers to the elevator, aileron, and rudder actuators, respectively. The resulting control system has learning, adaptation, and fault-tolerant abilities.
It avoids the storage and interpolation requirements for the large number of controller parameters of a typical flight control
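    The first-in first-out data stack described in the abstract can be sketched in a few lines; the class name and depth are illustrative assumptions:

```python
from collections import deque

# Minimal sketch (assumed names) of the first-in first-out data stack used
# for on-line identification: a bounded history of input-output pairs over
# which the network is retrained at every stage.

class FifoTrainingStack:
    def __init__(self, depth):
        self.buf = deque(maxlen=depth)  # oldest sample drops out automatically

    def push(self, x, y):
        self.buf.append((x, y))

    def batch(self):
        """Whole stored history, used for one training pass."""
        return list(self.buf)

stack = FifoTrainingStack(depth=3)
for t in range(5):                 # stream 5 samples into a depth-3 stack
    stack.push(t, 2 * t)
history = stack.batch()            # keeps only the 3 most recent pairs
```

    Retraining over this bounded window is what lets the identifier track slowly changing dynamics without unbounded memory growth.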

  6. An experimental study of concurrent methods for adaptively controlling vertical tail buffet in high performance aircraft

    NASA Astrophysics Data System (ADS)

    Roberts, Patrick J.

    High performance twin-tail aircraft, like the F-15 and F/A-18, encounter a condition known as tail buffet. At high angles of attack, vortices are generated at the wing-fuselage interface (shoulder) or other leading-edge extensions. These vortices are directed toward the twin vertical tails. When the flow interacts with a vertical tail, it creates pressure variations that can oscillate the vertical tail assembly. The resulting fatigue cracks in the vertical tail assembly decrease fatigue life and increase maintenance costs. Recently, an offset piezoceramic stack actuator was used on an F-15 wind tunnel model to control buffet-induced vibrations at high angles of attack; the controller was based on acceleration feedback control methods. In this thesis a procedure for designing the offset piezoceramic stack actuators is developed. This design procedure includes determining the quantity and type of piezoceramic stacks used in these actuators. The changes in stresses in the vertical tail caused by these actuators during active control are investigated. In many cases, linear controllers are very effective in reducing vibrations. However, during flight, the natural frequencies of the vertical tail structural system change as the airspeed increases. This, in turn, reduces the effectiveness of a linear controller. Other causes, such as unmodeled dynamics and nonlinear effects due to debonds, also reduce the effectiveness of linear controllers. In this thesis, an adaptive neural network is used to augment the linear controller to correct these effects.

  7. High-Performance Mars Ascent Propulsion Technologies with Adaptability to ISRU and Human Exploration

    NASA Astrophysics Data System (ADS)

    Trinidad, M. A.; Calvignac, J. G.; Lo, A. S.

    2012-06-01

    Northrop Grumman's novel, high-performance Mars ascent propulsion system, which maximizes mission flexibility and provides a propulsion technology path to in-situ resource utilization (ISRU) as well as to ascent vehicles for human exploration, is summarized.

  8. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    NASA Technical Reports Server (NTRS)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and the global water and energy cycles.

  9. High-Performance Reactive Fluid Flow Simulations Using Adaptive Mesh Refinement on Thousands of Processors

    NASA Astrophysics Data System (ADS)

    Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.

    We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.
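    The block-refinement idea behind a PARAMESH-style adaptive mesh can be sketched as a per-block criterion; the threshold and data below are toy values, not FLASH's actual refinement test:

```python
# Toy sketch of a block-refinement criterion for a block-structured adaptive
# mesh: refine only blocks whose local cell-to-cell jump (a proxy for a steep
# front, e.g. a detonation) exceeds a threshold. Data and threshold are
# illustrative assumptions, not FLASH's refinement logic.

def refine_flags(block_values, threshold):
    """Flag each block for refinement if its max cell-to-cell jump is large."""
    flags = []
    for block in block_values:
        jump = max(abs(b - a) for a, b in zip(block, block[1:]))
        flags.append(jump > threshold)
    return flags

# A front sits in the second block; only that block gets refined.
blocks = [[1.0, 1.0, 1.1], [1.1, 5.0, 9.0], [9.0, 9.1, 9.1]]
flags = refine_flags(blocks, threshold=1.0)
```

    Placing resolution only where such a criterion fires is what keeps the cell count (and hence the flop count) tractable on thousands of processors.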

  10. Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas

    2001-01-01

    This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next-generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.
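    A minimal sketch of a control-allocation step in this spirit maps a moment command to effector commands through an availability-weighted pseudo-inverse; the effectiveness matrix and failure case are assumed illustrative values, not the paper's aircraft model:

```python
import numpy as np

# Sketch of control allocation: solve for effector commands u such that
# B @ u matches the commanded moments, with an availability weight that
# zeroes out a failed actuator. B and the weights are illustrative values,
# not the Lockheed-Martin ICE aircraft model.

def allocate(B, moment_cmd, available):
    """Least-squares allocation using only available effectors."""
    W = np.diag(available.astype(float))   # 0 removes a failed effector
    Bw = B @ W
    return W @ np.linalg.pinv(Bw) @ moment_cmd

B = np.array([[1.0, 0.5, 0.2],    # pitch effectiveness of 3 effectors
              [0.0, 1.0, 0.8]])   # roll effectiveness
cmd = np.array([1.0, 0.5])

u_all = allocate(B, cmd, available=np.array([1, 1, 1]))
u_fail = allocate(B, cmd, available=np.array([1, 0, 1]))  # effector 2 failed
```

    After the failure, the remaining two effectors are re-tasked so the commanded moments are still met, which is the reconfiguration behavior described above.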

  11. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels sampled at a pace of 10 MVoxels/sec with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
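    The core idea of adaptive filtering, smoothing strongly in flat regions while preserving features, can be sketched on a toy volume; the neighbourhood, blending rule, and noise variance are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Toy sketch of 3D adaptive filtering: blend each voxel with its 6-neighbour
# mean using a weight driven by local variance, so flat (noise-dominated)
# regions are smoothed heavily while strong features survive. The kernel and
# blending rule are illustrative assumptions, not the paper's algorithm.

def adaptive_smooth(vol, noise_var=1.0):
    """Variance-weighted blend of each voxel with its 6-neighbour mean."""
    padded = np.pad(vol, 1, mode="edge")
    neigh = sum(np.roll(padded, s, axis=a)
                for a in range(3) for s in (-1, 1))[1:-1, 1:-1, 1:-1] / 6.0
    local_var = (vol - neigh) ** 2
    w = noise_var / (noise_var + local_var)   # w -> 1 in flat areas
    return w * neigh + (1.0 - w) * vol

vol = np.zeros((4, 4, 4))
vol[2, 2, 2] = 10.0                 # a strong feature should survive filtering
out = adaptive_smooth(vol)
```

    Every voxel requires a neighbourhood read plus a variance estimate, which is why the full-rate 3D version is demanding enough to motivate a multicore DSP.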

  12. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. An especially promising application for modern supercomputing architectures is numerical weather prediction (NWP). Adapting traditional NWP codes to new machines based on multi- and many-core processors, such as GPUs, increases computational efficiency and decreases energy consumption. This offers a unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it enables extending the range of scales represented in the model, so that the accuracy of representation of the simulated atmospheric processes, and consequently the quality of weather forecasts, can be improved. A coalition of Polish scientific institutions launched a project aimed at adapting the EULAG fluid solver to future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of the COSMO Consortium weather prediction framework. The solver code combines features of stencil and pointwise computations, and its communication scheme consists of both halo-exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely the MPDATA advection scheme and the iterative GCR elliptic solver, are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning cores into independent teams, cache reuse, and vectorization. Experiments with matching the computational-domain topology to the cluster topology are performed as well. The parallel formulation was extended from pure MPI to a hybrid MPI-OpenMP approach. Porting to GPU using CUDA directives is in progress.
Preliminary results of performance of the
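    The halo-exchange communication pattern mentioned above can be sketched with two in-process subdomains; no real MPI is used, and the 3-point average is a stand-in stencil, not MPDATA itself:

```python
import numpy as np

# Illustrative sketch of halo exchange: each subdomain keeps one ghost cell
# per side, filled from its neighbour before a stencil sweep. Ranks are
# simulated in-process; no real MPI is used, and the 3-point average below
# is a stand-in for the actual MPDATA stencil.

def exchange_halos(domains):
    """Copy edge cells into the neighbours' ghost cells (non-periodic)."""
    for left, right in zip(domains, domains[1:]):
        right[0] = left[-2]    # left neighbour's last interior cell
        left[-1] = right[1]    # right neighbour's first interior cell

def stencil_sweep(d):
    """Simple 3-point average over interior cells."""
    d[1:-1] = (d[:-2] + d[1:-1] + d[2:]) / 3.0

# A ramp 0..7 split across two subdomains, each with ghost cells at the ends.
a = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 0.0])
b = np.array([0.0, 4.0, 5.0, 6.0, 7.0, 0.0])
exchange_halos([a, b])
stencil_sweep(a)
stencil_sweep(b)
```

    After the exchange, the cells adjacent to the subdomain boundary see the same neighbours they would in an undecomposed domain, which is exactly what the real halo-exchange subroutines guarantee at scale.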

  13. High performance computing for deformable image registration: towards a new paradigm in adaptive radiotherapy.

    PubMed

    Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D

    2008-08-01

    The advent of readily available temporal imaging or time-series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion-corrected image reconstruction. Due to long computation times, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. In addition, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance is compared with single-threading and multithreading CPU implementations on an Intel dual-core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface and allows direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation times for 100 iterations in the range of 1.8-13.5 s were observed for the GPU, with image sizes ranging from 2.0 x 10^6 to 14.2 x 10^6 pixels. The GPU registration was 55-61 times faster than the CPU for the single-threading implementation, and 34-39 times faster for the multithreading implementation. For CPU-based computing, the computational time generally has a linear dependence on image size for medical imaging data.
Computational efficiency is
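    One iteration of a Thirion-style demons update can be sketched in 1D as a minimal stand-in for the 3D GPU implementation; the images and the regularizing epsilon are toy assumptions:

```python
import numpy as np

# One iteration of a Thirion-style demons update in 1D, as a minimal sketch
# of the registration force the paper computes per voxel. The real
# implementation is 3D, adds smoothing, and runs ~100 iterations on the GPU.
# Images and the regularizing eps are toy assumptions.

def demons_update(static, moving, disp, eps=1e-12):
    """Add one demons force step to the displacement field."""
    grad = np.gradient(static)              # gradient of the static image
    diff = moving - static                  # intensity mismatch drives the force
    force = diff * grad / (grad**2 + diff**2 + eps)
    return disp + force

static = np.array([0.0, 0.0, 1.0, 2.0, 2.0])
moving = np.array([0.0, 0.5, 1.5, 2.0, 2.0])
disp = demons_update(static, moving, np.zeros(5))
```

    Because the force at each voxel depends only on local intensities and gradients, the update is embarrassingly parallel, which is what makes it such a good fit for CUDA.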

  14. DSP-based adaptive backstepping using the tracking errors for high-performance sensorless speed control of induction motor drive.

    PubMed

    Zaafouri, Abderrahmen; Ben Regaya, Chiheb; Ben Azza, Hechmi; Châari, Abdelkader

    2016-01-01

    This paper presents a modified structure for backstepping nonlinear control of an induction motor (IM) fitted with an adaptive backstepping speed observer. The control design is based on the backstepping technique, complemented by the introduction of integral tracking-error action to improve its robustness. Unlike other research on backstepping control with integral action, the control law developed in this paper does not increase the number of system states, so as not to increase the complexity of solving the differential equations. The digital simulation and experimental results show the effectiveness of the proposed control compared to conventional PI control. The analysis of the results shows the characteristic robustness of the adaptive control to load disturbances, speed variation, and low-speed operation.
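    The effect of an integral tracking-error term can be sketched on a first-order plant; the plant, gains, and discretization below are illustrative assumptions, not the induction-motor design of the paper:

```python
# Minimal sketch of a tracking controller with an integral tracking-error
# term, in the spirit of the integral-action design described above, applied
# to a first-order plant x' = -a*x + u. Plant, gains, and Euler step are
# illustrative assumptions, not the paper's induction-motor controller.

def simulate(a=1.0, k=4.0, ki=2.0, ref=1.0, dt=0.01, steps=2000):
    x, z = 0.0, 0.0                 # state and integral of tracking error
    for _ in range(steps):
        e = x - ref
        z += e * dt                 # integral tracking-error action
        u = a * x - k * e - ki * z  # cancels the plant term, drives e -> 0
        x += (-a * x + u) * dt      # Euler step of the plant
    return x

x_final = simulate()                # converges to the reference
```

    The integral term removes steady-state error under constant disturbances without adding an extra plant state to the model, mirroring the design goal stated in the abstract.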

  15. High-performance liquid chromatography with diode-array detection cotinine method adapted for the assessment of tobacco smoke exposure.

    PubMed

    Bartolomé, Mónica; Gallego-Picó, Alejandrina; Huetos, Olga; Castaño, Argelia

    2014-06-01

    Smoking is considered to be one of the main risk factors for cancer and other diseases and is the second leading cause of death worldwide. As the anti-tobacco legislation implemented in Europe has reduced secondhand smoke exposure levels, analytical methods must be adapted to these new levels. Recent research has demonstrated that cotinine is the best overall discriminator when biomarkers are used to determine whether a person has ongoing exposure to tobacco smoke. This work proposes a sensitive, simple and low-cost method based on solid-phase extraction and liquid chromatography with diode-array detection for the assessment of tobacco smoke exposure by cotinine determination in urine. The analytical procedure is simple and fast (20 min) compared with similar methods in the literature, and it is cheaper than the mass spectrometry techniques usually used to quantify levels in nonsmokers. We obtained a quantification limit of 12.30 μg/L and a recovery of over 90%. The linearity ranges used were 12-250 and 250-4000 μg/L. The method was successfully used to determine cotinine in urine samples collected from different volunteers and is clearly an alternative routine method that allows active and passive smokers to be distinguished.
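    Applying the reported linearity ranges and quantification limit can be sketched as a simple classification helper; the smoker/non-smoker cutoff is an assumed illustrative value, not taken from the paper:

```python
# Illustrative sketch of applying the two linearity ranges quoted above
# (12-250 and 250-4000 ug/L) and the reported quantification limit to a
# measured urine cotinine value. The 50 ug/L smoker cutoff is an assumed
# illustrative value, not a figure from the paper.

LOQ = 12.30  # ug/L, quantification limit reported in the abstract

def classify_cotinine(conc_ugL, cutoff=50.0):
    """Report a cotinine value against the LOQ, calibration range, and cutoff."""
    if conc_ugL < LOQ:
        return "below LOQ"
    if conc_ugL <= 250.0:
        rng = "low range (12-250 ug/L)"
    elif conc_ugL <= 4000.0:
        rng = "high range (250-4000 ug/L)"
    else:
        return "above calibrated range"
    status = "smoker" if conc_ugL > cutoff else "non-smoker/passive"
    return f"{rng}: {status}"

r1 = classify_cotinine(5.0)
r2 = classify_cotinine(100.0)
r3 = classify_cotinine(1500.0)
```

    Samples above the calibrated range would be diluted and re-measured rather than extrapolated.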

  16. A High Performance, Cost-Effective, Open-Source Microscope for Scanning Two-Photon Microscopy that Is Modular and Readily Adaptable

    PubMed Central

    Rosenegger, David G.; Tran, Cam Ha T.; LeDue, Jeffery; Zhou, Ning; Gordon, Grant R.

    2014-01-01

    Two-photon laser scanning microscopy has revolutionized the ability to delineate cellular and physiological function in acutely isolated tissue and in vivo. However, barriers exist for many laboratories to acquiring two-photon microscopes. Additionally, if owned, typical systems are difficult to modify for rapidly evolving methodologies. A potential solution to these problems is to enable scientists to build their own high-performance and adaptable system by overcoming a resource insufficiency. Here we present a detailed hardware resource and protocol for building an upright, highly modular and adaptable two-photon laser scanning fluorescence microscope that can be used for in vitro or in vivo applications. The microscope comprises high-end componentry on a skeleton of off-the-shelf compatible opto-mechanical parts. The dedicated design enabled imaging depths close to 1 mm into mouse brain tissue and a signal-to-noise ratio that exceeded all commercial two-photon systems tested. In addition to a detailed parts list and instructions for assembly, testing and troubleshooting, our plan includes complete three-dimensional computer models that greatly reduce the knowledge base required for the non-expert user. This open-source resource lowers barriers in order to equip more laboratories with high-performance two-photon imaging and to help progress our understanding of the cellular and physiological function of living systems. PMID:25333934

  17. Wind shear measuring on board an airliner

    NASA Technical Reports Server (NTRS)

    Krauspe, P.

    1984-01-01

    A measurement technique which continuously determines the wind vector on board an airliner during takeoff and landing is introduced. Its implementation is intended to deliver sufficient statistical background concerning low-frequency wind changes in the atmospheric boundary layer and extended knowledge about deterministic wind shear modeling. The wind measurement scheme is described and the adaptation of the apparatus on board an A300 Airbus is shown. Preliminary measurements made during level flight demonstrate the validity of the method.

  18. New On-board Microprocessors

    NASA Astrophysics Data System (ADS)

    Weigand, R.

    Two new processor devices have been developed for use on board spacecraft. An 8-bit 8032 microcontroller targets typical control applications in instruments and sub-systems, or could be used as a main processor on small satellites, whereas the LEON 32-bit SPARC processor can be used for high-performance control and data-processing tasks. The ADV80S32 is fully compliant with the Intel 8051 architecture and instruction set, extended by additional peripherals, 512 bytes of on-chip RAM and a bootstrap PROM, which allows downloading the application software using the CCSDS PacketWire protocol. The memory controller provides a de-multiplexed address/data bus and allows access to up to 16 MB of data and 8 MB of program RAM. The peripherals have been designed for the specific needs of a spacecraft, such as serial interfaces compatible with RS232, PacketWire and TTC-B-01, counters/timers of extended duration, and a CRC calculation unit accelerating the CCSDS TM/TC protocol. The 0.5 um Atmel manufacturing technology (MG2RT) provides latch-up and total-dose immunity; SEU fault immunity is implemented by using SEU-hardened flip-flops and EDAC protection of internal and external memories. The maximum clock frequency of 20 MHz allows a processing power of 3 MIPS. Engineering samples are available. For SW development, various SW packages for the 8051 architecture are on the market. The LEON processor implements the 32-bit SPARC V8 architecture, including all the multiply and divide instructions, complemented by a floating-point unit (FPU). It includes several standard peripherals, such as timers/watchdog, interrupt controller, UARTs, parallel I/Os and a memory controller, allowing the use of 8-, 16- and 32-bit PROM, SRAM or memory-mapped I/O. With separate on-chip instruction and data caches, almost one instruction per clock cycle can be reached in some applications. A 33-MHz 32-bit PCI master/target interface and a PCI arbiter allow operating the device in a plug-in card

  19. Modular on-board adaptive imaging

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Williams, D. S.

    1978-01-01

    Feature extraction involves the transformation of a raw video image into a more compact representation of the scene in which relevant information about objects of interest is retained. The task of the low-level processor is to extract object outlines and pass the data to the high-level process in a format that facilitates pattern recognition tasks. Due to the immense computational load of processing a 256x256 image, even a fast minicomputer requires a few seconds to complete this low-level processing. It is, therefore, necessary to consider hardware implementation of these low-level functions to achieve real-time processing speeds. The objective of the project was to implement a system in which the continuous feature extraction process is not affected by dynamic changes in the scene, varying lighting conditions, or object motion relative to the cameras. Due to the high bandwidth (3.5 MHz) and serial nature of the TV data, a pipeline processing scheme was adopted as the overall architecture of this system. Modularity in the system is achieved by designing circuits that are generic within the overall system.
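    The low-level outline-extraction stage can be sketched per scan line as a thresholded intensity-jump detector; the data and threshold are toy assumptions:

```python
# Toy sketch of the low-level stage described above: reduce a raw image row
# to a compact outline representation by thresholding intensity jumps, the
# kind of per-scan-line operation a hardware pipeline performs in real time.
# Data and threshold are illustrative assumptions.

def extract_edges(scanline, threshold=50):
    """Return positions where adjacent pixels differ by more than threshold."""
    return [i for i in range(1, len(scanline))
            if abs(scanline[i] - scanline[i - 1]) > threshold]

# A bright object on a dark background yields two outline points.
row = [10, 12, 11, 200, 205, 198, 15, 14]
edges = extract_edges(row)
```

    Emitting only the jump positions rather than the full row is what turns a 3.5 MHz pixel stream into a data rate the high-level pattern-recognition process can absorb.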

  20. On-Board Chemical Propulsion Technology

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    On-board propulsion functions include orbit insertion, orbit maintenance, constellation maintenance, precision positioning, in-space maneuvering, de-orbiting, vehicle reaction control, planetary retro, and planetary descent/ascent. This paper discusses on-board chemical propulsion technology, including bipropellants, monopropellants, and micropropulsion. Bipropellant propulsion has focused on maximizing the performance of Earth-storable propellants by using high-temperature, oxidation-resistant chamber materials. The performance of bipropellant systems can be increased further by operating at elevated chamber pressures and/or using higher energy oxidizers. Both options present system-level difficulties for spacecraft, however. Monopropellant research has focused on mixtures composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants, unlike hydrazine, do not present a vapor hazard and do not require extraordinary procedures for storage, handling, and disposal. HAN-based monopropellants generically have higher densities and lower freezing points than state-of-the-art hydrazine and can deliver higher performance, depending on the formulation. High-performance HAN-based monopropellants, however, have aggressive, high-temperature combustion environments and require advances in catalyst materials or suitable non-catalytic ignition options. The objective of the micropropulsion technology area is to develop low-cost, high-utility propulsion systems for the range of miniature spacecraft and precision propulsion applications.

  1. The TWINS Instrument On Board Mars Insight Mission

    NASA Astrophysics Data System (ADS)

    Velasco, Tirso; Rodríguez-Manfredi, Jose A.

    2015-04-01

    The aim of this paper is to present the TWINS (Temperature and Wind sensors for INSight) instrument developed for the JPL Mars InSight mission, to be launched by JPL in 2016. TWINS will provide high performance wind and air temperature measurements for the mission platform. TWINS is based on the heritage of REMS (Rover Environmental Monitoring Station) on board the Curiosity rover, which has been working successfully on the Martian surface since August 2012. The REMS spare boom hardware, comprising the wind and temperature sensors, has been refurbished into the TWINS booms, with enhanced performance in terms of dynamic range and resolution. The short development time and low cost have shown the capability of the REMS design and of the technologies developed for Curiosity to be adapted to a new mission and new scientific requirements with increased performance. It is also an example of international cooperation in planetary missions, carried out in the frame of science instruments within the Curiosity and InSight missions.

  2. A novel on-board switch scheme based on OFDM

    NASA Astrophysics Data System (ADS)

    Dang, Jun-Hong; Zhou, Po; Cao, Zhi-Gang

    2009-12-01

    OFDM is an emerging focus of research in satellite communication. This paper proposes a novel OFDM-based on-board switching technology that has high spectrum efficiency and adaptability and supports the integration of terrestrial wireless communication systems and satellite communication systems. It then introduces a realization scheme for this technology and describes the main problems to be solved along with their solutions.

  3. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers, such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic-terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials, as well as the development of new polymers, will provide technology to help meet NASA's future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  4. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  5. Tailored Assemblies of Rod-Coil Poly(3-hexylthiophene)-b-Polystyrene Diblock Copolymers: Adaptable Building Blocks for High-Performance Organic Field-Effect Transistors

    SciTech Connect

    Xiao, Kai; Yu, Xiang; Chen, Jihua; Lavrik, Nickolay V; Hong, Kunlun; Sumpter, Bobby; Geohegan, David B

    2011-01-01

    The self-assembly process and resulting structure of a series of conductive diblock copolymer thin films of Poly(3-hexylthiophene)-b-Polystyrene (P3HT-b-PS) have been studied by TEM, SAED, GIXD, and AFM, and additionally by first-principles modeling and simulation. By varying the molecular weight of the P3HT segment, these block copolymers undergo microphase separation and self-assemble into nanostructured spheres, lamellae, nanofibers, and nanoribbons in the films. Within the diblock copolymer thin film, the covalently bonded PS blocks segregated to form amorphous domains, while the conductive P3HT blocks were crystalline, exhibiting highly ordered molecular packing with their alkyl side chains aligned along the normal to the substrate and the π-π stacking direction of the thiophene rings aligned parallel to the substrate. The conductive P3HT block copolymers exhibited significant improvements in organic field-effect transistor (OFET) performance and environmental stability as compared to P3HT homopolymers, with up to a factor of two increase in measured mobility (0.08 cm2/Vs) for the P4 (85 wt% P3HT). Overall, this work demonstrates that the high degree of molecular order induced by block copolymer phase separation can improve the transport properties and stability of conductive polymers critical for high-performance OFETs.

  6. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, April 18 through 21, 1994.

  7. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  8. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  9. On-Board Mining in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.; Conover, H.; Graves, S.; Ramachandran, R.; Rushing, J.

    2004-12-01

    On-board data mining can contribute to many research and engineering applications, including natural hazard detection and prediction, intelligent sensor control, and the generation of customized data products for direct distribution to users. The ability to mine sensor data in real time can also be a critical component of autonomous operations, supporting deep space missions, unmanned aerial and ground-based vehicles (UAVs, UGVs), and a wide range of sensor meshes, webs and grids. On-board processing is expected to play a significant role in the next generation of NASA, Homeland Security, Department of Defense and civilian programs, providing for greater flexibility and versatility in measurements of physical systems. In addition, the use of UAV and UGV systems is increasing in military, emergency response and industrial applications. As research into the autonomy of these vehicles progresses, especially in fleet or web configurations, the applicability of on-board data mining is expected to increase significantly. Data mining in real time on board sensor platforms presents unique challenges. Most notably, the data to be mined is a continuous stream, rather than a fixed store such as a database. This means that the data mining algorithms must be modified to make only a single pass through the data. In addition, the on-board environment requires real time processing with limited computing resources, thus the algorithms must use fixed and relatively small amounts of processing time and memory. The University of Alabama in Huntsville is developing an innovative processing framework for the on-board data and information environment. The Environment for On-Board Processing (EVE) and the Adaptive On-board Data Processing (AODP) projects serve as proofs-of-concept of advanced information systems for remote sensing platforms. The EVE real-time processing infrastructure will upload, schedule and control the execution of processing plans on board remote sensors. 
These plans
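The single-pass constraint described above can be made concrete with a classic streaming statistic: Welford's algorithm computes the mean and variance of a data stream in one pass with constant memory, exactly the profile required on board. This is a generic illustration, not code from the EVE or AODP projects.

```python
class RunningStats:
    """Single-pass (streaming) mean/variance via Welford's algorithm.

    Each sample is seen exactly once and memory use is constant,
    matching the on-board, one-pass requirement described above.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Sample variance of everything seen so far."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for sample in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.push(sample)
# mean == 5.0, sample variance == 32/7
```

The same pattern generalizes to streaming anomaly detection: flag a sample when it deviates from the running mean by more than a few running standard deviations.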

  10. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylenenaphtalate). Two different methods have been used to prepare the foam samples: high temperature expansion and two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  11. HyspIRI On-Board Science Data Processing

    NASA Technical Reports Server (NTRS)

    Flatley, Tom

    2010-01-01

    Topics include on-board science data processing, on-board image processing, software upset mitigation, on-board data reduction, on-board 'VSWIR' products, the HyspIRI demonstration testbed, and processor comparison.

  12. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  13. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High performance FORTRAN is a set of extensions for FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism in particular data parallelism. Thus the code is specified in a high level portable manner with no explicit tasking or communication statements. The goal is to allow architecture specific compilers to generate efficient code for a wide variety of architectures including SIMD, MIMD shared and distributed memory machines.

  14. High performance satellite networks

    NASA Astrophysics Data System (ADS)

    Helm, Neil R.; Edelson, Burton I.

    1997-06-01

    The high performance satellite communications networks of the future will have to be interoperable with terrestrial fiber cables. These satellite networks will evolve from narrowband analogue formats to broadband digital transmission schemes, with protocols, algorithms and transmission architectures that will segment the data into uniform cells and frames, and then transmit these data via larger and more efficient synchronous optical network (SONET) and asynchronous transfer mode (ATM) networks that are being developed for the information "superhighway". These high performance satellite communications and information networks are required for modern applications, such as electronic commerce, digital libraries, medical imaging, distance learning, and the distribution of science data. In order for satellites to participate in these information superhighway networks, it is essential that they demonstrate their ability to: (1) operate seamlessly with heterogeneous architectures and applications, (2) carry data at SONET rates with the same quality of service as optical fibers, (3) qualify transmission delay as a parameter, not a problem, and (4) show that satellites have several performance and economic advantages over fiber cable networks.

  15. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was that they needed to have an R-value of at least 5 ft2 F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.

  16. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  17. On-board Data Mining

    NASA Astrophysics Data System (ADS)

    Tanner, Steve; Stein, Cara; Graves, Sara J.

    Networks of remote sensors are becoming more common as technology improves and costs decline. In the past, a remote sensor was usually a device that collected data to be retrieved at a later time by some other mechanism. These collected data were usually processed well after the fact at a computer greatly removed from the in situ sensing location. This has begun to change as sensor technology, on-board processing, and network communication capabilities have increased and their prices have dropped. There has been an explosion in the number of sensors and sensing devices, not just around the world, but literally throughout the solar system. These sensors are not only becoming vastly more sophisticated, accurate, and detailed in the data they gather, but they are also becoming cheaper, lighter, and smaller. At the same time, engineers have developed improved methods to embed computing systems, memory, storage, and communication capabilities into the platforms that host these sensors. Now, it is not unusual to see large networks of sensors working in cooperation with one another. Nor does it seem strange to see the autonomous operation of sensor-based systems, from space-based satellites to smart vacuum cleaners that keep our homes clean and robotic toys that help to entertain and educate our children. But access to sensor data and computing power is only part of the story. For all the power of these systems, there are still substantial limits to what they can accomplish. These include the well-known limits to current Artificial Intelligence capabilities and our limited ability to program the abstract concepts, goals, and improvisation needed for fully autonomous systems. But they also include much more basic engineering problems such as lack of adequate power, communications bandwidth, and memory, as well as problems with the geolocation and real-time georeferencing required to integrate data from multiple sensors to be used together.

  18. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, with observations collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It is shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to more efficient PCA execution.
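As a reference point for the parallel implementation discussed above, the serial PCA step can be sketched in a few lines: center the data, accumulate the covariance matrix (the communication-heavy step that a cluster implementation distributes across nodes), and extract the dominant eigenvector by power iteration. This stdlib sketch is illustrative only, not the report's code.

```python
import random

def first_principal_component(data, iters=200):
    """First principal component of row-major data.

    Centers the samples, builds the covariance matrix, and runs
    power iteration to find its dominant eigenvector.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix C = X^T X / (n - 1); on a cluster, each node
    # would accumulate partial sums over its share of the rows.
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    # Power iteration converges to the dominant eigenvector of C.
    random.seed(1)
    v = [random.random() for _ in range(d)]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread along the y = x direction: the first PC is close to (1,1)/sqrt(2).
pts = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9], [5.0, 5.1]]
pc = first_principal_component(pts)
```

For hyperspectral data, d is the number of bands (hundreds), which is why the covariance accumulation dominates and the interconnect bandwidth matters.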

  19. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The InfiniBand infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; 10 filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters: above certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
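A port-counter threshold filter of the kind described can be sketched as follows; the counter names and threshold values here are hypothetical, not the actual ibmon2 configuration.

```python
# Hypothetical thresholds per InfiniBand port counter; the real ibmon2
# filters and limits are not published in this report.
THRESHOLDS = {"symbol_errors": 100, "link_downed": 5}

def filter_counters(host, counters):
    """Return alert strings for counters that exceed their threshold."""
    alerts = []
    for name, count in counters.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and count > limit:
            alerts.append(f"{host}: {name}={count} exceeds threshold {limit}")
    return alerts

alerts = filter_counters("node042", {"symbol_errors": 250, "link_downed": 1})
# One alert: symbol_errors exceeded; link_downed stays under its threshold.
```

In a deployment like the one described, the returned alerts would be handed to the monitoring service (Zenoss or Splunk) for paging and dashboard display.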

  20. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  1. Intelligent On-Board Processing in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.

    2005-12-01

    Most existing sensing systems are designed as passive, independent observers. They are rarely aware of the phenomena they observe, and are even less likely to be aware of what other sensors are observing within the same environment. Increasingly, intelligent processing of sensor data is taking place in real-time, using computing resources on-board the sensor or the platform itself. One can imagine a sensor network consisting of intelligent and autonomous space-borne, airborne, and ground-based sensors. These sensors will act independently of one another, yet each will be capable of both publishing and receiving sensor information, observations, and alerts among other sensors in the network. Furthermore, these sensors will be capable of acting upon this information, perhaps altering acquisition properties of their instruments, changing the location of their platform, or updating processing strategies for their own observations to provide responsive information or additional alerts. Such autonomous and intelligent sensor networking capabilities provide significant benefits for collections of heterogeneous sensors within any environment. They are crucial for multi-sensor observations and surveillance, where real-time communication with external components and users may be inhibited, and the environment may be hostile. In all environments, mission automation and communication capabilities among disparate sensors will enable quicker response to interesting, rare, or unexpected events. Additionally, an intelligent network of heterogeneous sensors provides the advantage that all of the sensors can benefit from the unique capabilities of each sensor in the network. The University of Alabama in Huntsville (UAH) is developing a unique approach to data processing, integration and mining through the use of the Adaptive On-Board Data Processing (AODP) framework. 
AODP is a key foundation technology for autonomous internetworking capabilities to support situational awareness by

  2. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the gathered data to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board UAS, as a method to ease operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture and communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and the power consumed.
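The kind of tile-level parallelism used for on-board target detection can be sketched generically: split a frame into strips and scan each in parallel for pixels above a radiance threshold. A thread pool stands in here for per-core dispatch on a many-core device such as the TILE-Gx36; the frame, threshold, and detection rule are all illustrative, not the paper's algorithm.

```python
from concurrent.futures import ThreadPoolExecutor

def scan_strip(args):
    """Find pixels above the threshold in one horizontal strip."""
    strip, row_offset, threshold = args
    return [(row_offset + r, c)
            for r, row in enumerate(strip)
            for c, val in enumerate(row)
            if val > threshold]

def detect_targets(frame, workers=4, threshold=200):
    """Split the frame into strips and scan them in parallel."""
    rows = len(frame)
    step = max(1, rows // workers)
    jobs = [(frame[i:i + step], i, threshold) for i in range(0, rows, step)]
    hits = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(scan_strip, jobs):
            hits.extend(part)
    return sorted(hits)

# Tiny synthetic 8x8 frame with two "hot" pixels standing in for fire spots.
frame = [[0] * 8 for _ in range(8)]
frame[1][2] = 250
frame[6][5] = 230
detections = detect_targets(frame)
# detections == [(1, 2), (6, 5)]
```

Only the detection list, not the raw frame, would then need to be downlinked, which is the bandwidth saving the paper's concept targets.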

  3. Concepts for on-board satellite image registration, volume 1

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.

    1980-01-01

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining those algorithms required to augment the registration procedure and evaluating the technology impact on performing these procedures on-board the satellite.

  4. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    SciTech Connect

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigation into parallel object-oriented (OO) numerics. The basic goal was to research and utilize emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and guidance on OO technologies was Sandia's main contribution to this venture. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  5. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  6. 47 CFR 80.1179 - On-board repeater limitations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    When an on-board repeater is used, the following limitations must be met: (a) The on-board repeater antenna must be located no higher than 3 meters (10 feet) above...

  7. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  8. On-Board Propulsion System Analysis of High Density Propellants

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    1998-01-01

    The impact of the performance and density of on-board propellants on the science payload mass of Discovery Program class missions is evaluated. A propulsion system dry mass model, anchored on flight-weight system data from the Near Earth Asteroid Rendezvous mission, is used. This model is used to evaluate the performance of liquid oxygen, hydrogen peroxide, hydroxylammonium nitrate, and oxygen difluoride oxidizers with hydrocarbon and metal hydride fuels. Results for the propellants evaluated indicate that state-of-the-art Earth-storable propellants with high-performance rhenium engine technology in both the axial and attitude control systems have performance capabilities that can only be exceeded by liquid oxygen/hydrazine, liquid oxygen/diborane and oxygen difluoride/diborane propellant combinations. Potentially lower ground operations costs are the incentive for working with nontoxic propellant combinations.

  9. On-board attitude determination for the Explorer Platform satellite

    NASA Technical Reports Server (NTRS)

    Jayaraman, C.; Class, B.

    1992-01-01

    This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.
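The predict/update cycle at the heart of the Kalman filter described above can be illustrated with a minimal scalar version (constant state, noisy measurements). The flight algorithm is a six-element state estimator with an adaptive-filter check; this sketch shows only the filter structure, not the Explorer Platform code.

```python
def kalman_1d(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    """Estimate a constant scalar state from noisy measurements.

    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and its variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state is modeled as constant, so only the
        # estimate covariance grows by the process noise.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

est = kalman_1d([0.9, 1.1, 1.05, 0.95, 1.0, 1.02, 0.98, 1.01])
# The estimate converges toward the true value (about 1.0).
```

In the attitude case, the scalar state becomes a six-element vector, the gain becomes a matrix, and an adaptive stage monitors the innovations (z - x) to detect divergence during slew maneuvers.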

  10. On-Board Training for US Payloads

    NASA Technical Reports Server (NTRS)

    Murphy, Benjamin; Meacham, Steven (Technical Monitor)

    2001-01-01

    The International Space Station (ISS) crew follows a training rotation schedule that puts them in the United States about every three months for a three-month training window. While in the US, the crew receives training on both ISS systems and payloads. Crew time is limited, and system training takes priority over payload training. For most flights, there is sufficient time to train all systems and payloads. As more payloads are flown, training time becomes a more precious resource. Less training time requires payload developers (PDs) to develop alternatives to traditional ground training. To ensure their payloads have sufficient training to achieve their scientific goals, some PDs have developed on-board trainers (OBTs). These OBTs are used to train the crew when no or limited ground time is available. These lessons are also available on-orbit to refresh the crew about their ground training, if it was available. There are many types of OBT media, such as on-board computer based training (OCBT), video/photo lessons, or hardware simulators. The On-Board Training Working Group (OBTWG) and Courseware Development Working Group (CDWG) are responsible for developing the requirements for the different types of media.

  11. High-Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Reuhs, Bradley L.; Rounds, Mary Ann

    High-performance liquid chromatography (HPLC) developed during the 1960s as a direct offshoot of classic column liquid chromatography through improvements in the technology of columns and instrumental components (pumps, injection valves, and detectors). Originally, HPLC was the acronym for high-pressure liquid chromatography, reflecting the high operating pressures generated by early columns. By the late 1970s, however, high-performance liquid chromatography had become the preferred term, emphasizing the effective separations achieved. In fact, newer columns and packing materials offer high performance at moderate pressure (although still high pressure relative to gravity-flow liquid chromatography). HPLC can be applied to the analysis of any compound with solubility in a liquid that can be used as the mobile phase. Although most frequently employed as an analytical technique, HPLC also may be used in the preparative mode.

  12. Effective "on-boarding": transitioning from trainee to faculty.

    PubMed

    Gustin, Jillian; Tulsky, James A

    2010-10-01

    Abstract The transition from trainee to junior faculty member can be both exciting and daunting. However, a paucity of medical literature exists to help guide new faculty in this transition. Therefore, we adapted work from the business management literature on what is referred to as "on-boarding": effectively integrating and advancing one's position as a new employee. This article outlines strategies for cultivating one's own on-boarding as a junior faculty member at large academic medical centers. These strategies are extrapolated from management practices, culled from the medical literature on developing and retaining junior faculty, and, finally, borrowed from the hard-won knowledge of junior and senior faculty members. They advise new faculty to: (1) start early, (2) define your role--"managing yourself," (3) invest in/secure early wins, (4) manage your manager, (5) identify the "true (or hidden)" organizational culture, (6) reassess your own goals--"look in the rearview mirror and to the horizon," and (7) use your mentors effectively. These strategies provide a roadmap for new faculty members to transition as effectively as possible to their new jobs.

  14. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize their impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and to reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy, which contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009]; EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007]; and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]); the latest guidelines, trends, and observations in high performance building construction; and the latest changes to the Leadership in Energy and Environmental Design

  15. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth of carbon nanotubes and semiconducting nanowires and studies of their field emission properties; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  16. High-Performance Ball Bearing

    NASA Technical Reports Server (NTRS)

    Bursey, Roger W., Jr.; Haluck, David A.; Olinger, John B.; Owen, Samuel S.; Poole, William E.

    1995-01-01

    High-performance bearing features strong, lightweight, self-lubricating cage with self-lubricating liners in ball apertures. Designed to operate at high speed (tens of thousands of revolutions per minute) in cryogenic environment like liquid-oxygen or liquid-hydrogen turbopump. Includes inner race, outer race, and cage keeping bearing balls equally spaced.

  17. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  18. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  19. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  20. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  1. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  2. Flight experiences on board Space Station Mir

    NASA Astrophysics Data System (ADS)

    Viehboeck, Franz

    1992-07-01

    A survey is given of the training at the cosmonaut center 'Yuri Gagarin' near Moscow (U.S.S.R.) and of the preparation for the joint Soviet-Austrian space flight of 2-10 Oct. 1991. The flight in Soyuz-TM 13 and its most important systems are described, along with a short description of the Space Station Mir and of life on board the station, including basic systems such as energy supply, life support, radio, and television. The possibilities for exploiting the Space Station Mir are discussed, and an outlook on the future is given.

  3. On-board processing for telecommunications satellites

    NASA Technical Reports Server (NTRS)

    Nuspl, P. P.; Dong, G.

    1991-01-01

    In this decade, communications satellite systems will probably face dramatic challenges from alternative transmission means. To balance and overcome such competition, and to prepare for new requirements, INTELSAT has developed several on-board processing techniques, including Satellite-Switched TDMA (SS-TDMA), Satellite-Switched FDMA (SS-FDMA), several Modulators/Demodulators (Modem), a Multicarrier Multiplexer and Demodulator (MCDD), an International Business Service (IBS)/Intermediate Data Rate (IDR) BaseBand Processor (BBP), etc. Some proof-of-concept hardware and software were developed and tested recently in the INTELSAT Technical Laboratories. These techniques and some test results are discussed.

  4. On-Board Entry Trajectory Planning Expanded to Sub-orbital Flight

    NASA Technical Reports Server (NTRS)

    Lu, Ping; Shen, Zuojun

    2003-01-01

    A methodology for on-board planning of sub-orbital entry trajectories is developed. The algorithm is able to generate in a time frame consistent with on-board environment a three-degree-of-freedom (3DOF) feasible entry trajectory, given the boundary conditions and vehicle modeling. This trajectory is then tracked by feedback guidance laws which issue guidance commands. The current trajectory planning algorithm complements the recently developed method for on-board 3DOF entry trajectory generation for orbital missions, and provides full-envelope autonomous adaptive entry guidance capability. The algorithm is validated and verified by extensive high fidelity simulations using a sub-orbital reusable launch vehicle model and difficult mission scenarios including failures and aborts.

  5. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount (10-15%) of hydrocarbon binder, (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.
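
    As a quick arithmetic sanity check (ours, not the inventor's), the quoted component ranges do admit a formulation that sums to 100% while meeting the 85%-solids floor. The sample split below is hypothetical:

```python
# Mass-fraction ranges (percent) as quoted in the abstract.
ranges = {
    "hydrocarbon_binder":   (10, 15),
    "ammonium_nitrate":     (40, 70),
    "powdered_metal_fuel":  (5, 25),
    "ammonium_perchlorate": (5, 25),
    "nitramine":            (0, 20),
}

# One hypothetical formulation chosen inside every range (our pick, not the patent's).
sample = {
    "hydrocarbon_binder":   12,
    "ammonium_nitrate":     55,
    "powdered_metal_fuel":  13,
    "ammonium_perchlorate": 15,
    "nitramine":            5,
}

assert all(lo <= sample[name] <= hi for name, (lo, hi) in ranges.items())
assert sum(sample.values()) == 100
solids = 100 - sample["hydrocarbon_binder"]   # everything except the binder
assert solids >= 85                           # "at least 85% solids"
print(f"solids fraction: {solids}%")          # solids fraction: 88%
```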

  6. New, high performance rotating parachute

    SciTech Connect

    Pepper, W.B. Jr.

    1983-01-01

    A new rotating parachute has been designed primarily for recovery of high performance reentry vehicles. Design and development/testing results are presented from low-speed wind tunnel testing, free-flight deployments at transonic speeds and tests in a supersonic wind tunnel at Mach 2.0. Drag coefficients of 1.15 based on the 2-ft diameter of the rotor have been measured in the wind tunnel. Stability of the rotor is excellent.

  7. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  8. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capabilities and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors and reflect the authors' current understanding of the many tools investigated and their functionality. As a deliverable for this effort, we present this report describing our findings, along with an associated spreadsheet outlining the current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available on which to utilize these tools and technologies to aid software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aides. The last chapter contains our closing information. Included at the end of this paper is a table of the discussed development tools and their operational environments.

  9. Toward High-Performance Organizations.

    ERIC Educational Resources Information Center

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  10. High performance storable propellant resistojet

    NASA Astrophysics Data System (ADS)

    Vaughan, C. E.

    1992-01-01

    From 1965 until 1985, resistojets were used for a limited number of space missions. Capability increased in stages from an initial application using a 90 W gN2 thruster operating at 123 sec specific impulse (Isp) to an 830 W N2H4 thruster operating at 305 sec Isp. Prior to 1985, fewer than 100 resistojets were known to have been deployed on spacecraft. Building on this base, NASA embarked upon the High Performance Storable Propellant Resistojet (HPSPR) program to significantly advance the resistojet state of the art. Higher performance thrusters promised to increase the market demand for resistojets and to enable space missions requiring higher performance. During the program, three resistojets were fabricated and tested. High-temperature wire and coupon materials tests were completed. A life test was conducted on an advanced gas generator.
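
    For context on what those Isp figures mean physically, the textbook relations v_e = Isp·g0 and P_jet = F·v_e/2 (standard rocketry, not taken from the report) convert the quoted 123 s and 305 s into effective exhaust velocities:

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_s: float) -> float:
    """Effective exhaust velocity (m/s) from specific impulse in seconds."""
    return isp_s * G0

def jet_power(thrust_n: float, isp_s: float) -> float:
    """Ideal kinetic jet power (W) for a given thrust: P = F * v_e / 2."""
    return thrust_n * exhaust_velocity(isp_s) / 2.0

# The two end points quoted in the abstract: 123 s (gN2) and 305 s (N2H4).
for isp in (123.0, 305.0):
    print(f"Isp {isp:.0f} s -> v_e = {exhaust_velocity(isp):.1f} m/s")
# Isp 123 s -> v_e = 1206.2 m/s
# Isp 305 s -> v_e = 2991.0 m/s
```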

  11. High Performance Perovskite Solar Cells

    PubMed Central

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of a long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  12. High performance magnetically controllable microturbines.

    PubMed

    Tian, Ye; Zhang, Yong-Lai; Ku, Jin-Feng; He, Yan; Xu, Bin-Bin; Chen, Qi-Dai; Xia, Hong; Sun, Hong-Bo

    2010-11-01

    Reported in this paper is the two-photon photopolymerization (TPP) fabrication of magnetic microturbines with high surface smoothness for microfluid mixing. As the key component of the magnetic photoresist, Fe3O4 nanoparticles were carefully screened for homogeneous doping. In this work, oleic acid-stabilized Fe3O4 nanoparticles synthesized via high-temperature-induced organic-phase decomposition of an iron precursor show evident advantages in particle morphology. After modification with propoxylated trimethylolpropane triacrylate (PO3-TMPTA, a kind of cross-linker), the magnetic nanoparticles were homogeneously doped in an acrylate-based photoresist for TPP fabrication of microstructures. Finally, a magnetic microturbine was successfully fabricated as an active mixing device for remote control of microfluid blending. The development of high quality magnetic photoresists would lead to high performance magnetically controllable microdevices for lab-on-a-chip (LOC) applications. PMID:20721411

  13. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  14. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
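
    The paper's information-based detectors are not reproduced in the abstract; as a generic stand-in, a z-score filter illustrates the basic idea of flagging statistically unusual event volumes (the data and threshold below are invented for illustration):

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=3.0):
    """Flag values more than `threshold` sample standard deviations from the
    mean. A generic illustration only -- not the information-based detector
    described in the paper."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical per-host event counts; the 5000-event spike stands out.
events = [120, 130, 115, 128, 122, 119, 5000, 125, 131, 118]
print(zscore_outliers(events, threshold=2.0))  # [5000]
```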

  15. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than in DC brushless motors.
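
    The torque-from-permeance-derivative relationship mentioned above is the standard variable-reluctance result T = ½·i²·dL/dθ. A toy numerical sketch (the sinusoidal inductance profile is an assumption for illustration, not a finite-element result):

```python
import math

def reluctance_torque(current, theta, L):
    """Reluctance torque of a toothed (stepper-like) magnetic structure:
    T = 1/2 * i^2 * dL/dtheta, with the derivative taken numerically."""
    h = 1e-6
    dL = (L(theta + h) - L(theta - h)) / (2.0 * h)
    return 0.5 * current ** 2 * dL

# Toy inductance profile for a rotor with 50 teeth (illustrative, not FEM data).
L = lambda th: 5e-3 + 1e-3 * math.cos(50.0 * th)

theta_peak = (math.pi / 2.0) / 50.0   # where |dL/dtheta| is largest
print(round(reluctance_torque(2.0, theta_peak, L), 4))  # -0.1 (N*m; sign = restoring)
```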

  16. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.
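
    HPF expresses kernels like the structured-grid updates mentioned above as whole-array operations. Since HPF compilers are scarce today, here is the same data-parallel pattern (a Jacobi sweep, the canonical structured-grid kernel) sketched in plain Python purely for illustration:

```python
def jacobi_step(grid):
    """One data-parallel Jacobi sweep over the interior of a 2-D grid.
    In HPF the same update is a single whole-array FORALL/array expression;
    the compiler distributes the grid and generates the message passing."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return new

# Small demo: hot boundary held at 100, cold interior starting at 0.
g = [[100.0] * 5] + [[100.0, 0.0, 0.0, 0.0, 100.0] for _ in range(3)] + [[100.0] * 5]
g = jacobi_step(g)
print(g[1][1])  # 50.0 -- a corner-adjacent interior point after one sweep
```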

  17. Reference Architecture for High Dependability On-Board Computers

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Esper, Alexandre; Zandin, Johan; Barbosa, Ricardo; Monteleone, Claudio

    2014-08-01

    The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. a computer equipment unit, or a set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed, and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers, with a focus on well-defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting, and assessing the dependability of on-board computer hardware and software throughout their life cycle. The study also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete lifecycle, including an assessment of the feasibility aspects of the dependability assurance process and of how the use of a computer-aided environment can contribute to on-board computer dependability assurance.

  18. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

    The National Storage Laboratory (NSL) was organized to develop, demonstrate, and commercialize technology for the storage systems that will be the future repositories of our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal System Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed to use network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted for completion in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  19. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, and diagnosing performance failures.

  20. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  1. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single-axis mechanism can switch the antenna's optical axis by +/-1.5" within 10 ms or +/-5" within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The lightweight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving-coil motors are mounted on flex pivots, and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed-loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
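
    The closed-loop behaviour described (a lightly damped ~20 Hz mechanical resonance inside a digital PID loop with ~100 Hz bandwidth) can be sketched with a toy simulation. Every plant parameter and gain below is illustrative, not the ALMA servo's:

```python
import math

# Toy second-order resonant plant (illustrative numbers, NOT the ALMA hardware):
# m*x'' + c*x' + k*x = u, resonance ~20 Hz, damping ratio 0.05.
m = 1.0
w0 = 2.0 * math.pi * 20.0          # ~125.7 rad/s mechanical resonance
k = m * w0 ** 2
c = 2.0 * 0.05 * w0 * m

# Hand-tuned PID gains for this toy model (assumptions, not the real servo's).
Kp, Ki, Kd = 1.0e5, 2.0e6, 500.0
dt, setpoint = 2.0e-4, 1.0         # 5 kHz loop rate, unit step command

x = v = integral = 0.0
for _ in range(int(0.5 / dt)):     # simulate 0.5 s
    e = setpoint - x
    integral += e * dt
    u = Kp * e + Ki * integral - Kd * v   # derivative acts on the measurement
    a = (u - k * x - c * v) / m
    v += a * dt                    # semi-implicit Euler: velocity first,
    x += v * dt                    # then position, for numerical robustness
print(abs(setpoint - x) < 0.02)    # True: loop has settled despite the resonance
```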

  2. High-performance composite chocolate

    NASA Astrophysics Data System (ADS)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  3. Vibration on board and health effects.

    PubMed

    Jensen, Anker; Jepsen, Jørgen Riis

    2014-01-01

    There is only limited knowledge of the exposure to vibrations of ships' crews and their risk of vibration-induced health effects. Exposure to hand-arm vibrations from the use of vibrating tools at sea does not differ from that in the land-based trades. However, in contrast to most other work places, seafarers are also exposed to vibrations to the feet when standing on vibrating surfaces on board. Anecdotal reports have related the development of "white feet" to local exposure to vibration, e.g. in mining, but this connection has not been investigated in the maritime setting. As known from studies of the health consequences of whole body vibrations in land transportation, such exposure at sea may affect ships' passengers and crews. While the relation of back disorders to high levels of whole body vibration has been demonstrated among e.g. tractor drivers, there is no reported epidemiological evidence for such a relation among seafarers except for fishermen, who, however, are also exposed to additional recognised physical risk factors at work. The assessment and reduction of vibrations by naval architects relates to the technical implications of this impact for the ships' construction, but has limited value for the estimation of health risks because the vibration intensity is expressed differently than in a medical context. PMID:25231326

  4. 49 CFR 1018.5 - Monetary limitation on Board authority.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 8 2014-10-01 2014-10-01 false Monetary limitation on Board authority. 1018.5... Coverage § 1018.5 Monetary limitation on Board authority. The Board's authority to compromise a claim or to... collection action; and (b) Do not exceed $100,000, exclusive of interest, penalties, and administrative...

  5. 49 CFR 1018.5 - Monetary limitation on Board authority.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Monetary limitation on Board authority. 1018.5... Coverage § 1018.5 Monetary limitation on Board authority. The Board's authority to compromise a claim or to... collection action; and (b) Do not exceed $100,000, exclusive of interest, penalties, and administrative...

  6. 49 CFR 1018.5 - Monetary limitation on Board authority.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 8 2011-10-01 2011-10-01 false Monetary limitation on Board authority. 1018.5... Coverage § 1018.5 Monetary limitation on Board authority. The Board's authority to compromise a claim or to... collection action; and (b) Do not exceed $100,000, exclusive of interest, penalties, and administrative...

  7. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Liability on Board's claim. 341.7 Section 341.7 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD UNEMPLOYMENT INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A...

  8. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  9. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  10. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  11. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  12. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  13. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV for transporting and transferring work pieces; a control computer on board the AGV; a process machine for working on work pieces; and a flexible robot arm ending in a gripper with two gripper fingers. The robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location. Locating beacon means mounted on the process machine mark the place on the machine where work pieces are picked up and set down. Vision means, including a camera fixed in the coordinate system of the gripper and attached to the robot arm near the gripper such that the space between the gripper fingers lies within the vision field, detect the locating beacon means and provide the control computer with visual information on their location, from which the computer is able to calculate the pick-up and set-down place on the process machine. That place is a nest means, which further serves the function of holding a work piece in place while it is worked on. The robot system further comprises nest beacon means, located in the nest means and detectable by the vision means, for providing information to the control computer as to whether or not a work piece is present in the nest means.

  14. File-Based Operations and CFDP On-Board Implementation

    NASA Astrophysics Data System (ADS)

    Herrera Alzu, Ignacio; Peran Mazon, Francisco; Gonzalo Palomo, Alfonso

    2014-08-01

    For several years there has been increasing interest among the space agencies, ESA in particular, in deploying File-based Operations (FbO) for Space missions. This aims at simplifying, from the Ground Segment's perspective, the access to the Space Segment and ultimately the overall operations. This is particularly important for deep Space missions, where the Ground-Space interaction can become too complex to handle just with traditional packet-based services. The use of a robust protocol for transferring files between Ground and Space is key to the FbO approach, and the CCSDS File Delivery Protocol (CFDP) is nowadays the main candidate for this job. Both Ground and Space Segments need to be adapted for FbO, with the Ground Segment naturally closer to this concept. This paper focuses on the Space Segment. The main implications of FbO/CFDP, the possible on-board implementations and the foreseen operations are described. The case of Euclid, the first ESA mission to be file-based operated with CFDP, is also analysed.
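    As a toy illustration of the robustness CFDP brings to Ground-Space file transfer (not the real protocol, whose PDUs and state machines carry far more), the sketch below segments a file, lets the receiver detect gaps, and retransmits only the missing offsets, in the spirit of CFDP's NAK-based reliable (Class 2) transfers. Segment size and the in-memory "link" are illustrative assumptions.

```python
SEG = 4  # bytes per file segment (tiny, for illustration)

def segment(data):
    """Split file data into (offset, chunk) segment PDUs."""
    return [(off, data[off:off + SEG]) for off in range(0, len(data), SEG)]

def receive(segments, total_len):
    """Reassemble; return (data, sorted missing offsets to report in a NAK)."""
    buf = bytearray(total_len)
    got = set()
    for off, chunk in segments:
        buf[off:off + len(chunk)] = chunk
        got.add(off)
    missing = [off for off in range(0, total_len, SEG) if off not in got]
    return bytes(buf), missing

data = b"hello, on-board file"
pdus = segment(data)
lossy = [p for i, p in enumerate(pdus) if i != 2]   # simulate one lost segment
_, nak = receive(lossy, len(data))
print(nak)                                          # offsets to retransmit
retransmit = [p for p in pdus if p[0] in nak]
full, nak2 = receive(lossy + retransmit, len(data))
assert full == data and nak2 == []
```

    Because only the gaps are retransmitted, the scheme tolerates the long round-trip times of deep-space links far better than acknowledging every packet.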

  15. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  16. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
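    The per-file Granule idea can be sketched as follows: turn a data-file listing into small XML metadata records associating each file with a parent resource, an identifier, and an access URL. The element names below are simplified stand-ins for the SPASE schema, and the identifiers and URL are hypothetical examples, not real CDAWEB entries.

```python
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, url):
    """Build a minimal Granule-style XML record for one data file."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id  # id assigned to the file
    ET.SubElement(granule, "ParentID").text = parent_id      # high-level data product
    ET.SubElement(granule, "Source").text = url              # access URL of the file
    return ET.tostring(spase, encoding="unicode")

# Hypothetical identifiers, for illustration only.
print(make_granule(
    "spase://Example/Granule/SOME_DATASET/20240101",
    "spase://Example/NumericalData/SOME_DATASET",
    "https://example.org/data/some_dataset_20240101.cdf",
))
```

    Run nightly over a file listing, a generator like this keeps one Granule record per file in sync with the archive, which is essentially the ADAPT workflow described above.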

  17. The Experiment CPLM (Comportamiento De Puentes Líquidos En Microgravedad) On Board MINISAT 01

    NASA Astrophysics Data System (ADS)

    Sanz-Andrés, Angel; Rodríguez-De-Francisco, Pablo; Santiago-Prowald, Julián

    2001-03-01

    The Universidad Politécnica de Madrid participates in the MINISAT 01 program as the party responsible for the CPLM experiment. This experiment aims at the study of fluid behaviour in reduced gravity conditions. The interest of this study is and has been widely recognised by the scientific community, and it has potential applications in the pharmaceutical and microelectronic technologies (crystal growth), among others. The scientific team which developed the CPLM experiment has wide experience in this field and has participated in a large number of experiments on fluid behaviour in reduced gravity conditions in flight (Spacelab missions, TEXUS sounding rockets, KC-135 and Caravelle aeroplanes, drop towers), as well as in ground-based laboratories (neutral buoyancy and small-scale simulations). The experimental equipment used in CPLM is a version of the payload developed for experimentation on drop towers and on board microsatellites such as the UPM-Sat 1, adapted to fly on board MINISAT 01.

  18. On-board congestion control for satellite packet switching networks

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.

    1991-01-01

    It is desirable to incorporate packet switching capability on board future communication satellites. Because of the statistical nature of packet communication, incoming traffic fluctuates and may cause congestion. Thus, it is necessary to incorporate a congestion control mechanism as part of the on-board processing to smooth and regulate the bursty traffic. Although there are extensive studies on congestion control for both baseband and broadband terrestrial networks, these schemes are not feasible for space-based switching networks because of the unique characteristics of the satellite link. Here, we propose a new congestion control method for on-board satellite packet switching. This scheme takes into consideration the long propagation delay of the satellite link and takes advantage of the satellite's broadcasting capability. It divides the control between the ground terminals and the satellite, but assigns the primary responsibility to the ground terminals and requires only minimal hardware resources on board the satellite.
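    A toy sketch of the ground-side role such a scheme assigns to terminals: the satellite broadcasts its queue occupancy, and each terminal scales its sending rate down as the queue fills. The thresholds and the linear back-off below are illustrative assumptions, not the paper's actual control law.

```python
def regulate_rate(base_rate, queue_occupancy, queue_capacity,
                  low=0.5, high=0.9):
    """Permitted sending rate given broadcast on-board queue feedback."""
    fill = queue_occupancy / queue_capacity
    if fill < low:
        return base_rate                          # uncongested: full rate
    if fill < high:                               # linear back-off region
        return base_rate * (high - fill) / (high - low)
    return 0.0                                    # near-overflow: pause sending

print(regulate_rate(100.0, 20, 100))   # below low threshold: full rate
print(regulate_rate(100.0, 70, 100))   # mid region: roughly half rate
print(regulate_rate(100.0, 95, 100))   # above high threshold: paused
```

    Because the feedback is broadcast, every terminal sees the same queue state at once, so the long one-way propagation delay is paid only once rather than per terminal.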

  19. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  20. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  1. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  2. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  3. High performance dosimetry calculations using adapted ray-tracing

    NASA Astrophysics Data System (ADS)

    Perrotte, Lancelot; Saupin, Guillaume

    2010-11-01

    When preparing interventions on nuclear sites, it is interesting to study different scenarios in order to identify the most appropriate one for the operator(s). Using virtual reality tools is a good way to simulate the potential scenarios. Very efficient computation times can thus help the user study different complex scenarios by immediately evaluating the impact of any change. In the field of radiation protection, computation codes based on the straight-line attenuation method with build-up factors are often used. As with other approaches, geometrical computations (finding all the interactions between radiation rays and the scene objects) remain the bottleneck of the simulation. We present in this paper several optimizations used to speed up these geometrical computations, using innovative GPU ray-tracing algorithms. For instance, we manage to compute every intersection between 600 000 rays and a huge 3D industrial scene in a fraction of a second. Moreover, our algorithm works the same way for both static and dynamic scenes, allowing easier study of complex intervention scenarios (where everything moves: the operator(s), the shielding objects, the radiation sources).
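    The straight-line attenuation method the ray tracer accelerates can be sketched per source-detector ray: sum the mean free paths through each shield the ray crosses, apply exponential attenuation with a build-up factor, and divide by the spherical spreading term. The geometry, attenuation coefficients, and the crude linear build-up model below are illustrative assumptions.

```python
import math

def dose_rate(source_strength, distance, shield_thicknesses, mus):
    """Attenuation along one source-detector ray.

    shield_thicknesses[i]: path length (cm) through shield i along the ray
    mus[i]: linear attenuation coefficient (1/cm) of shield i
    """
    mfp = sum(mu * t for mu, t in zip(mus, shield_thicknesses))  # mean free paths
    buildup = 1.0 + mfp                 # crude linear build-up model (assumption)
    geometry = 1.0 / (4.0 * math.pi * distance ** 2)             # 1/r^2 spreading
    return source_strength * geometry * buildup * math.exp(-mfp)

# Example: one 5 cm steel-like slab (mu ~ 0.5 /cm) between source and detector.
rate = dose_rate(1e9, 200.0, [5.0], [0.5])
unshielded = dose_rate(1e9, 200.0, [], [])
print(rate / unshielded)   # attenuation factor of the slab, build-up included
```

    The geometric step, finding which shields each ray crosses and for what path length, is exactly the intersection workload the paper moves onto the GPU.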

  4. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    The feasibility of an on-board earth resources data processor to be launched during the 1980-1990 time frame was investigated. Projected user applications were studied to define the data formats and the information extraction algorithms that the processor must execute. Based on these constraints, and the constraints imposed by the available technology, on-board processor systems were designed and their feasibility evaluated. Conclusions and recommendations are given.

  5. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.
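    Frequency-stability comparisons of this kind are conventionally quantified with the Allan variance: given fractional-frequency averages y_k at sampling interval tau, sigma_y^2(tau) = 0.5 * <(y_{k+1} - y_k)^2>. The sketch below is a minimal non-overlapping estimator with a synthetic white-frequency-noise series; the noise level is an arbitrary illustration, not USNO data.

```python
import math
import random

def allan_deviation(y):
    """Non-overlapping Allan deviation at the basic sampling interval."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# For white frequency noise the Allan deviation at tau0 equals the
# standard deviation of the y_k, so this prints a value close to 1e-12.
random.seed(0)
y = [random.gauss(0.0, 1e-12) for _ in range(100000)]
print(allan_deviation(y))
```

    The Allan deviation is used instead of the ordinary standard deviation because clock noise is non-stationary; plotting it against tau separates the noise types that dominate at different averaging times.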

  6. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  7. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  8. F-8 DFBW on-board electronics

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The Apollo hardware jammed into the F-8C. The computer is partially visible in the avionics bay at the top of the fuselage behind the cockpit. Note the display and keyboard unit in the gun bay. To carry the computers and other equipment, the F-8 DFBW team removed the aircraft's guns and ammunition boxes. The F-8 Digital Fly-By-Wire (DFBW) flight research project validated the principal concepts of all-electric flight control systems now used on nearly all modern high-performance aircraft and on military and civilian transports. The first flight of the 13-year project was on May 25, 1972, with research pilot Gary E. Krier at the controls of a modified F-8C Crusader that served as the testbed for the fly-by-wire technologies. The project was a joint effort between the NASA Flight Research Center, Edwards, California, (now the Dryden Flight Research Center) and Langley Research Center. It included a total of 211 flights. The last flight was December 16, 1985, with Dryden research pilot Ed Schneider at the controls. The F-8 DFBW system was the forerunner of current fly-by-wire systems used in the space shuttles and on today's military and civil aircraft to make them safer, more maneuverable, and more efficient. Electronic fly-by-wire systems replaced older hydraulic control systems, freeing designers to design aircraft with reduced in-flight stability. Fly-by-wire systems are safer because of their redundancies. They are more maneuverable because computers can command more frequent adjustments than a human pilot can. For airliners, computerized control ensures a smoother ride than a human pilot alone can provide. Digital-fly-by-wire is more efficient because it is lighter and takes up less space than the hydraulic systems it replaced. This either reduces the fuel required to fly or increases the number of passengers or pounds of cargo the aircraft can carry. Digital fly-by-wire is currently used in a variety of aircraft ranging from F/A-18 fighters to the Boeing 777

  9. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E; Pollard, Martin J; Elkin, Christopher J

    2005-10-11

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

  10. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E.; Pollard, Martin J.; Elkin, Christopher J.

    2006-12-12

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

  11. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  12. Properties Of High-Performance Thermoplastics

    NASA Technical Reports Server (NTRS)

    Johnston, Norman J.; Hergenrother, Paul M.

    1992-01-01

    Report presents review of principal thermoplastics (TP's) used to fabricate high-performance composites. Sixteen principal TP's considered as candidates for fabrication of high-performance composites presented along with names of suppliers, Tg, Tm (for semicrystalline polymers), and approximate maximum processing temperatures.

  13. An Associate Degree in High Performance Manufacturing.

    ERIC Educational Resources Information Center

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  14. Fault Tolerance and COTS: Next Generation of High Performance Satellite Computers

    NASA Astrophysics Data System (ADS)

    Behr, P.; Bärwald, W.; Brieß, K.; Montenegro, S.

    The increasing complexity of future satellite missions requires adequately powerful on-board computer systems. The obvious performance gap between state-of-the-art microprocessor technology ("commercial-off-the-shelf", COTS) and available radiation-hard components already impedes the realization of innovative satellite applications requiring high performance on-board data processing. In the paper we emphasize the advantages of the COTS approach for future OBCS and we show why we are convinced that this approach is feasible. We present the architecture of the fault tolerant control computer of the BIRD satellite and finally we show some results of the BIRD mission after 20 months in orbit, especially the experience with its COTS based control computer.

  15. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  16. High Performance Diesel Fueled Cabin Heater

    SciTech Connect

    Butcher, Tom

    2001-08-05

    Recent DOE-OHVT studies show that diesel emissions and fuel consumption can be greatly reduced at truck stops by switching from engine idle to auxiliary-fired heaters. Brookhaven National Laboratory (BNL) has studied high performance diesel burner designs that address the shortcomings of current low fire-rate burners. Initial test results suggest a real opportunity for the development of a truly advanced truck heating system. The BNL approach is to use a low-pressure, air-atomized burner derived from burner designs used commonly in gas turbine combustors. This paper reviews the design and test results of the BNL diesel fueled cabin heater. The burner design is covered by U.S. Patent 6,102,687, issued to U.S. DOE on August 15, 2000. The development of several novel oil burner applications based on low-pressure air atomization is described. The atomizer used is a pre-filming, air-blast nozzle of the type commonly used in gas turbine combustion. The air pressure used can be as low as 1300 Pa, and such pressure can be easily achieved with a fan. Advantages over conventional, pressure-atomized nozzles include the ability to operate at low input rates without very small passages and much lower fuel pressure requirements. At very low firing rates the small passage sizes in pressure-swirl nozzles lead to poor reliability, and this factor has practically constrained these burners to firing rates over 14 kW. Air atomization can be used very effectively at low firing rates to overcome this concern. However, many air atomizer designs require pressures that can be achieved only with a compressor, greatly complicating the burner package and increasing cost. The work described in this paper has been aimed at the practical adaptation of low-pressure air atomization to low input oil burners. The objective of this work is the development of burners that can achieve the benefits of air atomization with air pressures practically achievable with a simple burner fan.

  17. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  18. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  19. Three-dimensional optical lines fabricated onto substrate for on-board interconnection

    NASA Astrophysics Data System (ADS)

    Matsubara, Takahiro; Oda, Keiko; Watanabe, Keiichiro; Maetani, Maraki; Tanaka, Kaori; Tanahashi, Shigeo

    2009-02-01

    Optical lines using polymer materials fabricated on an organic substrate with metal lines and pads are proposed to realize fully optical interconnections among high performance LSIs. These optical lines enable the transmission of high-speed optical signals not only along the plane surface but also in the vertical direction. They have the following four particular portions: (1) curved parallel optical waveguide; (2) 45 degree reflection mirror; (3) optical via hole with coaxial structure; (4) optical joint between package and board. The optical lines were characterized by transmission loss and eye-diagram measurements, and good optical signal transmission was confirmed for practical use in optical interconnection between LSIs. On-board optical signal transmission is then demonstrated with a VCSEL and PIN-PD assembled by flip-chip technology on a circuit board together with other electric devices of the driving circuit, and the package-to-board optical joint is also demonstrated after passing through the solder reflow process.

  20. Digital tomosynthesis with an on-board kilovoltage imaging device

    SciTech Connect

    Godfrey, Devon J., E-mail: devon.godfrey@duke.edu; Yin, F.-F.; Oldham, Mark; Yoo, Sua; Willett, Christopher

    2006-05-01

    Purpose: To generate on-board digital tomosynthesis (DTS) and reference DTS images for three-dimensional image-guided radiation therapy (IGRT) as an alternative to conventional portal imaging or on-board cone-beam computed tomography (CBCT). Methods and Materials: Three clinical cases (prostate, head-and-neck, and liver) were selected to illustrate the capabilities of on-board DTS for IGRT. Corresponding reference DTS images were reconstructed from digitally reconstructed radiographs computed from planning CT image sets. The effect of scan angle on DTS slice thickness was examined by computing the mutual information between coincident CBCT and DTS images, as the DTS scan angle was varied from 0° to 165°. A breath-hold DTS acquisition strategy was implemented to remove respiratory motion artifacts. Results: Digital tomosynthesis slices appeared similar to coincident CBCT planes and yielded substantially more anatomic information than either kilovoltage or megavoltage radiographs. Breath-hold DTS acquisition improved soft-tissue visibility by suppressing respiratory motion. Conclusions: Improved bony and soft-tissue visibility in DTS images is likely to improve target localization compared with radiographic verification techniques and might allow for daily localization of a soft-tissue target. Breath-hold DTS is a potential alternative to on-board CBCT for sites prone to respiratory motion.
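The abstract's use of mutual information to quantify how closely DTS slices match coincident CBCT planes can be sketched with a standard histogram-based estimator. This is a generic illustration, not the authors' implementation; the image arrays and bin count are hypothetical.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equally shaped images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability estimate
    px = pxy.sum(axis=1)               # marginal of img_a
    py = pxy.sum(axis=0)               # marginal of img_b
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# A slice shares maximal information with itself and little with unrelated noise,
# mirroring how a wider DTS scan angle drives DTS slices toward CBCT planes.
rng = np.random.default_rng(0)
slice_cbct = rng.random((64, 64))
slice_noise = rng.random((64, 64))
mi_same = mutual_information(slice_cbct, slice_cbct)
mi_diff = mutual_information(slice_cbct, slice_noise)
```

As a plug-in estimate of a KL divergence, this quantity is always nonnegative, so it gives a monotone similarity score suitable for comparing scan angles.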

  1. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the on-board to off-board communications protocol. All emission related messages sent to the scan tool... equipment used to interface, extract and display OBD-related information shall meet SAE J1978 “OBD II Scan... demonstrating compliance. In addition, demonstration of compliance with 13 CCR 1968.2(e)(16.2.1)(C), to...

  2. Intelligent Sensors and Components for On-Board ISHM

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge; Morris, Jon; Nickles, Donald; Schmalzel, Jorge; Rauth, David; Mahajan, Ajay; Utterbach, L.; Oesch, C.

    2006-01-01

    A viewgraph presentation on the development of intelligent sensors and components for on-board Integrated Systems Health Management (ISHM) is shown. The topics include: 1) Motivation; 2) Integrated Systems Health Management (ISHM); 3) Intelligent Components; 4) IEEE 1451; 5) Intelligent Sensors; 6) Application; and 7) Future Directions.

  3. Some design considerations for high-performance infrared imaging seeker

    NASA Astrophysics Data System (ADS)

    Fan, Jinxiang; Huang, Jianxiong

    2015-10-01

    In recent years, precision-guided weapons have played an increasingly important role in modern warfare, and the development and application of infrared imaging guidance technology have received growing attention. As missions and environments become more complex, precision-guided weapons place stricter demands on the infrared imaging seeker: high detection sensitivity, large dynamic range, strong target recognition capability, robust anti-jamming performance, and good environmental adaptability. To meet these demands, several important issues should be considered in the design of a high-performance infrared imaging seeker. The mission, targets, and environment of the infrared imaging guided missile must be taken into account; tradeoffs must be made among performance goals, design parameters, infrared technology constraints, and missile constraints; and the application of the IRFPA and automatic target recognition (ATR) in complicated environments should be optimized. This paper discusses these design considerations for high-performance infrared imaging seekers.

  4. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  5. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  6. System analysis of high performance MHD systems

    SciTech Connect

    Chang, S.L.; Berry, G.F.; Hu, N.

    1988-01-01

    This paper presents the results of an investigation on the upper ranges of performance that an MHD power plant using advanced technology assumptions might achieve and a parametric study on the key variables affecting this high performance. To simulate a high performance MHD power plant and conduct a parametric study, the Systems Analysis Language Translator (SALT) code developed at Argonne National Laboratory was used. The parametric study results indicate that the overall efficiency of an MHD power plant can be further increased through the improvement of key variables such as the MHD generator inverter efficiency, channel electrical loading factor, magnetic field strength, preheated air temperature, and combustor heat loss. In an optimization calculation, the simulated high performance MHD power plant using advanced technology assumptions can attain an ultra-high overall efficiency, exceeding 62%. 12 refs., 5 figs., 4 tabs.

  7. High Performance Computing with Harness over InfiniBand

    SciTech Connect

    Valentini, Alessandro; Di Biagio, Christian; Batino, Fabrizio; Pennella, Guido; Palma, Fabrizio; Engelmann, Christian

    2009-01-01

    Harness is an adaptable, plug-in-based middleware framework able to support distributed parallel computing. It is currently based on the Ethernet protocol, which cannot guarantee high-performance throughput or real-time (deterministic) performance. In recent years, both research and industry have developed new network architectures (InfiniBand, Myrinet, iWARP, etc.) to overcome those limits. This paper concerns the integration of Harness with InfiniBand, focusing on two solutions: IP over InfiniBand (IPoIB) and Socket Direct Protocol (SDP) technology. These allow the Harness middleware to take advantage of the enhanced features provided by the InfiniBand Architecture.

  8. High performance protection circuit for power electronics applications

    SciTech Connect

    Tudoran, Cristian D.; Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

    2015-12-23

    In this paper we present a high performance protection circuit designed for the power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, like in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, having the role of a “sensor” or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.
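The microcontroller-based protection logic described here (trip the supply when load current exceeds the allowed maximum) can be sketched as a sampling loop. This is a hypothetical illustration of the general technique, not the authors' firmware; the limit, sample count, and function names are assumptions. Requiring several consecutive over-limit samples filters out brief switching transients.

```python
def protection_step(current_a, limit_a, persist_samples, state):
    """One sampling step of a hypothetical overcurrent protection routine.

    Trips (interrupts the supply) only after `persist_samples` consecutive
    over-limit readings, so short transients do not cause nuisance trips.
    """
    if current_a > limit_a:
        state["over_count"] += 1
    else:
        state["over_count"] = 0
    if state["over_count"] >= persist_samples:
        state["tripped"] = True   # in hardware: open the supply relay / gate off the inverter
    return state["tripped"]

state = {"over_count": 0, "tripped": False}
readings = [10, 12, 55, 54, 53, 9]   # amps; limit 50 A, must persist for 3 samples
trips = [protection_step(i, 50, 3, state) for i in readings]
# trips -> [False, False, False, False, True, True]
```

Once tripped, the state latches until explicitly reset, matching the "external, independent protection circuit" role described in the abstract.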

  9. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E.; Pollard, Martin J.; Elkin, Christopher J.

    2009-02-03

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are further improvements to aspects of the hybrid magnetic structure, including additional elements and for adapting the use of the hybrid magnetic structure for use in biotechnology and high throughput processes.

  10. High performance protection circuit for power electronics applications

    NASA Astrophysics Data System (ADS)

    Tudoran, Cristian D.; Dǎdârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

    2015-12-01

    In this paper we present a high performance protection circuit designed for the power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, like in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, having the role of a "sensor" or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.

  11. Using LEADS to shift to high performance.

    PubMed

    Fenwick, Shauna; Hagge, Erna

    2016-03-01

    Health systems across Canada are tasked to measure results of all their strategic initiatives. Included in most strategic plans is leadership development. How to measure leadership effectiveness in relation to organizational objectives is key in determining organizational effectiveness. The following findings offer considerations for a 21st-century approach to shifting to high-performance systems.

  12. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  13. Team Development for High Performance Management.

    ERIC Educational Resources Information Center

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  14. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High-performance fighter aircraft of interest include supersonic aircraft with capabilities such as short takeoff and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  15. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  16. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  17. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  18. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  19. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
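The patented debugging method above (gather each thread's calling-instruction address, group threads by address, display the groups) can be sketched as a simple grouping pass. This is an illustrative reconstruction of the technique as the abstract states it, not the patented implementation; the addresses and thread counts are hypothetical.

```python
from collections import defaultdict

def group_threads_by_call_address(thread_addresses):
    """Group thread IDs by the address of the instruction each thread is calling from.

    thread_addresses: mapping of thread id -> calling-instruction address.
    In a healthy SPMD program most threads cluster at the same address;
    small outlying groups are candidate defective threads.
    """
    groups = defaultdict(list)
    for tid, addr in thread_addresses.items():
        groups[addr].append(tid)
    return dict(groups)

# 1022 threads wait at one barrier address; two stragglers are stuck elsewhere.
addrs = {t: 0x40123A for t in range(1022)}
addrs[1022] = 0x40200F
addrs[1023] = 0x40200F
groups = group_threads_by_call_address(addrs)
outliers = min(groups.values(), key=len)
# outliers -> [1022, 1023]
```

Displaying the few small groups rather than all threads is what makes this scale to the thousands of threads typical of high performance computing programs.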

  20. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  1. Co-design for High Performance Computing

    NASA Astrophysics Data System (ADS)

    Rodrigues, Arun; Dosanjh, Sudip; Hemmert, Scott

    2010-09-01

    Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

  2. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United States.…

  3. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  4. On-Board Processor and Network Maturation for Ariane 6

    NASA Astrophysics Data System (ADS)

    Clavier, Rémi; Sautereau, Pierre; Sangaré, Jérémie; Disson, Benjamin

    2015-09-01

    In the past three years, innovative avionic technologies for Ariane 6 were evaluated in the tail of three main programs involving various stakeholders: FLPP (Future Launcher Preparatory Program, from ESA), AXE (Avionic-X European, formerly Avionique-X, French public R&T program) and CNES R&T program relying on industrial partnerships. In each avionics’ domain, several technologies were compared, analyzed and tested regarding space launchers system expectations and constraints. Within the frame of on-board data handling, two technologies have been identified as promising: ARM based microprocessors for the computing units and TTEthernet for the on-board network. This paper presents the main outcomes of the data handling preparatory activities performed on the AXE platform in Airbus Defence and Space - Les Mureaux.

  5. Octafluoropropane Concentration Dynamics on Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Perry, J. L.

    2003-01-01

    Since activation of the International Space Station's (ISS) Service Module in November 2000, archival air quality samples have shown highly variable concentrations of octafluoropropane in the cabin. This variability has been directly linked to leakage from air conditioning systems on board the Service Module, Zvezda. While octafluoropropane is not highly toxic, it presents a significant challenge to the trace contaminant control systems. A discussion of octafluoropropane concentration dynamics is presented, and the ability of on-board trace contaminant control systems to effectively remove octafluoropropane from the cabin atmosphere is assessed. Consideration is given to operational and logistics issues that may arise from octafluoropropane and other halocarbon challenges to the contamination control systems, as well as the potential for affecting cabin air quality.
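Trace-contaminant dynamics of the kind described above are commonly modeled with a single well-mixed-cabin mass balance, dC/dt = L/V − (Q/V)·C, where L is the leak rate, V the cabin volume, and Q the effective removal flow of the contaminant control systems. The sketch below is a generic illustration with hypothetical parameter values, not ISS or Zvezda data.

```python
def cabin_concentration(c0_mg_m3, leak_mg_hr, volume_m3, removal_m3_hr, hours, dt=0.1):
    """Well-mixed cabin mass balance, integrated with forward Euler.

    dC/dt = L/V - (Q/V) * C   (all parameter values here are hypothetical)
    """
    c = c0_mg_m3
    t = 0.0
    while t < hours:
        c += dt * (leak_mg_hr / volume_m3 - (removal_m3_hr / volume_m3) * c)
        t += dt
    return c

# With a constant leak, the concentration approaches the steady state L/Q,
# which is why sustained leakage loads the trace contaminant control systems.
c_final = cabin_concentration(0.0, leak_mg_hr=50.0, volume_m3=100.0,
                              removal_m3_hr=10.0, hours=200.0)
# steady state = 50 / 10 = 5 mg/m^3
```

The time constant V/Q (10 hours with these numbers) indicates how quickly the cabin responds to a leak starting or stopping.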

  6. BOLERO: on board software library for space flight dynamics applications

    NASA Astrophysics Data System (ADS)

    Pontet, B.; Laurichesse, D.

    2002-07-01

    The BOLERO library ("Bibliothèque d'Objets Logiciels Embarqués pour la Restitution d'Orbite", or "on-board object software library for orbit determination") gathers CNES skills in space flight dynamics for orbit determination applications. This library offers a set of objects and services that can be used to build applications such as an on-board real-time navigator that can be integrated into various targets such as an ERC32 computer. The aim of this paper is to describe this library: its development process and the principles of its use. The detailed space flight dynamics aspects are outside the scope of this paper. First, the library content and the architecture are specified. Then the library is developed and tested on a target computer for qualification. An application (DIONE) is developed in order to test BOLERO performances. Then a "customised supply" is generated: it contains a subset of BOLERO services with the associated documentation.

  7. EMI Standards for Wireless Voice and Data on Board Aircraft

    NASA Technical Reports Server (NTRS)

    Ely, Jay J.; Nguyen, Truong X.

    2002-01-01

    The use of portable electronic devices (PEDs) on board aircraft continues to be an increasing source of misunderstanding between passengers and flight-crews, and consequently, an issue of controversy between wireless product manufacturers and air transport regulatory authorities. This conflict arises primarily because of the vastly different regulatory objectives between commercial product and airborne equipment standards for avoiding electromagnetic interference (EMI). This paper summarizes international regulatory limits and test processes for measuring spurious radiated emissions from commercially available PEDs, and compares them to international standards for airborne equipment. The goal is to provide insight for wireless product developers desiring to extend the freedom of their customers to use wireless products on-board aircraft, and to identify future product characteristics, test methods and technologies that may facilitate improved wireless freedom for airline passengers.

  8. On-Board Switching and Routing Advanced Technology Study

    NASA Technical Reports Server (NTRS)

    Yegenoglu, F.; Inukai, T.; Kaplan, T.; Redman, W.; Mitchell, C.

    1998-01-01

    Future satellite communications is expected to be fully integrated into National and Global Information Infrastructures (NII/GII). These infrastructures will carry multi-gigabit-per-second data rates, with integral switching and routing of constituent data elements. The satellite portion of these infrastructures must, therefore, be more than pipes through the sky. The satellite portion will also be required to perform very high speed routing and switching of these data elements to enable efficient broad area coverage to many home and corporate users. The technology to achieve the on-board switching and routing must be selected and developed specifically for satellite application within the next few years. This report presents an evaluation of potential technologies for on-board switching and routing applications.

  9. Autonomous On-Board Calibration of Attitude Sensors and Gyros

    NASA Technical Reports Server (NTRS)

    Pittelkau, Mark E.

    2007-01-01

    This paper presents the state of the art and future prospects for autonomous real-time on-orbit calibration of gyros and attitude sensors. The current practice in ground-based calibration is presented briefly to contrast it with on-orbit calibration. The technical and economic benefits of on-orbit calibration are discussed. Various algorithms for on-orbit calibration are evaluated, including some that are already operating on board spacecraft. Because Redundant Inertial Measurement Units (RIMUs, which are IMUs that have more than three sense axes) are almost ubiquitous on spacecraft, special attention will be given to calibration of RIMUs. In addition, we discuss autonomous on-board calibration and how it may be implemented.

  10. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented along with a candidate data format. Computational requirements required to implement the data analysis algorithms are included along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested and the environmental effects of an on-board processor discussed. By relating performance parameters to the system requirements of each of the user requirements the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.

  11. On-Board Perception System For Planetary Aerobot Balloon Navigation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Scheid, Robert E.; Salomon, Phil T.

    1996-01-01

    NASA's Jet Propulsion Laboratory is implementing the Planetary Aerobot Testbed to develop the technology needed to operate a robotic balloon aero-vehicle (Aerobot). This earth-based system would be the precursor for aerobots designed to explore Venus, Mars, Titan and other gaseous planetary bodies. The on-board perception system allows the aerobot to localize itself and navigate on a planet using information derived from a variety of celestial, inertial, ground-imaging, ranging, and radiometric sensors.

  12. Technical feasibility of an ROV with on-board power

    SciTech Connect

    Sayer, P.; Bo, L.

    1994-12-31

    An ROV's electric power, control and communication signals are supplied from a surface ship or platform through an umbilical cable. Though cable design has evolved steadily, there are still severe limitations such as heavy weight and cost. It is well known that the drag imposed by the cable limits the operational range of the ROV in deep water. On the other hand, a cable-free AUV presents problems in control, communication and transmission of data. Therefore, an ROV with on-board power and a small-diameter cable could offer both a large operating range (footprint) and real-time control. This paper considers the feasibility of such an ROV with on-board power, namely a Self-Powered ROV (SPROV). The selection of possible power sources is first discussed before comparing the operational performance of an SPROV against a conventional ROV. It is demonstrated how an SPROV with a 5 mm diameter tether offers a promising way forward, with on-board power of up to 40 kW over 24 hours. In water depths greater than 50 m the reduced drag of the SPROV tether is very advantageous.
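The drag advantage of a thin tether can be sketched with the standard cross-flow drag formula for a cylinder, F = ½·ρ·Cd·(d·L)·v². The comparison below is a hypothetical illustration; only the 5 mm tether diameter comes from the abstract, while the umbilical diameter, cable length, speed, and drag coefficient are assumed values.

```python
def tether_drag_n(diameter_m, length_m, speed_m_s, rho=1025.0, cd=1.2):
    """Cross-flow drag on a cylindrical cable: F = 0.5 * rho * Cd * (d*L) * v^2.

    rho: seawater density (kg/m^3); cd: assumed drag coefficient for a cylinder.
    """
    area = diameter_m * length_m          # projected area of the cable
    return 0.5 * rho * cd * area * speed_m_s ** 2

# Hypothetical comparison at 1 m/s over a 500 m cable run:
conventional = tether_drag_n(0.030, 500.0, 1.0)   # assumed 30 mm umbilical
sprov = tether_drag_n(0.005, 500.0, 1.0)          # 5 mm tether from the abstract
# Drag scales linearly with diameter, so the thin tether sees 1/6 the drag.
```

This linear scaling with diameter is why shrinking the cable, at the cost of carrying power on board, extends the vehicle's footprint in deep water.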

  13. Thematic data processing on board the satellite BIRD

    NASA Astrophysics Data System (ADS)

    Halle, Winfried; Venus, Holger; Skrbek, Wolfgang

    2000-11-01

    The general trend in remote sensing is, on the one hand, to increase the number of spectral bands and the geometric resolution of imaging sensors, which leads to higher data rates and data volumes. On the other hand, the user is often interested only in specific information from the received sensor data, not in the whole data mass. Given these two tendencies, a substantial part of the signal pre-processing can already be done on board a satellite for specific users and tasks. The BIRD (Bispectral InfraRed Detection) mission takes a new approach to on-board data processing. The main goal of the BIRD mission is fire recognition and the detection of hot spots. This paper describes the technical solution of an on-board image data processing system based on a sensor system with two new IR sensors and the stereo line scanner WAOSS (Wide-Angle-Optoelectronic-Scanner). The aim of this data processing system is to reduce the data stream from the satellite through the on-board generation of geo-coded thematic maps. This reduction is achieved by a multispectral classification. For this classification, special hardware based on the neural network processor NI1000 was designed. This hardware is integrated in the payload data handling system of the satellite.

  14. Thematic data processing on board the satellite BIRD

    NASA Astrophysics Data System (ADS)

    Halle, Winfried

    2001-12-01

    The general trend in remote sensing is, on the one hand, to increase the number of spectral bands and the geometric resolution of imaging sensors, which leads to higher data rates and data volumes. On the other hand, the user is often interested only in specific information from the received sensor data, not in the whole data mass. Given these two tendencies, a substantial part of the signal pre-processing can already be done on board a satellite for specific users and tasks. For the BIRD (Bispectral InfraRed Detection) mission a new approach to on-board data processing is made. The main goal of the BIRD mission is fire recognition and the detection of hot spots. This paper describes the technical solution of an on-board image data processing system based on a sensor system with two new IR sensors and the stereo line scanner WAOSS (Wide-Angle-Optoelectronic-Scanner). The aim of this data processing system is to reduce the data stream from the satellite through the on-board generation of geo-coded thematic maps. This reduction is achieved by a multispectral classification. For this classification, special hardware based on the neural network processor NI1000 was designed. This hardware is integrated in the payload data handling system of the satellite.

  15. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process of creating and implementing the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  16. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  17. High performance flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1992-01-01

    The use of real-time simulation at the NASA facility is reviewed, specifically with regard to hardware, software, and the use of a fiber-optic-based digital simulation network. The network hardware includes supercomputers that support 32- and 64-bit scalar, vector, and parallel processing technologies. The software includes drivers, real-time supervisors, and routines for site-configuration management and scheduling. Performance specifications include: (1) benchmark solution at 165 sec for a single CPU; (2) a transfer rate of 24 million bits/s; and (3) time-critical system responsiveness of less than 35 msec. Simulation applications include the Differential Maneuvering Simulator, Transport Systems Research Vehicle simulations, and the Visual Motion Simulator. NASA is shown to be in the final stages of developing a high-performance computing system for the real-time simulation of complex high-performance aircraft.
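
    A quick arithmetic check ties together two of the quoted specifications: how much data the network can move within one time-critical response window. Only the figures given in the abstract are used.

```python
# Back-of-envelope check of the quoted figures: at a transfer rate of
# 24 million bits/s, how much simulation data can move within one 35 ms
# time-critical response window?
transfer_rate_bps = 24_000_000   # bits per second (quoted specification)
frame_budget_s = 0.035           # time-critical responsiveness bound, s

bits_per_frame = transfer_rate_bps * frame_budget_s
bytes_per_frame = bits_per_frame / 8   # roughly 105 KB per window
```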

  18. On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.

    2004-12-01

    The Autonomous Sciencecraft Experiment (ASE) is operating on-board Earth Observing-1 (EO-1) with the Hyperion hyper-spectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], has been tested on-board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data could be used, with dark image subtraction and gain factors applied but not full radiometric calibration. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on-board. The cryosphere algorithm was developed to classify snow, water, ice and land, using six Hyperion bands at 427, 559, 661, 864, 1245 and 1649 nm. Of these, only the 427 nm band overlaps with those used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5, is in progress for deployment on EO-1 as part of
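
    A hypothetical per-pixel classifier in the spirit of the six-band scheme described above can be sketched as follows. The NDSI-style snow index and all thresholds are illustrative assumptions, not the values used by the flight algorithm; only the six band centers come from the abstract.

```python
# Hypothetical per-pixel classifier in the spirit of the six-band scheme
# described in the abstract. The NDSI-style index and all thresholds are
# illustrative assumptions, not the flight algorithm's actual values.
BANDS = (427, 559, 661, 864, 1245, 1649)   # nm, as listed in the abstract

def classify_pixel(refl):
    """refl maps band center (nm) to reflectance; returns a class label."""
    ndsi = (refl[559] - refl[1649]) / (refl[559] + refl[1649])
    if refl[864] < 0.1 and refl[1649] < 0.05:
        return "water"                       # dark at NIR and SWIR
    if ndsi > 0.4 and refl[864] > 0.3:
        return "snow"                        # bright visible/NIR, dark SWIR
    if ndsi > 0.4:
        return "ice"
    return "land"

# Made-up reflectance spectra for two example pixels
snow_px = {427: 0.90, 559: 0.85, 661: 0.80, 864: 0.75, 1245: 0.30, 1649: 0.10}
water_px = {427: 0.08, 559: 0.06, 661: 0.04, 864: 0.02, 1245: 0.01, 1649: 0.01}
```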

  19. High performance microsystem packaging: A perspective

    SciTech Connect

    Romig, A.D. Jr.; Dressendorfer, P.V.; Palmer, D.W.

    1997-10-01

    The second silicon revolution will be based on intelligent, integrated microsystems where multiple technologies (such as analog, digital, memory, sensor, micro-electro-mechanical, and communication devices) are integrated onto a single chip or within a multichip module. A necessary element for such systems is cost-effective, high-performance packaging. This paper examines many of the issues associated with the packaging of integrated microsystems, with an emphasis on the areas of packaging design, manufacturability, and reliability.

  20. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with the characteristics of rather low raw materials cost, great potential for simple processing and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of the vapor assisted solution process (VASP) and the moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite films, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architectures, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  1. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  2. Programming high-performance reconfigurable computers

    NASA Astrophysics Data System (ADS)

    Smith, Melissa C.; Peterson, Gregory D.

    2001-07-01

    High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications, but often are too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address the problems with high hardware costs, one may create more inexpensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming the HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains come from the development of common APIs and libraries of functions to support distributed applications. Examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) into HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.
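
    The performance-evaluation problem the paper raises can be illustrated with the standard Amdahl's-law bound: even a small serial fraction caps the speedup attainable on a parallel or reconfigurable platform, which is why porting effort matters so much. The 5% serial fraction below is an illustrative figure, not from the paper.

```python
# Amdahl's-law sketch: ideal speedup for a code with a given serial
# fraction running on n processors. The 5% figure is illustrative.
def speedup(serial_fraction, n_processors):
    """Ideal Amdahl speedup: 1 / (s + (1 - s) / n)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

s64 = speedup(0.05, 64)   # ~15.4x, far below the 64x hardware peak
```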

  3. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  4. iSAFT Protocol Validation Platform for On-Board Data Networks

    NASA Astrophysics Data System (ADS)

    Tavoularis, Antonis; Marinis, Kostas; Kollias, Vangelis

    2014-08-01

    iSAFT is an integrated, powerful HW/SW environment for the simulation, validation & monitoring of satellite/spacecraft on-board data networks, supporting simultaneously a wide range of protocols (RMAP, PTP, CCSDS Space Packet, TM/TC, CANopen, etc.) and network interfaces (SpaceWire, ECSS MIL-STD-1553, ECSS CAN). It is based on over 20 years of TELETEL's experience in the area of protocol validation in the telecommunications and aeronautical sectors, and it has been fully re-engineered in cooperation between TELETEL, ESA and space Primes to comply with space on-board industrial validation requirements (ECSS, EGSE, AIT, AIV, etc.). iSAFT is highly modular and expandable to support new network interfaces & protocols, and it is based on the powerful iSAFT graphical tool chain (Protocol Analyser/Recorder, TestRunner, Device Simulator, Traffic Generator, etc.). iSAFT can be used for the validation of units used in specific scientific missions, like the GAIA Video Processing Unit, which generate large volumes of data and whose validation can become very demanding. In these cases both the recording and the simulation exceed the performance of many existing test systems, and test equipment is parallelized, leading to complex EGSE architectures and generating SW synchronization issues. This paper presents the functional and performance characteristics of two instances of the iSAFT system, the iSAFT Recorder and the iSAFT Simulator Traffic Generation engine. The work presented in this paper was carried out in the frame of ESTEC Contract no. 4000105444/12/NL/CBI [titled "Protocol Validation System (PVS) activity"], and the results prove that, for both recording and simulation, iSAFT can be trusted even in missions with very high performance requirements.
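
    The kind of data-rate budget such a recorder or traffic generator must meet can be sketched with a toy calculation (this is not the iSAFT API): how many fixed-size packets must be emitted per test window to sustain a target rate? The 100 Mbit/s rate and 1 KiB packet size are illustrative numbers, not mission figures.

```python
# Toy data-rate budget in the spirit of recorder/traffic-generator
# validation (not the iSAFT API). Rate and packet size are illustrative.
def required_packets(rate_mbps, window_s, packet_bytes):
    """Packets per window needed to sustain rate_mbps for window_s seconds."""
    total_bits = rate_mbps * 1e6 * window_s
    return int(total_bits // (packet_bytes * 8))

n = required_packets(100, 1.0, 1024)   # packets for a 1 s window
```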

  5. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  6. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  7. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  8. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  9. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance. PMID:16028814

  10. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.
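
    The core of a model-based approach of the kind described can be sketched in a few lines: compare each sensor reading against a model prediction and flag components whose residual exceeds a threshold, with no per-failure signatures required. The component names, values, and threshold below are illustrative, not from the TRW system.

```python
# Minimal model-based fault detection sketch: flag components whose
# measured value deviates from the model prediction by more than a
# threshold. Names and numbers are illustrative, not from the TRW system.
def detect_faults(predicted, measured, threshold=1.0):
    """Return names of components whose residual exceeds the threshold."""
    return [name for name in predicted
            if abs(predicted[name] - measured[name]) > threshold]

predicted = {"bus_voltage": 28.0, "battery_temp": 20.0, "wheel_speed": 3000.0}
measured  = {"bus_voltage": 27.95, "battery_temp": 26.5, "wheel_speed": 3001.0}
faults = detect_faults(predicted, measured)
```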

  11. MODIS On-Board Blackbody Function and Performance

    NASA Technical Reports Server (NTRS)

    Xiaoxiong, Xiong; Wenny, Brian N.; Wu, Aisheng; Barnes, William

    2009-01-01

    Two MODIS instruments are currently in orbit, making continuous global observations in visible to long-wave infrared wavelengths. Compared to heritage sensors, MODIS was built with an advanced set of on-board calibrators, providing sensor radiometric, spectral, and spatial calibration and characterization during on-orbit operation. For the thermal emissive bands (TEB) with wavelengths from 3.7 µm to 14.4 µm, a v-grooved blackbody (BB) is used as the primary calibration source. The BB temperature is accurately measured each scan (1.47 s) using a set of 12 temperature sensors traceable to NIST temperature standards. The on-board BB is nominally operated at a fixed temperature, 290 K for Terra MODIS and 285 K for Aqua MODIS, to compute the TEB linear calibration coefficients. Periodically, its temperature is varied from 270 K (instrument ambient) to 315 K in order to evaluate and update the nonlinear calibration coefficients. This paper describes MODIS on-board BB functions with emphasis on on-orbit operation and performance. It examines the BB temperature uncertainties under different operational conditions and their impact on TEB calibration and data product quality. The temperature uniformity of the BB is also evaluated using TEB detector responses at different operating temperatures. On-orbit results demonstrate excellent short-term and long-term stability for both the Terra and Aqua MODIS on-board BB. The on-orbit BB temperature uncertainty is estimated to be 10 mK for Terra MODIS at 290 K and 5 mK for Aqua MODIS at 285 K, thus meeting the TEB design specifications. In addition, there has been no measurable BB temperature drift over the entire mission of both Terra and Aqua MODIS.
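
    The two-point idea behind blackbody calibration can be illustrated as follows (this is a sketch, not the actual MODIS algorithm): the Planck radiance at the known BB set point, plus a near-zero space view, anchors a linear counts-to-radiance fit for one TEB detector. The 11 µm channel choice and the detector counts are hypothetical; only the 290 K Terra set point comes from the abstract.

```python
import math

# Illustrative two-point linear calibration (not the actual MODIS
# algorithm). Detector counts below are hypothetical.
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / math.expm1(H * C / (wavelength_m * KB * temp_k))

wl = 11.0e-6                         # an 11 um TEB channel (illustrative)
l_bb = planck_radiance(wl, 290.0)    # Terra BB nominal set point
l_sv = 0.0                           # space view, essentially zero radiance

c_bb, c_sv = 3200.0, 200.0           # hypothetical detector counts
gain = (l_bb - l_sv) / (c_bb - c_sv)  # linear coefficient, radiance/count
```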

  12. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  13. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  14. High performance pitch-based carbon fiber

    SciTech Connect

    Tadokoro, Hiroyuki; Tsuji, Nobuyuki; Shibata, Hirotaka; Furuyama, Masatoshi

    1996-12-31

    A high performance pitch-based carbon fiber with a smaller diameter of six microns was developed by Nippon Graphite Fiber Corporation. This fiber possesses high tensile modulus, high tensile strength, excellent yarn handleability, a low thermal expansion coefficient, and high thermal conductivity, which make it an ideal material for space applications such as artificial satellites. Performance of this fiber as a reinforcement of composites was sufficient. With these characteristics, this pitch-based carbon fiber is expected to find a wide variety of possible applications in space structures, industrial fields, sporting goods and civil infrastructure.

  15. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  16. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    High performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosive effect on aluminum or titanium was observed.

  17. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  18. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before necessary to refill with fresh iodine crystals.

  19. Applying CASE Tools for On-Board Software Development

    NASA Astrophysics Data System (ADS)

    Brammer, U.; Hönle, A.

    For many space projects the software development is facing great pressure with respect to quality, costs and schedule. One way to cope with these challenges is the application of CASE tools for automatic generation of code and documentation. This paper describes two CASE tools: Rhapsody (I-Logix) featuring UML and ISG (BSSE) that provides modeling of finite state machines. Both tools have been used at Kayser-Threde in different space projects for the development of on-board software. The tools are discussed with regard to the full software development cycle.
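
    The finite-state-machine models that a tool like ISG generates code from can be sketched as a transition table plus a step function. The states and events below are illustrative placeholders, not taken from an actual Kayser-Threde on-board application.

```python
# Minimal finite-state-machine model of the kind CASE tools generate code
# from. States and events are illustrative placeholders.
TRANSITIONS = {
    ("IDLE", "start_cmd"): "ACQUIRING",
    ("ACQUIRING", "data_ready"): "PROCESSING",
    ("PROCESSING", "done"): "IDLE",
    ("ACQUIRING", "error"): "SAFE",
    ("PROCESSING", "error"): "SAFE",
}

def step(state, event):
    """Return the successor state; undefined events leave the state as-is."""
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in ("start_cmd", "data_ready", "done"):
    state = step(state, event)   # one nominal acquisition cycle
```

    Generating this table and step function from a graphical model, rather than writing them by hand, is exactly the productivity gain the paper attributes to CASE tools.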

  20. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  1. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  2. High-performance vertical organic transistors.

    PubMed

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. PMID:23637074
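
    The usual bulk-transport benchmark against which injection-limited behavior, as reported above, is judged is the Mott-Gurney space-charge-limited law, J = (9/8) eps mu V^2 / L^3. The sketch below evaluates its characteristic L^-3 scaling; the mobility and permittivity values are typical textbook numbers for organic semiconductors, not from this paper.

```python
# Mott-Gurney space-charge-limited current: J = (9/8) * eps * mu * V^2 / L^3.
# Mobility and relative permittivity are typical organic-semiconductor
# assumptions, not values reported in the paper.
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def sclc_current_density(voltage_v, thickness_m, mobility=1e-6, eps_r=3.0):
    """Space-charge-limited current density in A/m^2."""
    return 9.0 / 8.0 * eps_r * EPS0 * mobility * voltage_v**2 / thickness_m**3

j_100nm = sclc_current_density(2.0, 100e-9)
j_50nm = sclc_current_density(2.0, 50e-9)   # halving L raises J eightfold
```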

  3. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system to fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  4. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault-mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  5. High performance stationary phases for planar chromatography.

    PubMed

    Poole, Salwa K; Poole, Colin F

    2011-05-13

    The kinetic performance of stabilized particle layers, particle membranes, and thin films for thin-layer chromatography is reviewed with a focus on how layer characteristics and experimental conditions affect the observed plate height. Forced flow and pressurized planar electrochromatography are identified as the best candidates to overcome the limited performance achieved by capillary flow for stabilized particle layers. For conventional and high performance plates, band broadening is dominated by molecular diffusion at the low mobile-phase velocities typical of capillary flow systems, and by mass transfer, with a significant contribution from flow anisotropy, at the higher flow rates typical of forced flow systems. There are few possible changes to the structure of stabilized particle layers that would significantly improve their performance for capillary flow systems, while for forced flow a number of avenues for further study are identified. New media for ultra-thin-layer chromatography show encouraging possibilities for miniaturized high performance systems, but realizing their true performance requires improvements in instrumentation for sample application and detection.

  6. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  7. Arteriopathy in the high-performance athlete.

    PubMed

    Takach, Thomas J; Kane, Peter N; Madjarov, Jeko M; Holleman, Jeremiah H; Nussbaum, Tzvi; Robicsek, Francis; Roush, Timothy S

    2006-01-01

    Pain occurs frequently in high-performance athletes and is most often due to musculoskeletal injury or strain. However, athletes who participate in sports that require highly frequent, repetitive limb motion can also experience pain from an underlying arteriopathy, which causes exercise-induced ischemia. We reviewed the clinical records and follow-up care of 3 high-performance athletes (mean age, 29.3 yr; range, 16-47 yr) who were admitted consecutively to our institution from January 2002 through May 2003, each with a diagnosis of limb ischemia due to arteriopathy. The study group comprised 3 males: 2 active in competitive baseball (ages, 16 and 19 yr) and a cyclist (age, 47 yr). Provocative testing and radiologic evaluation established the diagnoses. Treatment goals included targeted resection of compressive structures, arterial reconstruction to eliminate stenosis and possible emboli, and improvement of distal perfusion. Our successful reconstructive techniques included thoracic outlet decompression and interpositional bypass of the subclavian artery in the 16-year-old patient, pectoralis muscle and tendon decompression to relieve compression of the axillary artery in the 19-year-old, and patch angioplasty for endofibrosis affecting the external iliac artery in the 47-year-old. Each patient was asymptomatic on follow-up and had resumed participation in competitive athletics. The recognition and anatomic definition of an arteriopathy that produces exercise-induced ischemia enables the application of precise therapy that can produce a symptom-free outcome and the ability to resume competitive athletics.

  8. On-board data management study for EOPAP

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.

    1975-01-01

    The requirements, implementation techniques, and mission analysis associated with on-board data management for EOPAP were studied. SEASAT-A was used as a baseline, and the storage requirements, data rates, and information extraction requirements were investigated for each of the following proposed SEASAT sensors: a short pulse 13.9 GHz radar, a long pulse 13.9 GHz radar, a synthetic aperture radar, a multispectral passive microwave radiometer facility, and an infrared/visible very high resolution radiometer (VHRR). Rate distortion theory was applied to determine theoretical minimum data rates, which were compared with the rates required by practical techniques. It was concluded that practical techniques can be used that approach the theoretical optimum, based on an empirically determined source random process model. The results of the preceding investigations were used to recommend an on-board data management system for (1) data compression through information extraction, optimal noiseless coding, source coding with distortion, data buffering, and data selection under command or as a function of data activity; (2) command handling; (3) spacecraft operation and control; and (4) experiment operation and monitoring.
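The "optimal noiseless coding" recommended above can be illustrated with Huffman coding, the standard optimal prefix code for a memoryless source. This is a minimal sketch, not the study's actual coder; the symbol stream below is hypothetical.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build an optimal prefix code from symbol frequencies."""
    # Heap items: (weight, tie-break id, {symbol: code-so-far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees prepends one bit to every code inside them
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, uid, merged))
        uid += 1
    return heap[0][2]

data = "AAAABBBCCD"                     # hypothetical sensor symbol stream
code = huffman_code(Counter(data))
encoded = "".join(code[s] for s in data)

# Decode by walking the prefix-free code table
inverse = {v: k for k, v in code.items()}
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in inverse:
        decoded.append(inverse[buf])
        buf = ""
decoded = "".join(decoded)
```

For these frequencies the coded stream takes 19 bits versus 20 for a fixed 2-bit code; skewed telemetry distributions yield correspondingly larger savings.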

  9. On-board aircrew dosimetry using a semiconductor spectrometer.

    PubMed

    Spurný, F; Dachev, T

    2002-01-01

    Radiation fields on board aircraft contain particles with energies up to a few hundred MeV. Many instruments have been tested to characterise these fields. This paper presents the results of studies on the use of an Si diode spectrometer to characterise these fields. The spectrometer has been in use since spring 2000 on more than 130 return flights to monitor and characterise the on-board field. During a Czech Airlines flight from Prague to New York it was possible to register the effects of an intense solar flare, (ground level event, GLE 60), which occurred on 15 April 2001. It was found that the number of deposition events registered was increased by about 70% and the dose in Si by a factor of 2.0 when compared with the presence of galactic cosmic rays alone. Directly measured data are interpreted with respect to on-earth reference field calibration (photons, CERN high-energy particles): it was found that this approach leads to encouraging results and should be followed up.

  10. Assessing exposure to cosmic radiation on board aircraft.

    PubMed

    Bottollier-Depois, J F; Chau, Q; Bouisset, P; Kerlau, G; Plawinski, L; Lebaron-Jacobs, L

    2003-01-01

    The assessment of exposure to cosmic radiation on board aircraft is one of the preoccupations of organizations responsible for radiation protection. The cosmic radiation particle flux increases with altitude and latitude and depends on the solar activity. The radiation exposure has been estimated on several airlines using transatlantic, Siberian and transequatorial routes on board subsonic and supersonic aircraft, to illustrate the effect of these parameters. Measurements have been obtained with a tissue equivalent proportional counter using the microdosimetric technique. Data have been collected at maximum solar activity in 1991-92 and at minimum in 1996-98. The lowest mean dose rate measured was 3 microSv/h during a Paris-Buenos Aires flight in 1991; the highest was 6.6 microSv/h during a Paris-Tokyo flight using a Siberian route and 9.7 microSv/h on Concorde in 1996-97. The mean quality factor is around 1.8. The corresponding annual effective dose, based on 700 hours of flight for subsonic aircraft and 300 hours for Concorde, can be estimated between 2 mSv for least-exposed routes and 5 mSv for more exposed routes.
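The annual figures quoted follow directly from mean dose rate times flight hours. A minimal check of the arithmetic, using the abstract's assumed 700 h (subsonic) and 300 h (Concorde) per year:

```python
HOURS_SUBSONIC = 700   # assumed annual flight hours, subsonic crew
HOURS_CONCORDE = 300   # assumed annual flight hours, Concorde crew

def annual_dose_msv(rate_usv_per_h, hours):
    """Annual effective dose (mSv) from a mean dose rate in microSv/h."""
    return rate_usv_per_h * hours / 1000.0

low  = annual_dose_msv(3.0, HOURS_SUBSONIC)   # least-exposed route
high = annual_dose_msv(6.6, HOURS_SUBSONIC)   # Siberian route
conc = annual_dose_msv(9.7, HOURS_CONCORDE)   # Concorde
```

This reproduces the quoted range: about 2.1 mSv for the least-exposed routes and up to about 4.6 mSv (subsonic, Siberian route) or 2.9 mSv (Concorde).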

  11. Expert system for on-board satellite scheduling and control

    NASA Technical Reports Server (NTRS)

    Barry, John M.; Sary, Charisse

    1988-01-01

    An expert system is described that Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities: the Satellite Controller. The resources to be scheduled include power, propellant, and recording tape. The activities controlled include scheduling satellite functions such as sensor checkout and operation. The scheduling of these resources and activities is presently a labor-intensive and time-consuming ground operations task. Developing a schedule requires extensive knowledge of system and subsystem operations, operational constraints, and satellite design and configuration, and takes highly trained experts anywhere from several hours to several weeks. The process is done through brute force, that is, by examining cryptic mnemonic data off line to interpret the health and status of the satellite. Schedules are then formulated either from practical operator experience or from heuristics, that is, rules of thumb. Orbital operations must become more productive in the future to reduce life-cycle costs and decrease dependence on ground control; this reduction is required to increase the autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from ground to on-board systems.

  12. Rotating pressure measurement system using an on board calibration standard

    NASA Technical Reports Server (NTRS)

    Senyitko, Richard G.; Blumenthal, Philip Z.; Freedman, Robert J.

    1991-01-01

    A computer-controlled multichannel pressure measurement system was developed to acquire detailed flow field measurements on board the Large Low Speed Centrifugal Compressor Research Facility at the NASA Lewis Research Center. A pneumatic slip-ring seal assembly is used to transfer calibration pressures to a reference standard transducer on board the compressor rotor in order to measure very low differential pressures with the high accuracy required. A unique data acquisition system was designed and built to convert the analog signal from the reference transducer to the variable frequency required by the multichannel pressure measurement system, and also to provide an output for temperature control of the reference transducer. The system also monitors changes in test cell barometric pressure and rotating seal leakage, and provides an on-screen warning to the operator if limits are exceeded. The methods used for the selection and testing of the reference transducer are discussed, and the data acquisition system hardware and software design are described. Calculated and experimental data for the system measurement accuracy are also presented.

  13. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
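The claimed steps (measure an environmental condition, know the processing system's sensitivity to it, decide whether to reconfigure) can be sketched as a simple threshold policy. The names, thresholds, and configuration fields below are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class FaultToleranceConfig:
    tmr_enabled: bool        # triple modular redundancy on/off (illustrative)
    scrub_interval_s: float  # memory-scrubbing period (illustrative)

def choose_config(measured_flux, sensitivity, upset_threshold=1.0):
    """Reconfigure fault tolerance from a measured environmental condition
    and the processing system's measured sensitivity to it."""
    expected_upset_rate = measured_flux * sensitivity
    if expected_upset_rate > upset_threshold:
        # Harsh environment: maximum protection at the cost of throughput
        return FaultToleranceConfig(tmr_enabled=True, scrub_interval_s=1.0)
    # Benign environment: trade protection for processing performance
    return FaultToleranceConfig(tmr_enabled=False, scrub_interval_s=60.0)

harsh = choose_config(measured_flux=10.0, sensitivity=0.5)
benign = choose_config(measured_flux=0.1, sensitivity=0.5)
```

The point of the patent's approach is exactly this trade: redundancy is engaged only when the measured environment warrants it, freeing compute cycles the rest of the time.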

  14. How to create high-performing teams.

    PubMed

    Lam, Samuel M

    2010-02-01

    This article discusses inspirational aspects of how to lead a high-performance team. Cogent topics include how to hire staff through methods of "topgrading," with reference to Geoff Smart, and "getting the right people on the bus," referencing Jim Collins' work. Once the staff is hired, the article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). Finally, Simon Sinek's concept of "Start with Why" is elaborated to help a leader identify the core element of any superior culture. PMID:20127598

  15. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  16. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses, and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervasive obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  17. [High-performance society and doping].

    PubMed

    Gallien, C L

    2002-09-01

    Doping is not limited to high-level athletes. Likewise it is not limited to the field of sports activities. The doping phenomenon observed in sports actually reveals an underlying question concerning the notion of sports itself, and more widely, the society's conception of sports. In a high-performance society, which is also a high-risk society, doping behavior is observed in a large number of persons who may or may not participate in sports activities. The motivation is the search for individual success or profit. The fight against doping must therefore focus on individual responsibility and prevention in order to preserve athlete's health and maintain the ethical and educational value of sports activities.

  18. High-performance capillary electrophoresis of histones

    SciTech Connect

    Gurley, L.R.; London, J.E.; Valdez, J.G.

    1991-01-01

    A high performance capillary electrophoresis (HPCE) system has been developed for the fractionation of histones. This system involves electroinjection of the sample and electrophoresis in a 0.1 M phosphate buffer at pH 2.5 in a 50 µm × 35 cm coated capillary. Electrophoresis was accomplished in 9 minutes, separating a whole histone preparation into its components in the following order of decreasing mobility: (MHP) H3, H1 (major variant), H1 (minor variant), (LHP) H3, (MHP) H2A (major variant), (LHP) H2A, H4, H2B, (MHP) H2A (minor variant), where MHP is the more hydrophobic component and LHP is the less hydrophobic component. This order of separation is very different from that found in acid-urea polyacrylamide gel electrophoresis and in reversed-phase HPLC and thus brings the histone biochemist a new dimension for the qualitative analysis of histone samples. 27 refs., 8 figs.

  19. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  20. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
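The coarse-grid correction at the heart of AMG can be illustrated with a geometric two-grid cycle for a 1-D Poisson model problem. AMG builds its coarse grids and transfer operators algebraically from the matrix, but the cycle structure (smooth, restrict the residual, solve coarse, interpolate the correction, smooth again) is the same. A minimal numpy sketch, not any particular package's implementation:

```python
import numpy as np

def jacobi(A, b, x, iters, omega=2.0/3.0):
    """Weighted-Jacobi smoothing sweeps."""
    D = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid_cycle(A, b, x):
    """Pre-smooth, coarse-grid correction (Galerkin operator), post-smooth."""
    n = len(b)
    x = jacobi(A, b, x, 3)                 # pre-smoothing
    r = b - A @ x                          # fine-grid residual
    nc = (n - 1) // 2                      # coarse-grid size
    R = np.zeros((nc, n))                  # full-weighting restriction
    for i in range(nc):
        R[i, 2*i:2*i + 3] = [0.25, 0.5, 0.25]
    P = 2.0 * R.T                          # linear-interpolation prolongation
    Ac = R @ A @ P                         # Galerkin coarse-grid operator
    ec = np.linalg.solve(Ac, R @ r)        # exact coarse solve
    x = x + P @ ec                         # coarse-grid correction
    return jacobi(A, b, x, 3)              # post-smoothing

# 1-D Poisson model problem: A = tridiag(-1, 2, -1), 63 interior points
n = 63
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid_cycle(A, b, x)
```

The residual drops by a large, grid-independent factor per cycle, which is the scalability property the chapter's parallel AMG variants aim to preserve; a full multigrid method simply applies this correction recursively instead of solving the coarse problem exactly.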

  1. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  2. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
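The fine-grain approach described, scheduling vectors ("baskets") of particles to an arbitrary number of computing resources, can be sketched with a shared work queue feeding worker threads. `transport`, the basket contents, and the queue layout below are placeholders, not the actual Geant-V framework API.

```python
import queue
import threading

def transport(basket):
    """Placeholder for propagating one vector of particles (hypothetical)."""
    return [p * 2 for p in basket]      # dummy per-particle work

def worker(tasks, results):
    while True:
        basket = tasks.get()
        if basket is None:              # sentinel: shut this worker down
            return
        results.put(transport(basket))

def run(baskets, n_workers=4):
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for basket in baskets:              # fill the shared work queue
        tasks.put(basket)
    for _ in threads:
        tasks.put(None)                 # one sentinel per worker
    for t in threads:
        t.join()
    return [results.get() for _ in range(results.qsize())]

out = run([[1, 2], [3, 4], [5, 6]])
```

Decoupling producers from workers this way is what lets the number of computing resources be arbitrary: baskets are consumed in whatever order workers free up, at the cost of non-deterministic completion order.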

  3. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large-volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the fading or failure of capacity that results from stress-induced fracturing of the Si particles and their decoupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface that significantly improves cycling stability and enhances adhesion of the silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor to a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods that can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high performance anode material. Under this effort, ASI made strides toward establishing a quality control protocol for the large-volume production of Si-CNFs and identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high-volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  4. On-board Payload Data Processing from Earth to Space Segment

    NASA Astrophysics Data System (ADS)

    Tragni, M.; Abbattista, C.; Amoruso, L.; Cinquepalmi, L.; Bgongiari, F.; Errico, W.

    2013-09-01

    GS algorithms to approach the problem in the Space scenario; for Synthetic Aperture Radar (SAR) applications, for example, the typical focalization of the raw image needs to be improved to be effective in this context. Many works are available on this topic; the authors have developed specific neural-network algorithms. Using information "acquired" (i.e., computed) directly on board, without the intervention of the usual ground-segment facilities, the spacecraft can autonomously decide to re-plan acquisitions for itself (in high-performance modalities) or for other platforms in the constellation or affiliated with it, reducing the elapsed time compared with today's approach. For non-EO missions it is a big advantage to reduce the long round-trip transmission delay. In general, the savings extend to memory and RF transmission bandwidth, reaction time (as in civil-protection applications), and so on, increasing mission flexibility and improving the final results. The main hardware and software characteristics of the SpacePDP are:
    • Compactness: the size and weight of each module fit a Eurocard 3U 8HP format, with «Inter-Board» connection through a cPCI peripheral bus.
    • Modularity: the payload is usually composed of several sub-systems.
    • Flexibility: the coprocessor FPGA, on-board memory, and supported avionic protocols are flexible, allowing customization of the modules according to mission needs.
    • Completeness: the two core boards (CPU and Companion) are enough to obtain a first complete payload data processing system in a basic configuration.
    • Integrability: the payload data processing system is open to custom modules connected on its open peripheral bus.
    • CPU HW module (one or more) based on a RISC processor (LEON2FT, a SPARC V8 architecture, 80 MIPS @ 100 MHz on an ASIC ATMEL AT697F).
    • DSP HW module (optional, with more instances) based on an FPGA dedicated architecture to ensure effective multitasking control and to offer high numerical

  5. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.
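The nonlinear penalty of the CFL condition can be made concrete: in 3-D, halving the cell size gives 8 times the cells and, because the stable explicit step shrinks with the cell size, twice the time steps, so 16 times the work. A small sketch, assuming a simple advection-type CFL limit (the MHD limit also involves wave speeds, but scales the same way with `dx`):

```python
def explicit_dt(dx, vmax, cfl=0.8):
    """Largest stable explicit time step allowed by the CFL condition."""
    return cfl * dx / vmax

def relative_work(nx, vmax=1.0, t_end=1.0, cfl=0.8, dim=3):
    """Cells x time steps for a unit domain resolved with nx cells per side."""
    dx = 1.0 / nx
    steps = t_end / explicit_dt(dx, vmax, cfl)
    return nx**dim * steps

# Doubling the 3-D resolution: 8x the cells and 2x the steps -> 16x the work
ratio = relative_work(128) / relative_work(64)
```

This superlinear scaling is exactly the cost that implicit time stepping, which is not bound by the CFL limit, avoids for highly resolved adaptive grids.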

  6. Creating high performance buildings: Lower energy, better comfort

    NASA Astrophysics Data System (ADS)

    Brager, Gail; Arens, Edward

    2015-03-01

    Buildings play a critical role in the challenge of mitigating and adapting to climate change. It is estimated that buildings contribute 39% of the total U.S. greenhouse gas (GHG) emissions [1], primarily due to their operational energy use, and about 80% of this building energy use is for heating, cooling, ventilating, and lighting. An important premise of this paper is the connection between energy and comfort: they are inseparable when one talks about high performance buildings. Worldwide data suggest that we are significantly overcooling buildings in the summer, resulting in increased energy use and problems with thermal comfort. In contrast, in naturally ventilated buildings without mechanical cooling, people are comfortable at much warmer temperatures due to shifting expectations and preferences, a result of occupants having a greater degree of personal control over their thermal environment; they have also become more accustomed to variable conditions that closely reflect the natural rhythms of outdoor climate patterns. This has resulted in an adaptive comfort zone that offers significant potential for encouraging naturally ventilated buildings to improve both energy use and comfort. Research on other forms of individualized control through low-energy personal comfort systems (desktop fans, foot warmers, and heated and cooled chairs) has also demonstrated enormous potential for improving both energy and comfort performance. Studies have demonstrated high levels of comfort with these systems while ambient temperatures ranged from 64-84°F. Energy and indoor environmental quality are inextricably linked, and both must be important goals of a high performance building.

  7. Creating high performance buildings: Lower energy, better comfort

    SciTech Connect

    Brager, Gail; Arens, Edward

    2015-03-30

    Buildings play a critical role in the challenge of mitigating and adapting to climate change. It is estimated that buildings contribute 39% of the total U.S. greenhouse gas (GHG) emissions [1], primarily due to their operational energy use, and about 80% of this building energy use is for heating, cooling, ventilating, and lighting. An important premise of this paper is the connection between energy and comfort: the two are inseparable when one talks about high performance buildings. Worldwide data suggest that we are significantly overcooling buildings in the summer, resulting in increased energy use and problems with thermal comfort. In contrast, in naturally ventilated buildings without mechanical cooling, people are comfortable at much warmer temperatures due to shifting expectations and preferences, a result of occupants having a greater degree of personal control over their thermal environment; they have also become more accustomed to variable conditions that closely reflect the natural rhythms of outdoor climate patterns. This has resulted in an adaptive comfort zone that offers significant potential for encouraging naturally ventilated buildings to improve both energy use and comfort. Research on other forms of individualized control through low-energy personal comfort systems (desktop fans, foot warmers, and heated and cooled chairs) has also demonstrated enormous potential for improving both energy and comfort performance. Studies have demonstrated high levels of comfort with these systems while ambient temperatures ranged from 64–84°F. Energy and indoor environmental quality are inextricably linked, and must both be important goals of a high performance building.

  8. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims to achieve maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to preserve the key advantage of high performance fibers: their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that an LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments: the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  9. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that data are easily discoverable, interoperable across domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  10. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high performance computers have become the standard instruments for solving the forward and inverse problems of seismology. The software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automating the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  11. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, is comprised of many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the amount of added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made. This approach increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use. Only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer's resources. From the perspective of Bechtel Corporation and the Idaho
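    As a toy illustration of solving a governing equation directly on a computational grid (not one of the WTP analyses), consider one explicit finite-difference step of the 1-D heat equation:

```python
import numpy as np

def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step of du/dt = alpha * d2u/dx2
    on a uniform grid with fixed (Dirichlet) end values."""
    r = alpha * dt / dx**2          # must satisfy r <= 0.5 for stability
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

# A hot spot in the middle of a cold rod diffuses outward:
u = np.zeros(11)
u[5] = 100.0
for _ in range(50):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.4)
```

    The discrete update replaces the analytic solution with direct evaluation on the grid, which is exactly the trade the abstract describes: fewer simplifying assumptions at the cost of computation.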

  12. Status of UHECR detector KLYPVE on-board the ISS

    NASA Astrophysics Data System (ADS)

    Klimov, Pavel; Garipov, Gali; Khrenov, Boris; Yashin, Ivan; Panasyuk, Mikhail; Tkachev, Leonid; Sharakin, Sergey; Zotov, Mikhail; Churilo, Igor; Markov, Alexander

    A preliminary project of the KLYPVE detector of ultra high energy cosmic rays (UHECR) on board the International Space Station (ISS) was developed at the Lomonosov Moscow State University Skobeltsyn Institute of Nuclear Physics in cooperation with RSC “Energia”. The main scientific aims of the mission are measurements of the energy spectrum and arrival directions of the primary particles, and a search for large- and small-scale anisotropy (including point sources) in the energy region above the Greisen-Zatsepin-Kuzmin cut-off. Various types of optical systems, photo detectors and mechanical structures, and multiple issues related to transportation and accommodation on the Russian Segment of the ISS, were considered. Recent development of KLYPVE has been carried out in close cooperation with the JEM-EUSO collaboration in order to improve detector parameters such as the field of view, angular and energy resolution, and energy threshold. The current status of the project is presented in this report.

  13. On-board attitude determination for the Topex satellite

    NASA Technical Reports Server (NTRS)

    Dennehy, C. J.; Ha, K.; Welch, R. V.; Kia, T.

    1989-01-01

    This paper presents an overall technical description of the on-board attitude determination system for The Ocean Topography Experiment (Topex) satellite. The stellar-inertial attitude determination system being designed for the Topex satellite utilizes data from a three-axis NASA Standard DRIRU-II as well as data from an Advanced Star Tracker (ASTRA) and a Digital Fine Sun Sensor (DFSS). This system is a modified version of the baseline Multimission Modular Spacecraft (MMS) concept used on the Landsat missions. Extensive simulation and analysis of the MMS attitude determination approach were performed to verify its suitability for the Topex application. Modifications to this baseline attitude determination scheme were identified to satisfy the unique Topex mission requirements.

  14. Controlled impact demonstration on-board (interior) photographic system

    NASA Technical Reports Server (NTRS)

    May, C. J.

    1986-01-01

    Langley Research Center (LaRC) was responsible for the design, manufacture, and integration of all hardware required for the photographic system used to film the interior of the controlled impact demonstration (CID) B-720 aircraft during actual crash conditions. Four independent power supplies were constructed to operate the ten high-speed 16 mm cameras and twenty-four floodlights. An up-link command system, furnished by Ames Dryden Flight Research Facility (ADFRF), was necessary to activate the power supplies and start the cameras. These events were accomplished by initiation of relays located on each of the photo power pallets. The photographic system performed beyond expectations. All four power distribution pallets with their 20 year old Minuteman batteries performed flawlessly. All 24 lamps worked. All ten on-board high speed (400 fps) 16 mm cameras containing good resolution film data were recovered.

  15. Monitoring on board spacecraft by means of passive detectors.

    PubMed

    Ambrožová, I; Brabcová, K; Spurný, F; Shurshakov, V A; Kartsev, I S; Tolochek, R V

    2011-03-01

    To estimate the radiation risk of astronauts during space missions, it is necessary to measure dose characteristics in various compartments of the spacecraft; this knowledge can be further used for estimating the health hazard in planned missions. This contribution presents results obtained during several missions on board the International Space Station (ISS) during 2005-09. A combination of thermoluminescent and plastic nuclear track detectors was used to measure the absorbed dose and dose equivalent. These passive detectors have several advantages, especially small dimensions, which enabled their placement at various locations in different compartments inside the ISS or inside the phantom. Variation of dosimetric quantities with the phase of the solar cycle and the position inside the ISS is discussed. PMID:20959332

  16. Overview of on-board measurements during solar storm periods.

    PubMed

    Beck, P; Dyer, C; Fuller, N; Hands, A; Latocha, M; Rollet, S; Spurný, F

    2009-10-01

    Radiation exposure of aircraft crew caused by cosmic radiation is regulated in Europe by the European Community Council Directive 96/29/EURATOM and implemented into law in almost every country of the European Union. While galactic cosmic radiation (GCR) leads on average to an exposure of about 3 mSv per year, solar cosmic radiation can lead to 1 mSv in a single subsonic flight during solar storm periods. Compared to GCR, solar cosmic radiation shows a much softer proton spectrum, but with a contribution that can be several orders of magnitude larger. This is the reason for the large radiation exposure at high northern and southern geographic latitudes during solar particle events. Here an overview of active in-flight radiation measurements undertaken during solar storms is given. In particular, tissue-equivalent proportional counter on-board measurements are shown, and the radiation quality during solar storm periods is compared with that for GCR.

  17. DAMPE silicon tracker on-board data compression algorithm

    NASA Astrophysics Data System (ADS)

    Dong, Yi-Fan; Zhang, Fei; Qiao, Rui; Peng, Wen-Xi; Fan, Rui-Rui; Gong, Ke; Wu, Di; Wang, Huan-Yu

    2015-11-01

    The Dark Matter Particle Explorer (DAMPE) is an upcoming scientific satellite mission for high energy gamma-ray, electron and cosmic ray detection. The silicon tracker (STK) is a subdetector of the DAMPE payload. It has excellent position resolution (readout pitch of 242 μm), and measures the incident direction of particles as well as their charge. The STK consists of 12 layers of Silicon Micro-strip Detector (SMD), equivalent to a total silicon area of 6.5 m². The total number of readout channels of the STK is 73728, which leads to a huge amount of raw data to be processed. In this paper, we focus on the on-board data compression algorithm and procedure in the STK, and show the results of initial verification by cosmic-ray measurements. Supported by Strategic Priority Research Program on Space Science of Chinese Academy of Sciences (XDA040402) and National Natural Science Foundation of China (1111403027)
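    The abstract does not spell out the algorithm itself; a typical on-board compression scheme for silicon strip readout is pedestal subtraction followed by zero-suppression, sketched below purely for illustration (the actual DAMPE STK algorithm may differ):

```python
def zero_suppress(adc, pedestal, threshold=5.0):
    """Keep only (channel, signal) pairs whose pedestal-subtracted value
    exceeds a threshold -- a common on-board compression scheme for
    silicon strip readout. Illustrative only, not the DAMPE algorithm."""
    out = []
    for ch, (raw, ped) in enumerate(zip(adc, pedestal)):
        signal = raw - ped
        if signal > threshold:
            out.append((ch, signal))
    return out

# Most of the 73728 channels read pure pedestal; only hit strips survive:
adc = [50.0] * 8
adc[3] = 120.0                          # a particle hit on strip 3
hits = zero_suppress(adc, [50.0] * 8)   # -> [(3, 70.0)]
```

    The payoff is that the downlinked data volume scales with the number of hits per event rather than with the full channel count.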

  18. Neutral atmosphere composition from SOIR measurements on board Venus Express

    NASA Astrophysics Data System (ADS)

    Mahieux, A.; Drummond, R.; Wilquet, V.; Vandaele, A. C.; Federova, A.; Belyaev, D.; Korablev, O.; Villard, E.; Montmessin, F.; Bertaux, J.-L.

    2009-04-01

    The SOIR instrument performs solar occultation measurements in the IR region (2.2-4.3 μm) at a resolution of 0.12 cm⁻¹, the highest on board Venus Express. It combines an echelle spectrometer and an AOTF (Acousto-Optical Tunable Filter) for the order selection [1,2]. The wavelength range probed by SOIR allows a detailed chemical inventory of the Venus atmosphere above the cloud layer with an emphasis on vertical distribution of the gases. Measurements of HDO, H2O, HCl, HF, CO and CO2 vertical profiles have been routinely performed, as well as those of their isotopologues [3,4]. We will discuss the improvements introduced in the analysis algorithm of the SOIR spectra. This discussion will be illustrated by presenting new results of retrievals of minor constituents of the Venus mesosphere, in terms of vertical profiles and geographical distribution. CO2 is the major constituent of the Venus atmosphere and was therefore observed in many solar occultations, leading to a good geographical coverage, although limited by the geometry of the orbit. Depending on the abundance of the absorbing isotopologue and on the intensity of the band measured, we will show that the SOIR instrument is able to furnish CO2 vertical profiles ranging typically from 65 to 150 km, reaching in some conditions 185 km altitude. This information is important in the frame of compiling, in collaboration with other teams, a new Venus Atmosphere Model. 1. A. Mahieux, S. Berkenbosch, R. Clairquin, D. Fussen, N. Mateshvili, E. Neefs, D. Nevejans, B. Ristic, A. C. Vandaele, V. Wilquet, D. Belyaev, A. Fedorova, O. Korablev, E. Villard, F. Montmessin and J.-L. Bertaux, "In-Flight performance and calibration of SPICAV SOIR on board Venus Express", Applied Optics 47 (13), 2252-65 (2008). 2. D. Nevejans, E. Neefs, E. Van Ransbeeck, S. Berkenbosch, R. Clairquin, L. De Vos, W. Moelans, S. Glorieux, A. Baeke, O. Korablev, I. Vinogradov, Y. Kalinnikov, B. Bach, J.-P. Dubois and E. Villard, "Compact high

  19. Thermal Stability of the AVIRIS On-Board Calibrator

    NASA Technical Reports Server (NTRS)

    Faust, Jessica; Eastwood, Michael; Sarture, Chuck; Williams, Orlesa

    1998-01-01

    The AVIRIS On-Board Calibrator (OBC) provides essential data for refining the calibration of each AVIRIS data run. Annual improvement to the AVIRIS sensor and laboratory calibration accuracy has resulted in increasingly high demands on the stability of the OBC. Since the 1995 flight season, the OBC could track the stability of the spectrometer alignment to the 2% level, a significant improvement over previous years. The major contributor to this 2% stability was the conversion from a constant-current bulb power supply to an intensity-based active feedback power supply. Given the high sensor signal-to-noise ratio, improving the OBC to track 1% or 0.5% changes was highly desirable. Achieving stability better than 2% required an examination of the mechanisms affecting stability.

  20. Challenge of lightning detection with LAC on board Akatsuki spacecraft

    NASA Astrophysics Data System (ADS)

    Takahashi, Yukihiro; Sato, Mitsutero; Imai, Masataka; Yair, Yoav; Fischer, Georg; Aplin, Karen

    2016-04-01

    Even after extensive investigations with spacecraft and ground-based observations, there is still no consensus on the existence of lightning on Venus. It has been reported that the magnetometer on board Venus Express detected whistler mode waves whose source could be lightning discharges occurring well below the spacecraft. On the other hand, the infrared sensor VIRTIS on Venus Express does not show a positive indication of lightning flashes. In order to identify optical flashes caused by electrical discharge in the atmosphere of Venus with an optical intensity of at least 1/10 of the average lightning on Earth, we built a high-speed optical detector, LAC (Lightning and Airglow Camera), on board the Akatsuki spacecraft. The unique capability of the LAC compared to other instruments is its high-speed sampling rate, at 32 μs intervals for all 32 pixels, enabling us to distinguish optical lightning flashes from other pulsed noise. Although the first attempt to insert Akatsuki into orbit around Venus failed in December 2010, the second, carried out on December 7, 2015, was quite successful. We checked the condition of the LAC on January 5, 2016, and it is as healthy as in 2010. Due to a more elongated orbit than originally planned, we have an umbra of ~30 min to observe lightning flashes on the night side of Venus every ~10 days, starting in April 2016. Here we report the instrument status of the LAC and preliminary results of the first attempt to observe optical lightning emissions.

  1. Corporate sponsored education initiatives on board the ISS

    NASA Astrophysics Data System (ADS)

    Durham, Ian T.; Durham, Alyson S.; Pawelczyk, James A.; Brod, Lawrence B.; Durham, Thomas F.

    1999-01-01

    This paper proposes the creation of a corporate sponsored ``Lecture from Space'' program on board the International Space Station (ISS) with funding coming from a host of new technology and marketing spin-offs. This program would meld existing education initiatives in NASA with new corporate marketing techniques. Astronauts in residence on board the ISS would conduct short ten to fifteen minute live presentations and/or conduct interactive discussions carried out by a teacher in the classroom. This concept is similar to a program already carried out during the Neurolab mission on Shuttle flight STS-90. Building on that concept, the interactive simulcasts would be broadcast over the Internet and linked directly to computers and televisions in classrooms worldwide. In addition to the live broadcasts, educational programs and demonstrations can be recorded in space, and marketed and sold for inclusion in television programs, computer software, and other forms of media. Programs can be distributed directly into classrooms as an additional presentation supplement, as well as over the Internet or through cable and broadcast television, similar to the Canadian Discovery Channel's broadcasts of the Neurolab mission. Successful marketing and advertisement can eventually lead to the creation of an entirely new, privately run cottage industry involving the distribution and sale of educationally related material associated with the ISS that would have the potential to become truly global in scope. By targeting areas of expertise and research interest in microgravity, a large curriculum could be developed using space exploration as a unifying theme. Expansion of this concept could enhance objectives already initiated through the International Space University to include elementary and secondary school students. The ultimate goal would be to stimulate interest in space and space related sciences in today's youth through creative educational marketing initiatives while at the

  2. High-Speed On-Board Data Processing for Science Instruments: HOPS

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time: since HOPS offers reusable, user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.
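    The shift from time-domain to frequency-domain processing that HOPS exploits can be illustrated with FFT-based correlation (a generic numpy sketch, not the HOPS FPGA implementation):

```python
import numpy as np

def correlate_fft(signal, template):
    """Circular cross-correlation computed in the frequency domain:
    O(N log N) instead of the O(N^2) time-domain form."""
    n = len(signal)
    S = np.fft.fft(signal, n)
    T = np.fft.fft(template, n)
    return np.real(np.fft.ifft(S * np.conj(T)))

# The correlation peak locates a delayed copy of the template:
rng = np.random.default_rng(0)
template = rng.standard_normal(4096)   # 4096 points, as in the 3-D Winds FFT
signal = np.roll(template, 137)        # template delayed by 137 samples
lag = int(np.argmax(correlate_fft(signal, template)))
assert lag == 137
```

    The same mathematics maps naturally onto an FFT core in an FPGA, which is why frequency-domain processing makes the real-time rates quoted above attainable.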

  3. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPUs are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented on.
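    The matching step can be sketched as generic nearest-neighbour descriptor matching with a ratio test (an illustrative baseline, not the LARES implementation):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test: keep a
    match only if the best distance is clearly better than the second
    best. A generic sketch; desc_b must contain at least two rows."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, int(j)))
    return matches
```

    This brute-force loop is exactly the O(N·M) workload that makes GPU parallelization attractive: each query descriptor's distance computation is independent of the others.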

  4. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  5. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance per dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  6. High Performance Oxides-Based Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Ren, Guangkun; Lan, Jinle; Zeng, Chengcheng; Liu, Yaochun; Zhan, Bin; Butt, Sajid; Lin, Yuan-Hua; Nan, Ce-Wen

    2015-01-01

    Thermoelectric materials have attracted much attention due to their applications in waste-heat recovery, power generation, and solid state cooling. In comparison with thermoelectric alloys, oxide semiconductors, which are thermally and chemically stable in air at high temperature, are regarded as candidates for high-temperature thermoelectric applications. However, their figure-of-merit ZT value has remained low, around 0.1-0.4, for more than 20 years. The poor performance of oxides is ascribed to their low electrical conductivity and high thermal conductivity. Since the electrical transport properties in these thermoelectric oxides are strongly correlated, it is difficult to improve both the thermoelectric power and the electrical conductivity simultaneously by conventional methods. This review summarizes recent progress on high-performance oxide-based thermoelectric bulk materials including n-type ZnO, SrTiO3, and In2O3, and p-type Ca3Co4O9, BiCuSeO, and NiO, enhanced by heavy-element doping, band engineering and nanostructuring.
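    The figure of merit quoted above is ZT = S²σT/κ (S the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity); a one-line helper makes the trade-off explicit (the numbers below are illustrative, not taken from the review):

```python
def figure_of_merit_zt(seebeck_v_per_k, elec_cond_s_per_m,
                       thermal_cond_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return (seebeck_v_per_k ** 2 * elec_cond_s_per_m * temp_k
            / thermal_cond_w_per_mk)

# Illustrative oxide-like values: modest sigma and high kappa keep ZT low.
zt = figure_of_merit_zt(200e-6, 1.0e4, 2.0, 1000.0)   # -> 0.2
```

    Because S, σ and κ are coupled through the carrier concentration, raising σ tends to lower S and raise the electronic part of κ, which is why the doping, band-engineering and nanostructuring strategies in the review are needed.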

  7. High performance vapour-cell frequency standards

    NASA Astrophysics Data System (ADS)

    Gharavipour, M.; Affolderbach, C.; Kang, S.; Bandi, T.; Gruet, F.; Pellaton, M.; Mileti, G.

    2016-06-01

    We report our investigations on a compact high-performance rubidium (Rb) vapour-cell clock based on microwave-optical double-resonance (DR). These studies are done in both DR continuous-wave (CW) and Ramsey schemes using the same Physics Package (PP), with the same Rb vapour cell and a magnetron-type cavity with only 45 cm3 external volume. In the CW-DR scheme, we demonstrate a DR signal with a contrast of 26% and a linewidth of 334 Hz; in Ramsey-DR mode, Ramsey signals with higher contrast up to 35% and a linewidth of 160 Hz have been demonstrated. Short-term stabilities of 1.4×10⁻¹³ τ⁻¹/² and 2.4×10⁻¹³ τ⁻¹/² are measured for the CW-DR and Ramsey-DR schemes, respectively. In Ramsey-DR operation, thanks to the separation of light and microwave interactions in time, the light-shift effect is suppressed, which improves the long-term clock stability as compared to CW-DR operation. Implementations in miniature atomic clocks are considered.

  8. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A; Wickett, M E; Duffy, P B; Rotman, D A

    2005-03-03

    The Center for Applied Scientific Computing (CASC) and the LLNL Atmospheric Science Division (ASD) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. As part of LLNL's participation in DOE's Scientific Discovery through Advanced Computing (SciDAC) program, members of CASC and ASD are collaborating with other DOE labs and NCAR in the development of a comprehensive, next-generation global climate model. This model incorporates the most current physics and numerics and capably exploits the latest massively parallel computers. One of LLNL's roles in this collaboration is the scalable parallelization of NASA's finite-volume atmospheric dynamical core. We have implemented multiple two-dimensional domain decompositions, where the different decompositions are connected by high-speed transposes. Additional performance is obtained through shared memory parallelization constructs and one-sided interprocess communication. The finite-volume dynamical core is particularly important to atmospheric chemistry simulations, where LLNL has a leading role.

  9. Low-Cost High-Performance MRI.

    PubMed

    Sarracanie, Mathieu; LaPierre, Cristen D; Salameh, Najat; Waddington, David E J; Witzel, Thomas; Rosen, Matthew S

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm(3) imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756

  10. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  11. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance the fault tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load, and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than the other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
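    The contrast between the round-robin policy and the end-point-reserving policy described in this abstract can be sketched in a few lines. The class names, the set-based reservation bookkeeping, and the two-rail example below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's code): a round-robin rail
# allocator versus one that reserves a rail only when it is free at
# BOTH end-points of the message.

class RoundRobinAllocator:
    """Cycle through rails regardless of their current load."""
    def __init__(self, num_rails):
        self.num_rails = num_rails
        self.next_rail = 0

    def pick_rail(self, src, dst, busy):
        rail = self.next_rail
        self.next_rail = (self.next_rail + 1) % self.num_rails
        return rail  # may collide with a rail busy at either end-point


class ReservingAllocator:
    """Grant a rail only if both end-points are idle on it."""
    def pick_rail(self, src, dst, busy):
        # busy[rail] is the set of nodes with a transfer in flight on that rail
        for rail, nodes in enumerate(busy):
            if src not in nodes and dst not in nodes:
                nodes.update((src, dst))  # reserve both end-points
                return rail
        return None  # no rail free at both ends: sender must wait


if __name__ == "__main__":
    busy = [set(), set()]                 # two rails, initially idle
    alloc = ReservingAllocator()
    print(alloc.pick_rail(0, 1, busy))    # rail 0 reserved for 0 -> 1
    print(alloc.pick_rail(0, 2, busy))    # node 0 busy on rail 0, so rail 1
    print(alloc.pick_rail(1, 2, busy))    # every rail busy at an end-point
```

    The reservation check is what lets this policy avoid head-of-line blocking at high load, at the price of extra bookkeeping per message.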

  12. Towards high performance inverted polymer solar cells

    NASA Astrophysics Data System (ADS)

    Gong, Xiong

    2013-03-01

    Bulk heterojunction polymer solar cells that can be fabricated by solution processing techniques are under intense investigation in both academic institutions and industrial companies because of their potential to enable mass production of flexible and cost-effective alternatives to silicon-based electronics. Despite the envisioned advantages and recent technology advances, the performance of polymer solar cells is still inferior to that of their inorganic counterparts in terms of efficiency and stability. Many factors limit the performance of polymer solar cells. Among them, the optical and electronic properties of the materials in the active layer, the device architecture, and the elimination of PEDOT:PSS are the most decisive factors in the overall performance of polymer solar cells. In this presentation, I will describe how we approach high-performance polymer solar cells. For example, by developing novel materials, fabricating polymer photovoltaic cells with an inverted device structure, and eliminating PEDOT:PSS, we were able to observe over 8.4% power conversion efficiency from inverted polymer solar cells.

  13. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as Infiniband, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Near-maximum energy savings were obtained with substantially low performance loss on the given platform.
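    The core idea of phase-based frequency scaling can be sketched as a small decision function: phases dominated by communication stalls tolerate a lower core frequency with little runtime penalty. The frequency table, threshold, and linear mapping below are illustrative assumptions, not the thesis's actual algorithm:

```python
# Illustrative sketch: choose a DVFS state per application phase based on
# the fraction of the phase spent stalled in communication. The frequency
# list and the 5% threshold are assumptions for illustration only.

FREQS_GHZ = [2.6, 2.0, 1.6, 1.2]   # available DVFS states, highest first

def pick_frequency(comm_fraction, max_slowdown=0.05):
    """Return a core frequency for a phase.

    comm_fraction: fraction of the phase spent waiting in communication
    calls. Compute-bound phases keep full speed; the deeper a phase is
    dominated by communication, the lower the frequency chosen.
    """
    if comm_fraction < max_slowdown:
        return FREQS_GHZ[0]            # compute-bound: full speed
    # Scale deeper as the communication share grows.
    idx = min(int(comm_fraction * len(FREQS_GHZ)), len(FREQS_GHZ) - 1)
    return FREQS_GHZ[idx]

# Hypothetical phases: (name, fraction of time in communication)
phases = [("fft", 0.02), ("alltoall", 0.80), ("halo", 0.35)]
for name, frac in phases:
    print(name, pick_frequency(frac))
```

    A production runtime would additionally weigh the frequency-switch latency against the phase length, as the abstract notes, before committing to a transition.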

  14. An integrated high performance Fastbus slave interface

    SciTech Connect

    Christiansen, J.; Ljuslin, C.

    1993-08-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock-synchronous processor/memory bus. It can work stand-alone or together with a 32-bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggyback sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208-pin FASIC chip.

  15. High performance composites with active stiffness control.

    PubMed

    Tridech, Charnwit; Maples, Henry A; Robinson, Paul; Bismarck, Alexander

    2013-09-25

    High performance carbon fiber reinforced composites with controllable stiffness could revolutionize the use of composite materials in structural applications. Here we describe a structural material which has a stiffness that can be actively controlled on demand. Such a material could have applications in morphing wings or deployable structures. A carbon fiber reinforced epoxy composite is described that can undergo an 88% reduction in flexural stiffness at elevated temperatures and fully recover when cooled, with no discernible damage or loss in properties. Once the stiffness has been reduced, the required deformations can be achieved at much lower actuation forces. For this proof-of-concept study a thin polyacrylamide (PAAm) layer was electrocoated onto carbon fibers that were then embedded into an epoxy matrix via resin infusion. Heating the PAAm coating above its glass transition temperature caused it to soften and allowed the fibers to slide within the matrix. To produce the stiffness change the carbon fibers were used as resistance heating elements by passing a current through them. When the PAAm coating had softened, the ability of the interphase to transfer load to the fibers was significantly reduced, greatly lowering the flexural stiffness of the composite. By changing the moisture content in the PAAm fiber coating, the temperature at which the PAAm softens and the composites undergo a reduction in stiffness can be tuned. PMID:23978266

  16. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  17. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star, which is centered on the optical axis in the image plane, is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters perform better in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specifications set forth by the sensitivity requirements of the coronagraph.

  18. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial for the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  19. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.
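    The near-constant launch times reported above rest on scalable collectives: with a tree-structured broadcast, the number of forwarding steps grows only logarithmically with node count. The sketch below is a generic software analogue of such a broadcast, not STORM's hardware mechanism, and the node-numbering scheme is an assumption for illustration:

```python
# Illustrative sketch: binomial-tree broadcast of a job binary. Every
# node that already holds the payload forwards it to one new node per
# step, so the set of holders doubles each step and n nodes are covered
# in ceil(log2(n)) steps. This is a software analogue, not STORM's
# hardware collective.
import math

def broadcast_steps(num_nodes):
    have = {0}                       # node 0 initially holds the binary
    steps = 0
    while len(have) < num_nodes:
        senders = list(have)         # snapshot of current holders
        for s in senders:
            target = len(have)       # lowest-numbered node still waiting
            if target < num_nodes:
                have.add(target)     # s forwards the payload to target
        steps += 1
    return steps

for n in (2, 32, 1024):
    print(n, broadcast_steps(n), math.ceil(math.log2(n)))
```

    Doubling the holder set each step is why launch time stays nearly flat as the cluster grows, which is the scaling behavior the paper measures.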

  20. Experience with high-performance PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Goldburgh, Mitchell M.; Head, Calvin

    1997-05-01

    Lockheed Martin (Loral) has installed PACS with associated teleradiology in several tens of hospitals. The PACS that have been installed have been the basis for a shift to filmless radiology in many of the hospitals. The basic structure of the PACS and the teleradiology being used is outlined. The way that the PACS are being used in the hospitals is instructive. The three most used areas for radiology in the hospital are the wards including the ICU wards, the emergency room, and the orthopedics clinic. The examinations are mostly CR images, with 20 to 30 percent of the examinations being CT, MR, and ultrasound exams. The PACS are being used to realize improved productivity for radiology and for the clinicians. For radiology, the same staff is handling a 30 to 50 percent greater workload. For the clinicians, 10 to 20 percent of their time is being saved in dealing with radiology images. The improved productivity stems from the high performance of the PACS that has been designed and installed. Images are available on any workstation in the hospital within less than two seconds, even during the busiest hour of the day. The examination management function restricts the attention of any one user to the examinations that are of interest. The examination management organizes the workflow through the radiology department and the hospital, improving the service of the radiology department by reducing the time until the information from a radiology examination is available. The remaining weak link in the PACS system is transcription. The examination can be acquired, read, and the report dictated in much less than ten minutes. The transcription of the dictated reports can take from a few hours to a few days. The addition of automatic transcription services will remove this weak link.

  1. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report about our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle light suppression. The testbed consists of a coronagraph with high precision optics (2 inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational, awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  2. Design of high performance piezo composites actuators

    NASA Astrophysics Data System (ADS)

    Almajid, Abdulhakim A.

    Designs of high-performance piezo composite actuators are developed. Functionally Graded Microstructure (FGM) piezoelectric actuators are designed to reduce the stress concentration at the middle interface that exists in standard bimorph actuators while maintaining high actuation performance. The FGM piezoelectric laminates are composite materials with electroelastic properties varied through the laminate thickness. The elastic behavior of piezo-laminate actuators is developed using a 2D-elasticity model and a modified classical lamination theory (CLT). The stresses and out-of-plane displacements are obtained for standard and FGM piezoelectric bimorph plates under cylindrical bending generated by an electric field throughout the thickness of the laminate. The analytical model is developed for two different actuator geometries, a rectangular plate actuator and a disk-shaped actuator. The limitations of CLT are investigated against the 2D-elasticity model for the rectangular plate geometry. The analytical models based on CLT (rectangular and circular) and 2D-elasticity are compared with a model based on the Finite Element Method (FEM). The experimental study consists of two FGM actuator systems, the PZT/PZT FGM system and the porous FGM system. The electroelastic properties of each layer in the FGM systems were measured and input into the analytical models to predict the FGM actuator performance. The performance of the FGM actuator is optimized by manipulating the thickness of each layer in the FGM system. The thickness of each layer in the FGM system is made to vary in a linear or non-linear manner to achieve the best performance of the FGM piezoelectric actuator. The analytical and FEM results are found to agree well with the experimental measurements for both rectangular and disk actuators. The CLT solutions are found to coincide well with the elasticity solutions for high aspect ratios, while the CLT solutions gave poor results compared to the 2D elasticity solutions for low aspect ratios.

  3. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
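    The block-matching motion compensation described in the second paper can be sketched with the standard exhaustive search over a sum-of-absolute-differences (SAD) cost. The frame layout, block size, and search radius below are illustrative assumptions, not the report's parallel implementation (which maps this search onto a simple parallel architecture):

```python
# Illustrative sketch of block-matching motion estimation: for a block
# in the current frame, exhaustively search displacements within a small
# radius in the previous frame and keep the one with minimum SAD.

def sad(prev, curr, bx, by, dx, dy, bs):
    """Sum of absolute differences between the block at (bx, by) in curr
    and the block displaced by (dx, dy) in prev."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(curr[by + y][bx + x] - prev[by + y + dy][bx + x + dx])
    return total

def best_displacement(prev, curr, bx, by, bs=2, radius=1):
    """Exhaustive search over displacements in [-radius, radius]^2."""
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # skip displacements that fall outside the previous frame
            if not (0 <= by + dy and by + dy + bs <= len(prev)):
                continue
            if not (0 <= bx + dx and bx + dx + bs <= len(prev[0])):
                continue
            cost = sad(prev, curr, bx, by, dx, dy, bs)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best

# Tiny example: a 2x2 block that moved one pixel to the right between
# frames, so its best match in prev lies at displacement dx = -1.
prev = [[9, 1, 2, 9],
        [9, 3, 4, 9],
        [9, 9, 9, 9]]
curr = [[9, 9, 1, 2],
        [9, 9, 3, 4],
        [9, 9, 9, 9]]
print(best_displacement(prev, curr, bx=2, by=0))   # -> (0, -1, 0)
```

    Because each block's search is independent, the per-block loop parallelizes naturally, which is the property the report's parallel architecture exploits.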

  4. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  5. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here, goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order, and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment.
That is, the system is able to react to changes in the system and its environment as they occur.
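    The goal/decomposition mechanism described in this abstract (activation condition, ordered gating conditions, first-true-gate execution, success/failure passed up) can be sketched as a small interpreter. The class, the spacecraft-flavored example, and the two-valued termination indication are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of a reactive goal decomposition hierarchy: a goal
# checks its activation condition, then tries its decompositions in
# order, executing the subgoals of the first one whose gating condition
# holds. Names and the example domain are assumptions for illustration.

class Goal:
    def __init__(self, name, activation, decompositions):
        self.name = name
        self.activation = activation          # global precondition on state
        self.decompositions = decompositions  # list of (gate, subgoals)

    def execute(self, state):
        if not self.activation(state):
            return "failure"
        for gate, subgoals in self.decompositions:
            if gate(state):                   # first true gate wins
                for sub in subgoals:          # sequential execution
                    if sub(state) == "failure":
                        return "failure"
                return "success"              # termination passed upward
        return "failure"                      # no decomposition applicable

# Lowest-level subgoals are plain callables acting directly on state.
def open_valve(state):
    state["valve"] = "open"
    return "success"

def fire_thruster(state):
    state["thrust"] = True
    return "success"

adjust_orbit = Goal(
    "adjust-orbit",
    activation=lambda s: s["power"] > 10,     # must hold for any decomposition
    decompositions=[
        (lambda s: s["valve"] == "open", [fire_thruster]),
        (lambda s: True, [open_valve, fire_thruster]),  # fallback path
    ],
)

state = {"power": 50, "valve": "closed"}
print(adjust_orbit.execute(state))            # valve closed: fallback runs
print(state["valve"], state["thrust"])
```

    Because the gates are re-evaluated against the live state on every execution, the behavior adapts to whatever the sensors report at that moment, which is the reactivity the abstract describes.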

  6. Hindering Factors of Beginning Teachers' High Performance in Higher Education Pakistan: Case Study of IUB

    ERIC Educational Resources Information Center

    Sarwar, Shakeel; Aslam, Hassan Danyal; Rasheed, Muhammad Imran

    2012-01-01

    Purpose: The aim of the researchers in this endeavor is to identify the challenges and obstacles faced by beginning teachers in higher education. This study also explores practical implications and what adaptations can be utilized to achieve high performance among beginning teachers. Design/methodology/approach: Researchers have applied…

  7. High Efficiency, High Performance Clothes Dryer

    SciTech Connect

    Peter Pescatore; Phil Carbone

    2005-03-31

    This program covered the development of two separate products: an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development, while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have been in existence for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market, where no volume-production heat pump dryers are available. The issue has typically centered on two key areas: cost and performance. Cost is a given in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) improved fabric temperature uniformity; and (4) robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle. Through heat modulation in a

  8. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which are required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore have lower performance in terms of being an effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  9. On-board Science Understanding: NASA Ames' Efforts

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Cheeseman, Peter; Gulick, Virginia; Wolf, David; Gazis, Paul; Benedix, Gretchen; Buntine, Wray; Glymour, Clark; Pedersen, Liam; Ruzon, Mark

    1998-01-01

    In the near future NASA intends to explore various regions of our solar system using robotic devices such as rovers, spacecraft, airplanes, and/or balloons. Such platforms will likely carry imaging devices and a variety of analytical instruments intended to evaluate the chemical and mineralogical nature of the environment(s) that they encounter. Historically, mission operations have involved: (1) return of scientific data from the craft; (2) evaluation of the data by space scientists; (3) recommendations of the scientists regarding future mission activity; (4) commands for achieving these activities being transmitted to the craft; and (5) the activity being undertaken. This cycle is then repeated for the duration of the mission, with command opportunities once or perhaps twice per day. In a rapidly changing environment, such as might be encountered by a rover traversing hundreds of meters a day or a spacecraft encountering an asteroid, this historical cycle is not amenable to rapid long-range traverses, discovery of novelty, or rapid response to any unexpected situations. In addition to real-time response issues, the nature of imaging and/or spectroscopic devices is such that tremendous data volumes can be acquired, for example during a traverse. However, such data volumes can rapidly exceed on-board memory capabilities before the data can be transmitted to Earth. Additionally, the necessary communication bandwidths are restrictive enough that only a small portion of these data can actually be returned to Earth. Such scenarios clearly require that some crucial decisions be made on board by these robotic explorers. These decisions transcend the electromechanical control, health, and navigation issues associated with robotic operations. Instead they focus upon a long-term goal of automating scientific discovery based upon data returned by sensors of the robot craft. Such an approach would eventually enable it to understand what is interesting

  10. High-Performance, Space-Storable, Bi-Propellant Program Status

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    2002-01-01

    Bipropellant propulsion systems currently represent the largest bus subsystem for many missions. These missions range from low Earth orbit satellites to geosynchronous communications and planetary exploration. The payoff of high performance bipropellant systems is illustrated by the fact that Aerojet Redmond has qualified a commercial NTO/MMH engine based on the high Isp technology recently delivered by this program. They are now qualifying a NTO/hydrazine version of this engine. The advanced rhenium thrust chambers recently provided by this program have raised the performance of Earth-storable propellants from 315 sec to 328 sec of specific impulse. The recently introduced rhenium technology is the first new technology introduced to satellite propulsion in 30 years. Typically, the lead time required to develop and qualify new chemical thruster technology is not compatible with program development schedules. These technology development programs must be supported by a long-term, Base R&T Program if the technology is to be matured. This technology program then addresses the need for high performance, storable, on-board chemical propulsion for planetary rendezvous and descent/ascent. The primary NASA customer for this technology is Space Science, which identifies this need for such programs as Mars Surface Return, Titan Explorer, Neptune Orbiter, and Europa Lander. High performance (390 sec) chemical propulsion is estimated to add 105% payload to the Mars Sample Return mission or, alternatively, reduce the launch mass by 33%. In many cases, the use of existing (flight heritage) propellant technology is accommodated by reducing mission objectives and/or increasing en route travel times, sacrificing the science value per unit cost of the program. Therefore, a high performance storable thruster utilizing fluorinated oxidizers with hydrazine is being developed.

  11. Reconfigurable modular computer networks for spacecraft on-board processing

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    The core electronics subsystems on unmanned spacecraft, which have been sent over the last 20 years to investigate the moon, Mars, Venus, and Mercury, have progressed through an evolution from simple fixed controllers and analog computers in the 1960's to general-purpose digital computers in current designs. This evolution is now moving in the direction of distributed computer networks. Current Voyager spacecraft already use three on-board computers. One is used to store commands and provide overall spacecraft management. Another is used for instrument control and telemetry collection, and the third computer is used for attitude control and scientific instrument pointing. An examination of the control logic in the instruments shows that, for many, it is cost-effective to replace the sequencing logic with a microcomputer. The Unified Data System architecture considered consists of a set of standard microcomputers connected by several redundant buses. A typical self-checking computer module will contain 23 RAMs, two microprocessors, one memory interface, three bus interfaces, and one core building block.

  12. A low-cost on-board vehicle load monitor

    NASA Astrophysics Data System (ADS)

    Lacquet, Beatrys M.; Swart, Pieter L.; Kotzé, Abraham P.

    1996-12-01

    We propose the use of etched optical fibre strain sensors to provide an economical on-board load indicator for minibuses and heavy vehicles. By improving the fabrication process we produced symmetrically etched fibre strain gauges. Manufactured sensors were evaluated experimentally by straining them on a cantilever beam. For strains smaller than 600 microstrain the output of a ten-segment sensor was linear with a typical gauge factor of -57. Bending losses in the fibre sensor became more pronounced for larger strains. This sensor has only two optical components apart from the sensing element. Strain sensors were mounted on the rear axle and on the front torsion bar of a minibus taxi test vehicle. Proper weighting of the outputs of the front and back sensors on the vehicle ensures a monotonic relationship between the sensor output and load. In addition, the reading of the sensor system is virtually independent of the load distribution in the vehicle. Difference-over-sum processing ensures insensitivity to common-mode perturbations such as temperature and source intensity changes.
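
The difference-over-sum processing mentioned above is easy to illustrate. The numbers below are made up, but they show why a common-mode change (such as a drop in source intensity, which scales both channels equally) leaves the reading untouched:

```python
def diff_over_sum(a: float, b: float) -> float:
    """Normalized difference: common-mode gain changes cancel out."""
    return (a - b) / (a + b)

# Front and rear sensor outputs at nominal source intensity...
r1 = diff_over_sum(1.20, 0.80)
# ...and the same load with source intensity down 30% (a common-mode change):
r2 = diff_over_sum(0.84, 0.56)
print(r1, r2)  # both ~0.2: the reading is unaffected by the intensity drop
```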

  13. The ALTCRISS Project On Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Casolino, M.; Altamura, F.; Minori, M.; Picozza, P.; Fuglesang, C.; Galper, A.; Popov, A.; Benghin, V.; Petrov, V. M.

    2006-01-01

    The Altcriss project aims to perform a long-term survey of the radiation environment on board the International Space Station. Measurements are being performed with active and passive devices in different locations and orientations of the Russian segment of the station. The goal is to perform a detailed evaluation of the differences in particle fluence and nuclear composition due to different shielding materials and attitudes of the station. The Sileye-3/Alteino detector is used to identify nuclei up to iron in the energy range above approximately 60 MeV/n; a number of passive dosimeters (TLDs, CR39) are also placed in the same location as the Sileye-3 detector. Polyethylene shielding is periodically interposed in front of the detectors to evaluate the effectiveness of shielding on the nuclear component of the cosmic radiation. The project was submitted to ESA in reply to the 2004 Life and Physical Sciences AO and was begun in December 2005. Dosimeters and data cards are rotated every six months: up to now, three launches of dosimeters and data cards have been performed, and the items have been returned at the end of Expeditions 12 and 13.

  14. High Performance Home Building Guide for Habitat for Humanity Affiliates

    SciTech Connect

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  15. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  16. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used for: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  17. Application of advanced on-board processing concepts to future satellite communications systems

    NASA Technical Reports Server (NTRS)

    Katz, J. L.; Hoffman, M.; Kota, S. L.; Ruddy, J. M.; White, B. F.

    1979-01-01

    An initial definition of on-board processing requirements for an advanced satellite communications system to service domestic markets in the 1990's is presented. An exemplar system architecture with both RF on-board switching and demodulation/remodulation baseband processing was used to identify important issues related to system implementation, cost, and technology development.

  18. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... recording system; and (2) A supply of blank driver's records of duty status graph-grids sufficient to record... driver to use an automatic on-board recording device to record the driver's hours of service in lieu of... an automatic on-board recording device shall use such device to record the driver's hours of...

  19. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  20. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  1. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  2. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  3. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  4. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  5. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  6. 77 FR 63217 - Use of Additional Portable Oxygen Concentrators on Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-16

    ...) entitled, ``Use of Certain Portable Oxygen Concentrator Devices Onboard Aircraft'' (70 FR 40156). SFAR 106... portable oxygen concentrator devices on board aircraft (69 FR 42324). Then, on July 12, 2005, after...) entitled, ``Use of Certain Portable Oxygen Concentrator Devices on Board Aircraft.'' (70 FR...

  7. High performance ultrasonic field simulation on complex geometries

    NASA Astrophysics Data System (ADS)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations, focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent work that aims at similar performance on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing a fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivision to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations, including mono-element and multiple-element transducers, homogeneous, meshed 3D CAD specimens, isotropic and anisotropic materials, and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.

  8. High-mileage study of on-board diagnostic emissions.

    PubMed

    Gardetto, Ed; Bagian, Tandi; Lindner, Jim

    2005-10-01

    The 1990 Clean Air Act amendments require the U.S. Environmental Protection Agency (EPA) to set guidelines for states to follow in designing and running vehicle inspection and maintenance (I/M) programs. Included in this charge was a requirement to implement an on-board diagnostic (OBD) test for both basic and enhanced I/M programs. This paper provides the results to date of an ongoing EPA study undertaken to assess the durability of the OBD system as vehicles age and as mileage is accrued. The primary results of this effort indicate the points described below. First, the majority of high-mileage vehicles tested had emission levels within their certification limits, and their malfunction indicator light (MIL) was not illuminated, indicating that the systems are capable of working throughout the life of a vehicle. Second, OBD provides better air quality benefits than an IM240 test (using the federal test procedure [FTP] as the benchmark comparison). This statement is based on greater emissions reductions from OBD-directed repairs than reductions associated with IM240-identified repairs. In general, the benefits of repairing the OBD fails were smaller, but the aggregate benefits were greater, indicating that OBD tests find both the high-emitting and a number of marginally high-emitting vehicles without false failures that can occur with any tailpipe test. Third, vehicles that truly had high-tailpipe emissions as confirmed by laboratory IM240 and FTP testing also had illuminated MILs at a statistically significant level. Last, field data from state programs have demonstrated MIL illumination rates comparable with those seen in this work, suggesting that the vehicles sampled in this study were representative of the larger fleet. Nonetheless, it is important to continue the testing of high-mileage OBD vehicles into the foreseeable future to ensure that the systems are operating correctly as the fleet ages and as changes in emission certification levels take effect.

  9. Robonaut 2 - Initial Activities On-Board the ISS

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Greene, B. D.; Joyce, Charles; De La Pena, Noe; Noblitt, Alan; Ambrose, Robert

    2011-01-01

    Robonaut 2, or R2, arrived on the International Space Station in February 2011 and is currently undergoing testing in preparation for it to become, initially, an Intra-Vehicular Activity (IVA) tool and then evolve into a system that can perform Extra-Vehicular Activities (EVA). After the completion of a series of system level checks to ensure that the robot traveled well on-board the Space Shuttle Atlantis, ground control personnel will remotely control the robot to perform free space tasks that will help characterize the differences between Earth and zero-g control. For approximately one year, the fixed base R2 will perform a variety of experiments using a reconfigurable task board that was launched with the robot. While working side-by-side with human astronauts, Robonaut 2 will actuate switches, use standard tools, and manipulate Space Station interfaces, soft goods and cables. The results of these experiments will demonstrate the wide range of tasks a dexterous humanoid can perform in space and they will help refine the methodologies used to control dexterous robots both in space and here on Earth. After the trial period that will evaluate R2 while on a fixed stanchion in the US Laboratory module, NASA plans to launch climbing legs that, when attached to the current on-orbit R2 upper body, will give the robot the ability to traverse through the Space Station and start assisting crew with general IVA maintenance activities. Multiple control modes will be evaluated in this extraordinary ISS test environment to prepare the robot for use during EVAs. Ground controllers will remotely supervise the robot as it executes semi-autonomous scripts for climbing through the Space Station and interacting with IVA interfaces. IVA crew will locally supervise the robot using the same scripts and also teleoperate the robot to simulate scenarios with the robot working alone or as an assistant during space walks.

  10. Fatigue stress detection of VIRTIS cryocoolers on board Rosetta

    NASA Astrophysics Data System (ADS)

    Giuppi, Stefano; Politi, Romolo; Capria, Maria Teresa; Piccioni, Giuseppe; De Sanctis, Maria Cristina; Erard, Stéphane; Tosi, Federico; Capaccioni, Fabrizio; Filacchione, Gianrico

    Rosetta is a planetary cornerstone mission of the European Space Agency (ESA). It is devoted to the study of minor bodies of our solar system and it will be the first mission ever to land on a comet (the Jupiter-family comet 67P/Churyumov-Gerasimenko). VIRTIS-M is a sophisticated imaging spectrometer that combines two data channels in one compact instrument, respectively for the visible and the infrared range (0.25-5.0 μm). VIRTIS-H is devoted to infrared spectroscopy (2.5-5.0 μm) with high spectral resolution. Since the satellite will be inside the tail of the comet during one of the most important phases of the mission, it would not be appropriate to use a passive cooling system, due to the high flux of contaminants on the radiator. Therefore the IR sensors are cooled by two Stirling cycle cryocoolers produced by RICOR. Since RICOR performed life tests only on the ground, it was decided to conduct an analysis of the VIRTIS telemetries on board Rosetta with the purpose of studying possible differences in cryocooler performance. The analysis led to the conclusion that the cryocoolers, when operating on board, are subject to a fatigue stress not present in the on-ground life tests. The telemetry analysis shows a cyclic variation in cryocooler rotor angular velocity when the -M or -H channel, or both, are operating (an influence of -M channel operations on the -H cryocooler rotor angular velocity, and vice versa, has also been noted), with frequencies mostly linked to operational parameter values. The frequencies have been calculated for each mission observation by applying the Fast Fourier Transform (FFT). In order to evaluate possible edge effects, a Hanning window has also been applied to compare the results. For a more complete evaluation of the cryocoolers' fatigue stress, the angular acceleration and the angular jerk have been calculated for each mission observation.
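
The frequency-extraction step described above (FFT of the angular-velocity telemetry, with a Hanning window to control edge effects) can be sketched as follows. The sampling rate and the synthetic signal are hypothetical stand-ins for the actual VIRTIS telemetries:

```python
import numpy as np

fs = 10.0                        # hypothetical telemetry sampling rate, Hz
t = np.arange(0, 60, 1 / fs)     # 60 s of samples
rng = np.random.default_rng(0)
# Synthetic stand-in for the rotor angular-velocity telemetry:
# a 0.5 Hz cyclic variation plus measurement noise.
omega = 2.0 * np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

def dominant_frequency(x, fs, window=None):
    """Return the frequency of the largest spectral peak, optionally windowed."""
    if window is not None:
        x = x * window(x.size)          # e.g. np.hanning to tame edge effects
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(omega, fs))              # plain FFT
print(dominant_frequency(omega, fs, np.hanning))  # Hanning-windowed, for comparison
```

Comparing the windowed and unwindowed estimates, as done in the study, shows whether spectral leakage at the record edges is biasing the recovered frequency.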

  11. Characterization of different types of high-performance THUNDER actuators

    NASA Astrophysics Data System (ADS)

    Mossi, Karla M.; Bishop, Richard P.

    1999-07-01

    THUNDER technology introduces a versatile new family of rugged, robust, reliable piezoelectric actuators and sensors. Because of their pre-stressed composite structure, these powerful yet lightweight devices exhibit unprecedented performance in a durable, solid state package. Both sensors and actuators can be manufactured in a variety of adaptable geometries - squares, rectangles and disks - from several millimeters to many centimeters in size. Wide bandwidth performance can be achieved and maintained even in harsh chemical and temperature environments. Based on an invention patented by NASA, THUNDER is an emerging, enabling technology that holds the promise of significant advancements in numerous 'smart' applications. Development of these applications for smart materials and structures requires extensive characterization of a variety of THUNDER devices in a range of configurations. This comprehensive characterization effort is especially challenging because of the extraordinary flexibility and range of motion demonstrated by THUNDER devices, even under significant load. This paper will discuss important new work in the ongoing program of THUNDER device characterization. The program includes not only development of the characterization process, but also design and manufacture of the test and measurement equipment necessary to conduct meaningful and reliable testing on these unique, high performance devices. Results will be presented on characterization of two configurations of THUNDER devices, including a circular and a rectangular model of different sizes constructed of varying materials. Data will be offered for a number of key performance characteristics, including displacement, block force, plus displacement vs. voltage and displacement vs. force.

  12. Productive high-performance software for OpenCL devices

    NASA Astrophysics Data System (ADS)

    Melonakos, John M.; Yalamanchili, Pavan; McClanahan, Chris; Arshad, Umar; Landes, Michael; Jamboti, Shivapriya; Joshi, Abhijit; Mohammed, Shehzan; Spafford, Kyle; Venugopalakrishnan, Vishwanath; Malcolm, James

    2013-05-01

    Over the last three decades, CPUs have continued to produce large performance improvements from one generation to the next. However, CPUs have recently hit a performance wall and need parallel computing to move forward. Parallel computing over the next decade will become increasingly defined by heterogeneous computing, involving the use of accelerators in addition to CPUs to get computational tasks done. In order to use an accelerator, software changes must be made. Regular x86-based compilers cannot compile code to run on accelerators without these needed changes. The amount of software change required varies depending upon the availability of, and reliance upon, software tools that increase performance and productivity. Writing software that leverages the best parallel computing hardware, adapts well to the rapid pace of hardware updates, and minimizes developer muscle is the industry's goal. OpenCL is the standard around which developers are able to achieve parallel performance. OpenCL itself is too difficult to program in to receive general adoption, but productive high-performing software libraries are becoming increasingly popular and capable of delivering lasting value to user applications.

  13. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
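    The event mechanism described above can be sketched in a few lines: predict, compare the covariance-derived error against a threshold, and request a measurement only when the threshold is exceeded. The paper's estimator is an unscented Kalman filter over the full vehicle pose; the sketch below uses a scalar random-walk filter purely to show the trigger logic, and the model, noise values, and threshold are invented for illustration.

```python
import math
import random

random.seed(1)

# 1-D random-walk model: x_k = x_{k-1} + w,  measurement z = x + v
Q, R = 0.05, 0.5          # process / measurement noise variances (assumed)
THRESHOLD = 1.0           # error threshold that triggers a sensor request

x_est, P = 0.0, 1.0       # initial state estimate and covariance
x_true = 0.0
requests = 0

for k in range(200):
    # True system evolves; the estimator only predicts between events
    x_true += random.gauss(0.0, math.sqrt(Q))
    P += Q                                 # prediction grows uncertainty

    drms = math.sqrt(P)                    # scalar analogue of the DRMS
    if drms > THRESHOLD:                   # event: request a measurement
        requests += 1
        z = x_true + random.gauss(0.0, math.sqrt(R))
        K = P / (P + R)                    # Kalman gain
        x_est += K * (z - x_est)
        P *= (1.0 - K)                     # covariance shrinks after update

print(f"measurements requested: {requests} of 200 steps")
```

    Because the sensor is queried only when uncertainty actually exceeds the bound, most steps need no measurement at all, which is the source of the power and bandwidth savings the abstract reports.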

  14. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle.

    PubMed

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489

  15. Rotational artifacts in on-board cone beam computed tomography

    NASA Astrophysics Data System (ADS)

    Ali, E. S. M.; Webb, R.; Nyiri, B. J.

    2015-02-01

    Rotational artifacts in image guidance systems lead to registration errors that affect non-isocentric treatments and dose to off-axis organs-at-risk. This study investigates a rotational artifact in the images acquired with the on-board cone beam computed tomography system XVI (Elekta, Stockholm, Sweden). The goals of the study are to identify the cause of the artifact, to characterize its dependence on other quantities, and to investigate possible solutions. A 30 cm diameter cylindrical phantom is used to acquire clockwise and counterclockwise scans at five speeds (120 to 360 deg min-1) on six Elekta linear accelerators from three generations (MLCi, MLCi2 and Agility). Additional scans are acquired with different pulse widths and focal spot sizes for the same mAs. Image quality is evaluated using a common phantom with an in-house three dimensional contrast transfer function attachment. A robust, operator-independent analysis is developed which quantifies rotational artifacts with 0.02° accuracy and imaging system delays with 3 ms accuracy. Results show that the artifact is caused by mislabelling of the projections with a lagging angle due to various imaging system delays. For the most clinically used scan speed (360 deg min-1), the artifact is ˜0.5°, which corresponds to ˜0.25° error per scan direction with the standard Elekta procedure for angle calibration. This leads to a 0.5 mm registration error at 11 cm off-center. The artifact increases linearly with scan speed, indicating that the system delay is independent of scan speed. For the most commonly used pulse width of 40 ms, this delay is 34 ± 1 ms, part of which is half the pulse width. Results are consistent among the three linac generations. A software solution that corrects the angles of individual projections is shown to eliminate the rotational error for all scan speeds and directions. Until such a solution is available from the manufacturer, three clinical solutions are presented, which reduce the
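    The linear relationship reported above (lag angle proportional to scan speed for a fixed system delay) can be checked with back-of-envelope arithmetic. The delay value follows the 34 ms figure quoted in the abstract; the loop is illustrative, not a reproduction of the paper's calibration procedure.

```python
# Lag angle accrued by the imaging-system delay: theta = omega * t_delay.
# Scan speeds match the 120-360 deg/min range used in the study; the
# linearity in speed reflects the finding that the delay itself is
# independent of scan speed.
DELAY_S = 0.034                      # total system delay, seconds

for speed_deg_min in (120, 180, 240, 300, 360):
    omega = speed_deg_min / 60.0     # deg/s
    lag = omega * DELAY_S            # projection mislabelling per direction, deg
    print(f"{speed_deg_min:3d} deg/min -> lag {lag:.3f} deg "
          f"(CW/CCW separation {2 * lag:.3f} deg)")
```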

  16. Measuring Organic Matter with COSIMA on Board Rosetta

    NASA Astrophysics Data System (ADS)

    Briois, C.; Baklouti, D.; Bardyn, A.; Cottin, H.; Engrand, C.; Fischer, H.; Fray, N.; Godard, M.; Hilchenbach, M.; von Hoerner, H.; Höfner, H.; Hornung, K.; Kissel, J.; Langevin, Y.; Le Roy, L.; Lehto, H.; Lehto, K.; Orthous-Daunay, F. R.; Revillet, C.; Rynö, J.; Schulz, R.; Silen, J. V.; Siljeström, S.; Thirkell, L.

    2014-12-01

    Comets are believed to contain the most pristine material of our Solar System and therefore to be a key to understanding the origin of the Solar System, and the origin of life. Remote sensing observations have led to the detection of more than twenty simple organic molecules (Bockelée-Morvan et al., 2004; Mumma and Charnley, 2011). Experiments on board the in-situ exploration missions Giotto and Vega and the recent Stardust sample return mission have shown that a significant fraction of cometary grains consists of organic matter. Spectra showed that both the gaseous phase (Mitchell et al., 1992) and the solid phase (grains) (Kissel and Krueger, 1987) contain organic molecules with higher masses than those detected by remote sensing techniques in the gaseous phase. Some of the grains analyzed in the atmosphere of comet 1P/Halley seem to be essentially made of a mixture of carbon, hydrogen, oxygen and nitrogen (CHON grains, Fomenkova, 1999). Rosetta offers an unparalleled opportunity to make a real breakthrough in our understanding of cometary matter, both in the gas and in the solid phase. The dust mass spectrometer COSIMA on Rosetta will analyze organic and inorganic phases in the dust. The organic phases may be refractory, but some organics may evaporate with time from the dust and lead to an extended source in the coma. Over the last few years, we have prepared for the cometary rendezvous by analyzing various samples with the reference model of COSIMA. We will report on this calibration data set and on the first results of the in-situ analysis of cometary grains as captured, imaged and analyzed by COSIMA. References: Bockelée-Morvan, D., et al. 2004. (Eds.), Comets II. The University of Arizona Press, Tucson, USA, pp. 391-423; Fomenkova, M.N., 1999. Space Science Reviews 90, 109-114; Kissel, J., Krueger, F.R., 1987. Nature 326, 755-760; Mitchell, et al. 1992. Icarus 98, 125-133; Mumma, M.J., Charnley, S.B., 2011. Annual Review of Astronomy and

  17. Optical multiple access techniques for on-board routing

    NASA Astrophysics Data System (ADS)

    Mendez, Antonio J.; Park, Eugene; Gagliardi, Robert M.

    1992-03-01

    The purpose of this research contract was to design and analyze an optical multiple access system, based on Code Division Multiple Access (CDMA) techniques, for on-board routing applications on a future communication satellite. The optical multiple access system was to effect the functions of a circuit switch under the control of an autonomous network controller and to serve eight (8) concurrent users at a point-to-point (port-to-port) data rate of 180 Mb/s. (At the start of this program, the bit error rate (BER) requirement was undefined, so it was treated as a design variable during the contract effort.) CDMA was selected over other multiple access techniques because it lends itself to bursty, asynchronous, concurrent communication and potentially can be implemented with off-the-shelf, reliable optical transceivers compatible with long-term unattended operations. Temporal, temporal/spatial hybrid, and single pulse per row (SPR, sometimes termed 'sonar matrices') matrix types of CDMA designs were considered. The design, analysis, and trade-offs required by the statement of work led to the selection of a temporal/spatial CDMA scheme with SPR properties as the preferred solution. This selected design can be implemented for feasibility demonstration with off-the-shelf components (which are identified in the bill of materials of the contract Final Report). The photonic network architecture of the selected design is based on M(8,4,4) matrix codes. The network requires eight multimode laser transmitters with laser pulses of 0.93 ns operating at 180 Mb/s and 9-13 dBm peak power, and eight PIN diode receivers with a sensitivity of -27 dBm for the 0.93 ns pulses. The wavelength is not critical, but 830 nm technology readily meets the requirements. The passive optical components of the photonic network are all multimode and off the shelf. Bit error rate (BER) computations, based on both electronic noise and intercode crosstalk, predict a raw BER of 10^-3 when all eight users are
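    The key property of single-pulse-per-row matrix codes is that any two users' codes collide in at most a bounded number of chip positions, which keeps intercode crosstalk low. The toy construction below (a prime-sequence placement on a 4 x 5 grid) is an assumption made for illustration, not the actual M(8,4,4) design from the report:

```python
# Toy single-pulse-per-row (SPR) temporal/spatial CDMA codes.
# Each user's code is an R x C binary matrix with exactly one pulse per
# row (row = spatial channel, column = time chip); here we store only the
# pulse column per row.  C is chosen prime so distinct users never collide.
R, C = 4, 5                       # 4 spatial rows, 5 time chips

def spr_code(user):
    """Column index of the single pulse in each row for a given user."""
    return [(user * (r + 1)) % C for r in range(R)]

def cross_correlation(a, b):
    """Number of chip positions where two SPR codes collide."""
    return sum(1 for ca, cb in zip(a, b) if ca == cb)

codes = [spr_code(u) for u in range(1, C)]      # 4 distinct users
for i in range(len(codes)):
    for j in range(i + 1, len(codes)):
        assert cross_correlation(codes[i], codes[j]) <= 1
print("pairwise cross-correlation bounded for", len(codes), "users")
```

    Low pairwise cross-correlation is exactly what drives the intercode-crosstalk term in the BER computation quoted above: the fewer chip collisions between users, the less interference each concurrent transmission contributes.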

  18. Legionella on board trains: effectiveness of environmental surveillance and decontamination

    PubMed Central

    2012-01-01

    Background Legionella pneumophila is increasingly recognised as a significant cause of sporadic and epidemic community-acquired and nosocomial pneumonia. Many studies describe the frequency and severity of Legionella spp. contamination in spa pools, natural pools, hotels and ships, but there is no study analysing the environmental monitoring of Legionella on board trains. The aims of the present study were to conduct periodic and precise environmental surveillance of Legionella spp. in water systems and water tanks that supply the toilet systems on trains, to assess the degree of contamination of such structures and to determine the effectiveness of decontamination. Methods A comparative pre-post ecological study was conducted from September 2006 to January 2011. A total of 1,245 water samples were collected from plumbing and toilet water tanks on passenger trains. The prevalence proportion of all positive samples was calculated. The unpaired t-test was performed to evaluate statistically significant differences between the mean load values before and after the decontamination procedures; statistical significance was set at p ≤ 0.05. Results In the pre-decontamination period, 58% of the water samples were positive for Legionella. Only Legionella pneumophila was identified: 55.84% were serogroup 1, 19.03% were serogroups 2–14 and 25.13% contained both serogroups. The mean bacterial load value was 2.14 × 103 CFU/L. During the post-decontamination period, 42.75% of water samples were positive for Legionella spp.; 98.76% were positive for Legionella pneumophila: 74.06% contained serogroup 1, 16.32% contained serogroups 2–14 and 9.62% contained both. The mean bacterial load in the post-decontamination period was 1.72 × 103 CFU/L. According to the t-test, there was a statistically significant decrease in total bacterial load until approximately one and a half years after beginning the decontamination programme (p = 0.0097). Conclusions This
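    The pre-post comparison above rests on an unpaired two-sample t-test. A minimal sketch follows, with the t statistic computed from scratch; the sample data are synthetic, drawn around the mean loads reported in the abstract (2.14e3 vs. 1.72e3 CFU/L), with an invented spread and sample size:

```python
import math
import random

random.seed(0)

def unpaired_t(sample_a, sample_b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Synthetic CFU/L loads centred on the reported means; spread and n invented.
pre  = [random.gauss(2140, 400) for _ in range(60)]
post = [random.gauss(1720, 400) for _ in range(60)]
t = unpaired_t(pre, post)
print(f"t = {t:.2f}")   # compare against a t-distribution with df = 118
```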

  19. Optical multiple access techniques for on-board routing

    NASA Technical Reports Server (NTRS)

    Mendez, Antonio J.; Park, Eugene; Gagliardi, Robert M.

    1992-01-01

    The purpose of this research contract was to design and analyze an optical multiple access system, based on Code Division Multiple Access (CDMA) techniques, for on-board routing applications on a future communication satellite. The optical multiple access system was to effect the functions of a circuit switch under the control of an autonomous network controller and to serve eight (8) concurrent users at a point-to-point (port-to-port) data rate of 180 Mb/s. (At the start of this program, the bit error rate (BER) requirement was undefined, so it was treated as a design variable during the contract effort.) CDMA was selected over other multiple access techniques because it lends itself to bursty, asynchronous, concurrent communication and potentially can be implemented with off-the-shelf, reliable optical transceivers compatible with long-term unattended operations. Temporal, temporal/spatial hybrid, and single pulse per row (SPR, sometimes termed 'sonar matrices') matrix types of CDMA designs were considered. The design, analysis, and trade-offs required by the statement of work led to the selection of a temporal/spatial CDMA scheme with SPR properties as the preferred solution. This selected design can be implemented for feasibility demonstration with off-the-shelf components (which are identified in the bill of materials of the contract Final Report). The photonic network architecture of the selected design is based on M(8,4,4) matrix codes. The network requires eight multimode laser transmitters with laser pulses of 0.93 ns operating at 180 Mb/s and 9-13 dBm peak power, and eight PIN diode receivers with a sensitivity of -27 dBm for the 0.93 ns pulses. The wavelength is not critical, but 830 nm technology readily meets the requirements. The passive optical components of the photonic network are all multimode and off the shelf. Bit error rate (BER) computations, based on both electronic noise and intercode crosstalk, predict a raw BER of 10^-3 when all eight users are

  20. DOE research in utilization of high-performance computers

    SciTech Connect

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  1. On-board predicting algorithm of radiation exposure for the International Space Station radiation monitoring system

    NASA Astrophysics Data System (ADS)

    Benghin, V. V.

    2008-02-01

    The radiation monitoring system (RMS) has operated on board the International Space Station (ISS) almost continuously since August 2001. In June 2005, the RMS software was updated. The new RMS software detects radiation environment worsening due to solar proton events and informs the crew. An on-board radiation environment prediction algorithm is part of the new software. This algorithm detects dose rate increments on the high-latitude parts of the ISS orbit and estimates time intervals and dose rate values for subsequent crossings of high-latitude areas. A brief description of the on-board radiation exposure prediction algorithm is presented.

  2. HTML 5 Displays for On-Board Flight Systems

    NASA Technical Reports Server (NTRS)

    Silva, Chandika

    2016-01-01

    During my internship at NASA in the summer of 2016, I was assigned to a project which dealt with developing a web server that would display telemetry and other system data using HTML 5, JavaScript, and CSS. By doing this, it would be possible to view the data across a variety of screen sizes, and establish a standard that could be used to simplify communication and software development between NASA and other countries. Utilizing a web-based approach allowed us to add in more functionality, as well as make the displays more aesthetically pleasing for the users. When I was assigned to this project my main task was to first establish communication with the current display server. This display server would output data from the on-board systems in XML format. Once communication was established I was then asked to create a dynamic telemetry table web page that would update its header and contents as new information came in. After this was completed, minor functionalities were added to the table, such as hide-column and filter-by-system options. This made the table more useful, as users can now filter and view relevant data. Finally, my last task was to create a graphical system display for all the systems on the spacecraft. This was by far the most challenging part of my internship, as it was difficult to find a JavaScript library that was both free and contained the functions I needed. In the end I was able to use the JointJS library and accomplish the task. With the help of my mentor and the HIVE lab team, we were able to establish stable communication with the display server. We also succeeded in creating a fully dynamic telemetry table and in developing a graphical system display for the advanced modular power system. Working at JSC for this internship taught me a lot about coding in JavaScript and HTML 5.
I was also introduced to the concept of developing software as a team, and exposed to the different
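    The filter-by-system behavior described above can be sketched in a few lines against a raw XML feed. The element and attribute names below are invented, since the actual display server's schema is not given in the text:

```python
import xml.etree.ElementTree as ET

# Hypothetical snippet of the XML the display server emits; the element
# and attribute names are assumptions made for illustration.
TELEMETRY_XML = """
<telemetry>
  <reading system="power"   name="bus_voltage" value="28.1"/>
  <reading system="thermal" name="cabin_temp"  value="22.4"/>
  <reading system="power"   name="bus_current" value="3.7"/>
</telemetry>
"""

def filter_by_system(xml_text, system):
    """Mimic the table's 'filter by system' option on the raw XML feed."""
    root = ET.fromstring(xml_text)
    return [(r.get("name"), float(r.get("value")))
            for r in root.iter("reading") if r.get("system") == system]

rows = filter_by_system(TELEMETRY_XML, "power")
print(rows)    # [('bus_voltage', 28.1), ('bus_current', 3.7)]
```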

  3. On-Board Engine Exhaust Particulate Matter Sensor for HCCI and Conventional Diesel Engines

    SciTech Connect

    Hall, Matt; Matthews, Ron

    2011-09-30

    The goal of the research was to refine and complete development of an on-board particulate matter (PM) sensor for diesel, DISI, and HCCI engines, bringing it to a point where it could be commercialized and marketed.

  4. Superconducting magnet and on-board refrigeration system on Japanese MAGLEV vehicle

    SciTech Connect

    Tsuchishima, H.; Herai, T. )

    1991-03-01

    This paper reports on a superconducting magnet and on-board refrigeration system on Japanese MAGLEV vehicles. Running tests on the Miyazaki test track are repeatedly carried out at speeds over 300 km/h using the MAGLEV vehicle, MLU002. The development of the MAGLEV system for the new test line has already started, and a new superconducting magnet for it has been manufactured. An on-board refrigerator is installed in the superconducting magnet to keep the liquid helium temperature without the loss of liquid helium. The helium gas produced when energizing or de-energizing the magnet is stored in on-board gas helium tanks temporarily. The on-board refrigerator is connected directly to the liquid helium tank of the magnet.

  5. Geostationary Satellite Near-Miss Avoidance Using On-Board Monitoring

    NASA Astrophysics Data System (ADS)

    Kawase, Sei-Ichiro

    In this paper we discuss the use of a monitoring camera on board a satellite to detect unknown satellites approaching closely enough to pose a collision risk. We assume that a camera on board our satellite tracks the direction angles of an unknown target satellite, and we run simulations to determine the target's trajectory relative to our satellite. The simulations show that we cannot uniquely determine the target's trajectory, but that we can decide whether a collision avoidance maneuver is needed and can determine its strategy when it is. The on-board camera does not need precise alignment, as the bias in direction angles can be estimated as an unknown parameter. On-board monitoring can thus be practical for orbital risk avoidance.

  6. On-Board Preventive Maintenance: Analysis of Effectiveness and Optimal Duty Period

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalaj, Leon; Hecht, Herbert

    1996-01-01

    To maximize the reliability of a spacecraft performing a long-life (over 10-year) deep-space mission (to the outer planets), a fault-tolerant environment incorporating automatic on-board preventive maintenance is highly desirable.

  7. Training High Performance Skills: Fallacies and Guidelines. Final Report.

    ERIC Educational Resources Information Center

    Schneider, Walter

    High performance skills are defined as ones: (1) which require over 100 hours of training, (2) in which a substantial number of individuals fail to develop proficiency, and (3) in which the performance of the expert is qualitatively different from that of the novice. Training programs for developing high performance skills are often based on…

  8. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Rotordynamics and predictions on the stability of characteristics of high performance turbomachinery were discussed. Resolutions of problems on experimental validation of the forces that influence rotordynamics were emphasized. The programs to predict or measure forces and force coefficients in high-performance turbomachinery are illustrated. Data to design new machines with enhanced stability characteristics or upgrading existing machines are presented.

  9. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  10. An Analysis of a High Performing School District's Culture

    ERIC Educational Resources Information Center

    Corum, Kenneth D.; Schuetz, Todd B.

    2012-01-01

    This report describes a problem based learning project focusing on the cultural elements of a high performing school district. Current literature on school district culture provides numerous cultural elements that are present in high performing school districts. With the current climate in education placing pressure on school districts to perform…

  11. Islet in weightlessness: biological experiments on board COSMOS 1129 satellite

    SciTech Connect

    Zhuk, Y.

    1980-09-01

    Biological experiments planned as an international venture for COSMOS 1129 satellite include tests of: (1) adaptation of rats to conditions of weightlessness, and readaption to Earth's gravity, (2) possibility of fertilization and embryonic development in weightlessness, (3) heat exchange processes, (4) amount of gravity force preferred by fruit flies for laying eggs (given a choice of three centrifugal zones), (5) growth of higher plants from seeds, (6) effects of weightlessness on cells in culture, and (7) radiation danger from heavy nuclei, and electrostatic protection from charged particles.

  12. Islet in weightlessness: Biological experiments on board COSMOS 1129 satellite

    NASA Technical Reports Server (NTRS)

    Zhuk, Y.

    1980-01-01

    Biological experiments planned as an international venture for COSMOS 1129 satellite include tests of: (1) adaptation of rats to conditions of weightlessness, and readaption to Earth's gravity; (2) possibility of fertilization and embryonic development in weightlessness; (3) heat exchange processes; (4) amount of gravity force preferred by fruit flies for laying eggs (given a choice of three centrifugal zones); (5) growth of higher plants from seeds; (6) effects of weightlessness on cells in culture and (7) radiation danger from heavy nuclei, and electrostatic protection from charged particles.

  13. The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update

    NASA Astrophysics Data System (ADS)

    Flatley, T.

    2012-12-01

    SpaceCube is an FPGA based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.

  14. Dosimetric verification of lung cancer treatment using the CBCTs estimated from limited-angle on-board projections

    PubMed Central

    Zhang, You; Yin, Fang-Fang; Ren, Lei

    2015-01-01

    PTV doses were 1.6% (±1.9%), 1.2% (±0.6%), 2.2% (±0.8%), and 17.4% (±15.3%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.3% (±0.2%), 0.9% (±0.6%), 0.6% (±0.4%), and 1.0% (±0.8%), respectively. Similarly, for the physical phantom study, the average ΔDmin, ΔDmax, ΔDmean, and ΔV100% of planned PTV doses were 38.1% (±30.8%), 3.5% (±5.1%), 3.0% (±2.6%), and 8.8% (±8.0%), respectively. The corresponding values of FDK PTV doses were 5.8% (±4.5%), 1.6% (±1.6%), 2.0% (±0.9%), and 9.3% (±10.5%), respectively. In contrast, the corresponding values of MM-FD PTV doses were 0.4% (±0.8%), 0.8% (±1.0%), 0.5% (±0.4%), and 0.8% (±0.8%), respectively. For the 4D dose accumulation study, the average (± standard deviation) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL measured doses was 3.3% (±2.7%). The average gamma index (3%/3 mm) between the accumulated doses and the radiochromic film measured doses was 94.5% (±2.5%). Conclusions: MM-FD estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy. It can potentially be valuable for treatment quality assessment and adaptive radiation therapy. PMID:26233206

  15. Investigations on-board the biosatellite Cosmos-83

    NASA Astrophysics Data System (ADS)

    Gazenko, O. G.; Ilyin, Eu. A.

    The program of the 5-day flight of the biosatellite Cosmos-1514 (December 1983) envisaged experimental investigations to ascertain the effect of short-term microgravity on the physiology, growth and development of various animal and plant species. The study of Rhesus monkeys has shown that they are an adequate model for exploring the mechanisms of physiological adaptation of the vestibular apparatus and the cardiovascular system to weightlessness. The rat experiment has demonstrated that mammalian embryos, at least during the last term of pregnancy, can develop in microgravity. This finding has been confirmed by fish studies. The experiment on germinating seeds and adult plants has given evidence that microgravity produces no effect on the metabolism of seedlings or on the flowering stage.

  16. Overview of the Waveform Capture in the Lunar Radar Sounder on board KAGUYA

    NASA Astrophysics Data System (ADS)

    Kasahara, Y.; Goto, Y.; Hashimoto, K.; Imachi, T.; Kumamoto, A.; Ono, T.; Matsumoto, H.

    2007-12-01

    The Lunar explorer "KAGUYA" (SELENE) spacecraft will be launched on September 13, 2007. The Lunar Radar Sounder (LRS) is one of the scientific instruments on board KAGUYA. It consists of three subsystems: the sounder observation (SDR), the natural plasma wave receiver (NPW), and the waveform capture (WFC). The WFC is a high-performance and multifunctional software receiver in which most functions are realized by the onboard software implemented in a digital signal processor (DSP). The WFC consists of a fast-sweep frequency analyzer (WFC-H) covering the frequency range from 1 kHz to 1 MHz and a waveform receiver (WFC-L) in the frequency range from 10 Hz to 100 kHz. The amount of raw data from the plasma wave instrument is huge because the scientific objectives require the covering of a wide frequency range with high time and frequency resolution; furthermore, a variety of operation modes are needed to meet these scientific objectives. In addition, new techniques such as digital filtering, automatic filter selection, and data compression are implemented for data processing of the WFC-L to extract the important data adequately under the severe restriction of total amount of telemetry data. Because of the flexibility of the instruments, various kinds of observation modes can be achieved, and we expect the WFC to generate many interesting data. By taking advantage of a moon orbiter, the WFC is expected to measure plasma waves and radio emissions that are generated around the moon and/or that originated from the sun and from the earth and other planets. One of the phenomena of most interest to be obtained from the WFC data is the dynamics of lunar wake as a result of solar wind-moon interaction. Another scientific topic in the field of lunar plasma physics concerns the minimagnetosphere caused by the magnetic anomaly of the moon.
There are various kinds of other plasma waves to be observed from the moon such as Auroral Kilometric Radiation, electrostatic solitary wave

  17. A Generic Scheduling Simulator for High Performance Parallel Computers

    SciTech Connect

    Yoo, B S; Choi, G S; Jette, M A

    2001-08-01

    It is well known that efficient job scheduling plays a crucial role in achieving high system utilization in large-scale high performance computing environments. A good scheduling algorithm should schedule jobs to achieve high system utilization while satisfying various user demands in an equitable fashion. Designing such a scheduling algorithm is a non-trivial task even in a static environment. In practice, the computing environment and workload are constantly changing. There are several reasons for this. First, computing platforms constantly evolve as the technology advances. For example, the availability of relatively powerful commodity off-the-shelf (COTS) components at steadily diminishing prices has made it feasible to construct ever larger massively parallel computers in recent years [1, 4]. Second, the workload imposed on the system also changes constantly. The rapidly increasing compute resources have provided many applications developers with the opportunity to radically alter program characteristics and take advantage of these additional resources. New developments in software technology may also trigger changes in user applications. Finally, a change in political climate may alter user priorities or the mission of the organization. System designers in such dynamic environments must be able to accurately forecast the effect of changes in the hardware, software, and/or policies under consideration. If the environmental changes are significant, one must also reassess scheduling algorithms. Simulation has frequently been relied upon for this analysis, because other methods such as analytical modeling or actual measurements are usually too difficult or costly. A drawback of the simulation approach, however, is that developing a simulator is a time-consuming process. Furthermore, an existing simulator cannot be easily adapted to a new environment. In this research, we attempt to develop a generic job-scheduling simulator, which facilitates the evaluation of
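    A job-scheduling simulator of the kind described can be quite compact. The sketch below simulates a first-come-first-served policy, one of many policies such a generic simulator would need to support, and reports system utilization; the job tuple format and the FCFS choice are assumptions for illustration:

```python
import heapq

def simulate_fcfs(jobs, n_procs):
    """First-come-first-served scheduling on a machine with n_procs processors.

    jobs: list of (arrival_time, runtime, procs_needed), sorted by arrival.
    Returns overall system utilization over the makespan.
    """
    free = n_procs
    running = []                 # min-heap of (finish_time, procs_held)
    clock = busy = 0.0
    for arrival, runtime, need in jobs:
        clock = max(clock, arrival)
        while free < need:       # wait for running jobs to release processors
            finish, procs = heapq.heappop(running)
            clock = max(clock, finish)
            free += procs
        free -= need
        heapq.heappush(running, (clock + runtime, need))
        busy += runtime * need   # processor-seconds of useful work
    makespan = max(f for f, _ in running)
    return busy / (makespan * n_procs)

jobs = [(0, 10, 2), (0, 5, 2), (3, 8, 4)]   # (arrival, runtime, procs)
print(f"utilization: {simulate_fcfs(jobs, 4):.2f}")
```

    Swapping the policy (e.g. backfilling instead of strict FCFS) while keeping the event loop fixed is exactly the kind of what-if experiment a generic simulator enables.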

  18. Novel high performance multispectral photodetector and its performance

    NASA Astrophysics Data System (ADS)

    Mizuno, Genki; Dutta, Jaydeep; Oduor, Patrick; Dutta, Achyut K.; Dhar, Nibir K.

    2016-05-01

    Banpil Photonics has developed a novel high-performance multispectral photodetector array for Short-Wave Infrared (SWIR) imaging. The InGaAs-based device uses a unique micro-nano pillar structure that eliminates surface reflection, significantly increasing sensitivity and broadening the absorption spectrum compared to its macro-scale thin-film (non-pillar) pixel counterparts. We discuss the device structure and highlight the fabrication of this novel high performance multispectral image sensor. We also present device characterization results showing low dark current, making the detector suitable for the most demanding security, defense, and machine vision imaging applications.

  19. Toward a new metric for ranking high performance computing systems.

    SciTech Connect

    Heroux, Michael Allen; Dongarra, Jack.

    2013-06-01

    The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we strive for a better correlation to real scientific application performance, and expect to drive computer system design and implementation in directions that better support real performance improvement.
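The core of a conjugate-gradient benchmark such as HPCG is the CG iteration itself: matrix-vector products, dot products, and vector updates, which stress memory bandwidth rather than dense floating-point throughput. A minimal dense-row sketch of that iteration (not the HPCG reference code):

```python
def matvec(A, x):
    """Matrix-vector product over dense rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A.
    tol bounds the squared residual norm r.r."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]                      # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In HPCG proper the matrix is sparse and the benchmark also times a multigrid preconditioner; the dense rows here are purely for brevity.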

  20. Automatic registration between reference and on-board digital tomosynthesis images for positioning verification

    SciTech Connect

    Ren Lei; Godfrey, Devon J.; Yan, Hui; Wu, Q. Jackie; Yin, Fang-Fang

    2008-02-15

    The authors developed a hybrid multiresolution rigid-body registration technique to automatically register reference digital tomosynthesis (DTS) images with on-board DTS images to guide patient positioning in radiation therapy. This hybrid registration technique uses a faster but less accurate static method to achieve an initial registration, followed by a slower but more accurate adaptive method to fine tune the registration. A multiresolution scheme is employed in the registration to further improve the registration accuracy, robustness, and efficiency. Normalized mutual information is selected as the criterion for the similarity measure and the downhill simplex method is used as the search engine. This technique was tested using image data both from an anthropomorphic chest phantom and from eight head-and-neck cancer patients. The effects of the scan angle and the region-of-interest (ROI) size on the registration accuracy and robustness were investigated. The necessity of using the adaptive registration method in the hybrid technique was validated by comparing the results of the static method and the hybrid method. With a 44 deg. scan angle and a large ROI covering the entire DTS volume, the average of the registration capture ranges in single-axis simulations was between -31 and +34 deg. for rotations and between -89 and +78 mm for translations in the phantom study, and between -38 and +38 deg. for rotations and between -58 and +65 mm for translations in the patient study. Decreasing the DTS scan angle from 44 deg. to 22 deg. mainly degraded the registration accuracy and robustness for the out-of-plane rotations. Decreasing the ROI size from the entire DTS volume to the volume surrounding the spinal cord reduced the capture ranges to between -23 and +18 deg. for rotations and between -33 and +43 mm for translations in the phantom study, and between -18 and +25 deg. for rotations and between -35 and +39 mm for translations in the patient study. Results also
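Normalized mutual information, the similarity measure selected in this work, can be computed from intensity histograms; one common form is NMI = (H(A) + H(B)) / H(A, B), which the downhill simplex search then maximizes over the rigid-body parameters. A toy sketch over flattened integer images (the histogram form below is a standard definition, not the paper's implementation):

```python
import math
from collections import Counter

def entropy(counts, total):
    """Shannon entropy (nats) of a histogram given as a Counter."""
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def normalized_mutual_information(img_a, img_b):
    """NMI = (H(A) + H(B)) / H(A, B); assumes non-constant,
    equal-length flattened intensity arrays."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    h_a = entropy(Counter(img_a), n)
    h_b = entropy(Counter(img_b), n)
    h_ab = entropy(Counter(zip(img_a, img_b)), n)   # joint histogram
    return (h_a + h_b) / h_ab
```

NMI ranges from 1 (independent intensities) to 2 (perfectly predictable mapping), so a registration search drives the transform toward the value-2 end.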

  1. Towards a smart Holter system with high performance analogue front-end and enhanced digital processing.

    PubMed

    Du, Leilei; Yan, Yan; Wu, Wenxian; Mei, Qiujun; Luo, Yu; Li, Yang; Wang, Lei

    2013-01-01

    Multiple-lead dynamic ECG recorders (Holters) play an important role in the early detection of various cardiovascular diseases. In this paper, we present the first several steps towards a 12-lead Holter system with a high-performance AFE (Analogue Front-End) and enhanced digital processing. The system incorporates an analogue front-end chip (ADS1298 from TI), which has not yet been widely used in commercial Holter products. A highly efficient data management module was designed to handle the data exchange between the ADS1298 and the microprocessor (STM32L151 from STMicroelectronics). Furthermore, the system employs a Field Programmable Gate Array (Spartan-3E from Xilinx) module, on which a dedicated real-time 227-tap FIR filter was executed to improve the overall filtering performance, since the ADS1298 has no high-pass filtering capability and only allows limited low-pass filtering. The Spartan-3E FPGA also offers further on-board computational capacity for a smarter Holter. The results indicate that all functional blocks work as intended. In the future, we will conduct clinical trials and compare our system with other state-of-the-art systems.
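A high-pass FIR of the kind described can be designed as a windowed-sinc low-pass followed by spectral inversion. The sketch below uses that standard recipe with an illustrative cutoff and a Hamming window; it is not the paper's filter design, and a real FPGA implementation would use fixed-point arithmetic rather than Python floats.

```python
import math

def highpass_fir(num_taps, cutoff):
    """Windowed-sinc high-pass coefficients; cutoff is a fraction of the
    sample rate (0 < cutoff < 0.5) and num_taps must be odd."""
    assert num_taps % 2 == 1
    m = num_taps - 1
    lp = []
    for i in range(num_taps):
        k = i - m // 2
        s = 2 * cutoff if k == 0 else math.sin(2 * math.pi * cutoff * k) / (math.pi * k)
        lp.append(s * (0.54 - 0.46 * math.cos(2 * math.pi * i / m)))  # Hamming window
    norm = sum(lp)                  # normalize low-pass to unity DC gain
    h = [-c / norm for c in lp]     # spectral inversion: low-pass -> high-pass
    h[m // 2] += 1.0
    return h

def fir_filter(h, x):
    """Direct-form FIR convolution with zero initial history."""
    return [sum(h[j] * x[i - j] for j in range(len(h)) if i - j >= 0)
            for i in range(len(x))]
```

Because the low-pass is normalized before inversion, the high-pass has exactly zero DC gain, which is the property needed to remove ECG baseline wander.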

  2. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.

    2015-12-01

    Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.

  3. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation.

    PubMed

    Stewart, James M P; Ansell, Steve; Lindsay, Patricia E; Jaffray, David A

    2015-12-01

    Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm. PMID:26540304

  4. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    SciTech Connect

    2004-11-01

    Design guidelines outline high performance principles for the new or retrofit design of K-12 schools in tropical island climates. By incorporating energy improvements into construction or renovation plans, schools can reduce energy consumption and costs.

  5. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Rotor dynamic instability problems in high performance turbomachinery are reviewed. Mechanical instability mechanisms are discussed. Seal forces and working fluid forces in turbomachinery are discussed. Control of rotor instability is also investigated.

  6. Mastering the Challenge of High-Performance Computing.

    ERIC Educational Resources Information Center

    Roach, Ronald

    2003-01-01

    Discusses how, just as all of higher education got serious with wiring individual campuses for the Internet, the nation's leading research institutions have initiated "high-performance computing." Describes several such initiatives involving historically black colleges and universities. (EV)

  7. Exploring KM Features of High-Performance Companies

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Wen

    2007-12-01

    To respond to an increasingly competitive business environment, many companies emphasize the importance of knowledge management (KM). Exploring and learning the KM features of high-performance companies is a favorable way to do so. However, finding the critical KM features of high-performance companies is a qualitative analysis problem. The rough set approach suits this kind of problem because it is based on data-mining techniques that discover knowledge without rigorous statistical assumptions. Thus, this paper explored the KM features of high-performance companies using the rough set approach. The results show that high-performance companies stress the importance of both tacit and explicit knowledge, and consider incentives and evaluations essential to implementing KM.

  8. High performance thermal imaging for the 21st century

    NASA Astrophysics Data System (ADS)

    Clarke, David J.; Knowles, Peter

    2003-01-01

    In recent years IR detector technology has developed from early short linear arrays. Such devices require high performance signal processing electronics to meet today's thermal imaging requirements for military and para-military applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.

  9. Variational formulation of high performance finite elements: Parametrized variational principles

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Militello, Carmello

    1991-01-01

    High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational basis of high-performance elements, with emphasis on those constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parametrized variational principles that provide a foundation for the FF and ANS methods, as well as for a combination of both are presented.

  10. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  11. The Gamma-Ray Burst On-board Trigger ECLAIRs of SVOM

    NASA Astrophysics Data System (ADS)

    Schanne, Stephane

    2016-07-01

    SVOM, the Space-based multi-band astronomical Variable Objects Monitor, is a French-Chinese satellite mission for Gamma-Ray Burst studies. The conclusion of the Phase B studies is scheduled for 2016 and the launch is foreseen in 2021. With its set of 4 on-board instruments as well as dedicated ground instruments, SVOM will study GRBs in great detail, including their temporal and spectral properties from visible to gamma-rays. The coded-mask telescope ECLAIRs on-board SVOM with its Burst On-board Trigger system analyzes in real-time a 2 sr portion of the sky in the 4-120 keV energy range to detect and localize the GRBs. It then requests the spacecraft slew to allow GRB follow-up observations by the on-board narrow field-of-view telescopes MXT in X-rays and VT in the visible, and informs the community of observers via a dedicated ground network. This paper gives an update on the status of ECLAIRs and its Burst On-board Trigger system.

  12. Biotechnological experiments in space flights on board of space stations

    NASA Astrophysics Data System (ADS)

    Nechitailo, Galina S.

    2012-07-01

    Space flight conditions are stressful for any plant and cause structural-functional transitions due to the mobilization of adaptive capacity. In space flight experiments with pea tissue, wheat and arabidopsis we found anatomical-morphological transformations and biochemical changes in the plants. In subsequent experiments, tissue of stevia (Stevia rebaudiana), potato (Solanum tuberosum), callus culture and bulbs of saffron (Crocus sativus), and callus culture of ginseng (Panax ginseng) were investigated. Experiments with stevia were carried out in special chambers. The duration of the experiment was 8-14 days. An on-board lamp was used for illumination of the plants. After the experiment the plants grew in the same chamber, and after 50 days the plants were moved into artificial ion-exchange soil. Biochemical analysis of the plants was performed. The total concentration of glycosides and the ratio of stevioside to rebaudioside were found to differ between space and ground plants. In subsequent generations of post-flight stevia, the total concentration of stevioside and rebaudioside remained higher than in ground plants. Experiments with callus culture of saffron were carried out in tubes. The duration of the space flight experiment was 8-167 days. An on-board lamp was used for illumination of the plants. We found the pigment picrocrocin in the space plants but not in the ground plants. Tissue culture of ginseng was grown in a special container in a thermostat at a stable temperature of 22 ± 0.5 °C. The duration of the space experiment was from 8 to 167 days. The biological activity of the space flight culture was 5 times higher than that of the ground culture. This difference was still observed after recultivation of the space flight samples on Earth during the year after flight. Callus tissue of potato was grown in tubes in a thermostat at a stable temperature of 22 ± 0.5 °C. The duration of the space experiment was from 8 to 14 days. The concentration of regenerants in flight samples was 5 times higher than in ground samples. The space flight experiments show that microgravity and other

  13. OnBoard Parameter Identification for a Small UAV

    NASA Astrophysics Data System (ADS)

    McGrail, Amanda K.

    One of the main research focus areas of the WVU Flight Control Systems Laboratory (FCSL) is the increase of flight safety through the implementation of fault tolerant control laws. For some fault tolerant flight control approaches with adaptive control laws, the availability of accurate post failure aircraft models improves performance. While look-up tables of aircraft models can be created for failure conditions, they may fail to account for all possible failure scenarios. Thus, a real-time parameter identification program eliminates the need to have predefined models for all potential failure scenarios. The goal of this research was to identify the dimensional stability and control derivatives of the WVU Phastball UAV in flight using a frequency domain based real-time parameter identification (PID) approach. The data necessary for this project was gathered using the WVU Phastball UAV, a radio-controlled aircraft designed and built by the FCSL for fault tolerant control research. Maneuvers designed to excite the natural dynamics of the aircraft were implemented by the pilot or onboard computer during the steady state portions of flights. The data from these maneuvers was used for this project. The project was divided into three main parts: 1) off-line time domain PID, 2) off-line frequency domain PID, and 3) an onboard frequency domain PID. The off-line parameter estimation programs, in both frequency domain and time domain, utilized the well known Maximum Likelihood Estimator with Newton-Raphson minimization with starting values estimated from a Least-Squares Estimate of the non-dimensional stability and control derivatives. For the frequency domain approach, both the states and inputs were first converted to the frequency domain using a Fourier integral over the frequency range in which the rigid body aircraft dynamics are found. The final phase of the project was a real-time parameter estimation program to estimate the dimensional stability and control

  14. High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.

    2015-12-01

    The project High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time: since it offers reusable, user-friendly computational elements, its FPGA IP core can be modified over a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
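The 4,096-point FFT at the heart of the wind-profiling path is an ordinary radix-2 transform over the digitized lidar return. A minimal recursive sketch follows; the HOPS implementation is a pipelined FPGA core, not Python code, and the tone-detection usage is illustrative.

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two (e.g. 4096)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    # Twiddle factors combine the half-size transforms.
    t = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]

def power_spectrum(x):
    """Squared-magnitude spectrum; the peak bin of a coherent lidar
    return maps to the line-of-sight Doppler shift."""
    return [abs(v) ** 2 for v in fft(x)]
```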

  15. Evaluation of the use of on-board spacecraft energy storage for electric propulsion missions

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Palmer, F. M.

    1983-01-01

    On-board spacecraft energy storage represents an underutilized resource for some types of missions that also benefit from the relatively high specific-impulse capability of electric propulsion. This resource can provide an appreciable fraction of the power required for operating the electric propulsion subsystem in some missions. The most probable mission requirement for utilization of this energy is that of geostationary satellites, which have secondary batteries for operating at high power levels during eclipse. The study summarized in this report selected four examples of missions that could benefit from the use of electric propulsion and on-board energy storage. Engineering analyses were performed to evaluate the mass saved and the economic benefit expected when electric propulsion and on-board batteries perform some propulsion maneuvers that would conventionally be provided by chemical propulsion. For a given payload mass in geosynchronous orbit, use of electric propulsion in this manner typically provides a 10% reduction in spacecraft mass.

  16. On-board Attitude Determination System (OADS). [for advanced spacecraft missions

    NASA Technical Reports Server (NTRS)

    Carney, P.; Milillo, M.; Tate, V.; Wilson, J.; Yong, K.

    1978-01-01

    The requirements, capabilities, and system design for an on-board attitude determination system (OADS) to be flown on advanced spacecraft missions were determined. Based upon the OADS requirements and system performance evaluation, a preliminary on-board attitude determination system is proposed. The proposed OADS consists of one NASA Standard IRU (DRIRU-2) as the primary attitude determination sensor, two improved NASA Standard Star Trackers (SSTs) for periodic update of attitude information, a GPS receiver to provide on-board space vehicle position and velocity vector information, and a multiple-microcomputer system for data processing and attitude determination functions. The functional block diagram of the proposed OADS is shown. The computational requirements are evaluated based upon this proposed system.

  17. High Performance Schools Best Practices Manual. Volume I: Planning [and] Volume II: Design [and] Volume III: Criteria.

    ERIC Educational Resources Information Center

    Eley, Charles, Ed.

    This three-volume manual, focusing on California's K-12 public schools, presents guidelines for establishing schools that are healthy, comfortable, energy efficient, resource efficient, water efficient, secure, adaptable, and easy to operate and maintain. The first volume describes why high performance schools are important, what components are…

  18. The Type of Culture at a High Performance Schools and Low Performance School in the State of Kedah

    ERIC Educational Resources Information Center

    Daud, Yaakob; Raman, Arumugam; Don, Yahya; O. F., Mohd Sofian; Hussin, Fauzi

    2015-01-01

    This research aims to identify the type of culture at a High Performance School (HPS) and Low Performance School (LPS) in the state of Kedah. The research instrument used to measure the type of organizational culture was adapted from Organizational Culture Assessment Instrument (Cameron & Quinn, 2006) based on Competing Values Framework Quinn…

  19. Resource estimation in high performance medical image computing.

    PubMed

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.

  20. Resource estimation in high performance medical image computing.

    PubMed

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources. PMID:24906466
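The basic idea above, predicting a job's CPU and memory request from historical runs, can be sketched as a least-squares fit over past (input size, runtime) pairs plus a safety margin so the job is not killed for exceeding its request. The linear model and the margin are illustrative assumptions, not the estimator the authors implemented.

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def estimate_request(size, history, margin=1.25):
    """Predict the resource request for a new input size from
    history = [(input_size, observed_usage), ...], padded by margin."""
    a, b = fit_linear([h[0] for h in history], [h[1] for h in history])
    return (a * size + b) * margin
```

The same fit-and-pad pattern works for memory requests; the trade-off is that a larger margin wastes allocation while a smaller one risks incomplete executions, exactly the tension the abstract describes.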

  1. Adaptive beamforming in a CDMA mobile satellite communications system

    NASA Technical Reports Server (NTRS)

    Munoz-Garcia, Samuel G.

    1993-01-01

    Code-Division Multiple-Access (CDMA) stands out as a strong contender for the choice of multiple access scheme in these future mobile communication systems. This is due to a variety of reasons, such as its excellent performance in multipath environments, high scope for frequency reuse, and graceful degradation near saturation. However, the capacity of CDMA is limited by the self-interference between the transmissions of the different users in the network. Moreover, the disparity between received power levels gives rise to the near-far problem; that is, weak signals are severely degraded by the transmissions from other users. In this paper, the use of time-reference adaptive digital beamforming on board the satellite is proposed as a means to overcome the problems associated with CDMA. This technique enables a high number of independently steered beams to be generated from a single phased array antenna, which automatically track the desired user signal and null the unwanted interference sources. Since CDMA is interference limited, the interference protection provided by the antenna converts directly and linearly into an increase in capacity. Furthermore, the proposed concept allows the near-far effect to be mitigated without requiring tight coordination of the users in terms of power control. A payload architecture is presented that illustrates the practical implementation of this concept. This digital payload architecture shows that with the advent of high performance CMOS digital processing, the on-board implementation of complex DSP techniques (in particular digital beamforming) has become possible, and is most attractive for Mobile Satellite Communications.
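One textbook way to "track the desired user signal and null the unwanted interference sources", as the abstract puts it, is a minimum-norm beamformer with a unit-gain constraint toward the user and a null on the interferer. The sketch below assumes a uniform linear array and uses the closed-form two-constraint solution w = A (AᴴA)⁻¹ [1, 0]ᵀ with A = [a_d, a_i]; it is illustrative, not the payload's actual time-reference processor.

```python
import cmath
import math

def steering_vector(n, theta, d=0.5):
    """Response of an n-element uniform linear array to angle theta
    (radians off broadside), element spacing d in wavelengths."""
    return [cmath.exp(2j * math.pi * d * k * math.sin(theta)) for k in range(n)]

def null_steering_weights(n, theta_desired, theta_interferer):
    """Minimum-norm weights with unit gain on the user and a null
    on the interferer: w = A (A^H A)^-1 e1, A = [a_d, a_i]."""
    a_d = steering_vector(n, theta_desired)
    a_i = steering_vector(n, theta_interferer)
    # 2x2 Gram matrix G = A^H A
    g11 = sum(abs(x) ** 2 for x in a_d)                     # = n
    g12 = sum(x.conjugate() * y for x, y in zip(a_d, a_i))
    g21 = g12.conjugate()
    g22 = sum(abs(x) ** 2 for x in a_i)                     # = n
    det = g11 * g22 - g12 * g21
    # Solve G c = [1, 0]^T, then expand w = c1*a_d + c2*a_i.
    c1 = g22 / det
    c2 = -g21 / det
    return [c1 * x + c2 * y for x, y in zip(a_d, a_i)]

def gain(w, theta):
    """Magnitude of the array response |w^H a(theta)|."""
    a = steering_vector(len(w), theta)
    return abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a)))
```

An adaptive (time-reference) beamformer reaches a similar solution iteratively from a known reference signal rather than from known angles, which is what makes it practical on board.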

  2. Real-Time On-Board HMS/Inspection Capability for Propulsion and Power Systems

    NASA Technical Reports Server (NTRS)

    Barkhoudarian, Sarkis

    2005-01-01

    Presently, the evaluation of the health of space propulsion systems includes obtaining and analyzing limited flight data and extensive post-flight performance, operational, and inspection data. This approach is not practical for deep-space missions due to longer operational times, the lack of in-space inspection facilities, the absence of timely ground commands, and very long repair intervals. This paper identifies the on-board health-management/inspection needs of deep-space propulsion and thermodynamic power-conversion systems. It also describes technologies that could provide on-board inspection and more comprehensive health management for more successful missions.

  3. Conceptual design of an on-board optical processor with components

    NASA Technical Reports Server (NTRS)

    Walsh, J. R.; Shackelford, R. G.

    1977-01-01

    The specification of components for a spacecraft on-board optical processor was investigated. A space oriented application of optical data processing and the investigation of certain aspects of optical correlators were examined. The investigation confirmed that real-time optical processing has made significant advances over the past few years, but that there are still critical components which will require further development for use in an on-board optical processor. The devices evaluated were the coherent light valve, the readout optical modulator, the liquid crystal modulator, and the image forming light modulator.

  4. Safety in earth orbit study. Volume 2: Analysis of hazardous payloads, docking, on-board survivability

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Detailed and supporting analyses are presented of the hazardous payloads, docking, and on-board survivability aspects connected with earth orbital operations of the space shuttle program. The hazards resulting from delivery, deployment, and retrieval of hazardous payloads, and from handling and transport of cargo between orbiter, sortie modules, and space station are identified and analyzed. The safety aspects of shuttle orbiter to modular space station docking includes docking for assembly of space station, normal resupply docking, and emergency docking. Personnel traffic patterns, escape routes, and on-board survivability are analyzed for orbiter with crew and passenger, sortie modules, and modular space station, under normal, emergency, and EVA and IVA operations.

  5. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  6. Military applications for high-performance thermal imaging

    NASA Astrophysics Data System (ADS)

    McEwan, Ken

    2015-01-01

    The recent developments in high-performance infrared sensor technology are opening up new opportunities for exploitation in the defence and security domains. In this paper, the focal plane array developments in the UK on low-noise techniques, avalanche photodiodes, high-operating-temperature devices, and large-format cameras are reviewed, and their impact upon military capability is discussed. These technological developments are focused towards enduring challenges, including the stand-off identification of hazardous materials and long-range target recognition, and are enabling the exploitation of high-performance thermal imaging on a wide range of smaller platforms.

  7. High performance computing and communications: FY 1997 implementation plan

    SciTech Connect

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  8. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using InfiniBand (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.

  9. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  10. Advances in Experiment Design for High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A general overview and summary of recent advances in experiment design for high performance aircraft is presented, along with results from flight tests. General theoretical background is included, with some discussion of various approaches to maneuver design. Flight test examples from the F-18 High Alpha Research Vehicle (HARV) are used to illustrate applications of the theory. Input forms are compared using Cramer-Rao bounds for the standard errors of estimated model parameters. Directions for future research in experiment design for high performance aircraft are identified.
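
    The Cramer-Rao comparison of input forms described above can be sketched in a minimal way. This example uses hypothetical input signals (not the actual F-18 HARV maneuvers): for a linear model y = theta1*u + theta2*du with Gaussian noise, the Cramer-Rao lower bound on the parameter covariance is sigma^2 times the inverse of the regressors' information matrix.

```python
import math

# Minimal Cramer-Rao bound comparison for two candidate input designs.
# Hypothetical signals for illustration only -- not the HARV flight data.

def crb_std_errors(u, du, sigma=1.0):
    """Cramer-Rao lower bounds on the standard errors of the two parameters."""
    a = sum(x * x for x in u)               # information from u
    b = sum(x * y for x, y in zip(u, du))   # cross term
    c = sum(y * y for y in du)              # information from du
    det = a * c - b * b                     # determinant of the info matrix
    # Diagonal of sigma^2 * inv([[a, b], [b, c]]), square-rooted:
    return sigma * math.sqrt(c / det), sigma * math.sqrt(a / det)

dt = 0.02
t = [dt * k for k in range(501)]

# Two candidate maneuver shapes: a single sine versus a three-tone multi-sine.
inputs = {
    "single sine": [math.sin(2 * math.pi * 0.2 * s) for s in t],
    "multi-sine": [sum(math.sin(2 * math.pi * f * s) for f in (0.1, 0.3, 0.5)) / 3
                   for s in t],
}

for name, u in inputs.items():
    du = [(u[k + 1] - u[k]) / dt for k in range(len(u) - 1)]  # finite difference
    print(name, crb_std_errors(u[:-1], du))
```

    A lower bound for a given input form indicates that the maneuver excites the parameters more informatively, which is the basis on which the flight-test input designs are compared.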

  11. Application of the Payload Data Processing and Storage System to MOSREM Multi-Processor On-Board System for Robotic Exploration Missions

    NASA Astrophysics Data System (ADS)

    Jameux, D.

    (TOS-MMA) section of the European Space Agency. INTRODUCTION: To fulfil the growing needs of processing high quantities of data on-board satellites, payload engineers have come to a common approach. They design sets of data acquisition, storing and processing elements for on-board applications that form what ESA calls a Payload Data Processing and Storage System (PDPSS). Spacecraft autonomy requires massive processing power that is not available in today's space-qualified processors, nor will be for some time. On the other hand, leading civil/military processors and processing modules, though providing the numerical power needed for autonomy, cannot work reliably in the harsh space environment. ESA has defined, in co-operation with Industry, a reference architecture for payload processing that decomposes the global payload data processing system into nodes interconnected by high-speed serial links (SpaceWire). This architecture may easily integrate heterogeneous processors, thus providing the ideal framework for combining the environmental ruggedness of space processors (e.g. ERC32) with the sheer computing power of industrial processors. This paper presents ESA's near-future efforts for the development of an environmentally sound and high-performance on-board computer system, composed of COTS and space-qualified hardware.

  12. Design of a new high-performance pointing controller for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1993-01-01

    A new form of high-performance, disturbance-adaptive pointing controller for the Hubble Space Telescope (HST) is proposed. This new controller is all linear (constant gains) and can maintain accurate 'pointing' of the HST in the face of persistent randomly triggered uncertain, unmeasurable 'flapping' motions of the large attached solar array panels. Similar disturbances associated with antennas and other flexible appendages can also be accommodated. The effectiveness and practicality of the proposed new controller is demonstrated by a detailed design and simulation testing of one such controller for a planar-motion, fully nonlinear model of HST. The simulation results show a high degree of disturbance isolation and pointing stability.

  13. Deployment of precise and robust sensors on board ISS - for scientific experiments and for operation of the station.

    PubMed

    Stenzel, Christian

    2016-09-01

    The International Space Station (ISS) is the largest technical vehicle ever built by mankind. It provides a living area for six astronauts and also represents a laboratory in which scientific experiments are conducted in an extraordinary environment. The deployed sensor technology contributes significantly to the operational and scientific success of the station. The sensors on board the ISS can be thereby classified into two categories which differ significantly in their key features: (1) sensors related to crew and station health, and (2) sensors to provide specific measurements in research facilities. The operation of the station requires robust, long-term stable and reliable sensors, since they assure the survival of the astronauts and the intactness of the station. Recently, a wireless sensor network for measuring environmental parameters like temperature, pressure, and humidity was established and its function could be successfully verified over several months. Such a network enhances the operational reliability and stability for monitoring these critical parameters compared to single sensors. The sensors which are implemented into the research facilities have to fulfil other objectives. The high performance of the scientific experiments that are conducted in different research facilities on-board demands the perfect embedding of the sensor in the respective instrumental setup which forms the complete measurement chain. It is shown that the performance of the single sensor alone does not determine the success of the measurement task; moreover, the synergy between different sensors and actuators as well as appropriate sample taking, followed by an appropriate sample preparation play an essential role. The application in a space environment adds additional challenges to the sensor technology, for example the necessity for miniaturisation, automation, reliability, and long-term operation. An alternative is the repetitive calibration of the sensors. 
This approach

  15. Teacher and Leader Effectiveness in High-Performing Education Systems

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda, Ed.; Rothman, Robert, Ed.

    2011-01-01

    The issue of teacher effectiveness has risen rapidly to the top of the education policy agenda, and the federal government and states are considering bold steps to improve teacher and leader effectiveness. One place to look for ideas is the experiences of high-performing education systems around the world. Finland, Ontario, and Singapore all have…

  16. Cray XMT Brings New Energy to High-Performance Computing

    SciTech Connect

    Chavarría-Miranda, Daniel; Gracio, Deborah K.; Marquez, Andres; Nieplocha, Jaroslaw; Scherrer, Chad; Sofia, Heidi J.

    2008-09-30

    The ability to solve our nation’s most challenging problems—whether it’s cleaning up the environment, finding alternative forms of energy, or improving public health and safety—requires new scientific discoveries. High performance experimental and computational technologies from the past decade are helping to accelerate these scientific discoveries, but they introduce challenges of their own. The vastly increasing volumes and complexities of experimental and computational data pose significant challenges to traditional high-performance computing (HPC) platforms as terabytes to petabytes of data must be processed and analyzed. And the growing complexity of computer models that incorporate dynamic multiscale and multiphysics phenomena places enormous demands on high-performance computer architectures. Just as these new challenges are arising, the computer architecture world is experiencing a renaissance of innovation. The continuing march of Moore’s law has provided the opportunity to put more functionality on a chip, enabling the achievement of performance in new ways. Power limitations, however, will severely limit future growth in clock rates. The challenge will be to obtain greater utilization via some form of on-chip parallelism, but the complexities of emerging applications will require significant innovation in high-performance architectures. The Cray XMT, the successor to the Tera/Cray MTA, provides an alternative platform for addressing computations that stymie current HPC systems, holding the potential to substantially accelerate data analysis and predictive analytics for many complex challenges in energy, national security and fundamental science that traditional computing cannot do.

  17. A Research and Development Strategy for High Performance Computing.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report is the result of a systematic review of the status and directions of high performance computing and its relationship to federal research and development. Conducted by the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), the review involved a series of workshops attended by numerous computer scientists and…

  18. Seeking Solution: High-Performance Computing for Science. Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for Science,"…

  19. 24 CFR 902.71 - Incentives for high performers.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... designated a high performer will be relieved of specific HUD requirements (e.g., will receive fewer reviews... for the physical condition, financial condition, and management operations indicators, and at least 50... in effect, such as those for competitive bidding or competitive negotiation (see 24 CFR 85.36)....

  20. TMD-Based Structural Control of High Performance Steel Bridges

    NASA Astrophysics Data System (ADS)

    Kim, Tae Min; Kim, Gun; Kim, Moon Kyum

    2012-08-01

    The purpose of this study is to investigate the effectiveness of structural control using a tuned mass damper (TMD) for suppressing excessive traffic-induced vibration of a high-performance steel bridge. The study considered a 1-span steel plate girder bridge and bridge-vehicle interaction using the HS-24 truck model. A numerical model of the steel plate girder, traffic load, and TMD is constructed, and time history analysis is performed using the commercial structural analysis program ABAQUS 6.10. Results from the analyses show that the high-performance steel bridge has a dynamic serviceability problem compared to a relatively low-performance steel bridge. Therefore, structural control using a TMD is implemented in order to alleviate the dynamic serviceability problem. The TMD is applied to the bridge with high-performance steel, and the vertical vibration due to dynamic behavior is assessed again. Consequently, by using the TMD, it is confirmed that the residual amplitude in steady-state vibration is reduced by 85%. Moreover, the vibration serviceability assessment using the 'Reiher-Meister curve' is also remarkably improved. As a result, this paper provides a guideline for the economical design of I-girders using high-performance steel and evaluates the effectiveness of structural control using a TMD.
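
    The kind of TMD sizing that precedes a time-history analysis like the one above can be sketched with the classic Den Hartog tuning rules for harmonic excitation. The mass ratio and modal frequency below are illustrative assumptions, not the values from the bridge in the study.

```python
import math

# Den Hartog optimal tuning for a tuned mass damper under harmonic excitation:
#   frequency ratio  f_opt = 1 / (1 + mu)
#   damping ratio    zeta_opt = sqrt(3*mu / (8*(1 + mu)^3))
# where mu is the TMD-to-structure modal mass ratio.

def den_hartog_tmd(mass_ratio, structure_freq_hz):
    """Optimal TMD natural frequency (Hz) and damping ratio."""
    mu = mass_ratio
    freq_ratio = 1.0 / (1.0 + mu)                          # f_tmd / f_structure
    damping = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return freq_ratio * structure_freq_hz, damping

# Example: a TMD carrying 2% of the girder's modal mass, first mode at 2.5 Hz.
f_tmd, zeta = den_hartog_tmd(mass_ratio=0.02, structure_freq_hz=2.5)
print(f"TMD frequency: {f_tmd:.3f} Hz, damping ratio: {zeta:.3f}")
```

    The damper is tuned slightly below the structural frequency and given modest damping; a time-history model such as the one in the study then verifies the resulting reduction in steady-state amplitude.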

  1. 7 CFR 275.24 - High performance bonuses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... High performance bonuses. (a) General rule. (1) FNS will award bonuses totaling $48 million for each... is outlined in paragraph (b)(1) of this section), negative error rate (which is outlined in paragraph... outlined in paragraphs (b)(1) through (b)(4) of this section, FNS will add the additional State(s) into...

  2. 7 CFR 275.24 - High performance bonuses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... High performance bonuses. (a) General rule. (1) FNS will award bonuses totaling $48 million for each... is outlined in paragraph (b)(1) of this section), negative error rate (which is outlined in paragraph... outlined in paragraphs (b)(1) through (b)(4) of this section, FNS will add the additional State(s) into...

  3. 7 CFR 275.24 - High performance bonuses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... High performance bonuses. (a) General rule. (1) FNS will award bonuses totaling $48 million for each... is outlined in paragraph (b)(1) of this section), negative error rate (which is outlined in paragraph... outlined in paragraphs (b)(1) through (b)(4) of this section, FNS will add the additional State(s) into...

  4. 7 CFR 275.24 - High performance bonuses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... High performance bonuses. (a) General rule. (1) FNS will award bonuses totaling $48 million for each... is outlined in paragraph (b)(1) of this section), negative error rate (which is outlined in paragraph... outlined in paragraphs (b)(1) through (b)(4) of this section, FNS will add the additional State(s) into...

  5. 7 CFR 275.24 - High performance bonuses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... High performance bonuses. (a) General rule. (1) FNS will award bonuses totaling $48 million for each... is outlined in paragraph (b)(1) of this section), negative error rate (which is outlined in paragraph... outlined in paragraphs (b)(1) through (b)(4) of this section, FNS will add the additional State(s) into...

  6. Multichannel Detection in High-Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    Miller, James C.; And Others

    1982-01-01

    A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real-time and acquires spectra manually/automatically. Applications in fast HPLC…

  7. Quantification of Tea Flavonoids by High Performance Liquid Chromatography

    ERIC Educational Resources Information Center

    Freeman, Jessica D.; Niemeyer, Emily D.

    2008-01-01

    We have developed a laboratory experiment that uses high performance liquid chromatography (HPLC) to quantify flavonoid levels in a variety of commercial teas. Specifically, this experiment analyzes a group of flavonoids known as catechins, plant-derived polyphenolic compounds commonly found in many foods and beverages, including green and black…

  8. Mechanisms to create high performance pseudo-ductile composites

    NASA Astrophysics Data System (ADS)

    Wisnom, M. R.

    2016-07-01

    Current composites normally fail suddenly and catastrophically, which is an undesirable characteristic for many applications. This paper describes work as part of the High Performance Ductile Composite Technology programme (HiPerDuCT) on mechanisms to overcome this key limitation and introduce pseudo-ductility into the failure process.

  9. The Case for High-Performance, Healthy Green Schools

    ERIC Educational Resources Information Center

    Carter, Leesa

    2011-01-01

    When trying to reach their sustainability goals, schools and school districts often run into obstacles, including financing, training, and implementation tools. Last fall, the U.S. Green Building Council-Georgia (USGBC-Georgia) launched its High Performance, Healthy Schools (HPHS) Program to help Georgia schools overcome those obstacles. By…

  10. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  11. Guide to School Design: Healthy + High Performance Schools

    ERIC Educational Resources Information Center

    Healthy Schools Network, Inc., 2007

    2007-01-01

    A "healthy and high performance school" uses a holistic design process to promote the health and comfort of children and school employees, as well as conserve resources. Children may spend over eight hours a day at school with little, if any, legal protection from environmental hazards. Schools are generally not well-maintained; asthma is a…

  12. National Best Practices Manual for Building High Performance Schools

    ERIC Educational Resources Information Center

    US Department of Energy, 2007

    2007-01-01

    The U.S. Department of Energy's Rebuild America EnergySmart Schools program provides school boards, administrators, and design staff with guidance to help make informed decisions about energy and environmental issues important to school systems and communities. "The National Best Practices Manual for Building High Performance Schools" is a part of…

  13. Maintaining High-Performance Schools after Construction or Renovation

    ERIC Educational Resources Information Center

    Luepke, Gary; Ronsivalli, Louis J., Jr.

    2009-01-01

    With taxpayers' considerable investment in schools, it is critical for school districts to preserve their community's assets with new construction or renovation and effective facility maintenance programs. "High-performance" school buildings are designed to link the physical environment to positive student achievement while providing such benefits…

  14. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  15. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  16. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  17. High Performance Lasers and LEDs for Optical Communication

    NASA Astrophysics Data System (ADS)

    Nelson, R. J.

    1987-01-01

    High-performance 1.3 µm lasers and LEDs have been developed for optical communications systems. The lasers exhibit low threshold currents, excellent high-speed and spectral characteristics, and high reliability. The surface-emitting LEDs provide launched powers greater than -15 dBm into 62.5 µm core fiber, with rise and fall times suitable for operation to 220 Mb/s.
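
    The figures quoted above can be sanity-checked with two standard rules of thumb (a generic sketch, not the paper's analysis): converting the launched-power spec from dBm to microwatts, and relating rise time to bandwidth via BW ≈ 0.35 / t_r. The 2 ns rise time used below is an assumed illustrative value.

```python
# Unit-check sketch for the LED specs: dBm-to-mW conversion and the
# first-order rise-time/bandwidth rule of thumb (BW ~ 0.35 / t_rise).

def dbm_to_mw(dbm):
    """Convert a power level in dBm to milliwatts."""
    return 10 ** (dbm / 10.0)

def risetime_to_bandwidth_hz(t_rise_s):
    """Approximate 3 dB bandwidth of a first-order system from its rise time."""
    return 0.35 / t_rise_s

p_uw = dbm_to_mw(-15.0) * 1000.0
print(f"-15 dBm launched power = {p_uw:.1f} uW")

# An assumed 2 ns rise time implies roughly 175 MHz of bandwidth, which
# comfortably covers 220 Mb/s NRZ (needing about 0.7 x bit rate ~ 154 MHz).
bw_mhz = risetime_to_bandwidth_hz(2e-9) / 1e6
print(f"2 ns rise time -> ~{bw_mhz:.0f} MHz bandwidth")
```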

  18. Two Profiles of the Dutch High Performing Employee

    ERIC Educational Resources Information Center

    de Waal, A. A.; Oudshoorn, Michella

    2015-01-01

    Purpose: The purpose of this study is to explore the profile of an ideal employee, to be more precise the behavioral characteristics of the Dutch high-performing employee (HPE). Organizational performance depends for a large part on the commitment of employees. Employees provide their knowledge, skills, experiences and creativity to the…

  19. Training Needs for High Performance in the Automotive Industry.

    ERIC Educational Resources Information Center

    Clyne, Barry; And Others

    A project was conducted in Australia to identify the training needs of the emerging industry required to support the development of the high performance areas of the automotive machining and reconditioning field especially as it pertained to auto racing. Data were gathered through a literature search, interviews with experts in the field, and…

  20. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  1. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  2. Promoting High-Performance Computing and Communications. A CBO Study.

    ERIC Educational Resources Information Center

    Webre, Philip

    In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…

  3. Understanding the Work and Learning of High Performance Coaches

    ERIC Educational Resources Information Center

    Rynne, Steven B.; Mallett, Cliff J.

    2012-01-01

    Background: The development of high performance sports coaches has been proposed as a major imperative in the professionalization of sports coaching. Accordingly, an increasing body of research is beginning to address the question of how coaches learn. While this is important work, an understanding of how coaches learn must be underpinned by an…

  4. Recruiting, Training, and Retaining High-Performance Development Teams

    ERIC Educational Resources Information Center

    Elder, Stephen D.

    2010-01-01

    This chapter offers thoughts on some key elements of a high-performing development environment. The author describes how good development officers love to be part of something big, something that transforms a place and its people, and that thinking big is a powerful concept for development officers. He reminds development officers to be clear…

  5. High-performance perovskite-graphene hybrid photodetector.

    PubMed

    Lee, Youngbin; Kwon, Jeong; Hwang, Euyheon; Ra, Chang-Ho; Yoo, Won Jong; Ahn, Jong-Hyun; Park, Jong Hyeok; Cho, Jeong Ho

    2015-01-01

    A novel high-performance photodetector is demonstrated, which consists of graphene and CH3NH3PbI3 perovskite layers. The resulting hybrid photodetector exhibits a dramatically enhanced photoresponsivity (180 A/W) and effective quantum efficiency (5 × 10^4%) over a broad bandwidth within the UV and visible ranges.
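
    The two headline figures are linked by the standard relation EQE = R·h·c / (e·λ), so the order of magnitude can be checked directly. The 550 nm wavelength below is an assumed illustrative value (the paper reports a broad UV-visible band), so this only confirms the order of magnitude, not the exact figure.

```python
# Sanity check: external quantum efficiency from responsivity,
# EQE = R * h * c / (e * lambda). Wavelength is an assumed example value.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
E = 1.602176634e-19  # elementary charge, C

def quantum_efficiency(responsivity_a_per_w, wavelength_nm):
    """External quantum efficiency (as a fraction) from responsivity in A/W."""
    return responsivity_a_per_w * H * C / (E * wavelength_nm * 1e-9)

eqe = quantum_efficiency(180.0, 550.0)   # 180 A/W at an assumed 550 nm
print(f"EQE ~ {eqe * 100:.2e} %")        # on the order of 10^4 percent
```

    An efficiency far above 100% reflects internal gain in the hybrid structure (each absorbed photon yields many circulating carriers), which is consistent with the very large responsivity reported.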

  6. Dynamic Social Networks in High Performance Football Coaching

    ERIC Educational Resources Information Center

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  7. Implementing High Performance Remote Method Invocation in CCA

    SciTech Connect

    Yin, Jian; Agarwal, Khushbu; Krishnan, Manoj Kumar; Chavarría-Miranda, Daniel; Gorton, Ian; Epperly, Thomas G.

    2011-09-30

    We report our effort in engineering a high performance remote method invocation (RMI) mechanism for the Common Component Architecture (CCA). This mechanism provides a highly efficient and easy-to-use means of distributed computing in CCA, enabling CCA applications to effectively leverage parallel systems to accelerate computations. This work builds on the previous work of Babel RMI. Babel is a high performance language interoperability tool that is used in CCA so that scientific application writers can share, reuse, and compose applications from software components written in different programming languages. Babel provides a transparent and flexible RMI framework for distributed computing. However, the existing Babel RMI implementation is built on top of TCP and does not provide the level of performance required to distribute fine-grained tasks. We observed that the main reason the TCP-based RMI does not perform well is that it does not utilize the high performance interconnect hardware on a cluster efficiently. We have implemented a high performance RMI protocol, HPCRMI. HPCRMI achieves low latency by building on top of a low-level portable communication library, the Aggregate Remote Memory Copy Interface (ARMCI), and minimizing communication for each RMI call. Our design allows an RMI operation to be completed by only two RDMA operations. We also aggressively optimize our system to reduce copying. In this paper, we discuss the design and our experimental evaluation of this protocol. Our experimental results show that our protocol can improve RMI performance by an order of magnitude.
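
    The round trip behind any RMI call can be sketched generically: marshal the method name and arguments, ship them to the server, dispatch, and marshal the result back. The sketch below uses plain TCP sockets and pickle for clarity; HPCRMI as described above instead maps this same request/response exchange onto two one-sided RDMA operations to cut latency and copying.

```python
import pickle
import socket
import threading

# Generic RMI round-trip sketch (plain TCP + pickle), illustrating the
# request/response exchange that HPCRMI collapses into two RDMA operations.

def serve_one_call(listener, dispatch):
    conn, _ = listener.accept()
    with conn:
        request = conn.makefile("rb").read()           # read until client EOF
        method, args = pickle.loads(request)           # unmarshal the call
        conn.sendall(pickle.dumps(dispatch[method](*args)))  # marshal result

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

t = threading.Thread(target=serve_one_call,
                     args=(listener, {"add": lambda a, b: a + b}))
t.start()

with socket.create_connection(listener.getsockname()) as s:
    s.sendall(pickle.dumps(("add", (2, 3))))        # ship the request
    s.shutdown(socket.SHUT_WR)                      # mark end of request
    result = pickle.loads(s.makefile("rb").read())  # receive the reply
t.join()
listener.close()
print(result)  # 5
```

    Every byte here passes through the kernel's TCP stack and is copied several times; replacing the two transfers with one-sided RDMA reads/writes into pre-registered buffers is what lets a design like HPCRMI serve fine-grained calls efficiently.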

  8. Themes Found in High Performing Schools: The CAB Model

    ERIC Educational Resources Information Center

    Sanders, Brenda

    2010-01-01

    This study examines the CAB [Cooperativeness, Accountability, and Boundlessness] model of high performing schools by developing case studies of two Portland, Oregon area schools. In pursuing this purpose, this study answers the following three research questions: 1) To what extent is the common correlate cooperativeness demonstrated or absent in…

  9. Cobra Strikes! High-Performance Car Inspires Students, Markets Program

    ERIC Educational Resources Information Center

    Jenkins, Bonita

    2008-01-01

    Nestled in the Lower Piedmont region of upstate South Carolina, Piedmont Technical College (PTC) is one of 16 technical colleges in the state. Automotive technology is one of its most popular programs. The program features an instructive, motivating activity that the author describes in this article: building a high-performance car. The Cobra…

  10. High Performance Skiing. How to Become a Better Alpine Skier.

    ERIC Educational Resources Information Center

    Yacenda, John

    This book is intended for people who desire to improve their skiing by exploring high performance techniques leading to: (1) more consistent performance; (2) less fatigue and more endurance; (3) greater strength and flexibility; (4) greater versatility; (5) greater confidence in all skiing conditions; and (6) the knowledge to participate in…

  11. The role of interpreters in high performance computing

    SciTech Connect

    Naumann, Axel; Canal, Philippe; /Fermilab

    2008-01-01

Compiled code is fast, interpreted code is slow. There is not much we can do about that, which is why the use of interpreters in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  12. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1980-01-01

Diagnostic and remedial methods concerning rotordynamic instability problems in high performance turbomachinery are discussed. Instabilities due to seal forces and work-fluid forces are identified, along with those induced by rotor bearing systems. Several methods of rotordynamic control are described, including active feedback methods, the use of elastomeric elements, and the use of hydrodynamic journal bearings and supports.

  13. Mallow carotenoids determined by high-performance liquid chromatography

    Technology Transfer Automated Retrieval System (TEKTRAN)

Mallow (Corchorus olitorius) is a green vegetable widely consumed, either fresh or dried, by Middle Eastern populations. This study was carried out to quantitatively determine the major carotenoid contents of mallow using a High Performance Liquid Chromatography (HPLC) system equipped with a Bis...

  14. High-Performance Liquid Chromatography-Mass Spectrometry.

    ERIC Educational Resources Information Center

    Vestal, Marvin L.

    1984-01-01

    Reviews techniques for online coupling of high-performance liquid chromatography with mass spectrometry, emphasizing those suitable for application to nonvolatile samples. Also summarizes the present status, strengths, and weaknesses of various techniques and discusses potential applications of recently developed techniques for combined liquid…

  15. Marine Technician's Handbook, Instructions for Taking Air Samples on Board Ship: Carbon Dioxide Project.

    ERIC Educational Resources Information Center

    Keeling, Charles D.

    This booklet is one of a series intended to provide explicit instructions for the collection of oceanographic data and samples at sea. The methods and procedures described have been used by the Scripps Institution of Oceanography and found reliable and up-to-date. Instructions are given for taking air samples on board ship to determine the…

  16. Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors

    NASA Technical Reports Server (NTRS)

    Flatley, Thomas P.

    2015-01-01

SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.

  17. 14 CFR 121.393 - Crewmember requirements at stops where passengers remain on board.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Crewmember requirements at stops where... Crewmember Requirements § 121.393 Crewmember requirements at stops where passengers remain on board. At stops... stop, that flight attendant or other qualified person shall be located in accordance with...

  18. 14 CFR 121.393 - Crewmember requirements at stops where passengers remain on board.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Crewmember requirements at stops where... Crewmember Requirements § 121.393 Crewmember requirements at stops where passengers remain on board. At stops... stop, that flight attendant or other qualified person shall be located in accordance with...

  19. 14 CFR 121.393 - Crewmember requirements at stops where passengers remain on board.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Crewmember requirements at stops where... Crewmember Requirements § 121.393 Crewmember requirements at stops where passengers remain on board. At stops... stop, that flight attendant or other qualified person shall be located in accordance with...

  20. 14 CFR 121.393 - Crewmember requirements at stops where passengers remain on board.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Crewmember requirements at stops where... Crewmember Requirements § 121.393 Crewmember requirements at stops where passengers remain on board. At stops... stop, that flight attendant or other qualified person shall be located in accordance with...

  1. 14 CFR 121.393 - Crewmember requirements at stops where passengers remain on board.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Crewmember requirements at stops where... Crewmember Requirements § 121.393 Crewmember requirements at stops where passengers remain on board. At stops... stop, that flight attendant or other qualified person shall be located in accordance with...

  2. Evaluation of the on-board module building cotton harvest systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The "on-board" module building systems from Case IH (Module Express 625 [ME 625]) and a system under final testing by John Deere (7760) represent the most radical change in the seed cotton handling and harvest system since the module builder was introduced over 30 years ago. The Module Express 625 c...

  3. 77 FR 54651 - Study on the Use of Cell Phones On Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... Federal Aviation Administration Study on the Use of Cell Phones On Board Aircraft AGENCY: Federal Aviation... on the impact of the use of cell phones for voice communications in an aircraft during a flight in... November 5, 2012. ADDRESSES: Send comments identified as Cell Phone Study Comments using any of...

  4. 29 CFR 1915.506 - Hazards of fixed extinguishing systems on board vessels and vessel sections.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Hazards of fixed extinguishing systems on board vessels and vessel sections. 1915.506 Section 1915.506 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) OCCUPATIONAL SAFETY AND HEALTH STANDARDS FOR SHIPYARD EMPLOYMENT...

  5. Astronauts Schirra and Stafford talk to crewmen on board the U.S.S. Wasp

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Astronauts Walter M. Schirra Jr. (left), command pilot, and Thomas P. Stafford, pilot, talk to crewmen on board the aircraft carrier U.S.S. Wasp after successful recovery of the Gemini 6 spacecraft. Note the cake with a model of the Gemini spacecraft in its center, which is positioned in front of the astronauts.

  6. Re-scheduling as a tool for the power management on board a spacecraft

    NASA Technical Reports Server (NTRS)

    Albasheer, Omar; Momoh, James A.

    1995-01-01

The scheduling of events on board a spacecraft is based on forecast energy levels. Real-time energy values may not coincide with the forecast values; consequently, dynamic revision of the power allocation is needed. Re-scheduling is also needed for other reasons on board a spacecraft, such as the addition of a new event that must be scheduled, or the failure of an event due to any of many contingencies. This ability to re-schedule is very important to the survivability of the spacecraft. In this presentation, a re-scheduling tool is presented as part of an overall scheme for power management on board a spacecraft from the energy-allocation point of view. The overall scheme is based on the optimal use of the energy available on board, using expert systems combined with linear optimization techniques. The system is able to schedule the maximum number of events the available energy can support; the outcome is more events scheduled to share the operating cost of the spacecraft. The system is also able to re-schedule in case of a contingency, in minimal time and with minimal disturbance of the original schedule. The end product is a fully integrated planning system capable of producing the right decisions quickly and with less human error. The overall system is presented with the re-scheduling algorithm discussed in detail, followed by validation tests and results.
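The idea of maximizing scheduled events under an energy budget, and of re-scheduling after a contingency, can be sketched with a minimal greedy procedure. The event names, costs, and budgets are invented for illustration; the paper's actual system combines expert rules with linear optimization.

```python
# Minimal sketch of energy-constrained event (re)scheduling: maximize the
# number of events that fit within the forecast energy budget.
# Hypothetical data; the real system uses expert systems plus linear
# optimization rather than this greedy rule.

def schedule(events, energy_budget):
    """events: list of (name, energy_cost); returns names scheduled."""
    chosen, used = [], 0.0
    # Greedy by lowest cost is optimal when the objective is a pure count.
    for name, cost in sorted(events, key=lambda e: e[1]):
        if used + cost <= energy_budget:
            chosen.append(name)
            used += cost
    return chosen

def reschedule(events, new_budget, must_keep):
    """On a contingency, keep mandatory events, refill with the rest."""
    mandatory = [e for e in events if e[0] in must_keep]
    optional = [e for e in events if e[0] not in must_keep]
    remaining = new_budget - sum(c for _, c in mandatory)
    return [n for n, _ in mandatory] + schedule(optional, remaining)

events = [("imaging", 30), ("downlink", 20), ("calibration", 10), ("slew", 15)]
print(schedule(events, 50))                    # → ['calibration', 'slew', 'downlink']
print(reschedule(events, 40, {"downlink"}))    # → ['downlink', 'calibration']
```

Keeping mandatory events fixed and re-filling only the optional set is what limits the disturbance to the original schedule.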

  7. On-Board File Management and Its Application in Flight Operations

    NASA Technical Reports Server (NTRS)

    Kuo, N.

    1998-01-01

    In this paper, the author presents the minimum functions required for an on-board file management system. We explore file manipulation processes and demonstrate how the file transfer along with the file management system will be utilized to support flight operations and data delivery.

  8. Forbush decrease effects on radiation dose received on-board aeroplanes.

    PubMed

    Lantos, P

    2005-01-01

    Doses received on-board aeroplanes during deep Forbush decreases (FDs) have been recently measured and published. Using an operational model of dose calculation, the effects on aviation dose of the FDs observed from 1981 to 2003 using neutron monitors are studied and a simplified method to estimate dose variations from galactic cosmic ray variations during FDs is derived.
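A simplified estimate of the kind the abstract mentions, relating dose variation to the galactic cosmic ray variation seen by neutron monitors, might be sketched as a linear scaling. The coupling factor and all numeric values below are hypothetical, not those of the operational model.

```python
# Illustrative sketch only: scale the galactic-cosmic-ray route dose by the
# relative decrease in neutron monitor counts during a Forbush decrease.
# The coupling factor and dose values are hypothetical assumptions.

def dose_during_fd(gcr_dose_quiet, nm_quiet, nm_fd, coupling=1.0):
    """Route dose (uSv) given the quiet-time dose and neutron monitor counts."""
    relative_drop = (nm_quiet - nm_fd) / nm_quiet
    return gcr_dose_quiet * (1.0 - coupling * relative_drop)

# A 10% neutron monitor decrease scales a 50 uSv quiet-time route dose:
print(dose_during_fd(gcr_dose_quiet=50.0, nm_quiet=6000, nm_fd=5400))  # → 45.0
```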

  9. On-Board Cryosphere Change Detection With The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Chien, S.; Davies, A.; Tran, D.; Mazzoni, D.; Baker, V.; Dohm, J.; Ip, F.

    2006-12-01

The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyperspectral visible to short-wave infrared spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. A cryosphere classification algorithm developed with Support Vector Machine (SVM) machine learning techniques [1], replacing a manually derived classifier used in earlier operations [2], has been used in conjunction with the on-board autonomous software to execute over three hundred on-board scenarios in 2005 and early 2006, detecting and autonomously responding to sea ice break-up and formation, lake freeze and thaw, and the onset and melting of snow cover on land. This demonstrates an approach that could be applied to monitoring the cryospheres of Earth and Mars, as well as to the search for dynamic activity on the icy moons of the outer Solar System. [1] Castano et al. (2006), Onboard classifiers for science event detection on a remote-sensing spacecraft, KDD '06, Aug 20-23 2006, Philadelphia, PA. [2] Doggett et al. (2006), Autonomous detection of cryospheric change with Hyperion on-board Earth Observing-1, Rem. Sens. Env., 101, 447-462.
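The on-board classify-then-compare trigger logic can be sketched compactly. The flight classifier was SVM-derived; the sketch below substitutes a simple normalized-difference snow index (NDSI) threshold, a standard snow/ice indicator, to stay self-contained. All pixel reflectances and thresholds are hypothetical.

```python
# Sketch of on-board classify-then-compare change triggering. The flight
# experiment used an SVM-derived classifier; this sketch substitutes a
# normalized-difference snow index (NDSI) threshold for illustration.
# Pixel values and thresholds are hypothetical.

def classify_pixel(green, swir, threshold=0.4):
    """True if the pixel looks like snow/ice (NDSI above threshold)."""
    ndsi = (green - swir) / (green + swir)
    return ndsi > threshold

def change_detected(prev_scene, curr_scene, min_changed=2):
    """Trigger follow-up observation if enough pixels changed class."""
    changed = sum(
        classify_pixel(*a) != classify_pixel(*b)
        for a, b in zip(prev_scene, curr_scene)
    )
    return changed >= min_changed

frozen = [(0.8, 0.1), (0.7, 0.15), (0.75, 0.1)]   # lake ice: high NDSI
thawed = [(0.3, 0.25), (0.3, 0.2), (0.75, 0.1)]   # two pixels have melted
print(change_detected(frozen, thawed))  # → True
```

Filtering null data falls out of the same comparison: a scene whose classification matches the previous one produces no trigger and need not be downlinked.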

  10. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs? (a...-board wheelchair. The Aerospatiale/Aeritalia ATR-72 and the British Aerospace Advanced Turboprop...

  11. Development of On-Board Fluid Analysis for the Mining Industry - Final report

    SciTech Connect

    Pardini, Allan F.

    2005-08-16

    Pacific Northwest National Laboratory (PNNL: Operated by Battelle Memorial Institute for the Department of Energy) is working with the Department of Energy (DOE) to develop technology for the US mining industry. PNNL was awarded a three-year program to develop automated on-board/in-line or on-site oil analysis for the mining industry.

  12. Incorporate design of on-board network and inter-satellite network

    NASA Astrophysics Data System (ADS)

    Li, Bin; You, Zheng; Zhang, Chenguang

    2005-11-01

Data transfer on board a satellite is critically important and must be reliable. This paper first introduces an on-board network based on the Controller Area Network (CAN) bus. As a field bus, CAN is simple and reliable, and has been proven in previous flights. The CAN frame is redefined here, including the identifier and message data, the source and destination addresses, and the frame types. The on-board network provides datagram transmission and buffer transmission: datagram transmission carries out telemetry, tracking, and command (TT&C) functions, while buffer transmission transfers bulk data such as images. The inter-satellite network for satellite formation flying is not designed independently; it takes advantage of the TCP/IP model and inherits and extends the on-board network protocols. The inter-satellite network comprises a link layer, a network layer, and a transport layer. There are 8 virtual channels for various space missions or requirements, and 4 kinds of services to select from. The network layer manages the whole network, computes and selects routing tables, and gathers network information, while the transport layer mainly routes data, enabling communication between any two nodes. The structures of the link-layer frame and the transport-layer data segment are similar, so no complex packing and unpacking is needed. Finally, the paper gives methods for data conversion between the on-board network and the inter-satellite network.
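A redefined CAN identifier of the kind the abstract describes can be sketched as bit packing into the 29-bit extended identifier. The field widths chosen below (8-bit source, 8-bit destination, 4-bit frame type) are hypothetical; the paper does not fix them here.

```python
# Sketch of a redefined CAN extended (29-bit) identifier carrying source
# address, destination address, and frame type. The 8/8/4-bit field split
# is a hypothetical choice for illustration.

SRC_BITS, DST_BITS, TYPE_BITS = 8, 8, 4

def pack_id(src, dst, frame_type, extra=0):
    """Pack fields into one identifier; `extra` holds any remaining bits."""
    ident = (((extra << SRC_BITS | src) << DST_BITS | dst)
             << TYPE_BITS | frame_type)
    assert ident < (1 << 29), "must fit a 29-bit extended identifier"
    return ident

def unpack_id(ident):
    """Recover (src, dst, frame_type) from a packed identifier."""
    frame_type = ident & ((1 << TYPE_BITS) - 1)
    ident >>= TYPE_BITS
    dst = ident & ((1 << DST_BITS) - 1)
    ident >>= DST_BITS
    src = ident & ((1 << SRC_BITS) - 1)
    return src, dst, frame_type

ident = pack_id(src=0x12, dst=0x34, frame_type=0x2)
print(unpack_id(ident))  # → (18, 52, 2)
```

Because addressing lives in the identifier, CAN's arbitration and filtering hardware can route frames without touching the payload bytes.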

  13. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs? (a...-board wheelchair. The Aerospatiale/Aeritalia ATR-72 and the British Aerospace Advanced Turboprop...

  14. 49 CFR 176.78 - Use of power-operated industrial trucks on board vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... containers must be removed from a hold before any truck motor is started and the trucks are placed in... 49 Transportation 2 2012-10-01 2012-10-01 false Use of power-operated industrial trucks on board... CARRIAGE BY VESSEL General Handling and Stowage § 176.78 Use of power-operated industrial trucks on...

  15. 75 FR 739 - Use of Additional Portable Oxygen Concentrator Devices on Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-06

    ..., ``Use of Certain Portable Oxygen Concentrator Devices on Board Aircraft'' (70 FR 40156). SFAR 106 is the result of a notice the FAA published in July 2004 (69 FR 42324) to address the needs of passengers who... Inogen, Inc.'s Inogen One POCs. SFAR 106 was amended on September 12, 2006, (71 FR 53954) to add...

  16. 49 CFR 1133.2 - Statement of claimed damages based on Board findings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... claims only on shipments covered by the findings in the docket above described and contains no claim for... findings. 1133.2 Section 1133.2 Transportation Other Regulations Relating to Transportation (Continued... Statement of claimed damages based on Board findings. (a) When the Board finds that damages are due,...

  17. 49 CFR 1133.2 - Statement of claimed damages based on Board findings.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... shipments covered by the findings in the docket above described and contains no claim for reparation... findings. 1133.2 Section 1133.2 Transportation Other Regulations Relating to Transportation (Continued... Statement of claimed damages based on Board findings. (a) When the Board finds that damages are due,...

  18. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation No. 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND... SUPPLEMENTAL OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation No. 106—Rules for use...

  19. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND OPERATORS FOR... OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation 106—Rules for use of portable...

  20. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND OPERATORS FOR... OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation 106—Rules for use of portable...

  1. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation No. 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND... SUPPLEMENTAL OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation No. 106—Rules for use...

  2. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND OPERATORS FOR... OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation 106—Rules for use of portable...

  3. Concept for On-Board Safe Landing Target Selection and Landing for the Mars 2020 Mission

    NASA Astrophysics Data System (ADS)

    Brugarolas, P.; Chen, A.; Johnson, A.; Casoliva, J.; Singh, G.; Stehura, A.; Way, D.; Dutta, S.

    2014-06-01

    We present a concept for a potential enhancement to Mars 2020 to enable landing on hazardous landing sites. It adds to MSL-EDL the capability to select and divert to a safe site through on-board terrain relative localization and target selection.

  4. Approaches to High-Performance Preparative Chromatography of Proteins

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Liu, Fu-Feng; Shi, Qing-Hong

Preparative liquid chromatography is widely used for the purification of chemical and biological substances. Different from high-performance liquid chromatography for the analysis of many different components at minimized sample loading, high-performance preparative chromatography is of much larger scale and should offer high resolution and high capacity at high operating speed and low to moderate pressure drop. There are various approaches to this end. For biochemical engineers, the traditional way is to model and optimize a purification process so that it exerts its maximum capability. For high-performance separations, however, we need to improve chromatographic technology itself. We herein discuss four approaches in this review, mainly based on recent studies from our group. The first is the development of high-performance matrices, because packing material is the central component of chromatography. Progress in the fabrication of superporous materials in both beaded and monolithic forms is reviewed. The second topic is the discovery and design of affinity ligands for proteins. In most chromatographic methods, proteins are separated based on their interactions with the ligands attached to the surface of porous media. A target-specific ligand can offer selective purification of desired proteins. Third, electrochromatography is discussed. An electric field applied to a chromatographic column can induce additional separation mechanisms besides chromatography, and result in electrokinetic transport of protein molecules and/or the fluid inside pores, thus leading to high-performance separations. Finally, expanded-bed adsorption is described for process integration to reduce separation steps and process time.

  5. Myocardial Strain Imaging with High-Performance Adaptive Dynamic Grid Interpolation Method

    NASA Astrophysics Data System (ADS)

Shuhui Bu; Makoto Yamakawa; Tsuyoshi Shiina

    2010-07-01

The accurate assessment of local myocardial strain is important for diagnosing ischemic heart diseases because decreased myocardial motion often appears in the early stage. Calculating the spatial derivative of the displacement is a necessary step in the strain calculation, but the numerical calculation is extremely sensitive to noise. Commonly used smoothing methods are the moving-average and median filters; however, these methods involve a trade-off between spatial resolution and accuracy. A novel smoothing/fitting method is proposed for overcoming this problem. In this method, the detected displacement vectors are discretized at mesh nodes, and virtual springs are connected between adjacent nodes. By controlling the elasticity of the virtual springs, misdetected displacements are corrected without this trade-off. Further improvements can be achieved by applying a Kalman filter for position tracking, and then calculating the strain from the accumulated displacement vectors. From the simulation results, we conclude that the proposed method improves the accuracy and spatial resolution of the strain images.
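The virtual-spring fitting can be sketched in one dimension as an iterative relaxation: each node is pulled toward its measured value by a data spring and toward its neighbours by mesh springs. The scalar (rather than 2-D vector) field, stiffness value, and sample data below are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal 1-D sketch of the virtual-spring fitting idea: each node's
# displacement balances a data spring (toward the measurement) against
# neighbour springs of stiffness k, which controls the smoothing.
# Field values and stiffness are illustrative assumptions.

def spring_fit(measured, k=1.0, iterations=200):
    d = list(measured)
    for _ in range(iterations):
        new = d[:]
        for i in range(1, len(d) - 1):
            # Equilibrium of the data spring (stiffness 1) and the two
            # neighbour springs (stiffness k each).
            new[i] = (measured[i] + k * (d[i - 1] + d[i + 1])) / (1 + 2 * k)
        d = new
    return d

measured = [1.0, 1.1, 5.0, 1.3, 1.4]   # one misdetected outlier at index 2
fitted = spring_fit(measured)
print(round(fitted[2], 2))  # → 2.83  (pulled from 5.0 toward its neighbours)
```

Raising k stiffens the mesh and suppresses outliers more strongly; lowering it trusts the raw measurements, which is the resolution/accuracy dial the abstract refers to.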

  6. Survey of manufacturers of high-performance heat engines adaptable to solar applications

    NASA Technical Reports Server (NTRS)

    Stine, W. B.

    1984-01-01

    The results of an industry survey made during the summer of 1983 are summarized. The survey was initiated in order to develop an information base on advanced engines that could be used in the solar thermal dish-electric program. Questionnaires inviting responses were sent to 39 companies known to manufacture or integrate externally heated engines. Follow-up telephone communication ensured uniformity of response. It appears from the survey that the technology exists to produce external-heat-addition engines of appropriate size with thermal efficiencies of over 40%. Problem areas are materials and sealing.

  7. On-Board Fiber-Optic Network Architectures for Radar and Avionics Signal Distribution

    NASA Technical Reports Server (NTRS)

    Alam, Mohammad F.; Atiquzzaman, Mohammed; Duncan, Bradley B.; Nguyen, Hung; Kunath, Richard

    2000-01-01

Continued progress in both civil and military avionics applications is overstressing the capabilities of existing radio-frequency (RF) communication networks based on coaxial cables on board modern aircraft. Future avionics systems will require high-bandwidth on-board communication links that are lightweight, immune to electromagnetic interference, and highly reliable. Fiber optic communication technology can meet all these challenges in a cost-effective manner. Recently, digital fiber-optic communication systems, where a fiber-optic network acts like a local area network (LAN) for digital data communications, have become a topic of extensive research and development. Although a fiber-optic system can be designed to transport radio-frequency (RF) signals, the digital fiber-optic systems under development today are not capable of transporting the microwave and millimeter-wave RF signals used in radar and avionics systems on board an aircraft. Recent advances in fiber optic technology, especially wavelength division multiplexing (WDM), have opened a number of possibilities for designing on-board fiber optic networks, including all-optical networks for radar and avionics RF signal distribution. In this paper, we investigate a number of different novel approaches for fiber-optic transmission of on-board VHF and UHF RF signals using commercial off-the-shelf (COTS) components. The relative merits and demerits of each architecture are discussed, and the suitability of each architecture for particular applications is pointed out. All-optical approaches show better performance than other traditional approaches in terms of signal-to-noise ratio, power consumption, and weight requirements.

  8. Dynamic neural networks based on-line identification and control of high performance motor drives

    NASA Technical Reports Server (NTRS)

    Rubaai, Ahmed; Kotaru, Raj

    1995-01-01

In the automated, high-tech industries of the future, there will be a need for high performance motor drives in both the low-power and the high-power range. To meet very stringent demands of tracking and regulation in the two quadrants of operation, advanced control technologies are of considerable interest and need to be developed. In response, a dynamic learning control architecture is developed with simultaneous on-line identification and control. The feature of the proposed approach, efficiently combining the dual tasks of system identification (learning) and adaptive control of nonlinear motor drives into a single operation, is presented. This approach therefore not only adapts to uncertainties in the dynamic parameters of the motor drives but also learns their inherent nonlinearities. In fact, most neural-network-based adaptive control approaches in use have an identification phase entirely separate from the control phase. Because these approaches separate the identification and control modes, they cannot cope with dynamic changes in a controlled process. Extensive simulation studies have been conducted, and good performance was observed. The robustness that lets neuro-controllers perform efficiently in a noisy environment is also demonstrated. With this initial success, the principal investigator believes that the proposed approach with the suggested neural structure can be used successfully for the control of high performance motor drives. Two identification and control topologies based on the model reference adaptive control technique are used in the present analysis. No prior knowledge of load dynamics is assumed in either topology, and the second topology also assumes no knowledge of the motor parameters.
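The idea of running identification and control as one loop, rather than in separate phases, can be sketched with a single unknown plant gain updated by a least-mean-squares rule while the controller uses the current estimate. The plant model, learning rate, and setpoint are invented for illustration; the paper's dynamic neural networks are far richer than this scalar stand-in.

```python
# Sketch of simultaneous on-line identification and control. A single
# unknown static gain stands in for the motor model; the learning rate,
# setpoint, and step count are illustrative assumptions.

def run(true_gain=2.0, setpoint=1.0, steps=200, lr=0.1):
    est = 0.5                      # initial guess of the plant gain
    speed = 0.0
    for _ in range(steps):
        u = setpoint / est         # control using the current model
        speed = true_gain * u      # plant response (static for simplicity)
        error = speed - est * u    # identification error: plant vs. model
        est += lr * error * u      # LMS update of the model (learning)
    return est, speed

est, speed = run()
print(round(est, 1), round(speed, 1))  # → 2.0 1.0
```

Because the model is updated on every control step, a change in the plant gain mid-run would be tracked automatically, which is exactly what separate identify-then-control phases cannot do.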

  9. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionality. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
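The Amdahl's-law analysis the abstract cites is easy to make concrete: speedup on n cores is 1 / ((1 - p) + p/n) for parallel fraction p. The p values below are input assumptions, not the paper's measured figures; a 12-fold speedup on 12 cores implies an essentially fully parallelizable workload.

```python
# Amdahl's-law check on the scalability claim: a 12x speedup on 12 cores
# requires the parallel fraction p to be essentially 1. The p values here
# are illustrative assumptions, not measurements from the paper.

def amdahl_speedup(p, n):
    """Speedup on n cores for parallel fraction p of the work."""
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(1.00, 12), 1))  # → 12.0
print(round(amdahl_speedup(0.95, 12), 1))  # → 7.7  (5% serial work already caps scaling)
```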

  10. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes, with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • An integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). Besides, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.
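Online adaptive flow-level detection, as opposed to fixed signatures, can be sketched with an exponentially weighted moving average (EWMA) baseline that adapts to normal traffic and flags large deviations. The smoothing factor, threshold multiplier, and traffic counts are illustrative; this is not the HPNAIDM algorithm itself.

```python
# Sketch of adaptive flow-level anomaly detection: an EWMA baseline adapts
# on-line, and a count is flagged when it exceeds the baseline by a factor
# k. Smoothing factor, threshold, and traffic values are illustrative.

def detect(counts, alpha=0.3, k=3.0):
    """Flag indices where a count exceeds k times the adaptive baseline."""
    baseline = counts[0]
    flagged = []
    for i, c in enumerate(counts[1:], start=1):
        if c > k * baseline:
            flagged.append(i)      # anomaly: do not absorb it into the baseline
        else:
            baseline = alpha * c + (1 - alpha) * baseline
    return flagged

traffic = [100, 110, 95, 105, 900, 102, 98]   # one burst at index 4
print(detect(traffic))  # → [4]
```

Excluding flagged samples from the baseline update keeps a sustained attack from training the detector to accept it, one common source of the false negatives an adaptive scheme must guard against.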

  11. Micro-polarimeter for high performance liquid chromatography

    DOEpatents

    Yeung, Edward E.; Steenhoek, Larry E.; Woodruff, Steven D.; Kuo, Jeng-Chung

    1985-01-01

    A micro-polarimeter interfaced with a system for high performance liquid chromatography, for quantitatively analyzing micro and trace amounts of optically active organic molecules, particularly carbohydrates. A flow cell with a narrow bore is connected to a high performance liquid chromatography system. Thin, low-birefringence cell windows cover opposite ends of the bore. A focused and polarized laser beam is directed along the longitudinal axis of the bore as an eluent containing the organic molecules is pumped through the cell. The beam is modulated by air-gap Faraday rotators for phase-sensitive detection to enhance the signal-to-noise ratio. An analyzer records the beam's direction of polarization after it passes through the cell. Calibration of the liquid chromatography system allows determination of the quantity of organic molecules present from a determination of the degree to which the polarized beam is rotated when it passes through the eluent.

  12. Rotordynamic Instability Problems in High-Performance Turbomachinery, 1986

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The first rotordynamics workshop proceedings (NASA CP-2133, 1980) emphasized a feeling of uncertainty in predicting the stability characteristics of high-performance turbomachinery. In the second workshop proceedings (NASA CP-2250, 1982) these uncertainties were reduced through programs established to systematically resolve problems, with emphasis on experimental validation of the forces that influence rotordynamics. In the third proceedings (NASA CP-2338, 1984) many programs for predicting or measuring forces and force coefficients in high-performance turbomachinery produced results. Data became available for designing new machines with enhanced stability characteristics or for upgrading existing machines. The present workshop proceedings illustrate a continued trend toward a more unified view of rotordynamic instability problems and several encouraging new analytical developments.

  13. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local- and wide-area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies, together with accumulated measurements that give insight into the networks of the future.

  14. Using high-performance networks to enable computational aerosciences applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1992-01-01

    One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.

  15. Class of service in the high performance storage system

    SciTech Connect

    Louis, S.; Teaff, D.

    1995-01-10

    Quality of service capabilities are commonly deployed in archival mass storage systems as one or more client-specified parameters that influence the physical location of data in multi-level device hierarchies for performance or cost reasons. The capabilities of new high-performance storage architectures and the needs of data-intensive applications require better quality of service models for modern storage systems. HPSS, a new distributed, high-performance, scalable storage system, uses a Class of Service (COS) structure to influence system behavior. The authors summarize the design objectives and functionality of HPSS and describe how COS defines a set of performance, media, and residency attributes assigned to storage objects managed by HPSS servers. COS definitions are used to provide appropriate behavior and service levels as requested (or demanded) by storage system clients. They compare the HPSS COS approach with other quality of service concepts and discuss alignment possibilities.

  16. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  17. GPU-based high-performance computing for radiation therapy.

    PubMed

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiation therapy demand high computation power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented.

  18. Materials integration issues for high performance fusion power systems.

    SciTech Connect

    Smith, D. L.

    1998-01-14

    One of the primary requirements for the development of fusion as an energy source is the qualification of materials for the first wall/blanket system that will provide high performance and exhibit favorable safety and environmental features. Both the economic competitiveness and the environmental attractiveness of fusion will be strongly influenced by materials constraints. A key aspect is the development of a compatible combination of materials for the various functions of structure, tritium breeding, coolant, neutron multiplication, and other special requirements for a specific system. This paper presents an overview of key materials integration issues for high performance fusion power systems. Issues such as chemical compatibility of structure and coolant, hydrogen/tritium interactions with the plasma-facing/structure/breeder materials, thermomechanical constraints associated with coolant/structure, thermal-hydraulic requirements, and safety/environmental considerations are presented from a systems viewpoint. The major materials interactions for leading blanket concepts are discussed.

  19. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computation power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. PMID:24486639

  20. High performance computing and communications: FY 1996 implementation plan

    SciTech Connect

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  1. Storage Area Networks and The High Performance Storage System

    SciTech Connect

    Hulen, H; Graf, O; Fitzgerald, K; Watson, R W

    2002-03-04

    The High Performance Storage System (HPSS) is a mature Hierarchical Storage Management (HSM) system that was developed around a network-centered architecture, with client access to storage provided through third-party controls. Because of this design, HPSS is able to leverage today's Storage Area Network (SAN) infrastructures to provide cost effective, large-scale storage systems and high performance global file access for clients. Key attributes of SAN file systems are found in HPSS today, and more complete SAN file system capabilities are being added. This paper traces the HPSS storage network architecture from the original implementation using HIPPI and IPI-3 technology, through today's local area network (LAN) capabilities, and to SAN file system capabilities now in development. At each stage, HPSS capabilities are compared with capabilities generally accepted today as characteristic of storage area networks and SAN file systems.

  2. Designing high-performance layered thermoelectric materials through orbital engineering

    PubMed Central

    Zhang, Jiawei; Song, Lirong; Madsen, Georg K. H.; Fischer, Karl F. F.; Zhang, Wenqing; Shi, Xun; Iversen, Bo B.

    2016-01-01

    Thermoelectric technology, which possesses potential application in recycling industrial waste heat as energy, calls for novel high-performance materials. The systematic exploration of novel thermoelectric materials with excellent electronic transport properties is severely hindered by limited insight into the underlying bonding orbitals of atomic structures. Here we propose a simple yet successful strategy to discover and design high-performance layered thermoelectric materials through minimizing the crystal field splitting energy of orbitals to realize high orbital degeneracy. The approach naturally leads to design maps for optimizing the thermoelectric power factor through forming solid solutions and biaxial strain. Using this approach, we predict a series of potential thermoelectric candidates from layered CaAl2Si2-type Zintl compounds. Several of them contain nontoxic, low-cost and earth-abundant elements. Moreover, the approach can be extended to several other non-cubic materials, thereby substantially accelerating the screening and design of new thermoelectric materials. PMID:26948043

  3. Designing high-performance layered thermoelectric materials through orbital engineering.

    PubMed

    Zhang, Jiawei; Song, Lirong; Madsen, Georg K H; Fischer, Karl F F; Zhang, Wenqing; Shi, Xun; Iversen, Bo B

    2016-01-01

    Thermoelectric technology, which possesses potential application in recycling industrial waste heat as energy, calls for novel high-performance materials. The systematic exploration of novel thermoelectric materials with excellent electronic transport properties is severely hindered by limited insight into the underlying bonding orbitals of atomic structures. Here we propose a simple yet successful strategy to discover and design high-performance layered thermoelectric materials through minimizing the crystal field splitting energy of orbitals to realize high orbital degeneracy. The approach naturally leads to design maps for optimizing the thermoelectric power factor through forming solid solutions and biaxial strain. Using this approach, we predict a series of potential thermoelectric candidates from layered CaAl2Si2-type Zintl compounds. Several of them contain nontoxic, low-cost and earth-abundant elements. Moreover, the approach can be extended to several other non-cubic materials, thereby substantially accelerating the screening and design of new thermoelectric materials. PMID:26948043

  4. Micromachined high-performance RF passives in CMOS substrate

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Ni, Zao; Gu, Lei; Wu, Zhengzheng; Yang, Chen

    2016-11-01

    This review systematically addresses the micromachining technologies used for the fabrication of high-performance radio-frequency (RF) passives that can be integrated into low-cost complementary metal-oxide semiconductor (CMOS)-grade (i.e. low-resistivity) silicon wafers. With the development of various kinds of post-CMOS-compatible microelectromechanical systems (MEMS) processes, 3D structural inductors/transformers, variable capacitors, tunable resonators and band-pass/low-pass filters can be compatibly integrated into active integrated circuits to form monolithic RF system-on-chips. By using MEMS processes, including substrate modifying/suspending and LIGA-like metal electroplating, both the highly lossy substrate effect and the resistive loss can be largely eliminated and depressed, thereby meeting the high-performance requirements of telecommunication applications.

  5. A statistical approach to electromigration design for high performance VLSI

    NASA Astrophysics Data System (ADS)

    Kitchin, John; Sriram, T. S.

    1998-01-01

    Statistical Electromigration Budgeting (J. Kitchin, 1995 Symposium on VLSI Circuits) or SEB is based on the concepts: (a) reliable design in VLSI means achieving a chip-level reliability goal and (b) electromigration degradation is inherently statistical in nature. The SEB methodology is reviewed along with results from recent high performance VLSI designs. Two SEB-based approaches for efficiently coupling metallization reliability statistics to design options are developed. Allowable-length-at-stress design rules communicate electromigration risk budget constraints to designers without the need for sophisticated CAD tools for chip-level interconnect analysis. Electromigration risk contours allow comparison of evolving metallization reliability statistics with design requirements having multiple frequency, temperature, and voltage options, a common need in high performance VLSI product development.
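
    The budgeting concept can be sketched in a few lines. This is a generic sketch under an independence assumption, not Kitchin's actual SEB formulation, and the numbers are hypothetical: the chip-level failure probability aggregates the per-segment electromigration risks, and a design passes when the aggregate stays within the chip-level budget.

```python
def chip_failure_probability(segment_probs):
    """Aggregate failure probability for a chip whose interconnect
    segments fail independently: the chip fails if any segment does."""
    survive = 1.0
    for p in segment_probs:
        survive *= 1.0 - p
    return 1.0 - survive

def within_budget(segment_probs, chip_budget):
    """Budgeting check: for small probabilities the per-segment risks
    roughly sum, and must not exceed the chip-level reliability goal."""
    return chip_failure_probability(segment_probs) <= chip_budget

# Hypothetical numbers: 100 identical segments, each with a 1e-4
# lifetime failure probability, against a 2% chip-level budget.
segments = [1e-4] * 100
print(within_budget(segments, 0.02))  # prints True
```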

  6. The 10 Building Blocks of High-Performing Primary Care

    PubMed Central

    Bodenheimer, Thomas; Ghorob, Amireh; Willard-Grace, Rachel; Grumbach, Kevin

    2014-01-01

    Our experiences studying exemplar primary care practices, and our work assisting other practices to become more patient centered, led to a formulation of the essential elements of primary care, which we call the 10 building blocks of high-performing primary care. The building blocks include 4 foundational elements—engaged leadership, data-driven improvement, empanelment, and team-based care—that assist the implementation of the other 6 building blocks—patient-team partnership, population management, continuity of care, prompt access to care, comprehensiveness and care coordination, and a template of the future. The building blocks, which represent a synthesis of the innovative thinking that is transforming primary care in the United States, are both a description of existing high-performing practices and a model for improvement. PMID:24615313

  7. Designing high-performance layered thermoelectric materials through orbital engineering

    NASA Astrophysics Data System (ADS)

    Zhang, Jiawei; Song, Lirong; Madsen, Georg K. H.; Fischer, Karl F. F.; Zhang, Wenqing; Shi, Xun; Iversen, Bo B.

    2016-03-01

    Thermoelectric technology, which possesses potential application in recycling industrial waste heat as energy, calls for novel high-performance materials. The systematic exploration of novel thermoelectric materials with excellent electronic transport properties is severely hindered by limited insight into the underlying bonding orbitals of atomic structures. Here we propose a simple yet successful strategy to discover and design high-performance layered thermoelectric materials through minimizing the crystal field splitting energy of orbitals to realize high orbital degeneracy. The approach naturally leads to design maps for optimizing the thermoelectric power factor through forming solid solutions and biaxial strain. Using this approach, we predict a series of potential thermoelectric candidates from layered CaAl2Si2-type Zintl compounds. Several of them contain nontoxic, low-cost and earth-abundant elements. Moreover, the approach can be extended to several other non-cubic materials, thereby substantially accelerating the screening and design of new thermoelectric materials.

  8. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  9. Designing high-performance layered thermoelectric materials through orbital engineering.

    PubMed

    Zhang, Jiawei; Song, Lirong; Madsen, Georg K H; Fischer, Karl F F; Zhang, Wenqing; Shi, Xun; Iversen, Bo B

    2016-03-07

    Thermoelectric technology, which possesses potential application in recycling industrial waste heat as energy, calls for novel high-performance materials. The systematic exploration of novel thermoelectric materials with excellent electronic transport properties is severely hindered by limited insight into the underlying bonding orbitals of atomic structures. Here we propose a simple yet successful strategy to discover and design high-performance layered thermoelectric materials through minimizing the crystal field splitting energy of orbitals to realize high orbital degeneracy. The approach naturally leads to design maps for optimizing the thermoelectric power factor through forming solid solutions and biaxial strain. Using this approach, we predict a series of potential thermoelectric candidates from layered CaAl2Si2-type Zintl compounds. Several of them contain nontoxic, low-cost and earth-abundant elements. Moreover, the approach can be extended to several other non-cubic materials, thereby substantially accelerating the screening and design of new thermoelectric materials.

  10. Multijunction Photovoltaic Technologies for High-Performance Concentrators: Preprint

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2006-05-01

    Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.

  11. Semiconductor wires and ribbons for high performance flexible electronics.

    SciTech Connect

    Sun, Y.; Baca, A. J.; Ahn, J.-H.; Meitl, M.; Menard, E.; Kim, H.-S; Choi, W.; Kim, D.-H; Huang, Y.; Rogers, J. A.; Center for Nanoscale Materials; Univ. of Illinois

    2008-01-01

    This article reviews the properties, fabrication and assembly of inorganic semiconductor materials that can be used as active building blocks to form high-performance transistors and circuits for flexible and bendable large-area electronics. Obtaining high performance on low temperature polymeric substrates represents a technical challenge for macroelectronics. Therefore, the fabrication of high quality inorganic materials in the form of wires, ribbons, membranes, sheets, and bars formed by bottom-up and top-down approaches, and the assembly strategies used to deposit these thin films onto plastic substrates will be emphasized. Substantial progress has been made in creating inorganic semiconducting materials that are stretchable and bendable, and the description of the mechanics of these form factors will be presented, including circuits in three-dimensional layouts. Finally, future directions and promising areas of research will be described.

  12. Real-time on-board airborne demonstration of high-speed on-board data processing for science instruments (HOPS)

    NASA Astrophysics Data System (ADS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-05-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign was in July 2014, based at NASA Langley Research Center (LaRC) in Hampton, VA, on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES), funded by NASA's Instrument Incubator Program (IIP). The second campaign was in August 2014, based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA, on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm in real time on board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  13. Real-Time On-Board Airborne Demonstration of High-Speed On-Board Data Processing for Science Instruments (HOPS)

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign was in July 2014, based at NASA Langley Research Center (LaRC) in Hampton, VA, on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES), funded by NASA's Instrument Incubator Program (IIP). The second campaign was in August 2014, based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA, on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm in real time on board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  14. Inorganic nanostructured materials for high performance electrochemical supercapacitors.

    PubMed

    Liu, Sheng; Sun, Shouheng; You, Xiao-Zeng

    2014-02-21

    Electrochemical supercapacitors (ES) are a well-known energy storage system that has high power density, long life-cycle and fast charge-discharge kinetics. Nanostructured materials are a new generation of electrode materials with large surface area and short transport/diffusion path for ions and electrons to achieve high specific capacitance in ES. This mini review highlights recent developments of inorganic nanostructure materials, including carbon nanomaterials, metal oxide nanoparticles, and metal oxide nanowires/nanotubes, for high performance ES applications.

  15. Achieving High Performance on the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1998-01-01

    The i860 is a high performance microprocessor used in the Intel Touchstone project. This paper proposes a paradigm for programming the i860 that is modelled on the vector instructions of the Cray computers. Fortran-callable assembler subroutines were written that mimic the concurrent vector instructions of the Cray. Cache takes the place of vector registers. Using this paradigm we have achieved twice the performance of compiled code on a traditional solver.
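
    The cache-as-vector-register paradigm amounts to strip-mining: a long array operation is split into register-sized (here, cache-resident) strips, each processed by one "vector instruction". A minimal sketch of the idea, with a hypothetical strip length and a simple axpy kernel standing in for the Cray-style vector operation:

```python
def vector_axpy(alpha, x, y, strip=64):
    """Strip-mined y_out = alpha*x + y.  Each strip of `strip` elements
    plays the role of a vector register; on the i860, a cache-resident
    block fills that role (strip length here is hypothetical)."""
    n = len(x)
    out = [0.0] * n
    for start in range(0, n, strip):
        end = min(start + strip, n)
        # One "vector instruction" applied to the whole strip.
        for i in range(start, end):
            out[i] = alpha * x[i] + y[i]
    return out

print(vector_axpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))
# prints [12.0, 14.0, 16.0]
```

    Keeping each strip resident in cache while it is read and written is what lets the scalar hardware approach vector-machine throughput.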

  16. High Performance Walls in Hot-Dry Climates

    SciTech Connect

    Hoeschele, M.; Springer, D.; Dakin, B.; German, A.

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist, and reducing the amount of wood penetrating the wall cavity.

  17. Intro - High Performance Computing for 2015 HPC Annual Report

    SciTech Connect

    Klitsner, Tom

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  18. Nuclear Forces and High-Performance Computing: The Perfect Match

    SciTech Connect

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  19. Progress Toward Demonstrating a High Performance Optical Tape Recording Technology

    NASA Technical Reports Server (NTRS)

    Oakley, W. S.

    1996-01-01

    This paper discusses the technology developments achieved during the first year of a program to develop a high performance digital optical tape recording device using a solid state, diode-pumped, frequency-doubled green laser source. The goal is to demonstrate, within two years, useful read/write data transfer rates of at least 100 megabytes per second and a user capacity of up to one terabyte per cartridge, implemented in a system using a '3480'-style mono-reel tape cartridge.

  20. High Performance Visualization using Query-Driven Visualizationand Analytics

    SciTech Connect

    Bethel, E. Wes; Campbell, Scott; Dart, Eli; Shalf, John; Stockinger, Kurt; Wu, Kesheng

    2006-06-15

    Query-driven visualization and analytics is a unique approach for high-performance visualization that offers new capabilities for knowledge discovery and hypothesis testing. The new capabilities, akin to finding needles in haystacks, are the result of combining technologies from the fields of scientific visualization and scientific data management. This approach is crucial for rapid data analysis and visualization in the petascale regime. This article describes how query-driven visualization is applied to a hero-sized network traffic analysis problem.

  1. Automated Fabrication Technologies for High Performance Polymer Composites

    NASA Technical Reports Server (NTRS)

    Shuart, M. J.; Johnston, N. J.; Dexter, H. B.; Marchello, J. M.; Grenoble, R. W.

    1998-01-01

    New fabrication technologies are being exploited for building high-performance graphite-fiber-reinforced composite structures. Stitched fiber preforms and resin film infusion have been successfully demonstrated for large, composite wing structures. Other automated processes being developed include automated placement of tacky, drapable epoxy towpreg, automated heated head placement of consolidated ribbon/tape, and vacuum-assisted resin transfer molding. These methods have the potential to yield low-cost, high-performance structures by fabricating composite structures to net shape out-of-autoclave.

  2. Stability and control of maneuvering high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Berry, P. W.

    1977-01-01

    The stability and control of a high-performance aircraft was analyzed, and a design methodology for a departure prevention stability augmentation system (DPSAS) was developed. A general linear aircraft model was derived which includes maneuvering flight effects and trim calculation procedures for investigating highly dynamic trajectories. The stability and control analysis systematically explored the effects of flight condition and angular motion, as well as the stability of typical air combat trajectories. The effects of configuration variation also were examined.

  3. Low-Cost, High-Performance Hall Thruster Support System

    NASA Technical Reports Server (NTRS)

    Hesterman, Bryce

    2015-01-01

    Colorado Power Electronics (CPE) has built an innovative modular PPU for Hall thrusters, including discharge, magnet, heater, and keeper supplies, and an interface module. This high-performance PPU offers resonant circuit topologies, magnetics design, modularity, and stable, sustained operation during severe Hall thruster current oscillations. Laboratory testing has demonstrated a discharge module efficiency of 96 percent, considerably higher than the current state of the art.

  4. High Performance Object-Oriented Scientific Programming in Fortran 90

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modeling capabilities used for scientific programming are comparably powerful.

  5. Fabricating high performance lithium-ion batteries using bionanotechnology.

    PubMed

    Zhang, Xudong; Hou, Yukun; He, Wen; Yang, Guihua; Cui, Jingjie; Liu, Shikun; Song, Xin; Huang, Zhen

    2015-02-28

    Designing, fabricating, and integrating nanomaterials are key to transferring nanoscale science into applicable nanotechnology. Many nanomaterials including amorphous and crystal structures are synthesized via biomineralization in biological systems. Amongst various techniques, bionanotechnology is an effective strategy to manufacture a variety of sophisticated inorganic nanomaterials with precise control over their chemical composition, crystal structure, and shape by means of genetic engineering and natural bioassemblies. This provides opportunities to use renewable natural resources to develop high performance lithium-ion batteries (LIBs). For LIBs, reducing the sizes and dimensions of electrode materials can boost Li+ ion and electron transfer in nanostructured electrodes. Recently, bionanotechnology has attracted great interest as a novel tool and approach, and a number of renewable biotemplate-based nanomaterials have been fabricated and used in LIBs. In this article, recent advances and mechanism studies in using bionanotechnology for high performance LIBs are thoroughly reviewed, covering two technical routes: (1) Designing and synthesizing composite cathodes, e.g. LiFePO4/C, Li3V2(PO4)3/C and LiMn2O4/C; and (2) designing and synthesizing composite anodes, e.g. NiO/C, Co3O4/C, MnO/C, α-Fe2O3 and nano-Si. This review will hopefully stimulate more extensive and insightful studies on using bionanotechnology for developing high-performance LIBs.

  6. Fabricating high performance lithium-ion batteries using bionanotechnology

    NASA Astrophysics Data System (ADS)

    Zhang, Xudong; Hou, Yukun; He, Wen; Yang, Guihua; Cui, Jingjie; Liu, Shikun; Song, Xin; Huang, Zhen

    2015-02-01

    Designing, fabricating, and integrating nanomaterials are key to transferring nanoscale science into applicable nanotechnology. Many nanomaterials including amorphous and crystal structures are synthesized via biomineralization in biological systems. Amongst various techniques, bionanotechnology is an effective strategy to manufacture a variety of sophisticated inorganic nanomaterials with precise control over their chemical composition, crystal structure, and shape by means of genetic engineering and natural bioassemblies. This provides opportunities to use renewable natural resources to develop high performance lithium-ion batteries (LIBs). For LIBs, reducing the sizes and dimensions of electrode materials can boost Li+ ion and electron transfer in nanostructured electrodes. Recently, bionanotechnology has attracted great interest as a novel tool and approach, and a number of renewable biotemplate-based nanomaterials have been fabricated and used in LIBs. In this article, recent advances and mechanism studies in using bionanotechnology for high performance LIBs studies are thoroughly reviewed, covering two technical routes: (1) Designing and synthesizing composite cathodes, e.g. LiFePO4/C, Li3V2(PO4)3/C and LiMn2O4/C; and (2) designing and synthesizing composite anodes, e.g. NiO/C, Co3O4/C, MnO/C, α-Fe2O3 and nano-Si. This review will hopefully stimulate more extensive and insightful studies on using bionanotechnology for developing high-performance LIBs.

  8. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer-aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to designing scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures, as the cost of data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows, and the integration of high performance computing infrastructures. We illustrate the use of such architectures, mainly using off-the-shelf components, in the framework of both coastal flooding assessments and earthquake early warning systems.

  9. High Performance Walls in Hot-Dry Climates

    SciTech Connect

    Hoeschele, Marc; Springer, David; Dakin, Bill; German, Alea

    2015-01-01

    High performance walls represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goal in improving wall thermal performance revolves around increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, and achieving insulation installation criteria meeting ENERGY STAR's thermal bypass checklist. To support this activity, in 2013 the Pacific Gas & Electric Company initiated a project with Davis Energy Group (lead for the Building America team, Alliance for Residential Building Innovation) to solicit builder involvement in California in field demonstrations of high performance wall systems. Builders were given incentives and design support in exchange for providing site access for construction observation, cost information, and builder survey feedback. Information from the project was designed to feed into the 2016 Title 24 process, but also to serve as an initial mechanism to engage builders in more high performance construction strategies. This Building America project utilized information collected in the California project.

  10. Integrating reconfigurable hardware-based grid for high performance computing.

    PubMed

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, some difficulties in using reconfigurable platforms as accelerators still need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process.

  12. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, some difficulties in using reconfigurable platforms as accelerators still need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process. PMID:25874241

  13. High-Performance Computing for Advanced Smart Grid Applications

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu

    2012-07-06

    The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved with a large number of smart meters and sensors that produce several orders of magnitude more data. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies in developing the algorithms and tools for the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.

  14. Two Failures to Replicate High-Performance-Goal Priming Effects

    PubMed Central

    Harris, Christine R.; Coburn, Noriko; Rohrer, Doug; Pashler, Harold

    2013-01-01

    Bargh et al. (2001) reported two experiments in which people were exposed to words related to achievement (e.g., strive, attain) or to neutral words, and then performed a demanding cognitive task. Performance on the task was enhanced after exposure to the achievement-related words. Bargh and colleagues concluded that the better performance was due to the achievement words having activated a "high-performance goal". Because the paper has been cited well over 1,100 times, an attempt to replicate its findings would seem warranted. Two direct replication attempts were performed. Results from the first experiment (n = 98) showed no effect of priming, and the means were in the opposite direction from those reported by Bargh and colleagues. The second experiment followed up on the observation by Bargh et al. (2001) that high-performance-goal priming was enhanced by a 5-minute delay between priming and test. Adding such a delay, we still found no evidence for high-performance-goal priming (n = 66). These failures to replicate, along with other recent results, suggest that the literature on goal priming requires some skeptical scrutiny. PMID:23977304

  15. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  16. Robust high-performance control for robotic manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1989-01-01

    A robust control scheme to accomplish accurate trajectory tracking for an integrated system of manipulator-plus-actuators is proposed. The control scheme comprises a feedforward and a feedback controller. The feedforward controller contains any known part of the manipulator dynamics that can be used for online control. The feedback controller consists of adaptive position and velocity feedback gains and an auxiliary signal which is simply generated by a fixed-gain proportional/integral/derivative controller. The feedback controller is updated by very simple adaptation laws that contain both proportional and integral adaptation terms. By introducing a simple sigma modification into the adaptation laws, robustness is guaranteed in the presence of unmodeled dynamics and disturbances.
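    The structure of such adaptation laws (proportional plus integral terms, with a sigma-modification leak) can be sketched as follows. The toy integrator plant, gain values, and update structure are illustrative, not the paper's actual design:

```python
def update_gain(k, e, e_int, dt, gamma_p=2.0, gamma_i=0.5, sigma=0.05):
    """PI adaptation of a feedback gain with sigma modification.

    The -sigma*k 'leak' term bounds gain drift in the presence of
    unmodeled dynamics and disturbances (all rates are illustrative).
    """
    return k + dt * (gamma_p * e * e + gamma_i * e_int * e - sigma * k)


def track_setpoint(r=1.0, steps=1000, dt=0.01):
    """Track setpoint r on a toy integrator plant x' = u with u = kp*e."""
    x, kp, e_int = 0.0, 0.5, 0.0
    for _ in range(steps):
        e = r - x                      # tracking error
        e_int += e * dt                # integral of the error
        kp = update_gain(kp, e, e_int, dt)
        x += dt * (kp * e)             # plant integrates the control input
    return x, kp
```

    Without the leak term a persistent disturbance would ratchet the gains upward without bound; with it, the gains decay toward zero once the error vanishes.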

  17. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval, and the optical disks are used as archival storage.

  18. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Astrophysics Data System (ADS)

    McCullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-02-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.
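    A constant-gain Kalman filter of the kind mentioned replaces on-board covariance propagation with a fixed, ground-tuned blending gain. A minimal sketch, assuming a simple angular-momentum model dh/dt = torque and an illustrative gain K (not the SAMPEX values):

```python
import numpy as np

def constant_gain_filter(h_est, h_meas, torque, dt, K=0.1):
    """One step of a constant-gain (steady-state) Kalman filter for
    spacecraft angular momentum.

    Predict with the known external torque, then correct toward the
    measurement with a fixed gain K instead of propagating covariance,
    which is cheap enough for an on-board processor.
    """
    h_pred = h_est + torque * dt           # propagate dh/dt = torque
    return h_pred + K * (h_meas - h_pred)  # fixed-gain measurement update
```

    With zero torque the filter reduces to an exponential averager of the momentum measurements, trading a small lag for noise rejection.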

  19. A ground-based memory state tracker for satellite on-board computer memory

    NASA Technical Reports Server (NTRS)

    Quan, Alan; Angelino, Robert; Hill, Michael; Schwuttke, Ursula; Hervias, Felipe

    1993-01-01

    The TOPEX/POSEIDON satellite, currently in Earth orbit, will use radar altimetry to measure sea surface height over 90 percent of the world's ice-free oceans. In combination with a precise determination of the spacecraft orbit, the altimetry data will provide maps of ocean topography, which will be used to calculate the speed and direction of ocean currents worldwide. NASA's Jet Propulsion Laboratory (JPL) has primary responsibility for mission operations for TOPEX/POSEIDON. Software applications have been developed to automate mission operations tasks. This paper describes one of these applications, the Memory State Tracker, which allows the ground analyst to examine and track the contents of satellite on-board computer memory quickly and efficiently, in a human-readable format, without having to receive the data directly from the spacecraft. This process is accomplished by maintaining a ground-based mirror image of spacecraft On-board Computer memory.
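    The mirror-image idea reduces to two operations: applying each uplinked memory patch to the ground copy, and diffing an occasional downlinked dump against it. A minimal sketch; the address-to-value map format is hypothetical, not the actual TOPEX/POSEIDON data layout:

```python
def apply_patch(mirror, patch):
    """Apply an uplinked memory patch to the ground-based mirror image.

    mirror, patch: dicts mapping address -> byte value.
    """
    mirror.update(patch)


def diff_memory(mirror, dump):
    """Report addresses where a downlinked dump disagrees with the mirror,
    as address -> (expected, observed) pairs. An empty result means the
    ground image still matches on-board memory."""
    return {addr: (mirror.get(addr), val)
            for addr, val in dump.items() if mirror.get(addr) != val}
```

    Because the mirror is updated from the same commands sent to the spacecraft, the analyst can browse memory state at any time without requesting a dump.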

  20. Monitoring snowpack properties by passive microwave sensors on board of aircraft and satellites

    NASA Technical Reports Server (NTRS)

    Chang, A. T. C.; Foster, J. L.; Hall, D. K.; Rango, A.

    1980-01-01

    Snowpack properties such as water equivalent and snow wetness may be inferred from variations in measured microwave brightness temperatures, because the emerging microwave radiation interacts directly with snow crystals within the snowpack. Using vertically and horizontally polarized brightness temperatures obtained from the multifrequency microwave radiometer (MFMR) on board a NASA research aircraft and the electrical scanning microwave radiometer (ESMR) and scanning multichannel microwave radiometer (SMMR) on board the Nimbus 5, 6, and 7 satellites, linear relationships between snow depth or water equivalent and microwave brightness temperature were developed. The presence of melt water in the snowpack generally increases the brightness temperatures, which can be used to predict snowpack priming and timing of runoff.
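    A linear relationship of the kind described can be fit by ordinary least squares against a brightness-temperature channel difference. The channel choice and the synthetic data below are illustrative, not the paper's retrieval coefficients:

```python
import numpy as np

def fit_snow_depth_relation(tb_diff, depth):
    """Least-squares fit of depth = a * (TB_low - TB_high) + b.

    tb_diff: brightness-temperature difference between a low- and a
             high-frequency channel (K); depth: observed snow depth (cm).
    Returns the slope a (cm/K) and intercept b (cm).
    """
    A = np.column_stack([tb_diff, np.ones_like(tb_diff)])
    (a, b), *_ = np.linalg.lstsq(A, depth, rcond=None)
    return a, b
```

    Deeper snow scatters more of the upwelling radiation at the higher frequency, so the channel difference grows with depth, which is what makes the linear form workable.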

  1. Investigations of doses on board commercial passenger aircraft using CR-39 and thermoluminescent detectors.

    PubMed

    Horwacik, T; Bilski, P; Olko, P; Spurny, F; Turek, K

    2004-01-01

    Measurements of cosmic radiation dose rates (from the neutron and the non-neutron components) on board passenger aircraft were performed using environmental packages with thermoluminescent TL and CR-39 etched track detectors. The packages were calibrated at the CERN-EU high-energy Reference Field Facility and evaluated at the Institute of Nuclear Physics in Krakow (TL + CR-39) and at the German Aerospace Centre in Cologne (CR-39). Detector packages were exposed on board passenger aircraft operated by LOT Polish Airlines, flown between February and May 2001. The values of effective dose rate determined, averaged over the measuring period, ranged between 2.9 and 4.4 µSv/h. The results of environmental measurements agreed to within 10% with values calculated from the CARI-6 code.

  2. Comparison of cosmic rays radiation detectors on-board commercial jet aircraft.

    PubMed

    Kubančák, Ján; Ambrožová, Iva; Brabcová, Kateřina Pachnerová; Jakůbek, Jan; Kyselová, Dagmar; Ploc, Ondřej; Bemš, Július; Štěpán, Václav; Uchihori, Yukio

    2015-06-01

    Aircrew members and passengers are exposed to increased rates of cosmic radiation on-board commercial jet aircraft. The annual effective doses of crew members often exceed the limits for the public, so it is recommended to monitor them. In general, the doses are estimated via various computer codes and in some countries also verified by measurements. This paper describes a comparison of three cosmic ray detectors, namely (a) the HAWK Tissue Equivalent Proportional Counter; (b) the Liulin semiconductor energy deposit spectrometer; and (c) the TIMEPIX silicon semiconductor pixel detector, exposed to radiation fields on-board commercial Czech Airlines company jet aircraft. Measurements were performed during passenger flights from Prague to Madrid, Oslo, Tbilisi, Yekaterinburg and Almaty, and back, in July and August 2011. For all flights, energy deposit spectra and absorbed doses are presented. Measured absorbed dose and dose equivalent are compared with the EPCARD code calculations. Finally, the advantages and disadvantages of all detectors are discussed.

  3. An Alternative Lunar Ephemeris Model for On-Board Flight Software Use

    NASA Technical Reports Server (NTRS)

    Simpson, David G.

    1998-01-01

    In calculating the position vector of the Moon in on-board flight software, one often begins by using a series expansion to calculate the ecliptic latitude and longitude of the Moon, referred to the mean ecliptic and equinox of date. One then performs a reduction for precession, followed by a rotation of the position vector from the ecliptic plane to the equator, and a transformation from spherical to Cartesian coordinates before finally arriving at the desired result: equatorial J2000 Cartesian components of the lunar position vector. An alternative method is developed here in which the equatorial J2000 Cartesian components of the lunar position vector are calculated directly by a series expansion, saving valuable on-board computer resources.
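    The classical reduction that the direct series expansion folds into its coefficients can be sketched as follows. Only the ecliptic-to-equatorial step is shown; the lunar series producing the ecliptic latitude, longitude, and distance are omitted, and precession is ignored here:

```python
import numpy as np

OBLIQUITY = np.radians(23.439)  # mean obliquity of the ecliptic, J2000

def moon_equatorial_from_ecliptic(lam, beta, r):
    """Classic route: ecliptic spherical coordinates -> equatorial Cartesian.

    lam (ecliptic longitude) and beta (ecliptic latitude) in radians,
    r (geocentric distance) in km.
    """
    # spherical -> Cartesian in the ecliptic frame
    x = r * np.cos(beta) * np.cos(lam)
    y = r * np.cos(beta) * np.sin(lam)
    z = r * np.sin(beta)
    # rotate about the x-axis by the obliquity: ecliptic -> equatorial
    ce, se = np.cos(OBLIQUITY), np.sin(OBLIQUITY)
    return np.array([x, ce * y - se * z, se * y + ce * z])
```

    Expanding each Cartesian component directly as its own series, as the paper proposes, removes these trigonometric reductions from the on-board flight software entirely.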

  4. An on-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1982-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Feedback coefficients are found as functions of the remaining energy to go (dash energy less current energy) along the nominal path.
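    The on-board side of such a scheme reduces to evaluating the energy-to-go and interpolating into ground-computed feedback gains along the nominal path. A minimal sketch; the gain-table values are hypothetical, standing in for the extensive pre-mission computation:

```python
import numpy as np

G0 = 9.81  # m/s^2

def specific_energy(h, v):
    """Energy height E = h + v^2 / (2 g), the energy-state variable
    used in the energy modelling of the aircraft."""
    return h + v * v / (2.0 * G0)

def climb_command(h, v, e_dash, gain_table):
    """Feedback correction toward the nominal climb-dash path.

    e_dash: energy of the dash/cruise state; gain_table: (energy-to-go
    grid, gain grid) precomputed on the ground before the mission.
    Returns an illustrative flight-path-angle correction (rad).
    """
    e_togo = e_dash - specific_energy(h, v)   # dash energy less current
    e_grid, k_grid = gain_table
    k = np.interp(e_togo, e_grid, k_grid)     # look up ground-computed gain
    return k * e_togo
```

    All the expensive optimization lives in building the gain table; the on-board computation is one energy evaluation and one table interpolation per control cycle, which is what makes the scheme practical.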

  5. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Technical Reports Server (NTRS)

    Mccullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-01-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.
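    A constant-gain Kalman filter of the kind described keeps the on-board arithmetic trivial: there is no covariance propagation, only a fixed blend of prediction and measurement. A schematic one-axis sketch (the gain value and signatures are illustrative, not the SAMPEX flight values):

```python
K = 0.2   # constant Kalman gain (illustrative; the flight value is precomputed)

def filter_step(h_est, h_meas, torque, dt):
    """One cycle of a constant-gain Kalman filter for a single momentum axis:
    propagate the estimate with the known external torque, then blend in the
    measurement with a fixed gain (no on-board covariance propagation)."""
    h_pred = h_est + torque * dt            # propagation (Euler step)
    return h_pred + K * (h_meas - h_pred)   # constant-gain measurement update
```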

  6. Comparison of cosmic rays radiation detectors on-board commercial jet aircraft.

    PubMed

    Kubančák, Ján; Ambrožová, Iva; Brabcová, Kateřina Pachnerová; Jakůbek, Jan; Kyselová, Dagmar; Ploc, Ondřej; Bemš, Július; Štěpán, Václav; Uchihori, Yukio

    2015-06-01

    Aircrew members and passengers are exposed to increased rates of cosmic radiation on board commercial jet aircraft. The annual effective doses of crew members often exceed the limits for the general public, so it is recommended that they be monitored. In general, the doses are estimated via various computer codes and, in some countries, also verified by measurements. This paper describes a comparison of three cosmic-ray detectors, namely (a) the HAWK Tissue Equivalent Proportional Counter; (b) the Liulin semiconductor energy deposit spectrometer; and (c) the TIMEPIX silicon semiconductor pixel detector, exposed to radiation fields on board commercial jet aircraft of the Czech Airlines company. Measurements were performed during passenger flights from Prague to Madrid, Oslo, Tbilisi, Yekaterinburg and Almaty, and back, in July and August 2011. For all flights, energy deposit spectra and absorbed doses are presented. Measured absorbed dose and dose equivalent are compared with EPCARD code calculations. Finally, the advantages and disadvantages of all the detectors are discussed. PMID:25979739

  7. Scalability of a Base Level Design for an On-Board-Computer for Scientific Missions

    NASA Astrophysics Data System (ADS)

    Treudler, Carl Johann; Schroder, Jan-Carsten; Greif, Fabian; Stohlmann, Kai; Aydos, Gokce; Fey, Gorschwin

    2014-08-01

    Facing a wide range of mission requirements and integrating diverse payloads require extreme flexibility in the on-board-computing infrastructure for scientific missions. We show that scalability is fundamentally difficult. We address this issue by proposing a base-level design and show how adaptation to different needs is achieved. Inter-dependencies between scaling different aspects, and their impact on different levels of the design, are discussed.

  8. Optimization of the on-board linear generator in EMS-MAGLEV trains

    SciTech Connect

    Andriollo, M.; Martinelli, G.; Morini, A.; Tortella, A.

    1997-09-01

    The paper presents a fully automated procedure to optimize the performance of the on-board generator used in electromagnetic Maglev trains. The procedure utilizes FEM analyses to determine the mathematical model of the generator and then calculates the generator output characteristics by means of step-by-step numerical simulations. On the basis of these characteristics, a suitable objective function is defined. The function is minimized by iteratively changing the geometrical configuration until a stop criterion is satisfied.
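    The iterative structure described here, evaluate the objective for the current geometry, perturb, keep improvements, stop when none remain, can be sketched generically (a stand-in coordinate search; the paper does not specify its minimizer, and in the real procedure each objective evaluation runs an FEM-based simulation):

```python
def optimize(objective, x0, step=0.1, tol=1e-4, max_iter=200):
    """Generic loop: evaluate the objective for the current geometry vector,
    try perturbed geometries, keep improvements, and shrink the perturbation
    when none helps; stop once the step falls below tol."""
    x, f = list(x0), objective(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = objective(trial)
                if ft < f - tol:               # accept a genuine improvement
                    x, f, improved = trial, ft, True
        if not improved:
            step *= 0.5                        # refine the search
            if step < tol:
                break                          # stop criterion satisfied
    return x, f
```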

  9. High-Speed On-Board Data Processing for Science Instruments

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace

    2014-01-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented in this paper. The project, called High-Speed On-Board Data Processing for Science Instruments (HOPS), focuses on a high-speed, scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS combines advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is that its reconfigurable and scalable platform enables high-speed on-board data processing for current and future science missions. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude reduction in cost compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on the data available when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them on an FPGA-based data processing board. A general introduction to these facets is also the purpose of this paper.

  10. Spacecraft drag-free technology development: On-board estimation and control synthesis

    NASA Technical Reports Server (NTRS)

    Key, R. W.; Mettler, E.; Milman, M. H.; Schaechter, D. B.

    1982-01-01

    Estimation and control methods for a Drag-Free spacecraft are discussed. The functional and analytical synthesis of on-board estimators and controllers for an integrated attitude and translation control system is represented. The framework for detail definition and design of the baseline drag-free system is created. The techniques for solution of self-gravity and electrostatic charging problems are applicable generally, as is the control system development.

  11. High-Speed on-Board Data Processing for Science Instruments

    NASA Astrophysics Data System (ADS)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Lin, B.

    2014-12-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented. The project, called High-Speed On-Board Data Processing for Science Instruments (HOPS), focuses on an air/space-borne high-speed, scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS combines advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is that its reconfigurable and scalable platform enables high-speed on-board data processing for current and future science missions. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude reduction in cost compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on the data available when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them on an FPGA-based data processing board. A general introduction to these facets is also the purpose of this presentation.

  12. High-speed on-board data processing for science instruments

    NASA Astrophysics Data System (ADS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace

    2014-06-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented in this paper. The project, called High-Speed On-Board Data Processing for Science Instruments (HOPS), focuses on a high-speed, scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS combines advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is that its reconfigurable and scalable platform enables high-speed on-board data processing for current and future science missions. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude reduction in cost compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on the data available when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them on an FPGA-based data processing board. A general introduction to these facets is also the purpose of this paper.

  13. On-board computational efficiency in real time UAV embedded terrain reconstruction

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis

    2014-05-01

    In the last few years, there has been a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAV). Specifications for constructing these UAVs are highly diverse, with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real time terrain mapping. The main challenge addressed is to retain a low cost flying platform with real time processing capabilities. The UAV weight limitation, affecting the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as the Omap3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, according to the frame rate required, additional image processing may concurrently take place, such as image rectification and object detection. Lastly, the on-board positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground truth geodetic measurements in order to assess the accuracy limitations of the overall process. It is shown that with the proposed novel system, there is much potential in
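    The core geometric step of stereo terrain reconstruction is triangulating depth from disparity, Z = f * B / d, which is also what makes the task parallelize well on SoC fabric. A minimal sketch (function and parameter names are ours):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from a rectified stereo pair: Z = f * B / d.

    disparity_px: horizontal pixel offset of the point between the two views
    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two camera centres, in metres
    """
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px
```

    For example, a 10-pixel disparity seen by cameras with a 1000-pixel focal length and a 0.2 m baseline puts the point 20 m away.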

  14. An Intelligent System for Monitoring the Microgravity Environment Quality On-Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Lin, Paul P.; Jules, Kenol

    2002-01-01

    An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.
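    The first stage of such a pipeline, Kohonen's self-organizing feature map, can be sketched compactly (a toy 1-D map; sizes, decay schedules and the seed are illustrative, not from the paper):

```python
import math
import random

def som_train(data, n_units, dim, epochs=20, lr0=0.5, sigma0=1.0, seed=0):
    """Train a tiny 1-D Kohonen self-organizing feature map.

    Each sample pulls its best-matching unit (and, via a Gaussian
    neighborhood, that unit's neighbors) toward itself; the learning rate
    and neighborhood width decay over the epochs."""
    rng = random.Random(seed)
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.1)  # shrinking radius
        for x in data:
            # best-matching unit = closest weight vector
            bmu = min(range(n_units),
                      key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(dim)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                for k in range(dim):
                    w[i][k] += lr * h * (x[k] - w[i][k])
    return w
```

    In the monitoring system described above, the trained map feeds learning vector quantization and a back-propagation classifier; this sketch covers only the unsupervised first step.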

  15. New methods for microbial contamination monitoring: an experiment on board the MIR orbital station

    NASA Astrophysics Data System (ADS)

    Guarnieri, V.; Gaia, E.; Battocchio, L.; Pitzurra, M.; Savino, A.; Pasquarella, C.; Vago, T.; Cotronei, V.

    1997-01-01

    Experiment T2, carried out during the Euromir'95 mission, was an important step toward innovative methods for spacecraft microbial contamination monitoring. A new standard sampling technique permitted samples to be analysed by different means. On board, two analysis methods were tested in parallel: Bioluminescence and Miniculture. In turn, downloaded samples are being analysed by polymerase chain reaction (PCR), a powerful and promising method for the rapid detection, identification and quantification of pathogens and biofouling agents in closed manned habitats.

  16. STS-33 MS Carter and MS Thornton display 'Maggot on Board' sign and candy

    NASA Technical Reports Server (NTRS)

    1989-01-01

    STS-33 Mission Specialist (MS) Manley L. Carter, Jr (left) and MS Kathryn C. Thornton display 'Maggot on Board' sign and 'SMARTIES' candy stored in plastic bag on the aft flight deck of Discovery, Orbiter Vehicle (OV) 103. The mission specialists are wearing their mission polo shirts and communications kit assembly headsets. An overhead window appears above their heads. A gold necklace chain floats around Carter's neck.

  17. Five Years Lidar Research on Board the Facility for Airborne Atmospheric Measurements (FAAM)

    NASA Astrophysics Data System (ADS)

    Marenco, Franco

    2016-06-01

    I will present a summary of the results obtained with the backscatter lidar on-board the FAAM research aircraft. This simple instrument has been used in several campaigns, and has contributed successfully to the characterization of volcanic ash, mineral dust, biomass burning aerosols, clouds, and the boundary layer structure. Its datasets have been used in many applications, from numerical weather predictions to the validation of satellite remote sensing.

  18. On implementing MPI-IO portably and with high performance.

    SciTech Connect

    Thakur, R.; Gropp, W.; Lusk, E.

    1998-11-30

    We discuss the issues involved in implementing MPI-IO portably on multiple machines and file systems and also achieving high performance. One way to implement MPI-IO portably is to implement it on top of the basic Unix I/O functions (open, seek, read, write, and close), which are themselves portable. We argue that this approach has limitations in both functionality and performance. We instead advocate an implementation approach that combines a large portion of portable code and a small portion of code that is optimized separately for different machines and file systems. We have used such an approach to develop a high-performance, portable MPI-IO implementation, called ROMIO. In addition to basic I/O functionality, we consider the issues of supporting other MPI-IO features, such as 64-bit file sizes, noncontiguous accesses, collective I/O, asynchronous I/O, consistency and atomicity semantics, user-supplied hints, shared file pointers, portable data representation, file preallocation, and some miscellaneous features. We describe how we implemented each of these features on various machines and file systems. The machines we consider are the HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, SGI Origin2000, and networks of workstations; and the file systems we consider are HP HFS, IBM PIOFS, Intel PFS, NEC SFS, SGI XFS, NFS, and any general Unix file system (UFS). We also present our thoughts on how a file system can be designed to better support MPI-IO. We provide a list of features desired from a file system that would help in implementing MPI-IO correctly and with high performance.
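    One ROMIO technique for noncontiguous accesses is data sieving: replace many small reads with one contiguous read spanning all requested extents, then extract the pieces in memory. A schematic pure-Python sketch (the callback name is ours; real ROMIO does this in C against the file system):

```python
def data_sieve_read(read_block, extents):
    """Serve several noncontiguous requests with ONE contiguous read.

    read_block(offset, length) stands in for the underlying file-system read;
    extents is a list of (offset, length) pairs the application asked for.
    """
    lo = min(off for off, ln in extents)
    hi = max(off + ln for off, ln in extents)
    buf = read_block(lo, hi - lo)                        # single large read
    return [buf[off - lo:off - lo + ln] for off, ln in extents]
```

    The trade-off is reading (and discarding) the holes between extents, which pays off when many small seeks would otherwise dominate.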

  19. Understanding and Improving High-Performance I/O Subsystems

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Frieder, Gideon; Clark, A. James

    1996-01-01

    This research program was conducted in the framework of the NASA Earth and Space Science (ESS) evaluations led by Dr. Thomas Sterling. In addition to many important research findings for NASA and prestigious publications, the program helped orient the doctoral research of two students toward parallel input/output in high-performance computing. Further, the experimental results in the case of the MasPar were very useful and helpful to MasPar, with whose technical management the P.I. has had many interactions. The contributions of this program are drawn from three experimental studies conducted on different high-performance computing testbeds/platforms, and are therefore presented in 3 segments as follows: 1. Evaluating the parallel input/output subsystem of NASA high-performance computing testbeds, namely the MasPar MP-1 and MP-2; 2. Characterizing the physical input/output request patterns for NASA ESS applications, which used the Beowulf platform; and 3. Dynamic scheduling techniques for hiding I/O latency in parallel applications such as sparse matrix computations. This last study was conducted on the Intel Paragon and also provided an experimental evaluation of the Parallel File System (PFS) and parallel input/output on the Paragon. This report is organized as follows. The summary of findings discusses the results of each of the aforementioned 3 studies. Three appendices, each containing a key scholarly research paper that details the work in one of the studies, are included.

  20. High performance APCS conceptual design and evaluation scoping study

    SciTech Connect

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NO{sub x} control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation with current and refined input assumptions and calculations can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities.

  1. Noise and sleep on board vessels in the Royal Norwegian Navy.

    PubMed

    Sunde, Erlend; Bratveit, Magne; Pallesen, Stale; Moen, Bente Elisabeth

    2016-01-01

    Previous research indicates that exposure to noise during sleep can cause sleep disturbance. Seamen on board vessels are frequently exposed to noise also during sleep periods, and studies have reported sleep disturbance in this occupational group. However, studies of noise and sleep in maritime settings are few. This study's aim was to examine the associations between noise exposure during sleep, and sleep variables derived from actigraphy among seamen on board vessels in the Royal Norwegian Navy (RNoN). Data were collected on board 21 RNoN vessels, where navy seamen participated by wearing an actiwatch (actigraph), and by completing a questionnaire comprising information on gender, age, coffee drinking, nicotine use, use of medication, and workload. Noise dose meters were used to assess noise exposure inside the seamen's cabin during sleep. Eighty-three sleep periods from 68 seamen were included in the statistical analysis. Linear mixed-effects models were used to examine the association between noise exposure and the sleep variables percentage mobility during sleep and sleep efficiency, respectively. Noise exposure variables, coffee drinking status, nicotine use status, and sleeping hours explained 24.9% of the total variance in percentage mobility during sleep, and noise exposure variables explained 12.0% of the total variance in sleep efficiency. Equivalent noise level and number of noise events per hour were both associated with increased percentage mobility during sleep, and the number of noise events was associated with decreased sleep efficiency.
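    The equivalent noise level used as an exposure variable in studies like this is an energy average, not an arithmetic one: L_eq = 10 * log10(mean(10^(L_i/10))), so loud events dominate the result. A minimal sketch of the calculation:

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level over the samples:
    L_eq = 10 * log10(mean(10 ** (L_i / 10))).
    An energy average: brief loud events outweigh long quiet stretches."""
    mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)
```

    For example, half an hour at 60 dB and half an hour at 70 dB average to about 67.4 dB, not 65 dB.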

  2. Stationary and on-board storage systems to enhance energy and cost efficiency of tramways

    NASA Astrophysics Data System (ADS)

    Ceraolo, M.; Lutzemberger, G.

    2014-10-01

    Nowadays road transportation contributes substantially to urban pollution and greenhouse gas emissions. One solution in the urban environment, which also mitigates the effects of traffic jams, is the use of tramways. The most important bonus comes from the inherent reversibility of electric drives: energy can be sent back to the electricity source while braking the vehicle. This can be done by installing a storage device on board the trains, or at one or more points of the supply network. This paper analyses and compares the following variants: stationary high-power lithium batteries; stationary supercapacitors; high-power lithium batteries on board trains; and supercapacitors on board trains. When the storage system is constituted by a supercapacitor stack, a DC/DC converter must be interposed between it and the line. In contrast, the converter can be avoided in the case of a lithium battery pack. This paper evaluates all of these configurations in a realistic case study, together with a cost/benefit analysis.
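    The energy at stake in such a system is bounded by the kinetic energy available at each braking event, scaled by a round-trip storage efficiency. A back-of-the-envelope sketch (the mass, speed and efficiency figures are illustrative, not from the paper):

```python
def recoverable_braking_energy(mass_kg, speed_ms, efficiency=0.7):
    """Upper bound on energy returned per braking event: the tram's kinetic
    energy, 0.5*m*v^2, times a round-trip storage efficiency (0.7 is a
    placeholder, not a measured value)."""
    return 0.5 * mass_kg * speed_ms ** 2 * efficiency
```

    A 40 t tram braking from 10 m/s with 70% round-trip efficiency would return on the order of 1.4 MJ, the kind of figure a cost/benefit analysis of storage sizing starts from.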

  3. Preliminary on-orbit performance of the Thermal Infrared Sensor (TIRS) on board Landsat 8

    NASA Astrophysics Data System (ADS)

    Montanaro, Matthew; Tesfaye, Zelalem; Lunsford, Allen; Wenny, Brian; Reuter, Dennis; Markham, Brian; Smith, Ramsey; Thome, Kurtis

    2013-09-01

    The Thermal Infrared Sensor (TIRS) on board Landsat 8 continues thermal band measurements of the Earth for the Landsat program. TIRS improves on previous Landsat designs by making use of a pushbroom sensor layout to collect data from the Earth in two spectral channels. The radiometric performance requirements of each detector were set to ensure the proper radiometric integrity of the instrument. The performance of TIRS was characterized during pre-flight thermal-vacuum testing. Calibration methods and algorithms were developed to translate the raw signal from the detectors into an accurate at-aperture spectral radiance. The TIRS instrument has the ability to view an on-board variable-temperature blackbody and a deep space view port for calibration purposes while operating on-orbit. After TIRS was successfully activated on-orbit, checks were performed on the instrument data to determine its image quality. These checkouts included an assessment of the on-board blackbody and deep space views as well as normal Earth scene collects. The calibration parameters that were determined pre-launch were updated by utilizing data from these preliminary on-orbit assessments. The TIRS on-orbit radiometric performance was then characterized using the updated calibration parameters. Although the characterization of the instrument is continually assessed over the lifetime of the mission, the preliminary results indicate that TIRS is meeting the noise and stability requirements while the pixel-to-pixel uniformity performance and the absolute radiometric performance require further study.
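    The on-board blackbody and deep-space views support a classic two-point radiometric calibration: with two reference radiances and the corresponding detector counts, the per-detector gain and offset of L = gain * DN + offset follow directly. A schematic sketch (the numbers used in the test are illustrative, not TIRS values):

```python
def two_point_cal(dn_space, L_space, dn_bb, L_bb):
    """Solve L = gain * DN + offset from two reference views:
    deep space (near-zero radiance) and the on-board blackbody."""
    gain = (L_bb - L_space) / (dn_bb - dn_space)
    offset = L_space - gain * dn_space
    return gain, offset

def dn_to_radiance(dn, gain, offset):
    """Apply the calibration to a raw detector count."""
    return gain * dn + offset
```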

  4. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) uses the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements in US images using HI showed improved consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.

  5. Risk Mitigation for the Development of the New Ariane 5 On-Board Computer

    NASA Astrophysics Data System (ADS)

    Stransky, Arnaud; Chevalier, Laurent; Dubuc, Francois; Conde-Reis, Alain; Ledoux, Alain; Miramont, Philippe; Johansson, Leif

    2010-08-01

    In the frame of Ariane 5 production, some equipment will become obsolete and needs to be redesigned and redeveloped. This is the case for the On-Board Computer, which has to be completely redesigned and re-qualified by RUAG Space, together with all its on-board software and associated development tools by ASTRIUM ST. This paper presents this obsolescence treatment, which started in 2007 under an ESA contract, in the frame of the ACEP and ARTA accompaniment programmes, and is very critical in technical terms but also from a schedule point of view. It gives the context and overall development plan, and details the risk mitigation actions agreed with ESA, especially those related to the development of the input/output ASIC, as well as the on-board software porting and revalidation strategy. The efficiency of these risk mitigation actions has been borne out by the resulting schedule; this development constitutes an up-to-date case study in good practices, including experience reports and feedback for future developments.

  6. [Flight and altitude medicine for anesthetists-part 3: emergencies on board commercial aircraft].

    PubMed

    Graf, Jürgen; Stüben, Uwe; Pump, Stefan

    2013-04-01

    The demographic trend of industrialized societies is also reflected in commercial airlines' passengers: passengers are older nowadays, and long-haul flights are a routine mode of transport despite considerable chronic and acute medical conditions. Moreover, the duration of non-stop flight routes and the number of passengers on board are increasing. Thus, the probability of a medical incident during a particular flight increases, too. International regulations set minimum standards for medical equipment on board and for the first aid training of the crews. However, it is often difficult to assess whether a stopover at a nearby airport can improve the medical care of a critically ill passenger. Besides flight operations and technical aspects, the medical infrastructure on the ground has to be considered carefully. Regardless of a physician's experience, medical emergencies on board an aircraft usually represent a particular challenge. This is mainly due to the unfamiliar surroundings, the characteristics of the cabin atmosphere, the often-present cultural and language barriers, and legal liability concerns.

  7. A survey on electromagnetic interferences on aircraft avionics systems and a GSM on board system overview

    NASA Astrophysics Data System (ADS)

    Vinto, Natale; Tropea, Mauro; Fazio, Peppino; Voznak, Miroslav

    2014-05-01

    Recent years have been characterized by an increase in air traffic. Beyond micro- and macro-economic considerations, there is strategic interest both in enhancing flight safety and in meeting passengers' desire to communicate with wireless handhelds on board aircraft. The European Telecommunications Standards Institute (ETSI) therefore proposed a GSM On Board (GSMOBA) system as a possible solution, allowing mobile terminals to communicate through a GSM system on the aircraft while avoiding electromagnetic interference with radio components aboard. The main issues are directly related to the interference that could arise when mobile terminals attempt to connect to ground BTSs from the airplane. A system of this kind resolves the problem in terms of conformance to the Effective Isotropic Radiated Power (EIRP) limits defined outside the aircraft, by using an On-Board BTS (OBTS) and modeling the relevant key RF parameters over the air. The main purpose of this work is to illustrate the state of the art of the literature and previous studies of the problem, also giving good detail of the technical and normative references.
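    Conformance checks of this kind come down to decibel bookkeeping: EIRP is transmit power plus antenna gain minus losses, compared against the limit defined outside the aircraft. A minimal sketch (the values in the test are illustrative, not the ETSI limits):

```python
def eirp_dbm(tx_power_dbm, antenna_gain_dbi, losses_db):
    """EIRP in dB terms: transmit power plus antenna gain minus losses."""
    return tx_power_dbm + antenna_gain_dbi - losses_db

def within_limit(eirp, limit_dbm):
    """Conformance test against an EIRP limit, both in dBm."""
    return eirp <= limit_dbm
```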

  8. Noise and sleep on board vessels in the Royal Norwegian Navy

    PubMed Central

    Sunde, Erlend; Bråtveit, Magne; Pallesen, Ståle; Moen, Bente Elisabeth

    2016-01-01

    Previous research indicates that exposure to noise during sleep can cause sleep disturbance. Seamen on board vessels are frequently exposed to noise also during sleep periods, and studies have reported sleep disturbance in this occupational group. However, studies of noise and sleep in maritime settings are few. This study's aim was to examine the associations between noise exposure during sleep, and sleep variables derived from actigraphy among seamen on board vessels in the Royal Norwegian Navy (RNoN). Data were collected on board 21 RNoN vessels, where navy seamen participated by wearing an actiwatch (actigraph), and by completing a questionnaire comprising information on gender, age, coffee drinking, nicotine use, use of medication, and workload. Noise dose meters were used to assess noise exposure inside the seamen's cabin during sleep. Eighty-three sleep periods from 68 seamen were included in the statistical analysis. Linear mixed-effects models were used to examine the association between noise exposure and the sleep variables percentage mobility during sleep and sleep efficiency, respectively. Noise exposure variables, coffee drinking status, nicotine use status, and sleeping hours explained 24.9% of the total variance in percentage mobility during sleep, and noise exposure variables explained 12.0% of the total variance in sleep efficiency. Equivalent noise level and number of noise events per hour were both associated with increased percentage mobility during sleep, and the number of noise events was associated with decreased sleep efficiency. PMID:26960785

  9. University of the seas, 15 years of oceanographic schools on board of the Marion Dufresne

    NASA Astrophysics Data System (ADS)

    Malaize, Bruno; Deverchere, Jacques; Leau, Hélène; Graindorge, David

    2015-04-01

    Since the first University at Sea, proposed by two French universities (Brest and Bordeaux) in 1999, the R/V Marion Dufresne, in collaboration with the French Polar Institute (IPEV), has welcomed 12 oceanographic schools. The main objective of this educational and scientific program is to stimulate the interest of postgraduate students in scientific fields dealing with oceanography, and to broaden exchanges with foreign universities, strengthening a pool of excellence at a high international scientific level. It is a unique opportunity for the students to discover and be involved in the ongoing work of collecting scientific data on board a ship, and to attend international research courses given by scientists involved in the cruise program. They also experience the final task of scientific work by presenting their own training results, making posters on board, and writing a cruise report. For some Universities at Sea, students have also kept a daily journal, available on the internet and hosted by the main institutions involved (such as IPEV or EPOC, Bordeaux University). All this work is done in English, a language common to all the participants. An overview of these 15 years of experience will be presented, outlining the financial support used and the logistics on board, as well as the benefits acquired by former students, now in permanent positions in different international institutions.

  10. Validation of On-board Cloud Cover Assessment Using EO-1

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Miller, Jerry; Griffin, Michael; Burke, Hsiao-hua

    2003-01-01

    The purpose of this NASA Earth Science Technology Office funded effort was to flight validate an on-board cloud detection algorithm and to determine the performance that can be achieved with a Mongoose V flight computer. This validation was performed on the EO-1 satellite, which is operational, by uploading new flight code to perform the cloud detection. The algorithm was developed by MIT/Lincoln Lab and is based on the use of the Hyperion hyperspectral instrument using selected spectral bands from 0.4 to 2.5 microns. The Technology Readiness Level (TRL) of this technology at the beginning of the task was level 5 and was TRL 6 upon completion. In the final validation, an 8 second (0.75 Gbytes) Hyperion image was processed on-board and assessed for percentage cloud cover within 30 minutes. It was expected to take many hours and perhaps a day considering that the Mongoose V is only a 6-8 MIP machine in performance. To accomplish this test, the image taken had to have level 0 and level 1 processing performed on-board before the cloud algorithm was applied. For almost all of the ground test cases and all of the flight cases, the cloud assessment was within 5% of the correct value and in most cases within 1-2%.
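At its core, a cloud-cover assessment like the one validated here reduces to classifying pixels as cloudy or clear and reporting the cloudy fraction. The sketch below illustrates only that final bookkeeping step with a single hypothetical reflectance threshold; the actual MIT/Lincoln Lab algorithm uses multiple Hyperion spectral bands from 0.4 to 2.5 microns and is far more involved.

```python
def cloud_cover_percent(reflectance, threshold=0.4):
    """Percentage of pixels whose reflectance exceeds a cloud threshold.

    `reflectance` is a 2-D grid (list of rows); `threshold` is hypothetical.
    """
    pixels = [p for row in reflectance for p in row]
    cloudy = sum(1 for p in pixels if p > threshold)
    return 100.0 * cloudy / len(pixels)

# Tiny synthetic 3x3 scene: bright (cloudy) and dark (clear) pixels
scene = [[0.1, 0.8, 0.9],
         [0.2, 0.5, 0.1],
         [0.9, 0.1, 0.2]]
print(round(cloud_cover_percent(scene), 1))  # 44.4
```

The 5% accuracy figure quoted in the abstract would then be the difference between this on-board percentage and a ground-truth assessment of the same scene.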

  11. [Determination of dyes in cosmetic by high performance liquid chromatography].

    PubMed

    Wang, J; Hu, J

    1999-09-01

    A reversed-phase high performance liquid chromatographic method was used to detect 5 dyes, including p-phenylenediamine, in cosmetics. A Zorbax C8 column was used, and the mobile phase was V(triethanolamine):V(water):V(acetonitrile) = 0.95:94.05:5, adjusted to pH 7.7 with phosphoric acid. The detection wavelength was 280 nm. Samples were extracted with 95% ethanol by an ultrasonic method. The recoveries were 87%-107% and the relative standard deviations were 2.3%-6.4%.
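The recovery and relative standard deviation figures quoted above are standard method-validation statistics. The sketch below shows how they are computed from replicate measurements of a spiked sample; the replicate values and spike level are hypothetical, not data from this study.

```python
import statistics

def recovery_percent(measured_mean: float, spiked_amount: float) -> float:
    """Recovery: mean measured amount as a percentage of the spiked amount."""
    return 100.0 * measured_mean / spiked_amount

def rsd_percent(values) -> float:
    """Relative standard deviation: sample stdev as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate measurements of a 10.0 ug/g dye spike
replicates = [9.1, 9.5, 9.3, 9.8, 9.2]
print(round(recovery_percent(statistics.mean(replicates), 10.0), 1))  # 93.8
print(round(rsd_percent(replicates), 2))  # 2.96
```

A method is typically judged acceptable when recoveries fall near 100% and the RSD stays within a few percent, which is what the 87%-107% and 2.3%-6.4% ranges in the abstract indicate.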

  12. A high-performance wave guide cryogenic thermal break

    NASA Astrophysics Data System (ADS)

    Melhuish, S. J.; McCulloch, M. A.; Piccirillo, L.; Stott, C.

    2016-10-01

    We describe a high-performance wave guide cryogenic thermal break. This has been constructed both for Ka band, using WR28 wave guide, and Q band, using WR22 wave guide. The mechanical structure consists of a hexapod (Stewart platform) made from pultruded carbon fibre tubing. We present a tentative examination of the cryogenic Young's modulus of this material. The thermal conductivity is measured at temperatures above the range explored by Runyan and Jones, resulting in predicted conductive loads through our thermal breaks of 3.7 mW to 3 K and 17 μW to 1 K.
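Conductive-load predictions like those quoted above follow from integrating the measured thermal conductivity over the temperature span of each strut. The sketch below uses a power-law conductivity fit, a common form for low-temperature materials; the coefficients and geometry are illustrative assumptions, not the paper's measured values.

```python
def conductive_load_w(area_m2: float, length_m: float,
                      t_cold: float, t_hot: float,
                      k0: float = 0.02, n: float = 1.5) -> float:
    """Conductive heat load P = (A/L) * integral of k(T) dT from t_cold
    to t_hot, for a power-law conductivity k(T) = k0 * T**n in W/(m.K).

    k0 and n are hypothetical fit coefficients, evaluated analytically.
    """
    integral = k0 * (t_hot ** (n + 1) - t_cold ** (n + 1)) / (n + 1)
    return (area_m2 / length_m) * integral

# Hypothetical hexapod: six struts of 1 mm^2 cross-section, 30 mm long,
# spanning a 50 K stage down to a 3 K stage
print(conductive_load_w(6 * 1e-6, 0.03, 3.0, 50.0))
```

Because the load scales with A/L, the hexapod geometry trades mechanical stiffness against thermal isolation by using long, thin struts.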

  13. High-performance computing in structural mechanics and engineering

    SciTech Connect

    Adeli, H.; Kamat, M.P.; Kulkarni, G.; Vanluchene, R.D. (Georgia Inst. of Technology, Atlanta; Montana State Univ., Bozeman)

    1993-07-01

    Recent advances in computer hardware and software have made multiprocessing a viable and attractive technology. This paper reviews high-performance computing methods in structural mechanics and engineering through the use of a new generation of multiprocessor computers. The paper presents an overview of vector pipelining, performance metrics for parallel and vector computers, programming languages, and general programming considerations. Recent developments in the application of concurrent processing techniques to the solution of structural mechanics and engineering problems are reviewed, with special emphasis on linear structural analysis, nonlinear structural analysis, transient structural analysis, dynamics of multibody flexible systems, and structural optimization. 64 refs.
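Among the performance metrics for parallel computers mentioned in this review, Amdahl's law is the classic bound on the speedup achievable when part of a structural analysis code remains serial. The sketch below is a minimal, generic illustration of that metric; the fractions and processor counts are hypothetical.

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Amdahl's law: speedup is limited by the serial fraction of the code.

    S(p) = 1 / ((1 - f) + f / p), where f is the parallelizable fraction.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# A code that is 95% parallelizable on a 16-processor machine
print(round(amdahl_speedup(0.95, 16), 2))  # 9.14
```

Even with 95% of the work parallelized, 16 processors yield barely a 9x speedup, which is why the paper's emphasis on restructuring whole solution algorithms (rather than parallelizing loops in isolation) matters.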

  14. Language interoperability mechanisms for high-performance scientific applications

    SciTech Connect

    Cleary, A; Kohn, S; Smith, S G; Smolinski, B

    1998-09-18

    Language interoperability is a difficult problem facing the developers and users of large numerical software packages. Language choices often hamper the reuse and sharing of numerical libraries, especially in a scientific computing environment that uses a breadth of programming languages, including C, C++, Java, various Fortran dialects, and scripting languages such as Python. In this paper, we propose a new approach to language interoperability for high-performance scientific applications based on Interface Definition Language (IDL) techniques. We investigate the modifications necessary to adapt traditional IDL approaches for use by the scientific community, including IDL extensions for numerical computing and issues involved in mapping IDLs to Fortran 77 and Fortran 90.

  15. Building and managing high performance, scalable, commodity mass storage systems

    NASA Technical Reports Server (NTRS)

    Lekashman, John

    1998-01-01

    The NAS Systems Division has recently embarked on a significant new way of handling the mass storage problem. One of the basic goals of this new development is to build systems of very large capacity and high performance that still have the advantages of commodity products. The central design philosophy is to build storage systems the way the Internet was built: competitive, survivable, expandable, and wide open. The thrust of this paper is to describe the motivation for this effort, what we mean by commodity mass storage, what the implications are for a facility that takes this approach, and where we think it will lead.

  16. High performance cosmological simulations on a grid of supercomputers

    NASA Astrophysics Data System (ADS)

    Groen, D.; Rieder, S.; Portegies Zwart, S. F.

    2012-06-01

    We present results from our cosmological N-body simulation which consisted of 2048x2048x2048 particles and ran distributed across three supercomputers throughout Europe. The run, which was performed as the concluding phase of the Gravitational Billion Body Problem DEISA project, integrated a 30 Mpc box of dark matter using an optimized Tree/Particle Mesh N-body integrator. We ran the simulation up to the present day (z=0), and obtained an efficiency of about 0.93 over 2048 cores compared to a single supercomputer run. In addition, we share our experiences on using multiple supercomputers for high performance computing and provide several recommendations for future projects.
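The 0.93 efficiency figure quoted above compares the distributed run against a single-supercomputer run at the same core count, so it reduces to a ratio of wall-clock times. The sketch below shows that bookkeeping; the timing values are hypothetical, not the DEISA project's measurements.

```python
def parallel_efficiency(t_reference_hours: float, t_distributed_hours: float) -> float:
    """Efficiency of a distributed run relative to a reference run on a
    single machine at the same core count: ratio of wall-clock times."""
    return t_reference_hours / t_distributed_hours

# Hypothetical timings: 100 h on one supercomputer vs. 107.5 h spread
# across three sites (the extra time comes from wide-area communication)
print(round(parallel_efficiency(100.0, 107.5), 2))  # 0.93
```

An efficiency this close to 1.0 indicates that wide-area latency between the three sites cost only a few percent of the total runtime, which is the main point of the experiment.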

  17. Trends in high-performance computing for engineering calculations.

    PubMed

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers.

  18. A Component Architecture for High-Performance Computing

    SciTech Connect

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a ''plug and play'' environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel ''cohort.'' We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.
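The "provides/uses port" pattern at the heart of the CCA can be sketched in a few lines: one component provides an interface, another declares that it uses it, and a framework wires them together inside the same process so inter-component calls are direct method calls. The class and port names below are hypothetical illustrations, not the actual CCA (or Babel/SIDL) API.

```python
class IntegratorPort:
    """A 'provides' port: a numerical service exposed to other components."""
    def integrate(self, f, a, b, steps=1000):
        # Midpoint-rule quadrature of f over [a, b]
        h = (b - a) / steps
        return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

class DriverComponent:
    """A component with a 'uses' port, wired up by the framework."""
    def __init__(self):
        self._integrator = None

    def connect(self, port_name, provider):
        # In CCA this wiring is done by the framework, not by the component
        if port_name == "integrator":
            self._integrator = provider

    def run(self):
        return self._integrator.integrate(lambda x: x * x, 0.0, 1.0)

driver = DriverComponent()
driver.connect("integrator", IntegratorPort())
print(driver.run())  # close to 1/3
```

Because `connect` hands over an in-process object reference, the call through the port carries essentially no overhead beyond a normal method dispatch, which is the performance property the paper analyzes.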

  19. IBM SP high-performance networking with a GRF.

    SciTech Connect

    Navarro, J.P.

    1999-05-27

    Increasing use of highly distributed applications, demand for faster data exchange, and highly parallel applications can push the limits of conventional external networking for IBM SP sites. In technical computing applications we have observed a growing use of a pipeline of hosts and networks collaborating to collect, process, and visualize large amounts of realtime data. The GRF, a high-performance IP switch from Ascend and IBM, is the first backbone network switch to offer a media card that can directly connect to an SP Switch. This enables switch attached hosts in an SP complex to communicate at near SP Switch speeds with other GRF attached hosts and networks.

  20. Advanced Modified High Performance Synthetic Jet Actuator with Curved Chamber

    NASA Technical Reports Server (NTRS)

    Xu, Tian-Bing (Inventor); Su, Ji (Inventor); Jiang, Xiaoning (Inventor)

    2014-01-01

    The advanced modified high performance synthetic jet actuator with optimized curvature shape chamber (ASJA-M) is a synthetic jet actuator (SJA) with a lower volume reservoir or chamber. A curved chamber is used, instead of the conventional cylinder chamber, to reduce the dead volume of the jet chamber and increase the efficiency of the synthetic jet actuator. The shape of the curvature corresponds to the maximum displacement (deformation) profile of the electroactive diaphragm. The jet velocity and mass flow rate of the ASJA-M will be several times higher than those of conventional piezoelectric actuators.