Science.gov

Sample records for adaptable high-performance on-board

  1. High-performance thresholding with adaptive equalization

    NASA Astrophysics Data System (ADS)

    Lam, Ka Po

    1998-09-01

    The ability to simplify an image whilst retaining such crucial information as shapes and geometric structures is of great importance for real-time image analysis applications. Here the technique of binary thresholding, which reduces image complexity, has generally been regarded as one of the most valuable methods, primarily owing to its ease of design and analysis. This paper studies the state of developments in the field and describes a radically different approach to adaptive thresholding. The latter employs the analytical technique of histogram normalization to facilitate an optimal `contrast level' for the image under consideration. A suitable criterion is also developed to determine the applicability of the adaptive processing procedure. In terms of performance and computational complexity, the proposed algorithm compares favorably to five established image thresholding methods selected for this study. Experimental results show that the new algorithm outperforms these methods on a number of important error measures, including a consistently low visual classification error. The simplicity of the algorithm's design also lends itself to efficient parallel implementations.
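    The normalization-then-threshold idea can be sketched as follows. This is a generic illustration only: the percentile contrast stretch and the mean-based threshold are assumptions for the sketch, not the paper's actual criterion.

```python
import numpy as np

def adaptive_threshold(image):
    """Contrast-normalize an image via its histogram, then binarize.

    Illustrative sketch: the 1st-99th percentile stretch and the
    mean threshold stand in for the paper's normalization criterion.
    """
    img = np.asarray(image, dtype=np.float64)
    # Histogram-based contrast stretch: map the bulk of the intensity
    # range onto [0, 1] so the threshold sees a canonical contrast level.
    lo, hi = np.percentile(img, (1.0, 99.0))
    stretched = np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    # Threshold the normalized image (mean used here for simplicity).
    return (stretched >= stretched.mean()).astype(np.uint8)
```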

  2. Digital control of high performance aircraft using adaptive estimation techniques

    NASA Technical Reports Server (NTRS)

    Van Landingham, H. F.; Moose, R. L.

    1977-01-01

    In this paper, an adaptive signal processing algorithm is joined with gain-scheduling for controlling the dynamics of high performance aircraft. A technique is presented for a reduced-order model (the longitudinal dynamics) of a high performance STOL aircraft. The actual controller views the nonlinear behavior of the aircraft as equivalent to a randomly switching sequence of linear models taken from a preliminary piecewise-linear fit of the system nonlinearities. The adaptive nature of the estimator is necessary to select the proper sequence of linear models along the flight trajectory. Nonlinear behavior is approximated by effective switching of the linear models at random times, with durations reflecting aircraft motion in response to pilot commands.

  3. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines depends strongly on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine and typically operates efficiently up to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high-end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods used by each grouping of data in an application, so that end users can use the IO methods that exhibit the best performance for both their IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate, on petascale machines, alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, such as those provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  4. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data over constrained downlink resources. A novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed at JPL. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A software prototype of the compressor (and decompressor) is available, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x over the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex-4 and Virtex-5 families) and compresses one sample every clock cycle, providing a fast and practical real-time solution for space applications.
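    The adaptive-filtering principle behind such predictive lossless compressors can be illustrated with a one-dimensional sign-sign LMS predictor: a filter predicts each sample, the integer residual is what gets entropy-coded, and the decoder mirrors the update exactly so reconstruction is bit-exact. The filter order, step size and 1-D history below are illustrative stand-ins, not the JPL flight algorithm, which predicts across spectral bands.

```python
import numpy as np

def adaptive_predict_residuals(samples, order=3, mu=0.01):
    """Sign-based adaptive linear prediction; returns integer residuals.

    Illustrative only: shows the adaptive-filtering principle of
    predictive lossless compression, not the 'Fast Lossless' algorithm.
    """
    w = np.zeros(order)
    hist = np.zeros(order)
    residuals = np.empty(len(samples), dtype=np.int64)
    for i, s in enumerate(samples):
        pred = int(round(w @ hist))
        e = int(s) - pred
        residuals[i] = e
        # Sign-sign LMS update keeps the arithmetic hardware-friendly.
        w += mu * np.sign(e) * np.sign(hist)
        hist = np.roll(hist, 1)
        hist[0] = s
    return residuals

def reconstruct(residuals, order=3, mu=0.01):
    """Decoder: replays the identical predictor state, so it is lossless."""
    w = np.zeros(order)
    hist = np.zeros(order)
    out = np.empty(len(residuals), dtype=np.int64)
    for i, e in enumerate(residuals):
        pred = int(round(w @ hist))
        s = pred + int(e)
        out[i] = s
        w += mu * np.sign(e) * np.sign(hist)
        hist = np.roll(hist, 1)
        hist[0] = s
    return out
```

    Because encoder and decoder apply the same deterministic update, the round trip reproduces the input exactly, which is the defining property of lossless predictive coding.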

  5. On-board multispectral classification study. Volume 2: Supplementary tasks. [adaptive control

    NASA Technical Reports Server (NTRS)

    Ewalt, D.

    1979-01-01

    The operational tasks of the onboard multispectral classification study were defined. These tasks include: sensing characteristics for future space applications; information adaptive systems architectural approaches; data set selection criteria; and onboard functional requirements for interfacing with global positioning satellites.

  6. Adaptive filtering of biodynamic stick feedthrough in manipulation tasks on board moving platforms

    NASA Technical Reports Server (NTRS)

    Velger, M.; Grunwald, A.; Merhav, S.

    1986-01-01

    A novel approach to suppressing the effects of biodynamic interference is presented. An adaptive noise canceling technique is employed to subtract the platform-motion-correlated components from the control stick output. The effects of biodynamic interference and its suppression by adaptive noise cancellation were evaluated in a series of tracking tasks performed in a moving-base simulator. Simulator motions were in pitch, roll and combined pitch and roll. Human operator performance was assessed from the mean square values of the tracking error and the control activity. The tracking error and the total stick output signal were found to increase significantly with motion and to diminish substantially with adaptive noise cancellation, thus providing a considerable improvement in tracking performance under conditions in which platform motion was present. The adaptive filter was found to cause a significant increase in the cross-over frequency and a decrease in the phase margin. Moreover, the adaptive filter was found to significantly improve the human operator's visual-motor response. This improvement is manifested as an increased human operator gain, a smaller time delay and a lower pilot workload.
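    Adaptive noise cancellation of this kind is classically implemented with an LMS filter: the measured platform motion serves as the reference input, and the filter removes whatever part of the stick signal is correlated with it. A minimal sketch, with tap count and step size chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """LMS adaptive noise canceller.

    primary:   stick output = pilot command + motion feedthrough
    reference: measured platform motion (correlated with the feedthrough)
    Returns the cleaned signal e[n] = primary[n] - (filtered reference).
    Tap count and step size are illustrative choices.
    """
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(len(primary)):
        # Most recent `taps` reference samples, newest first.
        x = reference[max(0, n - taps + 1):n + 1][::-1]
        if len(x) < taps:
            x = np.concatenate([x, np.zeros(taps - len(x))])
        y = w @ x                 # estimate of the motion feedthrough
        e = primary[n] - y        # cleaned command signal
        out[n] = e
        w += 2 * mu * e * x      # LMS weight update
    return out
```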

  7. Intelligent adaptive nonlinear flight control for a high performance aircraft with neural networks.

    PubMed

    Savran, Aydogan; Tasaltin, Ramazan; Becerikli, Yasar

    2006-04-01

    This paper describes the development of a neural network (NN) based adaptive flight control system for a high performance aircraft. The main contribution of this work is that the proposed control system is able to compensate for system uncertainties, adapt to changes in flight conditions, and accommodate system failures. The underlying study can be considered in two phases. The objective of the first phase is to model the dynamic behavior of a nonlinear F-16 model using NNs. Therefore a NN-based adaptive identification model is developed for the three angular rates of the aircraft. An on-line training procedure is developed to adapt to changes in the system dynamics and improve the identification accuracy. In this procedure, a first-in first-out stack is used to store a certain history of the input-output data. The training is performed over the whole data in the stack at every stage. To speed up the convergence rate and enhance the accuracy of on-line learning, the Levenberg-Marquardt optimization method with a trust-region approach is adapted to train the NNs. The objective of the second phase is to develop intelligent flight controllers. A NN-based adaptive PID control scheme composed of an emulator NN, an estimator NN, and a discrete-time PID controller is developed. The emulator NN is used to calculate the system Jacobian required to train the estimator NN. The estimator NN, which is trained on-line by propagating the output error through the emulator, is used to adjust the PID gains. The NN-based adaptive PID control system is applied to control the three angular rates of the nonlinear F-16 model. The body-axis pitch, roll, and yaw rates are fed back via the PID controllers to the elevator, aileron, and rudder actuators, respectively. The resulting control system has learning, adaptation, and fault-tolerant abilities. It avoids the storage and interpolation requirements for the large number of controller parameters of a typical flight control
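    The core gain-adjustment idea, tuning PID gains by gradient descent on the squared tracking error through a plant Jacobian (the quantity the emulator NN supplies in the paper), can be sketched in a few lines. The learning rate, initial gains and the first-order plant in the test are hypothetical, and the Jacobian is passed in directly rather than computed by a network:

```python
import numpy as np

class AdaptivePID:
    """Discrete PID whose gains adapt online by gradient descent on
    J = e^2 / 2, using an externally supplied plant Jacobian dy/du.
    Simplified stand-in for the paper's estimator/emulator NN pair."""

    def __init__(self, kp=0.5, ki=0.1, kd=0.0, eta=0.001):
        self.k = np.array([kp, ki, kd])  # gains on [e, sum(e), delta(e)]
        self.eta = eta                   # adaptation rate (illustrative)
        self.integ = 0.0
        self.prev_e = 0.0

    def step(self, error, plant_jacobian):
        self.integ += error
        de = error - self.prev_e
        self.prev_e = error
        phi = np.array([error, self.integ, de])
        u = self.k @ phi
        # dJ/dk = -e * (dy/du) * phi, so descent adds +eta*e*(dy/du)*phi.
        self.k += self.eta * error * plant_jacobian * phi
        return u
```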

  8. A multi-layer robust adaptive fault tolerant control system for high performance aircraft

    NASA Astrophysics Data System (ADS)

    Huo, Ying

    Modern high-performance aircraft demand advanced fault-tolerant flight control strategies. Not only control effector failures, but also aerodynamic failures such as wing-body damage, often result in substantially deteriorated performance because of the low available redundancy. As a result, the remaining control actuators may yield substantially lower maneuvering capabilities which do not permit the accomplishment of the aircraft's originally specified mission. The problem is to solve the control reconfiguration over the available control redundancies when mission modification is required to save the aircraft. The proposed robust adaptive fault-tolerant control (RAFTC) system consists of a multi-layer reconfigurable flight controller architecture. It contains three layers accounting for different types and levels of failures, including sensor, actuator, and fuselage damage. In the case of nominal operation with possible minor failure(s), a standard adaptive controller achieves the control allocation. This is referred to as the first layer, the controller layer. Performance adjustment is accounted for in the second layer, the reference layer, whose role is to adjust the reference model in the controller design with a degraded transient performance. The uppermost mission adjustment occurs in the third layer, the mission layer, when the original mission is not feasible with greatly restricted control capabilities. The modified mission is achieved through optimization of the command signal, which guarantees the boundedness of the closed-loop signals. The main distinguishing feature of this layer is the mission decision property based on the currently available resources. The contribution of this research is the multi-layer fault-tolerant architecture that can address complete failure scenarios and their accommodation in reality. Moreover, the emphasis is on the mission design capabilities, which may guarantee the stability of the aircraft with restricted post

  9. An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings

    NASA Astrophysics Data System (ADS)

    Karizi, Nasim

    An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% being built each year. Hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research work demonstrates an innovative adaptive integrated lighting control approach which could achieve significant energy savings and increase indoor comfort in high performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink which included an innovative sensor optimization approach based on a genetic algorithm to minimize the number of sensors and efficiently place them in the office. The controller was designed using simple integral controllers. The objective of the developed control algorithm was to improve the illuminance conditions in the office by controlling the daylight and electrical lighting. To evaluate the performance of the system, the controller was applied to the experimental office model from Lee et al.'s 1998 research study. The results of the developed control approach indicate a significant improvement in lighting conditions, and 1-23% and 50-78% monthly electrical energy savings in the office model, compared to two static strategies in which the blinds were left open or closed, respectively, during the whole year.
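    A single step of the kind of simple integral controller the abstract mentions can be sketched as follows; the gain, the [0, 1] dimming bound, and the linear luminaire model in the usage below are illustrative assumptions, not values from the study.

```python
def integral_dim(setpoint_lux, measured_lux, dim_level, ki=0.0005):
    """One step of a simple integral illuminance controller.

    Accumulates the illuminance error into a dimming level clamped
    to [0, 1]. Gain ki and the bound are illustrative, not the study's.
    """
    dim_level += ki * (setpoint_lux - measured_lux)
    return min(max(dim_level, 0.0), 1.0)
```

    Driven against any roughly linear light source, the integral action removes the steady-state error between the illuminance setpoint and the sensor reading.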

  10. Design and Performance Optimization of GeoFEST for Adaptive Geophysical Modeling on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Norton, C. D.; Parker, J. W.; Lyzenga, G. A.; Glasscoe, M. T.; Donnellan, A.

    2006-12-01

    The Geophysical Finite Element Simulation Tool (GeoFEST) and the PYRAMID parallel adaptive mesh refinement library have been integrated to provide high performance and high resolution modeling of 3D Earth crustal deformation under tectonic loading associated with the earthquake cycle. This includes co-seismic and post-seismic modeling capabilities as well as other problems of geophysical interest. The use of the PYRAMID AMR library has allowed simulations of tens of millions of elements on various parallel computers, where strain energy is applied as the error estimation criterion. This has allowed for improved generation of time-dependent simulations where the computational effort can be localized to the geophysical regions of most activity. This talk will address techniques including conversion of the sequential GeoFEST software to a parallel version using PYRAMID, performance optimization, and various lessons learned in porting such software to various parallel systems including Linux clusters, SGI Altix systems, and Apple G5 Xserve systems. We will also describe how the software has been applied in modeling of post-seismic deformation studies of the Landers and Northridge earthquake events.

  11. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    NASA Technical Reports Server (NTRS)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and global water and energy cycles.

  12. Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas

    2001-01-01

    This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false alarm/missed detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving the maneuverability and survivability of the degraded aircraft. This method is demonstrated on a next-generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.
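    The mapping from moment commands to effector commands, with failed effectors masked out, can be illustrated with a linear pseudo-inverse allocator. This is a generic stand-in for the nonlinear allocation method in the paper: the effectiveness matrix and failure mask are hypothetical.

```python
import numpy as np

def allocate(moment_cmd, B, available):
    """Pseudo-inverse control allocation with failure masking.

    moment_cmd: desired roll/pitch/yaw moments, shape (3,)
    B:          control effectiveness matrix, shape (3, n_effectors)
    available:  1.0 for healthy effectors, 0.0 for failed, shape (n,)
    Zeroing a column removes that effector from the allocation, so the
    pseudo-inverse redistributes the demand over the healthy ones.
    """
    Bm = B * available            # mask the columns of failed effectors
    return np.linalg.pinv(Bm) @ moment_cmd
```

    With all effectors healthy the allocation reproduces the commanded moment exactly when the effectiveness matrix has full rank; after a failure it returns the least-squares best achievable solution, which is the reconfiguration idea in miniature.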

  13. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels sampled at 10 MVoxels/sec by a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.

  14. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. An especially promising application for modern supercomputing architectures is numerical weather prediction (NWP). Adapting traditional NWP codes to new machines based on multi- and many-core processors, such as GPUs, makes it possible to increase computational efficiency and decrease energy consumption. This offers a unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it extends the range of scales represented in the model, so that the accuracy of representation of the simulated atmospheric processes can be improved. Consequently, it improves the quality of weather forecasts. A coalition of Polish scientific institutions has launched a project aimed at adapting the EULAG fluid solver to future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of the COSMO Consortium weather prediction framework. The solver code combines features of stencil and pointwise computations. Its communication scheme consists of both halo-exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely the MPDATA advection scheme and the iterative GCR elliptic solver, are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning cores into independent teams, cache reuse and vectorization. Experiments matching the computational domain topology to the cluster topology are performed as well. The parallel formulation was extended from pure MPI to a hybrid MPI-OpenMP approach. Porting to GPU using CUDA directives is in progress. Preliminary results of performance of the

  15. DARTS: a low-cost high-performance FPGA implemented real-time control platform for adaptive optics

    NASA Astrophysics Data System (ADS)

    Goodsell, S. J.; Dipper, N. A.; Geng, D.; Myers, R. M.; Saunter, C. D.

    2005-08-01

    Durham University's Centre for Advanced Instrumentation (CfAI) is currently producing a generic high-performance, low-cost real-time control system (RTCS) for adaptive optics (AO) based on Field Programmable Gate Array (FPGA) technology. This platform, labelled DARTS, the 'Durham Adaptive optics Real Time System', will primarily be used as the controller for Durham's enhanced Rayleigh Technical Demonstrator (RTD) system. However, DARTS could be used as a low-latency control system for existing AO instruments or for future 'budget' AO Natural Guide Star (NGS) and/or Laser Guide Star (LGS) RTCSs. DARTS uses an FPGA device to host an end-to-end modular real-time AO pipeline connected to a Wishbone control bus. The FPGA exploits the pipeline's highly parallel, computationally intensive tasks, which are usually calculated serially by a system processor. DARTS aims to increase the obtainable control loop frequency and reduce the computational latency of the RTD's RTCS. DARTS is capable of high-bandwidth I/O due to the implementation of the serial Front Panel Data Port (sFPDP) industrial protocol. The hardware's I/O design is modular, allowing for the future connection of various WFSs and DMs via signal converters. Various communications architectures are suggested to allow non-real-time configuration and visualisation data to flow between the Wishbone control bus and a processing device, either external or internal to the FPGA device. This paper reveals the current status of the project.

  16. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; George, Alan; Aggarwal, Vikas; Patel, Minesh; Some, Raphael

    2006-01-01

    With the ever increasing demand for higher bandwidth and processing capacity in today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  17. DSP-based adaptive backstepping using the tracking errors for high-performance sensorless speed control of induction motor drive.

    PubMed

    Zaafouri, Abderrahmen; Ben Regaya, Chiheb; Ben Azza, Hechmi; Châari, Abdelkader

    2016-01-01

    This paper presents a modified structure for backstepping nonlinear control of an induction motor (IM) fitted with an adaptive backstepping speed observer. The control design is based on the backstepping technique, complemented by the introduction of integral tracking error actions to improve its robustness. Unlike other research on backstepping control with integral action, the control law developed in this paper does not increase the number of system states, so as not to increase the complexity of solving the differential equations. The digital simulation and experimental results show the effectiveness of the proposed control compared to conventional PI control. The analysis of the results shows the characteristic robustness of the adaptive control to load disturbances, speed variation and low-speed operation.

  18. High-performance liquid chromatography with diode-array detection cotinine method adapted for the assessment of tobacco smoke exposure.

    PubMed

    Bartolomé, Mónica; Gallego-Picó, Alejandrina; Huetos, Olga; Castaño, Argelia

    2014-06-01

    Smoking is considered to be one of the main risk factors for cancer and other diseases and is the second leading cause of death worldwide. As the anti-tobacco legislation implemented in Europe has reduced secondhand smoke exposure levels, analytical methods must be adapted to these new levels. Recent research has demonstrated that cotinine is the best overall discriminator when biomarkers are used to determine whether a person has ongoing exposure to tobacco smoke. This work proposes a sensitive, simple and low-cost method based on solid-phase extraction and liquid chromatography with diode array detection for the assessment of tobacco smoke exposure by cotinine determination in urine. The analytical procedure is simple and fast (20 min) when compared to other similar methods existing in the literature, and it is cheaper than the mass spectrometry techniques usually used to quantify levels in nonsmokers. We obtained a quantification limit of 12.30 μg/L and a recovery of over 90%. The linearity ranges used were 12-250 and 250-4000 μg/L. The method was successfully used to determine cotinine in urine samples collected from different volunteers and is clearly an alternative routine method that allows active and passive smokers to be distinguished.

  19. A High Performance, Cost-Effective, Open-Source Microscope for Scanning Two-Photon Microscopy that Is Modular and Readily Adaptable

    PubMed Central

    Rosenegger, David G.; Tran, Cam Ha T.; LeDue, Jeffery; Zhou, Ning; Gordon, Grant R.

    2014-01-01

    Two-photon laser scanning microscopy has revolutionized the ability to delineate cellular and physiological function in acutely isolated tissue and in vivo. However, there exist barriers for many laboratories to acquire two-photon microscopes. Additionally, if owned, typical systems are difficult to modify to rapidly evolving methodologies. A potential solution to these problems is to enable scientists to build their own high-performance and adaptable system by overcoming a resource insufficiency. Here we present a detailed hardware resource and protocol for building an upright, highly modular and adaptable two-photon laser scanning fluorescence microscope that can be used for in vitro or in vivo applications. The microscope is comprised of high-end componentry on a skeleton of off-the-shelf compatible opto-mechanical parts. The dedicated design enabled imaging depths close to 1 mm into mouse brain tissue and a signal-to-noise ratio that exceeded all commercial two-photon systems tested. In addition to a detailed parts list, instructions for assembly, testing and troubleshooting, our plan includes complete three dimensional computer models that greatly reduce the knowledge base required for the non-expert user. This open-source resource lowers barriers in order to equip more laboratories with high-performance two-photon imaging and to help progress our understanding of the cellular and physiological function of living systems. PMID:25333934

  20. Wind shear measuring on board an airliner

    NASA Technical Reports Server (NTRS)

    Krauspe, P.

    1984-01-01

    A measurement technique which continuously determines the wind vector on board an airliner during takeoff and landing is introduced. Its implementation is intended to deliver sufficient statistical background concerning low frequency wind changes in the atmospheric boundary layer and extended knowledge about deterministic wind shear modeling. The wind measurement scheme is described and the adaptation of apparatus onboard an A300 airbus is shown. Preliminary measurements made during level flight demonstrate the validity of the method.
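    The basic relation behind such a continuous on-board wind determination is the vector difference between the aircraft's inertial (ground-referenced) velocity and its airspeed vector, both resolved in the same axes. The sketch below shows only this kinematic core; sensor fusion, axis transformations and filtering are omitted.

```python
def wind_vector(v_inertial, v_air):
    """Wind = inertial velocity - airspeed vector (same axes assumed).

    v_inertial: ground-referenced velocity components (e.g. from INS)
    v_air:      airspeed vector components (e.g. from air data sensors)
    Simplified: real on-board schemes must align axes and filter noise.
    """
    return tuple(g - a for g, a in zip(v_inertial, v_air))
```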

  1. Adaptive PSF fitting - a highly performing photometric method and light curves of the GLS H1413+117: time delays and micro-lensing effects

    NASA Astrophysics Data System (ADS)

    Akhunov, T. A.; Wertz, O.; Elyiv, A.; Gaisin, R.; Artamonov, B. P.; Dudinov, V. N.; Nuritdinov, S. N.; Delvaux, C.; Sergeyev, A. V.; Gusev, A. S.; Bruevich, V. V.; Burkhonov, O.; Zheleznyak, A. P.; Ezhkova, O.; Surdej, J.

    2017-03-01

    We present new photometric observations of H1413+117 acquired during the seasons between 2001 and 2008 in order to estimate the time delays between the lensed quasar images and to best characterize the on-going micro-lensing events. We propose a highly performing photometric method called adaptive point spread function fitting and have successfully tested this method on a large number of simulated frames. This has enabled us to estimate the photometric error bars affecting our observational results. We analysed the V- and R-band light curves and V-R colour variations of the A-D components, which show short- and long-term brightness variations correlated with colour variations. Using the χ2 and dispersion methods, we estimated the time delays on the basis of the R-band light curves over the seasons between 2003 and 2006. We have derived the new values ΔtAB = -17.4 ± 2.1, ΔtAC = -18.9 ± 2.8 and ΔtAD = 28.8 ± 0.7 d using the χ2 method (B and C are leading, D is trailing) with 1σ confidence intervals. We also used the available observational constraints (namely the lensed image positions, the mid-IR flux ratios and the two sets of time delays derived in the present work) to update the lens redshift estimation. We obtained z_l = 1.95^{+0.06}_{-0.10}, which is in good agreement with previous estimations. We propose to characterize two kinds of micro-lensing events: micro-lensing for the A, B, C components corresponds to typical variations of ∼10^-4 mag d^-1 during all the seasons, while the D component shows an unusually strong micro-lensing effect with variations of up to ∼10^-3 mag d^-1 during 2004 and 2005.
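    The χ2 time-delay method named in the abstract can be reduced to a bare-bones sketch: shift one light curve over a grid of trial delays, interpolate it onto the other curve's epochs, and keep the delay that minimizes χ2. Real analyses also fit magnitude offsets per season, handle irregular sampling windows and micro-lensing; only a single global offset is fitted here.

```python
import numpy as np

def chi2_delay(t, a, b, sigma, delays):
    """Grid-search chi-square time-delay estimate between light curves.

    t:      observation epochs (shared, for simplicity)
    a, b:   magnitudes of the two lensed images at those epochs
    sigma:  photometric uncertainty (scalar or array)
    delays: trial delays to scan
    Bare-bones version of the chi^2 method; no per-season offsets or
    micro-lensing correction, which real analyses require.
    """
    chi2 = []
    for d in delays:
        b_shift = np.interp(t, t + d, b)       # b delayed by d
        offset = np.mean(a - b_shift)          # free magnitude offset
        chi2.append(np.sum(((a - b_shift - offset) / sigma) ** 2))
    return delays[int(np.argmin(chi2))]
```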

  2. Modular on-board adaptive imaging

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Williams, D. S.

    1978-01-01

    Feature extraction involves the transformation of a raw video image into a more compact representation of the scene in which relevant information about objects of interest is retained. The task of the low-level processor is to extract object outlines and pass the data to the high-level process in a format that facilitates pattern recognition tasks. Due to the immense computational load of processing a 256x256 image, even a fast minicomputer requires a few seconds to complete this low-level processing. It is, therefore, necessary to consider hardware implementation of these low-level functions to achieve real-time processing speeds. The objective of this project was to implement a system in which the continuous feature extraction process is not affected by dynamic changes in the scene, varying lighting conditions, or object motion relative to the cameras. Due to the high bandwidth (3.5 MHz) and serial nature of the TV data, a pipeline processing scheme was adopted as the overall architecture of this system. Modularity in the system is achieved by designing circuits that are generic within the overall system.

  3. On-Board Chemical Propulsion Technology

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    On-board propulsion functions include orbit insertion, orbit maintenance, constellation maintenance, precision positioning, in-space maneuvering, de-orbiting, vehicle reaction control, planetary retro, and planetary descent/ascent. This paper discusses on-board chemical propulsion technology, including bipropellants, monopropellants, and micropropulsion. Bipropellant propulsion has focused on maximizing the performance of Earth-storable propellants by using high-temperature, oxidation-resistant chamber materials. The performance of bipropellant systems can be increased further by operating at elevated chamber pressures and/or using higher energy oxidizers. Both options present system-level difficulties for spacecraft, however. Monopropellant research has focused on mixtures composed of an aqueous solution of hydroxyl ammonium nitrate (HAN) and a fuel component. HAN-based monopropellants, unlike hydrazine, do not present a vapor hazard and do not require extraordinary procedures for storage, handling, and disposal. HAN-based monopropellants generically have higher densities and lower freezing points than state-of-the-art hydrazine and can provide higher performance, depending on the formulation. High-performance HAN-based monopropellants, however, have aggressive, high-temperature combustion environments and require advances in catalyst materials or suitable non-catalytic ignition options. The objective of the micropropulsion technology area is to develop low-cost, high-utility propulsion systems for the range of miniature spacecraft and precision propulsion applications.

  4. The TWINS Instrument On Board the Mars InSight Mission

    NASA Astrophysics Data System (ADS)

    Velasco, Tirso; Rodríguez-Manfredi, Jose A.

    2015-04-01

    The aim of this paper is to present the TWINS (Temperature and Wind sensors for INSight) instrument developed for the JPL Mars InSight mission, to be launched in 2016. TWINS will provide high-performance wind and air temperature measurements for the mission platform. TWINS is based on heritage from REMS (Rover Environmental Monitoring Station) on board the Curiosity rover, which has been operating successfully on the Martian surface since August 2012. The REMS spare boom hardware, comprising the wind and temperature sensors, has been refurbished into the TWINS booms, with enhanced performance in terms of dynamic range and resolution. The instrument's short development time and low cost demonstrate the capability of the REMS design and the technologies developed for Curiosity to be adapted to a new mission and new scientific requirements with increased performance. It is also an example of international cooperation in planetary missions, carried out in the frame of the science instruments of the Curiosity and InSight missions.

  5. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    SciTech Connect

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to achieve unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to configure their computing tasks over networks manually, in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  6. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials as well as the development of new polymers will provide technology to help meet NASA future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  7. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  8. On-board sample cleaver.

    PubMed

    Månsson, Martin; Claesson, Thomas; Karlsson, Ulf O; Tjernberg, Oscar; Pailhés, Stéphane; Chang, Johan; Mesot, Joël; Shi, Ming; Patthey, Luc; Momono, Naoki; Oda, Migaku; Ido, Masayuki

    2007-07-01

    An on-board sample cleaver has been developed in order to cleave small and hard-to-cleave samples. To acquire good cleaves from rigid samples, the alignment of the cleaving blade with respect to the internal crystallographic planes is crucial. Being able to mount the sample and align it to the blade ex situ has many advantages. The design presented has allowed us to cleave very tiny and rigid samples, e.g., the high-temperature superconductor La(2-x)Sr(x)CuO(4). Further, in this design the sample and the cleaver will have the same temperature, allowing us to cleave and keep the sample at low temperature. This is a big advantage over prior cleaver systems. As a result, better surfaces and alignments can be realized, which considerably simplifies and improves the experiments.

  9. On-Board Mining in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.; Conover, H.; Graves, S.; Ramachandran, R.; Rushing, J.

    2004-12-01

    On-board data mining can contribute to many research and engineering applications, including natural hazard detection and prediction, intelligent sensor control, and the generation of customized data products for direct distribution to users. The ability to mine sensor data in real time can also be a critical component of autonomous operations, supporting deep space missions, unmanned aerial and ground-based vehicles (UAVs, UGVs), and a wide range of sensor meshes, webs and grids. On-board processing is expected to play a significant role in the next generation of NASA, Homeland Security, Department of Defense and civilian programs, providing for greater flexibility and versatility in measurements of physical systems. In addition, the use of UAV and UGV systems is increasing in military, emergency response and industrial applications. As research into the autonomy of these vehicles progresses, especially in fleet or web configurations, the applicability of on-board data mining is expected to increase significantly. Data mining in real time on board sensor platforms presents unique challenges. Most notably, the data to be mined is a continuous stream, rather than a fixed store such as a database. This means that the data mining algorithms must be modified to make only a single pass through the data. In addition, the on-board environment requires real time processing with limited computing resources, thus the algorithms must use fixed and relatively small amounts of processing time and memory. The University of Alabama in Huntsville is developing an innovative processing framework for the on-board data and information environment. The Environment for On-Board Processing (EVE) and the Adaptive On-board Data Processing (AODP) projects serve as proofs-of-concept of advanced information systems for remote sensing platforms. The EVE real-time processing infrastructure will upload, schedule and control the execution of processing plans on board remote sensors. 
These plans
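
As a concrete illustration of the single-pass, fixed-memory constraint described above, the sketch below shows Welford's classic running-statistics update, a standard streaming building block; it is a generic example, not code from the EVE or AODP frameworks:

```python
class RunningStats:
    """Welford's single-pass algorithm: updates mean and variance from a
    stream of samples using O(1) memory, as required when the data stream
    cannot be stored or revisited on board."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # Sample variance; defined once at least two samples have arrived
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for sample in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(sample)
print(stats.mean, stats.variance())  # mean 5.0, sample variance ≈ 4.57
```

More elaborate single-pass miners (clustering, change detection) follow the same pattern: a small fixed state updated once per incoming sample.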

  10. HyspIRI On-Board Science Data Processing

    NASA Technical Reports Server (NTRS)

    Flatley, Tom

    2010-01-01

    Topics include on-board science data processing, on-board image processing, software upset mitigation, on-board data reduction, on-board "VSWIR" products, the HyspIRI demonstration testbed, and processor comparison.

  11. On-board satellite radionavigation systems

    NASA Astrophysics Data System (ADS)

    Kudriavtsev, Igor V.; Mishchenko, Igor N.; Volynkin, Anatolii I.; Shebshaevich, V. S.; Dubinko, Iu. S.

    Recent developments in the radionavigation equipment of ships are reviewed with particular reference to on-board satellite radionavigation systems. The Navstar navigation network is briefly characterized, and the general principles underlying the design of on-board navigation systems are reviewed. Particular attention is given to the software of on-board satellite navigation systems and their noise immunity characteristics. The accuracy of a navigation session is estimated, and some aspects of navigation equipment testing are discussed.

  12. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  13. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  14. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  15. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy.

  16. On-board Data Mining

    NASA Astrophysics Data System (ADS)

    Tanner, Steve; Stein, Cara; Graves, Sara J.

    Networks of remote sensors are becoming more common as technology improves and costs decline. In the past, a remote sensor was usually a device that collected data to be retrieved at a later time by some other mechanism. These collected data were usually processed well after the fact at a computer far removed from the in situ sensing location. This has begun to change as sensor technology, on-board processing, and network communication capabilities have increased and their prices have dropped. There has been an explosion in the number of sensors and sensing devices, not just around the world, but literally throughout the solar system. These sensors are not only becoming vastly more sophisticated, accurate, and detailed in the data they gather, but they are also becoming cheaper, lighter, and smaller. At the same time, engineers have developed improved methods to embed computing systems, memory, storage, and communication capabilities into the platforms that host these sensors. Now, it is not unusual to see large networks of sensors working in cooperation with one another. Nor does it seem strange to see the autonomous operation of sensor-based systems, from space-based satellites to smart vacuum cleaners that keep our homes clean and robotic toys that help to entertain and educate our children. But access to sensor data and computing power is only part of the story. For all the power of these systems, there are still substantial limits to what they can accomplish. These include the well-known limits to current Artificial Intelligence capabilities and our limited ability to program the abstract concepts, goals, and improvisation needed for fully autonomous systems. But they also include much more basic engineering problems such as a lack of adequate power, communications bandwidth, and memory, as well as problems with the geolocation and real-time georeferencing required to integrate data from multiple sensors to be used together.

  17. Survey and future directions of fault-tolerant distributed computing on board spacecraft

    NASA Astrophysics Data System (ADS)

    Fayyaz, Muhammad; Vladimirova, Tanya

    2016-12-01

    Current and future space missions demand highly reliable on-board computing systems that are capable of carrying out high-performance data processing. At present, no single computing scheme satisfies both the highly reliable operation requirement and the high-performance computing requirement. The aim of this paper is to review existing systems and offer a new approach to addressing the problem. In the first part of the paper, a detailed survey of fault-tolerant distributed computing systems for space applications is presented. Fault types and assessment criteria for fault-tolerant systems are introduced. Redundancy schemes for distributed systems are analyzed. A review of the state of the art on fault-tolerant distributed systems is presented, and limitations of current approaches are discussed. In the second part of the paper, a new fault-tolerant distributed computing platform with wireless links among the computing nodes is proposed. Novel algorithms enabling important aspects of the architecture, such as time-slot-priority adaptive fault-tolerant channel access and fault-tolerant distributed computing using task migration, are introduced.

  18. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-12-31

    DOE has launched a program to make a step change in power plant technology by advancing to 1500 F steam, since the highest performance gains can be achieved in a 1500 F steam system using a topping turbine ahead of a back-pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested. It has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

  19. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  20. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance FORTRAN is a set of extensions to FORTRAN 90 designed to allow specification of data-parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared- and distributed-memory machines.

  1. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  2. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft2 F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R-5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.

  3. Intelligent On-Board Processing in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.

    2005-12-01

    Most existing sensing systems are designed as passive, independent observers. They are rarely aware of the phenomena they observe, and are even less likely to be aware of what other sensors are observing within the same environment. Increasingly, intelligent processing of sensor data is taking place in real-time, using computing resources on-board the sensor or the platform itself. One can imagine a sensor network consisting of intelligent and autonomous space-borne, airborne, and ground-based sensors. These sensors will act independently of one another, yet each will be capable of both publishing and receiving sensor information, observations, and alerts among other sensors in the network. Furthermore, these sensors will be capable of acting upon this information, perhaps altering acquisition properties of their instruments, changing the location of their platform, or updating processing strategies for their own observations to provide responsive information or additional alerts. Such autonomous and intelligent sensor networking capabilities provide significant benefits for collections of heterogeneous sensors within any environment. They are crucial for multi-sensor observations and surveillance, where real-time communication with external components and users may be inhibited, and the environment may be hostile. In all environments, mission automation and communication capabilities among disparate sensors will enable quicker response to interesting, rare, or unexpected events. Additionally, an intelligent network of heterogeneous sensors provides the advantage that all of the sensors can benefit from the unique capabilities of each sensor in the network. The University of Alabama in Huntsville (UAH) is developing a unique approach to data processing, integration and mining through the use of the Adaptive On-Board Data Processing (AODP) framework. 
AODP is a key foundation technology for autonomous internetworking capabilities to support situational awareness by

  4. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the gathered data to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board UAS as a method to ease operations by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and the consumed watts.

  5. Concepts for on-board satellite image registration, volume 1

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.

    1980-01-01

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems, such as the Global Positioning System (GPS), in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed. Emphasis is placed on assessing the accuracy to which a given image picture element can be located and identified, determining the algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on board the satellite.

  6. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system were developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will enable many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  7. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It is shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
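
For reference, the PCA transformation whose parallelization the report describes can be sketched serially in a few lines. The synthetic data, band count, and eigendecomposition route below are illustrative assumptions, not the report's Beowulf implementation:

```python
import numpy as np

def pca_reduce(pixels, n_components):
    """Serial PCA dimension reduction for hyperspectral pixels.
    pixels: (n_pixels, n_bands) array; returns the (n_pixels, n_components)
    projection onto the leading principal components."""
    centered = pixels - pixels.mean(axis=0)
    # Band-by-band covariance matrix (n_bands x n_bands)
    cov = centered.T @ centered / (pixels.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    leading = eigvecs[:, ::-1][:, :n_components]  # top components first
    return centered @ leading

# Synthetic "hyperspectral" data: 1000 pixels x 64 bands with a rank-3 signal
rng = np.random.default_rng(1)
basis = rng.normal(size=(3, 64))
data = rng.normal(size=(1000, 3)) @ basis + 0.01 * rng.normal(size=(1000, 64))
reduced = pca_reduce(data, 3)
print(reduced.shape)  # (1000, 3)
```

The parallel versions discussed in the report distribute the covariance accumulation and the projection across cluster nodes, which is where interconnect bandwidth becomes the bottleneck.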

  8. On-board image compression for the RAE lunar mission

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
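
The run-length component of such a scheme can be sketched as follows; this is a generic binary run-length coder for one scan line, not the RAE-2 flight implementation (which combined it with scan line skipping and adaptive coding):

```python
def run_length_encode(line):
    """Run-length encode one binary scan line as (value, run) pairs.
    Long uniform runs -- such as antenna booms imaged against dark sky --
    compress to a handful of pairs."""
    runs = []
    current, count = line[0], 1
    for pixel in line[1:]:
        if pixel == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = pixel, 1
    runs.append((current, count))
    return runs

def run_length_decode(runs):
    """Inverse transform: expand (value, run) pairs back to the scan line."""
    return [value for value, count in runs for _ in range(count)]

scan_line = [0] * 120 + [1] * 8 + [0] * 128  # mostly-empty 256-pixel line
encoded = run_length_encode(scan_line)
print(encoded)  # [(0, 120), (1, 8), (0, 128)]
```

Here 256 pixels collapse to three pairs; the achievable ratio depends on image content, which is why the flight system paired run-length coding with line skipping to reach its fixed 32:1 target.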

  9. Spacecraft on-board SAR processing technology

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.; Arens, W. E.

    1987-01-01

    This paper provides an assessment of on-board SAR processing technology for Eos-type missions. The proposed Eos SAR sensor and flight data system are introduced, and the SAR processing requirements are described. The on-board SAR processor architecture selection is discussed, and a baseline architecture, using a frequency-domain processor for range correlation and a modular fault-tolerant VLSI time-domain parallel array for azimuth correlation, is described. The mass storage and VLSI technologies needed to implement the proposed SAR processing are assessed. It is shown that acceptable processor power and mass characteristics should be feasible for Eos-type applications. A proposed development strategy for the on-board SAR processor is presented.

  10. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  11. Laboratory measurements of on-board subsystems

    NASA Technical Reports Server (NTRS)

    Nuspl, P. P.; Dong, G.; Seran, H. C.

    1991-01-01

    Good progress was achieved on the test bed for on-board subsystems for future satellites. The test bed is for subsystems developed previously. Four test setups were configured in the INTELSAT technical labs: (1) TDMA on-board modem; (2) multicarrier demultiplexer demodulator; (3) IBS/IDR baseband processor; and (4) baseband switch matrix. The first three series of tests are completed and the tests on the BSM are in progress. Descriptions of test setups and major test results are included; the format of the presentation is outlined.

  12. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    SciTech Connect

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigations into parallel object-oriented (OO) numerics. The basic goal was to research and utilize the emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco, and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and guidance on OO technologies was Sandia's main contribution to this venture. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. A minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  13. Optimization of Planck-LFI on-board data handling

    NASA Astrophysics Data System (ADS)

    Maris, M.; Tomasi, M.; Galeotta, S.; Miccolis, M.; Hildebrandt, S.; Frailis, M.; Rohlfs, R.; Morisset, N.; Zacchei, A.; Bersanelli, M.; Binko, P.; Burigana, C.; Butler, R. C.; Cuttaia, F.; Chulani, H.; D'Arcangelo, O.; Fogliani, S.; Franceschi, E.; Gasparo, F.; Gomez, F.; Gregorio, A.; Herreros, J. M.; Leonardi, R.; Leutenegger, P.; Maggio, G.; Maino, D.; Malaspina, M.; Mandolesi, N.; Manzato, P.; Meharga, M.; Meinhold, P.; Mennella, A.; Pasian, F.; Perrotta, F.; Rebolo, R.; Türler, M.; Zonca, A.

    2009-12-01

    To assess stability against 1/f noise, the Low Frequency Instrument (LFI) on board the Planck mission will acquire data at a rate much higher than the data rate allowed by the science telemetry bandwidth of 35.5 kbps. The data are processed by an on-board pipeline, followed on-ground by a decoding and reconstruction step, to reduce the volume of data to a level compatible with the bandwidth while minimizing the loss of information. This paper illustrates the on-board processing of the scientific data used by Planck/LFI to fit the allowed data rate, an intrinsically lossy process which distorts the signal in a manner which depends on a set of five free parameters (Naver, r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level of distortion introduced by the on-board processing as a function of these parameters. It describes the method of tuning the on-board processing chain to cope with the limited bandwidth while keeping the signal distortion to a minimum. Tuning is sensitive to the statistics of the signal and has to be constantly adapted during flight. The tuning procedure is based on an optimization algorithm applied to unprocessed and uncompressed raw data provided either by simulations, pre-launch tests, or data taken in flight from LFI operating in a special diagnostic acquisition mode. All the needed optimization steps are performed by an automated tool, OCA2, which simulates the on-board processing, explores the space of possible combinations of parameters, and produces a set of statistical indicators, among them the compression rate Cr and the processing noise epsilon_Q. For Planck/LFI it is required that Cr = 2.4 while, as for other systematics, epsilon_Q has to be less than 10% of the rms of the instrumental white noise. An analytical model is developed that is able to extract most of the relevant information on the processing errors and the compression rate as a function of the signal statistics and the processing parameters.
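
    The averaging-plus-quantization core of such an on-board pipeline can be sketched as follows (a simplified, hypothetical illustration using only the Naver and q parameters; the actual LFI chain also applies the mixing parameters r1, r2 and the offset O, plus lossless compression):

```python
import numpy as np

def onboard_process(samples, n_aver, q):
    """Average n_aver consecutive samples, then uniformly quantize with
    step q. Returns the quantized integers (downlinked), the averaged
    stream, and the on-ground reconstruction."""
    trimmed = samples[: len(samples) // n_aver * n_aver]
    averaged = trimmed.reshape(-1, n_aver).mean(axis=1)
    quantized = np.round(averaged / q)      # integers sent to ground
    reconstructed = quantized * q           # on-ground decoding
    return quantized, averaged, reconstructed

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 10_000)       # white noise, unit rms
q = 0.1
_, averaged, recon = onboard_process(signal, n_aver=4, q=q)
eps_q = np.std(recon - averaged)            # processing (quantization) noise
print(eps_q < q)  # uniform quantization error has rms ~ q/sqrt(12), about 0.029 here
```

    The tuning trade-off is visible even in this sketch: a larger q shrinks the downlinked integers (better compression rate) but grows eps_q, so the optimizer searches for the smallest distortion that still meets the required data rate.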

  14. On-Board Propulsion System Analysis of High Density Propellants

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    1998-01-01

    The impact of the performance and density of on-board propellants on the science payload mass of Discovery Program class missions is evaluated. A propulsion system dry mass model, anchored on flight-weight system data from the Near Earth Asteroid Rendezvous mission, is used. This model is used to evaluate the performance of liquid oxygen, hydrogen peroxide, hydroxylammonium nitrate, and oxygen difluoride oxidizers with hydrocarbon and metal hydride fuels. Results for the propellants evaluated indicate that state-of-the-art Earth-storable propellants with high performance rhenium engine technology in both the axial and attitude control systems have performance capabilities that can only be exceeded by liquid oxygen/hydrazine, liquid oxygen/diborane, and oxygen difluoride/diborane propellant combinations. Potentially lower ground operations costs are the incentive for working with nontoxic propellant combinations.

  15. Fuel cells going on-board

    NASA Astrophysics Data System (ADS)

    Sattler, Gunter

    Fuel cells provide great potential for use on-board ships. Possible fields of application for fuel cells on merchant ships and naval surface ships can generally be summarised as: (1) emergency power supply; (2) electric energy generation, especially in waters and harbours prescribing particular environmental regulations; (3) small power output for propulsion at special operating modes (e.g., very quiet run); and (4) electric power generation for the ship's network and, if required, the propulsion network on vessels equipped with electric power plants (e.g., naval vessels as all-electric ships, AES). In addition, the fuel cell has special importance for realising air-independent propulsion (AIP) on submarines. In the 1970s, the PEMFC system was chosen for AIP on German Navy submarines. Subsequently, this system underwent advanced development up to series maturity including storage on-board of the energy needed. This publication illustrates worldwide activities in this field, taking the various fuel cell system requirements for operation on-board merchant ships, naval surface ships and submarines into consideration. The focus is especially on AIP systems for German submarines because these have already gone into series production. Further developments are discussed which aim to improve the efficiency of hydrogen storage or to generate hydrogen on-board.

  16. Faculty Members on Boards of Trustees

    ERIC Educational Resources Information Center

    Ehrenberg, Ronald G.; Patterson, Richard W.; Key, Andrew V.

    2013-01-01

    During the 2011-12 academic year, a group of faculty and student researchers at the Cornell Higher Education Research Institute (CHERI) gathered information on which public and private institutions had faculty members on boards of trustees and obtained the names of the faculty members serving in these roles. In April and May 2012, the authors…

  17. High Performance Fortran: An overview

    SciTech Connect

    Zosel, M.E.

    1992-12-23

    The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions to Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  18. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  19. AMIE Camera System on board SMART-1

    NASA Astrophysics Data System (ADS)

    Josset, J. L.; Beauvivre, S.; Amie Team

    The Advanced Moon micro-Imager Experiment (AMIE), on board ESA's SMART-1, the first European mission to the Moon (launched on 27th September 2003), is an imaging system with scientific, technical, and public-outreach-oriented objectives. The science objectives are to image the lunar poles, permanently shadowed areas (ice deposits), eternal-light crater rims, ancient lunar non-mare volcanism, local spectro-photometry and the physical state of the lunar surface, and to map high-latitude regions (south, mainly the far-side South Pole-Aitken basin). The technical objectives are to perform a laser-link experiment (detection of a laser beam emitted by the ESA Tenerife ground station), flight demonstration of new technologies, and on-board autonomous navigation. The public outreach and educational objectives are to promote planetary exploration. We present the AMIE instrument and its performance with respect to the first results.

  20. MODIS On-board Blackbody Performance

    NASA Technical Reports Server (NTRS)

    Xiong, Xiaoxiong; Chen, N.; Wu, A.; Wenny, B.; Dodd, J.

    2008-01-01

    Currently, there are two MODIS instruments operated on-orbit: one on-board the Terra spacecraft launched in December 1999 and the other on-board the Aqua spacecraft launched in May 2002. MODIS is a scanning radiometer that has 16 thermal emissive bands (TEBs) in the MWIR and LWIR regions. The remaining spectral bands are in the VIS/NIR and SWIR regions. The TEBs have a total of 160 detectors (10 detectors per band), which are calibrated on-orbit using an on-board blackbody (BB). MODIS TEB calibration is performed via a quadratic algorithm with its linear calibration coefficients updated on a scan-by-scan basis using each detector's response to the BB. The offset and nonlinear terms of the quadratic calibration equation are stored in a look-up table (LUT). The LUT parameters are derived from pre-launch calibration and updated on-orbit from BB observations, as needed. Typically, the BB is set at a fixed temperature. Periodically, a warm-up and cool-down activity is performed, which enables the BB temperature to be varied from instrument ambient up to 315K. The temperature of the BB is measured each scan using 12 thermistors, which were fully characterized pre-launch with reference to the NIST temperature scale. This paper describes MODIS on-board BB operational activities and performance. The TEB detector response (short-term stability and long-term changes) and noise characterization results derived from BB observations and their impact on the TEB calibration uncertainty are also presented.
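
    The scan-by-scan gain update described above can be illustrated with a minimal sketch (hypothetical coefficient and radiance values; in the real algorithm the offset and nonlinear terms come from the LUT and the BB radiance follows from the 12 thermistor readings):

```python
def linear_gain_from_bb(L_bb, dn_bb, a0, a2):
    """Update the linear coefficient b1 each scan from the detector's
    response dn_bb to the on-board blackbody of known radiance L_bb."""
    return (L_bb - a0 - a2 * dn_bb**2) / dn_bb

def calibrate(dn, a0, b1, a2):
    """Quadratic calibration: convert digital response dn to radiance."""
    return a0 + b1 * dn + a2 * dn**2

# Hypothetical numbers: offset and nonlinear terms from the LUT, gain from the BB
a0, a2 = 0.02, 1.5e-7
b1 = linear_gain_from_bb(L_bb=9.5, dn_bb=3000.0, a0=a0, a2=a2)
print(round(calibrate(3000.0, a0, b1, a2), 6))  # recovers 9.5 at the BB point
```

    Solving for b1 at the blackbody point each scan is what makes the calibration track short-term gain drifts, while the slowly updated LUT carries the offset and nonlinearity.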

  1. On-board computers for control

    NASA Technical Reports Server (NTRS)

    Scull, J. R.

    1980-01-01

    On-board computers for control and sequencing from Apollo to Voyager are described along with future trends and recent design examples. Consideration is given to a high-order language for the Space Shuttle program. Emphasis is placed on the usage of modern LSI and new distributed architectural approaches. The distributed computer of the Galileo spacecraft and the data processing system for the Shuttle Orbiter are outlined.

  2. The Advanced On-board Processor (AOP)

    NASA Technical Reports Server (NTRS)

    Hartenstein, R. G.; Trevathan, C. E.; Stewart, W. N.

    1971-01-01

    The goal of the Advanced On-Board Processor (AOP) development program is to design, build, and flight qualify a highly reliable, moderately priced digital computer for application on a variety of spacecraft. Included in this development program is the preparation of a complete support software package which consists of an assembler, simulator, loader, system diagnostic, operational executive, and many useful subroutines. The AOP hardware/software system is an extension of the On-Board Processor (OBP), which was developed for general purpose use on Earth-orbiting spacecraft, with its initial application being on-board the fourth Orbiting Astronomical Observatory (OAO-C). Although the OBP possesses the significant features that are required for space application, when operating at a 100% duty cycle it is too power-consuming for use on many smaller spacecraft. Computer volume will be minimized by implementing the processor and input/output portions of the machine with large scale integrated circuits. Power consumption will be reduced through the use of plated wire and, in some cases, semiconductor memory elements.

  3. On-Board Training for US Payloads

    NASA Technical Reports Server (NTRS)

    Murphy, Benjamin; Meacham, Steven (Technical Monitor)

    2001-01-01

    The International Space Station (ISS) crew follows a training rotation schedule that puts them in the United States about every three months for a three-month training window. While in the US, the crew receives training on both ISS systems and payloads. Crew time is limited, and system training takes priority over payload training. For most flights, there is sufficient time to train all systems and payloads. As more payloads are flown, training time becomes a more precious resource. Less training time requires payload developers (PDs) to develop alternatives to traditional ground training. To ensure their payloads have sufficient training to achieve their scientific goals, some PDs have developed on-board trainers (OBTs). These OBTs are used to train the crew when no or limited ground time is available. These lessons are also available on-orbit to refresh the crew about their ground training, if it was available. There are many types of OBT media, such as on-board computer based training (OCBT), video/photo lessons, or hardware simulators. The On-Board Training Working Group (OBTWG) and Courseware Development Working Group (CDWG) are responsible for developing the requirements for the different types of media.

  4. A Feminist Framework for Nurses on Boards.

    PubMed

    Sundean, Lisa J; Polifroni, E Carol

    Nurses' knowledge, skills, and expertise uniquely situate them to contribute to health care transformation as equal partners in organizational board governance. The Institute of Medicine, the 10,000 Nurses on Boards Coalition, and a growing number of nurse and health care scholars advocate nurse board leadership; however, nurses are rarely appointed as voting board members. When no room is made for nurses to take a seat at the table, the opportunity is lost to harness the power of nursing knowledge for health care transformation and social justice. No philosophical framework underpins the emerging focus on nurse board leadership. The purpose of this article is to add to the extant nursing literature by suggesting feminism as a philosophical framework for nurses on boards. Feminism contributes to the knowledge base of nursing as it relates to the expanding roles of nurses in health care transformation, policy, and social justice. Furthermore, a feminist philosophical framework for nurses on boards sets the foundation for new theory development and validates ongoing advancement of the nursing profession.

  5. Multilayer high performance insulation materials

    NASA Technical Reports Server (NTRS)

    Stuckey, J. M.

    1971-01-01

    A number of tests are required to evaluate both multilayer high performance insulation samples and the materials that comprise them. Some of the techniques and tests being employed for these evaluations and some of the results obtained from thermal conductivity tests, outgassing studies, effect of pressure on layer density tests, hypervelocity impact tests, and a multilayer high performance insulation ambient storage program at the Kennedy Space Center are presented.

  6. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistance.

  7. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  8. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  9. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  10. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  11. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  12. Flight experiences on board Space Station Mir

    NASA Astrophysics Data System (ADS)

    Viehboeck, Franz

    1992-07-01

    A survey is given of the training in the cosmonaut center 'Yuri Gagarin' near Moscow (U.S.S.R.) and of the preparation for the joint Soviet-Austrian space flight of 2-10 Oct. 1991. The flight in Soyuz-TM 13 with its most important systems is described, as well as the Space Station Mir and life on board the station with its basic systems, such as energy supply, life support, radio, and television. The possibilities of exploiting the Space Station Mir and an outlook to the future are given.

  13. On-Board Rendezvous Targeting for Orion

    NASA Technical Reports Server (NTRS)

    Weeks, Michael W.; DSouza, Christopher N.

    2010-01-01

    The Orion on-board GNC system is among the most complex ever developed for a space mission. It is designed to operate autonomously, independent of the ground. The rendezvous system in particular was designed to operate on the far side of the Moon and in the case of loss of communications with the ground. The vehicle GNC system is designed to retarget the rendezvous maneuvers, given a mission plan. As such, all the maneuvers which will be performed by Orion have been designed and are being incorporated into the flight code.

  14. On-board processing for telecommunications satellites

    NASA Technical Reports Server (NTRS)

    Nuspl, P. P.; Dong, G.

    1991-01-01

    In this decade, communications satellite systems will probably face dramatic challenges from alternative transmission means. To balance and overcome such competition, and to prepare for new requirements, INTELSAT has developed several on-board processing techniques, including Satellite-Switched TDMA (SS-TDMA), Satellite-Switched FDMA (SS-FDMA), several modulators/demodulators (modems), a Multicarrier Demultiplexer and Demodulator (MCDD), an International Business Service (IBS)/Intermediate Data Rate (IDR) BaseBand Processor (BBP), etc. Some proof-of-concept hardware and software were developed and tested recently in the INTELSAT Technical Laboratories. These techniques and some test results are discussed.

  15. Autonomous & Adaptive Oceanographic Feature Tracking on Board Autonomous Underwater Vehicles

    DTIC Science & Technology

    2015-02-01

    MIT/WHOI Joint Program in Oceanography/Applied Ocean Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, and Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543. Supported by the Naval Undersea Warfare Center and the Woods Hole Oceanographic Institution Academic Programs Office. Reproduction in whole or in part is permitted for any

  16. On-Board Entry Trajectory Planning Expanded to Sub-orbital Flight

    NASA Technical Reports Server (NTRS)

    Lu, Ping; Shen, Zuojun

    2003-01-01

    A methodology for on-board planning of sub-orbital entry trajectories is developed. The algorithm is able to generate in a time frame consistent with on-board environment a three-degree-of-freedom (3DOF) feasible entry trajectory, given the boundary conditions and vehicle modeling. This trajectory is then tracked by feedback guidance laws which issue guidance commands. The current trajectory planning algorithm complements the recently developed method for on-board 3DOF entry trajectory generation for orbital missions, and provides full-envelope autonomous adaptive entry guidance capability. The algorithm is validated and verified by extensive high fidelity simulations using a sub-orbital reusable launch vehicle model and difficult mission scenarios including failures and aborts.

  17. High-Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Reuhs, Bradley L.; Rounds, Mary Ann

    High-performance liquid chromatography (HPLC) developed during the 1960s as a direct offshoot of classic column liquid chromatography through improvements in the technology of columns and instrumental components (pumps, injection valves, and detectors). Originally, HPLC was the acronym for high-pressure liquid chromatography, reflecting the high operating pressures generated by early columns. By the late 1970s, however, high-performance liquid chromatography had become the preferred term, emphasizing the effective separations achieved. In fact, newer columns and packing materials offer high performance at moderate pressure (although still high pressure relative to gravity-flow liquid chromatography). HPLC can be applied to the analysis of any compound with solubility in a liquid that can be used as the mobile phase. Although most frequently employed as an analytical technique, HPLC also may be used in the preparative mode.

  18. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  19. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  20. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt-meters at an operating temperature from 0 to 100 C.

  1. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field-emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  2. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench top hardware as well as a commercial microsurgery system.

  3. High performance dielectric materials development

    NASA Astrophysics Data System (ADS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-09-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  4. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount (10-15%) of hydrocarbon binder, (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  5. High-performance sports medicine.

    PubMed

    Speed, Cathy

    2013-02-01

    High performance sports medicine involves the medical care of athletes, who are extraordinary individuals and who are exposed to intensive physical and psychological stresses during training and competition. The physician has a broad remit and acts as a 'medical guardian' to optimise health while minimising risks. This review describes this interesting field of medicine, its unique challenges and priorities for the physician in delivering best healthcare.

  6. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  7. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capabilities and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we present this report describing our findings, along with an associated spreadsheet outlining the current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms on which these tools and technologies are currently or potentially available for use in software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included at the end of this paper is a table of the discussed development tools and their operational environments.

  8. Reference Architecture for High Dependability On-Board Computers

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Esper, Alexandre; Zandin, Johan; Barbosa, Ricardo; Monteleone, Claudio

    2014-08-01

    The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers, with a focus on well-defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting and assessing the dependability of on-board computer hardware and software throughout their life cycle. The study also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete lifecycle, including an assessment of the feasibility aspects of the dependability assurance process and of how the use of a computer-aided environment can contribute to on-board computer dependability assurance.
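
    The reliability, availability, and maintainability (RAM) requirements mentioned above are typically quantified with simple steady-state models. A minimal sketch, with illustrative MTBF/MTTR figures that are assumptions and not values from the study:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(units):
    """A set of units all needed for service: availabilities multiply."""
    a = 1.0
    for mtbf, mttr in units:
        a *= availability(mtbf, mttr)
    return a

# Two interconnected units, e.g. a computer unit and an I/O unit (assumed numbers):
a = series_availability([(50_000, 10), (100_000, 5)])
print(round(a, 6))
```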

  9. High performance pyroelectric infrared detector

    NASA Astrophysics Data System (ADS)

    Hu, Xu; Luo, Haosu; Ji, Yulong; Yang, Chunli

    2015-10-01

    Single-element infrared detectors made with the relaxor ferroelectric crystal PMNT show excellent performance. This paper covers the detector capacitance, the frequency-response characteristics, and the detectivity characteristics. The measurement results show that the detectivity of a detector made with the relaxor ferroelectric crystal (PMNT) is more than three times that of a detector made with LT, with D* exceeding 1×10^9 cm·Hz^0.5·W^-1. The detector can be applied in NDIR spectrographs, FFT spectrographs, and similar instruments. The high performance pyroelectric infrared detector developed here will broaden the application areas of infrared detectors.
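
    The figure of merit quoted above is the standard specific detectivity, D* = sqrt(A·Δf)/NEP (in cm·Hz^0.5/W, with area A in cm²). The detector area, bandwidth, and noise-equivalent power below are assumed values chosen only to reproduce the quoted order of magnitude, not measured parameters from the paper.

```python
import math

def d_star(area_cm2, bandwidth_hz, nep_w):
    """Specific detectivity D* = sqrt(A * bandwidth) / NEP, cm·Hz^0.5/W."""
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_w

# Assumed 2 mm x 2 mm element, 1 Hz bandwidth, NEP of 2e-10 W:
print(f"{d_star(0.04, 1.0, 2e-10):.2e}")  # → ~1e9 cm·Hz^0.5/W
```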

  10. Looking at science on board Eureca

    NASA Astrophysics Data System (ADS)

    Minster, Olivier; Innocenti, Louisa; Mesland, Dick; Heerd, Stefan

    1993-05-01

    The European Retrievable Carrier (Eureca) is the first platform designed to be retrieved from orbit and possibly to be relaunched with a different payload. For its first mission in July 1992, Eureca was deployed from the Space Shuttle Atlantis (flight STS 46) by ESA astronaut and mission specialist, Claude Nicollier. It then used its own propulsion system to transfer to its operational orbit at 508 km where, after commissioning, payload operation started. Since Eureca has been specially designed for investigations in the microgravity environment, this publication concentrates mainly on this aspect of the mission. Microgravity research is a newcomer to the scientific disciplines and is perhaps not very well known to, or understood by, those not directly involved. An explanation of the microgravity environment is presented. This is followed by an explanation of the interest microgravity holds for researchers, and a review of the different scientific fields, Material Sciences, Fluid Physics, Life Sciences, for which investigations have been carried out during the Eureca mission, together with a brief description of the on-board facilities and their flight performances. The last two chapters describe the Space Science and technology experiments carried out during the mission.

  11. Vibration on board and health effects.

    PubMed

    Jensen, Anker; Jepsen, Jørgen Riis

    2014-01-01

    There is only limited knowledge of the exposure of ships' crews to vibrations and of their risk of vibration-induced health effects. Exposure to hand-arm vibrations from the use of vibrating tools at sea does not differ from that in the land-based trades. However, in contrast to most other workplaces, seafarers are also exposed to vibrations to the feet when standing on vibrating surfaces on board. Anecdotal reports have related the development of "white feet" to local exposure to vibration, e.g. in mining, but this connection has not been investigated in the maritime setting. As known from studies of the health consequences of whole-body vibrations in land transportation, such exposure at sea may affect ships' passengers and crews. While the relation of back disorders to high levels of whole-body vibration has been demonstrated among e.g. tractor drivers, there is no reported epidemiological evidence for such a relation among seafarers, except for fishermen, who, however, are also exposed to additional recognised physical risk factors at work. The assessment and reduction of vibrations by naval architects relates to the technical implications of this impact for the ships' construction, but has limited value for the estimation of health risks because it expresses the vibration intensity differently than is done in a medical context.

  12. High-performance permanent magnets.

    PubMed

    Goll, D; Kronmüller, H

    2000-10-01

    High-performance permanent magnets (pms) are based on compounds with outstanding intrinsic magnetic properties as well as on optimized microstructures and alloy compositions. The most powerful pm materials at present are RE-TM intermetallic alloys which derive their exceptional magnetic properties from the favourable combination of rare earth metals (RE = Nd, Pr, Sm) with transition metals (TM = Fe, Co), in particular magnets based on (Nd,Pr)2Fe14B and Sm2(Co,Cu,Fe,Zr)17. Their development during the last 20 years has involved a dramatic improvement in their performance by a factor of >15 compared with conventional ferrite pms, thereby contributing positively to the ever-increasing demand for pms in many (including new) application fields, to the extent that RE-TM pms now account for nearly half of the worldwide market. This review article first gives a brief introduction to the basics of ferromagnetism to confer an insight into the variety of (permanent) magnets, their manufacture and application fields. We then examine the rather complex relationship between the microstructure and the magnetic properties for the two highest-performance and most promising pm materials mentioned. By using numerical micromagnetic simulations on the basis of the Finite Element technique the correlation can be quantitatively predicted, thus providing a powerful tool for the further development of optimized high-performance pms.
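
    The ferrite vs. RE-TM gap can be illustrated with the theoretical maximum energy product of an ideal square-loop magnet, (BH)max = Js²/(4·μ0). The saturation polarisations below are textbook values; note the ">15" factor in the abstract refers to realised magnets, not this ideal limit.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T·m/A

def bh_max_kj_per_m3(js_tesla):
    """Theoretical (BH)max = Js^2 / (4*mu0), returned in kJ/m^3."""
    return js_tesla**2 / (4 * MU0) / 1e3

print(round(bh_max_kj_per_m3(1.61)))  # Nd2Fe14B, Js ≈ 1.61 T → ~516 kJ/m³
print(round(bh_max_kj_per_m3(0.48)))  # hard ferrite, Js ≈ 0.48 T → ~46 kJ/m³
```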

  13. High-performance permanent magnets

    NASA Astrophysics Data System (ADS)

    Goll, D.; Kronmüller, H.

    High-performance permanent magnets (pms) are based on compounds with outstanding intrinsic magnetic properties as well as on optimized microstructures and alloy compositions. The most powerful pm materials at present are RE-TM intermetallic alloys which derive their exceptional magnetic properties from the favourable combination of rare earth metals (RE=Nd, Pr, Sm) with transition metals (TM=Fe, Co), in particular magnets based on (Nd,Pr)2Fe14B and Sm2(Co,Cu,Fe,Zr)17. Their development during the last 20 years has involved a dramatic improvement in their performance by a factor of >15 compared with conventional ferrite pms, thereby contributing positively to the ever-increasing demand for pms in many (including new) application fields, to the extent that RE-TM pms now account for nearly half of the worldwide market. This review article first gives a brief introduction to the basics of ferromagnetism to confer an insight into the variety of (permanent) magnets, their manufacture and application fields. We then examine the rather complex relationship between the microstructure and the magnetic properties for the two highest-performance and most promising pm materials mentioned. By using numerical micromagnetic simulations on the basis of the Finite Element technique the correlation can be quantitatively predicted, thus providing a powerful tool for the further development of optimized high-performance pms.

  14. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  15. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  16. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  17. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  18. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  19. Toward High-Performance Organizations.

    ERIC Educational Resources Information Center

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  20. High Performance Perovskite Solar Cells.

    PubMed

    Tong, Xin; Lin, Feng; Wu, Jiang; Wang, Zhiming M

    2016-05-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long-term stable all-solid-state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost-effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole-transporting materials (HTMs) and electron-transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.
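
    The power conversion efficiency quoted above is PCE = Voc·Jsc·FF/Pin, with Pin = 100 mW/cm² under standard AM1.5G illumination. The cell parameters below are illustrative assumptions of the right magnitude for a ~20%-class perovskite device, not values from the review.

```python
def pce_percent(voc_v, jsc_ma_cm2, ff, p_in_mw_cm2=100.0):
    """Power conversion efficiency in %, from Voc (V), Jsc (mA/cm²), fill factor."""
    return voc_v * jsc_ma_cm2 * ff / p_in_mw_cm2 * 100.0

# Assumed device: Voc = 1.10 V, Jsc = 23.0 mA/cm², FF = 0.79:
print(round(pce_percent(1.10, 23.0, 0.79), 1))  # → 20.0
```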

  1. High Performance Perovskite Solar Cells

    PubMed Central

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction. PMID:27774402

  2. Toward high performance graphene fibers.

    PubMed

    Chen, Li; He, Yuling; Chai, Songgang; Qiang, Hong; Chen, Feng; Fu, Qiang

    2013-07-07

    Two-dimensional graphene and graphene-based materials have attracted tremendous interest, hence much attention has been drawn to exploring and applying their exceptional characteristics and properties. Integration of graphene sheets into macroscopic fibers is a very important way for their application and has received increasing interest. In this study, neat and macroscopic graphene fibers were continuously spun from graphene oxide (GO) suspensions followed by chemical reduction. By varying the wet-spinning conditions, a series of graphene fibers was prepared, and the structural features and the mechanical and electrical performance of the fibers were investigated. We found that the orientation of the graphene sheets, the interaction between inter-fiber graphene sheets, and the defects in the fibers have a pronounced effect on the properties of the fibers. Graphene fibers with excellent mechanical and electrical properties will yield great advances in high-tech applications. These findings provide guidance for the future production of high performance graphene fibers.

  3. High Performance Flexible Thermal Link

    NASA Astrophysics Data System (ADS)

    Sauer, Arne; Preller, Fabian

    2014-06-01

    The paper deals with the design and performance verification of a high performance and flexible carbon fibre thermal link. The project goal was to design a space-qualified thermal link combining low mass, flexibility and high thermal conductivity, with new approaches regarding selected materials and processes. The idea was to combine the advantages of existing metallic links regarding flexibility with the thermal performance of high-conductivity carbon pitch fibres. Special focus is laid on the thermal performance improvement of matrix systems by means of nano-scaled carbon materials, in order to improve the thermal performance also perpendicular to the direction of the unidirectional fibres. One of the main challenges was to establish a manufacturing process which allows handling the stiff and brittle fibres, applying the matrix and performing the implementation into an interface component, using unconventional process steps like thermal bonding of fibres after metallisation. This research was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi).

  4. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV transporting and transferring work pieces, a control computer on board the AGV, a process machine for working on work pieces, a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm, wherein the robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location, locating beacon means mounted on the process machine, wherein the locating beacon means are for locating on the process machine a place to pick up and set down work pieces, vision means, including a camera fixed in the coordinate system of the gripper means, attached to the robot arm near the gripper, such that the space between said gripper fingers lies within the vision field of said vision means, for detecting the locating beacon means, wherein the vision means provides the control computer visual information relating to the location of the locating beacon means, from which information the computer is able to calculate the pick up and set down place on the process machine, wherein said place for picking up and setting down work pieces on the process machine is a nest means and further serves the function of holding a work piece in place while it is worked on, the robot system further comprising nest beacon means located in the nest means detectable by the vision means for providing information to the control computer as to whether or not a work piece is present in the nest means.

  5. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident when investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know there is almost no research on HPC-DF except for a few papers. As such, in this work we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
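
    As a minimal illustration of statistical outlier detection over forensic event data (a plain z-score detector, not the authors' iterative z algorithm or their information-based methods), consider flagging events whose feature value deviates strongly from the mean:

```python
import statistics

def z_score_outliers(values, threshold=3.0):
    """Return indices of values whose |z-score| exceeds the threshold."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Synthetic event sizes in bytes, with one clearly anomalous spike:
events = [512, 480, 530, 495, 505, 20_000, 510, 498]
print(z_score_outliers(events, threshold=2.0))  # → [5]
```

    Such a per-partition scan parallelizes naturally, which is why this family of detectors suits the distributed HPC setting the abstract describes.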

  6. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  7. High performance stepper motors for space mechanisms

    NASA Astrophysics Data System (ADS)

    Sega, Patrick; Estevenon, Christine

    1995-05-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.
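
    The abstract notes that torque follows from the angular derivative of the toothed structure's magnetic permeance. A minimal reluctance-torque sketch, T = ½·i²·dL/dθ, with an assumed sinusoidal inductance profile standing in for the finite-element-computed permeance (all parameter values are illustrative):

```python
import math

def inductance(theta, l0=8e-3, l1=2e-3, teeth=50):
    """Assumed phase inductance (H) varying with rotor angle over 50 rotor teeth."""
    return l0 + l1 * math.cos(teeth * theta)

def reluctance_torque(theta, current, d_theta=1e-6):
    """T = 1/2 * i^2 * dL/dtheta, via central-difference differentiation."""
    dl = (inductance(theta + d_theta) - inductance(theta - d_theta)) / (2 * d_theta)
    return 0.5 * current**2 * dl
```

    In a real design the derivative would be taken over the FE-computed permeance map; its harmonic content is what sets the 3% torque-ripple figure the abstract mentions.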

  8. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.

  9. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  10. Comprehension and acceptability of on-board traffic information: Beliefs and driving behaviour.

    PubMed

    Cristea, Mioara; Delhomme, Patricia

    2014-04-01

    The Co-Drive on-board traffic information system is a complementary tool providing dynamic management of transportation infrastructure and traffic, as well as the diffusion of accurate real-time information about the road environment and motorists' driving behaviour. The aim of this study was to examine drivers' acceptability of Co-Drive by investigating the impact of traffic information provided via on-board display devices on motorists' beliefs and behaviour. 116 drivers (men: 46.6%), aged between 22 and 62 years, participated in a driving simulator experiment. They were randomly divided into two experimental groups according to the type of display device (BlackBerry vs. iPhone) and a control group. The experimental groups were exposed to fourteen on-board traffic messages: warnings (e.g., road crash), recommendations (e.g., the use of a seat belt) and comfort messages (e.g., the location of a gas station). They had to validate each message by pushing the headlight-flashing button as soon as they understood it. At the end, all participants had to fill in a questionnaire. Drivers evaluated the on-board messages positively, expressed a high level of confidence in the on-board information and estimated having received it sufficiently in advance to adjust their behaviour. Regardless of the type of display device, they took more time to read warning and recommendation messages than comfort messages, and complied with them. Finally, those exposed to the messages adapted their behaviour to the road events more easily than those who did not receive them. Practical implications of the results are discussed.

  11. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance.
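
    A common first-order, complete-mix design relation for CBOD removal in aerated lagoon cells in series is Ce/C0 = 1/(1 + k·t)^n. The sketch below uses that textbook relation; the rate constant and detention times are illustrative assumptions, not values from the monograph.

```python
def effluent_fraction(k_per_day, t_days_per_cell, n_cells):
    """Remaining CBOD fraction for n complete-mix cells in series:
    Ce/C0 = 1 / (1 + k*t)^n, with k in 1/day and t the detention time per cell."""
    return 1.0 / (1.0 + k_per_day * t_days_per_cell) ** n_cells

# Assumed system: two cells of 2.5 days detention each, k = 0.6 1/day:
remaining = effluent_fraction(0.6, 2.5, 2)
print(f"{100 * (1 - remaining):.0f}% CBOD removal")  # → 84% CBOD removal
```

    Splitting a given total detention time into more cells in series improves removal for the same k, one reason staged lagoon layouts appear in high-performance designs.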

  12. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  13. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single-axis mechanism can switch the antenna's optical axis by +/-1.5" within 10 ms or +/-5" within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The lightweight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m². Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving coil motors are mounted on flex pivots, and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed-loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
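The digital PID servo mentioned above can be illustrated with a minimal discrete-time sketch. The gains, sample rate, and plant model below are illustrative assumptions, not the actual ALMA controller parameters; the derivative term acts on the measurement to avoid setpoint kick.

```python
# Minimal discrete-time PID controller sketch (illustrative gains, not ALMA's).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_meas = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # Derivative on the measurement, not the error, to avoid setpoint kick.
        derivative = (measurement - self.prev_meas) / self.dt
        self.prev_meas = measurement
        return self.kp * error + self.ki * self.integral - self.kd * derivative

def simulate(steps=2000, dt=1e-3):
    """Drive a hypothetical second-order plant (mass on flex pivots with
    stiffness and damping) toward a unit step setpoint."""
    pid = PID(kp=50.0, ki=100.0, kd=5.0, dt=dt)
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        torque = pid.update(1.0, pos)
        acc = torque - 10.0 * pos - 2.0 * vel  # spring and damper terms
        vel += acc * dt                        # semi-implicit Euler step
        pos += vel * dt
    return pos
```

Running the loop for two simulated seconds settles the plant close to the commanded position, the basic behavior a servo of this kind provides.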

  14. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

The National Storage Laboratory (NSL) was organized to develop, demonstrate and commercialize technology for storage systems that will be the future repositories of our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal Systems Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted for completion in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  15. High-performance composite chocolate

    NASA Astrophysics Data System (ADS)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  16. The Experiment CPLM (Comportamiento De Puentes Líquidos En Microgravedad) On Board MINISAT 01

    NASA Astrophysics Data System (ADS)

    Sanz-Andrés, Angel; Rodríguez-De-Francisco, Pablo; Santiago-Prowald, Julián

    2001-03-01

The Universidad Politécnica de Madrid participates in the MINISAT 01 program as the institution responsible for the CPLM experiment. This experiment aims at the study of fluid behaviour in reduced gravity conditions. The interest of this study is and has been widely recognised by the scientific community, and it has potential applications in pharmaceutical and microelectronic technologies (crystal growth), among others. The scientific team which developed the CPLM experiment has wide experience in this field and has participated in a large number of experiments on fluid behaviour in reduced gravity conditions in flight (Spacelab missions, TEXUS sounding rockets, KC-135 and Caravelle aeroplanes, drop towers), as well as in earth laboratories (neutral buoyancy and small-scale simulations). The experimental equipment used in CPLM is a version of the payload developed for experimentation on drop towers and on board microsatellites such as the UPM-Sat 1, adapted to fly on board MINISAT 01.

  17. XMM instrument on-board software maintenance concept

    NASA Technical Reports Server (NTRS)

    Peccia, N.; Giannini, F.

    1994-01-01

While the pre-launch responsibility for the production, validation and maintenance of instrument on-board software traditionally lies with the experimenter, the post-launch maintenance has been the subject of ad hoc arrangements, with the responsibility shared to differing extents between the experimenter, ESTEC and ESOC. This paper summarizes the overall design and development of the instruments' on-board software for the XMM satellite, and describes the concept adopted for the post-launch maintenance of such software. The paper also outlines the on-board software maintenance and validation facilities and the advantages expected from the proposed strategy. Conclusions with respect to the adequacy of this approach are presented, as well as recommendations for future instrument on-board software developments.

  18. On-board congestion control for satellite packet switching networks

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.

    1991-01-01

It is desirable to incorporate on-board packet switching capability in future communication satellites. Because of the statistical nature of packet communication, incoming traffic fluctuates and may cause congestion. Thus, it is necessary to incorporate a congestion control mechanism as part of the on-board processing to smooth and regulate the bursty traffic. Although there are extensive studies on congestion control for both baseband and broadband terrestrial networks, these schemes are not feasible for space-based switching networks because of the unique characteristics of the satellite link. Here, we propose a new congestion control method for on-board satellite packet switching. This scheme takes into consideration the long propagation delay in the satellite link and takes advantage of the satellite's broadcasting capability. It divides the control between the ground terminals and the satellite, but assigns the primary responsibility to the ground terminals and requires only minimal hardware resources on board the satellite.
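The abstract only outlines the proposed scheme, so it is not reproduced here; but the underlying problem of smoothing and regulating bursty packet traffic can be illustrated with a generic leaky-bucket regulator. All parameters below are hypothetical, purely for illustration:

```python
# Generic leaky-bucket traffic smoother (illustration only; not the paper's scheme).
def leaky_bucket(arrivals, capacity, drain_rate):
    """arrivals[i] = packets arriving in time slot i.
    Returns per-slot lists (sent, dropped): the bucket admits packets up to
    `capacity`, drops the excess, and drains at most `drain_rate` per slot."""
    backlog = 0
    sent, dropped = [], []
    for a in arrivals:
        admitted = min(a, capacity - backlog)   # admit up to free space
        dropped.append(a - admitted)            # excess is discarded
        backlog += admitted
        out = min(backlog, drain_rate)          # constant-rate output
        backlog -= out
        sent.append(out)
    return sent, dropped
```

A burst of 10 packets into a bucket of capacity 8 draining 2 per slot emerges as a steady stream of 2 packets per slot, with the overflow dropped up front — the smoothing effect congestion control aims for.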

  19. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    The feasibility was investigated of an on-board earth resources data processor launched during the 1980-1990 time frame. Projected user applications were studied to define the data formats and the information extraction algorithms that the processor must execute. Based on these constraints, and the constraints imposed by the available technology, on-board processor systems were designed and their feasibility evaluated. Conclusions and recommendations are given.

  20. Advanced Hybrid On-Board Science Data Processor - SpaceCube 2.0

    NASA Technical Reports Server (NTRS)

    Flatley, Tom

    2010-01-01

    Topics include an overview of On-board science data processing, software upset mitigation, on-board data reduction, on-board products, HyspIRI demonstration testbed, SpaceCube 2.0 block diagram, and processor comparison.

  1. Indoor Air Quality in High Performance Schools

    EPA Pesticide Factsheets

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  2. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  3. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  4. On-board target acquisition for CHEOPS

    NASA Astrophysics Data System (ADS)

    Loeschl, P.; Ferstl, R.; Kerschbaum, F.; Ottensamer, R.

    2016-07-01

The CHaracterising ExOPlanet Satellite (CHEOPS) is the first ESA S-class and exoplanetary follow-up mission, headed for launch in 2018. It will perform ultra-high-precision photometry of stars hosting confirmed exoplanets on a 3-axis stabilised sun-synchronous orbit that is optimised for uninterrupted observations at minimum stray light and thermal variations. Nevertheless, due to the satellite's structural design, the alignment of the star trackers and the payload instrument telescope is affected by thermo-elastic deformations. This causes a high pointing uncertainty, which requires the payload instrument to provide an additional acquisition system for distinct target identification. Therefore, star extraction software and two star identification algorithms, originally designed for star trackers, were adapted and optimised for the special case of CHEOPS. In order to evaluate these algorithms' reliability, thousands of random star configurations were analysed in Monte-Carlo simulations. We present the implemented identification methods and their performance, as well as recommended parameters that guarantee a successful identification under all conditions.
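The CHEOPS algorithms themselves are not described in this abstract; as a generic illustration of the idea behind star identification, the sketch below matches an observed angular separation between two stars against pairs in a small catalog. The catalog entries and tolerance are invented for the example:

```python
import math

def ang_sep(a, b):
    """Angular separation in degrees between two (ra_deg, dec_deg) directions."""
    ra1, dec1, ra2, dec2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    cosd = (math.sin(dec1) * math.sin(dec2)
            + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosd))))

def match_pair(observed_sep, catalog, tol=0.01):
    """Return index pairs of catalog stars whose separation matches the
    observed one within `tol` degrees (the core test in pair-matching star ID)."""
    hits = []
    for i in range(len(catalog)):
        for j in range(i + 1, len(catalog)):
            if abs(ang_sep(catalog[i], catalog[j]) - observed_sep) < tol:
                hits.append((i, j))
    return hits
```

Real star-tracker algorithms extend this to triangles or larger patterns and add magnitude checks to make the match unambiguous.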

  5. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.
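Frequency stability versus sampling time of the kind analyzed here is conventionally quantified with the Allan deviation (the abstract does not name the statistic, so this is an assumption). A minimal sketch over fractional-frequency samples, with synthetic data rather than USNO measurements:

```python
def allan_deviation(freqs, m=1):
    """Allan deviation of fractional-frequency samples `freqs` at averaging
    factor m: adjacent groups of m samples are averaged to emulate a longer
    sampling time, then sigma_y = sqrt( <(y[i+1]-y[i])^2> / 2 )."""
    n = len(freqs) // m
    y = [sum(freqs[i * m:(i + 1) * m]) / m for i in range(n)]
    diffs = [(y[i + 1] - y[i]) ** 2 for i in range(n - 1)]
    return (sum(diffs) / (2 * (n - 1))) ** 0.5
```

A perfectly steady oscillator gives zero; evaluating at several values of m traces out the stability-versus-sampling-time curve reported in studies like this one.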

  6. On-Board Spaceborne Real-time Digital Signal Processing

    NASA Astrophysics Data System (ADS)

    Gao, G.; Long, F.; Liu, L.

This paper reports a preliminary study of an on-board digital signal processing system. It covers the on-board processing requirement analysis, functional specifications, and implementation with radiation-tolerant field-programmable gate array (FPGA) technology. The FPGA program is designed in the VHDL hardware description language and implemented on a high-density FPGA chip. The design takes full advantage of the massively parallel architecture of the Virtex-II FPGA logic slices to achieve real-time processing at a high data rate. Furthermore, an FFT algorithm implementation with the system is provided as an illustration.
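The FFT mentioned as the illustration case can be sketched algorithmically; the recursive radix-2 Cooley-Tukey form below is written in Python purely to show the computation, not the paper's VHDL implementation:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                      # FFT of even-indexed samples
    odd = fft(x[1::2])                       # FFT of odd-indexed samples
    # Combine halves with the twiddle factors e^{-2*pi*i*k/n}.
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)]
            + [even[k] - tw[k] for k in range(n // 2)])
```

In hardware the same butterfly structure maps naturally onto parallel logic slices, which is why FPGAs suit this workload.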

  7. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.
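The rated figures imply a minimum full-power discharge time, a quick sanity check on the energy and power densities quoted above:

```python
def discharge_time_s(energy_wh_per_kg, power_w_per_kg):
    """Seconds a store of the given specific energy lasts at the given
    specific power (1 Wh = 3600 J)."""
    return energy_wh_per_kg * 3600.0 / power_w_per_kg
```

At 5 Wh/kg delivered at 600 W/kg, the capacitor can sustain its rated power for 30 seconds, the burst-power regime where double layer capacitors outperform batteries.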

  8. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  9. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data File (CDF) format and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
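The Granule generation described above can be sketched with Python's standard XML tooling. The element names below are simplified stand-ins, not the full SPASE schema, and the identifiers and URL are invented for the example:

```python
import xml.etree.ElementTree as ET

def granule_xml(resource_id, parent_id, url):
    """Build a minimal SPASE-style Granule description associating one data
    file with its parent resource (element names simplified; consult the
    SPASE data model for the real structure)."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url
    return ET.tostring(spase, encoding="unicode")
```

Run over a nightly file listing, a routine like this regenerates one Granule record per CDF file, which is the blanket-access behavior ADAPT aims for.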

  10. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E; Pollard, Martin J; Elkin, Christopher J

    2005-10-11

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

  11. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E.; Pollard, Martin J.; Elkin, Christopher J.

    2006-12-12

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

  12. High Performance Work Practices and Firm Performance.

    ERIC Educational Resources Information Center

    Department of Labor, Washington, DC. Office of the American Workplace.

    A literature survey established that a substantial amount of research has been conducted on the relationship between productivity and the following specific high performance work practices: employee involvement in decision making, compensation linked to firm or worker performance, and training. According to these studies, high performance work…

  13. High Performance Work Systems and Firm Performance.

    ERIC Educational Resources Information Center

    Kling, Jeffrey

    1995-01-01

    A review of 17 studies of high-performance work systems concludes that benefits of employee involvement, skill training, and other high-performance work practices tend to be greater when new methods are adopted as part of a consistent whole. (Author)

  14. Sustaining High Performance in Bad Times.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Van Buren, Mark A.

    1997-01-01

    Summarizes the results of the American Society for Training and Development Human Resource and Performance Management Survey of 1996 that examined the performance outcomes of downsizing and high performance work systems, explored the relationship between high performance work systems and downsizing, and asked whether some downsizing practices were…

  15. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now wide spread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  16. Intelligent Sensors and Components for On-Board ISHM

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge; Morris, Jon; Nickles, Donald; Schmalzel, Jorge; Rauth, David; Mahajan, Ajay; Utterbach, L.; Oesch, C.

    2006-01-01

A viewgraph presentation on the development of intelligent sensors and components for on-board Integrated Systems Health Management (ISHM) is shown. The topics include: 1) Motivation; 2) Integrated Systems Health Management (ISHM); 3) Intelligent Components; 4) IEEE 1451; 5) Intelligent Sensors; 6) Application; and 7) Future Directions.

  17. Real-time Java for on-board systems

    NASA Astrophysics Data System (ADS)

    Cechticky, V.; Pasetti, A.

    2002-07-01

The Java language has several attractive features but cannot at present be used in on-board systems, primarily because it lacks support for hard real-time operation. This shortcoming is being addressed: some suppliers are already providing implementations of Java that are RT-compliant; Sun Microsystems has approved a formal specification for a real-time extension of the language; and an independent consortium is working on an alternative specification for real-time Java. It is therefore expected that, within a year or so, standardized commercial implementations of real-time Java will be on the market. Availability of real-time implementations now opens the way to its use on board. Within this context, this paper has two objectives. Firstly, it discusses the suitability of Java for on-board applications. Secondly, it reports the results of an ESA study to port a software framework for on-board control systems to a commercial real-time version of Java.

  18. Economic Comparison of On-Board Module Builder Harvest Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

Cotton pickers with on-board module builders (OBMB) eliminate the need for boll buggies, module builders, the tractors, and labor needed to operate this machinery. Additionally, field efficiency may be increased due to less stoppage for unloading and/or waiting to unload. This study estimates the ...

  19. 40 CFR 86.005-17 - On-board diagnostics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... for a multiple number of driving cycles (i.e., more than one) due to the continued presence of extreme... Network Interface,” (Revised, May 2001) shall be used as the on-board to off-board communications protocol... section according to the phase-in schedule in paragraph (k) of this section. All monitored systems...

  20. Effective On-Board Diagnostics for electronic engine controls

    SciTech Connect

    Florence, D.E.; Michel, M.F.

    1985-01-01

    Properly implemented, On-Board Diagnostic (OBD) Systems fill the gap in sophistication between computer based fuel injection engine controls and a carburetor oriented service industry. By emphasizing simplicity and credibility, inexpensive OBD systems make electronic engine controls a desirable feature to the service technician.

  1. On-Board Software Reference Architecture for Payloads

    NASA Astrophysics Data System (ADS)

    Bos, Victor; Trcka, Adam

    2015-09-01

    This abstract summarizes the On-board Reference Architecture for Payloads activity carried out by Space Systems Finland (SSF) and Evolving Systems Consulting (ESC) under ESA contract. At the time of writing, the activity is ongoing. This abstract discusses study objectives, related activities, study approach, achieved and anticipated results, and directions for future work.

  2. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  3. Rapidly Reconfigurable High Performance Computing Cluster

    DTIC Science & Technology

    2005-07-01

Contents include: Background and Objectives; High Performance Computing Trends; Georgia Tech Activity in HPEC.

  4. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  5. LANL High-Performance Data System (HPDS)

    NASA Technical Reports Server (NTRS)

    Collins, M. William; Cook, Danny; Jones, Lynn; Kluegel, Lynn; Ramsey, Cheryl

    1993-01-01

    The Los Alamos High-Performance Data System (HPDS) is being developed to meet the very large data storage and data handling requirements of a high-performance computing environment. The HPDS will consist of fast, large-capacity storage devices that are directly connected to a high-speed network and managed by software distributed in workstations. The HPDS model, the HPDS implementation approach, and experiences with a prototype disk array storage system are presented.

  6. Architecture Analysis of High Performance Capacitors (POSTPRINT)

    DTIC Science & Technology

    2009-07-01

Includes the measurement of heat dissipated from a recently developed fluorenyl polyester (FPE) capacitor under AC excitation. (Report AFRL-RZ-WP-TP-2010-2100; Hiroyuki Kosai and Tyler Bixel, UES, Inc., 2009.)

  7. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center) and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  8. On-Board Switching and Routing Advanced Technology Study

    NASA Technical Reports Server (NTRS)

    Yegenoglu, F.; Inukai, T.; Kaplan, T.; Redman, W.; Mitchell, C.

    1998-01-01

Future satellite communications are expected to be fully integrated into National and Global Information Infrastructures (NII/GII). These infrastructures will carry multi-gigabit-per-second data rates, with integral switching and routing of constituent data elements. The satellite portion of these infrastructures must, therefore, be more than pipes through the sky. The satellite portion will also be required to perform very high speed routing and switching of these data elements to enable efficient broad-area coverage for many home and corporate users. The technology to achieve the on-board switching and routing must be selected and developed specifically for satellite application within the next few years. This report presents an evaluation of potential technologies for on-board switching and routing applications.

  9. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented, along with a candidate data format. The computational requirements for implementing the data analysis algorithms are included, along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested, and the environmental effects of an on-board processor are discussed. By relating performance parameters to the system requirements of each of the user requirements, the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.

  10. On-board processing concepts for future satellite communications systems

    NASA Technical Reports Server (NTRS)

    Brandon, W. T. (Editor); White, B. E. (Editor)

    1980-01-01

    The initial definition of on-board processing for an advanced satellite communications system to service domestic markets in the 1990's is discussed. An exemplar system with both RF on-board switching and demodulation/remodulation baseband processing is used to identify important issues related to system implementation, cost, and technology development. Analyses of spectrum-efficient modulation, coding, and system control techniques are summarized. Implementations for an RF switch and baseband processor are described. Among the major conclusions listed is the need for high-gain satellites capable of handling tens of simultaneous beams for efficient reuse of the 2.5 GHz bandwidth allocated in the 30/20 GHz band. Several scanning beams are recommended in addition to the fixed beams. Low-power solid-state 20 GHz GaAs FET power amplifiers in the 5 W range and a general-purpose digital baseband processor with gigahertz logic speeds and megabits of memory are also recommended.

  11. Autonomous On-Board Calibration of Attitude Sensors and Gyros

    NASA Technical Reports Server (NTRS)

    Pittelkau, Mark E.

    2007-01-01

    This paper presents the state of the art and future prospects for autonomous real-time on-orbit calibration of gyros and attitude sensors. The current practice in ground-based calibration is presented briefly to contrast it with on-orbit calibration. The technical and economic benefits of on-orbit calibration are discussed. Various algorithms for on-orbit calibration are evaluated, including some that are already operating on board spacecraft. Because Redundant Inertial Measurement Units (RIMUs, which are IMUs that have more than three sense axes) are almost ubiquitous on spacecraft, special attention is given to calibration of RIMUs. In addition, we discuss autonomous on-board calibration and how it may be implemented.
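    The redundancy that makes RIMUs worth calibrating also makes the problem tractable: with more than three sense axes the body rate is overdetermined, and the surplus measurements carry a parity residual that exposes a miscalibrated or failed axis. As a hedged illustration (the four-axis geometry and rate values below are invented, not from the paper), a least-squares rate estimate and its parity residual might look like:

    ```python
    import numpy as np

    # Hypothetical 4-axis RIMU: rows are sense-axis unit vectors
    # (three orthogonal axes plus one skewed redundant axis).
    skew_axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
    H = np.vstack([np.eye(3), skew_axis])

    omega_true = np.array([0.1, -0.2, 0.05])   # body rate, rad/s
    z = H @ omega_true                         # noise-free axis measurements

    # Least-squares body-rate estimate from the redundant axes.
    omega_hat, *_ = np.linalg.lstsq(H, z, rcond=None)

    # Parity residual: the part of z that no rigid-body rate can explain.
    # A persistently nonzero residual flags a miscalibrated or failed axis.
    residual = z - H @ omega_hat
    ```

    With real data the residual is tested against a noise-scaled threshold, and persistent structure in it drives the scale-factor and misalignment updates.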

  12. On-Board Processor and Network Maturation for Ariane 6

    NASA Astrophysics Data System (ADS)

    Clavier, Rémi; Sautereau, Pierre; Sangaré, Jérémie; Disson, Benjamin

    2015-09-01

    Over the past three years, innovative avionics technologies for Ariane 6 were evaluated within three main programs involving various stakeholders: FLPP (Future Launcher Preparatory Programme, from ESA), AXE (Avionic-X European, formerly Avionique-X, a French public R&T program) and the CNES R&T program relying on industrial partnerships. In each avionics domain, several technologies were compared, analyzed and tested against space launcher system expectations and constraints. Within the frame of on-board data handling, two technologies have been identified as promising: ARM-based microprocessors for the computing units and TTEthernet for the on-board network. This paper presents the main outcomes of the data handling preparatory activities performed on the AXE platform at Airbus Defence and Space - Les Mureaux.

  13. Octafluoropropane Concentration Dynamics on Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Perry, J. L.

    2003-01-01

    Since activation of the International Space Station's (ISS) Service Module in November 2000, archival air quality samples have shown highly variable concentrations of octafluoropropane in the cabin. This variability has been directly linked to leakage from air conditioning systems on board the Service Module, Zvezda. While octafluoropropane is not highly toxic, it presents a significant challenge to the trace contaminant control systems. A discussion of octafluoropropane concentration dynamics is presented, and the ability of on-board trace contaminant control systems to effectively remove octafluoropropane from the cabin atmosphere is assessed. Consideration is given to operational and logistics issues that may arise from octafluoropropane and other halocarbon challenges to the contamination control systems, as well as the potential for affecting cabin air quality.

  14. EMI Standards for Wireless Voice and Data on Board Aircraft

    NASA Technical Reports Server (NTRS)

    Ely, Jay J.; Nguyen, Truong X.

    2002-01-01

    The use of portable electronic devices (PEDs) on board aircraft continues to be an increasing source of misunderstanding between passengers and flight-crews, and consequently, an issue of controversy between wireless product manufacturers and air transport regulatory authorities. This conflict arises primarily because of the vastly different regulatory objectives between commercial product and airborne equipment standards for avoiding electromagnetic interference (EMI). This paper summarizes international regulatory limits and test processes for measuring spurious radiated emissions from commercially available PEDs, and compares them to international standards for airborne equipment. The goal is to provide insight for wireless product developers desiring to extend the freedom of their customers to use wireless products on-board aircraft, and to identify future product characteristics, test methods and technologies that may facilitate improved wireless freedom for airline passengers.

  15. Harmonic distortions measured on board of a maritime vessel

    NASA Astrophysics Data System (ADS)

    Zburlea, Elena; Dordea, Stefan

    2016-12-01

    Measurements were performed on four channels by means of autonomous equipment (galvanically separated and not supplied from the ship's mains) on board several maritime transport vessels inside the Port of Constanţa aquatorium. Distorted voltages were observed in the distribution panels. The sources of these distortions are the switching power supplies of the electric drives. The novelty of our work lies in performing these measurements during in-port maneuvers, when the operating time of each piece of electrical equipment cannot be predicted. Harmonic distortions caused by the switching power converters lower the power factor. There is no better way to identify the main distortion sources on board a maritime transport vessel than to perform the measurements directly, at each location.
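    The distortion figures behind such measurements reduce to a standard computation: transform the sampled voltage, read off the fundamental and harmonic amplitudes, and form the total harmonic distortion (THD) ratio. A minimal sketch on a synthetic 50 Hz waveform — the sample rate and harmonic levels are illustrative assumptions, not measured values:

    ```python
    import numpy as np

    fs = 10_000                         # sample rate, Hz (assumed)
    f0 = 50.0                           # fundamental frequency, Hz
    t = np.arange(0, 0.2, 1.0 / fs)     # exactly ten fundamental cycles
    # synthetic distorted phase voltage: fundamental + 5th and 7th harmonics
    v = (np.sin(2 * np.pi * f0 * t)
         + 0.10 * np.sin(2 * np.pi * 5 * f0 * t)
         + 0.05 * np.sin(2 * np.pi * 7 * f0 * t))

    # single-sided amplitude spectrum; bins are exact for this window length
    spectrum = np.abs(np.fft.rfft(v)) * 2.0 / len(t)
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

    def amp(f):
        """Amplitude at frequency f (nearest FFT bin)."""
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = amp(f0)
    # THD: RMS of harmonics 2..9 relative to the fundamental
    thd = np.sqrt(sum(amp(k * f0) ** 2 for k in range(2, 10))) / fund
    ```

    Because the window spans an integer number of cycles, each harmonic falls on an exact bin; with arbitrary record lengths a window function and bin interpolation would be needed.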

  16. An On-Board Diagnosis Logic and Its Design Method

    NASA Astrophysics Data System (ADS)

    Hiratsuka, Satoshi; Fusaoka, Akira

    In this paper, we propose a design methodology for the on-board diagnosis engine of embedded systems. A boolean function for the diagnosis circuit can be mechanically derived from the system dynamics given by a linear differential equation, provided the system is observable and the relation between the set of abnormal physical parameters and the faulty part is given. The diagnosis circuit is small enough to be implemented in an FPGA or fabricated as a simple chip.
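    The core idea — an observable linear model yields a mechanical fault test — can be hedged into a small software sketch: run an observer on the healthy model and flag a fault when the output residual departs from zero. The plant matrices, observer gain, threshold, and injected parameter fault below are invented for illustration; the paper's actual synthesis produces a boolean circuit, not software.

    ```python
    import numpy as np

    # Hypothetical observable discrete-time plant: x[k+1] = A x[k], y[k] = C x[k].
    A = np.array([[1.0, 0.1],
                  [-0.5, 0.9]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[0.5], [0.2]])      # observer gain, assumed stabilizing
    x0 = np.array([[1.0], [0.0]])     # known initial state

    def simulate(Amat, steps=30):
        """Output sequence of the plant (possibly with a faulty A matrix)."""
        x, ys = x0.copy(), []
        for _ in range(steps):
            ys.append((C @ x)[0, 0])
            x = Amat @ x
        return ys

    def diagnose(y_meas, threshold=1e-6):
        """Boolean fault flag from the observer's output residual."""
        x_hat = x0.copy()
        for y in y_meas:
            r = y - (C @ x_hat)[0, 0]    # output residual
            if abs(r) > threshold:
                return True              # dynamics inconsistent with model
            x_hat = A @ x_hat + L * r    # Luenberger observer update
        return False

    A_fault = A.copy()
    A_fault[1, 1] = 0.7                  # an "abnormal physical parameter"
    ```

    Mapping which residual pattern implicates which part is exactly the abnormal-parameter-to-faulty-part relation the paper requires as an input.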

  17. On-Board Perception System For Planetary Aerobot Balloon Navigation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Scheid, Robert E.; Salomon, Phil T.

    1996-01-01

    NASA's Jet Propulsion Laboratory is implementing the Planetary Aerobot Testbed to develop the technology needed to operate a robotic balloon aero-vehicle (Aerobot). This earth-based system would be the precursor for aerobots designed to explore Venus, Mars, Titan and other gaseous planetary bodies. The on-board perception system allows the aerobot to localize itself and navigate on a planet using information derived from a variety of celestial, inertial, ground-imaging, ranging, and radiometric sensors.

  18. Technical feasibility of an ROV with on-board power

    SciTech Connect

    Sayer, P.; Bo, L.

    1994-12-31

    An ROV's electric power, control and communication signals are supplied from a surface ship or platform through an umbilical cable. Though cable design has evolved steadily, there are still severe limitations such as heavy weight and cost. It is well known that the drag imposed by the cable limits the operational range of the ROV in deep water. On the other hand, a cable-free AUV presents problems in control, communication and transmission of data. Therefore, an ROV with on-board power and a small-diameter cable could offer both a large operating range (footprint) and real-time control. This paper considers the feasibility of such an ROV with on-board power, namely a Self-Powered ROV (SPROV). The selection of possible power sources is first discussed before comparing the operational performance of an SPROV against a conventional ROV. It is demonstrated how an SPROV with a 5 mm diameter tether offers a promising way forward, with on-board power of up to 40 kW over 24 hours. In water depths greater than 50 m the reduced drag of the SPROV tether is very advantageous.

  19. Some design considerations for high-performance infrared imaging seeker

    NASA Astrophysics Data System (ADS)

    Fan, Jinxiang; Huang, Jianxiong

    2015-10-01

    In recent years, precision-guided weapons have played an increasingly important role in modern warfare, and the development and application of infrared imaging guidance technology have received growing attention. As missions and environments become more complex, weapon systems place stricter demands on the infrared imaging seeker: high detection sensitivity, large dynamic range, better target recognition capability, better anti-jamming capability, and better environmental adaptability. To meet these demands, several important issues must be considered in high-performance infrared imaging seeker design: the mission, targets, and environment of the infrared imaging guided missile; the tradeoff among performance goals, design parameters, infrared technology constraints, and missile constraints; and the optimized application of IRFPA and ATR in complicated environments. This paper discusses these design considerations for high-performance infrared imaging seekers.

  20. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the product markets one serves; and the most likely drivers and direction of future market forces. The author discusses all three points, then describes measuring the performance of the company. Becoming a true high performance refiner often involves redesigning the organization as well as the business processes; the author discusses such redesign. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  1. High-performance computing and communications

    SciTech Connect

    Stevens, R.

    1993-11-01

    This presentation has two parts. The first part discusses the US High-Performance Computing and Communications program -- its goals, funding, process, revisions, and research in high-performance computing systems, advanced software technology, and basic research and human resources. The second part of the presentation covers specific work conducted under this program at Argonne National Laboratory. Argonne's efforts focus on computational science research, software tool development, and evaluation of experimental computer architectures. In addition, the author describes collaborative activities at Argonne in high-performance computing, including an Argonne/IBM project to evaluate and test IBM's newest parallel computers and the Scalable I/O Initiative being spearheaded by the Concurrent Supercomputing Consortium.

  2. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  3. Advanced high-performance computer system architectures

    NASA Astrophysics Data System (ADS)

    Vinogradov, V. I.

    2007-02-01

    Convergence of computer systems and communication technologies is moving toward switched high-performance modular system architectures built on high-speed switched interconnections. Multi-core processors have become a more promising route to high-performance systems, and traditional parallel-bus system architectures (VME/VXI, cPCI/PXI) are moving to new, higher-speed serial switched interconnections. Fundamentals of system architecture development are a compact modular component strategy, low-power processors, new serial high-speed interface chips on the board, and high-speed switched fabrics for SAN architectures. An overview of advanced modular concepts and new international standards for developing high-performance embedded and compact modular systems for real-time applications is given.

  4. High performance computing at Sandia National Labs

    SciTech Connect

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  5. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E.; Pollard, Martin J.; Elkin, Christopher J.

    2009-02-03

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are further improvements to aspects of the hybrid magnetic structure, including additional elements and for adapting the use of the hybrid magnetic structure for use in biotechnology and high throughput processes.

  6. High performance protection circuit for power electronics applications

    SciTech Connect

    Tudoran, Cristian D.; Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

    2015-12-23

    In this paper we present a high performance protection circuit designed for power electronics applications where load currents can increase rapidly and exceed the maximum allowed values, as in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, having the role of a “sensor”, or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.

  7. High Performance Computing with Harness over InfiniBand

    SciTech Connect

    Valentini, Alessandro; Di Biagio, Christian; Batino, Fabrizio; Pennella, Guido; Palma, Fabrizio; Engelmann, Christian

    2009-01-01

    Harness is an adaptable, plug-in-based middleware framework able to support distributed parallel computing. At present it is based on the Ethernet protocol, which cannot guarantee high throughput or real-time (deterministic) performance. In recent years both research and industry have developed new network architectures (InfiniBand, Myrinet, iWARP, etc.) to overcome those limits. This paper concerns the integration of Harness with InfiniBand, focusing on two solutions: IP over InfiniBand (IPoIB) and the Sockets Direct Protocol (SDP). They allow the Harness middleware to take advantage of the enhanced features provided by the InfiniBand Architecture.

  8. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  9. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  10. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  11. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  12. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  13. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  14. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  15. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High-performance fighter aircraft of interest include supersonic aircraft with such capabilities as short take off and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  16. Teacher Accountability at High Performing Charter Schools

    ERIC Educational Resources Information Center

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  17. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  18. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  19. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  20. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  1. Massive Contingency Analysis with High Performance Computing

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu; Nieplocha, Jaroslaw

    2009-07-26

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimates. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. Faster analysis of more cases is required to safely and reliably operate today's power grids with smaller margins and more intermittent renewable energy sources. Enabled by the latest developments in the computer industry, high performance computing holds the promise of meeting this need in the power industry. This paper investigates the potential of high performance computing for massive contingency analysis. The framework of "N-x" contingency analysis is established, and computational load balancing schemes are studied and implemented with high performance computers. Case studies of massive 300,000-contingency-case analysis using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing and demonstrate the performance of the framework and computational load balancing schemes.
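    One family of load-balancing schemes such a framework needs can be hedged into a few lines: treat each contingency case as a job with an estimated solve cost and assign jobs greedily, largest first, to the currently least-loaded worker. The case names and costs below are invented, and production schemes are often dynamic (a shared task counter), but the static sketch shows the balancing objective:

    ```python
    import heapq

    def balance(case_costs, n_workers):
        """Longest-processing-time-first assignment: each contingency case
        goes to the least-loaded worker, largest estimated cost first."""
        heap = [(0.0, w, []) for w in range(n_workers)]
        heapq.heapify(heap)
        for case, cost in sorted(case_costs.items(), key=lambda kv: -kv[1]):
            load, w, cases = heapq.heappop(heap)     # least-loaded worker
            cases.append(case)
            heapq.heappush(heap, (load + cost, w, cases))
        return {w: (load, cases) for load, w, cases in heap}

    # hypothetical per-case solve-time estimates, e.g. seconds
    costs = {"line-101": 5.0, "xfmr-7": 4.0, "gen-3": 3.0,
             "line-88": 3.0, "line-12": 3.0}
    plan = balance(costs, n_workers=2)
    ```

    For hundreds of thousands of cases the same greedy rule, applied on-line as workers finish, keeps stragglers from idling the rest of the machine.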

  2. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
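    The claimed method — gather calling-instruction addresses, group threads by them, display the groups — is easy to sketch outside the patented tooling. In a large SPMD run most healthy threads sit at the same address, so the smallest groups are the likely defects. The thread ids and addresses below are fabricated:

    ```python
    from collections import defaultdict

    def group_threads(call_addresses):
        """Group thread ids by the calling-instruction address each thread
        reported; return groups smallest-first (the likely defective threads)."""
        groups = defaultdict(list)
        for tid, addr in call_addresses.items():
            groups[addr].append(tid)
        return sorted(groups.items(), key=lambda kv: len(kv[1]))

    # fabricated snapshot: thread id -> calling-instruction address
    snapshot = {0: 0x4005F0, 1: 0x4005F0, 2: 0x4005F0, 3: 0x400A10}
    suspects = group_threads(snapshot)[0]    # smallest group: the outlier thread
    ```

    A real debugger would gather the addresses with a stack-walk on each rank and map them back to source lines via the symbol table before display.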

  3. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  4. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation discusses work in progress at TRW to apply artificial intelligence to the problem of on-board fault management, with the goal of developing fault management systems that can meet the needs of spacecraft with long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.

  5. MODIS On-Board Blackbody Function and Performance

    NASA Technical Reports Server (NTRS)

    Xiaoxiong, Xiong; Wenny, Brian N.; Wu, Aisheng; Barnes, William

    2009-01-01

    Two MODIS instruments are currently in orbit, making continuous global observations in visible to long-wave infrared wavelengths. Compared to heritage sensors, MODIS was built with an advanced set of on-board calibrators, providing sensor radiometric, spectral, and spatial calibration and characterization during on-orbit operation. For the thermal emissive bands (TEB) with wavelengths from 3.7 μm to 14.4 μm, a v-grooved blackbody (BB) is used as the primary calibration source. The BB temperature is accurately measured each scan (1.47 s) using a set of 12 temperature sensors traceable to NIST temperature standards. The on-board BB is nominally operated at a fixed temperature, 290 K for Terra MODIS and 285 K for Aqua MODIS, to compute the TEB linear calibration coefficients. Periodically, its temperature is varied from 270 K (instrument ambient) to 315 K in order to evaluate and update the nonlinear calibration coefficients. This paper describes MODIS on-board BB functions with emphasis on on-orbit operation and performance. It examines the BB temperature uncertainties under different operational conditions and their impact on TEB calibration and data product quality. The temperature uniformity of the BB is also evaluated using TEB detector responses at different operating temperatures. On-orbit results demonstrate excellent short-term and long-term stability for both the Terra and Aqua MODIS on-board BB. The on-orbit BB temperature uncertainty is estimated to be 10 mK for Terra MODIS at 290 K and 5 mK for Aqua MODIS at 285 K, thus meeting the TEB design specifications. In addition, there has been no measurable BB temperature drift over the entire mission of either Terra or Aqua MODIS.
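    In its simplest scalar form, the linear TEB calibration the blackbody supports is a two-point computation: the space view gives the zero-radiance offset, and the BB view at a precisely known temperature gives the gain through the Planck function. The digital counts and band wavelength below are invented, and the operational algorithm adds emissivity, instrument self-emission, and per-detector terms this sketch omits:

    ```python
    import math

    H_PLANCK = 6.62607015e-34   # Planck constant, J*s
    C_LIGHT = 2.99792458e8      # speed of light, m/s
    K_BOLTZ = 1.380649e-23      # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, temp_k):
        """Spectral radiance B(lambda, T) in W / (m^2 * sr * m)."""
        a = 2.0 * H_PLANCK * C_LIGHT**2 / wavelength_m**5
        b = H_PLANCK * C_LIGHT / (wavelength_m * K_BOLTZ * temp_k)
        return a / (math.exp(b) - 1.0)

    def linear_gain(dn_bb, dn_sv, t_bb, wavelength_m):
        """Two-point linear calibration: the space view (radiance ~ 0) sets
        the offset; the BB view at known temperature t_bb sets the gain."""
        return planck_radiance(wavelength_m, t_bb) / (dn_bb - dn_sv)

    # invented numbers: an 11 um band, Terra-like blackbody at 290 K
    gain = linear_gain(dn_bb=3200.0, dn_sv=200.0, t_bb=290.0, wavelength_m=11e-6)
    earth_radiance = gain * (2500.0 - 200.0)   # calibrate one earth-view sample
    ```

    The sensitivity of the gain to the BB temperature is what makes the 5-10 mK temperature knowledge quoted above matter for product quality.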

  6. On-board orbit determination for applications satellites

    NASA Technical Reports Server (NTRS)

    Morduch, G. E.; Lefler, J. G.; Argentiero, P. D.; Garza-Robles, R.

    1978-01-01

    An algorithm for satellite orbit determination is described which would be suitable for use with an on-board computer with limited core storage. The proposed filter is recursive on a pass-by-pass basis and features a fading memory to account for the effect of gravity field error. Only a single pass of Doppler data needs to be stored at any time and the data may be acquired from two reference beacons located within the Continental United States. The results of both simulated data and real data reductions demonstrate that the satellite's position can be determined to within one kilometer when a 4 x 4 recovery field is used.
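    A pass-by-pass filter with fading memory can be sketched as a standard measurement update whose covariance is inflated between passes, so older passes are discounted against unmodeled gravity-field error. The two-state toy problem below is illustrative only and has nothing of the real Doppler geometry:

    ```python
    import numpy as np

    def fading_memory_update(x_hat, P, H, z, R, lam=0.95):
        """One pass-wise measurement update with exponential fading memory.
        Dividing P by lam each pass discounts old information, hedging
        against slowly growing model (e.g. gravity-field) error."""
        P = P / lam                          # fade: old passes decay
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # gain
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(len(x_hat)) - K @ H) @ P
        return x_hat, P

    # toy 2-state example: repeated scalar observations of the first state
    x_true = np.array([1.0, -0.5])
    x_hat = np.zeros(2)
    P = np.eye(2) * 10.0
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.01]])
    for _ in range(20):
        z = H @ x_true                       # noise-free for illustration
        x_hat, P = fading_memory_update(x_hat, P, H, z, R)
    ```

    Note the second (unobserved) state is never corrected: only the observable part of the state converges, which is why observability per pass matters in the real design.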

  7. On-board data recorder for hard-target weapons

    SciTech Connect

    Niven, W.A.; Jaroska, M.F.

    1981-03-16

    The Naval Weapons Center has several hard target penetration weapons development programs in progress. One of the critical problem areas in these programs is the extreme difficulty of measuring acceleration-time data from penetration tests due to the hostile nature of the environment. The information is of vital importance in order to produce survivability design criteria for components expected to function in such severe environments. The development of a small, rugged, solid state on-board recorder to capture dynamic data for hard target penetration weapon testing is described.

  9. An implementable digital adaptive flight controller designed using stabilized single stage algorithms

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Alag, G.

    1975-01-01

    Simple mechanical linkages have not solved the many control problems associated with high performance aircraft maneuvering throughout a wide flight envelope. One procedure for retaining uniform handling qualities over such an envelope is to implement a digital adaptive controller. Towards such an implementation, an explicit adaptive controller, which makes direct use of on-line parameter identification, has been developed and applied to both linearized and nonlinear equations of motion for a typical fighter aircraft. This controller is composed of an on-line weighted least squares parameter identifier, a Kalman state filter, and a model-following control law designed using single stage performance indices. Simulation experiments with realistic measurement noise indicate that the proposed adaptive system has the potential for on-board implementation.
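    The on-line weighted least squares identification step can be illustrated with a scalar discrete-time model. The function below is a hypothetical batch sketch under that simplification, not the paper's recursive flight implementation: it fits x[k+1] = a*x[k] + b*u[k] from state and input histories.

```python
import numpy as np

def identify_params(x, u, weights=None):
    """Weighted least squares fit of a scalar discrete-time model
    x[k+1] = a*x[k] + b*u[k] from state history x and input history u
    (illustrative only; per-sample weights default to 1)."""
    Phi = np.column_stack([x[:-1], u[:-1]])   # regressor matrix
    y = x[1:]                                 # one-step-ahead targets
    W = np.diag(weights if weights is not None else np.ones(len(y)))
    theta, *_ = np.linalg.lstsq(np.sqrt(W) @ Phi, np.sqrt(W) @ y, rcond=None)
    return theta  # estimated (a, b)
```

    With noiseless simulated data the true parameters are recovered exactly; measurement noise, as in the paper's experiments, motivates the weighting and the companion Kalman state filter.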

  10. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  11. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in creating and implementing the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
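    The script-driven database approach can be sketched as below. sqlite3 is substituted for MySQL here so the example is self-contained, and the schema (a `metrics` table with host, metric, value, and timestamp columns) is hypothetical, not the paper's actual design. The key contrast with a round-robin database is that every sample is appended and kept.

```python
import sqlite3
import time

# In-memory store standing in for the paper's MySQL database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metrics (
    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def record_metric(host, metric, value, ts=None):
    """Append one monitoring sample; the full history is preserved,
    unlike a fixed-size round-robin database that overwrites old data."""
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, metric, value, ts or int(time.time())))

record_metric("node01", "cpu_load", 0.42, ts=1)
record_metric("node01", "cpu_load", 0.55, ts=2)
rows = conn.execute(
    "SELECT value FROM metrics WHERE host='node01' ORDER BY ts").fetchall()
```

    A time-ordered SELECT like the one above is also what a plotting front end such as gnuplot would consume.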

  12. High performance flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1992-01-01

    The use of real-time simulation at the NASA facility is reviewed specifically with regard to hardware, software, and the use of a fiberoptic-based digital simulation network. The network hardware includes supercomputers that support 32- and 64-bit scalar, vector, and parallel processing technologies. The software includes drivers, real-time supervisors, and routines for site-configuration management and scheduling. Performance specifications include: (1) benchmark solution at 165 sec for a single CPU; (2) a transfer rate of 24 million bits/s; and (3) time-critical system responsiveness of less than 35 msec. Simulation applications include the Differential Maneuvering Simulator, Transport Systems Research Vehicle simulations, and the Visual Motion Simulator. NASA is shown to be in the final stages of developing a high-performance computing system for the real-time simulation of complex high-performance aircraft.

  13. On-Orbit Performance of MODIS On-Board Calibrators

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Che, N.; Chiang, K.; Esposito, J.; Barnes, William; Guenther, B.; Zukor, Dorothy J. (Technical Monitor)

    2001-01-01

    The Terra MODIS (Moderate Resolution Imaging Spectroradiometer) was launched on December 18, 1999 and acquired the first scene data on February 24, 2000. It has 36 spectral bands covering the spectral range from 0.41 to 14.2 microns and provides spatial resolutions of 250 m (2 bands), 500 m (5 bands), and 1000 m at nadir. The instrument on-orbit calibration and characterization are determined and monitored through the use of a number of on-board calibrators (OBC). Radiometric calibration for the reflective solar bands (B1-B19, B26), from VIS (visible) to SWIR (short wavelength infrared) (0.41 to 2.1 microns), uses a Spectralon (tm) solar diffuser (SD) and a solar diffuser stability monitor (SDSM). For the thermal emissive bands (B20-B25, B27-B36), from MWIR (medium wavelength infrared) to LWIR (long wavelength infrared) (3.75 to 14.2 microns), a V-grooved flat panel blackbody is used. The instrument's spectral characterization for the VIS to SWIR bands and spatial co-registration characterization for all bands are monitored on-orbit by the spectral radiometric calibration assembly (SRCA). In this report, we discuss the application and performance of the key MODIS on-board calibrators and their impacts on the instrument system calibration and characterization.

  14. On-board data management study for EOPAP

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.

    1975-01-01

    The requirements, implementation techniques, and mission analysis associated with on-board data management for EOPAP were studied. SEASAT-A was used as a baseline, and the storage requirements, data rates, and information extraction requirements were investigated for each of the following proposed SEASAT sensors: a short pulse 13.9 GHz radar, a long pulse 13.9 GHz radar, a synthetic aperture radar, a multispectral passive microwave radiometer facility, and an infrared/visible very high resolution radiometer (VHRR). Rate distortion theory was applied to determine theoretical minimum data rates and compared with the rates required by practical techniques. It was concluded that practical techniques can be used which approach the theoretically optimum based upon an empirically determined source random process model. The results of the preceding investigations were used to recommend an on-board data management system for (1) data compression through information extraction, optimal noiseless coding, source coding with distortion, data buffering, and data selection under command or as a function of data activity, (2) for command handling, (3) for spacecraft operation and control, and (4) for experiment operation and monitoring.
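    Of the data-compression techniques listed above, optimal noiseless coding can be illustrated with a Huffman code, which assigns shorter codewords to more frequent samples and approaches the entropy bound that rate distortion theory establishes as the limit. This is a generic sketch, not the EOPAP design.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build an optimal noiseless (Huffman) code for a symbol stream:
    repeatedly merge the two least frequent subtrees, prefixing their
    codewords with '0' and '1'. Returns {symbol: codeword}."""
    freq = Counter(symbols)
    # Heap entries: (count, unique tiebreaker, partial code table).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

    On a skewed stream the most frequent symbol receives the shortest codeword, which is where the rate saving over fixed-length coding comes from.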

  15. Expert system for on-board satellite scheduling and control

    NASA Technical Reports Server (NTRS)

    Barry, John M.; Sary, Charisse

    1988-01-01

    An expert system is described which Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities: the Satellite Controller. The resources to be scheduled include power, propellant, and recording tape. The activities controlled include scheduling satellite functions such as sensor checkout and operation. The scheduling of these resources and activities is presently a labor-intensive and time-consuming ground operations task. Developing a schedule requires extensive knowledge of system and subsystem operations, operational constraints, and satellite design and configuration. This scheduling process requires highly trained experts and takes anywhere from several hours to several weeks to accomplish. The process is done through brute force, that is, examining cryptic mnemonic data off line to interpret the health and status of the satellite. Schedules are then formulated either from practical operator experience or from heuristics, that is, rules of thumb. Orbital operations must become more productive in the future to reduce life cycle costs and decrease dependence on ground control; this reduction is required to increase the autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from ground to on-board systems.

  16. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2010-01-01

    treatments and reconstructive surgeries. High performance computer simulation allows designers to try out numerous mechanical and material...investigating the effect of techniques for simplifying the calculations (sending the projectile through a pre-existing hole, for example) on the accuracy of...semiconductor particles are size-dependent. These properties, including yield strength and resistance to fatigue, are not well predicted by macroscopic

  17. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2008-01-01

    materials “from the atoms up” or to model biological systems at the molecular level. The speed and capacity of massively parallel computers are key...Streamlined, massively parallel high performance computing structural codes allow researchers to examine many relevant physical factors simultaneously...expenditure of energy, so that the drones can carry their load of sensors, communications devices, and fuel. AHPCRC researchers are using massively

  18. High-performance reactionless scan mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Ellen I.; Summers, Richard T.; Ostaszewski, Miroslaw A.

    1995-01-01

    A high-performance reactionless scan mirror mechanism was developed for space applications to provide thermal images of the Earth. The design incorporates a unique mechanical means of providing reactionless operation that also minimizes weight, mechanical resonance operation to minimize power, combined use of a single optical encoder to sense coarse and fine angular position, and a new kinematic mount of the mirror. A flex pivot hardware failure and current project status are discussed.

  19. High Performance Multiwall Carbon Nanotube Bolometers

    DTIC Science & Technology

    2010-10-21

    High infrared bolometric photoresponse has been observed in multiwall carbon nanotube (MWCNT) films at room temperature, with an observed detectivity D* exceeding 3.3 x 10^6 cm Hz^1/2/W on MWCNT film. Subject terms: carbon nanotube, infrared detector, bolometer.

  20. High Performance Split-Stirling Cooler Program

    DTIC Science & Technology

    1982-09-01

    High Performance Split-Stirling Cooler Program: final technical report, September 1979 to September 1982, prepared for the Night Vision and Electro-Optics Laboratories (Contract DAAK70). Topics include the split-Stirling cycle cryocooler and temperature-shock comparison performance data for unit S/N 002.

  1. Task parallelism and high-performance languages

    SciTech Connect

    Foster, I.

    1996-03-01

    The definition of High Performance Fortran (HPF) is a significant event in the maturation of parallel computing: it represents the first parallel language that has gained widespread support from vendors and users. The subject of this paper is the incorporation of support for task parallelism. The term task parallelism refers to the explicit creation of multiple threads of control, or tasks, which synchronize and communicate under programmer control. Task and data parallelism are complementary rather than competing programming models. While task parallelism is more general and can be used to implement algorithms that are not amenable to data-parallel solutions, many problems can benefit from a mixed approach, with for example a task-parallel coordination layer integrating multiple data-parallel computations. Other problems admit both data- and task-parallel solutions, with the better solution depending on machine characteristics, compiler performance, or personal taste. For these reasons, we believe that a general-purpose high-performance language should integrate both task- and data-parallel constructs. The challenge is to do so in a way that provides the expressivity needed for applications, while preserving the flexibility and portability of a high-level language. In this paper, we examine and illustrate the considerations that motivate the use of task parallelism. We also describe one particular approach to task parallelism in Fortran, namely the Fortran M extensions. Finally, we contrast Fortran M with other proposed approaches and discuss the implications of this work for task parallelism and high-performance languages.
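    The mixed model the abstract advocates, a task-parallel coordination layer integrating data-parallel computations, can be sketched in Python rather than Fortran M; the function names and the choice of thread pools are illustrative assumptions, not the paper's constructs.

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_sum(chunk):
    """Data-parallel style reduction: split one chunk into sub-slices
    and reduce them with an inner worker pool."""
    parts = [chunk[i::4] for i in range(4)]
    with ThreadPoolExecutor(max_workers=4) as inner:
        return sum(inner.map(sum, parts))

def run(chunks):
    """Task-parallel coordination layer: one explicit task per chunk,
    each task driving its own data-parallel computation."""
    with ThreadPoolExecutor(max_workers=len(chunks)) as tasks:
        return list(tasks.map(data_parallel_sum, chunks))
```

    The outer pool expresses explicit tasks that run under programmer control, while each task's inner reduction is the kind of regular computation a data-parallel language handles well.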

  2. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  3. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with the characteristics of low raw-materials cost, great potential for simple processing and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of the vapor assisted solution process (VASP) and a moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite films, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architecture, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  4. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability.

  5. On-board Payload Data Processing from Earth to Space Segment

    NASA Astrophysics Data System (ADS)

    Tragni, M.; Abbattista, C.; Amoruso, L.; Cinquepalmi, L.; Bgongiari, F.; Errico, W.

    2013-09-01

    GS algorithms to approach the problem in the space scenario; i.e., for Synthetic Aperture Radar (SAR) applications, the typical focusing of the raw image needs to be improved to be effective in this context. Many works are available on this topic; the authors have developed a specific one based on neural network algorithms. With information directly "acquired" (i.e., computed) on board, and without the intervention of the usual ground-system facilities, the spacecraft can autonomously decide to re-plan acquisitions for itself (in high-performance modalities) or for other platforms in the constellation or affiliated with it, reducing the elapsed time compared with the present approach. For non-EO missions, avoiding the long round trip of transmissions is a major advantage. In general, the saving of resources extends to memory and RF transmission band, reaction time (as in civil protection applications), etc., enlarging the flexibility of missions and improving the final results. SpacePDP main HW and SW characteristics: • Compactness: the size and weight of each module fit in a Eurocard 3U 8HP format with «Inter-Board» connection through a cPCI peripheral bus. • Modularity: the payload is usually composed of several sub-systems. • Flexibility: the coprocessor FPGA, on-board memory, and supported avionic protocols are flexible, allowing different module customizations according to mission needs. • Completeness: the two core boards (CPU and Companion) are enough to obtain a first complete payload data processing system in a basic configuration. • Integrability: the payload data processing system is open to accept custom modules to be connected on its open peripheral bus. • CPU HW module (one or more) based on a RISC processor (LEON2FT, a SPARC V8 architecture, 80 Mips @ 100 MHz on ASIC ATMEL AT697F). • DSP HW module (optional, with more instances) based on a dedicated FPGA architecture to ensure effective multitasking control and to offer high numerical

  6. Reduced Complexity High Performance Array Processing

    DTIC Science & Technology

    1998-01-01

    constrained beamforming,” IEEE Transactions on Aerospace and Electronic Systems, January 1982. [5] J. Scott Goldstein and Irving S. Reed, “Theory of...partially adaptive radar,” IEEE Transactions on Aerospace and Electronic Systems, October 1997. [6] J. Scott Goldstein, Irving S. Reed, and John A...Computers. [7] Barry D. Van Veen, “Eigenstructure-based partially adaptive array design,” IEEE Transactions on Antennas and Propagation, March 1988. [8] J

  7. Failure analysis of high performance ballistic fibers

    NASA Astrophysics Data System (ADS)

    Spatola, Jennifer S.

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mode changes in the fiber fracture when transversely loaded by indenters of different shapes. An experimental design mimicking transverse impact was used to determine any such effects. Three different indenters were used: round, FSP, and razor blade. The indenter height was changed to change the angle of failure tested. Five high performance fibers were examined: Kevlar® KM2, Spectra® 130d, Dyneema® SK-62 and SK-76, and Zylon® 555. Failed fibers were analyzed using an SEM to determine failure mechanisms. The results show that the round and razor blade indenters produced a constant failure strain, as well as failure mechanisms independent of testing angle. The FSP indenter produced a decrease in failure strain as the angle increased. Fibrillation was the dominant failure mechanism at all angles for the round indenter, while through thickness shearing was the failure mechanism for the razor blade. The FSP indenter showed a transition from fibrillation at low angles to through thickness shearing at high angles, indicating that the round and razor blade indenters are extreme cases of the FSP indenter. The failure mechanisms observed with the FSP indenter at various angles correlated with the experimental strain data obtained during fiber testing. This indicates that geometry of the indenter tip in compression is a contributing factor in lowering the failure strain of the high performance fibers.
TEM analysis of the fiber failure mechanisms was also attempted, though without

  8. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  9. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    High performance channel sealant is based on NASA patented cyano and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax, to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosibility to aluminum or titanium was observed.

  10. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before necessary to refill with fresh iodine crystals.

  11. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  12. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  13. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  14. High performance forward swept wing aircraft

    NASA Technical Reports Server (NTRS)

    Koenig, David G. (Inventor); Aoyagi, Kiyoshi (Inventor); Dudley, Michael R. (Inventor); Schmidt, Susan B. (Inventor)

    1988-01-01

    A high performance aircraft capable of subsonic, transonic and supersonic speeds employs a forward swept wing planform and at least one first and second solution ejector located on the inboard section of the wing. A high degree of flow control on the inboard sections of the wing is achieved along with improved maneuverability and control of pitch, roll and yaw. Lift loss is delayed to higher angles of attack than in conventional aircraft. In one embodiment the ejectors may be advantageously positioned spanwise on the wing while the ductwork is kept to a minimum.

  15. High performance thyratron driver with low jitter.

    PubMed

    Verma, Rishi; Lee, P; Springham, S V; Tan, T L; Rawat, R S

    2007-08-01

    We report the design and development of an insulated gate bipolar transistor (IGBT) based high performance driver for operating thyratrons in grounded grid mode. With careful design, the driver meets the specifications of trigger output pulse rise time less than 30 ns, jitter less than +/-1 ns, and time delay less than 160 ns. It produces a -600 V pulse of 500 ns duration (full width at half maximum) at repetition rates ranging from 1 Hz to 1.14 kHz. The developed module also includes heating and biasing units along with protection circuitry in one complete package.

  16. High Performance Polymer Memory and Its Formation

    DTIC Science & Technology

    2007-04-26

    Final Report to AFOSR: High Performance Polymer Memory Device and Its Formation (Fund No. FA9550-04-1-0215), prepared by Prof. Yang Yang...polystyrene (PS). The metal nanoparticles were prepared by the two-phase [Figure 1: current versus bias (-2 to 5 V) of the polymer-film device on glass]...Donor materials such as copper phthalocyanine (CuPc), zinc phthalocyanine (ZnPc), tetracene, and pentacene have been used as donors combined with

  17. Advanced On-Board Processor (AOP). [for future spacecraft applications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Advanced On-board Processor (AOP) uses large scale integration throughout and is the most advanced space-qualified computer of its class in existence today. It was designed to satisfy most spacecraft requirements anticipated over the next several years. The AOP design utilizes custom metallized multigate arrays (CMMA) which have been designed specifically for this computer. This approach provides the most efficient use of circuits; reduces volume, weight, and assembly costs; and provides a significant increase in reliability through the reduction in conventional circuit interconnections. The required 69 CMMA packages are assembled on a single multilayer printed circuit board which, together with its associated connectors, constitutes the complete AOP. This approach also reduces conventional interconnections, further reducing weight, volume, and assembly costs.

  18. Controlled impact demonstration on-board (interior) photographic system

    NASA Technical Reports Server (NTRS)

    May, C. J.

    1986-01-01

    Langley Research Center (LaRC) was responsible for the design, manufacture, and integration of all hardware required for the photographic system used to film the interior of the controlled impact demonstration (CID) B-720 aircraft during actual crash conditions. Four independent power supplies were constructed to operate the ten high-speed 16 mm cameras and twenty-four floodlights. An up-link command system, furnished by Ames Dryden Flight Research Facility (ADFRF), was necessary to activate the power supplies and start the cameras. These events were accomplished by initiation of relays located on each of the photo power pallets. The photographic system performed beyond expectations. All four power distribution pallets with their 20-year-old Minuteman batteries performed flawlessly. All 24 lamps worked. All ten on-board high-speed (400 fps) 16 mm cameras containing good-resolution film data were recovered.

  19. TSP-Based Generic Payload On-Board Software

    NASA Astrophysics Data System (ADS)

    Arberet, P.; Metge, J.-J.; Gras, O.; Crespo, A.

    2009-05-01

    The paper addresses the context and rationale for deciding to develop a TSP-based solution for payload on-board software, highly generic and reusable, under the project named LVCUGEN. It then describes the key design issues and the associated architectural achievements obtained at the end of the LVCUGEN development phase. It provides some inputs on how to instantiate the developed framework when deploying the solution on a target project. Last, the paper presents the status of the project, the forthcoming activities, and the open issues still to be addressed. Some perspectives are provided, in particular the selection of the first space program targeted for deployment of the solution.

  20. Liquid transfer demonstration on board Apollo 14 during transearth coast

    NASA Technical Reports Server (NTRS)

    Abdalla, K. L.; Otto, E. W.; Symons, E. P.; Petrash, D. A.

    1971-01-01

    The transfer of liquid from one container to another in a weightless environment was demonstrated by the crew of Apollo 14. A scale-model liquid-transfer system was used on board the spacecraft during the transearth coast period. The liquid transfer unit consisted of a surface tension baffled tank system containing two baffle designs. Liquid was transferred between tanks with a hand pump operated by the astronaut. The results showed that liquid was efficiently transferred to and from either baffled tank to within two percent of the design value residual liquid without reaching gas ingestion. The liquid-vapor interface in the receiver tank was positioned successfully with the gas located at the vent.

  1. On-board attitude determination for the Topex satellite

    NASA Technical Reports Server (NTRS)

    Dennehy, C. J.; Ha, K.; Welch, R. V.; Kia, T.

    1989-01-01

    This paper presents an overall technical description of the on-board attitude determination system for the Ocean Topography Experiment (Topex) satellite. The stellar-inertial attitude determination system being designed for the Topex satellite utilizes data from a three-axis NASA Standard DRIRU-II as well as data from an Advanced Star Tracker (ASTRA) and a Digital Fine Sun Sensor (DFSS). This system is a modified version of the baseline Multimission Modular Spacecraft (MMS) concept used on the Landsat missions. Extensive simulation and analysis of the MMS attitude determination approach were performed to verify suitability for the Topex application. The modifications to this baseline attitude determination scheme were identified to satisfy the unique Topex mission requirements.

  2. Development of the On-board Aircraft Network

    NASA Technical Reports Server (NTRS)

    Green, Bryan D. W.; Mezu, Okechukwu A.

    2004-01-01

    Phase II will focus on the development of the on-board aircraft networking portion of the testbed, which includes the subnet and router configuration and investigation of QoS issues. This implementation of the testbed will consist of a workstation, which functions as the end system, connected to a router. The router will service two subnets that provide data to the cockpit and the passenger cabin. During the testing, data will be transferred between the end systems and those on both subnets. QoS issues will be identified and a preliminary scheme will be developed. The router will be configured for the testbed network and initial security studies will be initiated. In addition, architecture studies of both the SITA and Inmarsat networks will be conducted.

  3. DAMPE silicon tracker on-board data compression algorithm

    NASA Astrophysics Data System (ADS)

    Dong, Yi-Fan; Zhang, Fei; Qiao, Rui; Peng, Wen-Xi; Fan, Rui-Rui; Gong, Ke; Wu, Di; Wang, Huan-Yu

    2015-11-01

    The Dark Matter Particle Explorer (DAMPE) is an upcoming scientific satellite mission for high energy gamma-ray, electron and cosmic ray detection. The silicon tracker (STK) is a subdetector of the DAMPE payload. It has excellent position resolution (readout pitch of 242 μm), and measures the incident direction of particles as well as charge. The STK consists of 12 layers of Silicon Micro-strip Detector (SMD), equivalent to a total silicon area of 6.5 m2. The total number of readout channels of the STK is 73728, which leads to a huge amount of raw data to be processed. In this paper, we focus on the on-board data compression algorithm and procedure in the STK, and show the results of initial verification by cosmic-ray measurements. Supported by Strategic Priority Research Program on Space Science of Chinese Academy of Sciences (XDA040402) and National Natural Science Foundation of China (1111403027)
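    The record above does not spell out the compression algorithm itself. As a hedged illustration only, the following sketch shows pedestal subtraction followed by zero-suppression clustering, a scheme commonly used for on-board silicon-strip readout; the function name, data layout, and threshold are invented for this example, not taken from the STK design.

```python
# Hedged sketch: pedestal subtraction + zero-suppression clustering,
# a common on-board compression scheme for silicon-strip readout.
# Thresholds and data layout are illustrative, not the actual STK algorithm.

def compress_strips(adc, pedestal, threshold):
    """Return (first_strip, [amplitudes]) clusters of contiguous strips
    whose pedestal-subtracted signal exceeds the noise threshold."""
    clusters = []
    current = None
    for i, raw in enumerate(adc):
        signal = raw - pedestal[i]
        if signal > threshold:
            if current is None:
                current = (i, [])
            current[1].append(signal)
        elif current is not None:
            clusters.append(current)
            current = None
    if current is not None:
        clusters.append(current)
    return clusters

# Example: 12 strips, flat pedestal of 100 ADC counts, threshold 20.
adc = [101, 99, 100, 150, 180, 130, 100, 98, 100, 125, 100, 100]
ped = [100] * 12
print(compress_strips(adc, ped, 20))
# -> [(3, [50, 80, 30]), (9, [25])]: only 2 clusters kept out of 12 strips
```

    Only strips above threshold and their positions are transmitted, which is what makes the 73728-channel raw data volume manageable on board.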

  4. The Dust Impact Monitor on board the Rosetta Lander PHILAE

    NASA Astrophysics Data System (ADS)

    Krueger, Harald; Seidensticker, Klaus; Apathy, Istvan; Fischer, Hans-Herbert; Hetzel, Mareike; Hirn, Attila; Loose, Alexander; Peter, Attila; Podolak, Morris

    The Rosetta spacecraft, launched in 2004, carries the lander spacecraft PHILAE on board, which is supposed to land on the nucleus of comet 67P/Churyumov-Gerasimenko in 2014. The instrument package SESAME is one of the scientific instruments on board PHILAE. The main objectives of SESAME are measurements of the mechanical and electrical properties of the cometary surface and sub-surface material as well as measurements of ice and dust particles emitted from the nucleus. The Dust Impact Monitor (DIM) is a subinstrument of SESAME and is mounted on PHILAE's balcony. The DIM sensor consists of three piezoelectric detectors, each one mounted on the outer side of a cube facing in orthogonal directions (the direction normal to the nucleus surface and two horizontal directions) so that information on the impact direction of the particles can be obtained. The total sensor area of all three detectors is approximately 70 cm². DIM measures impacts of sub-millimeter and millimeter sized particles. Ice and dust particles emitted from the nucleus couple to the cometary gas flow and are accelerated away from the nucleus surface. Depending on particle size, a fraction of the emitted grains falls back to the surface after some time due to gravity while the rest is being ejected into the cometary coma. DIM will be able to detect these backfalling particles (with its sensor pointing normal to the nucleus surface) as well as grains leaving the nucleus on direct trajectories (with the two sensors facing in horizontal directions). The DIM instrument will measure dust fluxes, impact directions, and the speed and size of the impacting particles. We are performing a laboratory calibration program by simulating particle impacts on the sensors, and we present preliminary results from these laboratory experiments.

  5. Challenge of lightning detection with LAC on board Akatsuki spacecraft

    NASA Astrophysics Data System (ADS)

    Takahashi, Yukihiro; Sato, Mitsutero; Imai, Masataka; Yair, Yoav; Fischer, Georg; Aplin, Karen

    2016-04-01

    Even after extensive investigations with spacecraft and ground-based observations, there is still no consensus on the existence of lightning on Venus. It has been reported that the magnetometer on board Venus Express detected whistler-mode waves whose source could be lightning discharges occurring well below the spacecraft. On the other hand, the infrared sensor VIRTIS on Venus Express showed no positive indication of lightning flashes. In order to identify optical flashes caused by electrical discharges in the atmosphere of Venus with an optical intensity of at least 1/10 that of average terrestrial lightning, we built a high-speed optical detector, LAC (Lightning and Airglow Camera), on board the Akatsuki spacecraft. The unique performance of LAC compared to other instruments is its high-speed sampling at 32 μs intervals for all 32 pixels, enabling us to distinguish optical lightning flashes from other pulsing noise. Although the first attempt to insert Akatsuki into orbit around Venus failed in December 2010, the second attempt, carried out on December 7, 2015, was quite successful. We checked out the condition of LAC on January 5, 2016, and it remains as healthy as in 2010. Because the orbit is more elongated than originally planned, we have ~30 min of umbra in which to observe lightning flashes on the night side of Venus every ~10 days, starting in April 2016. Here we report the instrumental status of LAC and the preliminary results of the first attempt to observe optical lightning emissions.

  6. Corporate sponsored education initiatives on board the ISS

    NASA Astrophysics Data System (ADS)

    Durham, Ian T.; Durham, Alyson S.; Pawelczyk, James A.; Brod, Lawrence B.; Durham, Thomas F.

    1999-01-01

    This paper proposes the creation of a corporate sponsored ``Lecture from Space'' program on board the International Space Station (ISS) with funding coming from a host of new technology and marketing spin-offs. This program would meld existing education initiatives in NASA with new corporate marketing techniques. Astronauts in residence on board the ISS would conduct short ten to fifteen minute live presentations and/or conduct interactive discussions carried out by a teacher in the classroom. This concept is similar to a program already carried out during the Neurolab mission on Shuttle flight STS-90. Building on that concept, the interactive simulcasts would be broadcast over the Internet and linked directly to computers and televisions in classrooms worldwide. In addition to the live broadcasts, educational programs and demonstrations can be recorded in space, and marketed and sold for inclusion in television programs, computer software, and other forms of media. Programs can be distributed directly into classrooms as an additional presentation supplement, as well as over the Internet or through cable and broadcast television, similar to the Canadian Discovery Channel's broadcasts of the Neurolab mission. Successful marketing and advertisement can eventually lead to the creation of an entirely new, privately run cottage industry involving the distribution and sale of educationally related material associated with the ISS that would have the potential to become truly global in scope. By targeting areas of expertise and research interest in microgravity, a large curriculum could be developed using space exploration as a unifying theme. Expansion of this concept could enhance objectives already initiated through the International Space University to include elementary and secondary school students. The ultimate goal would be to stimulate interest in space and space related sciences in today's youth through creative educational marketing initiatives while at the

  7. Plasma wave observation using waveform capture in the Lunar Radar Sounder on board the SELENE spacecraft

    NASA Astrophysics Data System (ADS)

    Kasahara, Yoshiya; Goto, Yoshitaka; Hashimoto, Kozo; Imachi, Tomohiko; Kumamoto, Atsushi; Ono, Takayuki; Matsumoto, Hiroshi

    2008-04-01

    The waveform capture (WFC) instrument is one of the subsystems of the Lunar Radar Sounder (LRS) on board the SELENE spacecraft. By taking advantage of a moon orbiter, the WFC is expected to measure plasma waves and radio emissions that are generated around the moon and/or that originate from the sun, the earth, and other planets. It is a high-performance and multifunctional software receiver in which most functions are realized by the onboard software implemented in a digital signal processor (DSP). The WFC consists of a fast-sweep frequency analyzer (WFC-H) covering the frequency range from 1 kHz to 1 MHz and a waveform receiver (WFC-L) in the frequency range from 10 Hz to 100 kHz. By introducing the hybrid IC called PDC in the WFC-H, we created a spectral analyzer with very high time and frequency resolution. In addition, new techniques such as digital filtering, automatic filter selection, and data compression are implemented for data processing of the WFC-L to extract the important data adequately under the severe restriction on the total amount of telemetry data. Because of the flexibility of the instruments, various kinds of observation modes can be achieved, and we expect the WFC to generate a wealth of interesting data.
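    As a hedged illustration of what a software spectral analyzer like the WFC-H fundamentally computes, the sketch below takes the power spectrum of a digitized waveform and locates a test tone. The 2 MHz sampling rate and 125 kHz tone are invented values, and a flight implementation would use an optimized FFT on the DSP rather than this naive DFT.

```python
# Hedged sketch: the core of a digital spectrum analyzer such as WFC-H is a
# power spectrum computed over successive snapshots of the digitized waveform.
# The sampling rate and test tone below are illustrative values only.
import cmath, math

def dft_power(samples):
    """Naive DFT power spectrum (stdlib only; on board this would be an FFT)."""
    n = len(samples)
    spectrum = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spectrum.append(abs(s) ** 2 / n)
    return spectrum

fs = 2_000_000          # 2 MHz sampling -> 1 MHz analysis band (Nyquist)
n = 256
tone = 125_000          # 125 kHz test tone, which falls on an exact bin (bin 16)
x = [math.sin(2 * math.pi * tone * t / fs) for t in range(n)]
power = dft_power(x)
peak_bin = max(range(len(power)), key=power.__getitem__)
print(peak_bin * fs / n)   # -> 125000.0, the recovered tone frequency
```

    Sweeping such snapshots in time yields the time-frequency spectrogram that a fast-sweep analyzer produces.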

  8. High-Speed On-Board Data Processing for Science Instruments: HOPS

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. This three-year effort focused in particular on Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing, making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.
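    The identity underlying the time-domain to frequency-domain move described above is that cross-correlation can be computed through the DFT, turning an O(N²) time-domain operation into O(N log N) with an FFT. The sketch below verifies the equivalence on a toy signal; it is illustrative only and is not the HOPS implementation.

```python
# Hedged sketch: the time-domain vs. frequency-domain equivalence that
# frequency-domain processing exploits. Circular cross-correlation computed
# directly is O(N^2); via the DFT it becomes O(N log N) with an FFT.
# The signals are illustrative toy data.
import cmath, math

def dft(x, inverse=False):
    n = len(x)
    sign = 1j if inverse else -1j
    out = [sum(x[t] * cmath.exp(sign * 2 * math.pi * k * t / n) for t in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def xcorr_time(a, b):
    """Circular cross-correlation, direct O(N^2) form."""
    n = len(a)
    return [sum(a[t] * b[(t + lag) % n] for t in range(n)) for lag in range(n)]

def xcorr_freq(a, b):
    """Same result via conj(DFT(a)) * DFT(b), then inverse DFT (a real)."""
    A, B = dft(a), dft(b)
    return [v.real for v in dft([Ak.conjugate() * Bk for Ak, Bk in zip(A, B)],
                                inverse=True)]

a = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
b = a[-3:] + a[:-3]          # a delayed by 3 samples (circularly)
lag = max(range(8), key=lambda k: xcorr_freq(a, b)[k])
print(lag)   # -> 3: the frequency-domain path recovers the 3-sample delay
```

    With a real FFT the same computation scales to the 4,096-point transforms mentioned above at a small fraction of the time-domain cost.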

  9. DOE High Performance Concentrator PV Project

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  10. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  11. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  12. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.

  13. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  14. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are in general adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  15. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
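    A minimal sketch of the idea in this abstract, with invented mode names, thresholds, and units: select a fault-tolerance configuration from a measured environmental condition and the processing system's sensitivity to it.

```python
# Hedged sketch of the patented concept: reconfigure fault tolerance based on
# a measured environmental condition. The mode names, thresholds, and flux
# units below are invented for illustration, not taken from the patent.

def select_fault_tolerance(radiation_flux, sensitivity):
    """Return a redundancy mode given a measured flux (particles/cm^2/s)
    and the on-board system's sensitivity (upsets per unit flux)."""
    expected_upset_rate = radiation_flux * sensitivity
    if expected_upset_rate < 1e-6:
        return "simplex"            # benign environment: no redundancy
    elif expected_upset_rate < 1e-3:
        return "duplex-compare"     # moderate: detect upsets via lockstep compare
    else:
        return "tmr"                # harsh: triple modular redundancy

print(select_fault_tolerance(1.0, 1e-8))    # quiet orbit segment -> simplex
print(select_fault_tolerance(1e4, 1e-5))    # e.g. radiation-belt pass -> tmr
```

    The point of such reconfiguration is to spend redundancy (and therefore throughput) only when the measured environment actually demands it.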

  16. Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot

    PubMed Central

    Vanhoutte, Erik; Mafrica, Stefano; Ruffier, Franck; Bootsma, Reinoud J.; Serres, Julien

    2017-01-01

    For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M2APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6×10−7 to 1.6×10−2 W·cm−2 (i.e., from 0.2 to 12,000 lux for human vision). Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with measurement frequency up to 5 kHz for the 10 local motion sensors of the M2APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. PMID:28287484
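    A hedged sketch of the cross-correlation variant of the “time of travel” principle described above: estimate the delay between two neighbouring pixels' signals, then divide the inter-pixel viewing angle by that delay to obtain optical flow. The 5 kHz sampling rate follows the article; the pulse shape and 4° pixel pitch are invented for illustration.

```python
# Hedged sketch of the "time of travel" principle: a contrast seen by one
# pixel reaches its neighbour after a delay dt, and the optical flow is the
# inter-pixel viewing angle divided by dt. The pulse shape and 4-degree
# pitch are illustrative, not the M2APix hardware values.

def delay_by_xcorr(p1, p2, dt_s):
    """Estimate the lag (s) of p2 relative to p1 by maximizing the
    sliding cross-correlation (direct form, for short signals)."""
    n = len(p1)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n // 2):
        score = sum(p1[t] * p2[t + lag] for t in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag * dt_s

fs = 5000.0                     # 5 kHz sampling, per the article
dt = 1.0 / fs
pulse = [0, 1, 4, 6, 4, 1, 0]
p1 = pulse + [0] * 13
p2 = [0] * 5 + pulse + [0] * 8  # same contrast, 5 samples later
delay = delay_by_xcorr(p1, p2, dt)
pitch_deg = 4.0                 # assumed inter-pixel viewing angle
print(pitch_deg / delay)        # optical flow in deg/s (about 4000 here)
```

    Thresholding replaces the correlation with a comparison of threshold-crossing times, which is cheaper but, as the article reports, less precise at high speeds.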

  17. Optics of high-performance electron microscopes.

    PubMed

    Rose, H H

    2008-01-01

    During recent years, the theory of charged particle optics, together with advances in fabrication tolerances and experimental techniques, has led to very significant advances in high-performance electron microscopes. Here, we will describe which theoretical tools, inventions, and designs have driven this development. We cover the basic theory of higher-order electron optics and of image formation in electron microscopes. This leads to a description of different methods to correct aberrations by multipole fields and to a discussion of the most advanced designs that take advantage of these techniques. The theory of electron mirrors is developed, and it is shown how this can be used to correct aberrations and to design energy filters. Finally, different types of energy filters are described.

  18. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses, and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  19. A high performance architecture for prolog

    SciTech Connect

    Dobry, T.

    1987-01-01

    Artificial intelligence is entering the mainstream of computer applications and, as techniques are developed and integrated into a wide variety of areas, they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional general-purpose architectures; such an architecture for high performance execution of Prolog programs is presented. The architecture is based on the abstract machine description known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. The ISA is then realized in a microarchitecture and finally in a hardware design.

  20. High-performance architecture for Prolog

    SciTech Connect

    Dobry, T.P.

    1987-01-01

    Artificial intelligence is entering the mainstream of computer applications and, as techniques are developed and integrated into a wide variety of areas, they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a tenfold improvement in performance over conventional, general-purpose architectures. This dissertation presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design.

  1. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
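    The smooth / restrict / coarse-solve / prolong / smooth structure that AMG parallelizes can be illustrated with a geometric two-grid cycle on the 1-D Poisson problem. Note this sketch chooses the coarse grid geometrically (every other point), whereas AMG derives its coarse levels algebraically from the matrix; all problem sizes and sweep counts are illustrative.

```python
# Hedged sketch: a two-grid correction cycle for the 1-D Poisson problem
# -u'' = f with u(0) = u(1) = 0. This is *geometric* multigrid, shown only
# to illustrate the smooth/restrict/coarse-solve/prolong/smooth structure
# that AMG reproduces with algebraically chosen coarse grids.

def jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted Jacobi relaxation for the standard 3-point stencil."""
    for _ in range(sweeps):
        u = [0.0] + [
            (1 - omega) * u[i] + omega * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
            for i in range(1, len(u) - 1)
        ] + [0.0]
    return u

def residual(u, f, h):
    return [0.0] + [f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
                    for i in range(1, len(u) - 1)] + [0.0]

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                        # pre-smooth
    r = residual(u, f, h)
    rc = [0.0] + [0.25 * (r[2*i-1] + 2*r[2*i] + r[2*i+1])  # full-weighting
                  for i in range(1, len(r) // 2)] + [0.0]
    ec = jacobi([0.0] * len(rc), rc, 2*h, sweeps=200)    # approx. coarse solve
    e = [0.0] * len(u)                                   # linear prolongation
    for i in range(1, len(rc) - 1):
        e[2*i] += ec[i]
        e[2*i-1] += 0.5 * ec[i]
        e[2*i+1] += 0.5 * ec[i]
    u = [ui + ei for ui, ei in zip(u, e)]
    return jacobi(u, f, h, sweeps=3)                     # post-smooth

n = 32
h = 1.0 / n
f = [1.0] * (n + 1)              # -u'' = 1, exact solution u(x) = x(1-x)/2
u = [0.0] * (n + 1)
for _ in range(10):
    u = two_grid(u, f, h)
print(max(abs(u[i] - 0.5*(i*h)*(1 - i*h)) for i in range(n + 1)))  # small error
```

    The parallelization challenges the chapter discusses arise precisely in the coarsening step and the smoother, which in classical AMG are sequential.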

  2. High-performance nanostructured MR contrast probes

    PubMed Central

    Hu, Fengqin; Joshi, Hrushikesh M.; Dravid, Vinayak P.; Meade, Thomas J.

    2011-01-01

    Magnetic resonance imaging (MRI) has become a powerful technique in biological molecular imaging and clinical diagnosis. With the rapid progress in nanoscale science and technology, nanostructure-based MR contrast agents are undergoing rapid development. This is in part due to the tuneable magnetic and cellular uptake properties, large surface area for conjugation and favourable biodistribution. In this review, we describe our recent progress in the development of high-performance nanostructured MR contrast agents. Specifically, we report on Gd-enriched nanostructured probes that exhibit T1 MR contrast and superparamagnetic Fe3O4 and CoFe2O4 nanostructures that display T2 MR contrast enhancement. The effects of nanostructure size, shape, assembly and surface modification on relaxivity are described. The potential of these contrast agents for in vitro and in vivo MR imaging with respect to colloidal stability under physiological conditions, biocompatibility, and surface functionality are also evaluated. PMID:20694208

  3. Optics of high-performance electron microscopes*

    PubMed Central

    Rose, H H

    2008-01-01

    During recent years, the theory of charged particle optics, together with advances in fabrication tolerances and experimental techniques, has led to very significant advances in high-performance electron microscopes. Here, we will describe which theoretical tools, inventions and designs have driven this development. We cover the basic theory of higher-order electron optics and of image formation in electron microscopes. This leads to a description of different methods to correct aberrations by multipole fields and to a discussion of the most advanced designs that take advantage of these techniques. The theory of electron mirrors is developed and it is shown how this can be used to correct aberrations and to design energy filters. Finally, different types of energy filters are described. PMID:27877933

  4. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  5. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  6. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. 
That is, the system is able to react
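    A rough sketch of the decomposition scheme described above, with all names and conditions hypothetical: each goal carries an activation condition and an ordered list of gated decompositions, and at execution time the first decomposition whose gate holds in the current state is the one that runs.

```python
# All names hypothetical: a leaf goal runs an action directly; a composite
# goal tries its decompositions in order and runs the first whose gate holds.

class Goal:
    def __init__(self, name, activation=lambda s: True,
                 decompositions=None, action=None):
        self.name = name
        self.activation = activation                # global condition for all decompositions
        self.decompositions = decompositions or []  # list of (gate, [subgoals]) pairs
        self.action = action                        # leaf-level executable, if any

    def execute(self, state):
        # Returns a simple termination indication: True on success, False on failure.
        if not self.activation(state):
            return False
        if self.action is not None:
            return self.action(state)
        for gate, subgoals in self.decompositions:  # gates evaluated in order
            if gate(state):
                return all(g.execute(state) for g in subgoals)
        return False

def set_flag(key):
    # Helper returning a leaf action that records its effect in the state.
    def act(state):
        state[key] = True
        return True
    return act

snap = Goal("snap", action=set_flag("image"))
slew = Goal("slew", activation=lambda s: s["power"] > 10, action=set_flag("pointed"))

observe = Goal("observe", decompositions=[
    (lambda s: not s.get("pointed", False), [slew, snap]),  # must re-point first
    (lambda s: True, [snap]),                               # already pointed
])

state = {"power": 50}
ok = observe.execute(state)
```

    In this toy run, `observe` re-points before imaging because `pointed` is not yet set in the state; a second call would take the already-pointed branch, which is the reactive behaviour the abstract describes.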

  7. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  8. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the capacity fading or failure that results from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded interface at the Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  9. High-performance computing in accelerating structure design and analysis

    NASA Astrophysics Data System (ADS)

    Li, Zenghai; Folwell, Nathan; Ge, Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-03-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R&D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields).

  10. High-Performance Computing in Accelerating Structure Design And Analysis

    SciTech Connect

    Li, Z.H.; Folwell, N.; Ge, Li-Xin; Guetz, A.; Ivanov, V.; Kowalski, M.; Lee, L.Q.; Ng, C.K.; Schussman, G.; Stingelin, L.; Uplenchwar, R.; Wolf, M.; Xiao, L.L.; Ko, K.; /SLAC /PSI, Villigen /Illinois U., Urbana

    2006-06-27

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R&D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long range wakefields).

  11. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.
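    The CFL penalty described above can be made concrete: for an explicit scheme, the largest stable time step is the cell size divided by the fastest signal speed (times a safety factor), so refining the grid shrinks the allowable step proportionally. A minimal sketch with illustrative values:

```python
def cfl_timestep(dx, max_wave_speed, cfl=0.8):
    """Largest stable explicit step: no signal may cross more than
    `cfl` of one cell per time step (Courant-Friedrichs-Lewy)."""
    return cfl * dx / max_wave_speed

# Illustrative numbers only: 1000 km cells and a 1500 km/s fast wave speed.
coarse = cfl_timestep(dx=1.0e6, max_wave_speed=1.5e6)
fine = cfl_timestep(dx=5.0e5, max_wave_speed=1.5e6)   # halving dx halves dt
```

    In 3D, halving the cell size therefore multiplies the cost roughly 16-fold (8x more cells, 2x more steps), which is the non-linear penalty the report refers to and the motivation for implicit time stepping.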

  12. Creating high performance buildings: Lower energy, better comfort

    NASA Astrophysics Data System (ADS)

    Brager, Gail; Arens, Edward

    2015-03-01

    Buildings play a critical role in the challenge of mitigating and adapting to climate change. It is estimated that buildings contribute 39% of the total U.S. greenhouse gas (GHG) emissions [1] primarily due to their operational energy use, and about 80% of this building energy use is for heating, cooling, ventilating, and lighting. An important premise of this paper concerns the connection between energy and comfort. They are inseparable when one talks about high performance buildings. Worldwide data suggests that we are significantly overcooling buildings in the summer, resulting in increased energy use and problems with thermal comfort. In contrast, in naturally ventilated buildings without mechanical cooling, people are comfortable in much warmer temperatures due to shifting expectations and preferences as a result of occupants having a greater degree of personal control over their thermal environment; they have also become more accustomed to variable conditions that closely reflect the natural rhythms of outdoor climate patterns. This has resulted in an adaptive comfort zone that offers significant potential for encouraging naturally ventilated buildings to improve both energy use and comfort. Research on other forms of individualized control through low-energy personal comfort systems (desktop fans, foot warmers, and heated and cooled chairs) has also demonstrated enormous potential for improving both energy and comfort performance. Studies have demonstrated high levels of comfort with these systems while ambient temperatures ranged from 64-84°F. Energy and indoor environmental quality are inextricably linked, and must both be important goals of a high performance building.

  13. Creating high performance buildings: Lower energy, better comfort

    SciTech Connect

    Brager, Gail; Arens, Edward

    2015-03-30

    Buildings play a critical role in the challenge of mitigating and adapting to climate change. It is estimated that buildings contribute 39% of the total U.S. greenhouse gas (GHG) emissions [1] primarily due to their operational energy use, and about 80% of this building energy use is for heating, cooling, ventilating, and lighting. An important premise of this paper concerns the connection between energy and comfort. They are inseparable when one talks about high performance buildings. Worldwide data suggests that we are significantly overcooling buildings in the summer, resulting in increased energy use and problems with thermal comfort. In contrast, in naturally ventilated buildings without mechanical cooling, people are comfortable in much warmer temperatures due to shifting expectations and preferences as a result of occupants having a greater degree of personal control over their thermal environment; they have also become more accustomed to variable conditions that closely reflect the natural rhythms of outdoor climate patterns. This has resulted in an adaptive comfort zone that offers significant potential for encouraging naturally ventilated buildings to improve both energy use and comfort. Research on other forms of individualized control through low-energy personal comfort systems (desktop fans, foot warmers, and heated and cooled chairs) has also demonstrated enormous potential for improving both energy and comfort performance. Studies have demonstrated high levels of comfort with these systems while ambient temperatures ranged from 64–84°F. Energy and indoor environmental quality are inextricably linked, and must both be important goals of a high performance building.

  14. On-board Science Understanding: NASA Ames' Efforts

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Cheeseman, Peter; Gulick, Virginia; Wolf, David; Gazis, Paul; Benedix, Gretchen; Buntine, Wray; Glymour, Clark; Pedersen, Liam; Ruzon, Mark

    1998-01-01

    In the near future NASA intends to explore various regions of our solar system using robotic devices such as rovers, spacecraft, airplanes, and/or balloons. Such platforms will likely carry imaging devices, and a variety of analytical instruments intended to evaluate the chemical and mineralogical nature of the environment(s) that they encounter. Historically, mission operations have involved: (1) return of scientific data from the craft; (2) evaluation of the data by space scientists; (3) recommendations of the scientists regarding future mission activity; (4) commands for achieving these activities being transmitted to the craft; and (5) the activity being undertaken. This cycle is then repeated for the duration of the mission with command opportunities once or perhaps twice per day. In a rapidly changing environment, such as might be encountered by a rover traversing hundreds of meters a day or a spacecraft encountering an asteroid, this historical cycle is not amenable to rapid long range traverses, discovery of novelty, or rapid response to any unexpected situations. In addition to real-time response issues, the nature of imaging and/or spectroscopic devices is such that tremendous data volumes can be acquired, for example during a traverse. However, such data volumes can rapidly exceed on-board memory capabilities before they can be transmitted to Earth. Additionally, the necessary communication bandwidths are restrictive enough that only a small portion of these data can actually be returned to Earth. Such scenarios clearly require that some crucial decisions be made on-board by these robotic explorers. These decisions transcend the electromechanical control, health, and navigation issues associated with robotic operations. Instead they focus upon a long term goal of automating scientific discovery based upon data returned by sensors of the robot craft. Such an approach would eventually enable it to understand what is interesting

  15. A quality assurance program for the on-board imagers.

    PubMed

    Yoo, Sua; Kim, Gwe-Ya; Hammoud, Rabih; Elder, Eric; Pawlicki, Todd; Guan, Huaiqun; Fox, Timothy; Luxton, Gary; Yin, Fang-Fang; Munro, Peter

    2006-11-01

    To develop a quality assurance (QA) program for the On-Board Imager (OBI) system and to summarize the results of these QA tests over extended periods from multiple institutions. Both the radiographic and cone-beam computed tomography (CBCT) modes of operation have been evaluated. The QA programs from four institutions have been combined to generate a series of tests for evaluating the performance of the On-Board Imager. The combined QA program consists of three parts: (1) safety and functionality, (2) geometry, and (3) image quality. Safety and functionality tests evaluate the functionality of safety features and the clinical operation of the entire system during the tube warm-up. Geometry QA verifies the geometric accuracy and stability of the OBI/CBCT hardware/software. Image quality QA monitors spatial resolution and contrast sensitivity of the radiographic images. Image quality QA for CBCT additionally includes tests for Hounsfield unit (HU) linearity, HU uniformity, spatial linearity, and scan slice geometry. All safety and functionality tests passed on a daily basis. The average accuracy of the OBI isocenter was better than 1.5 mm with a range of variation of less than 1 mm over 8 months. The average accuracy of arm positions in the mechanical geometry QA was better than 1 mm, with a range of variation of less than 1 mm over 8 months. Measurements of other geometry QA tests showed stable results within tolerance throughout the test periods. Radiographic contrast sensitivity ranged between 2.2% and 3.2% and spatial resolution ranged between 1.25 and 1.6 lp/mm. Over four months the CBCT images showed stable spatial linearity, scan slice geometry, contrast resolution (1%; <7 mm disk) and spatial resolution (>6 lp/cm). The HU linearity was within ±40 HU for all measurements. By combining test methods from multiple institutions, we have developed a comprehensive, yet practical, set of QA tests for the OBI system. 
Use of the tests over extended periods show that
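    As a sketch of how such tolerance checks might be automated, the tolerance values below are modeled on the figures quoted in the abstract, but the field names and protocol are hypothetical:

```python
# Hypothetical field names; tolerance values are modeled on the figures
# quoted in the abstract, not any institution's actual protocol.
TOLERANCES = {
    "isocenter_accuracy_mm": 1.5,   # OBI isocenter agreement
    "arm_position_mm": 1.0,         # mechanical arm positioning accuracy
    "hu_linearity_hu": 40.0,        # CBCT Hounsfield-unit deviation from nominal
}

def qa_failures(measurements, tolerances=TOLERANCES):
    """Return only the measurements whose magnitude exceeds the tolerance."""
    return {name: value for name, value in measurements.items()
            if abs(value) > tolerances[name]}

daily = {"isocenter_accuracy_mm": 0.9, "arm_position_mm": 0.4,
         "hu_linearity_hu": 52.0}
out_of_tolerance = qa_failures(daily)   # only the HU reading fails here
```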

  16. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, whether natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims to achieve maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to preserve the high strength-to-weight ratio that is the key advantage of high performance fibers. This study involves developing three different types of sheathing. The product of interest to be protected from UV is a braid of PBO. The first approach is extruding a sheath of low density polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane from polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  17. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment for accessing this data: through the NCI supercomputer; through a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  18. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling designed specifically for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve larger problems at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with
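    The inversion state database described in point (1) might, in miniature, look like the following sketch; the step names and record fields are illustrative, not SISYPHUS's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative workflow steps for one nonlinear inversion iteration.
WORKFLOW = ["forward_simulation", "misfit_and_adjoint_sources",
            "adjoint_simulation", "gradient", "model_update"]

@dataclass
class IterationState:
    iteration: int
    misfit: Optional[float] = None
    completed: List[str] = field(default_factory=list)

    def mark(self, step, misfit=None):
        # Record a finished workflow step (and the misfit, when it was computed).
        self.completed.append(step)
        if misfit is not None:
            self.misfit = misfit

def next_step(state):
    """Resume point after a crash or restart: the first step of the
    workflow not yet recorded as completed for this iteration."""
    for step in WORKFLOW:
        if step not in state.completed:
            return step
    return None                          # iteration finished

s = IterationState(iteration=3)
s.mark("forward_simulation")
s.mark("misfit_and_adjoint_sources", misfit=0.42)
```

    Persisting records like this is what lets a long-running inversion resume mid-iteration instead of recomputing expensive forward solves.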

  19. The curing of high-performance concrete

    NASA Astrophysics Data System (ADS)

    Meeks, Kenneth Wayne

    This dissertation describes the latest information, technology, and research on the curing of high performance concrete (HPC). Expanded somewhat beyond the scope of HPC, it examines the current body of knowledge on the effects of various curing regimes on concrete. The significance and importance of curing are discussed as well as the various definitions of HPC. The current curing requirements, standards, and criteria as proposed by ACI, as well as those of other countries, are reviewed and discussed. The current prescriptive curing requirements may not be applicable to high performance concrete. The research program reported in this dissertation looked at one approach to development of curing criteria for this relatively new class of concrete. The program applies some of the basic concepts of the methodology developed by the German researcher, H. K. Hilsdorf, to the curing of HPC with the objective to determine minimum curing durations for adequate strength development. The approach is to determine what fraction of the standard-cured 28-day strength has to be attained at the end of the curing period to assure that the design strength is attained in the interior of the member. An innovative direct tension test was developed to measure the strength at specific depths from the drying surface of small mortar cylinders (50 x 127 mm (2 x 5 in.)). Two mortar mixtures were investigated, w/c = 0.30 and w/c = 0.45, and three different moist curing regimes, 1-day, 3-day, and 7-day. Specimens were stored in two environmental chambers at 25°C, 50% RH and 25°C, 70% RH, until testing at the age of 28 days. Direct tensile tests were conducted using steel disks epoxied to the ends of the specimens. Also, the penetration of the drying front was calculated from the drying data using porosity and degree of hydration relationships. The major observation from these tests was that adequate strength is attained in both mortar mixtures with only one day of moist curing. The drying

  20. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, is comprised of many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the amount of added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made. This approach increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use. Only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer’s resources. From the perspective of Bechtel Corporation and the Idaho

  1. Medical emergencies on board commercial airlines: is documentation as expected?

    PubMed Central

    2012-01-01

    Introduction: The purpose of this study was to perform a descriptive, content-based analysis of the different forms of documentation for in-flight medical emergencies that are currently provided in the emergency medical kits on board commercial airlines. Methods: Passenger airlines in the World Airline Directory were contacted between March and May 2011. For each participating airline, sample in-flight medical emergency documentation forms were obtained. All items in the sample documentation forms were subjected to a descriptive analysis and compared to a sample "medical incident report" form published by the International Air Transport Association (IATA). Results: A total of 1,318 airlines were contacted. Ten airlines agreed to participate in the study and provided a copy of their documentation forms. A descriptive analysis revealed a total of 199 different items, which were summarized into five sub-categories: non-medical data (63), signs and symptoms (68), diagnosis (26), treatment (22) and outcome (20). Conclusions: The data in this study illustrate a large variation in the documentation of in-flight medical emergencies by different airlines. A higher degree of standardization is preferable to increase data quality in epidemiologic aeromedical research in the future. PMID:22397530

  2. Developing safety signs for children on board trains.

    PubMed

    Waterson, Patrick; Pilcher, Cara; Evans, Sian; Moore, Jill

    2012-01-01

    Every year a number of young children are injured as a result of accidents that occur on board trains in Great Britain. These accidents range from being caught in internal doors, through to injuries caused by using seats. We describe our efforts to design a new set of safety signs in order to help prevent the occurrence of these types of accident. The research was funded under a Rail Safety and Standards Board (RSSB) managed UK Department for Transport research programme and was carried out in collaboration with Loughborough University. The study involved analysis of industry accident incidence data and running a set of classroom discussions with young school children (aged 5-10, n=210). The classroom discussions initially involved showing them examples of a new design prototype sign alongside existing train signs and gathering the requirements for new designs. A second set of classroom discussions with these children was used to evaluate the new signs based on the outcomes from earlier discussions. We describe our findings alongside a set of outline guidelines for the design of safety signs for young children. A final section considers the main methodological and other lessons learnt from the study, alongside study limitations and possibilities for future research.

  3. Reconfigurable modular computer networks for spacecraft on-board processing

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    The core electronics subsystems on unmanned spacecraft, which have been sent over the last 20 years to investigate the moon, Mars, Venus, and Mercury, have progressed through an evolution from simple fixed controllers and analog computers in the 1960's to general-purpose digital computers in current designs. This evolution is now moving in the direction of distributed computer networks. Current Voyager spacecraft already use three on-board computers. One is used to store commands and provide overall spacecraft management. Another is used for instrument control and telemetry collection, and the third computer is used for attitude control and scientific instrument pointing. An examination of the control logic in the instruments shows that, for many, it is cost-effective to replace the sequencing logic with a microcomputer. The Unified Data System architecture considered consists of a set of standard microcomputers connected by several redundant buses. A typical self-checking computer module will contain 23 RAMs, two microprocessors, one memory interface, three bus interfaces, and one core building block.

  4. On-Board Supercapacitor Energy Storage: Sizing Considerations

    NASA Astrophysics Data System (ADS)

    Latkovskis, L.; Sirmelis, U.; Grigans, L.

    2012-01-01

    The paper considers the problem of choosing the optimum size for an on-board energy storage system (ESS) based on supercapacitors (SCs), taking into account both the braking energy and the braking power of an electric vehicle. The authors derive equations for calculating the minimum SC number in the bank and the optimum depth of its discharge. The theory is exemplified by the Škoda 24Tr trolleybus. In addition, simulation of the ESS mathematical model is used to study the dependence of the saved braking energy on the SC number at the optimum discharge depth. The research shows that a reduced number of SCs may be used as a compromise solution between the ESS efficiency and its cost. It was found that in most cases the optimum discharge depth is much higher than 0.5, the value recommended by SC manufacturers and often met in the literature.
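
    The sizing logic can be sketched with a simple energy balance (illustrative values and our own simplified relations, not the paper's derivation): a cell discharged from V_max down to (1 − d)·V_max releases the fraction 1 − (1 − d)² of its stored energy E = CV²/2, which fixes the minimum cell count for a given braking energy.

    ```python
    # Illustrative sizing sketch (not the paper's equations): minimum number of
    # supercapacitor cells needed to absorb a given braking energy, as a function
    # of the voltage depth of discharge d = (V_max - V_min) / V_max.
    import math

    def usable_energy_fraction(d):
        """Fraction of the stored energy E = C*V^2/2 released when the cell
        voltage swings from V_max down to (1 - d) * V_max."""
        return 1.0 - (1.0 - d) ** 2

    def min_cell_count(e_brake_J, cap_F, v_rated, d):
        """Smallest integer number of cells whose usable energy covers e_brake_J."""
        e_cell = 0.5 * cap_F * v_rated ** 2 * usable_energy_fraction(d)
        return math.ceil(e_brake_J / e_cell)

    # Hypothetical example: 1 MJ of braking energy, 3000 F cells rated at 2.7 V.
    # Discharging to half voltage (d = 0.5) already recovers 75 % of the energy;
    # a deeper discharge shrinks the bank, as the abstract's conclusion suggests.
    n_half = min_cell_count(1.0e6, 3000.0, 2.7, 0.5)
    n_deep = min_cell_count(1.0e6, 3000.0, 2.7, 0.8)
    ```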

  5. WFI electronics and on-board data processing

    NASA Astrophysics Data System (ADS)

    Plattner, Markus; Albrecht, Sebastian; Bayer, Jörg; Brandt, Soeren; Drumm, Paul; Hälker, Olaf; Kerschbaum, Franz; Koch, Anna; Kuvvetli, Irfan; Meidinger, Norbert; Ott, Sabine; Ottensamer, Roland; Reiffers, Jonas; Schanz, Thomas; Skup, Konrad; Steller, Manfred; Tenzer, Chris; Thomas, Chris

    2016-07-01

    The Wide Field Imager is one of two instruments on board the future ATHENA X-ray observatory. Its main scientific objective is to perform a sky survey in the energy range of 0.2 keV up to 15 keV with an end-of-life spectral resolution (FWHM) better than 170 eV (at 7 keV) and a frame rate of at least 200 Hz. The field of view will be 40 arcmin squared; a focal plane array with four large sensors, each 512 × 512 pixels, will therefore be developed. Additionally, a fast detector with 64 × 64 pixels and a frame rate of 12.5 kHz will be implemented to provide high count rate detection of bright sources. The data processing electronics within the WFI instrument is distributed over several subsystems: the DEPFET sensors, sensitive in the X-ray energy regime, and the front-end electronics are located inside the Camera Head. Data pre-processing inside the Detector Electronics will be performed in an FPGA-based frame processor. FPGA-external memory will be used to store offset and noise maps, for which memory controllers have to be developed. Fast read and write access to the maps, combined with robustness against radiation damage (e.g. bit flips), has to be ensured by the frame-processor design.
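
    How the offset and noise maps enter the pre-processing can be sketched in a few lines (illustrative logic only, not the WFI firmware): subtract the per-pixel offset from each raw frame, then flag pixels whose corrected signal exceeds k times the per-pixel noise as X-ray event candidates. The map values and threshold below are made up.

    ```python
    # Hedged sketch of frame pre-processing with offset and noise maps
    # (illustrative, not the actual WFI frame-processor logic): subtract the
    # per-pixel offset, then flag pixels above k sigma of that pixel's noise.

    def find_event_candidates(frame, offset_map, noise_map, k=5.0):
        """Return (row, col, signal) for pixels whose offset-corrected signal
        exceeds k times that pixel's noise."""
        events = []
        for r, row in enumerate(frame):
            for c, raw in enumerate(row):
                signal = raw - offset_map[r][c]
                if signal > k * noise_map[r][c]:
                    events.append((r, c, signal))
        return events

    # Tiny 3x3 example: flat offset of 100 ADU, 2 ADU noise, one bright pixel.
    offset = [[100.0] * 3 for _ in range(3)]
    noise = [[2.0] * 3 for _ in range(3)]
    frame = [[101.0, 99.0, 100.0],
             [100.0, 150.0, 102.0],
             [98.0, 100.0, 101.0]]
    events = find_event_candidates(frame, offset, noise)
    ```

    A radiation-tolerant design would additionally protect the maps themselves (e.g. with ECC or scrubbing), which is the robustness requirement the abstract mentions.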

  6. The ALTCRISS Project On Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Casolino, M.; Altamura, F.; Minori, M.; Picozza, P.; Fuglesang, C.; Galper, A.; Popov, A.; Benghin, V.; Petrov, V. M.

    2006-01-01

    The ALTCRISS project aims to perform a long-term survey of the radiation environment on board the International Space Station. Measurements are being performed with active and passive devices in different locations and orientations of the Russian segment of the station. The goal is to perform a detailed evaluation of the differences in particle fluence and nuclear composition due to different shielding materials and the attitude of the station. The Sileye-3/Alteino detector is used to identify nuclei up to iron in the energy range above approximately 60 MeV/n; a number of passive dosimeters (TLDs, CR39) are also placed in the same location as the Sileye-3 detector. Polyethylene shielding is periodically interposed in front of the detectors to evaluate the effectiveness of shielding on the nuclear component of the cosmic radiation. The project was submitted to ESA in reply to the 2004 Life and Physical Sciences AO and began in December 2005. Dosimeters and data cards are rotated every six months: up to now, three launches of dosimeters and data cards have been performed, with returns at the end of Expeditions 12 and 13.

  7. The design of high-performance gliders

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Heuermann, V.

    1985-01-01

    A high-performance glider is defined as a glider designed to carry the pilot a given distance in a minimum of time, under conditions that are as favourable as possible. The present investigation aims to show approaches for enhancing the cross-country cruising speed, giving attention to the difficulties the design engineer will have to overcome. The characteristics of cross-country flight and their relation to the cruising speed are discussed, and mathematical expressions are provided for the cruising speed, the sinking speed, and the optimum gliding speed. The effect of aspect ratio and wing loading on the cruising speed is illustrated with the aid of a graph. Trends in glider development are explored, taking into consideration the design of laminar profiles, the reduction of profile drag by plain flaps, and the variation of wing loading during flight. A number of suggestions are made for obtaining gliders with improved performance.
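
    The relation between sinking speed and cruising speed can be sketched with the classic cross-country averaging argument (an assumed quadratic sink polar with made-up coefficients, not the paper's exact expressions): flight time splits between gliding at speed V while sinking at w_s(V), and regaining the lost height in thermals at climb rate w_c, giving V_cc = V / (1 + w_s(V)/w_c).

    ```python
    # Illustrative cross-country speed sketch (assumed quadratic sink polar with
    # made-up but realistic-magnitude coefficients, not the paper's formulas).

    def sink_rate(v, a=2.0e-4, b=0.012, c=0.8):
        """Simple quadratic glider polar w_s(V) = a*V^2 - b*V + c (V, w_s in m/s)."""
        return a * v * v - b * v + c

    def cruise_speed(v_glide, w_climb):
        """Average cross-country speed: V_cc = V / (1 + w_s(V)/w_c)."""
        return v_glide / (1.0 + sink_rate(v_glide) / w_climb)

    # Stronger thermals raise the achievable average cross-country speed.
    v = 40.0  # m/s glide speed between thermals
    weak, strong = cruise_speed(v, 1.0), cruise_speed(v, 3.0)
    ```

    Reducing w_s(V) at high V, via higher aspect ratio, higher wing loading, or lower profile drag, is exactly the design lever the abstract discusses.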

  8. High performance vapour-cell frequency standards

    NASA Astrophysics Data System (ADS)

    Gharavipour, M.; Affolderbach, C.; Kang, S.; Bandi, T.; Gruet, F.; Pellaton, M.; Mileti, G.

    2016-06-01

    We report our investigations of a compact high-performance rubidium (Rb) vapour-cell clock based on microwave-optical double resonance (DR). These studies are done in both DR continuous-wave (CW) and Ramsey schemes using the same Physics Package (PP), with the same Rb vapour cell and a magnetron-type cavity with only 45 cm3 external volume. In the CW-DR scheme, we demonstrate a DR signal with a contrast of 26% and a linewidth of 334 Hz; in the Ramsey-DR mode, Ramsey signals with a higher contrast of up to 35% and a linewidth of 160 Hz have been demonstrated. Short-term stabilities of 1.4×10-13 τ-1/2 and 2.4×10-13 τ-1/2 are measured for the CW-DR and Ramsey-DR schemes, respectively. In Ramsey-DR operation, because the light and microwave interactions are separated in time, the light-shift effect is suppressed, which improves the long-term clock stability compared to CW-DR operation. Implementations in miniature atomic clocks are considered.
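
    The quoted short-term stabilities follow the standard white-frequency-noise law σ_y(τ) = σ₁·τ^(−1/2). A quick sketch using the abstract's coefficients (the scaling law itself is standard, not specific to this paper) shows the averaging time needed to reach a target stability:

    ```python
    # White-frequency-noise Allan deviation scaling, using the two tau^-1/2
    # coefficients quoted in the abstract (the law itself is the standard one).

    def allan_dev(sigma_1, tau):
        """Allan deviation at averaging time tau (s) for sigma_y = sigma_1 * tau^-1/2."""
        return sigma_1 * tau ** -0.5

    def tau_to_reach(sigma_1, target):
        """Averaging time at which sigma_y(tau) drops to `target`."""
        return (sigma_1 / target) ** 2

    cw, ramsey = 2.4e-13, 1.4e-13   # CW-DR and Ramsey-DR coefficients
    # At tau = 100 s both schemes are already in the low 1e-14 range.
    ```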

  9. High performance techniques for space mission scheduling

    NASA Technical Reports Server (NTRS)

    Smith, Stephen F.

    1994-01-01

    In this paper, we summarize current research at Carnegie Mellon University aimed at development of high performance techniques and tools for space mission scheduling. Similar to prior research in opportunistic scheduling, our approach assumes the use of dynamic analysis of problem constraints as a basis for heuristic focusing of problem solving search. This methodology, however, is grounded in representational assumptions more akin to those adopted in recent temporal planning research, and in a problem solving framework which similarly emphasizes constraint posting in an explicitly maintained solution constraint network. These more general representational assumptions are necessitated by the predominance of state-dependent constraints in space mission planning domains, and the consequent need to integrate resource allocation and plan synthesis processes. First, we review the space mission problems we have considered to date and indicate the results obtained in these application domains. Next, we summarize recent work in constraint posting scheduling procedures, which offer the promise of better future solutions to this class of problems.
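
    The idea of constraint posting can be sketched in miniature (our illustration, not CMU's actual scheduler): when two activities compete for the same unit resource, the scheduler posts the precedence constraint that leaves the larger temporal slack in the solution constraint network.

    ```python
    # Minimal constraint-posting sketch (illustrative, not the paper's system):
    # order two activities competing for a unit resource by posting the
    # precedence constraint that preserves more temporal slack.

    def slack_if_before(first, second):
        """Temporal slack remaining if `first` precedes `second`.
        Activities are (earliest_start, latest_end, duration) tuples."""
        est1, _, d1 = first
        _, let2, d2 = second
        return let2 - (est1 + d1 + d2)

    def post_ordering(a, b):
        """Choose the ordering with the larger remaining slack, or None if
        neither ordering is temporally feasible."""
        s_ab, s_ba = slack_if_before(a, b), slack_if_before(b, a)
        if max(s_ab, s_ba) < 0:
            return None
        return ("a_before_b", s_ab) if s_ab >= s_ba else ("b_before_a", s_ba)

    # Example: A has the tighter deadline, so scheduling A first keeps more slack.
    A = (0, 10, 4)   # must finish by t=10
    B = (0, 20, 5)
    decision = post_ordering(A, B)
    ```

    A real mission scheduler propagates each posted constraint through the full temporal network before committing, which is the dynamic constraint analysis the abstract describes.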

  10. Development of high performance BWR spacer

    SciTech Connect

    Morooka, Shinichi; Shirakawa, Kenetu; Mitutake, Tohru; Yamamoto, Yasushi; Yano, Takashi; Kimura, Jiro

    1996-07-01

    The spacer has a significant effect on the thermal-hydraulic performance of a BWR fuel assembly. The purpose of this study is to develop a new BWR spacer with high critical power and low pressure drop. The developed high performance spacer is a ferrule-type spacer with twisted tape and an improved flow tab, called the CYCLONE spacer. Critical power and pressure drop have been measured at BEST (BWR Experimental Loop for Stability and Transient test) of Toshiba Corporation. The test bundle consists of electrically heated rods in a 4x4 array configuration. These heater rods are indirectly heated. The heated length and outer diameter of the heater rods, as well as the number and axial locations of the spacers, are the same as those of a BWR fuel assembly. The axial power shape is a stepped cosine with a maximum peaking factor of 1.4. Two test assemblies with different radial power distributions have been used: one has the maximum-power rods at the center of the test assembly, and the other has them near the channel wall. The results show that the critical power of the CYCLONE spacer is 10 to 25% higher than that of the ferrule spacers, while the pressure drop of the CYCLONE spacer is nearly equal to that of the ferrule spacer.

  11. Low-Cost High-Performance MRI

    PubMed Central

    Sarracanie, Mathieu; LaPierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5–3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756
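
    The operating field sets the proton Larmor frequency f = (γ/2π)·B. A quick check with the standard gyromagnetic ratio (a textbook constant, not a number from this paper) shows why 6.5 mT detection electronics differ so much from a 3 T clinical scanner, and confirms the "more than 450 times lower" field ratio:

    ```python
    # Proton Larmor frequency f = (gamma/2pi) * B, using the standard
    # gyromagnetic ratio for 1H (textbook constant, not from the paper).

    GAMMA_H = 42.577478461e6  # proton gyromagnetic ratio / 2pi, in Hz per tesla

    def larmor_hz(b_tesla):
        return GAMMA_H * b_tesla

    f_ulf = larmor_hz(6.5e-3)   # ultra-low-field scanner: a few hundred kHz
    f_3t = larmor_hz(3.0)       # clinical 3 T scanner: > 100 MHz
    ratio = 3.0 / 6.5e-3        # field ratio, > 450 as the abstract states
    ```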

  12. Towards high performance inverted polymer solar cells

    NASA Astrophysics Data System (ADS)

    Gong, Xiong

    2013-03-01

    Bulk heterojunction polymer solar cells that can be fabricated by solution processing techniques are under intense investigation in both academic institutions and industrial companies because of their potential to enable mass production of flexible and cost-effective alternatives to silicon-based electronics. Despite the envisioned advantages and recent technological advances, the performance of polymer solar cells is so far still inferior to that of inorganic counterparts in terms of efficiency and stability. There are many factors limiting the performance of polymer solar cells. Among them, the optical and electronic properties of the materials in the active layer, the device architecture, and the elimination of PEDOT:PSS are the most determining factors in the overall performance. In this presentation, I will show how we approach high performance polymer solar cells. For example, by developing novel materials, fabricating polymer photovoltaic cells with an inverted device structure, and eliminating PEDOT:PSS, we were able to observe over 8.4% power conversion efficiency from inverted polymer solar cells.
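
    The quoted efficiency decomposes by the standard relation PCE = (V_oc · J_sc · FF) / P_in. The device parameters below are hypothetical round numbers of roughly the magnitude typical for 8%-class polymer cells, not values from this talk:

    ```python
    # Standard solar-cell efficiency relation PCE = (Voc * Jsc * FF) / P_in.
    # Device numbers are hypothetical, chosen only to land near the 8.4 % class.

    def pce(voc_V, jsc_mA_cm2, ff, p_in_mW_cm2=100.0):
        """PCE in percent; default P_in is standard AM1.5G illumination
        (100 mW/cm^2). Voc [V] * Jsc [mA/cm^2] gives mW/cm^2 directly."""
        return 100.0 * (voc_V * jsc_mA_cm2 * ff) / p_in_mW_cm2

    eta = pce(voc_V=0.78, jsc_mA_cm2=15.5, ff=0.70)
    ```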

  13. High performance graphene oxide based rubber composites.

    PubMed

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in preventing the aggregation of GO sheets but also acts as an interface bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable with those of the SBR composite reinforced with 13.1 vol.% carbon black (CB), with a lower mass density and a good gas barrier ability as well. The present work also showed that the GO-silica/SBR composite exhibits outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for green tire applications, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications.

  14. High Performance Graphene Oxide Based Rubber Composites

    NASA Astrophysics Data System (ADS)

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-08-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in preventing the aggregation of GO sheets but also acts as an interface bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable with those of the SBR composite reinforced with 13.1 vol.% carbon black (CB), with a lower mass density and a good gas barrier ability as well. The present work also showed that the GO-silica/SBR composite exhibits outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for green tire applications, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications.

  15. High Performance Graphene Oxide Based Rubber Composites

    PubMed Central

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in preventing the aggregation of GO sheets but also acts as an interface bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable with those of the SBR composite reinforced with 13.1 vol.% carbon black (CB), with a lower mass density and a good gas barrier ability as well. The present work also showed that the GO-silica/SBR composite exhibits outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for green tire applications, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  16. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance-per-dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  17. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
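
    Why scaling frequency during communication phases saves energy can be shown with a toy model (our illustration with made-up power figures, not the thesis's model): dynamic power falls roughly as f·V² (so ~f³ when voltage scales with frequency), while communication time is largely insensitive to CPU frequency.

    ```python
    # Toy DVFS energy model (illustrative power figures, not the thesis's data):
    # dynamic power ~ f^3, static power fixed, communication time unchanged
    # by CPU frequency, so scaling during communication is nearly free.

    def phase_energy(t_compute, t_comm, f_scale):
        """Energy (J) of one compute+communicate phase when the CPU runs the
        communication phase at reduced relative frequency 0 < f_scale <= 1."""
        P_DYN, P_STATIC = 80.0, 40.0  # watts, illustrative
        e_comp = (P_DYN + P_STATIC) * t_compute            # compute at full speed
        e_comm = (P_DYN * f_scale ** 3 + P_STATIC) * t_comm
        return e_comp + e_comm

    baseline = phase_energy(t_compute=1.0, t_comm=1.0, f_scale=1.0)
    scaled = phase_energy(t_compute=1.0, t_comm=1.0, f_scale=0.5)
    saving = 1.0 - scaled / baseline   # ~29 % in this toy case, with no slowdown
    ```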

  18. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  19. High Performance Anion Chromatography of Gadolinium Chelates.

    PubMed

    Hajós, Peter; Lukács, Diana; Farsang, Evelin; Horváth, Krisztian

    2016-11-01

    A high performance anion chromatography (HPIC) method to separate ionic Gd chelates, [Formula: see text], [Formula: see text], [Formula: see text] and free matrix anions was developed. At alkaline pH, polydentate complexing agents such as ethylene-diamine-tetraacetate, diethylene-triamine pentaacetate and trans-1,2-diamine-cyclohexane-tetraacetate tend to form stable Gd chelate anions that can be separated by anion exchange. Separations were studied in simple isocratic chromatographic runs over a wide range of pH and carbonate eluent concentration using suppressed conductivity detection. The ion exchange and complex-forming equilibria were quantitatively described and demonstrated in order to understand the major factors controlling the selectivity of Gd chelates. Parameters of optimized resolution between concurrent ions were presented on a 3D resolution surface. The applicability of the developed method is demonstrated by the simultaneous analysis of Gd chelates and organic/inorganic anions. Inductively coupled plasma atomic emission spectroscopy (ICP-AES) analysis was used to confirm the HPIC results for Gd. Collection protocols for the heart-cutting procedure of chromatograms were applied. SPE procedures were also developed, not only to extract traces of free gadolinium ions from samples, but also to remove the high level of interfering anions in complex matrices. The limit of detection, recovery and linearity of the method were also presented.
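
    The resolution being optimized here is conventionally computed as R_s = 2(t₂ − t₁)/(w₁ + w₂) from retention times and baseline peak widths. The peak values below are illustrative, not the paper's data:

    ```python
    # Standard chromatographic resolution between two adjacent peaks,
    # Rs = 2 * (t2 - t1) / (w1 + w2). Peak values are illustrative only.

    def resolution(t1, t2, w1, w2):
        """Resolution from retention times and baseline peak widths (same units)."""
        return 2.0 * (t2 - t1) / (w1 + w2)

    # Two hypothetical adjacent Gd-chelate peaks; Rs > 1.5 means baseline-resolved.
    rs = resolution(t1=6.4, t2=7.6, w1=0.5, w2=0.6)
    ```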

  20. High performance hand-held gas chromatograph

    SciTech Connect

    Yu, C.M.

    1998-04-28

    The Microtechnology Center of Lawrence Livermore National Laboratory has developed a high performance hand-held, real-time detection gas chromatograph (HHGC) using Micro-Electro-Mechanical-System (MEMS) technology. The total weight of this hand-held gas chromatograph is about five lbs., with a physical size of 8" x 5" x 3" including carrier gas and battery. It consumes about 12 watts of electrical power, with a response time on the order of one to two minutes. This HHGC has about 40,000 effective theoretical plates on average. Presently, its sensitivity is limited to the ppm level by its thermally sensitive detector. Like a conventional GC, this HHGC consists mainly of three major components: (1) the sample injector, (2) the column, and (3) the detector with related electronics. The present HHGC injector is a modified version of the conventional injector. Its separation column is fabricated entirely on silicon wafers by means of MEMS technology and has a circular cross section with a diameter of 100 μm. The detector developed for this hand-held GC is a thermal conductivity detector fabricated on a silicon nitride window by MEMS technology. A normal Wheatstone bridge is used. The signal is fed into a PC and displayed through LabView software.
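
    The quoted column efficiency can be related to measurable peak shapes through the standard plate-number estimate N = 5.54·(t_R / w_half)², where w_half is the peak width at half height. The retention time and width below are illustrative, not LLNL's measurements:

    ```python
    # Standard column-efficiency relations (illustrative numbers, not LLNL data):
    # plate number from retention time and half-height peak width, and the
    # height equivalent to a theoretical plate (HETP).

    def plate_number(t_r, w_half):
        """Theoretical plates: N = 5.54 * (t_R / w_half)^2 (same time units)."""
        return 5.54 * (t_r / w_half) ** 2

    def hetp_mm(length_mm, n_plates):
        """Height equivalent to a theoretical plate, HETP = L / N."""
        return length_mm / n_plates

    # A 60 s peak with 0.71 s half-height width gives roughly the 40k plates
    # the HHGC achieves.
    n = plate_number(t_r=60.0, w_half=0.71)
    ```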

  1. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  2. Energy Efficient Graphene Based High Performance Capacitors.

    PubMed

    Bae, Joonwon; Lee, Chang-Soo; Kwon, Oh Seok

    2016-10-27

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research has been performed to investigate the diverse properties of GRP. The incorporation of this elegant material can be very lucrative for practical applications in energy storage/conversion systems. Among these systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP to capacitors is described succinctly. In particular, a concise summary of previous research on GRP-based capacitors is provided. It is shown that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance, and that various devices have been combined with capacitors to broaden their use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study.

  3. 75 FR 17207 - Electronic On-Board Recorders for Hours-of-Service Compliance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-05

    ...The Federal Motor Carrier Safety Administration (FMCSA) amends the Federal Motor Carrier Safety Regulations (FMCSRs) to incorporate new performance standards for electronic on-board recorders (EOBRs) installed in commercial motor vehicles (CMVs) manufactured on or after June 4, 2012. On-board hours-of-service (HOS) recording devices meeting FMCSA's current requirements and installed in CMVs......

  4. Application of advanced on-board processing concepts to future satellite communications systems

    NASA Technical Reports Server (NTRS)

    Katz, J. L.; Hoffman, M.; Kota, S. L.; Ruddy, J. M.; White, B. F.

    1979-01-01

    An initial definition of on-board processing requirements for an advanced satellite communications system to service domestic markets in the 1990's is presented. An exemplar system architecture with both RF on-board switching and demodulation/remodulation baseband processing was used to identify important issues related to system implementation, cost, and technology development.

  5. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  6. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  7. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  8. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  9. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  10. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  11. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  12. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  13. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  14. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 2 2014-10-01 2014-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  15. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  16. 75 FR 739 - Use of Additional Portable Oxygen Concentrator Devices on Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-06

    ... Administration 14 CFR Part 121 RIN 2120-AJ55 Use of Additional Portable Oxygen Concentrator Devices on Board... amends Special Federal Aviation Regulation 106 (SFAR 106), Use of Certain Portable Oxygen Concentrator Devices on Board Aircraft, to allow for the use of four additional portable oxygen concentrator...

  17. 49 CFR 176.78 - Use of power-operated industrial trucks on board vessels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Use of power-operated industrial trucks on board... CARRIAGE BY VESSEL General Handling and Stowage § 176.78 Use of power-operated industrial trucks on board... suspend or prohibit the use of cargo handling vehicles or equipment when that use constitutes a...

  18. 49 CFR 176.78 - Use of power-operated industrial trucks on board vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Use of power-operated industrial trucks on board... CARRIAGE BY VESSEL General Handling and Stowage § 176.78 Use of power-operated industrial trucks on board... suspend or prohibit the use of cargo handling vehicles or equipment when that use constitutes a...

  19. 49 CFR 176.78 - Use of power-operated industrial trucks on board vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 2 2014-10-01 2014-10-01 false Use of power-operated industrial trucks on board... CARRIAGE BY VESSEL General Handling and Stowage § 176.78 Use of power-operated industrial trucks on board... suspend or prohibit the use of cargo handling vehicles or equipment when that use constitutes a...

  20. 49 CFR 176.78 - Use of power-operated industrial trucks on board vessels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Use of power-operated industrial trucks on board... CARRIAGE BY VESSEL General Handling and Stowage § 176.78 Use of power-operated industrial trucks on board... suspend or prohibit the use of cargo handling vehicles or equipment when that use constitutes a...

  1. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  2. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Services on Aircraft § 382.115 What requirements apply to on-board safety briefings? As a...

  3. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph that limits the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters show better performance in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specifications set forth by the sensitivity requirements of the coronagraph.

  4. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report about our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and second occulter for speckle light suppression. The testbed consists of a coronagraph with high precision optics (2 inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  5. Flow simulation and high performance computing

    NASA Astrophysics Data System (ADS)

    Tezduyar, T.; Aliabadi, S.; Behr, M.; Johnson, A.; Kalro, V.; Litke, M.

    1996-10-01

    Flow simulation is a computational tool for exploring science and technology involving flow applications. It can provide cost-effective alternatives or complements to laboratory experiments, field tests and prototyping. Flow simulation relies heavily on high performance computing (HPC). We view HPC as having two major components. One is advanced algorithms capable of accurately simulating complex, real-world problems. The other is advanced computer hardware and networking with sufficient power, memory and bandwidth to execute those simulations. While HPC enables flow simulation, flow simulation motivates development of novel HPC techniques. This paper focuses on demonstrating that flow simulation has come a long way and is being applied to many complex, real-world problems in different fields of engineering and applied sciences, particularly in aerospace engineering and applied fluid mechanics. Flow simulation has come a long way because HPC has come a long way. This paper also provides a brief review of some of the recently-developed HPC methods and tools that have played a major role in bringing flow simulation to where it is today. A number of 3D flow simulations are presented in this paper as examples of the level of computational capability reached with recent HPC methods and hardware. These examples are: flow around a fighter aircraft, flow around two trains passing in a tunnel, large ram-air parachutes, flow over hydraulic structures, contaminant dispersion in a model subway station, airflow past an automobile, multiple spheres falling in a liquid-filled tube, and dynamics of a paratrooper jumping from a cargo aircraft.

  6. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  7. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  8. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.
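    The "near-constant launch time" property described above can be illustrated with a toy cost model of a tree-based collective, in which only a logarithmic latency term grows with machine size while the bandwidth term stays fixed. This is a sketch under stated assumptions: the latency and bandwidth figures below are illustrative placeholders, not STORM measurements.

```python
import math

def broadcast_time(nodes, size_mb, latency_us=10.0, bandwidth_mbs=400.0):
    """Toy cost model for distributing a job binary with a tree-based
    collective: a logarithmic per-hop latency term plus a pipelined
    bandwidth term that is independent of the node count."""
    steps = math.ceil(math.log2(nodes)) if nodes > 1 else 0
    return steps * latency_us * 1e-6 + size_mb / bandwidth_mbs

# Doubling the machine size many times barely changes the estimate:
t32 = broadcast_time(32, 12)      # 32 nodes, 12 MB binary (as in the paper)
t1024 = broadcast_time(1024, 12)  # 32x more nodes, almost the same time
print(t32, t1024)
```

    Under this model the gap between 32 and 1024 nodes is only a few extra latency hops, which is why launch time appears nearly flat in machine size.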

  9. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
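    The minimum-error block matching mentioned in the second paper can be illustrated with a serial, exhaustive search over candidate displacements. This is only a sketch of the matching criterion (here the sum of absolute differences); the paper's actual contribution is a parallel variant, and the frame contents below are synthetic.

```python
import numpy as np

def best_match(prev, curr, top, left, bsize=8, search=4):
    """Exhaustive block matching: find the displacement (dy, dx) that
    minimises the sum of absolute differences (SAD) between a block of
    the current frame and candidate blocks in the previous frame."""
    block = curr[top:top + bsize, left:left + bsize].astype(int)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue  # candidate block falls outside the frame
            cand = prev[y:y + bsize, x:x + bsize].astype(int)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# Toy frames: a bright 8x8 patch moves by (+2, +3) between frames.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[10:18, 8:16] = 200
curr = np.zeros((32, 32), dtype=np.uint8)
curr[12:20, 11:19] = 200

print(best_match(prev, curr, 12, 11))  # → (-2, -3): where the block came from
```

    In a parallel implementation the SAD evaluations for all candidate displacements are independent, which is what makes the algorithm amenable to the simple parallel architecture the paper targets.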

  10. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  11. Robonaut 2 - Initial Activities On-Board the ISS

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Greene, B. D.; Joyce, Charles; De La Pena, Noe; Noblitt, Alan; Ambrose, Robert

    2011-01-01

    Robonaut 2, or R2, arrived on the International Space Station in February 2011 and is currently undergoing testing in preparation for it to become, initially, an Intra-Vehicular Activity (IVA) tool and then evolve into a system that can perform Extra-Vehicular Activities (EVA). After the completion of a series of system level checks to ensure that the robot traveled well on-board the Space Shuttle Atlantis, ground control personnel will remotely control the robot to perform free space tasks that will help characterize the differences between earth and zero-g control. For approximately one year, the fixed-base R2 will perform a variety of experiments using a reconfigurable task board that was launched with the robot. While working side-by-side with human astronauts, Robonaut 2 will actuate switches, use standard tools, and manipulate Space Station interfaces, soft goods and cables. The results of these experiments will demonstrate the wide range of tasks a dexterous humanoid can perform in space and they will help refine the methodologies used to control dexterous robots both in space and here on earth. After the trial period that will evaluate R2 while on a fixed stanchion in the US Laboratory module, NASA plans to launch climbing legs that, when attached to the current on-orbit R2 upper body, will give the robot the ability to traverse through the Space Station and start assisting crew with general IVA maintenance activities. Multiple control modes will be evaluated in this extraordinary ISS test environment to prepare the robot for use during EVAs. Ground controllers will remotely supervise the robot as it executes semi-autonomous scripts for climbing through the Space Station and interacting with IVA interfaces. IVA crew will locally supervise the robot using the same scripts and also teleoperate the robot to simulate scenarios with the robot working alone or as an assistant during space walks.

  12. High-Mileage Study of On-Board Diagnostic Emissions.

    PubMed

    Gardetto, Ed; Lindner, Jim; Bagian, Tandi

    2005-10-01

    The 1990 Clean Air Act amendments require the U.S. Environmental Protection Agency (EPA) to set guidelines for states to follow in designing and running vehicle inspection and maintenance (I/M) programs. Included in this charge was a requirement to implement an on-board diagnostic (OBD) test for both basic and enhanced I/M programs. This paper provides the results to date of an ongoing EPA study undertaken to assess the durability of the OBD system as vehicles age and as mileage is accrued. The primary results of this effort indicate the points described below. First, the majority of high-mileage vehicles tested had emission levels within their certification limits, and their malfunction indicator light (MIL) was not illuminated, indicating that the systems are capable of working throughout the life of a vehicle. Second, OBD provides better air quality benefits than an IM240 test (using the federal test procedure [FTP] as the benchmark comparison). This statement is based on greater emissions reductions from OBD-directed repairs than reductions associated with IM240-identified repairs. In general, the benefits of repairing the OBD fails were smaller, but the aggregate benefits were greater, indicating that OBD tests find both the high-emitting and a number of marginally high-emitting vehicles without false failures that can occur with any tailpipe test. Third, vehicles that truly had high-tailpipe emissions as confirmed by laboratory IM240 and FTP testing also had illuminated MILs at a statistically significant level. Last, field data from state programs have demonstrated MIL illumination rates comparable with those seen in this work, suggesting that the vehicles sampled in this study were representative of the larger fleet. Nonetheless, it is important to continue the testing of high-mileage OBD vehicles into the foreseeable future to ensure that the systems are operating correctly as the fleet ages and as changes in emission certification levels take effect.

  13. High-mileage study of on-board diagnostic emissions.

    PubMed

    Gardetto, Ed; Bagian, Tandi; Lindner, Jim

    2005-10-01

    The 1990 Clean Air Act amendments require the U.S. Environmental Protection Agency (EPA) to set guidelines for states to follow in designing and running vehicle inspection and maintenance (I/M) programs. Included in this charge was a requirement to implement an on-board diagnostic (OBD) test for both basic and enhanced I/M programs. This paper provides the results to date of an ongoing EPA study undertaken to assess the durability of the OBD system as vehicles age and as mileage is accrued. The primary results of this effort indicate the points described below. First, the majority of high-mileage vehicles tested had emission levels within their certification limits, and their malfunction indicator light (MIL) was not illuminated, indicating that the systems are capable of working throughout the life of a vehicle. Second, OBD provides better air quality benefits than an IM240 test (using the federal test procedure [FTP] as the benchmark comparison). This statement is based on greater emissions reductions from OBD-directed repairs than reductions associated with IM240-identified repairs. In general, the benefits of repairing the OBD fails were smaller, but the aggregate benefits were greater, indicating that OBD tests find both the high-emitting and a number of marginally high-emitting vehicles without false failures that can occur with any tailpipe test. Third, vehicles that truly had high-tailpipe emissions as confirmed by laboratory IM240 and FTP testing also had illuminated MILs at a statistically significant level. Last, field data from state programs have demonstrated MIL illumination rates comparable with those seen in this work, suggesting that the vehicles sampled in this study were representative of the larger fleet. Nonetheless, it is important to continue the testing of high-mileage OBD vehicles into the foreseeable future to ensure that the systems are operating correctly as the fleet ages and as changes in emission certification levels take effect.

  14. Fatigue stress detection of VIRTIS cryocoolers on board Rosetta

    NASA Astrophysics Data System (ADS)

    Giuppi, Stefano; Politi, Romolo; Capria, Maria Teresa; Piccioni, Giuseppe; De Sanctis, Maria Cristina; Erard, Stéphane; Tosi, Federico; Capaccioni, Fabrizio; Filacchione, Gianrico

    Rosetta is a planetary cornerstone mission of the European Space Agency (ESA). It is devoted to the study of minor bodies of our solar system and it will be the first mission ever to land on a comet (the Jupiter-family comet 67P/Churyumov-Gerasimenko). VIRTIS-M is a sophisticated imaging spectrometer that combines two data channels in one compact instrument, respectively for the visible and the infrared range (0.25-5.0 μm). VIRTIS-H is devoted to infrared spectroscopy (2.5-5.0 μm) with high spectral resolution. Since the satellite will be inside the tail of the comet during one of the most important phases of the mission, it would not be appropriate to use a passive cooling system, due to the high flux of contaminants on the radiator. Therefore the IR sensors are cooled by two Stirling-cycle cryocoolers produced by RICOR. Since RICOR performed life tests only on the ground, it was decided to conduct an analysis of the VIRTIS on-board Rosetta telemetries with the purpose of studying possible differences in cryocooler performance. The analysis led to the conclusion that the cryocoolers, when operating on board, are subject to a fatigue stress not present in the on-ground life tests. The telemetry analysis shows a cyclic variation in cryocooler rotor angular velocity when the -M or -H channel, or both, are operating (an influence of -M channel operations on the -H cryocooler rotor angular velocity, and vice versa, has also been noted), with frequencies mostly linked to operational parameter values. The frequencies have been calculated for each mission observation by applying the Fast Fourier Transform (FFT). In order to evaluate possible edge effects, the Hanning window has also been applied and the results compared. For a more complete evaluation of cryocooler fatigue stress, the angular acceleration and the angular jerk have been calculated for each mission observation.
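    The frequency extraction described above can be sketched with NumPy's FFT routines, comparing a plain (rectangular-window) spectrum against a Hann-windowed one. The telemetry here is synthetic (a 0.5 Hz cyclic variation superimposed on a constant rotor speed), and the sampling rate and amplitudes are illustrative assumptions, not VIRTIS values.

```python
import numpy as np

def dominant_frequency(signal, dt, window=False):
    """Return the dominant frequency (Hz) of a uniformly sampled signal.
    If `window` is True, a Hann window is applied before the FFT to
    reduce leakage from the finite observation interval (edge effects)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                      # remove the DC component
    if window:
        x = x * np.hanning(len(x))
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin

# Synthetic "rotor angular velocity" telemetry sampled at 10 Hz for 60 s:
# a constant speed with a 0.5 Hz cyclic variation plus small noise.
dt = 0.1
t = np.arange(0, 60, dt)
rng = np.random.default_rng(0)
omega = 100.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t) + 0.01 * rng.standard_normal(t.size)

f_rect = dominant_frequency(omega, dt)
f_hann = dominant_frequency(omega, dt, window=True)
print(f_rect, f_hann)  # both ≈ 0.5 Hz
```

    Comparing the windowed and unwindowed estimates, as the authors do, gives a quick check that the extracted frequency is not an artifact of the observation boundaries.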

  15. Hindering Factors of Beginning Teachers' High Performance in Higher Education Pakistan: Case Study of IUB

    ERIC Educational Resources Information Center

    Sarwar, Shakeel; Aslam, Hassan Danyal; Rasheed, Muhammad Imran

    2012-01-01

    Purpose: The aim of the researchers in this endeavor is to identify the challenges and obstacles faced by beginning teachers in higher education. This study also explores practical implications and what adaptations can be made to achieve high performance among beginning teachers. Design/methodology/approach: Researchers have applied…

  16. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required from commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform less well as an effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market because of the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  17. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
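
    The triggering rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the DRMS is taken as the square root of the trace of the position block of the covariance matrix, and the distance-dependent threshold law (`adaptive_threshold`, with its `base` and `gain` parameters) is a hypothetical stand-in for the adaptation described in the abstract.

```python
import math

def drms(P):
    """Distance root mean squared error from the 2x2 position block of
    the estimation error covariance matrix P (nested lists)."""
    return math.sqrt(P[0][0] + P[1][1])

def adaptive_threshold(dist_to_ref, base=0.5, gain=0.05):
    # Hypothetical adaptation law: relax the threshold far from the
    # reference location, tighten it as the vehicle approaches.
    return base + gain * dist_to_ref

def measurement_requested(P, dist_to_ref):
    """Event trigger: request an external measurement only when the
    covariance-derived DRMS exceeds the adaptive threshold."""
    return drms(P) > adaptive_threshold(dist_to_ref)

# As the filter's position uncertainty grows between measurement
# updates, the trigger eventually fires and the stand-by sensors
# are asked for a new measurement.
P_small = [[0.04, 0.0], [0.0, 0.04]]   # DRMS ~ 0.28 m: no request
P_large = [[1.00, 0.0], [0.0, 1.00]]   # DRMS ~ 1.41 m: request
```

    Between events the external sensors stay in stand-by, which is where the battery and channel-load savings described in the abstract come from.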

  18. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle.

    PubMed

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-06-19

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver.

  19. On-board Processing to Advance the PanFTS Imaging System for GEO-CAPE

    NASA Astrophysics Data System (ADS)

    Sander, S. P.; Pingree, P.; Bekker, D. L.; Blavier, J. L.; Bryk, M.; Franklin, B.; Hayden, J.; Ryan, M.; Werne, T. A.

    2013-12-01

    The Panchromatic Fourier Transform Spectrometer (PanFTS) is an imaging instrument designed to record atmospheric spectra of the Earth from the vantage point of a geosynchronous orbit. Each observation covers a scene of 128x128 pixels. In order to retrieve multiple chemical families and perform passive vertical profiling, the recorded spectra will cover a wide wavelength range, from the thermal infrared to the near ultraviolet. The small size of the nadir ground-sampling distance and the desire to re-visit each scene hourly result in a PanFTS design that challenges the downlink capabilities of current radio communication. The PanFTS on-board processing will reduce downlink rates by converting time-domain interferograms to band-limited spectra, hence achieving a factor of 20 in data reduction. In this paper, we report on the first-year progress of this NASA AIST-11 task and on the adaptation of existing Virtex-5 FPGA designs to support the PanFTS Focal Plane Array control and data interfaces. We have produced a software demonstration of the current PanFTS data reduction algorithms. The real-time processing of the interferometer metrology laser signal is the first step required for the conversion of time-domain interferograms to path difference. This laser processing is now performed entirely as digital signal processing inside the Virtex-5 FPGA and also allows for tip/tilt correction of the interferometer mirrors, a task that was previously performed only with complicated and inflexible analog electronics.
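
    The on-board data reduction step amounts to transforming the time-domain interferogram and keeping only the science band of the spectrum. The sketch below illustrates the idea with a naive DFT on a toy 200-sample interferogram; the bin limits and the resulting 20x ratio are chosen for illustration and are not the PanFTS flight parameters (a flight implementation would use an FPGA FFT).

```python
import math
import cmath

def dft(x):
    """Naive O(N^2) DFT, adequate for a toy demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def band_limited_spectrum(interferogram, lo_bin, hi_bin):
    """Transform the interferogram and keep only the science band
    [lo_bin, hi_bin), discarding out-of-band bins before downlink."""
    return dft(interferogram)[lo_bin:hi_bin]

# Toy interferogram: a single 12-cycle fringe over 200 samples.
x = [math.cos(2 * math.pi * 12 * n / 200) for n in range(200)]
band = band_limited_spectrum(x, 8, 18)   # keep 10 of 200 bins -> 20x smaller
```

    The spectral peak survives inside the retained band (at offset 12 - 8 = 4), while the downlinked volume drops by the ratio of total bins to retained bins.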

  20. High-Performance, Space-Storable, Bi-Propellant Program Status

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    2002-01-01

    Bipropellant propulsion systems currently represent the largest bus subsystem for many missions. These missions range from low Earth orbit satellites to geosynchronous communications and planetary exploration. The payoff of high performance bipropellant systems is illustrated by the fact that Aerojet Redmond has qualified a commercial NTO/MMH engine based on the high Isp technology recently delivered by this program. They are now qualifying an NTO/hydrazine version of this engine. The advanced rhenium thrust chambers recently provided by this program have raised the performance of Earth-storable propellants from 315 sec to 328 sec of specific impulse. The recently introduced rhenium technology is the first new technology introduced to satellite propulsion in 30 years. Typically, the lead time required to develop and qualify new chemical thruster technology is not compatible with program development schedules. These technology development programs must be supported by a long term, Base R&T Program, if the technology is to be matured. This technology program then addresses the need for high performance, storable, on-board chemical propulsion for planetary rendezvous and descent/ascent. The primary NASA customer for this technology is Space Science, which identifies this need for programs such as Mars Surface Return, Titan Explorer, Neptune Orbiter, and Europa Lander. High performance (390 sec) chemical propulsion is estimated to add 105% payload to the Mars Sample Return mission or alternatively reduce the launch mass by 33%. In many cases, the use of existing (flight heritage) propellant technology is accommodated by reducing mission objectives and/or increasing enroute travel times, sacrificing the science value per unit cost of the program. Therefore, a high performance storable thruster utilizing fluorinated oxidizers with hydrazine is being developed.

  1. Thermal interface pastes nanostructured for high performance

    NASA Astrophysics Data System (ADS)

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G" and high tan δ. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A

  2. High Performance Home Building Guide for Habitat for Humanity Affiliates

    SciTech Connect

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  3. Rotational artifacts in on-board cone beam computed tomography.

    PubMed

    Ali, E S M; Webb, R; Nyiri, B J

    2015-02-21

    Rotational artifacts in image guidance systems lead to registration errors that affect non-isocentric treatments and dose to off-axis organs-at-risk. This study investigates a rotational artifact in the images acquired with the on-board cone beam computed tomography system XVI (Elekta, Stockholm, Sweden). The goals of the study are to identify the cause of the artifact, to characterize its dependence on other quantities, and to investigate possible solutions. A 30 cm diameter cylindrical phantom is used to acquire clockwise and counterclockwise scans at five speeds (120 to 360 deg min(-1)) on six Elekta linear accelerators from three generations (MLCi, MLCi2 and Agility). Additional scans are acquired with different pulse widths and focal spot sizes for the same mAs. Image quality is evaluated using a common phantom with an in-house three dimensional contrast transfer function attachment. A robust, operator-independent analysis is developed which quantifies rotational artifacts with 0.02° accuracy and imaging system delays with 3 ms accuracy. Results show that the artifact is caused by mislabelling of the projections with a lagging angle due to various imaging system delays. For the most clinically used scan speed (360 deg min(-1)), the artifact is ∼0.5°, which corresponds to ∼0.25° error per scan direction with the standard Elekta procedure for angle calibration. This leads to a 0.5 mm registration error at 11 cm off-center. The artifact increases linearly with scan speed, indicating that the system delay is independent of scan speed. For the most commonly used pulse width of 40 ms, this delay is 34 ± 1 ms, part of which is half the pulse width. Results are consistent among the three linac generations. A software solution that corrects the angles of individual projections is shown to eliminate the rotational error for all scan speeds and directions. Until such a solution is available from the manufacturer, three clinical solutions are presented, which

  4. Rotational artifacts in on-board cone beam computed tomography

    NASA Astrophysics Data System (ADS)

    Ali, E. S. M.; Webb, R.; Nyiri, B. J.

    2015-02-01

    Rotational artifacts in image guidance systems lead to registration errors that affect non-isocentric treatments and dose to off-axis organs-at-risk. This study investigates a rotational artifact in the images acquired with the on-board cone beam computed tomography system XVI (Elekta, Stockholm, Sweden). The goals of the study are to identify the cause of the artifact, to characterize its dependence on other quantities, and to investigate possible solutions. A 30 cm diameter cylindrical phantom is used to acquire clockwise and counterclockwise scans at five speeds (120 to 360 deg min-1) on six Elekta linear accelerators from three generations (MLCi, MLCi2 and Agility). Additional scans are acquired with different pulse widths and focal spot sizes for the same mAs. Image quality is evaluated using a common phantom with an in-house three dimensional contrast transfer function attachment. A robust, operator-independent analysis is developed which quantifies rotational artifacts with 0.02° accuracy and imaging system delays with 3 ms accuracy. Results show that the artifact is caused by mislabelling of the projections with a lagging angle due to various imaging system delays. For the most clinically used scan speed (360 deg min-1), the artifact is ˜0.5°, which corresponds to ˜0.25° error per scan direction with the standard Elekta procedure for angle calibration. This leads to a 0.5 mm registration error at 11 cm off-center. The artifact increases linearly with scan speed, indicating that the system delay is independent of scan speed. For the most commonly used pulse width of 40 ms, this delay is 34 ± 1 ms, part of which is half the pulse width. Results are consistent among the three linac generations. A software solution that corrects the angles of individual projections is shown to eliminate the rotational error for all scan speeds and directions. Until such a solution is available from the manufacturer, three clinical solutions are presented, which reduce the
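
    The size of the clinically relevant shift can be checked with one line of trigonometry: a residual angular error θ displaces a point at radius r from the isocenter by roughly r·sin(θ). The sketch below reproduces the abstract's figure of ~0.5 mm at 11 cm off-center for the ~0.25° per-scan-direction error; the function name is ours, not Elekta's.

```python
import math

def off_axis_shift_mm(angle_error_deg, radius_mm):
    """Lateral registration shift caused by a rotational mislabelling
    of the projections, for a point `radius_mm` off the isocenter."""
    return radius_mm * math.sin(math.radians(angle_error_deg))

# ~0.25 deg residual error per scan direction, 11 cm off-center:
shift = off_axis_shift_mm(0.25, 110.0)   # ~0.48 mm, i.e. ~0.5 mm
```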

  5. Measuring Organic Matter with COSIMA on Board Rosetta

    NASA Astrophysics Data System (ADS)

    Briois, C.; Baklouti, D.; Bardyn, A.; Cottin, H.; Engrand, C.; Fischer, H.; Fray, N.; Godard, M.; Hilchenbach, M.; von Hoerner, H.; Höfner, H.; Hornung, K.; Kissel, J.; Langevin, Y.; Le Roy, L.; Lehto, H.; Lehto, K.; Orthous-Daunay, F. R.; Revillet, C.; Rynö, J.; Schulz, R.; Silen, J. V.; Siljeström, S.; Thirkell, L.

    2014-12-01

    Comets are believed to contain the most pristine material of our Solar System and therefore to be a key to understanding the origin of the Solar System, and the origin of life. Remote sensing observations have led to the detection of more than twenty simple organic molecules (Bockelée-Morvan et al., 2004; Mumma and Charnley, 2011). Experiments on board the in-situ exploration missions Giotto and Vega and the recent Stardust sample return mission have shown that a significant fraction of the cometary grains consists of organic matter. Spectra showed that both the gaseous phase (Mitchell et al., 1992) and the solid phase (grains) (Kissel and Krueger, 1987) contained organic molecules with higher masses than those of the molecules detected by remote sensing techniques in the gaseous phase. Some of the grains analyzed in the atmosphere of comet 1P/Halley seem to be essentially made of a mixture of carbon, hydrogen, oxygen and nitrogen (CHON grains, Fomenkova, 1999). Rosetta is an unparalleled opportunity to make a real breakthrough into the nature of cometary matter, both in the gas and in the solid phase. The dust mass spectrometer COSIMA on Rosetta will analyze organic and inorganic phases in the dust. The organic phases may be refractory, but some organics may evaporate with time from the dust and lead to an extended source in the coma. Over the last years, we have prepared for the cometary rendezvous by the analysis of various samples with the reference model of COSIMA. We will report on this calibration data set and on the first results of the in-situ analysis of cometary grains as captured, imaged and analyzed by COSIMA. References : Bockelée-Morvan, D., et al. 2004. (Eds.), Comets II. the University of Arizona Press, Tucson, USA, pp. 391-423 ; Fomenkova, M.N., 1999. Space Science Reviews 90, 109-114 ; Kissel, J., Krueger, F.R., 1987. Nature 326, 755-760 ; Mitchell, et al. 1992. Icarus 98, 125-133 ; Mumma, M.J., Charnley, S.B., 2011. Annual Review of Astronomy and

  6. Optical multiple access techniques for on-board routing

    NASA Technical Reports Server (NTRS)

    Mendez, Antonio J.; Park, Eugene; Gagliardi, Robert M.

    1992-01-01

    The purpose of this research contract was to design and analyze an optical multiple access system, based on Code Division Multiple Access (CDMA) techniques, for on-board routing applications on a future communication satellite. The optical multiple access system was to effect the functions of a circuit switch under the control of an autonomous network controller and to serve eight (8) concurrent users at a point to point (port to port) data rate of 180 Mb/s. (At the start of this program, the bit error rate (BER) requirement was undefined, so it was treated as a design variable during the contract effort.) CDMA was selected over other multiple access techniques because it lends itself to bursty, asynchronous, concurrent communication and potentially can be implemented with off-the-shelf, reliable optical transceivers compatible with long term unattended operations. Temporal, temporal/spatial hybrid, and single pulse per row (SPR, sometimes termed 'sonar matrices') matrix types of CDMA designs were considered. The design, analysis, and trade-offs required by the statement of work led to the selection of a temporal/spatial CDMA scheme with SPR properties as the preferred solution. This selected design can be implemented for feasibility demonstration with off-the-shelf components (which are identified in the bill of materials of the contract Final Report). The photonic network architecture of the selected design is based on M(8,4,4) matrix codes. The network requires eight multimode laser transmitters with laser pulses of 0.93 ns operating at 180 Mb/s and 9-13 dBm peak power, and 8 PIN diode receivers with sensitivity of -27 dBm for the 0.93 ns pulses. The wavelength is not critical, but 830 nm technology readily meets the requirements. The passive optical components of the photonic network are all multimode and off the shelf. Bit error rate (BER) computations, based on both electronic noise and intercode crosstalk, predict a raw BER of 10^-3 when all eight users are
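
    The appeal of single-pulse-per-row matrix codes is that a user's own code correlates to its full weight while other users' codes contribute little crosstalk. The toy sketch below uses hypothetical 4x4 SPR matrices rather than the actual M(8,4,4) codes, whose construction is not given in the abstract; it only illustrates the correlation property the design relies on.

```python
def correlate(a, b):
    """Incoherent optical correlation: count chip positions where both
    code matrices have a pulse (elementwise AND, then sum)."""
    return sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

# Two hypothetical single-pulse-per-row (SPR) 4x4 code matrices:
# each row (wavelength/fiber) carries exactly one pulse (time chip).
code_a = [[1, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1]]
code_b = [[0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1],
          [1, 0, 0, 0]]

auto = correlate(code_a, code_a)    # full code weight: the matched user
cross = correlate(code_a, code_b)   # low crosstalk from the other user
```

    In a receiver, thresholding the correlator output just below the code weight separates the intended user's pulses from intercode crosstalk, which is the basis of the BER computations mentioned above.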

  7. Numerical simulation of observations with GOLF on board SOHO

    NASA Astrophysics Data System (ADS)

    Garcia, R. A.; Roca Cortes, T.; Regulo, C.

    1998-03-01

    The main objective of the GOLF Experiment (Global Oscillations at Low Frequencies) on board the SOHO (Solar and Heliospheric Observatory) space mission is the quantitative knowledge of the internal structure of the Sun by measuring the spectrum of its global oscillations in a wide frequency range (30 nHz to 6 mHz). There is special interest in detecting the low l p- and g-modes (low frequency modes) which penetrate deeply down into the solar core. The instrument chosen is an improved disk-integrated sunlight resonant scattering spectrophotometer. It obtains the line of sight velocity of the integrated visible solar surface by measuring the Doppler shift of the sodium doublet. Two main innovations have been incorporated relative to similar standard Earth-based apparatus (those from the IRIS and BISON networks). First, GOLF samples each line of the sodium doublet in principle at four points on its wings, using an extra small modulated magnetic field. This new information enables an instantaneous calibration of the measured signal and also opens the possibility to correct for the background solar velocity noise. Second, the use of an extra fixed quarter wave plate, placed at the entrance of the instrument, enables a selection of the circularly polarized solar light. Therefore, the disk averaged solar line-of-sight component of the magnetic field can also be obtained. This is considered as a secondary objective of the mission. In order to study the new information made available by these improvements to the apparatus, to understand it fully, and to write the appropriate software to analyze the data, a complete numerical simulation of the experiment has been built. Running the simulation has yielded two series, each 12 months long, one corresponding to a year of maximum solar activity and the other to a year of minimum solar activity. In this paper the numerical simulation of the GOLF experiment is presented, its sensitivity and instrumental

  8. High Performance Liquid Chromatography/Video Fluorometry. Part I. Instrumentation.

    DTIC Science & Technology

    1981-09-30

    High Performance Liquid Chromatography/Video Fluorometry. Part I. Instrumentation. Interim technical report.

  9. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  10. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used for: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  11. High performance ultrasonic field simulation on complex geometries

    NASA Astrophysics Data System (ADS)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent works that aim at similar performances on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing a fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivisions to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations including mono-element and multiple-element transducers, homogeneous, meshed 3D CAD specimens, isotropic and anisotropic materials and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.

  12. HTML 5 Displays for On-Board Flight Systems

    NASA Technical Reports Server (NTRS)

    Silva, Chandika

    2016-01-01

    During my internship at NASA in the summer of 2016, I was assigned to a project which dealt with developing a web server that would display telemetry and other system data using HTML 5, JavaScript, and CSS. By doing this, it would be possible to view the data across a variety of screen sizes, and establish a standard that could be used to simplify communication and software development between NASA and other countries. Utilizing a web-based approach allowed us to add in more functionality, as well as make the displays more aesthetically pleasing for the users. When I was assigned to this project my main task was to first establish communication with the current display server. This display server would output data from the on-board systems in XML format. Once communication was established I was then asked to create a dynamic telemetry table web page that would update its header and change as new information came in. After this was completed, certain minor functionalities were added to the table, such as hide-column and filter-by-system options. This was for the purpose of making the table more useful for the users, as they can now filter and view relevant data. Finally my last task was to create a graphical system display for all the systems on the spacecraft. This was by far the most challenging part of my internship, as finding a JavaScript library that was both free and contained useful functions to assist me in my task was difficult. In the end I was able to use the JointJs library and accomplish the task. With the help of my mentor and the HIVE lab team, we were able to establish stable communication with the display server. We also succeeded in creating a fully dynamic telemetry table and in developing a graphical system display for the advanced modular power system. Working in JSC for this internship has taught me a lot about coding in JavaScript and HTML 5. I was also introduced to the concept of developing software as a team, and exposed to the different

  13. Superconducting magnet and on-board refrigeration system on Japanese MAGLEV vehicle

    SciTech Connect

    Tsuchishima, H.; Herai, T.

    1991-03-01

    This paper reports on a superconducting magnet and on-board refrigeration system on Japanese MAGLEV vehicles. Running tests on the Miyazaki test track are repeatedly carried out at speeds over 300 km/h using the MAGLEV vehicle, MLU002. The development of the MAGLEV system for the new test line has already started, and a new superconducting magnet for it has been manufactured. An on-board refrigerator is installed in the superconducting magnet to keep the liquid helium temperature without the loss of liquid helium. The helium gas produced when energizing or de-energizing the magnet is stored in on-board gas helium tanks temporarily. The on-board refrigerator is connected directly to the liquid helium tank of the magnet.

  14. On-Board Engine Exhaust Particulate Matter Sensor for HCCI and Conventional Diesel Engines

    SciTech Connect

    Hall, Matt; Matthews, Ron

    2011-09-30

    The goal of the research was to refine and complete development of an on-board particulate matter (PM) sensor for diesel, DISI, and HCCI engines, bringing it to a point where it could be commercialized and marketed.

  15. On-Board Preventive Maintenance: Analysis of Effectiveness Optimal Duty Period

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalaj, Leon; Hecht, Herbert

    1996-01-01

    To maximize the reliability of a spacecraft which performs a long-life (over 10-year), deep-space mission (to an outer planet), a fault-tolerant environment incorporating automatic on-board preventive maintenance is highly desirable.

  16. On Board Sensor Network: A Proof of Concept Aiming at Telecom I/O Optimisation

    NASA Astrophysics Data System (ADS)

    Gunes-Lasnet, S.; Furano, G.; Melicher, M.; Gleeson, D.; O'Connor, W.; Vidaud, O.; Notebaert, O.

    2009-05-01

    The on-board sensor network proof of concept is part of a long-haul strategy shared between ESA and European industry. Because point-to-point interfaces are numerous in a spacecraft, initiatives to standardise them or replace them with bus solutions have been jointly sought by ESA and industry. The sensor network project presented in this paper aims at defining and prototyping a solution for spacecraft on-board sensor networks, and at performing a proof of concept with the resulting demonstrator.

  17. The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update

    NASA Astrophysics Data System (ADS)

    Flatley, T.

    2012-12-01

    SpaceCube is an FPGA-based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.

  18. Islet in weightlessness: Biological experiments on board COSMOS 1129 satellite

    NASA Technical Reports Server (NTRS)

    Zhuk, Y.

    1980-01-01

    Biological experiments planned as an international venture for the COSMOS 1129 satellite include tests of: (1) adaptation of rats to conditions of weightlessness, and readaptation to Earth's gravity; (2) possibility of fertilization and embryonic development in weightlessness; (3) heat exchange processes; (4) amount of gravity force preferred by fruit flies for laying eggs (given a choice of three centrifugal zones); (5) growth of higher plants from seeds; (6) effects of weightlessness on cells in culture; and (7) radiation danger from heavy nuclei, and electrostatic protection from charged particles.

  19. High Performance Liquid Chromatography/Video Fluorometry. Part II. Applications.

    DTIC Science & Technology

    1981-09-30

    High Performance Liquid Chromatography/Video Fluorometry. Part II. Applications; by Dennis C. Shelly, Michael P. Vogarty, and others.

  20. Overview of the Waveform Capture in the Lunar Radar Sounder on board KAGUYA

    NASA Astrophysics Data System (ADS)

    Kasahara, Y.; Goto, Y.; Hashimoto, K.; Imachi, T.; Kumamoto, A.; Ono, T.; Matsumoto, H.

    2007-12-01

    The Lunar explorer "KAGUYA" (SELENE) spacecraft will be launched on September 13, 2007. The Lunar Radar Sounder (LRS) is one of the scientific instruments on board KAGUYA. It consists of three subsystems: the sounder observation (SDR), the natural plasma wave receiver (NPW), and the waveform capture (WFC). The WFC is a high-performance and multifunctional software receiver in which most functions are realized by the onboard software implemented in a digital signal processor (DSP). The WFC consists of a fast-sweep frequency analyzer (WFC-H) covering the frequency range from 1 kHz to 1 MHz and a waveform receiver (WFC-L) in the frequency range from 10 Hz to 100 kHz. The amount of raw data from the plasma wave instrument is huge because the scientific objectives require the covering of a wide frequency range with high time and frequency resolution; furthermore, a variety of operation modes are needed to meet these scientific objectives. In addition, new techniques such as digital filtering, automatic filter selection, and data compression are implemented for data processing of the WFC-L to extract the important data adequately under the severe restriction of total amount of telemetry data. Because of the flexibility of the instruments, various kinds of observation modes can be achieved, and we expect the WFC to generate many interesting data. By taking advantage of a moon orbiter, the WFC is expected to measure plasma waves and radio emissions that are generated around the moon and/or that originated from the sun and from the earth and other planets. One of the phenomena of most interest to be obtained from the WFC data is the dynamics of lunar wake as a result of solar wind-moon interaction. Another scientific topic in the field of lunar plasma physics concerns the minimagnetosphere caused by the magnetic anomaly of the moon.
There are various kinds of other plasma waves to be observed from the moon such as Auroral Kilometric Radiation, electrostatic solitary wave
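
    The bin-by-bin spectral scan that a sweep analyzer like the WFC-H performs can be sketched in a few lines. Below is a plain-Python illustration in which a naive DFT stands in for the receiver's optimized on-board transform; the tone frequency, sample rate, and bin count are invented for the example.

```python
import math

def dft_power(samples, fs, n_bins):
    # Naive DFT power spectrum; a stand-in for the optimized transform a
    # DSP-based receiver such as the WFC-H would run over its sweep bins.
    n = len(samples)
    spectrum = []
    for k in range(n_bins):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * fs / n, (re * re + im * im) / n))
    return spectrum

# Invented example: a 100 kHz tone sampled at 1 MHz (inside the WFC-H band).
fs, n = 1_000_000, 100
tone = [math.sin(2 * math.pi * 100_000 * t / fs) for t in range(n)]
spectrum = dft_power(tone, fs, n // 2)
peak_freq = max(spectrum, key=lambda bin_: bin_[1])[0]
```

    A flight implementation would use an FFT and the instrument's actual band plan; the sketch only shows how peak power is located across frequency bins.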

  1. Automatic registration between reference and on-board digital tomosynthesis images for positioning verification.

    PubMed

    Ren, Lei; Godfrey, Devon J; Yan, Hui; Wu, Q Jackie; Yin, Fang-Fang

    2008-02-01

    The authors developed a hybrid multiresolution rigid-body registration technique to automatically register reference digital tomosynthesis (DTS) images with on-board DTS images to guide patient positioning in radiation therapy. This hybrid registration technique uses a faster but less accurate static method to achieve an initial registration, followed by a slower but more accurate adaptive method to fine-tune the registration. A multiresolution scheme is employed in the registration to further improve the registration accuracy, robustness, and efficiency. Normalized mutual information is selected as the criterion for the similarity measure and the downhill simplex method is used as the search engine. This technique was tested using image data both from an anthropomorphic chest phantom and from eight head-and-neck cancer patients. The effects of the scan angle and the region-of-interest (ROI) size on the registration accuracy and robustness were investigated. The necessity of using the adaptive registration method in the hybrid technique was validated by comparing the results of the static method and the hybrid method. With a 44 degrees scan angle and a large ROI covering the entire DTS volume, the average of the registration capture ranges in single-axis simulations was between -31 and +34 deg for rotations and between -89 and +78 mm for translations in the phantom study, and between -38 and +38 deg for rotations and between -58 and +65 mm for translations in the patient study. Decreasing the DTS scan angle from 44 degrees to 22 degrees mainly degraded the registration accuracy and robustness for the out-of-plane rotations. Decreasing the ROI size from the entire DTS volume to the volume surrounding the spinal cord reduced the capture ranges to between -23 and +18 deg for rotations and between -33 and +43 mm for translations in the phantom study, and between -18 and +25 deg for rotations and between -35 and +39 mm for translations in the patient study.
Results also
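
    The normalized-mutual-information criterion at the heart of this registration can be illustrated with a one-dimensional stand-in. In the sketch below the signals are hypothetical, and an exhaustive integer-shift search replaces the downhill simplex over six rigid-body parameters; it recovers a known two-sample shift by maximizing NMI.

```python
import math
from collections import Counter

def normalized_mutual_information(a, b, bins=8):
    # NMI(A, B) = (H(A) + H(B)) / H(A, B) on histogrammed intensities.
    lo, hi = min(a + b), max(a + b)
    width = (hi - lo) / bins or 1.0
    qa = [min(int((v - lo) / width), bins - 1) for v in a]
    qb = [min(int((v - lo) / width), bins - 1) for v in b]
    n = len(a)
    def entropy(counts):
        return -sum(c / n * math.log(c / n) for c in counts.values())
    ha, hb = entropy(Counter(qa)), entropy(Counter(qb))
    hab = entropy(Counter(zip(qa, qb)))
    return (ha + hb) / hab if hab else 2.0

def register_shift(fixed, moving, max_shift=5):
    # Exhaustive search over integer shifts; the paper's downhill simplex
    # plays this role over six rigid-body degrees of freedom.
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: normalized_mutual_information(
                   fixed[max_shift:-max_shift],
                   moving[max_shift + s:len(moving) - max_shift + s]))

# Invented 1-D "reference" signal and a copy delayed by two samples.
fixed = [math.sin(0.37 * i) for i in range(40)]
moving = [0.0, 0.0] + fixed[:-2]
best_shift = register_shift(fixed, moving)
```

    The best shift found is the two-sample delay that was injected, because perfectly aligned windows make the joint histogram deterministic and NMI maximal.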

  2. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  3. Building Synergy: The Power of High Performance Work Systems.

    ERIC Educational Resources Information Center

    Gephart, Martha A.; Van Buren, Mark E.

    1996-01-01

    Suggests that high-performance work systems create the synergy that lets companies gain and keep a competitive advantage. Identifies the components of high-performance work systems and critical action steps for implementation. Describes the results companies such as Xerox, Lever Brothers, and Corning Incorporated have achieved by using them. (JOW)

  4. 24 CFR 902.71 - Incentives for high performers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... reviews and less monitoring), effective upon notification of high performer designation. (ii) The... points in funding competitions. A high performer PHA will be eligible for bonus points in HUD's funding competitions, where such bonus points are not restricted by statute or regulation governing the funding...

  5. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Rotordynamics and predictions of the stability characteristics of high-performance turbomachinery were discussed. Emphasis was placed on resolving problems in the experimental validation of the forces that influence rotordynamics. Programs to predict or measure forces and force coefficients in high-performance turbomachinery are illustrated. Data for designing new machines with enhanced stability characteristics, or for upgrading existing machines, are presented.

  6. An Analysis of a High Performing School District's Culture

    ERIC Educational Resources Information Center

    Corum, Kenneth D.; Schuetz, Todd B.

    2012-01-01

    This report describes a problem based learning project focusing on the cultural elements of a high performing school district. Current literature on school district culture provides numerous cultural elements that are present in high performing school districts. With the current climate in education placing pressure on school districts to perform…

  7. The Gamma-Ray Burst On-board Trigger ECLAIRs of SVOM

    NASA Astrophysics Data System (ADS)

    Schanne, Stephane

    2016-07-01

    SVOM, the Space-based multi-band astronomical Variable Objects Monitor, is a French-Chinese satellite mission for Gamma-Ray Burst studies. The conclusion of the Phase B studies is scheduled in 2016 and the launch is foreseen in 2021. With its set of 4 on-board instruments as well as dedicated ground instruments, SVOM will study GRBs in great detail, including their temporal and spectral properties from visible to gamma-rays. The coded-mask telescope ECLAIRs on-board SVOM with its Burst On-board Trigger system analyzes in real-time a 2 sr portion of the sky in the 4-120 keV energy range to detect and localize the GRBs. It then requests the spacecraft slew to allow GRB follow-up observations by the on-board narrow field-of-view telescopes MXT in X-rays and VT in the visible, and informs the community of observers via a dedicated ground network. This paper gives an update on the status of ECLAIRs and its Burst On-board Trigger system.
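
    As a toy illustration of a count-rate burst trigger of the kind ECLAIRs implements, the sketch below flags a significant Poisson excess over a running background estimate. The light curve, window length, and threshold are invented; the real system triggers over many timescales, energy bands, and sky pixels, and adds an imaging step.

```python
import math

def burst_trigger(counts, window=4, n_sigma=5.0):
    # Slide over the light curve: estimate the background from the previous
    # `window` bins and flag a significant Poisson excess in the next ones.
    for i in range(window, len(counts) - window + 1):
        bkg = sum(counts[i - window:i]) / window          # counts per bin
        excess = sum(counts[i:i + window]) - window * bkg
        sigma = math.sqrt(max(window * bkg, 1.0))         # Poisson sigma
        if excess > n_sigma * sigma:
            return i
    return None

# Invented light curve: ~10 counts/bin background, burst onset at bin 8.
lightcurve = [10, 11, 9, 10, 10, 9, 11, 10, 60, 75, 50, 40, 12, 10]
trigger_bin = burst_trigger(lightcurve)   # fires at bin 5, the first window
                                          # whose foreground overlaps the burst
```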

  8. OnBoard Parameter Identification for a Small UAV

    NASA Astrophysics Data System (ADS)

    McGrail, Amanda K.

    One of the main research focus areas of the WVU Flight Control Systems Laboratory (FCSL) is the increase of flight safety through the implementation of fault tolerant control laws. For some fault tolerant flight control approaches with adaptive control laws, the availability of accurate post-failure aircraft models improves performance. While look-up tables of aircraft models can be created for failure conditions, they may fail to account for all possible failure scenarios. Thus, a real-time parameter identification program eliminates the need to have predefined models for all potential failure scenarios. The goal of this research was to identify the dimensional stability and control derivatives of the WVU Phastball UAV in flight using a frequency domain based real-time parameter identification (PID) approach. The data necessary for this project was gathered using the WVU Phastball UAV, a radio-controlled aircraft designed and built by the FCSL for fault tolerant control research. Maneuvers designed to excite the natural dynamics of the aircraft were implemented by the pilot or onboard computer during the steady state portions of flights. The data from these maneuvers was used for this project. The project was divided into three main parts: 1) off-line time domain PID, 2) off-line frequency domain PID, and 3) an onboard frequency domain PID. The off-line parameter estimation programs, in both frequency domain and time domain, utilized the well-known Maximum Likelihood Estimator with Newton-Raphson minimization with starting values estimated from a Least-Squares Estimate of the non-dimensional stability and control derivatives. For the frequency domain approach, both the states and inputs were first converted to the frequency domain using a Fourier integral over the frequency range in which the rigid body aircraft dynamics are found. The final phase of the project was a real-time parameter estimation program to estimate the dimensional stability and control
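
    The frequency-domain least-squares step can be illustrated on a hypothetical first-order surrogate model: the coefficients a and b below stand in for a dimensional stability and control derivative pair, and the signals and frequency band are invented. States and inputs are transformed with a DFT, restricted to low-frequency bins, and the model is fit by least squares.

```python
import cmath, math

def dft_bin(x, k):
    # Single DFT bin of a real sequence.
    n = len(x)
    return sum(v * cmath.exp(-2j * math.pi * k * t / n) for t, v in enumerate(x))

# Invented first-order surrogate: xdot = a*x + b*u, excited by two sine inputs.
a_true, b_true, dt, n = -2.0, 3.0, 0.01, 256
u = [math.sin(0.5 * dt * k) + 0.3 * math.sin(1.7 * dt * k) for k in range(n)]
x = [0.0]
for k in range(n - 1):
    x.append(x[k] + dt * (a_true * x[k] + b_true * u[k]))   # Euler simulation
xdot = [(x[k + 1] - x[k]) / dt for k in range(n - 1)]       # = a*x + b*u exactly

# Transform states and inputs, keeping only low-frequency bins, mimicking the
# restriction to the band where rigid-body dynamics live.
bins = range(1, 12)
X = [dft_bin(x[:n - 1], k) for k in bins]
U = [dft_bin(u[:n - 1], k) for k in bins]
XD = [dft_bin(xdot, k) for k in bins]

# Least squares for XD = a*X + b*U via the real 2x2 normal equations.
sxx = sum(abs(v) ** 2 for v in X)
suu = sum(abs(v) ** 2 for v in U)
sxu = sum((v.conjugate() * w).real for v, w in zip(X, U))
r1 = sum((v.conjugate() * w).real for v, w in zip(X, XD))
r2 = sum((v.conjugate() * w).real for v, w in zip(U, XD))
det = sxx * suu - sxu * sxu
a_est = (suu * r1 - sxu * r2) / det
b_est = (sxx * r2 - sxu * r1) / det
```

    The actual work uses a Fourier integral, multi-state aircraft models, and a Maximum Likelihood refinement; this sketch only shows the frequency-domain regression idea.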

  9. Biotechnological experiments in space flights on board of space stations

    NASA Astrophysics Data System (ADS)

    Nechitailo, Galina S.

    2012-07-01

    Space flight conditions are stressful for any plant and cause structural-functional transitions due to mobilization of adaptive capacity. In space flight experiments with pea tissue, wheat, and arabidopsis we found anatomical, morphological, and biochemical transformations in the plants. In subsequent experiments, tissue of stevia (Stevia rebaudiana), potato (Solanum tuberosum), callus culture and bulbs of saffron (Crocus sativus), and callus culture of ginseng (Panax ginseng) were investigated. Experiments with stevia were carried out in special chambers. The duration of the experiment was 8-14 days. A board lamp was used for illumination of the plants. After the experiment the plants grew in the same chamber, and after 50 days the plants were moved into artificial ion-exchange soil. Biochemical analysis of the plants was done. The total concentration of glycosides and the ratio of stevioside to rebaudioside were found to differ between space and ground plants. In subsequent generations of stevia after flight, the total concentration of stevioside and rebaudioside remained higher than in ground plants. Experiments with callus culture of saffron were carried out in tubes. The duration of the space flight experiment was 8-167 days. A board lamp was used for illumination of the plants. We found picrocrocin pigment in the space plants but not in the ground plants. Tissue culture of ginseng was grown in a special container in a thermostat at a stable temperature of 22 ± 0,5 C. The duration of the space experiment was from 8 to 167 days. The biological activity of the space flight culture was five times higher than that of the ground culture. This difference was still observed after recultivation of the space flight samples on Earth during the year after flight. Callus tissue of potato was grown in tubes in a thermostat at a stable temperature of 22 ± 0,5 C. The duration of the space experiment was from 8 to 14 days. The concentration of regenerates in flight samples was five times higher than in ground samples. The space flight experiments show, that microgravity and other

  10. A technique for on-board CT reconstruction using both kilovoltage and megavoltage beam projections for 3D treatment verification.

    PubMed

    Yin, Fang-Fang; Guan, Huaiqun; Lu, Wenkai

    2005-09-01

    The technologies with kilovoltage (kV) and megavoltage (MV) imaging in the treatment room are now available for image-guided radiation therapy to improve patient setup and target localization accuracy. However, development of strategies to efficiently and effectively implement these technologies for patient treatment remains challenging. This study proposed an aggregated technique for on-board CT reconstruction using a combination of kV and MV beam projections to improve the data acquisition efficiency and image quality. These projections were acquired in the treatment room at the patient treatment position with a new kV imaging device installed on the accelerator gantry, orthogonal to the existing MV portal imaging device. The projection images for a head phantom and a contrast phantom were acquired using both the On-Board Imager kV imaging device and the MV portal imager mounted orthogonally on the gantry of a Varian Clinac 21EX linear accelerator. MV projections were converted into kV information prior to the aggregated CT reconstruction. The multilevel scheme algebraic-reconstruction technique was used to reconstruct CT images involving either full, truncated, or a combination of both full and truncated projections. An adaptive reconstruction method was also applied, based on the limited numbers of kV projections and truncated MV projections, to enhance the anatomical information around the treatment volume and to minimize the radiation dose. The effects of the total number of projections, the combination of kV and MV projections, and the beam truncation of MV projections on the details of reconstructed kV/MV CT images were also investigated.
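
    The basic algebraic-reconstruction update that multilevel schemes build on can be sketched with the classical Kaczmarz iteration on a toy four-pixel image; the projection geometry here is invented and far simpler than the paper's kV/MV setup.

```python
def art_reconstruct(rays, b, n_unknowns, sweeps=2000, relax=1.0):
    # Kaczmarz/ART: each measured projection gives one linear equation
    # sum_j a_ij * x_j = b_i; project the current estimate onto each
    # equation's hyperplane in turn.
    x = [0.0] * n_unknowns
    for _ in range(sweeps):
        for a, bi in zip(rays, b):
            norm2 = sum(aj * aj for aj in a)
            if norm2 == 0.0:
                continue
            residual = bi - sum(aj * xj for aj, xj in zip(a, x))
            step = relax * residual / norm2
            x = [xj + step * aj for xj, aj in zip(x, a)]
    return x

# Invented 2x2 "image" probed by row, column, and one diagonal sum.
true_img = [3.0, 1.0, 2.0, 4.0]          # pixels [p00, p01, p10, p11]
rays = [
    [1, 1, 0, 0],   # top row
    [0, 0, 1, 1],   # bottom row
    [1, 0, 1, 0],   # left column
    [0, 1, 0, 1],   # right column
    [1, 0, 0, 1],   # main diagonal
]
b = [sum(a * p for a, p in zip(ray, true_img)) for ray in rays]
recon = art_reconstruct(rays, b, 4)
```

    With consistent data and enough independent rays the iteration converges to the true pixel values; real CT adds cone-beam geometry, noise, and the multilevel ordering of projections.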

  11. High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.

    2015-12-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. As for ASCENDS, HOPS replaces time domain data processing with frequency domain processing while making real-time on-board data processing possible. As for 3-D Winds, HOPS offers real-time high-resolution wind profiling with 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS and 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.

  12. On-board Attitude Determination System (OADS). [for advanced spacecraft missions

    NASA Technical Reports Server (NTRS)

    Carney, P.; Milillo, M.; Tate, V.; Wilson, J.; Yong, K.

    1978-01-01

    The requirements, capabilities and system design for an on-board attitude determination system (OADS) to be flown on advanced spacecraft missions were determined. Based upon the OADS requirements and system performance evaluation, a preliminary on-board attitude determination system is proposed. The proposed OADS system consists of one NASA Standard IRU (DRIRU-2) as the primary attitude determination sensor, two improved NASA Standard star trackers (SST) for periodic update of attitude information, a GPS receiver to provide on-board space vehicle position and velocity vector information, and a multiple microcomputer system for data processing and attitude determination functions. The functional block diagram of the proposed OADS system is shown. The computational requirements are evaluated based upon this proposed OADS system.
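
    The IRU-propagation-plus-star-tracker-update scheme can be sketched for a single axis. In this hypothetical example the gyro carries a constant rate bias, and a periodic star-tracker fix is blended in with a fixed gain to bound the drift; a real system would use a Kalman filter over three axes and estimate the bias itself.

```python
def propagate_attitude(gyro_rates, dt, star_fixes, gain=0.5):
    # 1-axis sketch of the IRU + star-tracker scheme: integrate the gyro
    # rate each step; when a star-tracker fix arrives, blend it in to
    # bound the drift. star_fixes maps step index -> measured angle (rad).
    angle = 0.0
    history = []
    for k, rate in enumerate(gyro_rates):
        angle += rate * dt                      # IRU propagation
        if k in star_fixes:                     # periodic SST update
            angle += gain * (star_fixes[k] - angle)
        history.append(angle)
    return history

# True rate 0.01 rad/s; the gyro reads it with a +0.002 rad/s bias.
dt, n = 0.1, 100
true = [0.01 * dt * (k + 1) for k in range(n)]
gyro = [0.012] * n
fixes = {k: true[k] for k in range(9, n, 10)}   # a perfect fix every 10 steps
est = propagate_attitude(gyro, dt, fixes)
drift_no_fix = abs(0.012 * dt * n - true[-1])   # pure-gyro drift after n steps
drift_with_fix = abs(est[-1] - true[-1])
```

    Without updates the bias accumulates linearly; with periodic fixes the error settles at a bounded steady state set by the fix interval and blending gain.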

  13. Evaluation of the use of on-board spacecraft energy storage for electric propulsion missions

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Palmer, F. M.

    1983-01-01

    On-board spacecraft energy storage represents an underutilized resource for some types of missions that also benefit from the relatively high specific impulse capability of electric propulsion. This resource can provide an appreciable fraction of the power required for operating the electric propulsion subsystem in some missions. The most probable mission requirement for utilization of this energy is that of geostationary satellites, which have secondary batteries for operating at high power levels during eclipse. The study summarized in this report selected four examples of missions that could benefit from use of electric propulsion and on-board energy storage. Engineering analyses were performed to evaluate the mass saved and economic benefit expected when electric propulsion and on-board batteries perform some propulsion maneuvers that would conventionally be provided by chemical propulsion. For a given payload mass in geosynchronous orbit, use of electric propulsion in this manner typically provides a 10% reduction in spacecraft mass.
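
    The mass-saving argument rests on the rocket equation: for the same delta-v, a higher specific impulse sharply cuts propellant mass. A sketch with invented numbers follows (1000 kg spacecraft, 500 m/s of maneuvers, representative chemical and ion-thruster Isp values; not the study's actual mission cases).

```python
import math

def propellant_mass(m_final, delta_v, isp, g0=9.81):
    # Tsiolkovsky rocket equation: propellant needed to give m_final
    # (payload + dry mass) a velocity change delta_v at specific impulse isp.
    return m_final * (math.exp(delta_v / (g0 * isp)) - 1.0)

# Hypothetical comparison: chemical Isp ~300 s vs ion-thruster Isp ~3000 s.
m_chem = propellant_mass(1000.0, 500.0, 300.0)   # ~185 kg of propellant
m_ion = propellant_mass(1000.0, 500.0, 3000.0)   # ~17 kg of propellant
```

    The tenfold Isp advantage translates into roughly a tenfold propellant saving here, which is the kind of margin that makes tapping eclipse batteries for electric thrusters attractive.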

  14. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.

    2015-12-01

    Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.

  15. Toward a new metric for ranking high performance computing systems.

    SciTech Connect

    Heroux, Michael Allen; Dongarra, Jack.

    2013-06-01

    The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we strive for a better correlation to real scientific application performance and expect to drive computer system design and implementation in directions that will better impact performance improvement.
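
    The kernel HPCG is built around can be stated in a few lines. Below is a minimal textbook conjugate-gradient solver on a small symmetric positive definite system with the 1-D Poisson-like sparsity pattern that HPCG generalizes to 3-D; this is an illustration of the method, not the benchmark's reference implementation.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    # Minimal CG for a dense SPD matrix A (list of rows) and right-hand side b.
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual r = b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# 1-D Poisson-like SPD system; its solution is [1, 1, 1].
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = conjugate_gradient(A, b)
```

    The point of HPCG is that this kernel's sparse matrix-vector products and dot-product reductions stress memory and network bandwidth rather than the dense floating-point throughput HPL rewards.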

  16. ARPHA: An Innovative On-Board FDIR Reasoning Engine for Autonomous System

    NASA Astrophysics Data System (ADS)

    Guiotto, Andrea; Portinale, Luigi; Codetta-Raiteri, Daniele; Yushtein, Yuri

    2012-08-01

    In the frame of European Space Agency (ESA) studies, Thales Alenia Space carried out a research project - VERIFIM - in collaboration with Università del Piemonte Orientale, implementing a software prototype called ARPHA (Anomaly Resolution and Prognostic Health management for Autonomy) for on-board diagnosis, prognosis and recovery. It is an innovative on-board FDIR (Failure Detection, Isolation and Recovery) reasoning engine for autonomous systems, based on inference techniques that use Dynamic Probabilistic Graphical Models. The project started in June 2010 and ended in December 2011.

  17. Conceptual design of an on-board optical processor with components

    NASA Technical Reports Server (NTRS)

    Walsh, J. R.; Shackelford, R. G.

    1977-01-01

    The specification of components for a spacecraft on-board optical processor was investigated. A space oriented application of optical data processing and the investigation of certain aspects of optical correlators were examined. The investigation confirmed that real-time optical processing has made significant advances over the past few years, but that there are still critical components which will require further development for use in an on-board optical processor. The devices evaluated were the coherent light valve, the readout optical modulator, the liquid crystal modulator, and the image forming light modulator.

  18. Safety in earth orbit study. Volume 2: Analysis of hazardous payloads, docking, on-board survivability

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Detailed and supporting analyses are presented of the hazardous payloads, docking, and on-board survivability aspects connected with earth orbital operations of the space shuttle program. The hazards resulting from delivery, deployment, and retrieval of hazardous payloads, and from handling and transport of cargo between orbiter, sortie modules, and space station are identified and analyzed. The safety aspects of shuttle orbiter to modular space station docking includes docking for assembly of space station, normal resupply docking, and emergency docking. Personnel traffic patterns, escape routes, and on-board survivability are analyzed for orbiter with crew and passenger, sortie modules, and modular space station, under normal, emergency, and EVA and IVA operations.

  19. The Solar Spectral Irradiance Measured on Board the International Space Station and the Picard Spacecraft

    NASA Astrophysics Data System (ADS)

    Thuillier, G. O.; Bolsee, D.; Schmidtke, G.; Schmutz, W. K.

    2011-12-01

    On board the International Space Station, the spectrometers SOL-ACES and SOLSPEC measure the solar spectral irradiance from 17 to 150 nm and 170 to 2900 nm, respectively. On board PICARD, launched on 15 June 2010, the PREMOS instrument consists of a radiometer and several sunphotometers operated at fixed wavelengths. We shall present spectra at different solar activity levels as well as their quoted accuracy. Comparisons with similar data from other missions presently operating in space, incorporating the PREMOS measurements, will be shown. Some special solar events will also be presented and interpreted.

  20. Real-Time On-Board HMS/Inspection Capability for Propulsion and Power Systems

    NASA Technical Reports Server (NTRS)

    Barkhoudarian, Sarkis

    2005-01-01

    Presently, the evaluation of the health of space propulsion systems includes obtaining and analyzing limited flight data and extensive post-flight performance, operational and inspection data. This approach is not practical for deep-space missions due to longer operational times, the lack of an in-space inspection facility, the absence of timely ground commands and very long repair intervals. This paper identifies the on-board health-management/inspection needs of deep-space propulsion and thermodynamic power-conversion systems. It also describes technologies that could provide on-board inspection and more comprehensive health management for more successful missions.

  1. Tiny biomedical amplifier combines high performance, low power drain

    NASA Technical Reports Server (NTRS)

    Deboo, G. J.

    1965-01-01

    Transistorized, portable, high performance amplifier with low power drain facilitates biomedical studies on mobile subjects. This device, which utilizes a differential input to obtain a common-mode rejection, is used for amplifying electrocardiogram and electromyogram signals.

  2. High Performance Schools--It's a No-Brainer.

    ERIC Educational Resources Information Center

    Nicklas, Mike

    2002-01-01

    A North Carolina middle school demonstrates that high performance, sustainable school buildings cost no more to build and are more comfortable and productive learning environments than conventional buildings. (Author)

  3. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    SciTech Connect

    2004-11-01

    Design guidelines outline high performance principles for the new or retrofit design of K-12 schools in tropical island climates. By incorporating energy improvements into construction or renovation plans, schools can reduce energy consumption and costs.

  4. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Rotor dynamic instability problems in high performance turbomachinery are reviewed. Mechanical instability mechanisms are discussed. Seal forces and working fluid forces in turbomachinery are discussed. Control of rotor instability is also investigated.

  5. Exploring KM Features of High-Performance Companies

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Wen

    2007-12-01

    To respond to an increasingly competitive business environment, many companies emphasize the importance of knowledge management (KM). Exploring and learning from the KM features of high-performance companies is a favorable way to do so. However, finding the critical KM features of high-performance companies is a qualitative analysis problem. To handle this kind of problem, the rough set approach is suitable because it is based on data-mining techniques that discover knowledge without rigorous statistical assumptions. Thus, this paper explored the KM features of high-performance companies using the rough set approach. The results show that high-performance companies stress the importance of both tacit and explicit knowledge, and consider incentives and evaluations essential to implementing KM.
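
    The lower and upper approximations at the core of the rough set approach can be sketched directly. The firms, attributes, and decision class below are invented; the sketch only shows how indiscernibility classes yield the two approximations of a concept.

```python
def approximations(objects, attributes, target):
    # Rough-set sketch: partition objects into indiscernibility classes by
    # the chosen attributes, then form lower/upper approximations of `target`.
    classes = {}
    for name, attrs in objects.items():
        key = tuple(attrs[a] for a in attributes)
        classes.setdefault(key, set()).add(name)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:
            lower |= cls           # class lies entirely inside the concept
        if cls & target:
            upper |= cls           # class at least touches the concept
    return lower, upper

# Hypothetical firms described by two KM-style condition attributes.
firms = {
    "A": {"tacit": "high", "incentives": "yes"},
    "B": {"tacit": "high", "incentives": "yes"},
    "C": {"tacit": "low", "incentives": "yes"},
    "D": {"tacit": "low", "incentives": "no"},
}
high_performers = {"A", "C"}
lower, upper = approximations(firms, ["tacit", "incentives"], high_performers)
```

    Here firms A and B are indiscernible on the chosen attributes but only A is a high performer, so the concept is "rough": the lower approximation keeps only C while the upper approximation includes A, B, and C.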

  6. Variational formulation of high performance finite elements: Parametrized variational principles

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Militello, Carmello

    1991-01-01

    High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational basis of high-performance elements, with emphasis on those constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parametrized variational principles that provide a foundation for the FF and ANS methods, as well as for a combination of both are presented.

  7. High performance thermal imaging for the 21st century

    NASA Astrophysics Data System (ADS)

    Clarke, David J.; Knowles, Peter

    2003-01-01

    In recent years IR detector technology has developed from early short linear arrays. Such devices require high performance signal processing electronics to meet today's thermal imaging requirements for military and para-military applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.

  8. Development of Synthetic Spider Silk Fibers for High Performance Applications

    DTIC Science & Technology

    2013-08-08

    The overall goal of this project is to demonstrate the feasibility of synthetic production of high-performance spider silk fibers for use in next-generation automotives. The conditions for syringe pump extrusion of silk fibers often seen in the literature do not translate to industrial-scale wet spinning.

  9. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  10. The cyclic fatigue of high-performance fibers

    NASA Astrophysics Data System (ADS)

    Kerr, M.; Chawla, N.; Chawla, K. K.

    2005-02-01

    High-performance fibers are virtually ubiquitous in our everyday lives. In a variety of structural applications, fibers and fiber-reinforced composites are subjected to cyclic mechanical loading. This paper reviews the fatigue behavior of some common high-performance fibers such as polymer, metal, and ceramic fibers. Fatigue mechanisms unique to each type of fiber are identified and a description of fatigue damage and fracture is provided.

  11. High performance pipelined multiplier with fast carry-save adder

    NASA Technical Reports Server (NTRS)

    Wu, Angus

    1990-01-01

    A high-performance pipelined multiplier is described. Its high performance results from the fast carry-save adder basic cell, which has a simple structure and is suitable for the Gate Forest semi-custom environment. The carry-save adder computes the sum and carry within two gate delays. Results show that the proposed adder can operate at 200 MHz for a 2-micron CMOS process; better performance is expected in a Gate Forest realization.
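    The carry-save idea described in the abstract can be illustrated with a minimal sketch: a 3:2 compressor cell produces a sum bit and a carry bit with no carry propagation, so three operands reduce to two words that a single fast adder combines at the end. The helper names below are illustrative, not the paper's Gate Forest cell design.

```python
def csa_cell(a: int, b: int, c: int) -> tuple[int, int]:
    """One-bit carry-save adder cell: sum and carry from three input bits."""
    s = a ^ b ^ c                        # sum bit
    carry = (a & b) | (a & c) | (b & c)  # majority function -> carry bit
    return s, carry

def csa_reduce(x: int, y: int, z: int) -> tuple[int, int]:
    """Reduce three operands to a sum word and a carry word with no carry
    propagation; a final fast adder combines the two words."""
    s = x ^ y ^ z
    c = ((x & y) | (x & z) | (y & z)) << 1  # carries shift left one place
    return s, c

s, c = csa_reduce(13, 7, 9)
assert s + c == 13 + 7 + 9  # carry resolution deferred to the final adder
```

    Because no carries ripple inside the cell, the critical path stays constant regardless of word length, which is what makes the cell attractive for a pipelined multiplier.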

  12. Reduction in redundancy of multichannel telemetric information by the method of adaptive discretization with associative sorting

    NASA Technical Reports Server (NTRS)

    Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.

    1974-01-01

    The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.

  13. Deployment of precise and robust sensors on board ISS-for scientific experiments and for operation of the station.

    PubMed

    Stenzel, Christian

    2016-09-01

    The International Space Station (ISS) is the largest technical vehicle ever built by mankind. It provides a living area for six astronauts and also represents a laboratory in which scientific experiments are conducted in an extraordinary environment. The deployed sensor technology contributes significantly to the operational and scientific success of the station. The sensors on board the ISS can thereby be classified into two categories which differ significantly in their key features: (1) sensors related to crew and station health, and (2) sensors to provide specific measurements in research facilities. The operation of the station requires robust, long-term stable and reliable sensors, since they assure the survival of the astronauts and the intactness of the station. Recently, a wireless sensor network for measuring environmental parameters like temperature, pressure, and humidity was established, and its function was successfully verified over several months. Such a network enhances the operational reliability and stability for monitoring these critical parameters compared to single sensors. The sensors which are implemented into the research facilities have to fulfil other objectives. The high performance of the scientific experiments conducted in the different research facilities on board demands the perfect embedding of the sensor in the respective instrumental setup, which forms the complete measurement chain. It is shown that the performance of the single sensor alone does not determine the success of the measurement task; rather, the synergy between different sensors and actuators, as well as appropriate sample taking followed by appropriate sample preparation, plays an essential role. Application in a space environment adds further challenges for the sensor technology, for example the necessity for miniaturisation, automation, reliability, and long-term operation. An alternative is the repetitive calibration of the sensors. This approach…

  14. 29 CFR 1915.506 - Hazards of fixed extinguishing systems on board vessels and vessel sections.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... extinguishing systems on board vessels and vessel sections. (a) Employer responsibilities. The employer must... protected spaces sufficient to impede escape. (e) Testing the system. (1) When testing a fixed extinguishing... protected space. (f) Conducting system maintenance. Before conducting maintenance on a fixed...

  15. Marine Technician's Handbook, Instructions for Taking Air Samples on Board Ship: Carbon Dioxide Project.

    ERIC Educational Resources Information Center

    Keeling, Charles D.

    This booklet is one of a series intended to provide explicit instructions for the collection of oceanographic data and samples at sea. The methods and procedures described have been used by the Scripps Institution of Oceanography and found reliable and up-to-date. Instructions are given for taking air samples on board ship to determine the…

  16. Autonomous Defensive Space Control via On-Board Artificial Neural Networks

    DTIC Science & Technology

    2007-04-01

    by investing the necessary resources for the development of space-based neural networks. An artificial neural network (ANN), or commonly just neural...processing capability could potentially enable the placement of neural networks, requiring significant processing power and storage capacity, on-board

  17. BRESEX: On board supervision, basic architecture and preliminary aspects for payload and space shuttle interface

    NASA Technical Reports Server (NTRS)

    Bergamini, E. W.; Depaula, A. R., Jr.; Martins, R. C. D. O.

    1984-01-01

    Data on the on-board supervision subsystem are presented, as considered at a meeting between INPE and NASA personnel held to initiate a joint effort leading to the implementation of the Brazilian remote sensing experiment (BRESEX). The BRESEX should consist, basically, of a multispectral camera for Earth observation, to be tested in a future space shuttle flight.

  18. Astronauts Schirra and Stafford talk to crewmen on board the U.S.S. Wasp

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Astronauts Walter M. Schirra Jr. (left), command pilot, and Thomas P. Stafford, pilot, talk to crewmen on board the aircraft carrier U.S.S. Wasp after successful recovery of the Gemini 6 spacecraft. Note the cake with a model of the Gemini spacecraft in its center, which is positioned in front of the astronauts.

  19. Localization algorithms for micro-channel x-ray telescope on board SVOM space mission

    NASA Astrophysics Data System (ADS)

    Gosset, L.; Götz, D.; Osborne, J.; Willingale, R.

    2016-07-01

    SVOM is a French-Chinese space mission to be launched in 2021, whose goal is the study of Gamma-Ray Bursts, the most powerful stellar explosions in the Universe. The Micro-channel X-ray Telescope (MXT) is an X-ray focusing telescope on board SVOM, with a field of view of 1 degree (working in the 0.2-10 keV energy band), dedicated to the rapid follow-up of Gamma-Ray Burst counterparts and to their precise localization (better than 2 arc minutes). In order to reduce the optics mass and to achieve an angular resolution of a few arc minutes, a "lobster-eye" configuration has been chosen. Using a numerical model of the MXT Point Spread Function (PSF), we simulated MXT observations of point sources in order to develop and test different localization algorithms to be implemented on board MXT, including preliminary estimates of the instrumental and sky background. The on-board algorithms must combine speed and precision (the brightest sources are expected to be localized with a precision better than 10 arc seconds in the MXT reference frame). We present a comparison between different methods, such as the barycentre and PSF fitting in one or two dimensions. The temporal performance of the algorithms is being tested using the X-ray afterglow database of the XRT telescope on board the NASA Swift satellite.
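    Of the localization methods named in the abstract, the barycentre is the simplest: the source position is the counts-weighted centre of mass of the detector image. A minimal sketch on a toy image, with an assumed uniform background threshold to keep sky and instrumental counts from biasing the centroid (illustrative only, not the MXT flight code):

```python
def barycentre(image, threshold=0.0):
    """Return the (x, y) centroid of pixels above a background threshold."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, counts in enumerate(row):
            w = counts - threshold          # background-subtracted weight
            if w > 0:
                total += w
                sx += w * x
                sy += w * y
    if total == 0:
        raise ValueError("no counts above threshold")
    return sx / total, sy / total

# Toy point source centred between pixels (1, 1) and (2, 1):
img = [[0, 0, 0, 0],
       [0, 4, 4, 0],
       [0, 0, 0, 0]]
assert barycentre(img) == (1.5, 1.0)
```

    PSF fitting trades this speed for precision by matching the modelled PSF shape to the counts instead of taking a plain weighted mean.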

  20. Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors

    NASA Technical Reports Server (NTRS)

    Flatley, Thomas P.

    2015-01-01

    SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.

  1. 76 FR 13121 - Electronic On-Board Recorders and Hours of Service Supporting Documents

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-10

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF TRANSPORTATION Federal Motor Carrier Safety Administration 49 CFR Parts 385, 390, and 395 RIN 2126-AB20 Electronic On... requested that FMCSA extend the comment period for the Electronic On-Board Recorder and Hours of...

  2. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND OPERATORS FOR... OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation 106—Rules for use of portable...

  3. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation No. 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND... SUPPLEMENTAL OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation No. 106—Rules for use...

  4. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation No. 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND... SUPPLEMENTAL OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation No. 106—Rules for use...

  5. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND OPERATORS FOR... OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation 106—Rules for use of portable...

  6. 14 CFR Special Federal Aviation... - Rules for use of portable oxygen concentrator systems on board aircraft

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... concentrator systems on board aircraft Federal Special Federal Aviation Regulation 106 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND OPERATORS FOR... OPERATIONS Pt. 121, SFAR No. 106 Special Federal Aviation Regulation 106—Rules for use of portable...

  7. Flight results of a new GEO infrared Earth sensor STD 15 on board TC2

    NASA Astrophysics Data System (ADS)

    Brunel, O.; Krebs, J. P.

    In the frame of the Telecom 2 and Hispasat spacecraft programs, Matra-Marconi Space (France) entrusted SODERN in 1988 with the development of an accurate version of the Earth sensor designed to operate on three-axis stabilized satellites at geosynchronous altitude. This new sensor, called STD 15, is a versatile design directly derived from the STD 12 already used on board the SPOT/ERS satellites. Two STD 15 units were launched for the first time in December 1991 on board the TC2-A satellite, followed by a second launch in April 1992 on board TC2-B and a third in September 1992 on board HISPASAT 1A. After a few months of successful operation, the telemetry data were analyzed by SODERN in order to draw conclusions about the STD 15's behavior. The aim of this paper is to describe the latest version of the sensor and then to carry out a preliminary analysis of the available in-flight data. After a short presentation of its operating principle and associated algorithms, the equipment is briefly described and its main features are shown, with special attention to the error budget for the various operating modes: transfer orbit (classical or super-synchronous), Earth acquisition, antenna mapping, attitude control, and station keeping in GEO. Finally, the main performance results from on-ground testing and computed theoretical results are discussed and compared with the in-flight results.

  8. 49 CFR 395.16 - Electronic on-board recording devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Section 395.16 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR... to use. This section applies to electronic on-board recording devices (EOBRs) used to record the driver's hours of service as specified by part 395. Motor carriers subject to a remedial directive...

  9. 49 CFR 176.98 - Stowage of hazardous materials on board barges.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Stowage of hazardous materials on board barges. 176.98 Section 176.98 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION HAZARDOUS MATERIALS REGULATIONS CARRIAGE BY VESSEL Special Requirements for Barges...

  10. 77 FR 7562 - Electronic On-Board Recorders and Hours of Service Supporting Documents

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ... Federal Motor Carrier Safety Administration 49 CFR Parts 385, 390, and 395 RIN 2126-AB20 Electronic On... the Electronic On-Board Recorders and Hours of Service Supporting Documents rulemaking (EOBR 2) by... developing material to support this ] rulemaking, including technical specifications for EOBRs and...

  11. On-Board Cryosphere Change Detection With The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Chien, S.; Davies, A.; Tran, D.; Mazzoni, D.; Baker, V.; Dohm, J.; Ip, F.

    2006-12-01

    The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyper-spectral visible to short-wave infrared spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected, and filtering of null data such as no change or cloud cover. A cryosphere classification algorithm, developed with Support Vector Machine (SVM) machine learning techniques [1] and replacing a manually derived classifier used in earlier operations [2], has been used in conjunction with the on-board autonomous software to execute over three hundred on-board scenarios in 2005 and early 2006, detecting and autonomously responding to sea ice break-up and formation, lake freeze and thaw, as well as the onset and melting of snow cover on land. This demonstrates an approach which could be applied to the monitoring of cryospheres on Earth and Mars as well as the search for dynamic activity on the icy moons of the outer Solar System. [1] Castano et al. (2006) Onboard classifiers for science event detection on a remote-sensing spacecraft, KDD '06, Aug 20-23 2006, Philadelphia, PA. [2] Doggett et al. (2006), Autonomous detection of cryospheric change with Hyperion on-board Earth Observing-1, Rem. Sens. Env., 101, 447-462.
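    The appeal of an SVM for on-board use is that, once trained on the ground, a linear classifier reduces to a single dot product per pixel spectrum, which is cheap enough for flight software. A hedged sketch of that decision rule; the 3-band weights below are invented for illustration and are not the actual ASE/Hyperion classifier.

```python
def svm_classify(spectrum, weights, bias):
    """Linear SVM decision function: sign of w.x + b (+1 = snow/ice)."""
    score = sum(w * x for w, x in zip(weights, spectrum)) + bias
    return 1 if score >= 0 else -1

# Invented weights: bright in the visible with low SWIR suggests snow/ice.
w, b = [0.9, 0.2, -1.1], -0.1
assert svm_classify([0.8, 0.5, 0.1], w, b) == 1    # snow/ice-like spectrum
assert svm_classify([0.2, 0.3, 0.6], w, b) == -1   # non-ice spectrum
```

    Change detection then follows by comparing the per-pixel labels of successive overpasses of the same scene.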

  12. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the information specified in § 395.8(d) of this part. The support systems must also provide information concerning on-board system sensor failures and identification of edited data. Such support systems... recording device shall use such device to record the driver's hours of service. (b) Information...

  13. 77 FR 54651 - Study on the Use of Cell Phones On Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... Cell Phones On Board Aircraft AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice of...) directed the Administrator of the FAA to conduct a study on the impact of the use of cell phones for voice... Cell Phone Study Comments using any of the following methods: E-Mail: Send comments to...

  14. 75 FR 39629 - Use of One Additional Portable Oxygen Concentrator Device on Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-12

    ... Federal Aviation Administration 14 CFR Part 121 RIN 2120-AJ77 Use of One Additional Portable Oxygen... Oxygen Concentrator Systems on Board Aircraft, to allow for the use of one additional portable oxygen... the traveling public in need of oxygen therapy. When this rule becomes effective, there will be...

  15. 77 FR 63217 - Use of Additional Portable Oxygen Concentrators on Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-16

    ... Federal Aviation Administration 14 CFR Part 121 RIN 2120-AK18 Use of Additional Portable Oxygen...: This action amends the FAA's rules for permitting limited use of portable oxygen concentrator systems on board aircraft, to allow for the use of additional portable oxygen concentrator (POC) devices...

  16. 49 CFR 176.78 - Use of power-operated industrial trucks on board vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...'s overhead guard. When the overall height of the truck with forks in the lowered position is limited... 49 Transportation 2 2012-10-01 2012-10-01 false Use of power-operated industrial trucks on board... CARRIAGE BY VESSEL General Handling and Stowage § 176.78 Use of power-operated industrial trucks on...

  17. Development of On-Board Fluid Analysis for the Mining Industry - Final report

    SciTech Connect

    Pardini, Allan F.

    2005-08-16

    Pacific Northwest National Laboratory (PNNL: Operated by Battelle Memorial Institute for the Department of Energy) is working with the Department of Energy (DOE) to develop technology for the US mining industry. PNNL was awarded a three-year program to develop automated on-board/in-line or on-site oil analysis for the mining industry.

  18. 40 CFR 85.2231 - On-board diagnostic test equipment requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) CONTROL OF AIR POLLUTION FROM MOBILE SOURCES Emission Control System Performance Warranty Short Tests § 85.2231 On-board diagnostic test equipment requirements. (a) The test system interface to the vehicle shall include a plug that conforms to SAE J1962 “Diagnostic Connector.”...

  19. AGB Statement on Board Responsibility for the Oversight of Educational Quality

    ERIC Educational Resources Information Center

    Association of Governing Boards of Universities and Colleges, 2011

    2011-01-01

    This "Statement on Board Responsibility for the Oversight of Educational Quality," approved by the Board of Directors of the Association of Governing Boards (AGB) in March 2011, urges institutional administrators and governing boards to engage fully in this area of board responsibility. The seven principles in this statement offer suggestions to…

  20. On-Board File Management and Its Application in Flight Operations

    NASA Technical Reports Server (NTRS)

    Kuo, N.

    1998-01-01

    In this paper, the author presents the minimum functions required for an on-board file management system. We explore file manipulation processes and demonstrate how the file transfer along with the file management system will be utilized to support flight operations and data delivery.

  1. Re-scheduling as a tool for the power management on board a spacecraft

    NASA Technical Reports Server (NTRS)

    Albasheer, Omar; Momoh, James A.

    1995-01-01

    The scheduling of events on board a spacecraft is based on forecast energy levels. The real-time values of energy may not coincide with the forecast values; consequently, dynamic revision of the power allocation is needed. Re-scheduling is also needed for other reasons on board a spacecraft, such as the addition of a new event which must be scheduled, or the failure of an event due to any of many different contingencies. This need for re-scheduling is very important to the survivability of the spacecraft. In this presentation, a re-scheduling tool is presented as part of an overall scheme for power management on board a spacecraft from the energy-allocation point of view. The overall scheme is based on the optimal use of the energy available on board, using expert systems combined with linear optimization techniques. The system is able to schedule the maximum number of events utilizing most of the available energy. The outcome is more events scheduled to share the operating cost of the spacecraft. The system is also able to re-schedule in case of a contingency, with minimal time and minimal disturbance of the original schedule. The end product is a fully integrated planning system capable of producing the right decisions in a short time with less human error. The overall system is presented with the re-scheduling algorithm discussed in detail; tests and results are then presented for validation.
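    The core allocation idea above can be sketched in miniature: choose events to maximise how many fit within the current energy budget, then simply re-run the same optimiser whenever the real-time energy level diverges from the forecast. A greedy cheapest-first pass (which is optimal for maximising the event count) stands in here for the paper's expert system plus linear optimization; the event data are invented.

```python
def schedule(events, budget):
    """events: list of (name, energy_cost). Greedy cheapest-first maximises
    the number of events that fit within the energy budget."""
    plan, used = [], 0.0
    for name, energy in sorted(events, key=lambda e: e[1]):
        if used + energy <= budget:
            plan.append(name)
            used += energy
    return plan

events = [("imaging", 5.0), ("downlink", 3.0), ("thruster test", 8.0)]
assert schedule(events, 10.0) == ["downlink", "imaging"]

# Contingency: actual energy falls below forecast -> re-schedule cheaply.
assert schedule(events, 4.0) == ["downlink"]
```

    Because the optimiser is fast, re-scheduling after a contingency disturbs only the events that no longer fit, mirroring the "minimal disturbance" goal of the abstract.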

  2. On-Board Fiber-Optic Network Architectures for Radar and Avionics Signal Distribution

    NASA Technical Reports Server (NTRS)

    Alam, Mohammad F.; Atiquzzaman, Mohammed; Duncan, Bradley B.; Nguyen, Hung; Kunath, Richard

    2000-01-01

    Continued progress in both civil and military avionics applications is overstressing the capabilities of existing radio-frequency (RF) communication networks based on coaxial cables on board modern aircraft. Future avionics systems will require high-bandwidth on-board communication links that are lightweight, immune to electromagnetic interference, and highly reliable. Fiber-optic communication technology can meet all these challenges in a cost-effective manner. Recently, digital fiber-optic communication systems, where a fiber-optic network acts like a local area network (LAN) for digital data communications, have become a topic of extensive research and development. Although a fiber-optic system can be designed to transport radio-frequency (RF) signals, the digital fiber-optic systems under development today are not capable of transporting the microwave and millimeter-wave RF signals used in radar and avionics systems on board an aircraft. Recent advances in fiber-optic technology, especially wavelength division multiplexing (WDM), have opened a number of possibilities for designing on-board fiber-optic networks, including all-optical networks for radar and avionics RF signal distribution. In this paper, we investigate a number of different novel approaches for fiber-optic transmission of on-board VHF and UHF RF signals using commercial off-the-shelf (COTS) components. The relative merits and demerits of each architecture are discussed, and the suitability of each architecture for particular applications is pointed out. All-optical approaches show better performance than the traditional approaches in terms of signal-to-noise ratio, power consumption, and weight requirements.

  3. Production of High Performance Bioinspired Silk Fibers by Straining Flow Spinning.

    PubMed

    Madurga, Rodrigo; Gañán-Calvo, Alfonso M; Plaza, Gustavo R; Guinea, Gustavo V; Elices, Manuel; Pérez-Rigueiro, José

    2017-03-03

    In recent years there has been increasing interest in bioinspired approaches for different applications, including the spinning of high-performance silk fibers. Bioinspired spinning is based on the natural spinning system of spiders and silkworms and requires combining changes in the chemical environment of the proteins with the application of mechanical stresses. Here we present the novel straining flow spinning (SFS) process and prove its ability to produce high-performance fibers under mild, environmentally friendly conditions from aqueous protein dopes. SFS is shown to be an extremely versatile technique which allows a large number of processing parameters to be controlled. This ample set of parameters allows fine-tuning of the microstructure and mechanical behavior of the fibers, which opens the possibility of adapting the fibers to their intended uses.

  4. High Performance Schools Best Practices Manual. Volume I: Planning [and] Volume II: Design [and] Volume III: Criteria.

    ERIC Educational Resources Information Center

    Eley, Charles, Ed.

    This three-volume manual, focusing on California's K-12 public schools, presents guidelines for establishing schools that are healthy, comfortable, energy efficient, resource efficient, water efficient, secure, adaptable, and easy to operate and maintain. The first volume describes why high performance schools are important, what components are…

  5. The Type of Culture at a High Performance Schools and Low Performance School in the State of Kedah

    ERIC Educational Resources Information Center

    Daud, Yaakob; Raman, Arumugam; Don, Yahya; O. F., Mohd Sofian; Hussin, Fauzi

    2015-01-01

    This research aims to identify the type of culture at a High Performance School (HPS) and Low Performance School (LPS) in the state of Kedah. The research instrument used to measure the type of organizational culture was adapted from Organizational Culture Assessment Instrument (Cameron & Quinn, 2006) based on Competing Values Framework Quinn…

  6. Resource estimation in high performance medical image computing.

    PubMed

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.

  7. Adaptive beamforming in a CDMA mobile satellite communications system

    NASA Technical Reports Server (NTRS)

    Munoz-Garcia, Samuel G.

    1993-01-01

    Code-Division Multiple-Access (CDMA) stands out as a strong contender for the choice of multiple-access scheme in future mobile communication systems. This is due to a variety of reasons, such as its excellent performance in multipath environments, high scope for frequency reuse, and graceful degradation near saturation. However, the capacity of CDMA is limited by the self-interference between the transmissions of the different users in the network. Moreover, the disparity between the received power levels gives rise to the near-far problem: that is, weak signals are severely degraded by the transmissions from other users. In this paper, the use of time-reference adaptive digital beamforming on board the satellite is proposed as a means to overcome the problems associated with CDMA. This technique enables a high number of independently steered beams to be generated from a single phased-array antenna, which automatically track the desired user signal and null the unwanted interference sources. Since CDMA is interference limited, the interference protection provided by the antenna converts directly and linearly into an increase in capacity. Furthermore, the proposed concept allows the near-far effect to be mitigated without requiring tight coordination of the users in terms of power control. A payload architecture is presented that illustrates the practical implementation of this concept. This digital payload architecture shows that, with the advent of high-performance CMOS digital processing, the on-board implementation of complex DSP techniques (in particular, digital beamforming) has become possible, being most attractive for mobile satellite communications.
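    "Time-reference" adaptive beamforming means the array weights are adapted so the combined output tracks a known training signal, a job classically done with the LMS algorithm. A toy two-element sketch under invented parameters (not the payload's actual DSP design): the weights converge so the beamformer output reproduces the reference.

```python
def lms_step(weights, snapshot, reference, mu=0.05):
    """One complex LMS update: y = w^H x, e = d - y, w <- w + mu*x*conj(e)."""
    y = sum(w.conjugate() * x for w, x in zip(weights, snapshot))
    e = reference - y
    return [w + mu * x * e.conjugate() for w, x in zip(weights, snapshot)]

# Train on a BPSK reference arriving at broadside (same phase at both elements).
w = [0j, 0j]
d = 1 + 0j
for _ in range(200):
    d = -d                      # alternating +1/-1 training symbols
    x = [d, d]                  # broadside array snapshot
    w = lms_step(w, x, d)

y = sum(wi.conjugate() * xi for wi, xi in zip(w, [1 + 0j, 1 + 0j]))
assert abs(y - 1) < 0.05        # beamformer output has locked to the reference
```

    With interferers present in the snapshots, the same update steers nulls toward them automatically, which is the capacity gain the abstract describes.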

  8. The Process Guidelines for High-Performance Buildings

    SciTech Connect

    Grondzik, W.

    1999-07-01

    The Process Guidelines for High-Performance Buildings are a set of recommendations for the design and operation of efficient and effective commercial/institutional buildings. The Process Guidelines have been developed in a searchable database format and are intended to replace print documents that provide guidance for new building designs for the State of Florida and for the operation of existing State buildings. The Process Guidelines for High-Performance Buildings reside on the World Wide Web and are publicly accessible. Contents may be accessed in a variety of ways to best suit the needs of the user. The Process Guidelines address the interests of a range of facilities professionals; are organized around the primary phases of building design, construction, and operation; and include content dealing with all major building systems. The Process Guidelines for High-Performance Buildings may be accessed through the "Resources" area of the edesign Web site: http://fcn.state.fl.us/fdi/edesign/resource/index.html.

  9. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  10. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  11. Real-Time On-Board Airborne Demonstration of High-Speed On-Board Data Processing for Science Instruments (HOPS)

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign, in July 2014, was based at NASA Langley Research Center (LaRC) in Hampton, VA on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES), funded by NASA's Instrument Incubator Program (IIP). The second campaign, in August 2014, was based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm in real time on board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  12. High performance computing and communications: FY 1997 implementation plan

    SciTech Connect

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  13. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.
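
    One early step such a compiler automates is choosing the array organization for a requested capacity; roughly square arrays keep word-line and bit-line lengths balanced. A toy sketch of that sizing step (the paper's actual algorithm is not given in this abstract, so this is illustrative only):

    ```python
    import math

    def plan_dram_array(capacity_bits, word_width):
        """Toy sizing step of a memory compiler: pick a near-square
        rows x columns organization for a power-of-two capacity, since
        squarish arrays balance word-line and bit-line lengths."""
        assert capacity_bits & (capacity_bits - 1) == 0, "power-of-two capacity"
        rows = 2 ** int(math.log2(math.sqrt(capacity_bits)))
        cols = capacity_bits // rows
        return {"rows": rows, "cols": cols, "words_per_row": cols // word_width}

    layout = plan_dram_array(64 * 1024, word_width=8)  # 64 Kbit array, 8-bit words
    ```

    A real compiler would go on to place sense amplifiers, row/column decoders, and refresh logic around the array; the point here is only the parameterized, rule-driven nature of the generation.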

  14. Visualization and Data Analysis for High-Performance Computing

    SciTech Connect

    Sewell, Christopher Meyer

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, and "big data", followed by an analysis example.

  15. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using Infiniband (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.
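
    The serial-I/O side of such an evaluation reduces to timing block writes and dividing bytes by seconds. A toy single-client probe of a mounted filesystem (real GPFS benchmarking would use parallel tools such as IOR across many clients):

    ```python
    import os
    import tempfile
    import time

    def write_throughput_mb_s(path, total_mb=8, block_kb=1024):
        """Time sequential block writes, fsync to reach the disk, report MB/s."""
        block = b"\x00" * (block_kb * 1024)
        n_blocks = (total_mb * 1024) // block_kb
        t0 = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(n_blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # don't let the page cache hide the real rate
        return total_mb / (time.perf_counter() - t0)

    with tempfile.TemporaryDirectory() as d:
        rate = write_throughput_mb_s(os.path.join(d, "probe.bin"))
    ```

    Pointing `path` at mounts reached over IB versus 10 GbE, and varying the block size, is the single-stream analogue of the comparison the paper performs.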

  16. Advances in Experiment Design for High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A general overview and summary of recent advances in experiment design for high performance aircraft is presented, along with results from flight tests. General theoretical background is included, with some discussion of various approaches to maneuver design. Flight test examples from the F-18 High Alpha Research Vehicle (HARV) are used to illustrate applications of the theory. Input forms are compared using Cramer-Rao bounds for the standard errors of estimated model parameters. Directions for future research in experiment design for high performance aircraft are identified.
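
    The Cramér-Rao bounds used to compare input forms are the best-achievable standard errors of the estimated parameters: invert the Fisher information and take the square root of its diagonal. A sketch for a linear-in-parameters model with white measurement noise (the regressor and noise level below are made up for illustration):

    ```python
    import numpy as np

    def cramer_rao_bounds(X, sigma):
        """Lower bounds on parameter standard errors for y = X @ theta + e,
        e ~ N(0, sigma^2 I): Fisher info F = X^T X / sigma^2,
        bounds = sqrt(diag(F^-1))."""
        fisher = X.T @ X / sigma**2
        return np.sqrt(np.diag(np.linalg.inv(fisher)))

    # Bias term plus a sinusoidal input over a 10 s maneuver, sampled at 10 Hz.
    t = np.linspace(0.0, 10.0, 101)
    X = np.column_stack([np.ones_like(t), np.sin(t)])
    bounds = cramer_rao_bounds(X, sigma=0.1)
    ```

    Comparing `bounds` across candidate maneuvers is how input forms are ranked: the design producing the smallest bounds extracts the most information per flight-test second.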

  17. Surveillance study of vector species on board passenger ships, Risk factors related to infestations

    PubMed Central

    Mouchtouri, Varvara A; Anagnostopoulou, Rimma; Samanidou-Voyadjoglou, Anna; Theodoridou, Kalliopi; Hatzoglou, Chrissi; Kremastinou, Jenny; Hadjichristodoulou, Christos

    2008-01-01

    Background Passenger ships provide conditions suitable for the survival and growth of pest populations. Arthropods and rodents can gain access directly from the ships' open spaces, can be carried in shiploads, or can be found on humans or animals as ectoparasites. Vectors on board ships may contaminate stored foods, transmit illness on board, or introduce diseases in new areas. Pest species, ship areas facilitating infestations, and different risk factors related to infestations were identified in 21 ferries. Methods 486 traps for insects and rodents were placed in 21 ferries. Archives of Public Health Authorities were reviewed to identify complaints regarding the presence of pest species on board ferries from 1994 to 2004. A detailed questionnaire was used to collect data on ship characteristics and pest control practices. Results Eighteen ferries were infested with flies (85.7%), 11 with cockroaches (52.3%), three with bedbugs, and one with fleas. Other species found on board were ants, spiders, butterflies, beetles, and a lizard. A total of 431 Blattella germanica specimens were captured in 28 (9.96%) traps, and 84.2% of them were nymphs. One ship was highly infested. Cockroach infestation was negatively associated with ferries in which a Hazard Analysis Critical Control Point system was applied to ensure food safety on board (Relative Risk, RR = 0.23, p = 0.03), and positively associated with ferries in which cockroaches were observed by crew (RR = 4.09, p = 0.007), no cockroach monitoring log was kept (RR = 5.00, p = 0.02), and pesticide sprays for domestic use were applied by crew (RR = 4.00, p = 0.05). Cockroach-infested ships were older (p = 0.03). Neither rats nor mice were found on any ship, but three ferries had been infested with a rodent in the past.
Conclusion Integrated pest control programs should include continuing monitoring for a variety of pest species in different ship locations; pest control measures should be more persistent in older
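
    The relative risks (RR) quoted in the Results come from comparing infestation rates between exposed and unexposed ferries. A sketch with illustrative counts (the abstract does not report the underlying 2x2 tables):

    ```python
    def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
        """Risk of the outcome among exposed ships divided by the risk
        among unexposed ships."""
        return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

    # Hypothetical counts: 8 of 10 ferries without a cockroach monitoring log
    # infested, versus 2 of 11 ferries that kept a log.
    rr = relative_risk(8, 10, 2, 11)  # -> 4.4
    ```

    An RR above 1 (such as the paper's RR = 5.00 for missing monitoring logs) marks a risk factor; an RR below 1 (such as RR = 0.23 for HACCP systems) marks a protective factor.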

  18. Serum protein determination by high-performance gel-permeation chromatography.

    PubMed

    Hayakawa, K; Masuko, M; Mineta, M; Yoshikawa, K; Yamauchi, K; Hirano, M; Katsumata, N; Tanaka, T

    1997-08-15

    A general high-performance gel-permeation chromatography (HPGPC) method was developed to determine protein in human serum with improved sensitivity and speed. The optimum UV wavelength for protein detection was found to be 210 nm, by comparing the protein values obtained by varying the UV wavelength of the HPLC detection system with the protein values obtained from spectrophotometric protein assays, i.e., the bicinchoninic acid (BCA) method and the biuret method. The analysis time was less than 1 min. Since this HPGPC serum protein assay method is simple and rapid, it is expected to be particularly well adapted for use in clinical laboratories.
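
    Turning a 210 nm peak area into a protein concentration is a standard external-calibration step: fit a line to standards, then read unknowns off it. A sketch with made-up standards (the paper itself validates against the BCA and biuret assays rather than these numbers):

    ```python
    import numpy as np

    # Hypothetical protein standards (g/L) and their 210 nm peak areas.
    conc = np.array([20.0, 40.0, 60.0, 80.0])
    area = np.array([105.0, 198.0, 306.0, 401.0])

    # Least-squares calibration line: area = slope * conc + intercept.
    slope, intercept = np.polyfit(conc, area, 1)

    def protein_from_area(peak_area):
        """Read an unknown serum's concentration off the calibration line."""
        return (peak_area - intercept) / slope

    estimate = protein_from_area(250.0)  # g/L for an unknown sample
    ```

    With an analysis time under a minute per injection, the calibration itself dominates setup cost, which is why the method suits a clinical-laboratory workflow.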

  19. Design of a new high-performance pointing controller for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1993-01-01

    A new form of high-performance, disturbance-adaptive pointing controller for the Hubble Space Telescope (HST) is proposed. This new controller is all linear (constant gains) and can maintain accurate 'pointing' of the HST in the face of persistent randomly triggered uncertain, unmeasurable 'flapping' motions of the large attached solar array panels. Similar disturbances associated with antennas and other flexible appendages can also be accommodated. The effectiveness and practicality of the proposed new controller is demonstrated by a detailed design and simulation testing of one such controller for a planar-motion, fully nonlinear model of HST. The simulation results show a high degree of disturbance isolation and pointing stability.
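
    The controller class referred to here (C. D. Johnson's disturbance-accommodating control) augments a constant-gain observer with a disturbance state and cancels the estimate in the feedback, all with fixed linear gains. A toy one-axis pointing sketch; the plant, gain values, and constant-disturbance model are all made up for illustration and are far simpler than the HST flexible-appendage case:

    ```python
    import numpy as np

    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])   # angle/rate double integrator
    B = np.array([0.0, dt])                 # torque input

    K = np.array([4.0, 4.0])                # constant state-feedback gains (hand-tuned)
    L = np.array([0.6, 9.0, 5.0])           # constant observer gains (hand-tuned)

    x = np.array([0.2, 0.0])                # true angle (rad) and rate
    w = 0.5                                 # unknown constant disturbance torque
    xhat = np.zeros(3)                      # estimates: angle, rate, disturbance

    for _ in range(5000):
        u = -K @ xhat[:2] - xhat[2]         # feedback plus disturbance cancellation
        y = x[0]                            # only the pointing angle is measured
        innovation = y - xhat[0]
        xhat = np.array([                   # observer with constant-disturbance model
            xhat[0] + dt * xhat[1],
            xhat[1] + dt * (u + xhat[2]),
            xhat[2],
        ]) + L * innovation
        x = A @ x + B * (u + w)             # true plant sees u plus the disturbance

    # The angle is regulated near zero and xhat[2] converges toward w,
    # even though w is never measured directly.
    ```

    The key property matching the abstract: the gains are constant, yet the loop "adapts" to the disturbance because the observer continuously re-estimates it from the pointing error alone.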

  20. 24 CFR 902.71 - Incentives for high performers.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 4 2013-04-01 2013-04-01 false Incentives for high performers. 902.71 Section 902.71 Housing and Urban Development REGULATIONS RELATING TO HOUSING AND URBAN DEVELOPMENT (CONTINUED) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF HOUSING AND...

  1. Seeking Solution: High-Performance Computing for Science. Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for…

  2. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  3. National Best Practices Manual for Building High Performance Schools

    ERIC Educational Resources Information Center

    US Department of Energy, 2007

    2007-01-01

    The U.S. Department of Energy's Rebuild America EnergySmart Schools program provides school boards, administrators, and design staff with guidance to help make informed decisions about energy and environmental issues important to school systems and communities. "The National Best Practices Manual for Building High Performance Schools" is a part of…

  4. High Performance Skiing. How to Become a Better Alpine Skier.

    ERIC Educational Resources Information Center

    Yacenda, John

    This book is intended for people who desire to improve their skiing by exploring high performance techniques leading to: (1) more consistent performance; (2) less fatigue and more endurance; (3) greater strength and flexibility; (4) greater versatility; (5) greater confidence in all skiing conditions; and (6) the knowledge to participate in…

  5. The role of interpreters in high performance computing

    SciTech Connect

    Naumann, Axel; Canal, Philippe (Fermilab)

    2008-01-01

    Compiled code is fast; interpreted code is slow. There is not much we can do about it, and it is the reason why interpreter use in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  6. Recruiting, Training, and Retaining High-Performance Development Teams

    ERIC Educational Resources Information Center

    Elder, Stephen D.

    2010-01-01

    This chapter offers thoughts on some key elements of a high-performing development environment. The author describes how good development officers love to be part of something big, something that transforms a place and its people, and that thinking big is a powerful concept for development officers. He reminds development officers to be clear…

  7. Multichannel Detection in High-Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    Miller, James C.; And Others

    1982-01-01

    A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real-time and acquires spectra manually/automatically. Applications in fast HPLC…

  8. High-Performance Liquid Chromatography-Mass Spectrometry.

    ERIC Educational Resources Information Center

    Vestal, Marvin L.

    1984-01-01

    Reviews techniques for online coupling of high-performance liquid chromatography with mass spectrometry, emphasizing those suitable for application to nonvolatile samples. Also summarizes the present status, strengths, and weaknesses of various techniques and discusses potential applications of recently developed techniques for combined liquid…

  9. Understanding the Work and Learning of High Performance Coaches

    ERIC Educational Resources Information Center

    Rynne, Steven B.; Mallett, Cliff J.

    2012-01-01

    Background: The development of high performance sports coaches has been proposed as a major imperative in the professionalization of sports coaching. Accordingly, an increasing body of research is beginning to address the question of how coaches learn. While this is important work, an understanding of how coaches learn must be underpinned by an…

  10. Two Profiles of the Dutch High Performing Employee

    ERIC Educational Resources Information Center

    de Waal, A. A.; Oudshoorn, Michella

    2015-01-01

    Purpose: The purpose of this study is to explore the profile of an ideal employee, to be more precise the behavioral characteristics of the Dutch high-performing employee (HPE). Organizational performance depends for a large part on the commitment of employees. Employees provide their knowledge, skills, experiences and creativity to the…

  11. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  12. The Case for High-Performance, Healthy Green Schools

    ERIC Educational Resources Information Center

    Carter, Leesa

    2011-01-01

    When trying to reach their sustainability goals, schools and school districts often run into obstacles, including financing, training, and implementation tools. Last fall, the U.S. Green Building Council-Georgia (USGBC-Georgia) launched its High Performance, Healthy Schools (HPHS) Program to help Georgia schools overcome those obstacles. By…

  13. Training Needs for High Performance in the Automotive Industry.

    ERIC Educational Resources Information Center

    Clyne, Barry; And Others

    A project was conducted in Australia to identify the training needs of the emerging industry required to support the development of the high performance areas of the automotive machining and reconditioning field especially as it pertained to auto racing. Data were gathered through a literature search, interviews with experts in the field, and…

  14. Quantification of Tea Flavonoids by High Performance Liquid Chromatography

    ERIC Educational Resources Information Center

    Freeman, Jessica D.; Niemeyer, Emily D.

    2008-01-01

    We have developed a laboratory experiment that uses high performance liquid chromatography (HPLC) to quantify flavonoid levels in a variety of commercial teas. Specifically, this experiment analyzes a group of flavonoids known as catechins, plant-derived polyphenolic compounds commonly found in many foods and beverages, including green and black…

  15. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  16. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  17. Mallow carotenoids determined by high-performance liquid chromatography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Mallow (corchorus olitorius) is a green vegetable, which is widely consumed either fresh or dry by Middle East population. This study was carried out to determine the contents of major carotenoids quantitatively in mallow, by using a High Performance Liquid Chromatography (HPLC) equipped with a Bis...

  18. Guide to School Design: Healthy + High Performance Schools

    ERIC Educational Resources Information Center

    Healthy Schools Network, Inc., 2007

    2007-01-01

    A "healthy and high performance school" uses a holistic design process to promote the health and comfort of children and school employees, as well as conserve resources. Children may spend over eight hours a day at school with little, if any, legal protection from environmental hazards. Schools are generally not well-maintained; asthma is a…

  19. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  20. Cobra Strikes! High-Performance Car Inspires Students, Markets Program

    ERIC Educational Resources Information Center

    Jenkins, Bonita

    2008-01-01

    Nestled in the Lower Piedmont region of upstate South Carolina, Piedmont Technical College (PTC) is one of 16 technical colleges in the state. Automotive technology is one of its most popular programs. The program features an instructive, motivating activity that the author describes in this article: building a high-performance car. The Cobra…

  1. Teacher and Leader Effectiveness in High-Performing Education Systems

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda, Ed.; Rothman, Robert, Ed.

    2011-01-01

    The issue of teacher effectiveness has risen rapidly to the top of the education policy agenda, and the federal government and states are considering bold steps to improve teacher and leader effectiveness. One place to look for ideas is the experiences of high-performing education systems around the world. Finland, Ontario, and Singapore all have…

  2. Replicating High-Performing Public Schools: Lessons from the Field

    ERIC Educational Resources Information Center

    Wicoff, Kimberly; Howard, Don; Huggett, Jon

    2006-01-01

    The fact that far too many students leave high school unprepared to become contributing members of society is hardly news. What is news is the growing number of schools that are proving public education can work for every student. Unlike a decade ago, when it was hard to find more than a handful of high-performing public schools, today many such…

  3. Maintaining High-Performance Schools after Construction or Renovation

    ERIC Educational Resources Information Center

    Luepke, Gary; Ronsivalli, Louis J., Jr.

    2009-01-01

    With taxpayers' considerable investment in schools, it is critical for school districts to preserve their community's assets with new construction or renovation and effective facility maintenance programs. "High-performance" school buildings are designed to link the physical environment to positive student achievement while providing such benefits…

  4. Cray XMT Brings New Energy to High-Performance Computing

    SciTech Connect

    Chavarría-Miranda, Daniel; Gracio, Deborah K.; Marquez, Andres; Nieplocha, Jaroslaw; Scherrer, Chad; Sofia, Heidi J.

    2008-09-30

    The ability to solve our nation’s most challenging problems—whether it’s cleaning up the environment, finding alternative forms of energy or improving public health and safety—requires new scientific discoveries. High performance experimental and computational technologies from the past decade are helping to accelerate these scientific discoveries, but they introduce challenges of their own. The vastly increasing volumes and complexities of experimental and computational data pose significant challenges to traditional high-performance computing (HPC) platforms as terabytes to petabytes of data must be processed and analyzed. And the growing complexity of computer models that incorporate dynamic multiscale and multiphysics phenomena place enormous demands on high-performance computer architectures. Just as these new challenges are arising, the computer architecture world is experiencing a renaissance of innovation. The continuing march of Moore’s law has provided the opportunity to put more functionality on a chip, enabling the achievement of performance in new ways. Power limitations, however, will severely limit future growth in clock rates. The challenge will be to obtain greater utilization via some form of on-chip parallelism, but the complexities of emerging applications will require significant innovation in high-performance architectures. The Cray XMT, the successor to the Tera/Cray MTA, provides an alternative platform for addressing computations that stymie current HPC systems, holding the potential to substantially accelerate data analysis and predictive analytics for many complex challenges in energy, national security and fundamental science that traditional computing cannot do.

  5. A Research and Development Strategy for High Performance Computing.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report is the result of a systematic review of the status and directions of high performance computing and its relationship to federal research and development. Conducted by the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), the review involved a series of workshops attended by numerous computer scientists and…

  6. TMD-Based Structural Control of High Performance Steel Bridges

    NASA Astrophysics Data System (ADS)

    Kim, Tae Min; Kim, Gun; Kyum Kim, Moon

    2012-08-01

    The purpose of this study is to investigate the effectiveness of structural control using a tuned mass damper (TMD) for suppressing excessive traffic-induced vibration of a high performance steel bridge. The study considered a 1-span steel plate girder bridge and bridge-vehicle interaction using the HS-24 truck model. A numerical model of the steel plate girder, traffic load, and TMD is constructed and time history analysis is performed using the commercial structural analysis program ABAQUS 6.10. Results from the analyses show that the high performance steel bridge has a dynamic serviceability problem compared to a relatively low performance steel bridge. Therefore, structural control using a TMD is implemented in order to alleviate the dynamic serviceability problem. The TMD is applied to the bridge with high performance steel and the vertical vibration due to dynamic behavior is assessed again. Consequently, it is confirmed that with the TMD the residual amplitude in steady-state vibration is reduced by 85%. Moreover, the vibration serviceability assessment using the Reiher-Meister curve is also remarkably improved. As a result, this paper provides a guideline for the economical design of I-girders using high performance steel and evaluates the effectiveness of structural control using a TMD.
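
    For a lightly damped primary mode, the classical Den Hartog formulas give near-optimal TMD parameters from the mass ratio alone. A sketch of that closed-form tuning (the paper sizes its damper via finite-element analysis in ABAQUS; the 2 Hz / 100 t modal properties below are illustrative):

    ```python
    import math

    def den_hartog_tmd(modal_mass_kg, modal_freq_hz, mass_ratio=0.02):
        """Den Hartog tuning for mass ratio mu: frequency ratio
        f = 1/(1+mu), damping ratio zeta = sqrt(3*mu / (8*(1+mu)**3))."""
        mu = mass_ratio
        tmd_mass = mu * modal_mass_kg
        tmd_freq_hz = modal_freq_hz / (1.0 + mu)
        zeta = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
        omega = 2.0 * math.pi * tmd_freq_hz
        return {
            "mass_kg": tmd_mass,
            "freq_hz": tmd_freq_hz,
            "stiffness_N_per_m": tmd_mass * omega**2,
            "damping_Ns_per_m": 2.0 * zeta * tmd_mass * omega,
        }

    # A 2 Hz girder mode with 100 t modal mass and a 2% mass ratio.
    params = den_hartog_tmd(modal_mass_kg=100_000.0, modal_freq_hz=2.0)
    ```

    The damper is deliberately tuned slightly below the girder frequency; that detuning, together with its own damping, is what lets a small added mass absorb the resonant response.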

  7. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Diagnostic and remedial methods concerning rotordynamic instability problems in high performance turbomachinery are discussed. Instabilities due to seal forces and work-fluid forces are identified along with those induced by rotor bearing systems. Several methods of rotordynamic control are described including active feedback methods, the use of elastometric elements, and the use of hydrodynamic journal bearings and supports.

  8. Dynamic Social Networks in High Performance Football Coaching

    ERIC Educational Resources Information Center

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  9. Mechanisms to create high performance pseudo-ductile composites

    NASA Astrophysics Data System (ADS)

    Wisnom, M. R.

    2016-07-01

    Current composites normally fail suddenly and catastrophically, which is an undesirable characteristic for many applications. This paper describes work as part of the High Performance Ductile Composite Technology programme (HiPerDuCT) on mechanisms to overcome this key limitation and introduce pseudo-ductility into the failure process.

  10. Changes in Plastid and Mitochondria Protein Expression in Arabidopsis Thaliana Callus on Board Chinese Spacecraft SZ-8

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Zheng, Hui Qiong

    2015-11-01

    Microgravity represents an adverse abiotic environment, which causes rearrangements in cellular organelles and changes in the energy metabolism of cells. Plastids and mitochondria are two subcellular energy organelles that are responsible for major metabolic processes, including photosynthesis, oxidative phosphorylation, ß-oxidation, and the tricarboxylic acid cycle. In our previous study performed on board the Chinese spacecraft SZ-8, we evaluated the global changes exerted by microgravity on the proteome of Arabidopsis thaliana cell cultures by comparing the microgravity-exposed samples with the controls either under 1 g centrifugation in space or 1 g ground conditions. Here, we report additional data from this space experiment that highlights the plastid and mitochondria proteins that responded to space flight conditions. We observed that 43 plastidial proteins and 50 mitochondrial proteins changed their abundances under microgravity in space. The major changes in both plastids and mitochondria involved proteins that function in a suite of redox antioxidant and metabolic pathways. These results suggested that these antioxidant and metabolic changes in plastids and mitochondria could be important components of the adaptive strategy in plants subjected to microgravity in space.

  11. A ground-based memory state tracker for satellite on-board computer memory

    NASA Technical Reports Server (NTRS)

    Quan, Alan; Angelino, Robert; Hill, Michael; Schwuttke, Ursula; Hervias, Felipe

    1993-01-01

    The TOPEX/POSEIDON satellite, currently in Earth orbit, will use radar altimetry to measure sea surface height over 90 percent of the world's ice-free oceans. In combination with a precise determination of the spacecraft orbit, the altimetry data will provide maps of ocean topography, which will be used to calculate the speed and direction of ocean currents worldwide. NASA's Jet Propulsion Laboratory (JPL) has primary responsibility for mission operations for TOPEX/POSEIDON. Software applications have been developed to automate mission operations tasks. This paper describes one of these applications, the Memory State Tracker, which allows the ground analyst to examine and track the contents of satellite on-board computer memory quickly and efficiently, in a human-readable format, without having to receive the data directly from the spacecraft. This process is accomplished by maintaining a ground-based mirror image of spacecraft On-board Computer memory.
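
    The mirroring idea can be sketched simply: apply every uplinked memory load to a ground copy, so the expected on-board state is always known, and diff occasional downlinked dumps against it. Structure and names below are illustrative, not the actual JPL tool:

    ```python
    class MemoryStateTracker:
        """Ground-based mirror of on-board computer memory (toy version)."""

        def __init__(self, size_words):
            self.mirror = [0] * size_words

        def apply_load(self, address, words):
            """Track a commanded memory load as it is uplinked."""
            for offset, word in enumerate(words):
                self.mirror[address + offset] = word

        def diff_against_dump(self, address, dump_words):
            """Compare a downlinked dump with the mirror; return addresses
            that disagree (possible upsets or missed commands)."""
            return [address + i
                    for i, word in enumerate(dump_words)
                    if self.mirror[address + i] != word]

    tracker = MemoryStateTracker(size_words=256)
    tracker.apply_load(0x10, [0xA5, 0x5A, 0xFF])
    mismatches = tracker.diff_against_dump(0x10, [0xA5, 0x00, 0xFF])  # -> [0x11]
    ```

    Because the mirror is maintained purely from command history, analysts can inspect expected memory contents at any time without spending downlink bandwidth, which is the efficiency the abstract highlights.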

  12. An on-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1982-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Feedback coefficients are found as functions of the remaining energy to go (dash energy less current energy) along the nominal path.
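
    The energy-modelling scheme described above can be sketched as follows. The energy-height formula is the standard energy-state variable, but the gain schedule and all numeric values here are illustrative stand-ins for the ground-computed feedback coefficients, not the paper's implementation:

```python
# Hedged sketch of the energy-state feedback idea (illustrative numbers only).

G = 9.81  # gravitational acceleration, m/s^2

def specific_energy(h, v):
    """Energy height E = h + v^2 / (2g), the usual energy-state variable."""
    return h + v * v / (2.0 * G)

def energy_to_go(e_dash, h, v):
    """Remaining energy to go: dash energy less current energy."""
    return e_dash - specific_energy(h, v)

def feedback_gain(e_togo, table):
    """Piecewise-linear interpolation of a precomputed gain-vs-energy-to-go
    table, standing in for the ground-computed feedback coefficients."""
    xs, ys = zip(*table)
    if e_togo <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if e_togo <= x1:
            t = (e_togo - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return ys[-1]

# Hypothetical gain schedule computed on the ground prior to the mission.
GAIN_TABLE = [(0.0, 0.02), (2000.0, 0.05), (8000.0, 0.12)]

e_togo = energy_to_go(e_dash=12000.0, h=5000.0, v=300.0)
k = feedback_gain(e_togo, GAIN_TABLE)
```

    On board, only the table lookup runs in real time, which is what makes the scheme "extremely simple" after the extensive ground precomputation.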

  13. Comparison of cosmic rays radiation detectors on-board commercial jet aircraft.

    PubMed

    Kubančák, Ján; Ambrožová, Iva; Brabcová, Kateřina Pachnerová; Jakůbek, Jan; Kyselová, Dagmar; Ploc, Ondřej; Bemš, Július; Štěpán, Václav; Uchihori, Yukio

    2015-06-01

    Aircrew members and passengers are exposed to increased rates of cosmic radiation on-board commercial jet aircraft. The annual effective doses of crew members often exceed the limits for the public, thus it is recommended to monitor them. In general, the doses are estimated via various computer codes and in some countries also verified by measurements. This paper describes a comparison of three cosmic-ray detectors, namely the (a) HAWK Tissue Equivalent Proportional Counter; (b) Liulin semiconductor energy deposit spectrometer and (c) TIMEPIX silicon semiconductor pixel detector, exposed to radiation fields on-board commercial Czech Airlines jet aircraft. Measurements were performed during passenger flights from Prague to Madrid, Oslo, Tbilisi, Yekaterinburg and Almaty, and back, in July and August 2011. For all flights, energy deposit spectra and absorbed doses are presented. Measured absorbed dose and dose equivalent are compared with the EPCARD code calculations. Finally, the advantages and disadvantages of all detectors are discussed.

  14. An Alternative Lunar Ephemeris Model for On-Board Flight Software Use

    NASA Technical Reports Server (NTRS)

    Simpson, David G.

    1998-01-01

    In calculating the position vector of the Moon in on-board flight software, one often begins by using a series expansion to calculate the ecliptic latitude and longitude of the Moon, referred to the mean ecliptic and equinox of date. One then performs a reduction for precession, followed by a rotation of the position vector from the ecliptic plane to the equator, and a transformation from spherical to Cartesian coordinates before finally arriving at the desired result: equatorial J2000 Cartesian components of the lunar position vector. An alternative method is developed here in which the equatorial J2000 Cartesian components of the lunar position vector are calculated directly by a series expansion, saving valuable on-board computer resources.
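
    The shape of the direct computation can be illustrated with a toy series evaluation. The trigonometric terms and coefficients below are placeholders, not the fitted series developed in the paper; the point is only that the equatorial J2000 Cartesian components come straight out of one series evaluation, with no precession reduction or frame rotation on board:

```python
import math

# Each Cartesian component is sum_i A_i * sin(w_i * t + phi_i); one
# (A, w, phi) list per axis. All values are hypothetical (km, rad/day).
SERIES = {
    "x": [(383000.0, 0.2299708, 0.0), (21000.0, 0.4599416, 1.3)],
    "y": [(351000.0, 0.2299708, -1.5708), (19000.0, 0.4599416, -0.2)],
    "z": [(151000.0, 0.2299708, -1.5708), (8000.0, 0.4599416, 0.9)],
}

def moon_position_j2000(t_days):
    """Return (x, y, z) in km by direct series evaluation at time t (days
    past some epoch), skipping the classical multi-step pipeline."""
    return tuple(
        sum(a * math.sin(w * t_days + phi) for a, w, phi in SERIES[axis])
        for axis in ("x", "y", "z")
    )

x, y, z = moon_position_j2000(10.0)
```

    Evaluating one series per component trades a slightly larger coefficient table for far fewer on-board floating-point operations, which is the resource saving the abstract points to.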

  15. Integrated extension board for on-board computer (OBDH) of SSETI ESEO satellite

    NASA Astrophysics Data System (ADS)

    Cichocki, Andrzej; Graczyk, Rafal

    2008-01-01

    This paper presents information about an extension module for the Single Board Computer (MIP405), which is the heart of the On-board Data Handling Module (OBDH) of the student-built Earth microsatellite SSETI ESEO. The OBDH is a PC104 stack of four boards, electrically connected and mechanically fixed. The On-Board Computer is a key subsystem for mission success: it is responsible for distributing control signals to each module of the spacecraft. It is also expected to gather data critical to appropriate mission progress, to implement part of the algorithms used for satellite stabilization and orbit control, and, finally, to process telecommands. Since the whole system must meet spaceborne application requirements, it has to be exceptionally reliable.

  16. Scientific goals achievable with radiation monitor measurements on board gravitational wave interferometers in space

    NASA Astrophysics Data System (ADS)

    Grimani, C.; Boatella, C.; Chmeissani, M.; Fabi, M.; Finetti, N.; Lobo, A.; Mateos, I.

    2012-06-01

    Cosmic rays and energetic solar particles constitute one of the most important sources of noise for future gravitational wave detectors in space. Radiation monitors were designed for the LISA Pathfinder (LISA-PF) mission. Similar devices were proposed to be placed on board LISA and ASTROD. These detectors are needed to monitor the flux of energetic particles penetrating mission spacecraft and inertial sensors. However, in addition to this primary use, radiation monitors on board space interferometers will carry out the first multipoint observation of solar energetic particles (SEPs) at small and large heliolongitude intervals and at very different distances from Earth with minor normalization errors. We illustrate the scientific goals that can be achieved in solar physics and space weather studies with these detectors. A comparison with present and future missions devoted to solar physics is presented.

  17. Power Converter Module of the PHI Experiment on Board of Solar Orbiter

    NASA Astrophysics Data System (ADS)

    Sanchis-Kilders, E.; Ferreres, A.; Gasent-Blesa, J. L.; Osorno, D.; Gilabert, D.; Maset, E.; Ejea, J. B.; Esteve, V.; Jordan, J.; Garrigos, A.; Blanes, J. M.

    2014-08-01

    Power converters for experiments that fly on board space missions (satellites, launchers, etc.) have very stringent requirements due to their use in a very harsh environment. The selection of a suitable topology is therefore not based on standard requirements alone; stricter, mission-specific ones must also be fulfilled. This work shows the design procedure followed to build the Power Converter Module (PCM) for the Polarimetric and Helioseismic Imager (SO/PHI), an experiment on board the Solar Orbiter satellite. The selected topology is a push-pull converter, for a power level of approximately 35 W and with seven output voltages. Galvanic isolation is needed from primary to secondary, but not between the secondaries. Coupling the inductors of all outputs reduced cross-regulation. The main design problems solved were reducing the parasitic capacitance of the magnetic elements and designing the closed loop under peak-current control with all output inductors coupled together.

  18. On-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1983-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state.

  19. On-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1983-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three-dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Previously announced in STAR as N84-16116

  20. Monitoring snowpack properties by passive microwave sensors on board of aircraft and satellites

    NASA Technical Reports Server (NTRS)

    Chang, A. T. C.; Foster, J. L.; Hall, D. K.; Rango, A.

    1980-01-01

    Snowpack properties such as water equivalent and snow wetness may be inferred from variations in measured microwave brightness temperatures. This is because the emerged microwave radiation interacts directly with snow crystals within the snowpack. Using vertically and horizontally polarized brightness temperatures obtained from the multifrequency microwave radiometer (MFMR) on board a NASA research aircraft and the electrical scanning microwave radiometer (ESMR) and scanning multichannel microwave radiometer (SMMR) on board the Nimbus 5, 6, and 7 satellites, linear relationships between snow depth or water equivalent and microwave brightness temperature were developed. The presence of melt water in the snowpack generally increases the brightness temperatures, which can be used to predict snowpack priming and timing of runoff.
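
    The kind of linear relationship described above can be sketched with an ordinary least-squares fit. The brightness-temperature differences and water-equivalent values below are synthetic, not the MFMR/ESMR/SMMR data, and the channel pairing (18 GHz minus 37 GHz) is only one commonly used choice:

```python
# Synthetic example: snow water equivalent regressed on a
# brightness-temperature difference. All numbers are invented.
dT  = [5.0, 10.0, 15.0, 20.0, 25.0]    # brightness-temp difference, K
swe = [8.0, 16.5, 23.0, 31.5, 39.0]    # snow water equivalent, mm

n = len(dT)
mx = sum(dT) / n
my = sum(swe) / n
# ordinary least-squares slope and intercept
slope = sum((x - mx) * (y - my) for x, y in zip(dT, swe)) / sum(
    (x - mx) ** 2 for x in dT
)
intercept = my - slope * mx

def swe_from_dT(d):
    """Predict snow water equivalent (mm) from the brightness-temp diff (K)."""
    return slope * d + intercept
```

    A fit like this is only valid for dry snow; as the abstract notes, melt water raises the brightness temperatures and breaks the linear relation, which is precisely what makes the signal useful for timing runoff.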

  1. Investigations of doses on board commercial passenger aircraft using CR-39 and thermoluminescent detectors.

    PubMed

    Horwacik, T; Bilski, P; Olko, P; Spurny, F; Turek, K

    2004-01-01

    Measurements of cosmic radiation dose rates (from the neutron and the non-neutron components) on board passenger aircraft were performed using environmental packages with thermoluminescent TL and CR-39 etched track detectors. The packages were calibrated at the CERN-EU high-energy Reference Field Facility and evaluated at the Institute of Nuclear Physics in Krakow (TL + CR-39) and at the German Aerospace Centre in Cologne (CR-39). Detector packages were exposed on board passenger aircraft operated by LOT Polish Airlines, flown between February and May 2001. The values of effective dose rate determined, averaged over the measuring period, ranged between 2.9 and 4.4 microSv h(-1). The results of environmental measurements agreed to within 10% with values calculated from the CARI-6 code.

  2. STS-33 MS Carter and MS Thornton display 'Maggot on Board' sign and candy

    NASA Technical Reports Server (NTRS)

    1989-01-01

    STS-33 Mission Specialist (MS) Manley L. Carter, Jr (left) and MS Kathryn C. Thornton display 'Maggot on Board' sign and 'SMARTIES' candy stored in plastic bag on the aft flight deck of Discovery, Orbiter Vehicle (OV) 103. The mission specialists are wearing their mission polo shirts and communications kit assembly headsets. An overhead window appears above their heads. A gold necklace chain floats around Carter's neck.

  3. Lessons learned with MS-DRGs: getting physicians on board for success.

    PubMed

    Didier, Donna; Pace, MaryAnne; Walker, William V

    2008-08-01

    To get physicians on board with a clinical documentation program: Explain what the government is trying to accomplish with changes to the prospective payment system; Connect codes and quality report cards; Enlist a physician champion; Conduct an MS-DRG financial impact study and share the results with physicians; Establish a clinical documentation improvement/integrity team; Provide solid rationale and data to back up requests.

  4. Spacecraft drag-free technology development: On-board estimation and control synthesis

    NASA Technical Reports Server (NTRS)

    Key, R. W.; Mettler, E.; Milman, M. H.; Schaechter, D. B.

    1982-01-01

    Estimation and control methods for a drag-free spacecraft are discussed. The functional and analytical synthesis of on-board estimators and controllers for an integrated attitude and translation control system is presented. The framework for the detailed definition and design of the baseline drag-free system is created. The techniques for solving the self-gravity and electrostatic charging problems are generally applicable, as is the control system development.

  5. On-board processor for direct distribution of change detection data products

    NASA Technical Reports Server (NTRS)

    Lou, Yunling; Hensley, Scott; Le, Charles; Moller, Delwyn

    2004-01-01

    We are developing an on-board imaging radar data processor for repeat-pass change detection and hazards management. This is the enabling technology for NASA ESE to utilize imaging radars. This processor will enable the observation and use of surface deformation data over rapidly evolving natural hazards, both as an aid to scientific understanding and to provide timely data to agencies responsible for the management and mitigation of natural disasters.

  6. High-Speed On-Board Data Processing for Science Instruments

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace

    2014-01-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented in this paper. The project, called High-Speed On-Board Data Processing for Science Instruments (HOPS), focuses on a high-speed scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS utilizes advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is to enable high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on data from when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them in an FPGA-based data processing board. A general introduction to these facets is also the purpose of this paper.

  7. High-Speed on-Board Data Processing for Science Instruments

    NASA Astrophysics Data System (ADS)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Lin, B.

    2014-12-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented. The project, called High-Speed On-Board Data Processing for Science Instruments (HOPS), focuses on an air/space-borne high-speed scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS utilizes advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is to enable high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on data from when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them in an FPGA-based data processing board. A general introduction to these facets is also the purpose of this presentation.

  8. High-speed on-board data processing for science instruments

    NASA Astrophysics Data System (ADS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace

    2014-06-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of this work is presented in this paper. The project, called High-Speed On-Board Data Processing for Science Instruments (HOPS), focuses on a high-speed scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS utilizes advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is to enable high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on data from when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them in an FPGA-based data processing board. A general introduction to these facets is also the purpose of this paper.

  9. Fabrication and Qualification of Coated Chip-on-Board Technology for Miniaturized Space Systems

    NASA Technical Reports Server (NTRS)

    Maurer, R. H.; Le, B. Q.; Nhan, E.; Lew, A. L.; Darrin, M. Ann Garrison

    1997-01-01

    The results of a study carried out to manufacture and verify the quality of chip-on-board (COB) packaging technology are presented. The COB, designed for space applications, was tested under environmental stresses: temperature cycling and temperature-humidity-bias. The goals were both robustness in space applications and environmental protection on the ground, i.e. complete reliability without hermeticity. The epoxy-parylene combinations proved to be superior to the other materials tested.

  10. Increasing the object recognition distance of compact open air on board vision system

    NASA Astrophysics Data System (ADS)

    Kirillov, Sergey; Kostkin, Ivan; Strotov, Valery; Dmitriev, Vladimir; Berdnikov, Vadim; Akopov, Eduard; Elyutin, Aleksey

    2016-10-01

    The aim of this work was to develop an algorithm that eliminates atmospheric distortion and improves image quality. The proposed approach is entirely software-based, requiring no additional photographic hardware and no preliminary calibration. It works equally effectively with images obtained at distances from 1 to 500 meters. An algorithm for improving open-air images, designed for Raspberry Pi model B on-board vision systems, is proposed. The results of an experimental examination are given.

  11. An Intelligent System for Monitoring the Microgravity Environment Quality On-Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Lin, Paul P.; Jules, Kenol

    2002-01-01

    An intelligent system for monitoring the microgravity environment quality on-board the International Space Station is presented. The monitoring system uses a new approach combining Kohonen's self-organizing feature map, learning vector quantization, and back propagation neural network to recognize and classify the known and unknown patterns. Finally, fuzzy logic is used to assess the level of confidence associated with each vibrating source activation detected by the system.
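
    As an illustration of the classification-plus-confidence idea (not the actual ISS system), a minimal sketch might pair nearest-prototype matching, standing in for the SOM/LVQ stages, with a distance-based fuzzy membership grading confidence. The vibration-source prototypes and the membership scale below are invented:

```python
import math

# Hypothetical vibration-source prototypes: (rms level, dominant freq in Hz).
PROTOTYPES = {
    "pump":      (0.8, 60.0),
    "treadmill": (2.5, 2.0),
    "thruster":  (5.0, 15.0),
}

def classify(sample):
    """Return (label, distance) of the nearest prototype to the sample."""
    label, proto = min(
        PROTOTYPES.items(),
        key=lambda kv: math.dist(sample, kv[1]),
    )
    return label, math.dist(sample, proto)

def confidence(distance, scale=10.0):
    """Fuzzy confidence in [0, 1]: 1 at distance 0, decaying with distance."""
    return max(0.0, 1.0 - distance / scale)

label, d = classify((0.9, 58.0))
conf = confidence(d)
```

    Here a near-miss to a known prototype still classifies, but with a graded confidence, which is the role fuzzy logic plays in the system described above; unknown patterns would land far from every prototype and receive low confidence.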

  12. WDM package enabling high-bandwidth optical intrasystem interconnects for high-performance computer systems

    NASA Astrophysics Data System (ADS)

    Schrage, J.; Soenmez, Y.; Happel, T.; Gubler, U.; Lukowicz, P.; Mrozynski, G.

    2006-02-01

    From long-haul, metro-access, and intersystem links, the trend is toward applying optical interconnection technology over increasingly shorter distances. Intrasystem interconnects, such as data busses between microprocessors and memory blocks, are still based on copper today. This causes a bottleneck in computer systems, since the achievable bandwidth of electrical interconnects is limited by the underlying physical properties. Approaches to solve this problem by embedding optical multimode polymer waveguides into the board (electro-optical circuit board technology, EOCB) have been reported earlier. The feasibility in principle of optical interconnection technology in chip-to-chip applications has been validated in a number of projects. For cost reasons, waveguides with large cross sections are used in order to relax alignment requirements and to allow automatic placement and assembly without any active alignment of components. On the other hand, the bandwidth of these highly multimodal waveguides is restricted by mode dispersion. The advance of WDM technology toward intrasystem applications will provide the high bandwidth required for future high-performance computer systems: assuming, for example, 8 wavelength channels with 12 Gbps (single data rate, SDR) each, optical on-board interconnects can be realized with data rates an order of magnitude higher than those of electrical interconnects over the distances typically found on today's computer boards and backplanes. The data rate doubles if double-data-rate (DDR) signaling is applied to the optical signals as well. In this paper we discuss an approach for a hybrid integrated optoelectronic WDM package which may enable the application of WDM technology to EOCB.
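
    The aggregate-bandwidth claim is easy to check with back-of-envelope arithmetic, assuming 8 channels at 12 Gbps each and a doubling for DDR signaling as the abstract suggests:

```python
# Aggregate bandwidth of the WDM link sketched in the abstract.
channels = 8
rate_per_channel_gbps = 12

sdr_aggregate = channels * rate_per_channel_gbps   # 8 x 12 = 96 Gbps
ddr_aggregate = 2 * sdr_aggregate                  # both signal edges: 192 Gbps
```

    96 Gbps over a single on-board waveguide bundle is indeed roughly an order of magnitude beyond typical per-trace electrical backplane rates of the period, consistent with the abstract's claim.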

  13. Noise and sleep on board vessels in the Royal Norwegian Navy

    PubMed Central

    Sunde, Erlend; Bråtveit, Magne; Pallesen, Ståle; Moen, Bente Elisabeth

    2016-01-01

    Previous research indicates that exposure to noise during sleep can cause sleep disturbance. Seamen on board vessels are frequently exposed to noise also during sleep periods, and studies have reported sleep disturbance in this occupational group. However, studies of noise and sleep in maritime settings are few. This study's aim was to examine the associations between noise exposure during sleep, and sleep variables derived from actigraphy among seamen on board vessels in the Royal Norwegian Navy (RNoN). Data were collected on board 21 RNoN vessels, where navy seamen participated by wearing an actiwatch (actigraph), and by completing a questionnaire comprising information on gender, age, coffee drinking, nicotine use, use of medication, and workload. Noise dose meters were used to assess noise exposure inside the seamen's cabin during sleep. Eighty-three sleep periods from 68 seamen were included in the statistical analysis. Linear mixed-effects models were used to examine the association between noise exposure and the sleep variables percentage mobility during sleep and sleep efficiency, respectively. Noise exposure variables, coffee drinking status, nicotine use status, and sleeping hours explained 24.9% of the total variance in percentage mobility during sleep, and noise exposure variables explained 12.0% of the total variance in sleep efficiency. Equivalent noise level and number of noise events per hour were both associated with increased percentage mobility during sleep, and the number of noise events was associated with decreased sleep efficiency. PMID:26960785
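
    The "percent of variance explained" figures can be illustrated with a toy R² computation. The data below are synthetic, and a plain least-squares line stands in for the linear mixed-effects models actually used in the study:

```python
# Synthetic noise-vs-mobility data (invented, for illustration only).
leq      = [55.0, 60.0, 62.0, 68.0, 70.0, 75.0]   # equivalent noise level, dB
mobility = [4.0, 5.5, 5.0, 7.5, 8.0, 9.5]         # % mobility during sleep

n = len(leq)
mx = sum(leq) / n
my = sum(mobility) / n
sxx = sum((x - mx) ** 2 for x in leq)
sxy = sum((x - mx) * (y - my) for x, y in zip(leq, mobility))
slope = sxy / sxx
intercept = my - slope * mx

# R^2: fraction of total variance explained by the fitted line.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(leq, mobility))
ss_tot = sum((y - my) ** 2 for y in mobility)
r2 = 1.0 - ss_res / ss_tot
```

    A mixed-effects model additionally accounts for repeated sleep periods from the same seaman via a per-subject random effect, which is why the study reports model-based variance fractions rather than a simple R² like this one.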

  14. [Flight and altitude medicine for anesthetists-part 3: emergencies on board commercial aircraft].

    PubMed

    Graf, Jürgen; Stüben, Uwe; Pump, Stefan

    2013-04-01

    The demographic trend of industrialized societies is also reflected in commercial airlines' passengers: passengers are older nowadays, and long-haul flights are a routine mode of transport despite considerable chronic and acute medical conditions. Moreover, the duration of non-stop flight routes and the number of passengers on board are increasing. Thus, the probability of a medical incident during a particular flight increases, too. International regulations set minimum standards for on-board medical equipment and for the first-aid training of crews. However, it is often difficult to assess whether a stopover at a nearby airport can improve the medical care of a critically ill passenger; besides flight operations and technical aspects, the medical infrastructure on the ground has to be considered carefully. Regardless of a physician's experience, medical emergencies on board an aircraft usually represent a particular challenge. This is mainly due to the unfamiliar surroundings, the characteristics of the cabin atmosphere, the often-present cultural and language barriers, and legal liability concerns.

  15. Noise and sleep on board vessels in the Royal Norwegian Navy.

    PubMed

    Sunde, Erlend; Bråtveit, Magne; Pallesen, Ståle; Moen, Bente Elisabeth

    2016-01-01

    Previous research indicates that exposure to noise during sleep can cause sleep disturbance. Seamen on board vessels are frequently exposed to noise also during sleep periods, and studies have reported sleep disturbance in this occupational group. However, studies of noise and sleep in maritime settings are few. This study's aim was to examine the associations between noise exposure during sleep, and sleep variables derived from actigraphy among seamen on board vessels in the Royal Norwegian Navy (RNoN). Data were collected on board 21 RNoN vessels, where navy seamen participated by wearing an actiwatch (actigraph), and by completing a questionnaire comprising information on gender, age, coffee drinking, nicotine use, use of medication, and workload. Noise dose meters were used to assess noise exposure inside the seamen's cabin during sleep. Eighty-three sleep periods from 68 seamen were included in the statistical analysis. Linear mixed-effects models were used to examine the association between noise exposure and the sleep variables percentage mobility during sleep and sleep efficiency, respectively. Noise exposure variables, coffee drinking status, nicotine use status, and sleeping hours explained 24.9% of the total variance in percentage mobility during sleep, and noise exposure variables explained 12.0% of the total variance in sleep efficiency. Equivalent noise level and number of noise events per hour were both associated with increased percentage mobility during sleep, and the number of noise events was associated with decreased sleep efficiency.

  16. Component Based Engineering and Multi-Platform Deployment for Nanosatellite On-Board Software

    NASA Astrophysics Data System (ADS)

    Polo, Oscar R.; Parra, Pablo; Knobluch, Martin; Garcia, Ignacio; Fernandez, Javier; Sanchez, Sebastian; Angulo, Manuel

    2012-08-01

    Nanosatellite on-board software development risks can be mitigated by means of component-based software engineering techniques. Component-based modelling eases the reuse of design patterns and the incremental development process, and its adoption can significantly reduce delivery time and error rate. This technique is optimally combined with automatic code generation in order to ensure coherency between the model and the implemented system. This paper introduces the component-based modelling and automatic code generation of the Nanosat-1B on-board software. Nanosat-1B is a scientific nanosatellite developed by the Spanish National Institute of Aerospace Technology (INTA) that was launched in July 2009. The paper describes the UML2 diagrams used for specifying the system components, their interfaces and behaviour, emphasizing their reuse possibilities in the same domain and how they facilitate software maintenance after the satellite's launch. It also introduces the main characteristics of the EDROOM tool used for Nanosat-1B component-based software modelling, by means of UML2 diagrams, and embedded C++ code generation. Finally, the paper describes how the on-board software is integrated in a framework, called MICOBS, that empowers the multi-platform approach required for the system prototype's evolution and validation over different targets.

  17. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) is the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI system, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements in US images using HI showed improved consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large, as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.

  18. On-board removal of CO and other impurities in hydrogen for PEM fuel cell applications

    NASA Astrophysics Data System (ADS)

    Huang, Cunping; Jiang, Ruichun; Elbaccouch, Mohamed; Muradov, Nazim; Fenton, James M.

    Carbon monoxide (CO) in the hydrogen (H2) stream can cause severe performance degradation for an H2 polymer electrolyte membrane (PEM) fuel cell. The on-board removal of CO from an H2 stream requires a process temperature of less than 80 °C, and a fast reaction rate in order to minimize the reactor volume. At the present time, few technologies have been developed that meet these two requirements. This paper describes the concept of an electrochemical water gas shift (EWGS) process to remove low-concentration CO under ambient conditions for on-board applications. No on-board oxygen or air supply is needed for CO oxidation. Experimental work has been carried out to prove the EWGS concept, and the results indicate that the process can completely remove low-level CO and improve the performance of a PEM fuel cell to the level of a pure H2 stream. Because the EWGS electrolyzer can be adapted from a humidifier for a PEM fuel cell system, no additional device is needed for the CO removal. More experimental data are needed to determine the rate of electrochemical CO removal and to explore the mechanism of the proposed process.

  19. University of the seas, 15 years of oceanographic schools on board of the Marion Dufresne

    NASA Astrophysics Data System (ADS)

    Malaize, Bruno; Deverchere, Jacques; Leau, Hélène; Graindorge, David

    2015-04-01

    Since the first University at Sea, proposed by two French universities (Brest and Bordeaux) in 1999, the R/V Marion Dufresne, in collaboration with the French Polar Institute (IPEV), has welcomed 12 oceanographic schools. The main objective of this educational and scientific program is to stimulate the potential interest of graduate students in scientific fields dealing with oceanography, and to broaden exchanges with foreign universities, strengthening a pool of excellence at a high international scientific level. It is a unique opportunity for the students to discover and be involved in the ongoing work of collecting scientific data on board a ship, and to attend international research courses given by scientists involved in the cruise program. They also experience the final task of scientific work by presenting their own training results, making posters on board, and writing a cruise report. For some University at Sea cruises, students have also kept a daily journal, available on the internet and hosted by the main institutions involved (such as IPEV or EPOC, Bordeaux University). All this work is done in English, a language common to all the participants. An overview of these 15 years of experience will be presented, highlighting the financial support used, the logistics on board, as well as the benefits acquired by former students, now in permanent positions in different international institutions.

  20. Validation of On-board Cloud Cover Assessment Using EO-1

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Miller, Jerry; Griffin, Michael; Burke, Hsiao-hua

    2003-01-01

    The purpose of this NASA Earth Science Technology Office funded effort was to flight-validate an on-board cloud detection algorithm and to determine the performance that can be achieved with a Mongoose V flight computer. This validation was performed on the operational EO-1 satellite by uploading new flight code to perform the cloud detection. The algorithm was developed by MIT/Lincoln Lab and is based on the use of the Hyperion hyperspectral instrument using selected spectral bands from 0.4 to 2.5 microns. The Technology Readiness Level (TRL) of this technology was level 5 at the beginning of the task and TRL 6 upon completion. In the final validation, an 8-second (0.75 Gbyte) Hyperion image was processed on-board and assessed for percentage cloud cover within 30 minutes; processing had been expected to take many hours, perhaps a day, considering that the Mongoose V is only a 6-8 MIPS machine in performance. To accomplish this test, the image taken had to have level 0 and level 1 processing performed on-board before the cloud algorithm was applied. For almost all of the ground test cases and all of the flight cases, the cloud assessment was within 5% of the correct value, and in most cases within 1-2%.
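The Lincoln Lab classifier itself is multi-band and specific to Hyperion; purely as an illustration of the percentage-cloud-cover metric being validated, a single-band threshold classifier might look like the following (the threshold value and the scene data are made up):

```python
import numpy as np

def cloud_fraction(reflectance: np.ndarray, threshold: float = 0.4) -> float:
    """Fraction of pixels whose reflectance in one bright visible band
    exceeds a fixed threshold -- a toy stand-in for the multi-band
    MIT/Lincoln Lab algorithm described above."""
    return float(np.mean(reflectance > threshold))

# Synthetic 4-pixel scene: two 'cloudy' pixels out of four.
scene = np.array([[0.1, 0.8],
                  [0.9, 0.2]])
print(cloud_fraction(scene))  # 0.5
```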

  1. Risk Mitigation for the Development of the New Ariane 5 On-Board Computer

    NASA Astrophysics Data System (ADS)

    Stransky, Arnaud; Chevalier, Laurent; Dubuc, Francois; Conde-Reis, Alain; Ledoux, Alain; Miramont, Philippe; Johansson, Leif

    2010-08-01

    In the frame of Ariane 5 production, some equipment will become obsolete and needs to be redesigned and redeveloped. This is the case for the On-Board Computer, which has to be completely redesigned and re-qualified by RUAG Space, together with all its on-board software and associated development tools by ASTRIUM ST. This paper presents this obsolescence treatment, which started in 2007 under an ESA contract in the frame of the ACEP and ARTA accompaniment programmes, and which is critical both in technical terms and from a schedule point of view. It gives the context and overall development plan, and details the risk mitigation actions agreed with ESA, especially those related to the development of the input/output ASIC and to the on-board software porting and revalidation strategy. The efficiency of these risk mitigation actions has been proven by the resulting schedule; this development constitutes an up-to-date case study of good practices, including experience reports and feedback for future developments.

  2. Stationary and on-board storage systems to enhance energy and cost efficiency of tramways

    NASA Astrophysics Data System (ADS)

    Ceraolo, M.; Lutzemberger, G.

    2014-10-01

    Nowadays road transportation contributes substantially to urban pollution and greenhouse gas emissions. One solution in the urban environment, which also mitigates the effects of traffic jams, is the use of tramways. The most important bonus comes from the inherent reversibility of electric drives: energy can be sent back to the electricity source while braking the vehicle. This can be done by installing a storage device on board the trains, or at one or more points of the supply network. This paper analyses and compares the following variants: stationary high-power lithium batteries; stationary supercapacitors; high-power lithium batteries on board trains; supercapacitors on board trains. When the storage system consists of a supercapacitor stack, it is mandatory to interpose a DC/DC converter between it and the line; in the case of a lithium battery pack, the converter can be avoided. This paper evaluates all these configurations in a realistic case study, together with a cost/benefit analysis.
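The scale of the energy a storage device must absorb per braking event can be estimated from basic kinetic energy. The tram mass, speed, and round-trip efficiency below are illustrative values, not figures from the paper's case study:

```python
def recoverable_braking_energy(mass_kg: float, v_mps: float,
                               efficiency: float = 0.8) -> float:
    """Kinetic energy (J) recoverable when a tram brakes to a stop,
    scaled by an assumed round-trip conversion efficiency."""
    return 0.5 * mass_kg * v_mps**2 * efficiency

# A 40 t tram braking from 50 km/h (~13.9 m/s):
e_joules = recoverable_braking_energy(40000, 13.9)
print(round(e_joules / 3.6e6, 3))  # energy in kWh: 0.859
```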

  3. Is there still a benefit to operate appendiceal abscess on board French nuclear submarines?

    PubMed

    Hornez, Emmanuel; Gellie, Gabriel; Entine, Fabrice; Ottomani, Sébastien; Monchal, Tristan; Meusnier, François; Platel, Jean Philippe; de Carbonnieres, Hubert; Thouard, Hervé

    2009-08-01

    Appendiceal abscess occurs in 14.2% of patients presenting with acute appendicitis. Management of these patients remains controversial, ranging from emergency appendectomy to nonoperative treatment. On board French nuclear submarines, the usual treatment for all cases of appendiceal masses, including both appendicitis and appendiceal abscess, is appendectomy. In the past 5 years, the introduction of ultrasonography (US) on board has enabled the diagnosis of appendiceal abscess with a high rate of accuracy, and the latest studies show that nonoperative treatment is an alternative approach. This nonsurgical treatment, based on intravenous administration of antibiotics, is successful in about 93% of patients. Failure of nonsurgical treatment is a reliable indication for percutaneous drainage. The proportion of adult patients who need percutaneous drainage of abscesses is about 27%. A successful primary nonoperative treatment may or may not be followed by interval appendectomy at the conclusion of the patrol. Nonsurgical treatment is associated with significantly lower morbidity than surgery. Considering that the on-board surgical facility is limited, nonsurgical treatment appears to be the best approach for treating a sailor with an appendiceal abscess during a submarine patrol mission.

  4. SWAP: an EUV imager for solar monitoring on board of PROBA2

    NASA Astrophysics Data System (ADS)

    Katsiyannis, Athanassios C.; Berghmans, David; Hochedez, Jean-Francois; Nicula, Bogdan; Lawrence, Gareth; Defise, Jean-Marc; Ben-Moussa, Ali; Delouille, Veronique; Dominique, Marie; Lecat, Jean-Herve; Schmutz, W.; Theissen, Armin; Slemzin, Vladimir

    2005-08-01

    PROBA2 is an ESA technology demonstration mission to be launched in early 2007. The two primary scientific instruments on board PROBA2 are SWAP (Sun Watcher using Active Pixel System detector and Image Processing) and the LYRA VUV radiometer. SWAP provides a full-disk solar imaging capability with a bandpass filter centred at 17.5 nm (FeIX-XI) and a fast cadence of ≈1 min. The telescope is based on an off-axis Ritchey-Chretien design, while an extreme ultraviolet (EUV) enhanced APS CMOS will be used as the detector. As the prime goal of SWAP is solar monitoring and advance warning of Coronal Mass Ejections (CMEs), on-board intelligence will be implemented. Image recognition software using experimental algorithms will be used to detect CMEs during the first phase of eruption so the event can be tracked by the spacecraft without human intervention. LYRA will monitor solar irradiance in four different VUV passbands with a cadence of up to 100 Hz. The four channels were chosen for their relevance to solar physics, aeronomy and space weather: 115-125 nm (Lyman-α), the 200-220 nm Herzberg continuum, the 17-70 nm aluminium filter channel (which includes the HeII line at 30.4 nm) and the 1-20 nm zirconium filter channel. On-board calibration sources will monitor the stability of the detectors and the filters throughout the duration of the mission.

  5. Blue water nursing: the role of Navy nurses on board US Navy combatant ships.

    PubMed

    McLarnon, Colleen O; Wise, Jamie H

    2003-06-01

    The independent and autonomous nature of blue water nursing makes its practice exceptionally demanding, exciting, and rewarding for Navy nurses assigned to US Navy combatant ships. Whether serving on board an aircraft carrier or an amphibious assault ship, the lone critical care nurse is responsible for a diverse mix of nursing care ranging from community health to critical care. Delivering this care in an austere shipboard environment at sea, often in isolated locations without regularly available support from shore-based medical facilities, defines the blue water aspect of this nursing practice. Due to this type of setting and the limited critical care nursing presence on board, the greatest professional challenge at sea is care of the critically ill and injured. Leadership and teamwork are essential. Navy Corpsmen play an integral role in the delivery of nursing care, and the nurse relies heavily on them. Because the nurse cannot do it all alone, taking the lead in providing training and clinical guidance to the Corpsmen is a key responsibility of the shipboard nurse. In this assignment, the critical care nurse also has a unique opportunity to make an impact on the health and welfare of the ship's entire crew through wellness and prevention programs. Considering the personal and professional challenges and rewards of blue water nursing, most Navy nurses describe their tour of duty on board a combatant ship as the best assignment of their Navy career.

  6. Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei

    2007-02-01

    On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. Here we evaluate the achievable accuracy of using kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former study, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike the phantom study, the pCT of a patient is generally acquired at the time of simulation, and the anatomy may differ from that of the CBCT acquired at the time of treatment delivery because of organ deformation. To tackle the problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possess the geometric information of the CBCT but the electron density distribution mapped from the pCT with the help of BSpline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate of the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of pCT. No
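The calibration step the authors describe maps CT numbers (HU) to relative electron density through a piecewise-linear curve measured on the Catphan-600. The sketch below uses invented calibration points, since real curves are scanner- and protocol-specific and, as the study finds, differ between pCT and CBCT:

```python
import numpy as np

# Hypothetical calibration points (HU -> relative electron density) of
# the kind measured with a Catphan-style phantom; values are illustrative.
HU_POINTS = np.array([-1000.0, -100.0, 0.0, 300.0, 1000.0])
RED_POINTS = np.array([0.0, 0.93, 1.0, 1.15, 1.6])

def hu_to_relative_electron_density(hu):
    """Piecewise-linear lookup from CT number to relative electron
    density, clamped at the calibration end points."""
    return np.interp(hu, HU_POINTS, RED_POINTS)

# A water-like voxel (0 HU) maps to a relative electron density of 1.0:
print(hu_to_relative_electron_density(0.0))  # 1.0
```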

  7. Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation.

    PubMed

    Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei

    2007-02-07

    On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. Here we evaluate the achievable accuracy of using kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former study, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike the phantom study, the pCT of a patient is generally acquired at the time of simulation, and the anatomy may differ from that of the CBCT acquired at the time of treatment delivery because of organ deformation. To tackle the problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possess the geometric information of the CBCT but the electron density distribution mapped from the pCT with the help of BSpline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate of the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of pCT. No

  8. Dynamic neural networks based on-line identification and control of high performance motor drives

    NASA Technical Reports Server (NTRS)

    Rubaai, Ahmed; Kotaru, Raj

    1995-01-01

    In the automated and high-tech industries of the future, there will be a need for high performance motor drives in both the low-power and high-power ranges. To meet very stringent demands of tracking and regulation in the two quadrants of operation, advanced control technologies are of considerable interest and need to be developed. In response, a dynamic learning control architecture is developed with simultaneous on-line identification and control. The feature of the proposed approach, efficiently combining the dual tasks of system identification (learning) and adaptive control of nonlinear motor drives into a single operation, is presented. This approach, therefore, not only adapts to uncertainties in the dynamic parameters of the motor drives but also learns about their inherent nonlinearities. In fact, most neural-network-based adaptive control approaches in use have an identification phase entirely separate from the control phase. Because these approaches separate the identification and control modes, it is not possible to cope with dynamic changes in a controlled process. Extensive simulation studies have been conducted and good performance was observed. The robustness characteristics that allow neuro-controllers to perform efficiently in a noisy environment are also demonstrated. With this initial success, the principal investigator believes that the proposed approach with the suggested neural structure can be used successfully for the control of high performance motor drives. Two identification and control topologies based on the model reference adaptive control technique are used in the present analysis. No prior knowledge of load dynamics is assumed in either topology, while the second topology also assumes no knowledge of the motor parameters.
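The paper's identifiers are dynamic neural networks; the underlying idea of identifying a drive's parameters while it runs can nevertheless be sketched with a much simpler LMS estimator on a hypothetical first-order motor model (the plant parameters, step size, and input signal are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" first-order plant y[k+1] = a*y[k] + b*u[k], unknown to the
# estimator; a stand-in for a drive's simplified dynamics.
a_true, b_true = 0.9, 0.5
theta = np.zeros(2)   # on-line estimate of [a, b]
mu = 0.05             # LMS step size

y = 0.0
for _ in range(2000):
    u = rng.uniform(-1.0, 1.0)        # persistently exciting input
    phi = np.array([y, u])            # regressor [y[k], u[k]]
    y_next = a_true * y + b_true * u  # plant response
    err = y_next - theta @ phi        # one-step prediction error
    theta += mu * err * phi           # LMS parameter update
    y = y_next

print(np.round(theta, 2))  # converges close to [0.9, 0.5]
```

The key point mirrored from the abstract is that identification happens inside the running loop rather than in a separate off-line phase, so the estimate can track a plant whose parameters drift.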

  9. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
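The scalability analysis cited rests on Amdahl's law, which bounds the speedup of a workload with parallelizable fraction p on n cores; a near-12-fold speedup on 12 cores implies p very close to 1:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's-law speedup: 1 / ((1 - p) + p / n), where p is the
    parallelizable fraction of the workload and n the core count."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even a 1% serial fraction caps the 12-core speedup below 11x:
print(round(amdahl_speedup(0.99, 12), 2))  # 10.81
```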

  10. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506

  11. Survey of manufacturers of high-performance heat engines adaptable to solar applications

    NASA Technical Reports Server (NTRS)

    Stine, W. B.

    1984-01-01

    The results of an industry survey made during the summer of 1983 are summarized. The survey was initiated in order to develop an information base on advanced engines that could be used in the solar thermal dish-electric program. Questionnaires inviting responses were sent to 39 companies known to manufacture or integrate externally heated engines. Follow-up telephone communication ensured uniformity of response. It appears from the survey that the technology exists to produce external-heat-addition engines of appropriate size with thermal efficiencies of over 40%. Problem areas are materials and sealing.

  12. Building-Wide, Adaptive Energy Management Systems for High-Performance Buildings: Final CRADA Report

    SciTech Connect

    Zavala, Victor M.

    2016-10-27

    Development and field demonstration of the minimum ratio policy for occupancy-driven, predictive control of outdoor air ventilation. Technology transfer of Argonne’s methods for occupancy estimation and forecasting and for M&V to BuildingIQ for their deployment. Selection of CO2 sensing as the currently best-available technology for occupancy-driven controls. Accelerated restart capability for the commercial BuildingIQ system using horizon shifting strategies applied to receding horizon optimal control problems. Empirical-based evidence of 30% chilled water energy savings and 22% total HVAC energy savings achievable with the BuildingIQ system operating in the APS Office Building on-site at Argonne.

  13. GPU-based High-Performance Computing for Radiation Therapy

    PubMed Central

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPUs in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms will also be presented. PMID:24486639

  14. Materials integration issues for high performance fusion power systems.

    SciTech Connect

    Smith, D. L.

    1998-01-14

    One of the primary requirements for the development of fusion as an energy source is the qualification of materials for the first wall/blanket system that will provide high performance and exhibit favorable safety and environmental features. Both the economic competitiveness and the environmental attractiveness of fusion will be strongly influenced by materials constraints. A key aspect is the development of a compatible combination of materials for the various functions of structure, tritium breeding, coolant, neutron multiplication and other special requirements of a specific system. This paper presents an overview of key materials integration issues for high performance fusion power systems. Issues such as chemical compatibility of structure and coolant, hydrogen/tritium interactions with the plasma-facing/structure/breeder materials, thermomechanical constraints associated with coolant/structure, thermal-hydraulic requirements, and safety/environmental considerations are presented from a systems viewpoint. The major materials interactions for leading blanket concepts are discussed.

  15. Multijunction Photovoltaic Technologies for High-Performance Concentrators: Preprint

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2006-05-01

    Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.

  16. High performance computing and communications: FY 1996 implementation plan

    SciTech Connect

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  17. The Deformability of a High Performance Concrete (HPC)

    NASA Astrophysics Data System (ADS)

    Benamara, Dalila; Mezghiche, Bouzidi; Zohra, Mechrouh Fatma

    The current tendency in the world is to find new materials at lower cost which can guarantee better performance when incorporated into concretes. Our study lies within the scope of the valorization of local materials. Among these materials we find high performance concrete, which has been the object of several research efforts in recent years. This study covers the development and the mechanical and elastic properties of a high performance concrete (HPC) made from materials available on the Algerian market. Three mineral additions, limestone, dune sand and tile-polishing waste, are incorporated into the cement at various contents (5%, 10%, 15% and 20%), instead of silica fume or fly ash.

  18. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  19. GPU-based high-performance computing for radiation therapy.

    PubMed

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of research has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPUs in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPUs with other platforms will also be presented.

  20. Micromachined high-performance RF passives in CMOS substrate

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Ni, Zao; Gu, Lei; Wu, Zhengzheng; Yang, Chen

    2016-11-01

    This review systematically addresses the micromachining technologies used for the fabrication of high-performance radio-frequency (RF) passives that can be integrated into low-cost complementary metal-oxide semiconductor (CMOS)-grade (i.e. low-resistivity) silicon wafers. With the development of various kinds of post-CMOS-compatible microelectromechanical systems (MEMS) processes, 3D structural inductors/transformers, variable capacitors, tunable resonators and band-pass/low-pass filters can be compatibly integrated with active integrated circuits to form monolithic RF systems-on-chip. By using MEMS processes, including substrate modifying/suspending and LIGA-like metal electroplating, both the lossy-substrate effect and the resistive loss can be largely suppressed, thereby meeting the high-performance requirements of telecommunication applications.