Science.gov

Sample records for adaptable high-performance on-board

  1. Low cost and high performance on-board computer for picosatellite

    NASA Astrophysics Data System (ADS)

    Rajkowski, T.; Graczyk, R.; Palau, M. C.; Orleański, P.

    2012-05-01

    This work presents a new design of an on-board computer utilizing COTS, non-space-qualified components. Common attributes of computers already used in pico- and nanosatellites are presented, and the need for new on-board computer solutions for such satellites (concentrating on CubeSat satellites) is explained. Requirements for electronic devices sent to low Earth orbit in CubeSats are described in detail. Finally, the first version of the architecture of an on-board computer for CubeSats is presented (the PICARD project: Picosatellite Computer Architecture Development). The computer utilizes two processing units: a primary low-power unit (an ATmega128 microcontroller) and a secondary high-performance unit (a Spartan-6 SRAM FPGA). Basic features of these devices are presented, clarifying their choice for the project.

  2. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. 
Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
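    The per-group method selection described in point (1) can be pictured with a small sketch. The following Python is purely illustrative pseudo-API, not the real ADIOS interface; the names (`Group`, `config`) are invented, and in ADIOS the method choice normally lives in an external XML configuration rather than in code.

```python
# Illustrative sketch only: mimics the *idea* of per-group IO method
# selection, not the actual ADIOS API. All names here are invented.

class Group:
    """A named grouping of variables bound to one IO method."""
    def __init__(self, name, method):
        self.name = name
        self.method = method      # e.g. "POSIX", "MPI-IO"
        self.vars = {}

    def write(self, var, value):
        self.vars[var] = value

    def close(self):
        # A real transport would flush to the BP file here; we just
        # report what would be written and how.
        return f"{self.name}: {len(self.vars)} var(s) via {self.method}"

# Per-machine choice of method, normally kept in an external config so
# end users can switch methods without touching application code.
config = {"restart": "POSIX", "diagnostics": "MPI-IO"}
groups = {name: Group(name, m) for name, m in config.items()}
groups["restart"].write("temperature", [1.0, 2.0, 3.0])
print(groups["restart"].close())   # restart: 1 var(s) via POSIX
```

    The point of the indirection is that swapping "POSIX" for "MPI-IO" changes only configuration, never the application's write calls.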

  3. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data over constrained downlink resources. At JPL, a novel adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x over the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a field-programmable gate array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex-4 and Virtex-5 families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
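    The general flavor of adaptive predictive compression can be sketched in a few lines. The predictor below is a generic sign-LMS illustration of the idea, not the actual JPL 'Fast Lossless' algorithm; the window size and step size are invented for the example.

```python
# Generic sketch of adaptive linear prediction across spectral bands:
# predict each sample from the same pixel in previous bands, emit the
# (small) integer residual, and update weights by the sign of the error.
# This is illustrative only, not the JPL flight algorithm.

def predict_residuals(bands, n_prev=3, mu=0.01):
    """Return integer residuals for bands[n_prev:]; an entropy coder
    would then pack these losslessly."""
    weights = [0.0] * n_prev
    residuals = []
    for i in range(n_prev, len(bands)):
        context = bands[i - n_prev:i]
        pred = sum(w * x for w, x in zip(weights, context))
        err = bands[i] - pred
        residuals.append(round(err))
        # sign-based update: cheap and multiplier-free in hardware
        step = mu if err > 0 else -mu
        weights = [w + step * x for w, x in zip(weights, context)]
    return residuals

# Highly correlated bands quickly yield shrinking residuals:
print(predict_residuals([10, 10, 10, 10, 10, 10, 10, 10]))
```

    Because the weights adapt online, no training data is needed, which matches the property highlighted in the abstract.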

  4. On-board multispectral classification study. Volume 2: Supplementary tasks. [adaptive control]

    NASA Technical Reports Server (NTRS)

    Ewalt, D.

    1979-01-01

    The operational tasks of the onboard multispectral classification study were defined. These tasks include: sensing characteristics for future space applications; information adaptive systems architectural approaches; data set selection criteria; and onboard functional requirements for interfacing with global positioning satellites.

  5. Adaptive approach for on-board impedance parameters and voltage estimation of lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Farmann, Alexander; Waag, Wladislaw; Sauer, Dirk Uwe

    2015-12-01

    Robust algorithms using reduced-order equivalent circuit models (ECMs) for accurate and reliable estimation of battery states are becoming more popular in various applications. In this study, a novel adaptive, self-learning heuristic algorithm for on-board impedance parameter and voltage estimation of lithium-ion batteries (LIBs) in electric vehicles is introduced. The presented approach is verified using LIBs with different chemistries (NMC/C, NMC/LTO, LFP/C) at different aging states. An impedance-based reduced-order ECM incorporating an ohmic resistance and a combination of a constant phase element and a resistance (a so-called ZARC element) is employed. Existing algorithms in vehicles are much more limited in the complexity of their ECMs. The algorithm is validated using seven days of real vehicle data with high temperature variation, including very low temperatures (from -20 °C to +30 °C), at different depths of discharge (DoDs). Two possibilities for approximating the ZARC elements with a finite number of RC elements on board are shown, and the results of the voltage estimation are compared. Moreover, the current dependence of the charge-transfer resistance is considered by employing the Butler-Volmer equation. The achieved results indicate that both models yield almost the same grade of accuracy.
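    The ZARC-to-RC approximation mentioned above can be illustrated numerically. The sketch below evaluates the impedance of a ZARC (a resistance in parallel with a constant phase element) and of a series of RC elements; all component values are invented for the example and are not the paper's identified parameters.

```python
# Sketch: impedance of a ZARC element and of a finite RC-element
# approximation, the kind of substitution an on-board algorithm must
# make. Component values below are illustrative only.

def z_zarc(omega, R, tau, phi):
    """Impedance of a ZARC: R / (1 + (j*omega*tau)**phi)."""
    return R / (1 + (1j * omega * tau) ** phi)

def z_rc_series(omega, pairs):
    """Series connection of RC elements: sum of R/(1 + j*omega*R*C)."""
    return sum(R / (1 + 1j * omega * R * C) for R, C in pairs)

omega = 10.0                      # rad/s
zarc = z_zarc(omega, R=5e-3, tau=0.1, phi=0.8)
# Two RC pairs with spread time constants roughly mimic one ZARC.
approx = z_rc_series(omega, [(3e-3, 20.0), (2e-3, 80.0)])
print(abs(zarc - approx))         # small residual at this frequency
```

    At zero frequency both expressions reduce to the total resistance, which is a quick sanity check on any chosen RC approximation.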

  6. Reconfigurable and adaptive photonic networks for high-performance computing systems.

    PubMed

    Kodi, Avinash; Louri, Ahmed

    2009-08-01

    As feature sizes decrease to the submicrometer regime and clock rates increase to the multigigahertz range, the limited bandwidth at higher bit rates and longer communication distances in electrical interconnects will create a major bandwidth imbalance in future high-performance computing (HPC) systems. We explore the application of an optoelectronic interconnect for the design of flexible, high-bandwidth, reconfigurable and adaptive interconnection architectures for chip-to-chip and board-to-board HPC systems. Reconfigurability is realized by interconnecting arrays of optical transmitters, and adaptivity is implemented by a dynamic bandwidth reallocation (DBR) technique that balances the load on each communication channel. We evaluate a DBR technique, the lockstep (LS) protocol, that monitors traffic intensities, reallocates bandwidth, and adapts to changes in communication patterns. We incorporate this DBR technique into a detailed discrete-event network simulator to evaluate the performance for uniform, nonuniform, and permutation communication patterns. Simulation results indicate that, without reconfiguration techniques applied, the optics-based system architecture shows better performance than electrical interconnects for uniform and nonuniform patterns; with reconfiguration techniques applied, the dynamically reconfigurable optoelectronic interconnect provides much better performance for all communication patterns. Based on the performance study, the reconfigured architecture shows 30%-50% increased throughput and 50%-75% reduced network latency compared with electrical HPC networks. PMID:19649024

  7. An Adaptive Intelligent Integrated Lighting Control Approach for High-Performance Office Buildings

    NASA Astrophysics Data System (ADS)

    Karizi, Nasim

    An acute and crucial societal problem is the energy consumed in existing commercial buildings. There are 1.5 million commercial buildings in the U.S., with only about 3% built new each year. Hence, existing buildings need to be properly operated and maintained for several decades. Application of integrated centralized control systems in buildings could lead to more than 50% energy savings. This research demonstrates an innovative adaptive integrated lighting control approach that can achieve significant energy savings and increase indoor comfort in high-performance office buildings. In the first phase of the study, a predictive algorithm was developed and validated through experiments in an actual test room. The objective was to regulate daylight on a specified work plane by controlling the blind slat angles. Furthermore, a sensor-based integrated adaptive lighting controller was designed in Simulink, including an innovative sensor-optimization approach based on a genetic algorithm to minimize the number of sensors and place them efficiently in the office. The controller was designed around simple integral controllers. The objective of the developed control algorithm was to improve the illuminance in the office by controlling the daylight and electric lighting. To evaluate the performance of the system, the controller was applied to the experimental office model of Lee et al.'s 1998 study. The results of the developed control approach indicate a significant improvement in lighting conditions, and 1-23% and 50-78% monthly electrical energy savings in the office model compared with two static strategies in which the blinds were left open or closed for the whole year, respectively.
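    The "simple integral controllers" mentioned above can be sketched in miniature. The loop below nudges a dimming level toward an illuminance setpoint; the sensor model, lamp gain and controller gain are invented for the example and are not the study's calibrated values.

```python
# Minimal sketch of integral control of electric lighting: each step
# integrates the illuminance error into the dimming command. The lamp
# contribution (600 lx at full dimming) is a hypothetical sensor model.

def integral_controller(setpoint, daylight, ki=0.002, steps=200):
    """Return the lamp dimming level (0..1) after `steps` updates."""
    dimming = 0.0
    for _ in range(steps):
        measured = daylight + 600.0 * dimming   # lux seen by the sensor
        error = setpoint - measured
        dimming = min(1.0, max(0.0, dimming + ki * error))
    return dimming

# With 200 lx of daylight and a 500 lx target, lamps supply the rest:
level = integral_controller(setpoint=500.0, daylight=200.0)
print(round(600.0 * level + 200.0))   # 500
```

    The integral action drives the steady-state error to zero regardless of the daylight level, which is why the approach pairs naturally with blind-angle daylight regulation.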

  8. A multi-layer robust adaptive fault tolerant control system for high performance aircraft

    NASA Astrophysics Data System (ADS)

    Huo, Ying

    Modern high-performance aircraft demand advanced fault-tolerant flight control strategies. Not only control effector failures but also aerodynamic failures, such as wing-body damage, often result in substantially deteriorated performance because of low available redundancy. As a result, the remaining control actuators may yield substantially lower maneuvering capabilities, which do not permit the accomplishment of the aircraft's originally specified mission. The problem is to solve the control reconfiguration over the available control redundancies when mission modification is required to save the aircraft. The proposed robust adaptive fault-tolerant control (RAFTC) system consists of a multi-layer reconfigurable flight controller architecture. It contains three layers accounting for different types and levels of failures, including sensor, actuator, and fuselage damage. In the case of nominal operation with possible minor failure(s), a standard adaptive controller achieves the control allocation. This is referred to as the first layer, the controller layer. Performance adjustment is accounted for in the second layer, the reference layer, whose role is to adjust the reference model in the controller design with a degraded transient performance. The uppermost adjustment, of the mission itself, is made in the third layer, the mission layer, when the original mission is not feasible with greatly restricted control capabilities. The modified mission is achieved through optimization of the command signal, which guarantees the boundedness of the closed-loop signals. The main distinguishing feature of this layer is the mission-decision property based on the currently available resources. The contribution of the research is the multi-layer fault-tolerant architecture that can address complete failure scenarios and their accommodation in reality. Moreover, the emphasis is on the mission design capabilities, which may guarantee the stability of the aircraft with restricted post

  9. An experimental study of concurrent methods for adaptively controlling vertical tail buffet in high performance aircraft

    NASA Astrophysics Data System (ADS)

    Roberts, Patrick J.

    High-performance twin-tail aircraft, like the F-15 and F/A-18, encounter a condition known as tail buffet. At high angles of attack, vortices are generated at the wing-fuselage interface (shoulder) or other leading-edge extensions. These vortices are directed toward the twin vertical tails. When the flow interacts with a vertical tail, it creates pressure variations that can oscillate the vertical tail assembly. This results in fatigue cracks in the vertical tail assembly that can decrease the fatigue life and increase maintenance costs. Recently, an offset piezoceramic stack actuator was used on an F-15 wind tunnel model to control buffet-induced vibrations at high angles of attack. The controller was based on acceleration feedback control methods. In this thesis, a procedure for designing the offset piezoceramic stack actuators is developed. This design procedure includes determining the quantity and type of piezoceramic stacks used in these actuators. The changes in stresses in the vertical tail caused by these actuators during active control are investigated. In many cases, linear controllers are very effective in reducing vibrations. However, during flight, the natural frequencies of the vertical tail structural system change as the airspeed increases. This, in turn, reduces the effectiveness of a linear controller. Other causes, such as unmodeled dynamics and nonlinear effects due to debonds, also reduce the effectiveness of linear controllers. In this thesis, an adaptive neural network is used to augment the linear controller to correct these effects.

  10. Design and Performance Optimization of GeoFEST for Adaptive Geophysical Modeling on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Norton, C. D.; Parker, J. W.; Lyzenga, G. A.; Glasscoe, M. T.; Donnellan, A.

    2006-12-01

    The Geophysical Finite Element Simulation Tool (GeoFEST) and the PYRAMID parallel adaptive mesh refinement library have been integrated to provide high-performance, high-resolution modeling of 3D Earth crustal deformation under the tectonic loading associated with the earthquake cycle. This includes co-seismic and post-seismic modeling capabilities as well as other problems of geophysical interest. The use of the PYRAMID AMR library has allowed simulations of tens of millions of elements on various parallel computers, where strain energy is applied as the error-estimation criterion. This has allowed improved generation of time-dependent simulations where the computational effort can be localized to the geophysical regions of most activity. This talk will address techniques including conversion of the sequential GeoFEST software to a parallel version using PYRAMID, performance optimization, and various lessons learned in porting such software to parallel systems including Linux clusters, SGI Altix systems, and Apple G5 Xserve systems. We will also describe how the software has been applied in modeling post-seismic deformation in studies of the Landers and Northridge earthquake events.

  11. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    NASA Technical Reports Server (NTRS)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and the global water and energy cycles.

  12. High-Performance Reactive Fluid Flow Simulations Using Adaptive Mesh Refinement on Thousands of Processors

    NASA Astrophysics Data System (ADS)

    Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.

    We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive-mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.

  13. Real-Time Adaptive Control Allocation Applied to a High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Lallman, Frederick J.; Bundick, W. Thomas

    2001-01-01

    This paper presents the development and application of one approach to the control of aircraft with large numbers of control effectors. This approach, referred to as real-time adaptive control allocation, combines a nonlinear method for control allocation with actuator failure detection and isolation. The control allocator maps moment (or angular acceleration) commands into physical control effector commands as functions of individual control effectiveness and availability. The actuator failure detection and isolation algorithm is a model-based approach that uses models of the actuators to predict actuator behavior and an adaptive decision threshold to achieve acceptable false-alarm/missed-detection rates. This integrated approach provides control reconfiguration when an aircraft is subjected to actuator failure, thereby improving maneuverability and survivability of the degraded aircraft. The method is demonstrated on a next-generation military aircraft (Lockheed-Martin Innovative Control Effector) simulation that has been modified to include a novel nonlinear fluid flow control effector based on passive porosity. Desktop and real-time piloted simulation results demonstrate the performance of this integrated adaptive control allocation approach.
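    The allocation step described above (moments mapped to effector commands as functions of effectiveness and availability) can be sketched as a least-squares solve. This is a generic illustration of the concept, not the paper's nonlinear allocator; the effectiveness matrix and commands are invented.

```python
# Hedged sketch of control allocation: solve B @ u ≈ moment_cmd for the
# effector vector u, zeroing columns of effectors flagged as failed.
# The effectiveness entries below are made up for illustration.

import numpy as np

def allocate(B, moment_cmd, failed=()):
    """Minimum-norm least-squares allocation over available effectors."""
    B = B.copy()
    B[:, list(failed)] = 0.0          # failed effectors produce no moment
    u, *_ = np.linalg.lstsq(B, moment_cmd, rcond=None)
    return u

# 3 moments (roll, pitch, yaw), 5 effectors; entries are effectiveness.
B = np.array([[1.0, -1.0, 0.0, 0.2, -0.2],
              [0.3,  0.3, 1.0, 0.0,  0.0],
              [0.0,  0.0, 0.1, 1.0, -1.0]])
cmd = np.array([0.5, -0.2, 0.1])

u_nominal = allocate(B, cmd)
u_failed = allocate(B, cmd, failed=(0,))   # reallocate after effector 0 fails
print(np.allclose(B @ u_nominal, cmd))     # True: command still achievable
```

    With redundant effectors the zeroed column simply shifts load onto the survivors, which is the reconfiguration behavior the integrated approach exploits.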

  14. High-performance brain-machine interface enabled by an adaptive optimal feedback-controlled point process decoder.

    PubMed

    Shanechi, Maryam M; Orsborn, Amy; Moorman, Helene; Gowda, Suraj; Carmena, Jose M

    2014-01-01

    Brain-machine interface (BMI) performance has been improved using Kalman filters (KF) combined with closed-loop decoder adaptation (CLDA). CLDA fits the decoder parameters during closed-loop BMI operation based on the neural activity and inferred user velocity intention. These advances have resulted in the recent ReFIT-KF and SmoothBatch-KF decoders. Here we demonstrate high-performance and robust BMI control using a novel closed-loop BMI architecture termed adaptive optimal feedback-controlled (OFC) point process filter (PPF). Adaptive OFC-PPF allows subjects to issue neural commands and receive feedback with every spike event and hence at a faster rate than the KF. Moreover, it adapts the decoder parameters with every spike event in contrast to current CLDA techniques that do so on the time-scale of minutes. Finally, unlike current methods that rotate the decoded velocity vector, adaptive OFC-PPF constructs an infinite-horizon OFC model of the brain to infer velocity intention during adaptation. Preliminary data collected in a monkey suggests that adaptive OFC-PPF improves BMI control. OFC-PPF outperformed SmoothBatch-KF in a self-paced center-out movement task with 8 targets. This improvement was due to both the PPF's increased rate of control and feedback compared with the KF, and to the OFC model suggesting that the OFC better approximates the user's strategy. Also, the spike-by-spike adaptation resulted in faster performance convergence compared to current techniques. Thus adaptive OFC-PPF enabled proficient BMI control in this monkey. PMID:25571483

  15. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services, both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high-quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels sampled at 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.

  16. Adaptation of the anelastic solver EULAG to high performance computing architectures.

    NASA Astrophysics Data System (ADS)

    Wójcik, Damian; Ciżnicki, Miłosz; Kopta, Piotr; Kulczewski, Michał; Kurowski, Krzysztof; Piotrowski, Zbigniew; Rojek, Krzysztof; Rosa, Bogdan; Szustak, Łukasz; Wyrzykowski, Roman

    2014-05-01

    In recent years there has been widespread interest in employing heterogeneous and hybrid supercomputing architectures for geophysical research. An especially promising application for modern supercomputing architectures is numerical weather prediction (NWP). Adapting traditional NWP codes to new machines based on multi- and many-core processors, such as GPUs, allows increased computational efficiency and decreased energy consumption. This offers a unique opportunity to develop simulations with finer grid resolutions and computational domains larger than ever before. Further, it enables extending the range of scales represented in the model, so that the accuracy of representation of the simulated atmospheric processes can be improved. Consequently, it allows the quality of weather forecasts to be improved. A coalition of Polish scientific institutions launched a project aimed at adapting the EULAG fluid solver for future high-performance computing platforms. EULAG is currently being implemented as a new dynamical core of the COSMO Consortium weather prediction framework. The solver code combines features of stencil and pointwise computations. Its communication scheme consists of both halo-exchange subroutines and global reduction functions. Within the project, two main modules of EULAG, namely the MPDATA advection scheme and the iterative GCR elliptic solver, are analyzed and optimized. Relevant techniques have been chosen and applied to accelerate code execution on modern HPC architectures: stencil decomposition, block decomposition (with weighting analysis between computation and communication), reduction of inter-cache communication by partitioning cores into independent teams, cache reuse, and vectorization. Experiments matching computational domain topology to cluster topology are performed as well. The parallel formulation was extended from pure MPI to a hybrid MPI-OpenMP approach. Porting to GPU using CUDA directives is in progress.
Preliminary results of performance of the

  17. Low-cost high performance adaptive optics real-time controller in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Chen, Shanqiu; Liu, Chao; Zhao, Enyi; Xian, Hao; Xu, Bing; Ye, Yutang

    2014-11-01

    This paper proposes a low-cost, high-performance adaptive optics real-time controller for a free-space optical communication system. The real-time controller is constructed from a four-core CPU running Linux patched with the Real-Time Application Interface (RTAI) and a frame grabber; the whole cost is below $6000. A multi-core parallel processing scheme and SSE-instruction optimization of the reconstruction process yield about a 5x speedup, and by utilizing a streamlined processing scheme the overall processing time for this 137-element adaptive optics system can reach below 100 us, with a latency of about 50 us, meeting the requirement of processing at frequencies over 1709 Hz. A real-time data storage system based on a circular buffer enables the system to store consecutive image frames and provides a way to analyze the image data and intermediate data such as slope information.
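    The reconstruction step such controllers parallelize is, at its core, a matrix-vector product: actuator commands are the reconstruction matrix applied to the measured slope vector, so rows can be split across cores. The sketch below shows the row-partitioned structure with threads; the matrix sizes are illustrative, not the 137-element system's real geometry.

```python
# Sketch of parallel wavefront reconstruction: commands = matrix @ slopes,
# with rows distributed across worker threads. Sizes are illustrative.

from concurrent.futures import ThreadPoolExecutor

def dot(row, slopes):
    return sum(r * s for r, s in zip(row, slopes))

def reconstruct(matrix, slopes, workers=4):
    """Compute the command vector with rows shared across threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: dot(row, slopes), matrix))

matrix = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]   # 3 actuators, 2 slopes
slopes = [0.5, -0.25]
print(reconstruct(matrix, slopes))   # [0.5, -0.5, 0.25]
```

    In a production controller the same partitioning is done with pinned cores and SIMD (e.g. SSE) inner loops rather than Python threads, but the row-split structure is identical.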

  18. High-performance oscillators employing adaptive optics comprised of discrete elements

    NASA Astrophysics Data System (ADS)

    Jackel, Steven M.; Moshe, Inon; Lavi, Raphael

    1999-05-01

    Flashlamp-pumped oscillators utilizing Nd:Cr:GSGG or Nd:YAG rods were stabilized against varying levels of thermal focusing by use of a variable radius mirror (VRM). In its simplest form, the VRM consisted of a lens followed by a concave mirror. The separation of the two elements controlled the radius of curvature of the reflected phase front. The addition of a concave-convex variable-separation cylindrical lens pair allowed astigmatism to be corrected. These distributed optical elements, together with a computer-controlled servo system, formed an adaptive optic capable of correcting the varying thermal focusing and astigmatism encountered in an Nd:YAG confocal unstable resonator (0-30 W) and in Nd:Cr:GSGG stable (hemispherical or concave-convex) resonators, so that high beam quality could be maintained over the entire operating range. By utilizing resonators designed to eliminate birefringence losses, high efficiency could also be maintained. The ability to eliminate thermally induced losses in GSGG allows operating power to be increased into the range where thermal fracture is a factor. We present some results on the effect of surface finish (fine grind, grooves, chemical-etch strengthening) on the fracture limit and high-gain operation.

  19. Lockheed L-1011 Test Station on-board in support of the Adaptive Performance Optimization flight res

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This console and its complement of computers, monitors and communications equipment make up the Research Engineering Test Station, the nerve center for a new aerodynamics experiment being conducted by NASA's Dryden Flight Research Center, Edwards, California. The equipment is installed on a modified Lockheed L-1011 TriStar jetliner operated by Orbital Sciences Corp., of Dulles, Va., for Dryden's Adaptive Performance Optimization project. The experiment seeks to improve the efficiency of long-range jetliners by using small movements of the ailerons to improve the aerodynamics of the wing at cruise conditions. About a dozen research flights in the Adaptive Performance Optimization project are planned over the next two to three years. Improving the aerodynamic efficiency should result in equivalent reductions in fuel usage and costs for airlines operating large, wide-bodied jetliners.

  20. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; George, Alan; Aggarwal, Vikas; Patel, Minesh; Some, Raphael

    2006-01-01

    With the ever-increasing demand for higher bandwidth and processing capacity in today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high-performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort, Honeywell has teamed with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  1. DSP-based adaptive backstepping using the tracking errors for high-performance sensorless speed control of induction motor drive.

    PubMed

    Zaafouri, Abderrahmen; Ben Regaya, Chiheb; Ben Azza, Hechmi; Châari, Abdelkader

    2016-01-01

    This paper presents a modified structure for backstepping nonlinear control of an induction motor (IM) fitted with an adaptive backstepping speed observer. The control design is based on the backstepping technique, complemented by the introduction of integral tracking-error action to improve its robustness. Unlike other research on backstepping control with integral action, the control law developed in this paper does not increase the number of system states, so as not to increase the complexity of solving the differential equations. The digital simulation and experimental results show the effectiveness of the proposed control compared with conventional PI control. The analysis of the results shows the characteristic robustness of the adaptive control to load disturbances, speed variation, and low-speed operation. PMID:26653141
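    The integral tracking-error idea can be shown on a toy loop: the control effort acts on both the error and its running integral, gaining robustness without introducing an extra state equation to solve. The plant and gains below are hypothetical, not the paper's induction-motor model.

```python
# Illustrative sketch of integral-augmented error feedback on a toy
# first-order "speed" plant. Gains and dynamics are invented.

def simulate(ref=1.0, k=4.0, ki=2.0, dt=0.001, steps=20000):
    """Return the final speed after closed-loop simulation."""
    speed, z = 0.0, 0.0            # z accumulates the tracking error
    for _ in range(steps):
        e = ref - speed            # tracking error
        z += e * dt                # integral of the tracking error
        u = k * e + ki * z         # control law uses e and its integral
        speed += (-speed + u) * dt # toy first-order plant dynamics
    return speed

print(round(simulate(), 3))   # converges to the reference: 1.0
```

    The integral term is what removes the steady-state error that a pure proportional term would leave against a constant load.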

  2. Design of high-performance adaptive objective lens with large optical depth scanning range for ultrabroad near infrared microscopic imaging

    PubMed Central

    Lan, Gongpu; Mauger, Thomas F.; Li, Guoqiang

    2015-01-01

    We report on the theory and design of an adaptive objective lens for ultra-broadband near-infrared imaging with a large dynamic optical depth scanning range, achieved by using an embedded tunable lens. It can find wide application in deep-tissue biomedical imaging systems such as confocal microscopy, optical coherence tomography (OCT) and two-photon microscopy, both in vivo and ex vivo. The design is based on, but not limited to, a home-made prototype of a liquid-filled membrane lens with a clear aperture of 8 mm and a thickness of 2.55 mm - 3.18 mm. It is beneficial to have an adaptive objective lens whose depth scanning range exceeds the focal-length zoom range, since this keeps the magnification of the whole system, the numerical aperture (NA), the field of view (FOV), and the resolution more consistent. To achieve this goal, a systematic theory is presented, for the first time to our knowledge, by inserting the varifocal lens between a front and a back solid lens group. The designed objective has a compact size (10 mm diameter and 15 mm length), an ultra-broad working bandwidth (760 nm - 920 nm), a large depth scanning range (7.36 mm in air), 1.533 times the focal-length zoom range (4.8 mm in air), and a FOV of about 1 mm x 1 mm. Diffraction-limited performance is achieved within this ultra-broad bandwidth through all scanning depths (the resolution is 2.22 um - 2.81 um, calculated at a wavelength of 800 nm with an NA of 0.214 - 0.171). The chromatic focal shift is within the depth of focus (field). The chromatic difference in distortion is nearly zero, and the maximum distortion is less than 0.05%. PMID:26417508

  3. A High Performance, Cost-Effective, Open-Source Microscope for Scanning Two-Photon Microscopy that Is Modular and Readily Adaptable

    PubMed Central

    Rosenegger, David G.; Tran, Cam Ha T.; LeDue, Jeffery; Zhou, Ning; Gordon, Grant R.

    2014-01-01

    Two-photon laser scanning microscopy has revolutionized the ability to delineate cellular and physiological function in acutely isolated tissue and in vivo. However, there exist barriers for many laboratories to acquire two-photon microscopes. Additionally, if owned, typical systems are difficult to modify to rapidly evolving methodologies. A potential solution to these problems is to enable scientists to build their own high-performance and adaptable system by overcoming a resource insufficiency. Here we present a detailed hardware resource and protocol for building an upright, highly modular and adaptable two-photon laser scanning fluorescence microscope that can be used for in vitro or in vivo applications. The microscope is comprised of high-end componentry on a skeleton of off-the-shelf compatible opto-mechanical parts. The dedicated design enabled imaging depths close to 1 mm into mouse brain tissue and a signal-to-noise ratio that exceeded all commercial two-photon systems tested. In addition to a detailed parts list, instructions for assembly, testing and troubleshooting, our plan includes complete three dimensional computer models that greatly reduce the knowledge base required for the non-expert user. This open-source resource lowers barriers in order to equip more laboratories with high-performance two-photon imaging and to help progress our understanding of the cellular and physiological function of living systems. PMID:25333934

  4. Wind shear measuring on board an airliner

    NASA Technical Reports Server (NTRS)

    Krauspe, P.

    1984-01-01

    A measurement technique which continuously determines the wind vector on board an airliner during takeoff and landing is introduced. Its implementation is intended to deliver sufficient statistical background concerning low frequency wind changes in the atmospheric boundary layer and extended knowledge about deterministic wind shear modeling. The wind measurement scheme is described and the adaptation of apparatus onboard an A300 airbus is shown. Preliminary measurements made during level flight demonstrate the validity of the method.

  5. New On-board Microprocessors

    NASA Astrophysics Data System (ADS)

    Weigand, R.

    Two new processor devices have been developed for use on board spacecraft. An 8-bit 8032 microcontroller targets typical control applications in instruments and subsystems, or could be used as the main processor on small satellites, whereas the LEON 32-bit SPARC processor can be used for high-performance control and data-processing tasks. The ADV80S32 is fully compliant with the Intel 8051 architecture and instruction set, extended by additional peripherals, 512 bytes of on-chip RAM and a bootstrap PROM, which allows downloading the application software using the CCSDS PacketWire protocol. The memory controller provides a de-multiplexed address/data bus, and allows access to up to 16 MB of data and 8 MB of program RAM. The peripherals have been designed for the specific needs of a spacecraft, such as serial interfaces compatible with RS232, PacketWire and TTC-B-01, counters/timers for extended duration, and a CRC calculation unit accelerating the CCSDS TM/TC protocol. The 0.5 um Atmel manufacturing technology (MG2RT) provides latch-up and total dose immunity; SEU fault immunity is implemented by using SEU-hardened flip-flops and EDAC protection of internal and external memories. The maximum clock frequency of 20 MHz allows a processing power of 3 MIPS. Engineering samples are available. For SW development, various SW packages for the 8051 architecture are on the market. The LEON processor implements the 32-bit SPARC V8 architecture, including all the multiply and divide instructions, complemented by a floating-point unit (FPU). It includes several standard peripherals, such as timers/watchdog, interrupt controller, UARTs, parallel I/Os and a memory controller, allowing the use of 8-, 16- and 32-bit PROM, SRAM or memory-mapped I/O. With separate on-chip instruction and data caches, almost one instruction per clock cycle can be reached in some applications. A 33-MHz 32-bit PCI master/target interface and a PCI arbiter allow operating the device in a plug-in card

  6. On-Board Chemical Propulsion Technology

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    On-board propulsion functions include orbit insertion, orbit maintenance, constellation maintenance, precision positioning, in-space maneuvering, de-orbiting, vehicle reaction control, planetary retro, and planetary descent/ascent. This paper discusses on-board chemical propulsion technology, including bipropellants, monopropellants, and micropropulsion. Bipropellant propulsion has focused on maximizing the performance of Earth storable propellants by using high-temperature, oxidation-resistant chamber materials. The performance of bipropellant systems can be increased further by operating at elevated chamber pressures and/or using higher energy oxidizers. Both options present system level difficulties for spacecraft, however. Monopropellant research has focused on mixtures composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants, unlike hydrazine, do not present a vapor hazard and do not require extraordinary procedures for storage, handling, and disposal. HAN-based monopropellants generically have higher densities and lower freezing points than state-of-the-art hydrazine and can provide higher performance, depending on the formulation. High-performance HAN-based monopropellants, however, have aggressive, high-temperature combustion environments and require advances in catalyst materials or suitable non-catalytic ignition options. The objective of the micropropulsion technology area is to develop low-cost, high-utility propulsion systems for the range of miniature spacecraft and precision propulsion applications.

  7. The TWINS Instrument On Board Mars Insight Mission

    NASA Astrophysics Data System (ADS)

    Velasco, Tirso; Rodríguez-Manfredi, Jose A.

    2015-04-01

    The aim of this paper is to present the TWINS (Temperature and Wind sensors for INSight mission) instrument developed for the JPL Mars InSight mission, to be launched in 2016. TWINS will provide high performance wind and air temperature measurements for the mission platform. TWINS is based on heritage from REMS (Rover Environmental Monitoring Station) on board the Curiosity rover, which has been working successfully on the Martian surface since August 2012. The REMS booms spare hardware, comprising the wind and temperature sensors, has been refurbished into the TWINS booms, with enhanced performance in terms of dynamic range and resolution. Its short development time and low cost have shown the capability of the REMS design and technologies developed for Curiosity to be adapted to a new mission and new scientific requirements, with increased performance. It is also an example of international cooperation in planetary missions that has been carried out in the frame of science instruments within the Curiosity and InSight missions.

  8. Tailored Assemblies of Rod-Coil Poly(3-hexylthiophene)-b-Polystyrene Diblock Copolymers: Adaptable Building Blocks for High-Performance Organic Field-Effect Transistors

    SciTech Connect

    Xiao, Kai; Yu, Xiang; Chen, Jihua; Lavrik, Nickolay V; Hong, Kunlun; Sumpter, Bobby; Geohegan, David B

    2011-01-01

    The self-assembly process and resulting structure of a series of conductive diblock copolymer thin films of poly(3-hexylthiophene)-b-polystyrene (P3HT-b-PS) have been studied by TEM, SAED, GIXD and AFM, and additionally by first-principles modeling and simulation. By varying the molecular weight of the P3HT segment, these block copolymers undergo microphase separation and self-assemble into nanostructured spheres, lamellae, nanofibers, and nanoribbons in the films. Within the diblock copolymer thin film, the covalently bonded PS blocks segregated to form amorphous domains, while the conductive P3HT blocks were crystalline, exhibiting highly ordered molecular packing with their alkyl side chains aligned along the normal to the substrate and the π-π stacking direction of the thiophene rings aligned parallel to the substrate. The conductive P3HT block copolymers exhibited significant improvements in organic field-effect transistor (OFET) performance and environmental stability as compared to P3HT homopolymers, with up to a factor of two increase in measured mobility (0.08 cm2/Vs) for the P4 (85 wt% P3HT). Overall, this work demonstrates that the high degree of molecular order induced by block copolymer phase separation can improve the transport properties and stability of conductive polymers critical for high-performance OFETs.

  9. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials as well as the development of new polymers will provide technology to help meet NASA future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  10. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  11. On-Board Mining in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.; Conover, H.; Graves, S.; Ramachandran, R.; Rushing, J.

    2004-12-01

    On-board data mining can contribute to many research and engineering applications, including natural hazard detection and prediction, intelligent sensor control, and the generation of customized data products for direct distribution to users. The ability to mine sensor data in real time can also be a critical component of autonomous operations, supporting deep space missions, unmanned aerial and ground-based vehicles (UAVs, UGVs), and a wide range of sensor meshes, webs and grids. On-board processing is expected to play a significant role in the next generation of NASA, Homeland Security, Department of Defense and civilian programs, providing for greater flexibility and versatility in measurements of physical systems. In addition, the use of UAV and UGV systems is increasing in military, emergency response and industrial applications. As research into the autonomy of these vehicles progresses, especially in fleet or web configurations, the applicability of on-board data mining is expected to increase significantly. Data mining in real time on board sensor platforms presents unique challenges. Most notably, the data to be mined is a continuous stream, rather than a fixed store such as a database. This means that the data mining algorithms must be modified to make only a single pass through the data. In addition, the on-board environment requires real time processing with limited computing resources, thus the algorithms must use fixed and relatively small amounts of processing time and memory. The University of Alabama in Huntsville is developing an innovative processing framework for the on-board data and information environment. The Environment for On-Board Processing (EVE) and the Adaptive On-board Data Processing (AODP) projects serve as proofs-of-concept of advanced information systems for remote sensing platforms. The EVE real-time processing infrastructure will upload, schedule and control the execution of processing plans on board remote sensors. 
These plans
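    The single-pass, fixed-memory constraint described above can be illustrated with a classic streaming computation. This is a generic sketch (Welford's online algorithm), not code from EVE or AODP:

```python
# Streaming statistics in one pass with O(1) memory, as required when the
# data to be mined is a continuous stream rather than a fixed store.
class StreamingStats:
    """Welford's online algorithm: running mean and variance, single pass."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

s = StreamingStats()
for reading in [2.0, 4.0, 6.0, 8.0]:   # stands in for a sensor stream
    s.update(reading)
print(s.mean)        # 5.0
print(s.variance())  # ~6.667
```

    Each reading is folded into the summary and discarded, so memory use stays constant no matter how long the stream runs.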

  12. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high-performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  13. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  14. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, April 18-21, 1994.

  15. HyspIRI On-Board Science Data Processing

    NASA Technical Reports Server (NTRS)

    Flatley, Tom

    2010-01-01

    Topics include on-board science data processing, on-board image processing, software upset mitigation, on-board data reduction, on-board 'VSWIR' products, the HyspIRI demonstration testbed, and processor comparison.

  16. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylenenaphtalate). Two different methods have been used to prepare the foam samples: high temperature expansion and two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  17. On-board Data Mining

    NASA Astrophysics Data System (ADS)

    Tanner, Steve; Stein, Cara; Graves, Sara J.

    Networks of remote sensors are becoming more common as technology improves and costs decline. In the past, a remote sensor was usually a device that collected data to be retrieved at a later time by some other mechanism. These collected data were usually processed well after the fact at a computer far removed from the in situ sensing location. This has begun to change as sensor technology, on-board processing, and network communication capabilities have increased and their prices have dropped. There has been an explosion in the number of sensors and sensing devices, not just around the world, but literally throughout the solar system. These sensors are not only becoming vastly more sophisticated, accurate, and detailed in the data they gather, but they are also becoming cheaper, lighter, and smaller. At the same time, engineers have developed improved methods to embed computing systems, memory, storage, and communication capabilities into the platforms that host these sensors. Now, it is not unusual to see large networks of sensors working in cooperation with one another. Nor does it seem strange to see the autonomous operation of sensor-based systems, from space-based satellites to smart vacuum cleaners that keep our homes clean and robotic toys that help to entertain and educate our children. But access to sensor data and computing power is only part of the story. For all the power of these systems, there are still substantial limits to what they can accomplish. These include the well-known limits to current Artificial Intelligence capabilities and our limited ability to program the abstract concepts, goals, and improvisation needed for fully autonomous systems. But they also include much more basic engineering problems such as lack of adequate power, communications bandwidth, and memory, as well as problems with the geolocation and real-time georeferencing required to integrate data from multiple sensors to be used together.

  18. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-12-31

    DOE has launched a program to make a step change in power plant performance by advancing steam conditions to 1500 F, since the highest performance gains can be achieved in a 1500 F steam system when using a topping turbine ahead of a back-pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested. It has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

  19. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance FORTRAN is a set of extensions to FORTRAN 90 designed to allow specification of data-parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared- and distributed-memory machines.

  20. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  1. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  2. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft2 F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup, and includes some of the field and simulation results.
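    A back-of-the-envelope calculation shows why the R-5 target matters. The function and numbers below are illustrative only (a hypothetical 100 ft2 of glazing), not measurements from the test facility:

```python
# Steady-state conduction loss through a window: Q = A * dT / R,
# with A in ft2, dT in F, and R in ft2*F*h/Btu (the units used above).
def heat_loss_btu_per_hr(area_ft2, delta_t_f, r_value):
    """Conductive heat loss in Btu/h for a window of a given R-value."""
    return area_ft2 * delta_t_f / r_value

area, dT = 100.0, 40.0                     # hypothetical glazing area and temperature difference
old = heat_loss_btu_per_hr(area, dT, 2.0)  # roughly R-2: typical clear double-pane unit
new = heat_loss_btu_per_hr(area, dT, 5.0)  # the project's R-5 requirement
print(old, new, round(1 - new / old, 2))   # 2000.0 800.0 0.6
```

    Moving from R-2 to R-5 cuts conduction losses by 60% in this simplified model; the project's field measurements and EnergyPlus simulations account for the solar gains and climate effects this sketch ignores.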

  3. Intelligent On-Board Processing in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.

    2005-12-01

    Most existing sensing systems are designed as passive, independent observers. They are rarely aware of the phenomena they observe, and are even less likely to be aware of what other sensors are observing within the same environment. Increasingly, intelligent processing of sensor data is taking place in real-time, using computing resources on-board the sensor or the platform itself. One can imagine a sensor network consisting of intelligent and autonomous space-borne, airborne, and ground-based sensors. These sensors will act independently of one another, yet each will be capable of both publishing and receiving sensor information, observations, and alerts among other sensors in the network. Furthermore, these sensors will be capable of acting upon this information, perhaps altering acquisition properties of their instruments, changing the location of their platform, or updating processing strategies for their own observations to provide responsive information or additional alerts. Such autonomous and intelligent sensor networking capabilities provide significant benefits for collections of heterogeneous sensors within any environment. They are crucial for multi-sensor observations and surveillance, where real-time communication with external components and users may be inhibited, and the environment may be hostile. In all environments, mission automation and communication capabilities among disparate sensors will enable quicker response to interesting, rare, or unexpected events. Additionally, an intelligent network of heterogeneous sensors provides the advantage that all of the sensors can benefit from the unique capabilities of each sensor in the network. The University of Alabama in Huntsville (UAH) is developing a unique approach to data processing, integration and mining through the use of the Adaptive On-Board Data Processing (AODP) framework. 
AODP is a key foundation technology for autonomous internetworking capabilities to support situational awareness by

  4. High-Performance Bipropellant Engine

    NASA Technical Reports Server (NTRS)

    Biaglow, James A.; Schneider, Steven J.

    1999-01-01

    TRW, under contract to the NASA Lewis Research Center, has successfully completed over 10 000 sec of testing of a rhenium thrust chamber manufactured via a new-generation powder metallurgy. High performance was achieved for two different propellants, N2O4-N2H4 and N2O4-MMH. TRW conducted 44 tests with N2O4-N2H4, accumulating 5230 sec of operating time with maximum burn times of 600 sec and a specific impulse Isp of 333 sec. Seventeen tests were conducted with N2O4-MMH for an additional 4789 sec and a maximum Isp of 324 sec, with a maximum firing duration of 700 sec. Together, the 61 tests totalled 10 019 sec of operating time, with the chamber remaining in excellent condition. Of these tests, 11 lasted 600 to 700 sec. The performance of radiation-cooled rocket engines is limited by their operating temperature. For the past two to three decades, the majority of radiation-cooled rockets were composed of a high-temperature niobium alloy (C103) with a disilicide oxide coating (R512) for oxidation resistance. The R512 coating practically limits the operating temperature to 1370 C. For the Earth-storable bipropellants commonly used in satellite and spacecraft propulsion systems, a significant amount of fuel film cooling is needed. The large film-cooling requirement extracts a large penalty in performance from incomplete mixing and combustion. A material system with a higher temperature capability has been matured to the point where engines are being readied for flight, particularly the 100-lb-thrust class engine. This system has powder rhenium (Re) as a substrate material with an iridium (Ir) oxidation-resistant coating. Again, the operating temperature is limited by the coating; however, Ir is capable of long-life operation at 2200 C. For Earth-storable bipropellants, this allows for the virtual elimination of fuel film cooling (some film cooling is used for thermal control of the head end). This has resulted in significant increases in specific impulse performance

  5. Concepts for on-board satellite image registration, volume 1

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.

    1980-01-01

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining those algorithms required to augment the registration procedure and evaluating the technology impact on performing these procedures on-board the satellite.

  6. Virtualizing Super-Computation On-Board Uas

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.

  7. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  8. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help monitor system log messages that report issues with the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2; ibmon2 requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1. Ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters; over certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
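    A minimal sketch of the kind of threshold filter described above (counter names and limits here are hypothetical, not ibmon2's actual configuration):

```python
# Flag InfiniBand-style port counters that exceed their configured limits,
# in the spirit of the ibmon2 filters described above.
THRESHOLDS = {"symbol_errors": 10, "link_downed": 1, "port_rcv_errors": 100}

def filter_counters(counters):
    """Return (counter, value, limit) for every counter over its threshold."""
    return [(name, value, THRESHOLDS[name])
            for name, value in counters.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]]

alerts = filter_counters({"symbol_errors": 42, "link_downed": 0, "port_rcv_errors": 7})
print(alerts)  # [('symbol_errors', 42, 10)]
```

    Anything returned here would be handed to the alerting path (Zenoss/Splunk) for the on-call administrator.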

  9. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    SciTech Connect

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigation into parallel object-oriented (OO) numerics. The basic goal was to research and utilize emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco, and CONVEX. Sandia National Laboratory (Albuquerque, NM) was the technology partner from the government side. COMCO had responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and guidance on OO technologies was Sandia's main contribution to this venture. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. A minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  10. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  11. On-board image compression for the RAE lunar mission

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
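
    The two compression steps named in the abstract, scan-line skipping and run-length coding, can be sketched as below. The binary line format and the 2:1 skip factor are assumptions for the example, not details of the RAE-2 implementation.

    ```python
    def skip_lines(image, keep_every=2):
        """Scan-line skipping: keep only every n-th line of the image."""
        return image[::keep_every]

    def run_length_encode(line):
        """Encode a scan line as (value, run_length) pairs."""
        runs = []
        count = 1
        for prev, cur in zip(line, line[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((line[-1], count))
        return runs

    # A mostly-uniform line (sky with thin antenna booms) compresses well.
    line = [0, 0, 0, 1, 1, 0, 0, 0, 0]
    print(run_length_encode(line))  # [(0, 3), (1, 2), (0, 4)]
    ```

    Run-length coding pays off precisely because the booms occupy few pixels against a uniform background; the flight system additionally made the coding adaptive and added convolutional error protection.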

  12. Optimization of Planck-LFI on-board data handling

    NASA Astrophysics Data System (ADS)

    Maris, M.; Tomasi, M.; Galeotta, S.; Miccolis, M.; Hildebrandt, S.; Frailis, M.; Rohlfs, R.; Morisset, N.; Zacchei, A.; Bersanelli, M.; Binko, P.; Burigana, C.; Butler, R. C.; Cuttaia, F.; Chulani, H.; D'Arcangelo, O.; Fogliani, S.; Franceschi, E.; Gasparo, F.; Gomez, F.; Gregorio, A.; Herreros, J. M.; Leonardi, R.; Leutenegger, P.; Maggio, G.; Maino, D.; Malaspina, M.; Mandolesi, N.; Manzato, P.; Meharga, M.; Meinhold, P.; Mennella, A.; Pasian, F.; Perrotta, F.; Rebolo, R.; Türler, M.; Zonca, A.

    2009-12-01

    To assess stability against 1/f noise, the Low Frequency Instrument (LFI) on board the Planck mission will acquire data at a rate much higher than the data rate allowed by the science telemetry bandwidth of 35.5 kbps. The data are processed by an on-board pipeline, followed on the ground by a decoding and reconstruction step, to reduce the volume of data to a level compatible with the bandwidth while minimizing the loss of information. This paper illustrates the on-board processing of the scientific data used by Planck/LFI to fit the allowed data rate, an intrinsically lossy process which distorts the signal in a manner which depends on a set of five free parameters (Naver, r1, r2, q, Script O) for each of the 44 LFI detectors. The paper quantifies the level of distortion introduced by the on-board processing as a function of these parameters. It describes the method of tuning the on-board processing chain to cope with the limited bandwidth while keeping the signal distortion to a minimum. Tuning is sensitive to the statistics of the signal and has to be constantly adapted during flight. The tuning procedure is based on an optimization algorithm applied to unprocessed and uncompressed raw data provided either by simulations, pre-launch tests, or data taken in flight from LFI operating in a special diagnostic acquisition mode. All the needed optimization steps are performed by an automated tool, OCA2, which simulates the on-board processing, explores the space of possible combinations of parameters, and produces a set of statistical indicators, among them the compression rate Cr and the processing noise epsilonQ. For Planck/LFI it is required that Cr = 2.4 while, as for other systematics, epsilonQ has to be less than 10% of the rms of the instrumental white noise. An analytical model is developed that is able to extract most of the relevant information on the processing errors and the compression rate as a function of the signal statistics and the processing parameters.
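
    The trade-off the OCA2 tuning explores can be sketched for a single parameter, the quantization step q. This is a minimal sketch under assumed conditions (white-noise signal model, illustrative step sizes), not the actual LFI pipeline: a coarser q yields fewer distinct codes (better compression) at the cost of higher processing noise.

    ```python
    import math
    import random

    random.seed(0)
    raw = [random.gauss(0.0, 1.0) for _ in range(10000)]  # white-noise stand-in

    def quantize(samples, q):
        """Quantize each sample with step q."""
        return [round(x / q) for x in samples]

    def processing_noise(samples, q):
        """rms of the quantization error, relative to the signal rms."""
        codes = quantize(samples, q)
        err = [x - c * q for x, c in zip(samples, codes)]
        rms_err = math.sqrt(sum(e * e for e in err) / len(err))
        rms_sig = math.sqrt(sum(x * x for x in samples) / len(samples))
        return rms_err / rms_sig

    # Scan q: larger steps mean fewer distinct codes but more added noise.
    for q in (0.1, 0.5, 1.0):
        print(q, len(set(quantize(raw, q))), round(processing_noise(raw, q), 3))
    ```

    The real tool evaluates the full five-parameter space against the Cr = 2.4 and epsilonQ < 10%-of-white-noise requirements; this sketch only shows why those two indicators pull in opposite directions.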

  13. 47 CFR 80.1179 - On-board repeater limitations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false On-board repeater limitations. 80.1179 Section... SERVICES STATIONS IN THE MARITIME SERVICES Voluntary Radio Installations On-Board Communications § 80.1179 On-board repeater limitations. When an on-board repeater is used, the following limitations must...

  14. High Performance Fortran: An overview

    SciTech Connect

    Zosel, M.E.

    1992-12-23

    The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions to Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  15. On-Board Propulsion System Analysis of High Density Propellants

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    1998-01-01

    The impact of the performance and density of on-board propellants on the science payload mass of Discovery Program class missions is evaluated. A propulsion system dry mass model, anchored on flight-weight system data from the Near Earth Asteroid Rendezvous mission, is used. This model is used to evaluate the performance of liquid oxygen, hydrogen peroxide, hydroxylammonium nitrate, and oxygen difluoride oxidizers with hydrocarbon and metal hydride fuels. Results for the propellants evaluated indicate that state-of-the-art, Earth-storable propellants with high performance rhenium engine technology in both the axial and attitude control systems have performance capabilities that can only be exceeded by the liquid oxygen/hydrazine, liquid oxygen/diborane, and oxygen difluoride/diborane propellant combinations. Potentially lower ground operations costs are the incentive for working with nontoxic propellant combinations.

  16. On-board attitude determination for the Explorer Platform satellite

    NASA Technical Reports Server (NTRS)

    Jayaraman, C.; Class, B.

    1992-01-01

    This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.
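
    The predict/update cycle at the heart of the Kalman filter described above can be illustrated in one dimension. The real estimator is a six-element state processor with an adaptive check for slew maneuvers; all numbers and the constant-state model here are illustrative assumptions.

    ```python
    def kalman_step(x, p, z, q=1e-4, r=0.25):
        """One predict/update cycle: state estimate x, variance p, measurement z."""
        # Predict: constant-state model; process noise q inflates uncertainty.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain k.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        return x, p

    x, p = 0.0, 1.0
    for z in (1.1, 0.9, 1.05, 0.98):  # noisy measurements of a true value near 1.0
        x, p = kalman_step(x, p, z)
    print(round(x, 2))  # estimate converges toward 1.0 as variance p shrinks
    ```

    The adaptive-filter augmentation in the flight algorithm plays the role of rejecting or de-weighting updates when the dynamics (e.g. a slew) violate the filter's model assumptions.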

  17. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  18. The Gas Imaging Spectrometer on Board ASCA

    NASA Astrophysics Data System (ADS)

    Ohashi, Takaya; Ebisawa, Ken; Fukazawa, Yasushi; Hiyoshi, Kenji; Horii, Michihiro; Ikebe, Yasushi; Ikeda, Hitoshi; Inoue, Hajime; Ishida, Manabu; Ishisaki, Yoshitaka; Ishizuka, Toshio; Kamijo, Shunsuke; Kaneda, Hidehiro; Kohmura, Yoshiki; Makishima, Kazuo; Mihara, Tatehiro; Tashiro, Makoto; Murakami, Toshio; Shoumura, Riichirou; Tanaka, Yasuo; Ueda, Yoshihiro; Taguchi, Koji; Tsuru, Takeshi; Takeshima, Toshiaki

    1996-04-01

    The Gas Imaging Spectrometer (GIS) system on board ASCA is described. The experiment consists of 2 units of imaging gas scintillation proportional counters with a sealed-off gas cell equipped with an imaging phototube. The performance is characterized by high X-ray sensitivity (from 0.7 keV to over 10 keV), good energy resolution (7.8% FWHM at 6 keV, following E(-0.5) as a function of X-ray energy E), moderate position resolution (0.5 mm FWHM at 6 keV with E(-0.5) dependence), fast time resolution down to 61 mu s, and an effective area of 50 mm in diameter. The on-board signal processing system and the data transmitted to the ground are also described. The background rejection efficiency of the GIS reaches the level achieved by the non-imaging multi-cell proportional counters.

  19. BASKET on-board software library

    NASA Astrophysics Data System (ADS)

    Luntzer, Armin; Ottensamer, Roland; Kerschbaum, Franz

    2014-07-01

    The University of Vienna is a provider of on-board data processing software with a focus on data compression, as used on board the highly successful Herschel/PACS instrument, as well as in the small BRITE-Constellation fleet of cube-sats. Current contributions are made to CHEOPS, SAFARI and PLATO. The effort was taken to review the various functions developed for Herschel and provide a consolidated software library to facilitate the work for future missions. This library is a shopping basket of algorithms. Its contents are separated into four classes: auxiliary functions (e.g. circular buffers), preprocessing functions (e.g. for calibration), lossless data compression (arithmetic or Rice coding) and lossy reduction steps (ramp fitting etc.). BASKET has all the functionality needed to create an on-board data processing chain. All sources are written in C, supplemented by optimized versions in assembly targeting popular CPU architectures for space applications. BASKET is open source and constantly growing.
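
    Rice coding, one of the lossless schemes the abstract lists, can be sketched briefly. BASKET itself is written in C; the bit-string representation below is a Python illustration of the encoding rule (unary quotient, then a fixed-width remainder), not BASKET's API.

    ```python
    def rice_encode(n, k):
        """Encode a non-negative integer n with Rice parameter k:
        quotient n >> k in unary (ones terminated by a zero),
        then the remainder in k binary digits."""
        q, r = n >> k, n & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b")

    print(rice_encode(9, 2))  # quotient 2 -> "110", remainder 1 -> "01": "11001"
    ```

    Rice codes are popular on board because small residuals (the common case after good prediction) get short codes, and encoding needs only shifts and masks.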

  20. On-Board Training for US Payloads

    NASA Technical Reports Server (NTRS)

    Murphy, Benjamin; Meacham, Steven (Technical Monitor)

    2001-01-01

    The International Space Station (ISS) crew follows a training rotation schedule that puts them in the United States about every three months for a three-month training window. While in the US, the crew receives training on both ISS systems and payloads. Crew time is limited, and system training takes priority over payload training. For most flights, there is sufficient time to train all systems and payloads. As more payloads are flown, training time becomes a more precious resource. Less training time requires payload developers (PDs) to develop alternatives to traditional ground training. To ensure their payloads have sufficient training to achieve their scientific goals, some PDs have developed on-board trainers (OBTs). These OBTs are used to train the crew when no or limited ground time is available. These lessons are also available on-orbit to refresh the crew about their ground training, if it was available. There are many types of OBT media, such as on-board computer based training (OCBT), video/photo lessons, or hardware simulators. The On-Board Training Working Group (OBTWG) and Courseware Development Working Group (CDWG) are responsible for developing the requirements for the different types of media.

  1. The Advanced On-board Processor (AOP)

    NASA Technical Reports Server (NTRS)

    Hartenstein, R. G.; Trevathan, C. E.; Stewart, W. N.

    1971-01-01

    The goal of the Advanced On-Board Processor (AOP) development program is to design, build, and flight qualify a highly reliable, moderately priced, digital computer for application on a variety of spacecraft. Included in this development program is the preparation of a complete support software package which consists of an assembler, simulator, loader, system diagnostic, operational executive, and many useful subroutines. The AOP hardware/software system is an extension of the On-Board Processor (OBP), which was developed for general purpose use on earth orbiting spacecraft with its initial application on board the fourth Orbiting Astronomical Observatory (OAO-C). Although the OBP possesses the significant features required for space application, when operating at 100% duty cycle it is too power-consuming for use on many smaller spacecraft. Computer volume will be minimized by implementing the processor and input/output portions of the machine with large scale integrated circuits. Power consumption will be reduced through the use of plated wire and, in some cases, semiconductor memory elements.

  2. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistance.

  3. First light of SWAP on-board PROBA2

    NASA Astrophysics Data System (ADS)

    Halain, Jean-Philippe; Berghmans, David; Defise, Jean-Marc; Renotte, Etienne; Thibert, Tanguy; Mazy, Emmanuel; Rochus, Pierre; Nicula, Bogdan; de Groof, Anik; Seaton, Dan; Schühle, Udo

    2010-07-01

    The SWAP telescope (Sun Watcher using Active Pixel System detector and Image Processing) is an instrument launched on 2nd November 2009 on board the ESA PROBA2 technological mission. SWAP is a space weather sentinel in a low Earth orbit, providing images at 174 nm of the solar corona. The instrument concept has been adapted to the PROBA2 mini-satellite requirements (compactness, low power electronics and an athermal opto-mechanical system). It also takes advantage of the platform's pointing agility, on-board processor, Packetwire interface and autonomous operations. The key component of SWAP is a radiation resistant CMOS-APS detector combined with onboard compression and data prioritization. SWAP has been developed and qualified at the Centre Spatial de Liège (CSL) and calibrated at the PTB-BESSY facility. After launch, SWAP provided its first images on 14th November 2009 and started its nominal scientific phase in February 2010, after 3 months of platform and payload commissioning. This paper summarizes the latest SWAP developments and qualifications, and presents the first light results.

  4. Towards a demonstrator for autonomous object detection on board Gaia

    NASA Astrophysics Data System (ADS)

    Mignot, Shan

    2010-07-01

    ESA's cornerstone mission Gaia aims at autonomously building a billion-star catalogue by detecting them on board. The scientific and technical requirements make this an engineering challenge. We have devised a prototype to assess achievable performances and assist in sizing the on-board electronics. It is based on a sequence of four tasks: calibrating the CCD data, estimating the sky background, identifying the objects and, finally, characterising them. Although inspired by previous similar studies (APM, Sextractor), this approach has been thoroughly revisited and finely adapted to Gaia. A mixed implementation is proposed which deals with the important data flow and the hard real-time constraints in hardware (FPGA) and entrusts more complex or variable processing to software. This segmentation also corresponds to subdividing the previous operations in pixel-based and object-based domains. Our hardware and software demonstrators show that the scientific specifications can be met, as regards completeness, precision and robustness while, technically speaking, our pipeline, optimised for area and power consumption, allows for selecting target components. Gaia's prime contractor, inspired by these developments, has also elected a mixed architecture, so that our R&D has proven relevant for the forthcoming generation of satellites.

  5. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  6. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  7. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  8. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  9. 32 CFR 700.844 - Marriages on board.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Marriages on board. 700.844 Section 700.844... Commanding Officers Afloat § 700.844 Marriages on board. The commanding officer shall not perform a marriage ceremony on board his or her ship or aircraft. He or she shall not permit a marriage ceremony to...

  10. Flight experiences on board Space Station Mir

    NASA Astrophysics Data System (ADS)

    Viehboeck, Franz

    1992-07-01

    A survey of the training in the cosmonaut center 'Yuri Gagarin' near Moscow (U.S.S.R.) and of the preparation for the joint Soviet-Austrian space flight from 2-10 Oct. 1991 is given. The flight in Soyuz-TM 13 with the most important systems, as well as a short description of the Space Station Mir, the life on board the Station with the basic systems, like energy supply, life support, radio, and television are described. The possibilities of exploitation of the Space Station Mir and an outlook to the future is given.

  11. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize the impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  12. High-performance membrane chromatography.

    PubMed

    Belenkii, B G; Malt'sev, V G

    1995-02-01

    In gradient chromatography for proteins migrating along the chromatographic column, the critical distance X0 has been shown to exist at which the separation of zones is at a maximum and band spreading is at a minimum. With steep gradients and small elution velocity, the column length may be reduced to the level of membrane thickness--about one millimeter. The peculiarities of this novel separation method for proteins, high-performance membrane chromatography (HPMC), are discussed and stepwise elution is shown to be especially effective. HPMC combines the advantages of membrane technology and high-performance liquid chromatography, and avoids their drawbacks. PMID:7727132

  13. High Performance Photovoltaic Project Overview

    SciTech Connect

    Symko-Davies, M.; McConnell, R.

    2005-01-01

    The High-Performance Photovoltaic (HiPerf PV) Project was initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. In this paper, we describe the recent research accomplishments in the in-house directed efforts and the research efforts under way in the subcontracted area.

  14. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt meters at an operating temperature from 0 to 100 C.

  15. Concepts for on board satellite image registration. Volume 4: Impact of data set selection on satellite on board signal processing

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Aanstoos, J. V.; Snyder, W. E.

    1982-01-01

    The NASA NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. This volume addresses the impact of data set selection on the data formatting required for efficient telemetering of the acquired satellite sensor data. More specifically, the FILE algorithm developed by Martin-Marietta provides a means for determining those pixels in the data stream whose removal effects an improvement in the achievable system throughput. It will be seen that, owing to the lack of statistical stationarity in the spatial distribution of cloud cover, periods exist where data acquisition rates exceed the throughput capability. The study therefore addresses various approaches to data compression and truncation as applicable to this sensor mission.

  16. [Is there a physician on board?].

    PubMed

    Andersen, H T

    1998-09-30

    Physicians responding to emergency calls on board airliners in intercontinental traffic may not be aware of certain legal complications which may arise. For instance, the medical practitioner may hold a license valid in one country, the air carrier may be registered in another, and the patient may be a third state national. Legislation varies between nations, as do court decisions. Physicians may not be aware of the laws and regulations which apply or of the subtle differences between terms and interpretations used in formal language. This article contains a scenario description from a commercial air liner in intercontinental transit carrying a patient unknown to the physician who responds to a call for medical assistance. The main considerations to be made, the more likely diagnoses and various strategies for immediate interventions are reviewed. Likewise, appraisal and use of medical equipment on board are discussed, as are issues concerning responsibility and liability when equipment is used in supposedly "trained hands". Main themes in the current international medico-legal debate are considered with emphasis on the "Good Samaritan Principle", the responsibility of commercial air carriers, and telemedicine with insurance against law suits. The article concludes with some practical advice to the travelling medical community. PMID:9820009

  17. [Is there a physician on board?].

    PubMed

    Andersen, H T

    1998-11-01

    Physicians responding to emergency calls on board airliners in intercontinental traffic may not be aware of certain legal complications which may arise. For instance, the medical practitioner may hold a license valid in one country, the air carrier may be registered in another, and the patient may be a third state national. Legislation varies between nations, as do court decisions. Physicians may be aware neither of the laws and regulations which apply nor the subtle differences between terms and interpretations used in formal language. This article contains a scenario description from a commercial air liner in intercontinental transit carrying a patient unknown to the physician who responds to a call for medical assistance. The main considerations to be made, the more likely diagnoses and various strategies for immediate interventions are reviewed. Likewise, appraisal and use of medical equipment on board are discussed, as are issues concerning responsibility and liability when equipment is used in supposedly "trained hands". Main themes in the current international medico-legal debate are considered with emphasis on the "Good Samaritan Principle", the responsibility of commercial air carriers, and telemedicine with insurance against law suits. The article concludes with some practical advice to the travelling medical community. PMID:9835766

  18. On-Board Entry Trajectory Planning Expanded to Sub-orbital Flight

    NASA Technical Reports Server (NTRS)

    Lu, Ping; Shen, Zuojun

    2003-01-01

    A methodology for on-board planning of sub-orbital entry trajectories is developed. The algorithm is able to generate, in a time frame consistent with the on-board environment, a three-degree-of-freedom (3DOF) feasible entry trajectory, given the boundary conditions and vehicle modeling. This trajectory is then tracked by feedback guidance laws which issue guidance commands. The current trajectory planning algorithm complements the recently developed method for on-board 3DOF entry trajectory generation for orbital missions, and provides full-envelope autonomous adaptive entry guidance capability. The algorithm is validated and verified by extensive high fidelity simulations using a sub-orbital reusable launch vehicle model and difficult mission scenarios including failures and aborts.

  19. Panelized high performance multilayer insulation

    NASA Technical Reports Server (NTRS)

    Burkley, R. A.; Shriver, C. B.; Stuckey, J. M.

    1968-01-01

    Multilayer insulation coverings with low conductivity foam spacers are interleaved with quarter mil aluminized polymer film radiation shields to cover flight type liquid hydrogen tankage of space vehicles with a removable, structurally compatible, lightweight, high performance cryogenic insulation capable of surviving extended space mission environments.

  20. High performance rolling element bearing

    NASA Technical Reports Server (NTRS)

    Bursey, Jr., Roger W. (Inventor); Olinger, Jr., John B. (Inventor); Owen, Samuel S. (Inventor); Poole, William E. (Inventor); Haluck, David A. (Inventor)

    1993-01-01

    A high performance rolling element bearing (5) which is particularly suitable for use in a cryogenically cooled environment, comprises a composite cage (45) formed from glass fibers disposed in a solid lubricant matrix of a fluorocarbon polymer. The cage includes inserts (50) formed from a mixture of a soft metal and a solid lubricant such as a fluorocarbon polymer.

  1. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors, the growth and field-emission properties of carbon nanotubes and semiconducting nanowires, high performance thermoelectric materials, and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  2. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications, or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench-top hardware and a commercial microsurgery system. PMID:15458092

  3. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  4. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount (10-15%) of hydrocarbon binder, (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  5. New, high performance rotating parachute

    SciTech Connect

    Pepper, W.B. Jr.

    1983-01-01

    A new rotating parachute has been designed primarily for recovery of high performance reentry vehicles. Design and development/testing results are presented from low-speed wind tunnel testing, free-flight deployments at transonic speeds and tests in a supersonic wind tunnel at Mach 2.0. Drag coefficients of 1.15 based on the 2-ft diameter of the rotor have been measured in the wind tunnel. Stability of the rotor is excellent.

  6. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we are presenting this report describing our findings, along with an associated spreadsheet outlining current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available for using these tools and technologies in software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included at the end of this paper is a table of the discussed development tools and their operational environments.

  7. High-Performance Thermoelectric Semiconductors

    NASA Technical Reports Server (NTRS)

    Fleurial, Jean-Pierre; Caillat, Thierry; Borshchevsky, Alexander

    1994-01-01

    Figures of merit almost double those of current state-of-the-art thermoelectric materials. IrSb3 is a semiconductor found to exhibit exceptional thermoelectric properties. CoSb3 and RhSb3 have the same skutterudite crystallographic structure as IrSb3, and exhibit exceptional transport properties expected to contribute to high thermoelectric performance. These three compounds form solid solutions. This combination of properties offers potential for development of new high-performance thermoelectric materials for more efficient thermoelectric power generators, coolers, and detectors.
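The thermoelectric figure of merit referred to above is conventionally the dimensionless quantity ZT = S²σT/κ (Seebeck coefficient S, electrical conductivity σ, absolute temperature T, thermal conductivity κ). A minimal sketch, with property values that are hypothetical but of a plausible order of magnitude for a good thermoelectric:

```python
def figure_of_merit(seebeck_v_per_k, elec_cond_s_per_m, thermal_cond_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k ** 2 * elec_cond_s_per_m * temp_k / thermal_cond_w_per_mk

# Hypothetical values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K.
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)  # ~0.8
```

Doubling ZT, as the abstract claims, requires raising S²σ while suppressing κ, which is what the skutterudite solid solutions target.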

  8. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  9. Reference Architecture for High Dependability On-Board Computers

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Esper, Alexandre; Zandin, Johan; Barbosa, Ricardo; Monteleone, Claudio

    2014-08-01

    The industrial process in the area of on-board computers is characterized by small production series of on-board computer (hardware and software) configuration items with little recurrence at unit or set level (e.g. computer equipment unit, set of interconnected redundant units). These small production series result in a reduced amount of statistical data related to dependability, which influences the way on-board computers are specified, designed and verified. In the context of the ESA harmonization policy for the deployment of enhanced and homogeneous industrial processes in the area of avionics embedded systems and on-board computers for the space industry, this study aimed at rationalizing the initiation phase of the development or procurement of on-board computers and at improving dependability assurance. This aim was achieved by establishing generic requirements for the procurement or development of on-board computers, with a focus on well-defined reliability, availability, and maintainability requirements, as well as a generic methodology for planning, predicting and assessing the dependability of on-board computer hardware and software throughout their life cycle. The study also provides guidelines for producing evidence material and arguments to support dependability assurance of on-board computer hardware and software throughout the complete life cycle, including an assessment of feasibility aspects of the dependability assurance process and of how the use of a computer-aided environment can contribute to on-board computer dependability assurance.

  10. Toward High-Performance Organizations.

    ERIC Educational Resources Information Center

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  11. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  12. Vibration on board and health effects.

    PubMed

    Jensen, Anker; Jepsen, Jørgen Riis

    2014-01-01

    There is only limited knowledge of the exposure to vibrations of ships' crews and their risk of vibration-induced health effects. Exposure to hand-arm vibrations from the use of vibrating tools at sea does not differ from that in the land-based trades. However, in contrast to most other workplaces, seafarers are also exposed to vibrations to the feet when standing on vibrating surfaces on board. Anecdotal reports have related the development of "white feet" to local exposure to vibration, e.g. in mining, but this connection has not been investigated in the maritime setting. As known from studies of the health consequences of whole-body vibrations in land transportation, such exposure at sea may affect ships' passengers and crews. While the relation of back disorders to high levels of whole-body vibration has been demonstrated among e.g. tractor drivers, there is no reported epidemiological evidence for such a relation among seafarers, except for fishermen, who, however, are also exposed to additional recognised physical risk factors at work. The assessment and reduction of vibrations by naval architects relates to the technical implications of this impact for the ships' construction, but has limited value for the estimation of health risks because it expresses the vibration intensity differently than is done in a medical context. PMID:25231326

  13. Summer School on Board an Arctic Icebreaker

    NASA Astrophysics Data System (ADS)

    Alexeev, Vladimir; Dmitrenko, Igor; Fortier, Louis; Repina, Irina; Mokhov, Igor

    2006-01-01

    It has been reported widely that the climate in the Arctic is changing rapidly, maybe faster there than anywhere else. In addition, northern sea ice is shrinking, especially in the coastal seas of the Russian Arctic, such as the Laptev Sea. Since 2002, the International Arctic Research Center (IARC), based at the University of Alaska Fairbanks, has been recording long-term oceanographic observations in this region through the Nansen and Amundsen Basins Observation System (NABOS) project. In 2005, the annual NABOS expedition was conducted in parallel with a summer school on board the icebreaker Kapitan Dranitsyn. This was the third IARC-supported summer school. Two previous summer schools were held in Fairbanks. A total of 24 university students and early career scientists had been chosen, out of about 140 summer school applicants: six from the United States, five from Russia, five from Canada, two from Norway, and one each from Belgium, Denmark, France, Germany, New Zealand, and Sweden. Vladimir Alexeev of IARC, the author of this meeting report, served as the director of the school; Louis Fortier of Laval University (Quebec City, Canada) was co-director.

  14. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.
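HPF itself is a set of Fortran directives (e.g. `!HPF$ DISTRIBUTE A(BLOCK)`) that tell the compiler how array elements map onto processors. As a language-neutral illustration of the standard BLOCK mapping (contiguous chunks of size ceil(n/p)), here is a small Python sketch; the function names are our own, for illustration only:

```python
def block_owner(i, n, p):
    """Processor owning element i of an n-element array BLOCK-distributed over p processors.

    A BLOCK distribution assigns contiguous chunks of size ceil(n/p) to successive ranks."""
    block = -(-n // p)   # integer ceil(n / p)
    return i // block

def local_indices(rank, n, p):
    """Global indices held locally by a given processor rank under BLOCK distribution."""
    block = -(-n // p)
    return range(rank * block, min((rank + 1) * block, n))

# 10 elements over 4 processors: blocks of 3, with the last processor holding the remainder.
owners = [block_owner(i, 10, 4) for i in range(10)]
```

The compiler/runtime uses exactly this kind of ownership computation to turn data-parallel statements into explicit message-passing code.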

  15. High performance stepper motors for space mechanisms

    NASA Astrophysics Data System (ADS)

    Sega, Patrick; Estevenon, Christine

    1995-05-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.
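The permeance-derivative relation mentioned in the abstract can be illustrated with a heavily simplified model. For a winding whose inductance L(θ) varies with rotor position (inductance being proportional to permeance), the reluctance torque is T = ½ i² dL/dθ; this ignores the permanent-magnet contribution of a real hybrid stepper, and the inductance profile below is hypothetical:

```python
import math

L0, L1, N_TEETH = 10e-3, 2e-3, 50   # hypothetical mean/ripple inductance (H) and rotor tooth count

def inductance(theta):
    """Position-dependent winding inductance (proportional to the toothed-structure permeance)."""
    return L0 + L1 * math.cos(N_TEETH * theta)

def reluctance_torque(i, theta, h=1e-6):
    """T = 0.5 * i^2 * dL/dtheta, with dL/dtheta evaluated by central difference."""
    dL = (inductance(theta + h) - inductance(theta - h)) / (2.0 * h)
    return 0.5 * i * i * dL

# Numerical derivative matches the analytic one, dL/dtheta = -L1 * N * sin(N * theta):
t_num = reluctance_torque(1.0, 0.01)
t_ana = -0.5 * 1.0 ** 2 * L1 * N_TEETH * math.sin(N_TEETH * 0.01)
```

In the paper's workflow the permeance itself comes from finite element computation, and its derivative is evaluated numerically in just this fashion.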

  16. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.

  17. High performance storable propellant resistojet

    NASA Technical Reports Server (NTRS)

    Vaughan, C. E.

    1992-01-01

    From 1965 until 1985 resistojets were used for a limited number of space missions. Capability increased in stages from an initial application using a 90 W gN2 thruster operating at 123 sec specific impulse (Isp) to a 830 W N2H4 thruster operating at 305 sec Isp. Prior to 1985 fewer than 100 resistojets were known to have been deployed on spacecraft. Building on this base NASA embarked upon the High Performance Storable Propellant Resistojet (HPSPR) program to significantly advance the resistojet state-of-the-art. Higher performance thrusters promised to increase the market demand for resistojets and enable space missions requiring higher performance. During the program three resistojets were fabricated and tested. High temperature wire and coupon materials tests were completed. A life test was conducted on an advanced gas generator.

  18. High performance magnetically controllable microturbines.

    PubMed

    Tian, Ye; Zhang, Yong-Lai; Ku, Jin-Feng; He, Yan; Xu, Bin-Bin; Chen, Qi-Dai; Xia, Hong; Sun, Hong-Bo

    2010-11-01

    Reported in this paper is the two-photon photopolymerization (TPP) fabrication of magnetic microturbines with high surface smoothness for microfluid mixing. As the key component of the magnetic photoresist, Fe(3)O(4) nanoparticles were carefully screened for homogeneous doping. In this work, oleic-acid-stabilized Fe(3)O(4) nanoparticles synthesized via high-temperature organic-phase decomposition of an iron precursor show evident advantages in particle morphology. After modification with propoxylated trimethylolpropane triacrylate (PO(3)-TMPTA, a kind of cross-linker), the magnetic nanoparticles were homogeneously doped into an acrylate-based photoresist for TPP fabrication of microstructures. Finally, a magnetic microturbine was successfully fabricated as an active mixing device for remote control of microfluid blending. The development of high quality magnetic photoresists would lead to high performance magnetically controllable microdevices for lab-on-a-chip (LOC) applications. PMID:20721411

  19. FPGA Based High Performance Computing

    SciTech Connect

    Bennett, Dave; Mason, Jeff; Sundararajan, Prasanna; Dellinger, Erik; Putnam, Andrew; Storaasli, Olaf O

    2008-01-01

    Current high performance computing (HPC) applications are found in many consumer, industrial and research fields. From web searches to auto crash simulations to weather predictions, these applications demand large amounts of power for the compute farms and supercomputers required to run them. The demand for more and faster computation continues to increase, along with an even sharper increase in the cost of the power required to operate and cool these installations. The ability of standard processor-based systems to address these needs has declined, in both speed of computation and power consumption, over the past few years. This paper presents a new method of computation based upon programmable logic, as represented by Field Programmable Gate Arrays (FPGAs), that addresses these needs in a manner requiring only minimal changes to the current software design environment.

  20. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  1. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
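The paper's parallel "information-based" detectors and iterative z algorithm are not spelled out in the abstract. As a hedged, minimal stand-in for the general idea of statistical outlier flagging (not the authors' method), a plain single-pass z-score detector looks like this:

```python
import math

def z_score_outliers(values, threshold=3.0):
    """Return indices of values whose |z-score| exceeds threshold (population mean/std)."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in values) / n)
    if std == 0.0:
        return []          # all values identical: nothing can be an outlier
    return [i for i, x in enumerate(values) if abs(x - mean) / std > threshold]

# A run of ordinary event sizes plus one anomalously large one: only index 20 is flagged.
events = [10.0] * 20 + [100.0]
flagged = z_score_outliers(events)
```

A production forensic detector would partition the event stream across workers and apply such a test per partition and resolution level, which is where the HPC aspect enters.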

  2. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described, comprising: an AGV for transporting and transferring work pieces; a control computer on board the AGV; a process machine for working on work pieces; a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm, wherein the robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location; locating beacon means mounted on the process machine, wherein the locating beacon means are for locating on the process machine a place to pick up and set down work pieces; and vision means, including a camera fixed in the coordinate system of the gripper means, attached to the robot arm near the gripper, such that the space between said gripper fingers lies within the vision field of said vision means, for detecting the locating beacon means, wherein the vision means provides the control computer visual information relating to the location of the locating beacon means, from which information the computer is able to calculate the pick-up and set-down place on the process machine. Said place for picking up and setting down work pieces on the process machine is a nest means, which further serves the function of holding a work piece in place while it is worked on; the robot system further comprises nest beacon means located in the nest means, detectable by the vision means, for providing information to the control computer as to whether or not a work piece is present in the nest means.

  3. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  4. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  5. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile, air-to-base... 47 Telecommunication 5 2011-10-01 2011-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a)...

  6. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... sections of this part with respect to operation on specific frequencies, mobile stations first...

  7. 40 CFR 86.1806-04 - On-board diagnostics.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (i) SAE J1850 “Class B Data Communication Network Interface,” (Revised, May 2001) shall be used as the on-board to off-board communications protocol. All emission related messages sent to the scan tool... J1850 as the on-board to off-board communications protocol. (ii) ISO 14230-4:2000(E) “Road...

  8. 14 CFR 91.702 - Persons on board.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Foreign Aircraft Operations and Operations of U.S.-Registered Civil Aircraft Outside of the United States; and Rules Governing Persons on Board Such Aircraft § 91.702 Persons on board. Section 91.11 of this part (Prohibitions on...

  9. 14 CFR 91.702 - Persons on board.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Foreign Aircraft Operations and Operations of U.S.-Registered Civil Aircraft Outside of the United States; and Rules Governing Persons on Board Such Aircraft § 91.702 Persons on board. Section 91.11 of this part (Prohibitions on...

  10. 14 CFR 91.702 - Persons on board.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Foreign Aircraft Operations and Operations of U.S.-Registered Civil Aircraft Outside of the United States; and Rules Governing Persons on Board Such Aircraft § 91.702 Persons on board. Section 91.11 of this part (Prohibitions on...

  11. 14 CFR 91.702 - Persons on board.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Foreign Aircraft Operations and Operations of U.S.-Registered Civil Aircraft Outside of the United States; and Rules Governing Persons on Board Such Aircraft § 91.702 Persons on board. Section 91.11 of this part (Prohibitions on...

  12. 14 CFR 91.702 - Persons on board.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Foreign Aircraft Operations and Operations of U.S.-Registered Civil Aircraft Outside of the United States; and Rules Governing Persons on Board Such Aircraft § 91.702 Persons on board. Section 91.11 of this part (Prohibitions on...

  13. 47 CFR 90.423 - Operation on board aircraft.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Operation on board aircraft. 90.423 Section 90... PRIVATE LAND MOBILE RADIO SERVICES Operating Requirements § 90.423 Operation on board aircraft. (a) Except... after September 14, 1973, under this part may be operated aboard aircraft for air-to-mobile,...

  14. File-Based Operations and CFDP On-Board Implementation

    NASA Astrophysics Data System (ADS)

    Herrera Alzu, Ignacio; Peran Mazon, Francisco; Gonzalo Palomo, Alfonso

    2014-08-01

    For several years now, there has been increasing interest among the space agencies, ESA in particular, in deploying File-based Operations (FbO) for Space missions. This aims at simplifying, from the Ground Segment's perspective, access to the Space Segment and ultimately the overall operations. This is particularly important for deep Space missions, where the Ground-Space interaction can become too complex to handle just with traditional packet-based services. The use of a robust protocol for transferring files between Ground and Space is key to the FbO approach, and the CCSDS File Delivery Protocol (CFDP) is nowadays the main candidate for doing this job. Both Ground and Space Segments need to be adapted for FbO, with the Ground Segment being naturally closer to this concept. This paper focuses on the Space Segment. The main implications related to FbO/CFDP, the possible on-board implementations and the foreseen operations are described. The case of Euclid, the first ESA mission to be file-based operated with CFDP, is also analysed.

  15. On-board identification of tyre cornering stiffness using dual Kalman filter and GPS

    NASA Astrophysics Data System (ADS)

    Lee, Seungyong; Nakano, Kimihiko; Ohori, Masanori

    2015-04-01

    Cornering stiffness is an important vehicle parameter for steering control. Accurate vehicle parameters are essential for high-performance vehicle control, because the control is significantly affected by variations in those parameters. In this study, a novel identification method using a dual Kalman filter algorithm and a GPS (global positioning system) measurement system is proposed to estimate the cornering stiffness on-board. The performance of the identification method is examined with experiments, and the estimation results show that it is effective on both flat roads and banked curves.
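
    The abstract does not detail the dual Kalman filter, but the core idea of estimating a slowly varying parameter from noisy measurements can be sketched with a single-state Kalman filter. Everything below (the measurement model y = C·alpha, the noise variances, the stiffness value) is a hypothetical illustration, not the authors' setup.

    ```python
    # Minimal scalar Kalman filter sketch: estimate a cornering-stiffness
    # coefficient C from measurements y_k = C * alpha_k, where alpha_k is a
    # known slip angle. Illustrative values only, not the paper's dual-KF.

    def kalman_estimate(alphas, ys, q=1e-4, r=0.25, x0=0.0, p0=1e6):
        """Return successive estimates of C given slip angles and forces."""
        x, p = x0, p0              # state estimate and its variance
        estimates = []
        for a, y in zip(alphas, ys):
            p += q                 # predict: C modeled as a slow random walk
            h = a                  # measurement model: y = C * a
            s = h * p * h + r      # innovation variance
            k = p * h / s          # Kalman gain
            x += k * (y - h * x)   # update estimate with the innovation
            p *= (1.0 - k * h)     # update variance
            estimates.append(x)
        return estimates

    alphas = [0.01, 0.02, 0.015, 0.025, 0.02] * 20   # slip angles [rad]
    true_c = 50000.0                                 # hypothetical stiffness
    ys = [true_c * a for a in alphas]                # noise-free for brevity
    est = kalman_estimate(alphas, ys)
    print(f"{est[-1]:.1f}")
    ```

    With noise-free data the estimate converges to the true coefficient; in practice the measurement noise variance `r` would be tuned to the sensors.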

  16. High-performance composite chocolate

    NASA Astrophysics Data System (ADS)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  17. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  18. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single axis mechanism can switch the antenna's optical axis by +/-1.5' within 10 ms or +/-5' within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The light weight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving coil motors are mounted on flex pivots and the motor magnets counter rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
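
    A digital PID position loop of the kind described can be sketched generically. The gains, sample rate, and first-order plant below are arbitrary illustrations for a step response, not the ALMA controller's actual parameters.

    ```python
    # Generic discrete PID position loop, as used conceptually for a
    # chopping-mirror servo. Gains, sample rate, and plant model are
    # illustrative only.

    def pid_step(err, state, kp, ki, kd, dt):
        """One PID update; state = (integral, previous error)."""
        integral, prev_err = state
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        return u, (integral, err)

    # Drive a crude first-order plant toward a unit step setpoint.
    dt, pos, state = 0.001, 0.0, (0.0, 0.0)
    for _ in range(2000):                # simulate 2 s
        u, state = pid_step(1.0 - pos, state, kp=8.0, ki=20.0, kd=0.05, dt=dt)
        pos += (u - pos) * dt            # toy plant: pos' = u - pos
    print(f"{pos:.3f}")
    ```

    The integral term removes the steady-state error; in a real servo the loop would be tuned against the measured mechanical resonance rather than a toy plant.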

  19. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance.

  20. High Performance Solution Processable TFTs

    NASA Astrophysics Data System (ADS)

    Gundlach, David

    2008-03-01

    Organic-based electronic devices offer the potential to significantly impact the functionality and pervasiveness of large-area electronics. We report on soluble acene-based organic thin film transistors (OTFTs) where the microstructure of as-cast films can be precisely controlled via interfacial chemistry. Chemically tailoring the source/drain contact interface is a novel route to self-patterning of soluble small molecule organic semiconductors and enables the growth of highly ordered regions along opposing contact edges which extend into the transistor channel. The unique film forming properties of soluble fluorinated anthradithiophenes allows us to fabricate high performance OTFTs, OTFT circuits, and to deterministically study the influence of the film microstructure on the electrical characteristics of devices. Most recently we have grown single crystals of soluble fluorinated anthradithiophenes by vapor transport method allowing us to probe deeper into their intrinsic properties and determine the potential and limitations of this promising family of oligomers for use in organic-based electronic devices. Co-Authors: O. D. Jurchescu^1,4, B. H. Hamadani^1, S. K. Park^4, D. A. Mourey^4, S. Subramanian^5, A. J. Moad^2, R. J. Kline^3, L. C. Teague^2, J. G. Kushmerick^2, L. J. Richter^2, T. N. Jackson^4, and J. E. Anthony^5 ^1Semiconductor Electronics Division, ^2Surface and Microanalysis Science Division, ^3Polymers Division, National Institute of Standards and Technology, Gaithersburg, MD 20899 ^4Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802 ^5Department of Chemistry, University of Kentucky, Lexington, KY 40506-0055

  1. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
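
    A Granule record of the kind described (file, parent resource, identifier, access URL) is plain XML and can be generated with the standard library. The element names and identifiers below are simplified stand-ins and do not reproduce the authoritative SPASE schema.

    ```python
    # Sketch of generating a Granule-style XML metadata record.
    # Element names and IDs are illustrative, not the actual SPASE schema.
    import xml.etree.ElementTree as ET

    def make_granule(resource_id, parent_id, url):
        granule = ET.Element("Granule")
        ET.SubElement(granule, "ResourceID").text = resource_id
        ET.SubElement(granule, "ParentID").text = parent_id  # high-level product
        ET.SubElement(granule, "Source").text = url          # access URL
        return granule

    g = make_granule("spase://Example/Granule/SampleProduct/20150101",
                     "spase://Example/NumericalData/SampleProduct",
                     "https://example.gov/data/sample_20150101.cdf")
    xml_text = ET.tostring(g, encoding="unicode")
    print(xml_text)
    ```

    Run nightly over a file listing, a loop around `make_granule` would regenerate one such record per new or modified file, which matches the update workflow the abstract describes.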

  2. Development and Validation of the On-Board Control Procedures Subsystem for the Herschel and Planck Satellites

    NASA Astrophysics Data System (ADS)

    Ferraguto, M.; Wittrock, T.; Barrenscheen, M.; Paakko, M.; Sipinen, V.; Pelttari, L.

    2009-05-01

    The On-Board Control Procedures (OBCP) subsystem of the Herschel and Planck satellites' Central Data Management Unit (CDMU) Application SW (ASW) provides the means to control the spacecraft through small script-like programs written in a specific language called On-board Command Language (OCL). The implementation for the Herschel and Planck satellites is an adaptation of previous experience with instruments such as Rosetta/OSIRIS, Venus Express/VMC and Dawn/FC, and it had already been adapted successfully for the GOCE satellite. A thorough validation campaign has been conducted to qualify the H&P SW implementation for flight. The purpose of having on-board control procedures is to allow ground operators to prepare and up-link complex operation sequences (more complex than simple sequences of mission time-line telecommands) to be executed on-board during the mission operational phase. This is possible because the OBCPs run in a quite separate subsystem, so the creation of a new procedure does not require modification, uplink and re-validation of the whole on-board software. The OBCP subsystem allows these control procedures to be developed, tested on ground, and executed on the spacecraft.

  3. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  4. High-Performance Schools Make Cents.

    ERIC Educational Resources Information Center

    Nielsen-Palacios, Christian

    2003-01-01

    Describes the educational benefits of high-performance schools, buildings that are efficient, healthy, safe, and easy to operate and maintain. Also briefly describes how to create a high-performance school drawn from volume I (Planning) of the three-volume Collaborative for High Performance Schools (CHPS) "Best Practices Manual." (For more…

  5. XMM instrument on-board software maintenance concept

    NASA Technical Reports Server (NTRS)

    Peccia, N.; Giannini, F.

    1994-01-01

    While the pre-launch responsibility for the production, validation and maintenance of instrument on-board software traditionally lies with the experimenter, post-launch maintenance has been the subject of ad hoc arrangements with the responsibility shared to different extents between the experimenter, ESTEC and ESOC. This paper summarizes the overall design and development of the instruments' on-board software for the XMM satellite, and describes the concept adopted for the post-launch maintenance of such software. The paper also outlines the on-board software maintenance and validation facilities and the advantages expected from the proposed strategy. Conclusions with respect to the adequacy of this approach are presented, as well as recommendations for future instrument on-board software developments.

  6. On-board congestion control for satellite packet switching networks

    NASA Technical Reports Server (NTRS)

    Chu, Pong P.

    1991-01-01

    It is desirable to incorporate packet switching capability on-board for future communication satellites. Because of the statistical nature of packet communication, incoming traffic fluctuates and may cause congestion. Thus, it is necessary to incorporate a congestion control mechanism as part of the on-board processing to smooth and regulate the bursty traffic. Although there are extensive studies on congestion control for both baseband and broadband terrestrial networks, these schemes are not feasible for space-based switching networks because of the unique characteristics of the satellite link. Here, we propose a new congestion control method for on-board satellite packet switching. This scheme takes into consideration the long propagation delay of the satellite link and takes advantage of the satellite's broadcasting capability. It divides the control between the ground terminals and the satellite, but assigns the primary responsibility to the ground terminals and requires only minimal hardware resources on board the satellite.

  7. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  8. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  9. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  10. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  11. 20 CFR 341.7 - Liability on Board's claim.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... INSURANCE ACT STATUTORY LIEN WHERE SICKNESS BENEFITS PAID § 341.7 Liability on Board's claim. (a) A person or company paying any sum or damages to an employee who has received sickness benefits from the...

  12. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    The feasibility of an on-board earth resources data processor launched during the 1980-1990 time frame was investigated. Projected user applications were studied to define the data formats and the information extraction algorithms that the processor must execute. Based on these constraints, and the constraints imposed by the available technology, on-board processor systems were designed and their feasibility evaluated. Conclusions and recommendations are given.

  13. F-8 DFBW on-board electronics

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The Apollo hardware jammed into the F-8C. The computer is partially visible in the avionics bay at the top of the fuselage behind the cockpit. Note the display and keyboard unit in the gun bay. To carry the computers and other equipment, the F-8 DFBW team removed the aircraft's guns and ammunition boxes. The F-8 Digital Fly-By-Wire (DFBW) flight research project validated the principal concepts of all-electric flight control systems now used on nearly all modern high-performance aircraft and on military and civilian transports. The first flight of the 13-year project was on May 25, 1972, with research pilot Gary E. Krier at the controls of a modified F-8C Crusader that served as the testbed for the fly-by-wire technologies. The project was a joint effort between the NASA Flight Research Center, Edwards, California, (now the Dryden Flight Research Center) and Langley Research Center. It included a total of 211 flights. The last flight was December 16, 1985, with Dryden research pilot Ed Schneider at the controls. The F-8 DFBW system was the forerunner of current fly-by-wire systems used in the space shuttles and on today's military and civil aircraft to make them safer, more maneuverable, and more efficient. Electronic fly-by-wire systems replaced older hydraulic control systems, freeing designers to design aircraft with reduced in-flight stability. Fly-by-wire systems are safer because of their redundancies. They are more maneuverable because computers can command more frequent adjustments than a human pilot can. For airliners, computerized control ensures a smoother ride than a human pilot alone can provide. Digital-fly-by-wire is more efficient because it is lighter and takes up less space than the hydraulic systems it replaced. This either reduces the fuel required to fly or increases the number of passengers or pounds of cargo the aircraft can carry. Digital fly-by-wire is currently used in a variety of aircraft ranging from F/A-18 fighters to the Boeing 777

  14. High performance dosimetry calculations using adapted ray-tracing

    NASA Astrophysics Data System (ADS)

    Perrotte, Lancelot; Saupin, Guillaume

    2010-11-01

    When preparing interventions on nuclear sites, it is useful to study different scenarios to identify the most appropriate one for the operator(s). Using virtual reality tools is a good way to simulate the potential scenarios. Very efficient computation times thus help the user study different complex scenarios by immediately evaluating the impact of any changes. In the field of radiation protection, people often use computation codes based on the straight line attenuation method with build-up factors. As for other approaches, geometrical computations (finding all the interactions between radiation rays and the scene objects) remain the bottleneck of the simulation. We present in this paper several optimizations used to speed up these geometrical computations, using innovative GPU ray-tracing algorithms. For instance, we manage to compute every intersection between 600 000 rays and a huge 3D industrial scene in a fraction of a second. Moreover, our algorithm works the same way for both static and dynamic scenes, allowing easier study of complex intervention scenarios (where everything moves: the operator(s), the shielding objects, the radiation sources).
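
    Once the ray-scene intersections are known, the straight line attenuation method reduces, per ray, to an exponential attenuation through each crossed shield, scaled by a build-up factor and the inverse-square geometry term. The sketch below is schematic; the source strength, attenuation coefficient, and build-up value are made-up illustrations, not values from the paper.

    ```python
    # Schematic straight-line attenuation along one ray: the uncollided
    # contribution falls off as 1/(4*pi*d^2), is attenuated through each
    # shield segment the ray crosses, and is scaled by a build-up factor B.
    import math

    def ray_dose_rate(source_strength, distance_m, segments, buildup=1.0):
        """segments: list of (mu_per_m, thickness_m) crossed by the ray."""
        total_mu_d = sum(mu * d for mu, d in segments)          # optical depth
        geometric = source_strength / (4.0 * math.pi * distance_m ** 2)
        return buildup * geometric * math.exp(-total_mu_d)

    # One ray through 5 cm of a shield with mu = 50 /m, detector at 2 m:
    rate = ray_dose_rate(1.0e9, 2.0, [(50.0, 0.05)], buildup=1.2)
    print(f"{rate:.3g}")
    ```

    The per-ray cost is dominated by finding the `(mu, thickness)` segments, which is exactly the intersection workload the paper moves onto the GPU.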

  15. Advanced Hybrid On-Board Science Data Processor - SpaceCube 2.0

    NASA Technical Reports Server (NTRS)

    Flatley, Tom

    2010-01-01

    Topics include an overview of on-board science data processing, software upset mitigation, on-board data reduction, on-board products, the HyspIRI demonstration testbed, the SpaceCube 2.0 block diagram, and a processor comparison.

  16. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  17. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.

  18. Method of making a high performance ultracapacitor

    SciTech Connect

    Farahmandi, C.J.; Dispennette, J.M.

    2000-05-09

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.
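
    A quick sanity check on the figures quoted in the patent: delivering 5 Wh/kg of useful energy at a sustained 600 W/kg corresponds to a 30-second full-power discharge.

    ```python
    # Implied full-power discharge time from the patent's quoted figures:
    # 5 Wh/kg of useful energy delivered at 600 W/kg.
    energy_wh_per_kg = 5.0
    power_w_per_kg = 600.0
    discharge_s = energy_wh_per_kg * 3600.0 / power_w_per_kg   # Wh -> Ws
    print(discharge_s)   # → 30.0
    ```

    A burst measured in tens of seconds is the regime where ultracapacitors complement, rather than replace, batteries.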

  19. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  20. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  1. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E.; Pollard, Martin J.; Elkin, Christopher J.

    2006-12-12

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

  2. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E; Pollard, Martin J; Elkin, Christopher J

    2005-10-11

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetizable molecular structures and targets. Also disclosed are: a method of assembling the hybrid magnetic plates, a high throughput protocol featuring the hybrid magnetic structure, and other embodiments of the ferromagnetic pole shape, attachment and adapter interfaces for adapting the use of the hybrid magnetic structure for use with liquid handling and other robots for use in high throughput processes.

  3. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  4. High Performance Work Systems and Firm Performance.

    ERIC Educational Resources Information Center

    Kling, Jeffrey

    1995-01-01

    A review of 17 studies of high-performance work systems concludes that benefits of employee involvement, skill training, and other high-performance work practices tend to be greater when new methods are adopted as part of a consistent whole. (Author)

  5. Sustaining High Performance in Bad Times.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Van Buren, Mark A.

    1997-01-01

    Summarizes the results of the American Society for Training and Development Human Resource and Performance Management Survey of 1996 that examined the performance outcomes of downsizing and high performance work systems, explored the relationship between high performance work systems and downsizing, and asked whether some downsizing practices were…

  6. High Performance Work Practices and Firm Performance.

    ERIC Educational Resources Information Center

    Department of Labor, Washington, DC. Office of the American Workplace.

    A literature survey established that a substantial amount of research has been conducted on the relationship between productivity and the following specific high performance work practices: employee involvement in decision making, compensation linked to firm or worker performance, and training. According to these studies, high performance work…

  7. An Associate Degree in High Performance Manufacturing.

    ERIC Educational Resources Information Center

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  8. Initial Fault Tolerance and Autonomy Results for Autonomous On-board Processing of Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    French, M.; Walters, J.; Zick, K.

    2011-12-01

    By developing Radiation Hardening by Software (RHBSW) techniques leveraged from the High Performance Computing community, our work seeks to deliver radiation tolerant, high performance System on a Chip (SoC) processors to the remote sensing community. This SoC architecture is uniquely suited to both handle high performance signal processing tasks, as well as autonomous agent processing. This allows situational awareness to be developed in-situ, resulting in a 10-100x decrease in processing latency, which directly translates into more science experiments conducted per day and a more thorough, timely analysis of captured data. With the increase in the amount of computational throughput made possible by commodity high performance processors and low overhead fault tolerance, new applications can be considered for on-board processing. A high performance and low overhead fault tolerance strategy targeting scientific applications on the SpaceCube 1.0 platform has been enhanced with initial results showing an order of magnitude increase in Mean Time Between Data Error and a complete elimination of processor hangs. Initial study of representative Hyperspectral applications also proves promising due to high levels of data parallelism and fine grained parallelism achievable within FPGA System on a Chip architectures enabled by our RHBSW techniques. To demonstrate the kinds of capabilities these fault tolerance approaches yield, the team focused on applications representative of the Decadal Survey HyspIRI mission, which uses high throughput Thermal Infrared Scanner (132 Mbps) and Hyperspectral Visible ShortWave InfraRed (804 Mbps) instruments, while having only a 15 Mbps downlink channel. This mission provides a great many use scenarios for onboard processing, from high compression algorithms, to pre-processing and selective download of high priority images, to full on-board classification. This paper focuses on recent efforts which revolve around developing a fault emulator
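
    The instrument and downlink rates quoted in the abstract directly imply the on-board data-reduction factor the mission needs, a quick arithmetic check:

    ```python
    # Required on-board data-reduction factor implied by the rates quoted
    # in the abstract: two instruments versus a 15 Mbps downlink channel.
    tir_mbps = 132.0          # Thermal Infrared Scanner
    vswir_mbps = 804.0        # Visible ShortWave InfraRed
    downlink_mbps = 15.0

    reduction = (tir_mbps + vswir_mbps) / downlink_mbps
    print(round(reduction, 1))   # → 62.4
    ```

    A roughly 62x gap is why the abstract emphasizes on-board compression, pre-processing, and selective download rather than transmitting raw data.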

  9. Optical filters on board the Space Telescope Imaging Spectrograph (STIS)

    NASA Astrophysics Data System (ADS)

    Coffelt, Everett L.; Martella, Mark A.

    1996-11-01

    The Space Telescope Imaging Spectrograph (STIS) instrument is due to be installed on board the Hubble Space Telescope (HST) in 1997. STIS uses 20 filters located on a wheel that can rotate any one of 88 apertures or filter/aperture combinations into the beam path. The instrument incorporates a continuous range of spectral response from the VUV (115.0 nm) to 1 micrometer. Therefore, filters that perform in the VUV are discussed, as well as filters that operate in the near infrared. Neutral density filters are also being used for on-board calibration from 300 nm to Lyman-Alpha (121.6 nm).

  10. Aircraft structural health monitoring using on-board BOCDA system

    NASA Astrophysics Data System (ADS)

    Yari, Takashi; Nagai, Kanehiro; Ishioka, Masahito; Hotate, Kazuo; Koshioka, Yasuhiro

    2008-03-01

    We developed an on-board BOCDA system for aircraft and verified its stability and durability under flight environment conditions through environmental tests. The on-board BOCDA system adopts the polarization diversity technique and the temporal gating technique to improve the robustness of the BOCDA system. We successfully measured the distribution of the fiber Brillouin gain spectrum over a 500 m measurement range with 50 mm spatial resolution, a 60 Hz sampling rate and +/-13 μstrain accuracy. Furthermore, we considered a flight test to verify the validity of the BOCDA system. From these results, it was confirmed that the BOCDA system has the potential to be applied to an aircraft structural health monitoring system.

  11. On-board packet switch architectures for communication satellites

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo; Quintana, Jorge A.

    1993-01-01

    NASA Lewis Research Center is developing an on-board information switching processor for a multichannel communications signal processing satellite. The information switching processor is a flexible, high-throughput, fault tolerant, on-board baseband packet switch used to route user data among user ground terminals. Through industry study contracts and in-house investigations, several packet switching architectures were examined for possible implementation. Three contention-free switching architectures were studied in detail, namely the shared memory approach, the shared bus approach, and the shared memory per beam approach. These three switching architectures are discussed and the advantages and disadvantages of each approach are examined.

  12. 40 CFR 86.005-17 - On-board diagnostics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...” engine conditions present at the time must be stored in computer memory. Should a subsequent fuel system... less must be equipped with an on-board diagnostic (OBD) system capable of monitoring all emission... Administrator. (2) An OBD system demonstrated to fully meet the requirements in § 86.1806-05 may be used to...

  13. Economic Comparison of On-Board Module Builder Harvest Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cotton pickers with on-board module builders (OBMB) eliminate the need for boll buggies, module builders, and the tractors and labor needed to operate this machinery. Additionally, field efficiency may be increased due to less stoppage for unloading and/or waiting to unload. This study estimates the ...

  14. Intelligent Sensors and Components for On-Board ISHM

    NASA Technical Reports Server (NTRS)

    Figueroa, Jorge; Morris, Jon; Nickles, Donald; Schmalzel, Jorge; Rauth, David; Mahajan, Ajay; Utterbach, L.; Oesch, C.

    2006-01-01

    A viewgraph presentation on the development of intelligent sensors and components for on-board Integrated Systems Health Management (ISHM) is shown. The topics include: 1) Motivation; 2) Integrated Systems Health Management (ISHM); 3) Intelligent Components; 4) IEEE 1451; 5) Intelligent Sensors; 6) Application; and 7) Future Directions.

  15. 40 CFR 86.005-17 - On-board diagnostics.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 19 2013-07-01 2013-07-01 false On-board diagnostics. 86.005-17 Section 86.005-17 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES General Provisions for Emission Regulations for 1977 and Later Model...

  16. 5. TOOL ROOM, NOTE TOOL OUTLINES ON BOARD. Looking east ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. TOOL ROOM, NOTE TOOL OUTLINES ON BOARD. Looking east at tool board. - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Shop Building for Test Stand 1-5, Test Area 1-115, northwest end of Saturn Boulevard, Boron, Kern County, CA

  17. On-Board Software Reference Architecture for Payloads

    NASA Astrophysics Data System (ADS)

    Bos, Victor; Trcka, Adam

    2015-09-01

    This abstract summarizes the On-board Reference Architecture for Payloads activity carried out by Space Systems Finland (SSF) and Evolving Systems Consulting (ESC) under ESA contract. At the time of writing, the activity is ongoing. This abstract discusses study objectives, related activities, study approach, achieved and anticipated results, and directions for future work.

  18. Digital tomosynthesis with an on-board kilovoltage imaging device

    SciTech Connect

    Godfrey, Devon J. . E-mail: devon.godfrey@duke.edu; Yin, F.-F.; Oldham, Mark; Yoo, Sua; Willett, Christopher

    2006-05-01

    Purpose: To generate on-board digital tomosynthesis (DTS) and reference DTS images for three-dimensional image-guided radiation therapy (IGRT) as an alternative to conventional portal imaging or on-board cone-beam computed tomography (CBCT). Methods and Materials: Three clinical cases (prostate, head-and-neck, and liver) were selected to illustrate the capabilities of on-board DTS for IGRT. Corresponding reference DTS images were reconstructed from digitally reconstructed radiographs computed from planning CT image sets. The effect of scan angle on DTS slice thickness was examined by computing the mutual information between coincident CBCT and DTS images, as the DTS scan angle was varied from 0° to 165°. A breath-hold DTS acquisition strategy was implemented to remove respiratory motion artifacts. Results: Digital tomosynthesis slices appeared similar to coincident CBCT planes and yielded substantially more anatomic information than either kilovoltage or megavoltage radiographs. Breath-hold DTS acquisition improved soft-tissue visibility by suppressing respiratory motion. Conclusions: Improved bony and soft-tissue visibility in DTS images is likely to improve target localization compared with radiographic verification techniques and might allow for daily localization of a soft-tissue target. Breath-hold DTS is a potential alternative to on-board CBCT for sites prone to respiratory motion.
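    The mutual-information metric used to study slice thickness can be computed from the joint histogram of two coincident images. The sketch below is a pure-Python toy on small integer "images" under that standard definition; a clinical implementation would bin real CBCT/DTS intensities rather than tiny lists.

```python
# Sketch of image mutual information via a joint histogram:
# MI = sum over (a, b) of p(a, b) * log2( p(a, b) / (p(a) * p(b)) ).
from collections import Counter
from math import log2

def mutual_information(img_a, img_b):
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))   # joint intensity histogram
    pa = Counter(img_a)                  # marginal histograms
    pb = Counter(img_b)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Identical binary images share 1 bit; independent ones share none.
identical = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```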

  19. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  20. High Performance Diesel Fueled Cabin Heater

    SciTech Connect

    Butcher, Tom

    2001-08-05

    Recent DOE-OHVT studies show that diesel emissions and fuel consumption can be greatly reduced at truck stops by switching from engine idle to auxiliary-fired heaters. Brookhaven National Laboratory (BNL) has studied high performance diesel burner designs that address the shortcomings of current low fire-rate burners. Initial test results suggest a real opportunity for the development of a truly advanced truck heating system. The BNL approach is to use a low pressure, air-atomized burner derived from burner designs used commonly in gas turbine combustors. This paper reviews the design and test results of the BNL diesel fueled cabin heater. The burner design is covered by U.S. Patent 6,102,687, issued to U.S. DOE on August 15, 2000. The development of several novel oil burner applications based on low-pressure air atomization is described. The atomizer used is a pre-filming, air blast nozzle of the type commonly used in gas turbine combustion. The air pressure used can be as low as 1300 Pa, and such pressure can be easily achieved with a fan. Advantages over conventional, pressure-atomized nozzles include the ability to operate at low input rates without very small passages and much lower fuel pressure requirements. At very low firing rates the small passage sizes in pressure swirl nozzles lead to poor reliability, and this factor has practically constrained these burners to firing rates over 14 kW. Air atomization can be used very effectively at low firing rates to overcome this concern. However, many air atomizer designs require pressures that can be achieved only with a compressor, greatly complicating the burner package and increasing cost. The work described in this paper has been aimed at the practical adaptation of low-pressure air atomization to low input oil burners. The objective of this work is the development of burners that can achieve the benefits of air atomization with air pressures practically achievable with a simple burner fan.

  1. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: Enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; Enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; Reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; Net positive contributions to the energy balance of the building using integrated photovoltaic systems; Improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  2. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
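    Of the programming techniques the overview lists, data parallelism is the simplest to sketch: the same kernel is applied to partitions of a dataset by a pool of workers, followed by a reduce step. The toy below uses a Python thread pool purely for illustration; a real HPC code would use MPI ranks, OpenMP threads, or vectorized kernels instead.

```python
# Toy data-parallel pattern: partition the data, map a kernel over the
# partitions with a worker pool, then reduce the partial results.
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    # Per-partition work: here, a simple sum of squares.
    return sum(x * x for x in chunk)

data = list(range(8))
chunks = [data[i::4] for i in range(4)]  # 4 strided partitions

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(kernel, chunks))

total = sum(partials)  # reduce step
```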

  3. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  4. Some design considerations for high-performance infrared imaging seeker

    NASA Astrophysics Data System (ADS)

    Fan, Jinxiang; Huang, Jianxiong

    2015-10-01

    In recent years, precision guided weapons have played an increasingly important role in modern war, and the development and application of infrared imaging guidance technology have received growing attention. As missions and environments grow more complex, precision guided weapons place stricter demands on the infrared imaging seeker: high detection sensitivity, large dynamic range, better target recognition capability, better anti-jamming capability, and better environmental adaptability. To meet the strict demands of the weapon system, several important issues should be considered in high-performance infrared imaging seeker design. The mission, targets, and environment of the infrared imaging guided missile must be taken into account. Tradeoffs among performance goals, design parameters, infrared technology constraints, and missile constraints should be considered, as should the optimized application of the IRFPA and ATR in complicated environments. In this paper, some design considerations for high-performance infrared imaging seekers are discussed.

  5. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented along with a candidate data format. Computational requirements required to implement the data analysis algorithms are included along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested and the environmental effects of an on-board processor discussed. By relating performance parameters to the system requirements of each of the user requirements the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.

  6. On-board processing concepts for future satellite communications systems

    NASA Astrophysics Data System (ADS)

    Brandon, W. T.; White, B. E.

    1980-05-01

    The initial definition of on-board processing for an advanced satellite communications system to service domestic markets in the 1990's is discussed. An exemplar system with both RF on-board switching and demodulation/remodulation baseband processing is used to identify important issues related to system implementation, cost, and technology development. Analyses of spectrum-efficient modulation, coding, and system control techniques are summarized. Implementations for an RF switch and baseband processor are described. Among the major conclusions listed is the need for high gain satellites capable of handling tens of simultaneous beams for the efficient reuse of the 2.5 GHz 30/20 frequency band. Several scanning beams are recommended in addition to the fixed beams. Low power solid state 20 GHz GaAs FET power amplifiers in the 5W range and a general purpose digital baseband processor with gigahertz logic speeds and megabits of memory are also recommended.

  7. On-board ephemeris representation for Topex/Poseidon

    NASA Technical Reports Server (NTRS)

    Salama, Ahmed H.

    1990-01-01

    The Topex/Poseidon satellite requires real-time on-board knowledge of the satellite and TDRS ephemeris for attitude determination and control and High-Gain Antenna (HGA) pointing. The ephemeris representation concept for the MMS (Multimission Modular Spacecraft) satellites has shown that compressing the predicted ephemeris in a Fourier Power Series (FPS) before uplinking, in conjunction with the On-Board Computer (OBC) ephemeris reconstruction algorithms, is an efficient technique for ephemeris representation. As an MMS-based satellite, Topex/Poseidon has inherited the Landsat ephemeris representation concept, including a daily FPS upload. This paper presents the Topex/Poseidon concept, analysis, and results, including the conclusion that the ephemeris representation duration could be extended to 10 days or more, so that convenient weekly uploading can be adopted without an increase in OBC memory requirements.
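    The FPS idea can be illustrated with a round trip: a periodic orbit coordinate is projected onto a few Fourier harmonics on the ground, and the flight computer reconstructs position by evaluating the series. The orbital period and harmonic count below are placeholders for illustration, not the Topex/Poseidon values.

```python
# Sketch of Fourier Power Series ephemeris compression and on-board
# reconstruction. PERIOD and N_HARM are assumed, illustrative values.
from math import cos, sin, pi

PERIOD = 6745.0   # s, assumed orbital period
N_HARM = 3        # number of harmonics uplinked

def fit_fps(samples, times):
    """Ground side: project uniform samples onto a truncated Fourier basis."""
    n = len(samples)
    a0 = sum(samples) / n
    coeffs = []
    for k in range(1, N_HARM + 1):
        ak = 2.0 / n * sum(s * cos(2 * pi * k * t / PERIOD)
                           for s, t in zip(samples, times))
        bk = 2.0 / n * sum(s * sin(2 * pi * k * t / PERIOD)
                           for s, t in zip(samples, times))
        coeffs.append((ak, bk))
    return a0, coeffs

def eval_fps(a0, coeffs, t):
    """On-board side: reconstruct the coordinate by evaluating the series."""
    x = a0
    for k, (ak, bk) in enumerate(coeffs, start=1):
        x += ak * cos(2 * pi * k * t / PERIOD) + bk * sin(2 * pi * k * t / PERIOD)
    return x

# Round-trip a pure first-harmonic signal: it is recovered almost exactly.
times = [PERIOD * i / 256 for i in range(256)]
truth = [7000.0 + 12.0 * cos(2 * pi * t / PERIOD) for t in times]
a0, coeffs = fit_fps(truth, times)
err = max(abs(eval_fps(a0, coeffs, t) - x) for t, x in zip(times, truth))
```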

  8. On-Board Switching and Routing Advanced Technology Study

    NASA Technical Reports Server (NTRS)

    Yegenoglu, F.; Inukai, T.; Kaplan, T.; Redman, W.; Mitchell, C.

    1998-01-01

    Future satellite communications is expected to be fully integrated into National and Global Information Infrastructures (NII/GII). These infrastructures will carry multi-gigabit-per-second data rates, with integral switching and routing of constituent data elements. The satellite portion of these infrastructures must, therefore, be more than pipes through the sky. The satellite portion will also be required to perform very high speed routing and switching of these data elements to enable efficient broad area coverage to many home and corporate users. The technology to achieve the on-board switching and routing must be selected and developed specifically for satellite application within the next few years. This report presents an evaluation of potential technologies for on-board switching and routing applications.

  9. Autonomous On-Board Calibration of Attitude Sensors and Gyros

    NASA Technical Reports Server (NTRS)

    Pittelkau, Mark E.

    2007-01-01

    This paper presents the state of the art and future prospects for autonomous real-time on-orbit calibration of gyros and attitude sensors. The current practice in ground-based calibration is presented briefly to contrast it with on-orbit calibration. The technical and economic benefits of on-orbit calibration are discussed. Various algorithms for on-orbit calibration are evaluated, including some that are already operating on board spacecraft. Because Redundant Inertial Measurement Units (RIMUs, which are IMUs that have more than three sense axes) are almost ubiquitous on spacecraft, special attention will be given to calibration of RIMUs. In addition, we discuss autonomous on board calibration and how it may be implemented.

  10. On-board processing concepts for future satellite communications systems

    NASA Technical Reports Server (NTRS)

    Brandon, W. T. (Editor); White, B. E. (Editor)

    1980-01-01

    The initial definition of on-board processing for an advanced satellite communications system to service domestic markets in the 1990's is discussed. An exemplar system with both RF on-board switching and demodulation/remodulation baseband processing is used to identify important issues related to system implementation, cost, and technology development. Analyses of spectrum-efficient modulation, coding, and system control techniques are summarized. Implementations for an RF switch and baseband processor are described. Among the major conclusions listed is the need for high gain satellites capable of handling tens of simultaneous beams for the efficient reuse of the 2.5 GHz 30/20 frequency band. Several scanning beams are recommended in addition to the fixed beams. Low power solid state 20 GHz GaAs FET power amplifiers in the 5W range and a general purpose digital baseband processor with gigahertz logic speeds and megabits of memory are also recommended.

  11. On-Board Processor and Network Maturation for Ariane 6

    NASA Astrophysics Data System (ADS)

    Clavier, Rémi; Sautereau, Pierre; Sangaré, Jérémie; Disson, Benjamin

    2015-09-01

    In the past three years, innovative avionic technologies for Ariane 6 were evaluated under three main programs involving various stakeholders: FLPP (Future Launcher Preparatory Program, from ESA), AXE (Avionic-X European, formerly Avionique-X, a French public R&T program) and the CNES R&T program relying on industrial partnerships. In each avionics domain, several technologies were compared, analyzed and tested against space launcher system expectations and constraints. Within the frame of on-board data handling, two technologies have been identified as promising: ARM-based microprocessors for the computing units and TTEthernet for the on-board network. This paper presents the main outcomes of the data handling preparatory activities performed on the AXE platform at Airbus Defence and Space - Les Mureaux.

  12. Octafluoropropane Concentration Dynamics on Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Perry, J. L.

    2003-01-01

    Since activating the International Space Station's (ISS) Service Module in November 2000, archival air quality samples have shown highly variable concentrations of octafluoropropane in the cabin. This variability has been directly linked to leakage from air conditioning systems on board the Service Module, Zvezda. While octafluoropropane is not highly toxic, it presents a significant challenge to the trace contaminant control systems. A discussion of octafluoropropane concentration dynamics is presented, and the ability of on-board trace contaminant control systems to effectively remove octafluoropropane from the cabin atmosphere is assessed. Consideration is given to operational and logistics issues that may arise from octafluoropropane and other halocarbon challenges to the contamination control systems, as well as the potential for affecting cabin air quality.

  13. EMI Standards for Wireless Voice and Data on Board Aircraft

    NASA Technical Reports Server (NTRS)

    Ely, Jay J.; Nguyen, Truong X.

    2002-01-01

    The use of portable electronic devices (PEDs) on board aircraft continues to be an increasing source of misunderstanding between passengers and flight-crews, and consequently, an issue of controversy between wireless product manufacturers and air transport regulatory authorities. This conflict arises primarily because of the vastly different regulatory objectives between commercial product and airborne equipment standards for avoiding electromagnetic interference (EMI). This paper summarizes international regulatory limits and test processes for measuring spurious radiated emissions from commercially available PEDs, and compares them to international standards for airborne equipment. The goal is to provide insight for wireless product developers desiring to extend the freedom of their customers to use wireless products on-board aircraft, and to identify future product characteristics, test methods and technologies that may facilitate improved wireless freedom for airline passengers.

  14. Passengers' perception of the safety demonstration on board an aircraft

    NASA Astrophysics Data System (ADS)

    Ruenruoy, Ratchada

    The cabin safety demonstration on board an aircraft is one of the methods used to provide safety information for passengers before takeoff. However, passengers' enthusiasm toward safety demonstrations is normally low. Therefore, the study of passengers' perception of safety briefings on board an aircraft is important for increasing safety awareness among the travelling public on commercial aircraft. A survey was distributed to measure the perceptions of Middle Tennessee State University (MTSU) faculty and staff, Aerospace students, and international students who had traveled in the last year. It was generally found that watching the cabin safety demonstration before takeoff was believed to be important for passengers. However, attention to the safety demonstration remained low because the briefings did not communicate clearly enough, particularly in the recorded audio and live safety demonstration methods of briefing.

  15. On-Board Perception System For Planetary Aerobot Balloon Navigation

    NASA Technical Reports Server (NTRS)

    Balaram, J.; Scheid, Robert E.; T. Salomon, Phil

    1996-01-01

    NASA's Jet Propulsion Laboratory is implementing the Planetary Aerobot Testbed to develop the technology needed to operate a robotic balloon aero-vehicle (Aerobot). This earth-based system would be the precursor for aerobots designed to explore Venus, Mars, Titan and other gaseous planetary bodies. The on-board perception system allows the aerobot to localize itself and navigate on a planet using information derived from a variety of celestial, inertial, ground-imaging, ranging, and radiometric sensors.

  16. High performance bio-integrated devices

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications in particular have attracted much attention with the rise of smartphones, because coupling such devices with smartphones enables continuous health monitoring in patients' daily lives. It is expected that high performance biomedical electronics integrated with the human body can open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  17. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  18. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  19. Intelligent Optimisation of Microsatellite On-Board Power Systems

    NASA Astrophysics Data System (ADS)

    Ballester-Gúrpide, Íñigo; da Silva-Curiel, R. A.; Sweeting, Martin

    2000-07-01

    The Surrey Space Centre has pioneered the research and development of modern microsatellite technologies over the last twenty years. Obviously, the small volume of these satellites places severe constraints on the power available for applications payloads on board the satellite; therefore, many research projects at Surrey have been carried out to reduce the power consumption of both the satellite platform and payloads. This paper describes one of these projects, the use of an automatic power scheduling algorithm for the PICOSAT mission, a micro-satellite developed by SSTL under contract to the USAF SSP (Small Satellite Programme). The purpose of this algorithm is to predict the battery and memory levels on a short term basis and automatically schedule the on-board experiment activities in order to optimise the power and memory usage over the selected period, meeting the constraints and requirements for the mission. The algorithm makes use of a recursive feedback loop to reach the optimum output. An initial prototype of the algorithm has been implemented using MATLAB and, once fully tested, it is intended to port and run it on the satellite On Board Computer in orbit. The experiment priorities and payload characteristics are specified in separate modules, allowing the easy re-use and upgrade of the algorithm for different payload configurations on other SSTL satellites.
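    The core of such a scheduler is the prediction-and-check loop: estimate the battery level over short time slots and admit an experiment only if the prediction stays above a safety floor. The greedy, priority-ordered sketch below illustrates that idea only; the power figures, priorities, and slot structure are invented, and the actual SSTL algorithm (with its recursive feedback loop and memory constraints) is more elaborate.

```python
# Toy battery-aware scheduler: highest-priority experiment that keeps the
# predicted battery level above a floor wins each slot. All numbers are
# illustrative, not from the PICOSAT mission.
def schedule(experiments, slots, capacity_wh, charge_w, floor_wh, slot_h=1.0):
    plan = [None] * slots
    level = capacity_wh
    queue = sorted(experiments, key=lambda e: -e["priority"])
    for t in range(slots):
        # Predicted charging this slot, clamped at battery capacity.
        level = min(capacity_wh, level + charge_w * slot_h)
        for exp in list(queue):
            predicted = level - exp["power_w"] * slot_h
            if predicted >= floor_wh:     # constraint check before admitting
                plan[t] = exp["name"]
                level = predicted
                queue.remove(exp)
                break
    return plan

plan = schedule(
    experiments=[{"name": "camera", "power_w": 30, "priority": 2},
                 {"name": "radio", "power_w": 80, "priority": 1}],
    slots=3, capacity_wh=100, charge_w=20, floor_wh=60)
```

Here the high-priority camera fits in the first slot, while the power-hungry radio is never admitted because it would drive the predicted level below the floor.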

  20. On-board hydrogen storage system using metal hydride

    SciTech Connect

    Heung, L.K.

    1997-07-01

    A hydrogen powered hybrid electric bus has been developed for demonstration in normal city bus service in the City of Augusta, Georgia, USA. The development team, called H2Fuel Bus Team, consists of representatives from government, industry and research institutions. The bus uses hydrogen to fuel an internal combustion engine which drives an electric generator. The generator charges a set of batteries which runs the electric bus. The hydrogen fuel and the hybrid concept combine to achieve the goal of near-zero emission and high fuel efficiency. The hydrogen fuel is stored in a solid form using an on-board metal hydride storage system. The system was designed for a hydrogen capacity of 25 kg. It uses the engine coolant for heat to generate a discharge pressure higher than 6 atm. The operation conditions are temperature from ambient to 70 degrees C, hydrogen discharge rate to 6 kg/hr, and refueling time 1.5 hours. Preliminary tests showed that the performance of the on-board storage system exceeded the design requirements. Long term tests have been planned to begin in 2 months. This paper discusses the design and performance of the on-board hydrogen storage system.
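    The role of engine-coolant heat in reaching the 6 atm discharge pressure can be seen from the van 't Hoff relation for a hydride's equilibrium plateau pressure, ln(P/P0) = -ΔH/(RT) + ΔS/R. The sketch below uses assumed desorption enthalpy and entropy values, typical of AB5-type hydrides, not figures from this paper.

```python
# Back-of-the-envelope van 't Hoff estimate of metal hydride desorption
# pressure versus temperature. DH_DES and DS_DES are assumed values.
from math import exp

R = 8.314            # J/(mol K), gas constant
DH_DES = 30_000.0    # J/mol H2, assumed desorption enthalpy
DS_DES = 108.0       # J/(mol K), assumed desorption entropy

def equilibrium_pressure_atm(temp_k):
    """van 't Hoff desorption plateau pressure, in atm (P0 = 1 atm)."""
    return exp(-DH_DES / (R * temp_k) + DS_DES / R)

p_ambient = equilibrium_pressure_atm(298.0)   # room temperature
p_heated = equilibrium_pressure_atm(343.0)    # ~70 C, engine-coolant heat
```

With these assumed thermodynamics, heating the bed from ambient to 70 degrees C raises the plateau pressure severalfold, comfortably above the 6 atm discharge target.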

  1. Technical feasibility of an ROV with on-board power

    SciTech Connect

    Sayer, P.; Bo, L.

    1994-12-31

    An ROV's electric power, control and communication signals are supplied from a surface ship or platform through an umbilical cable. Though cable design has evolved steadily, there are still severe limitations such as heavy weight and cost. It is well known that the drag imposed by the cable limits the operational range of the ROV in deep water. On the other hand, a cable-free AUV presents problems in control, communication and transmission of data. Therefore, an ROV with on-board power and a small-diameter cable could offer both a large operating range (footprint) and real-time control. This paper considers the feasibility of such an ROV with on-board power, namely a Self-Powered ROV (SPROV). The selection of possible power sources is first discussed before comparing the operational performance of an SPROV against a conventional ROV. It is demonstrated how an SPROV with a 5mm diameter tether offers a promising way forward, with on-board power of up to 40 kW over 24 hours. In water depths greater than 50m the reduced drag of the SPROV tether is very advantageous.

  2. High performance protection circuit for power electronics applications

    NASA Astrophysics Data System (ADS)

    Tudoran, Cristian D.; Dǎdârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

    2015-12-01

    In this paper we present a high performance protection circuit designed for power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, as in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, playing the role of a "sensor", or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.
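
As a rough illustration of the kind of trip logic such a microcontroller-based protection circuit might run (the actual firmware is not described in the abstract), here is a toy Python sketch; the current limit and debounce count are illustrative assumptions:

```python
def protect(samples_a, limit_a, trip_count=3):
    """Toy overcurrent trip: interrupt after `trip_count` consecutive
    samples above `limit_a` amps (debounce against single noise spikes).
    Returns the sample index at which the trip fires, or None.
    Threshold and debounce values are illustrative assumptions."""
    over = 0
    for i, amps in enumerate(samples_a):
        over = over + 1 if amps > limit_a else 0
        if over >= trip_count:
            return i
    return None

print(protect([10, 12, 55, 11, 60, 61, 62, 9], limit_a=50))  # trips at index 6
```

A single 55 A spike does not trip the supply, but three consecutive over-limit samples do, matching the abstract's distinction between merely reporting ("sensor" role) and actively interrupting the supply.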

  3. High performance hybrid magnetic structure for biotechnology applications

    DOEpatents

    Humphries, David E.; Pollard, Martin J.; Elkin, Christopher J.

    2009-02-03

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are further improvements to aspects of the hybrid magnetic structure, including additional elements and adaptations for its use in biotechnology and high-throughput processes.

  4. High Performance Computing with Harness over InfiniBand

    SciTech Connect

    Valentini, Alessandro; Di Biagio, Christian; Batino, Fabrizio; Pennella, Guido; Palma, Fabrizio; Engelmann, Christian

    2009-01-01

    Harness is an adaptable, plug-in-based middleware framework able to support distributed parallel computing. At present it is based on the Ethernet protocol, which cannot guarantee high throughput or real-time (deterministic) performance. In recent years, both research and industry have developed new network architectures (InfiniBand, Myrinet, iWARP, etc.) to overcome those limits. This paper concerns the integration of Harness with InfiniBand, focusing on two solutions: IP over InfiniBand (IPoIB) and Socket Direct Protocol (SDP) technology. They allow the Harness middleware to take advantage of the enhanced features provided by the InfiniBand Architecture.

  5. High performance protection circuit for power electronics applications

    SciTech Connect

    Tudoran, Cristian D.; Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

    2015-12-23

    In this paper we present a high performance protection circuit designed for power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, as in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, playing the role of a “sensor”, or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.

  6. High performance computing at Sandia National Labs

    SciTech Connect

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  7. Files in Space: Management & Transfer Applications and Operations: On-Board Segment & Communications

    NASA Astrophysics Data System (ADS)

    Dellandrea, Brice; De-Ferluc, Regid; Fourtier, Philippe; Soumagne, Raymond

    2013-08-01

    File structures are increasingly used on space missions both for payload science/observation data storage and for platform telecommands and housekeeping telemetries. On current missions, these file structures are specifically designed on a case-by-case basis with adapted communication protocols for Board/Ground file transfer and handling. Two studies in this field have been carried out by Thales Alenia Space for CNES over the last two years: a study in 2011/2012 on file transfer systems (state of the art, trade-off, and prototyping), and a second in 2012/2013 on file management systems (specification and prototyping). The second study is still ongoing, aiming at designing a generic file management system to suit general file implementations, compatible with the current file transfer protocols and covering current and anticipated needs. This presentation provides some of the results of these studies and highlights the impact of file operations on board/ground communication sessions and on-board avionics & flight software.

  8. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  9. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  10. Co-design for high performance computing.

    SciTech Connect

    Dosanjh, Sudip Singh; Hemmert, Karl Scott; Rodrigues, Arun F.

    2010-07-01

    Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

  11. High Poverty, High Performing Schools. IDRA Focus.

    ERIC Educational Resources Information Center

    IDRA Newsletter, 1997

    1997-01-01

    This theme issue includes four articles on high performance by poor Texas schools. In "Principal of National Blue Ribbon School Says High Poverty Schools Can Excel" (interview with Robert Zarate by Christie L. Goodman), the principal of Mary Hull Elementary School (San Antonio, Texas) describes how the high-poverty, high-minority school…

  12. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic flights with such capabilities as short take off and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  13. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  14. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
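
The grouping step described in the patent abstract can be sketched in a few lines: collect each thread's calling-instruction address and bucket thread ids by address, so small outlier groups stand out as candidate defective threads. The addresses below are illustrative:

```python
from collections import defaultdict

def group_threads_by_callsite(thread_addrs):
    """Bucket thread ids by the address of their current calling
    instruction; small outlier groups often flag defective threads."""
    groups = defaultdict(list)
    for tid, addr in thread_addrs.items():
        groups[addr].append(tid)
    return dict(groups)

# Illustrative snapshot: threads 0-2 at one call site, thread 3 elsewhere
addrs = {0: 0x40123A, 1: 0x40123A, 2: 0x40123A, 3: 0x40FFE0}
for addr, tids in sorted(group_threads_by_callsite(addrs).items()):
    print(hex(addr), tids)
```

At scale (thousands of threads on a machine like Blue Gene), displaying only group sizes and representative addresses makes the one stuck thread immediately visible without inspecting every stack.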

  15. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  16. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United States.…

  17. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  18. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built, and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  19. Using LEADS to shift to high performance.

    PubMed

    Fenwick, Shauna; Hagge, Erna

    2016-03-01

    Health systems across Canada are tasked to measure results of all their strategic initiatives. Included in most strategic plans is leadership development. How to measure leadership effectiveness in relation to organizational objectives is key in determining organizational effectiveness. The following findings offer considerations for a 21st-century approach to shifting to high-performance systems. PMID:26872796

  20. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  1. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  2. iSAFT Protocol Validation Platform for On-Board Data Networks

    NASA Astrophysics Data System (ADS)

    Tavoularis, Antonis; Marinis, Kostas; Kollias, Vangelis

    2014-08-01

    iSAFT is an integrated, powerful HW/SW environment for the simulation, validation, and monitoring of satellite/spacecraft on-board data networks, simultaneously supporting a wide range of protocols (RMAP, PTP, CCSDS Space Packet, TM/TC, CANopen, etc.) and network interfaces (SpaceWire, ECSS MIL-STD-1553, ECSS CAN). It is based on over 20 years of TELETEL's experience in protocol validation in the telecommunications and aeronautical sectors, and it has been fully re-engineered by TELETEL in cooperation with ESA and space Primes to comply with space on-board industrial validation requirements (ECSS, EGSE, AIT, AIV, etc.). iSAFT is highly modular and expandable to support new network interfaces and protocols, and is based on the powerful iSAFT graphical tool chain (Protocol Analyser/Recorder, TestRunner, Device Simulator, Traffic Generator, etc.). iSAFT can be used for the validation of units flown on specific scientific missions, such as the GAIA Video Processing Unit, which generate large volumes of data and whose validation can become very demanding. In these cases, both the recording and the simulation loads exceed the performance of many existing test systems, so test equipment is parallelized, leading to complex EGSE architectures and SW synchronization issues. This paper presents the functional and performance characteristics of two instances of the iSAFT system: the iSAFT Recorder and the iSAFT Simulator Traffic Generation engine. The main part of the work presented in this paper was carried out in the frame of ESTEC Contract no. 4000105444/12/NL/CBI (titled "Protocol Validation System (PVS) activity"), and the results prove that, for both recording and simulation, iSAFT can be trusted even in missions with very high performance requirements.

  3. On-board fault management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Doyle, Susan C.; Martin, Eric; Sellers, Suzanne

    1991-01-01

    The dynamic nature of the Cargo Transfer Vehicle's (CTV) mission and the high level of autonomy required mandate a complete fault management system capable of operating under uncertain conditions. Such a fault management system must take into account the current mission phase and the environment (including the target vehicle), as well as the CTV's state of health. This level of capability is beyond the scope of current on-board fault management systems. This presentation will discuss work in progress at TRW to apply artificial intelligence to the problem of on-board fault management. The goal of this work is to develop fault management systems that can meet the needs of spacecraft that have long-range autonomy requirements. We have implemented a model-based approach to fault detection and isolation that does not require explicit characterization of failures prior to launch. It is thus able to detect failures that were not considered in the failure modes and effects analysis. We have applied this technique to several different subsystems and tested our approach against both simulations and an electrical power system hardware testbed. We present findings from simulation and hardware tests which demonstrate the ability of our model-based system to detect and isolate failures, and describe our work in porting the Ada version of this system to a flight-qualified processor. We also discuss current research aimed at expanding our system to monitor the entire spacecraft.

  4. MODIS On-Board Blackbody Function and Performance

    NASA Technical Reports Server (NTRS)

    Xiaoxiong, Xiong; Wenny, Brian N.; Wu, Aisheng; Barnes, William

    2009-01-01

    Two MODIS instruments are currently in orbit, making continuous global observations in visible to long-wave infrared wavelengths. Compared to heritage sensors, MODIS was built with an advanced set of on-board calibrators, providing sensor radiometric, spectral, and spatial calibration and characterization during on-orbit operation. For the thermal emissive bands (TEB) with wavelengths from 3.7 µm to 14.4 µm, a v-grooved blackbody (BB) is used as the primary calibration source. The BB temperature is accurately measured each scan (1.47 s) using a set of 12 temperature sensors traceable to NIST temperature standards. The on-board BB is nominally operated at a fixed temperature, 290K for Terra MODIS and 285K for Aqua MODIS, to compute the TEB linear calibration coefficients. Periodically, its temperature is varied from 270K (instrument ambient) to 315K in order to evaluate and update the nonlinear calibration coefficients. This paper describes MODIS on-board BB functions with emphasis on on-orbit operation and performance. It examines the BB temperature uncertainties under different operational conditions and their impact on TEB calibration and data product quality. The temperature uniformity of the BB is also evaluated using TEB detector responses at different operating temperatures. On-orbit results demonstrate excellent short-term and long-term stability for both the Terra and Aqua MODIS on-board BB. The on-orbit BB temperature uncertainty is estimated to be 10 mK for Terra MODIS at 290K and 5 mK for Aqua MODIS at 285K, thus meeting the TEB design specifications. In addition, there has been no measurable BB temperature drift over the entire mission of both Terra and Aqua MODIS.
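
The linear TEB calibration described above can be sketched as a two-point fit between a zero-radiance space view and the Planck radiance of the blackbody at its measured temperature. A simplified Python sketch; the counts, the 11 µm wavelength, and the omission of nonlinear terms, BB emissivity, and instrument self-emission are illustrative simplifications of the real MODIS algorithm:

```python
import math

H_PLANCK, C_LIGHT, K_BOLTZ = 6.626e-34, 2.998e8, 1.381e-23

def planck_radiance(wav_um, temp_k):
    """Planck spectral radiance in W m^-2 sr^-1 um^-1."""
    w = wav_um * 1e-6  # wavelength in metres
    return (2.0 * H_PLANCK * C_LIGHT**2 / w**5
            / math.expm1(H_PLANCK * C_LIGHT / (w * K_BOLTZ * temp_k))) * 1e-6

def linear_cal(counts_space, counts_bb, t_bb_k, wav_um):
    """Two-point linear calibration: gain from the space view (zero
    radiance) and the blackbody view. Illustrative only -- the real
    MODIS TEB algorithm also carries nonlinear terms, BB emissivity,
    and instrument self-emission corrections."""
    gain = planck_radiance(wav_um, t_bb_k) / (counts_bb - counts_space)
    return gain, counts_space

# Illustrative counts for an 11 um band with the BB at 290 K
gain, offset = linear_cal(1000, 30000, 290.0, 11.0)
scene_radiance = gain * (20000 - offset)
print(f"{scene_radiance:.2f} W m^-2 sr^-1 um^-1")
```

This also shows why the quoted millikelvin-level BB temperature knowledge matters: the gain is anchored directly to the Planck radiance at the measured BB temperature.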

  5. Comparison of precise ionising Radiation Dose Measurements on board Aircraft

    NASA Astrophysics Data System (ADS)

    Lindborg, L.; Beck, P.; Bottollier, J. F.; Roos, H.; Spurny, F.; Wissman, F.

    2003-04-01

    The cosmic radiation makes aircrew one of the most exposed occupational groups. The European Council has therefore included in its Directive 96/29/Euratom on basic safety standards for radiation protection a particular article (42) for the protection of aircrew. One of the measures to be taken is to assess the exposure of the crew. This is, however, not a trivial task. The radiation consists of many different types of radiation with energies that are hardly met on the ground. Knowledge of the dose levels on board aircraft has improved gradually during the last decade as several groups around the world have performed measurements on board civil aircraft in cooperation with airlines. For practical reasons, only occasionally has more than one instrument been able to fly at the same time. The statistical uncertainty in a measurement of the dose equivalent rate is typically ±15% (1 relative standard deviation) if determined during half an hour; systematic uncertainties add to this. The dose rate depends on flight altitude, the geographic coordinates of the flight, the phase of the solar cycle, and the prevailing solar wind. For that reason, the possibility to fly on the same flight eliminates some of the systematic uncertainties that limit an evaluation of the measurement techniques. The proposal aims at measurements on board aircraft over a geographically limited area for a few hours, to decrease the statistical uncertainty of the measurements and thereby provide an excellent opportunity to look for possible systematic differences between the different measurement systems. As the dose equivalent rate will be quite well established, it will also be possible to compare the measured values with calculated ones. The dose rate increases towards the geomagnetic poles and decreases towards the equator. The composition of the radiation components also varies with altitude. For that reason, measurements at both southern and northern latitudes are planned.
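
The statistical part of the quoted ±15% half-hour uncertainty averages down as 1/√N over independent intervals, which is part of the motivation for the longer side-by-side flights; a one-line Python sketch:

```python
def combined_rel_uncertainty(rel_sigma_single, n_intervals):
    """Relative statistical uncertainty of a mean dose-equivalent rate
    over N independent intervals: sigma / sqrt(N). Systematic
    differences between instruments are NOT reduced this way, which is
    why flying the instruments together on the same flight matters."""
    return rel_sigma_single / n_intervals ** 0.5

# 15% per half-hour interval -> a 4-hour measurement (8 intervals)
print(round(combined_rel_uncertainty(0.15, 8), 3))  # ~0.053 (5.3%)
```

Once the statistical spread is pushed down to a few percent, residual instrument-to-instrument differences on the same flight can be attributed to systematics, as the abstract argues.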

  6. Poisson's ratio of high-performance concrete

    SciTech Connect

    Persson, B.

    1999-10-01

    This article outlines an experimental and numerical study on Poisson's ratio of high-performance concrete subjected to air or sealed curing. Eight qualities of concrete (about 100 cylinders and 900 cubes) were studied, both young and in the mature state. The concretes contained between 5 and 10% silica fume, and two concretes in addition contained air-entrainment. Parallel studies of strength and internal relative humidity were carried out. The results indicate that Poisson's ratio of high-performance concrete is slightly smaller than that of normal-strength concrete. Analyses of the influence of maturity, type of aggregate, and moisture on Poisson's ratio are also presented. The project was carried out from 1991 to 1998.
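
For reference, Poisson's ratio is obtained from a uniaxial test as ν = -ε_lateral/ε_axial; a minimal Python sketch with illustrative strain values (not the paper's measurements):

```python
def poissons_ratio(lateral_strain, axial_strain):
    """Poisson's ratio from a uniaxial test: nu = -eps_lat / eps_axial.
    The strains below are illustrative, not the paper's data;
    high-performance concrete typically falls near 0.15-0.22."""
    return -lateral_strain / axial_strain

# axial compression (negative strain), lateral expansion (positive)
print(poissons_ratio(2.0e-4, -1.0e-3))
```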

  7. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and to HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  8. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  9. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  10. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 5 2014-10-01 2014-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  11. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 5 2012-10-01 2012-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  12. 46 CFR 147.15 - Hazardous ships' stores permitted on board vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 5 2013-10-01 2013-10-01 false Hazardous ships' stores permitted on board vessels. 147... HAZARDOUS SHIPS' STORES General Provisions § 147.15 Hazardous ships' stores permitted on board vessels. Unless prohibited under subpart B of this part, any hazardous material may be on board a vessel as...

  13. 47 CFR 80.413 - On-board station equipment records.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SERVICES STATIONS IN THE MARITIME SERVICES Station Documents § 80.413 On-board station equipment records. (a) The licensee of an on-board station must keep equipment records which show: (1) The ship name and... 47 Telecommunication 5 2012-10-01 2012-10-01 false On-board station equipment records....

  14. 47 CFR 80.413 - On-board station equipment records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SERVICES STATIONS IN THE MARITIME SERVICES Station Documents § 80.413 On-board station equipment records. (a) The licensee of an on-board station must keep equipment records which show: (1) The ship name and... 47 Telecommunication 5 2014-10-01 2014-10-01 false On-board station equipment records....

  15. 47 CFR 80.413 - On-board station equipment records.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SERVICES STATIONS IN THE MARITIME SERVICES Station Documents § 80.413 On-board station equipment records. (a) The licensee of an on-board station must keep equipment records which show: (1) The ship name and... 47 Telecommunication 5 2013-10-01 2013-10-01 false On-board station equipment records....

  16. 47 CFR 80.413 - On-board station equipment records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES STATIONS IN THE MARITIME SERVICES Station Documents § 80.413 On-board station equipment records. (a) The licensee of an on-board station must keep equipment records which show: (1) The ship name and... 47 Telecommunication 5 2011-10-01 2011-10-01 false On-board station equipment records....

  17. 47 CFR 80.413 - On-board station equipment records.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES STATIONS IN THE MARITIME SERVICES Station Documents § 80.413 On-board station equipment records. (a) The licensee of an on-board station must keep equipment records which show: (1) The ship name and... 47 Telecommunication 5 2010-10-01 2010-10-01 false On-board station equipment records....

  18. On-board attitude determination and control algorithms for SAMPEX

    NASA Technical Reports Server (NTRS)

    Flatley, Thomas W.; Forden, Josephine K.; Henretty, Debra A.; Lightsey, E. Glenn; Markley, F. Landis

    1990-01-01

    Algorithms for onboard attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) were developed. The algorithms include spacecraft ephemeris and geomagnetic field models, attitude determination with 2-degree accuracy, control that points the pitch axis at the Sun and the yaw axis away from the Earth (keeping the pitch axis within 5 degrees of the sunline), momentum unloading, and nutation damping. The closed-loop simulations were performed on a VAX 8830 using a prototype version of the on-board software.
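
The pitch-axis-to-Sun pointing requirement can be checked with a simple angle-between-vectors computation; a Python sketch with illustrative vectors (not flight data):

```python
import math

def angle_between_deg(v1, v2):
    """Angle between two 3-vectors, in degrees."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # clamp to [-1, 1] to guard against floating-point overshoot
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# Illustrative body pitch axis vs. Sun unit vector (not flight data)
pitch_axis = (0.999, 0.04, 0.0)
sun_vector = (1.0, 0.0, 0.0)
err_deg = angle_between_deg(pitch_axis, sun_vector)
print(f"{err_deg:.2f} deg", "within 5-deg spec" if err_deg <= 5.0 else "out of spec")
```

In the real system the Sun vector comes from the ephemeris model and sun sensors, and the control loop drives this error angle toward zero while also managing the yaw constraint.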

  19. Piloted simulation of an on-board trajectory optimization algorithm

    NASA Technical Reports Server (NTRS)

    Price, D. B.; Calise, A. J.; Moerder, D. D.

    1981-01-01

    This paper will describe a real time piloted simulation of algorithms designed for on-board computation of time-optimal intercept trajectories for an F-8 aircraft. The algorithms, which were derived using singular perturbation theory, generate commands that are displayed to the pilot on flight director needles on the 8-ball. By flying the airplane so as to zero the horizontal and vertical needles, the pilot flies an approximation to a time-optimal intercept trajectory. The various display and computation modes that are available will be described and results will be presented illustrating the performance of the algorithms with a pilot in the loop.

  20. High performance microsystem packaging: A perspective

    SciTech Connect

    Romig, A.D. Jr.; Dressendorfer, P.V.; Palmer, D.W.

    1997-10-01

    The second silicon revolution will be based on intelligent, integrated microsystems where multiple technologies (such as analog, digital, memory, sensor, micro-electro-mechanical, and communication devices) are integrated onto a single chip or within a multichip module. A necessary element for such systems is cost-effective, high-performance packaging. This paper examines many of the issues associated with the packaging of integrated microsystems, with an emphasis on the areas of packaging design, manufacturability, and reliability.

  1. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity, and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages, and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  2. Programming high-performance reconfigurable computers

    NASA Astrophysics Data System (ADS)

    Smith, Melissa C.; Peterson, Gregory D.

    2001-07-01

    High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications, but often are too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address the problem of high hardware costs, one may build inexpensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains have come from the development of common APIs and libraries of functions to support distributed applications. Examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) to HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task, and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.

  3. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  4. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with their low raw-materials cost, great potential for simple and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of the vapor assisted solution process (VASP) and a moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite films, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architectures, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  5. Expert system for on-board satellite scheduling and control

    NASA Technical Reports Server (NTRS)

    Barry, John M.; Sary, Charisse

    1988-01-01

    An expert system is described which Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities. This expert system is the Satellite Controller. The resources to be scheduled include power, propellant, and recording tape. The activities controlled include scheduling satellite functions such as sensor checkout and operation. The scheduling of these resources and activities is presently a labor intensive and time consuming ground operations task. Developing a schedule requires extensive knowledge of system and subsystem operations, operational constraints, and satellite design and configuration. This scheduling process requires highly trained experts and takes anywhere from several hours to several weeks to accomplish. The process is done through brute force, that is, examining cryptic mnemonic data off line to interpret the health and status of the satellite. Schedules are then formulated either from practical operator experience or from heuristics, that is, rules of thumb. Orbital operations must become more productive in the future to reduce life cycle costs and decrease dependence on ground control. This reduction is required to increase the autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from ground to on-board systems.

  6. On-board data management study for EOPAP

    NASA Technical Reports Server (NTRS)

    Davisson, L. D.

    1975-01-01

    The requirements, implementation techniques, and mission analysis associated with on-board data management for EOPAP were studied. SEASAT-A was used as a baseline, and the storage requirements, data rates, and information extraction requirements were investigated for each of the following proposed SEASAT sensors: a short pulse 13.9 GHz radar, a long pulse 13.9 GHz radar, a synthetic aperture radar, a multispectral passive microwave radiometer facility, and an infrared/visible very high resolution radiometer (VHRR). Rate distortion theory was applied to determine theoretical minimum data rates and compared with the rates required by practical techniques. It was concluded that practical techniques can be used which approach the theoretically optimum based upon an empirically determined source random process model. The results of the preceding investigations were used to recommend an on-board data management system for (1) data compression through information extraction, optimal noiseless coding, source coding with distortion, data buffering, and data selection under command or as a function of data activity, (2) for command handling, (3) for spacecraft operation and control, and (4) for experiment operation and monitoring.
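
    As an illustrative aside (not part of the original record), the "theoretical minimum data rates" mentioned above come from rate-distortion theory. A minimal sketch of the classical bound for a memoryless Gaussian source, with hypothetical numbers, is:

```python
import math

def gaussian_rate_distortion(variance, distortion):
    """Shannon rate-distortion bound for a memoryless Gaussian source,
    in bits per sample: R(D) = max(0, 0.5 * log2(variance / D))."""
    if distortion >= variance:
        return 0.0  # the target distortion is achievable at zero rate
    return 0.5 * math.log2(variance / distortion)

# Allowing a mean-squared error of 1/4 of the source variance costs
# one bit per sample; practical coders approach this bound from above.
print(gaussian_rate_distortion(1.0, 0.25))
```

    The study compared practical techniques against bounds of this kind derived from an empirically fitted source model, not an idealized Gaussian.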

  7. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is incorporated into it. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.
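
    The record's notion of fault tolerance "measured in reliability" can be illustrated with a standard textbook model (not necessarily the scheme this project adopted): triple modular redundancy (TMR), where a majority voter masks any single module failure.

```python
def tmr_reliability(r):
    # Probability that a triple-modular-redundant system works, assuming
    # independent module reliability r and a perfect majority voter:
    # at least 2 of the 3 modules must be fault-free.
    return 3 * r**2 - 2 * r**3

# TMR helps only when individual modules are already fairly reliable
# (r > 0.5); below that threshold the redundancy makes things worse.
for r in (0.4, 0.9, 0.99):
    print(r, tmr_reliability(r))
```

    A reliability model like this is one input to the cost/reliability trade-off the abstract describes; throughput and delay would be modeled separately.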

  8. Rotating pressure measurement system using an on board calibration standard

    NASA Technical Reports Server (NTRS)

    Senyitko, Richard G.; Blumenthal, Philip Z.; Freedman, Robert J.

    1991-01-01

    A computer-controlled multichannel pressure measurement system was developed to acquire detailed flow field measurements on board the Large Low Speed Centrifugal Compressor Research Facility at the NASA Lewis Research Center. A pneumatic slip ring seal assembly is used to transfer calibration pressures to a reference standard transducer on board the compressor rotor in order to measure very low differential pressures with the high accuracy required. A unique data acquisition system was designed and built to convert the analog signal from the reference transducer to the variable frequency required by the multichannel pressure measurement system and also to provide an output for temperature control of the reference transducer. The system also monitors changes in test cell barometric pressure and rotating seal leakage and provides an on-screen warning to the operator if limits are exceeded. The methods used for the selection and testing of the reference transducer are discussed, and the data acquisition system hardware and software design are described. The calculated and experimental data for the system measurement accuracy are also presented.

  9. Failure analysis of high performance ballistic fibers

    NASA Astrophysics Data System (ADS)

    Spatola, Jennifer S.

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mode changes in the fiber fracture when transversely loaded by indenters of different shapes. An experimental design mimicking transverse impact was used to determine any such effects. Three different indenters were used: round, FSP, and razor blade. The indenter height was changed to change the angle of failure tested. Five high performance fibers were examined: Kevlar(R) KM2, Spectra(R) 130d, Dyneema(R) SK-62 and SK-76, and Zylon(R) 555. Failed fibers were analyzed using an SEM to determine failure mechanisms. The results show that the round and razor blade indenters produced a constant failure strain, as well as failure mechanisms independent of testing angle. The FSP indenter produced a decrease in failure strain as the angle increased. Fibrillation was the dominant failure mechanism at all angles for the round indenter, while through thickness shearing was the failure mechanism for the razor blade. The FSP indenter showed a transition from fibrillation at low angles to through thickness shearing at high angles, indicating that the round and razor blade indenters are extreme cases of the FSP indenter. The failure mechanisms observed with the FSP indenter at various angles correlated with the experimental strain data obtained during fiber testing. This indicates that geometry of the indenter tip in compression is a contributing factor in lowering the failure strain of the high performance fibers.
TEM analysis of the fiber failure mechanisms was also attempted, though without

  10. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance. PMID:16028814

  11. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  12. High performance forward swept wing aircraft

    NASA Technical Reports Server (NTRS)

    Koenig, David G. (Inventor); Aoyagi, Kiyoshi (Inventor); Dudley, Michael R. (Inventor); Schmidt, Susan B. (Inventor)

    1988-01-01

    A high performance aircraft capable of subsonic, transonic and supersonic speeds employs a forward swept wing planform and at least one first and second solution ejector located on the inboard section of the wing. A high degree of flow control on the inboard sections of the wing is achieved along with improved maneuverability and control of pitch, roll and yaw. Lift loss is delayed to higher angles of attack than in conventional aircraft. In one embodiment the ejectors may be advantageously positioned spanwise on the wing while the ductwork is kept to a minimum.

  13. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before necessary to refill with fresh iodine crystals.

  14. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  15. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  16. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  17. High performance pitch-based carbon fiber

    SciTech Connect

    Tadokoro, Hiroyuki; Tsuji, Nobuyuki; Shibata, Hirotaka; Furuyama, Masatoshi

    1996-12-31

    A high performance pitch-based carbon fiber with a smaller diameter, six microns, was developed by Nippon Graphite Fiber Corporation. This fiber possesses high tensile modulus, high tensile strength, excellent yarn handleability, a low thermal expansion coefficient, and high thermal conductivity, which make it an ideal material for space applications such as artificial satellites. Performance of this fiber as a reinforcement of composites was sufficient. With these characteristics, this pitch-based carbon fiber is expected to find a wide variety of possible applications in space structures, industrial fields, sporting goods, and civil infrastructure.

  18. Portability Support for High Performance Computing

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    While a large number of tools have been developed to support application portability, high performance application developers often prefer to use vendor-provided, non-portable programming interfaces. This phenomenon indicates the mismatch between user priorities and tool capabilities. This paper summarizes the results of a user survey and a developer survey. The user survey has revealed the user priorities and resulted in three criteria for evaluating tool support for portability. The developer survey has resulted in the evaluation of portability support and indicated the possibilities and difficulties of improvements.

  19. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    High performance channel sealant is based on NASA patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax, to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosive effect on aluminum or titanium was observed.

  20. On-board Payload Data Processing from Earth to Space Segment

    NASA Astrophysics Data System (ADS)

    Tragni, M.; Abbattista, C.; Amoruso, L.; Cinquepalmi, L.; Bgongiari, F.; Errico, W.

    2013-09-01

    GS algorithms to approach the problem in the Space scenario; e.g., for Synthetic Aperture Radar (SAR) applications, the typical focalization of the raw image needs to be improved to be effective in this context. Many works are available on that topic; the authors have developed specific ones based on neural network algorithms. Using information directly "acquired" (i.e., computed) on board, without the intervention of typical ground-segment facilities, the spacecraft can autonomously take decisions regarding a re-planning of acquisitions for itself (at high performance modalities) or for other platforms in the constellation or affiliated with it, reducing the elapsed time compared with the present approach. For non-EO missions it is a big advantage to avoid the long round-trip delay of transmission. In general, the saving of resources extends to memory and RF transmission band, reaction time (as in civil protection applications), etc., enlarging the flexibility of missions and improving the final results. Main SpacePDP HW and SW characteristics: • Compactness: the size and weight of each module fit in a Eurocard 3U 8HP format with «Inter-Board» connection through a cPCI peripheral bus. • Modularity: the payload is usually composed of several sub-systems. • Flexibility: the coprocessor FPGA, on-board memory, and supported avionic protocols are flexible, allowing customization of different modules according to mission needs. • Completeness: the two core boards (CPU and Companion) are enough to obtain a first complete payload data processing system in a basic configuration. • Integrability: the payload data processing system is open to accept custom modules connected on its open peripheral bus. • CPU HW module (one or more) based on a RISC processor (LEON2FT, a SPARC V8 architecture, 80 MIPS @ 100 MHz on ASIC ATMEL AT697F) • DSP HW module (optional, with more instances) based on a dedicated FPGA architecture to ensure effective multitasking control and to offer high numerical

  1. Achieving high performance on the Intel Paragon

    SciTech Connect

    Greenberg, D.S.; Maccabe, B.; Riesen, R.; Wheat, S.; Womble, D.

    1993-11-01

    When presented with a new supercomputer, most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial both to achieving high performance and to allowing easy conversion from previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS), they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus a 44,000 by 44,000 matrix which had required over 10 hours on the previous machine completed in less than 1/2 hour, at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.
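
    As a back-of-envelope check (ours, not the paper's), the reported figures are mutually consistent: dense LU factorization costs about (2/3)n^3 floating-point operations, so a 44,000-square matrix at 40 GFLOPS takes roughly 24 minutes.

```python
def lu_flops(n):
    # Dense LU factorization of an n x n matrix costs about (2/3) * n^3
    # floating-point operations (leading term of the operation count).
    return (2.0 / 3.0) * n**3

def minutes_at(n, gflops):
    # Runtime in minutes at a sustained rate of `gflops` GFLOPS.
    return lu_flops(n) / (gflops * 1e9) / 60.0

print(round(minutes_at(44_000, 40.0), 1))  # about 23.7 minutes -- under half an hour
```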

  2. DOE High Performance Concentrator PV Project

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  3. Management issues for high performance storage systems

    SciTech Connect

    Louis, S.; Burris, R.

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  4. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long standing idea. Indeed, the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so important that COTS components are very attractive for use in mass and power constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  5. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are in general adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  6. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  7. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  8. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  9. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes measuring an environmental condition representative of the environment and assessing an on-board processing system's sensitivity to that condition. It is then determined whether to reconfigure the fault tolerance of the on-board processing system based in part on the measured environmental condition, and the fault tolerance may be reconfigured accordingly.
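The reconfiguration decision the abstract describes can be illustrated with a minimal sketch; the thresholds, flux units, and mode names below are invented for illustration and are not from the patent.

```python
# Hypothetical sketch of environmentally adaptive fault tolerance:
# choose a redundancy configuration from a measured radiation level.
# Thresholds and mode names are illustrative assumptions.

def select_fault_tolerance(flux_per_cm2_s: float) -> str:
    """Map a measured particle-flux level to a redundancy configuration."""
    if flux_per_cm2_s < 1e2:
        return "simplex"         # benign: one copy, maximize throughput
    elif flux_per_cm2_s < 1e4:
        return "duplex-compare"  # moderate: two copies, detect mismatches
    else:
        return "tmr"             # harsh: triple modular redundancy, voting

# A benign environment keeps the processor in its fastest mode:
assert select_fault_tolerance(10.0) == "simplex"
# A severe environment triggers reconfiguration to full redundancy:
assert select_fault_tolerance(1e6) == "tmr"
```

The point of the sketch is only the structure of the decision: measured environment in, fault-tolerance configuration out.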

  10. Development of the On-board Aircraft Network

    NASA Technical Reports Server (NTRS)

    Green, Bryan D. W.; Mezu, Okechukwu A.

    2004-01-01

    Phase II will focus on the development of the on-board aircraft networking portion of the testbed which includes the subnet and router configuration and investigation of QoS issues. This implementation of the testbed will consist of a workstation, which functions as the end system, connected to a router. The router will service two subnets that provide data to the cockpit and the passenger cabin. During the testing, data will be transferred between the end systems and those on both subnets. QoS issues will be identified and a preliminary scheme will be developed. The router will be configured for the testbed network and initial security studies will be initiated. In addition, architecture studies of both the SITA and Inmarsat networks will be conducted.

  11. The meteorological data distribution mission on board SIRIO satellite

    NASA Astrophysics Data System (ADS)

    Sartini, L.

    The SIRIO 2 meteorological data distribution mission and the related telecommunication package, comprising an S-band repeater and a despun antenna system assembly, are described. The repeater assembly receives and amplifies signals via a low noise amplifier, down-converts them to an intermediate frequency (IF) and amplifies them at IF; it generates a TLM carrier phase-modulated by the PCM signals from the on-board telemetry encoder, sums the IF signal and the TLM carrier, up-converts them to an S-band frequency, and provides power amplification of the composite signal for retransmission. The despun antenna system functions are: to provide a beamwidth at the receive and transmit frequency bands sufficient to meet the coverage requirements; to provide gain over the coverage area at the receive and transmit frequencies; and to acquire and maintain the pointing of the beam towards Earth throughout the mission lifetime by despinning the antenna structure with a rotary motion having the same angular speed as the satellite but opposite direction.

  12. DAMPE silicon tracker on-board data compression algorithm

    NASA Astrophysics Data System (ADS)

    Dong, Yi-Fan; Zhang, Fei; Qiao, Rui; Peng, Wen-Xi; Fan, Rui-Rui; Gong, Ke; Wu, Di; Wang, Huan-Yu

    2015-11-01

    The Dark Matter Particle Explorer (DAMPE) is an upcoming scientific satellite mission for high energy gamma-ray, electron and cosmic ray detection. The silicon tracker (STK) is a subdetector of the DAMPE payload. It has excellent position resolution (readout pitch of 242 μm), and measures the incident direction of particles as well as charge. The STK consists of 12 layers of Silicon Micro-strip Detector (SMD), equivalent to a total silicon area of 6.5 m2. The total number of readout channels of the STK is 73728, which leads to a huge amount of raw data to be processed. In this paper, we focus on the on-board data compression algorithm and procedure in the STK, and show the results of initial verification by cosmic-ray measurements. Supported by Strategic Priority Research Program on Space Science of Chinese Academy of Sciences (XDA040402) and National Natural Science Foundation of China (1111403027)
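The abstract does not detail the compression algorithm itself. As a hedged illustration, silicon-strip readout is commonly compressed by zero suppression: subtract each channel's pedestal and keep only channels above a threshold. The function below is a generic sketch of that idea, not the actual DAMPE STK algorithm, and all numbers are made up.

```python
# Generic zero-suppression sketch for silicon-strip readout.
# Keeps only (channel, signal) pairs above pedestal + threshold;
# parameters are illustrative, not DAMPE STK values.

def compress(adc, pedestal, threshold=5):
    """Return (channel, signal) pairs for channels above pedestal+threshold."""
    out = []
    for ch, (raw, ped) in enumerate(zip(adc, pedestal)):
        signal = raw - ped
        if signal > threshold:
            out.append((ch, signal))
    return out

adc      = [10, 11, 60, 75, 12, 10]   # raw ADC counts for six strips
pedestal = [10, 10, 10, 10, 10, 10]   # per-channel baselines
print(compress(adc, pedestal))        # [(2, 50), (3, 65)]
```

With 73728 channels and only a handful of strips hit per event, discarding below-threshold channels is what makes the raw data volume manageable on board.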

  13. An on board microcomputer for a space instrument

    NASA Astrophysics Data System (ADS)

    Strommer, Esko

    An on-board microcomputer for a plasma momentum and ion composition analyzer has been designed and constructed by the Technical Research Centre of Finland and the Finnish Meteorological Institute. The plasma analyzer was designed primarily by the Kiruna Geophysical Institute in Sweden and is expected to be launched aboard the Soviet Phobos spacecraft toward Mars in 1988. The microcomputer handles the measurement results and controls the plasma analyzer. Its main parts are a 16-bit microprocessor, a program memory of 8 kilowords, a RAM of 16 kilowords, and a watchdog-based self-monitoring facility. Reliability aspects and the special requirements of the space environment were taken into account in the design.

  14. Advanced On-Board Processor (AOP). [for future spacecraft applications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Advanced On-board Processor (AOP) uses large scale integration throughout and is the most advanced space qualified computer of its class in existence today. It was designed to satisfy most spacecraft requirements anticipated over the next several years. The AOP design utilizes custom metallized multigate arrays (CMMA) which have been designed specifically for this computer. This approach provides the most efficient use of circuits; it reduces volume, weight and assembly costs and provides a significant increase in reliability through the reduction in conventional circuit interconnections. The required 69 CMMA packages are assembled on a single multilayer printed circuit board which, together with associated connectors, constitutes the complete AOP. This approach also reduces conventional interconnections, further reducing weight, volume and assembly costs.

  15. COMSAT Laboratories' on-board baseband switch development

    NASA Technical Reports Server (NTRS)

    Pontano, B. A.; Redman, W. A.; Inukai, Thomas; Razdan, R.; Paul, D. K.

    1991-01-01

    Work performed at COMSAT Laboratories to develop a prototype on-board baseband switch is summarized. The switch design is modular to accommodate different service types, and the architecture features a high-speed optical ring operating at 1 Gbit/s to route input (up-link) channels to output (down-link) channels. The switch is inherently a packet switch, but can process either circuit-switched or packet-switched traffic. If the traffic arrives at the satellite in a circuit-switched mode, the input processor packetizes it and passes it on to the switch. The main advantage of the packet approach lies in its simplified control structure. Details of the switch architecture and design, and the status of its implementation, are presented.

  16. On-board attitude determination for the Topex satellite

    NASA Technical Reports Server (NTRS)

    Dennehy, C. J.; Ha, K.; Welch, R. V.; Kia, T.

    1989-01-01

    This paper presents an overall technical description of the on-board attitude determination system for The Ocean Topography Experiment (Topex) satellite. The stellar-inertial attitude determination system being designed for the Topex satellite utilizes data from a three-axis NASA Standard DRIRU-II as well as data from an Advanced Star Tracker (ASTRA) and a Digital Fine Sun Sensor (DFSS). This system is a modified version of the baseline Multimission Modular Spacecraft (MMS) concept used on the Landsat missions. Extensive simulation and analysis of the MMS attitude determination approach were performed to verify suitability for the Topex application. The modifications to this baseline attitude determination scheme were identified to satisfy the unique Topex mission requirements.
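The stellar-inertial blending described above can be sketched on a single axis: gyro rates (the DRIRU-II role) are integrated for continuous attitude, and occasional star-tracker fixes (the ASTRA role) bound the accumulated drift. The gains, rates, and update scheme below are illustrative assumptions, not the Topex design.

```python
# One-axis stellar-inertial sketch: fast gyro propagation, slow
# star-tracker corrections. All numbers are illustrative assumptions.

def propagate(theta, omega, dt):
    return theta + omega * dt                 # integrate gyro rate

def star_update(theta, theta_star, k=0.2):
    return theta + k * (theta_star - theta)   # blend in tracker fix

theta, truth = 0.0, 0.0
for step in range(1000):
    truth += 0.01                             # true rate 0.01 rad/s, dt = 1 s
    theta = propagate(theta, 0.0101, 1.0)     # gyro with a 1% scale error
    if step % 100 == 99:                      # tracker fix every 100 s
        theta = star_update(theta, truth)
print(f"final attitude error: {abs(theta - truth):.4f} rad")
```

Without the tracker updates the 1% gyro error would drift without bound; with them the error settles to a small bounded value, which is the essential property of a stellar-inertial system.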

  17. Controlled impact demonstration on-board (interior) photographic system

    NASA Technical Reports Server (NTRS)

    May, C. J.

    1986-01-01

    Langley Research Center (LaRC) was responsible for the design, manufacture, and integration of all hardware required for the photographic system used to film the interior of the controlled impact demonstration (CID) B-720 aircraft during actual crash conditions. Four independent power supplies were constructed to operate the ten high-speed 16 mm cameras and twenty-four floodlights. An up-link command system, furnished by Ames Dryden Flight Research Facility (ADFRF), was necessary to activate the power supplies and start the cameras. These events were accomplished by initiation of relays located on each of the photo power pallets. The photographic system performed beyond expectations. All four power distribution pallets with their 20 year old Minuteman batteries performed flawlessly. All 24 lamps worked. All ten on-board high speed (400 fps) 16 mm cameras containing good resolution film data were recovered.

  18. Predictive NOx emission monitoring on board a passenger ferry

    NASA Astrophysics Data System (ADS)

    Cooper, D. A.; Andreasson, K.

    NOx emissions from a medium speed diesel engine on board an in-service passenger ferry have been indirectly measured using a predictive emission monitoring system (PEMS) over a 1-yr period. Conventional NOx measurements were carried out with a continuous emission monitoring system (CEMS) at the start of the study to provide historical data for the empirical PEMS function. On three other occasions during the year the CEMS was also used to verify the PEMS and follow any changes in the emission signature of the engine. The PEMS consisted of monitoring exhaust O2 concentrations (in situ electrochemical probe), engine load, combustion air temperature and humidity, and barometric pressure. Practical experiences with the PEMS equipment were positive, and measurement data were transferred to a land-based office over a modem data link. The initial PEMS function (PEMS1) gave systematic differences of 1.1-6.9% of the calibration domain (0-1725 ppm) and a relative accuracy of 6.7% when compared with CEMS for whole journeys and varying load situations. Further improvements in performance could be obtained by updating this function. The calculated yearly emission for a total engine running time of 4618 h was 316 ± 38 t NOx, and the average NOx emission corrected for ambient conditions was 14.3 g/kWh. The exhaust profile of the engine in terms of NOx, CO and CO2 emissions as determined by CEMS was similar for most of the year. Towards the end of the study period, a significantly lower NOx emission was detected, probably caused by replacement of fuel injector nozzles. The study suggests that PEMS can be a viable option for continuous, long-term NOx measurements on board ships.
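The empirical PEMS function described above maps routinely monitored engine and ambient parameters to NOx, calibrated once against CEMS data. A minimal sketch using synthetic data and an assumed linear model follows; the actual PEMS1 function is not specified in the abstract.

```python
import numpy as np

# Hedged PEMS sketch: fit an empirical function from engine/ambient
# parameters to CEMS-measured NOx, then predict NOx without an analyzer.
# All data and coefficients here are synthetic assumptions.

rng = np.random.default_rng(0)
load = rng.uniform(0.3, 1.0, 50)      # engine load fraction
o2   = rng.uniform(0.10, 0.15, 50)    # exhaust O2 fraction
temp = rng.uniform(280, 310, 50)      # combustion air temperature, K
nox  = 900*load - 3000*o2 + 1.5*temp + rng.normal(0, 5, 50)  # "CEMS" data

# Calibrate the empirical PEMS function by least squares.
X = np.column_stack([load, o2, temp, np.ones_like(load)])
coef, *_ = np.linalg.lstsq(X, nox, rcond=None)

pred = X @ coef                        # PEMS prediction, no analyzer needed
rel_err = np.abs(pred - nox).mean() / nox.mean()
print(f"mean relative error: {rel_err:.1%}")
```

Periodic CEMS campaigns, as in the study, would re-verify and refit `coef` to follow changes in the engine's emission signature.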

  19. Automatic maintenance payload on board a Mexican LEO microsatellite

    NASA Astrophysics Data System (ADS)

    Vicente-Vivas, Esaú; García-Nocetti, Fabián; Mendieta-Jiménez, Francisco

    2006-02-01

    A few research institutions in Mexico are working together to finalize the integration of a technology demonstration microsatellite called Satex, aiming at the launch of the first fully domestically designed and manufactured space vehicle. The project builds on technical knowledge gained in previous space experience, particularly in developing GASCAN automatic experiments for NASA's space shuttle, and on support from the local team that assembled the México-OSCAR-30 microsatellites. Satex includes three autonomous payloads and a power subsystem, each with a local microcomputer providing intelligent, dedicated control. It also contains a flight computer (FC) with a pair of full redundancies, which enables remote maintenance of processing boards from the ground station. A fourth, communications payload depends on the flight computer for control purposes. A fifth payload was later added to the satellite: it adds value to the available on-board computers and extends the opportunity for a developing country to learn and to generate domestic space technology. Its aim is to provide automatic maintenance capabilities for the most critical on-board computer in order to achieve continuous satellite operations. This paper presents the virtual computer architecture specially developed to provide maintenance capabilities to the flight computer. The architecture is periodically implemented in software with a small number of physical processors (FC processors) and virtual redundancies (payload processors) to emulate a hybrid-redundancy computer. Communications among processors are accomplished over a fault-tolerant LAN, which allows versatile operating behavior in terms of both data communication and distributed fault tolerance. Obtained results, payload validation and reliability results are also presented.
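The hybrid-redundancy idea, in which payload processors serve as virtual spares for the flight computer, rests on majority voting over replicated results. A minimal sketch under assumed names, not the Satex implementation:

```python
from collections import Counter

# Minimal hybrid-redundancy sketch: a critical computation is replicated
# on the flight computer and on payload processors acting as virtual
# spares; majority voting masks a single faulty result. Names and the
# three-replica setup are illustrative assumptions.

def vote(results):
    """Return the majority value among replicated computation results."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: cannot mask the fault")
    return value

# Flight computer + two payload processors; one returns a corrupted value.
assert vote([42, 42, 7]) == 42
```

In the paper's architecture the replicas would exchange these results over the fault-tolerant LAN, so the voting itself is distributed rather than centralized as in this sketch.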

  20. Corporate sponsored education initiatives on board the ISS

    NASA Astrophysics Data System (ADS)

    Durham, Ian T.; Durham, Alyson S.; Pawelczyk, James A.; Brod, Lawrence B.; Durham, Thomas F.

    1999-01-01

    This paper proposes the creation of a corporate sponsored ``Lecture from Space'' program on board the International Space Station (ISS) with funding coming from a host of new technology and marketing spin-offs. This program would meld existing education initiatives in NASA with new corporate marketing techniques. Astronauts in residence on board the ISS would conduct short ten to fifteen minute live presentations and/or conduct interactive discussions carried out by a teacher in the classroom. This concept is similar to a program already carried out during the Neurolab mission on Shuttle flight STS-90. Building on that concept, the interactive simulcasts would be broadcast over the Internet and linked directly to computers and televisions in classrooms worldwide. In addition to the live broadcasts, educational programs and demonstrations can be recorded in space, and marketed and sold for inclusion in television programs, computer software, and other forms of media. Programs can be distributed directly into classrooms as an additional presentation supplement, as well as over the Internet or through cable and broadcast television, similar to the Canadian Discovery Channel's broadcasts of the Neurolab mission. Successful marketing and advertisement can eventually lead to the creation of an entirely new, privately run cottage industry involving the distribution and sale of educationally related material associated with the ISS that would have the potential to become truly global in scope. By targeting areas of expertise and research interest in microgravity, a large curriculum could be developed using space exploration as a unifying theme. Expansion of this concept could enhance objectives already initiated through the International Space University to include elementary and secondary school students. The ultimate goal would be to stimulate interest in space and space related sciences in today's youth through creative educational marketing initiatives while at the

  1. Challenge of lightning detection with LAC on board Akatsuki spacecraft

    NASA Astrophysics Data System (ADS)

    Takahashi, Yukihiro; Sato, Mitsutero; Imai, Masataka; Yair, Yoav; Fischer, Georg; Aplin, Karen

    2016-04-01

    Even after extensive investigations with spacecraft and ground-based observations, there is still no consensus on the existence of lightning on Venus. It has been reported that the magnetometer on board Venus Express detected whistler-mode waves whose source could be lightning discharges occurring well below the spacecraft. On the other hand, the VIRTIS infrared sensor on Venus Express shows no positive indication of lightning flashes. In order to identify optical flashes caused by electrical discharges in the atmosphere of Venus with an optical intensity of at least 1/10 of the average lightning on Earth, we built a high-speed optical detector, LAC (Lightning and Airglow Camera), on board the Akatsuki spacecraft. The unique capability of the LAC compared to other instruments is its high-speed sampling at 32 μs intervals for all 32 pixels, enabling us to distinguish optical lightning flashes from other pulsing noises. Although the first attempt to insert Akatsuki into orbit around Venus failed in December 2010, the second attempt, carried out on December 7, 2015, was successful. We checked the condition of the LAC on January 5, 2016, and it is as healthy as it was in 2010. Because the orbit is more elongated than originally planned, we have an umbra of ~30 min in which to observe lightning flashes on the night side of Venus every ~10 days, starting in April 2016. Here we report the instrument status of the LAC and the preliminary results of the first attempt to observe optical lightning emissions.

  2. The Burst Monitor (GBM) on-board GLAST

    NASA Astrophysics Data System (ADS)

    Georgii, R.

    The Gamma-ray Large-Area Space Telescope (GLAST) will be the next major NASA mission for high-energy γ-ray astronomy after EGRET. Presently the launch is foreseen for the end of 2005. Its scientific objective will be to observe AGNs, pulsars, SN remnants and interactions of cosmic rays with the interstellar medium from 10 MeV to 300 GeV. Another important objective will be the study of γ-ray burst spectra and time profiles at the high-energy end. A Burst Monitor (GBM) will be on board GLAST and will be built by a collaboration of MSFC/UAH and the MPE to enhance the γ-ray burst-detection capability of GLAST considerably. It will measure burst spectra between 5 keV and 30 MeV with an energy resolution between ≈3% (at 20 MeV) and ≈50% (at 5 keV). Thus an energy range of more than 6 decades will be accessible in burst spectra for the first time. Moreover, it will measure light curves with an absolute time accuracy of 10 μs. Furthermore, the GBM will provide an on-board position to the main instrument for repointing purposes, allowing observation of a burst with the main telescope within 10 minutes. Through an energy range similar to that of BATSE, continuity with the large database of γ-ray burst-spectra parameters can be achieved, putting the expected high-energy emission in a better context. In this talk the scientific goals of the GBM and its technical realisation will be presented.

  3. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.
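The explicit-time-step restriction described above can be written compactly; here Δx is the cell size, v_max the fastest wave speed in the cell (for MHD, the fast magnetosonic speed), and C ≤ 1 the Courant number:

```latex
% Courant-Friedrichs-Lewy (CFL) condition: information may not travel
% more than one cell per time step, so the time step shrinks with the
% cell size -- the non-linear penalty for refined grids noted above.
\[
  \Delta t \;\le\; C \, \frac{\Delta x}{v_{\max}},
  \qquad C \le 1 .
\]
```

This is why halving the cell size more than doubles the cost of an explicit run, and why the implicit scheme developed under the grant removes the dominant bottleneck.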

  4. High-Speed On-Board Data Processing for Science Instruments: HOPS

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort of HOPS, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of interest in particular. As for ASCENDS, HOPS replaces time domain data processing with frequency domain processing while making real-time on-board data processing possible. As for 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP Core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec while the typical maximum processor-to-SDRAM bandwidth of the commercial radiation tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec while the effective processor-to-cPCI bandwidth of commercial radiation tolerant high-end boards is about 50-75 MB/sec. Also, HOPS offers VHDL cores for the easy and efficient implementation of ASCENDS and 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.
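The frequency-domain processing the project describes can be sketched with a 4096-point FFT: a Doppler shift appears as a spectral peak whose bin index gives the frequency. The signal parameters below are synthetic assumptions, not ASCENDS or 3-D Winds data.

```python
import numpy as np

# Sketch of FFT-based Doppler retrieval: a 4096-point FFT of a return
# signal locates the Doppler shift as a spectral peak. Sample rate,
# shift, and noise level are illustrative assumptions.

N, fs = 4096, 1.0e6                  # FFT length, sample rate (Hz)
f_doppler = 50.0e3                   # simulated Doppler shift (Hz)
t = np.arange(N) / fs
rng = np.random.default_rng(1)
signal = np.cos(2*np.pi*f_doppler*t) + 0.1*rng.normal(size=N)

spectrum = np.abs(np.fft.rfft(signal))
peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
f_est = peak_bin * fs / N                     # bin index -> frequency
print(f"estimated Doppler shift: {f_est/1e3:.1f} kHz")
```

On HOPS this transform runs in FPGA fabric rather than software, which is where the quoted 20 GB/s memory bandwidth matters: the FFT must keep up with the instrument's raw sample stream in real time.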

  5. High Performance Database Management for Earth Sciences

    NASA Technical Reports Server (NTRS)

    Rishe, Naphtali; Barton, David; Urban, Frank; Chekmasov, Maxim; Martinez, Maria; Alvarez, Elms; Gutierrez, Martha; Pardo, Philippe

    1998-01-01

    The High Performance Database Research Center at Florida International University is completing the development of a highly parallel database system based on the semantic/object-oriented approach. This system provides exceptional usability and flexibility. It allows shorter application design and programming cycles and gives the user control via an intuitive information structure. It empowers the end-user to pose complex ad hoc decision support queries. Superior efficiency is provided through a high level of optimization, which is transparent to the user. Manifold reduction in storage size is allowed for many applications. This system allows for operability via internet browsers. The system will be used for the NASA Applications Center program to store remote sensing data, as well as for Earth Science applications.

  6. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  7. High-performance capillary electrophoresis of histones

    SciTech Connect

    Gurley, L.R.; London, J.E.; Valdez, J.G.

    1991-01-01

    A high performance capillary electrophoresis (HPCE) system has been developed for the fractionation of histones. This system involves electroinjection of the sample and electrophoresis in a 0.1 M phosphate buffer at pH 2.5 in a 50 μm × 35 cm coated capillary. Electrophoresis was accomplished in 9 minutes, separating a whole histone preparation into its components in the following order of decreasing mobility: (MHP) H3, H1 (major variant), H1 (minor variant), (LHP) H3, (MHP) H2A (major variant), (LHP) H2A, H4, H2B, (MHP) H2A (minor variant), where MHP is the more hydrophobic component and LHP is the less hydrophobic component. This order of separation is very different from that found in acid-urea polyacrylamide gel electrophoresis and in reversed-phase HPLC and, thus, brings the histone biochemist a new dimension for the qualitative analysis of histone samples. 27 refs., 8 figs.

  8. How to create high-performing teams.

    PubMed

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element should be in any superior culture. PMID:20127598

  9. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
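The abstract's two building blocks, smoothing and coarse-grid correction, can be illustrated with a geometric two-grid cycle for the 1D Poisson problem; real AMG instead constructs the coarse level algebraically from the matrix entries. This is a textbook sketch, not any of the surveyed parallel implementations.

```python
import numpy as np

# Geometric two-grid sketch of the multigrid principle behind AMG:
# smooth on the fine grid, solve the residual equation on a coarse
# grid, interpolate the correction back, and smooth again.

def poisson(n):
    """1D Poisson matrix -u'' with Dirichlet BCs, mesh width h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
            - np.diag(np.ones(n-1), -1)) / h**2

def jacobi(A, b, x, sweeps, omega=2/3):
    """Weighted-Jacobi smoother: damps high-frequency error."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid(A, b, x, Ac, P):
    x = jacobi(A, b, x, 3)              # pre-smooth
    r = b - A @ x                       # fine-grid residual
    ec = np.linalg.solve(Ac, P.T @ r)   # coarse-grid correction (exact)
    x = x + P @ ec                      # interpolate correction back
    return jacobi(A, b, x, 3)           # post-smooth

n = 31                                  # fine grid; coarse grid has 15 points
A = poisson(n)
# Linear interpolation P: coarse point i sits at fine point 2i+1.
P = np.zeros((n, n // 2))
for i in range(n // 2):
    P[2*i, i], P[2*i+1, i], P[2*i+2, i] = 0.5, 1.0, 0.5
Ac = P.T @ A @ P                        # Galerkin coarse-level operator

b, x = np.ones(n), np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x, Ac, P)
print("residual norm:", np.linalg.norm(b - A @ x))
```

The parallelization difficulty the chapter addresses lives exactly in the steps this sketch does trivially: choosing the coarse variables (here hard-coded geometrically) and smoothing (here Jacobi, which parallelizes well but smooths less effectively than the sequential Gauss-Seidel variants).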

  10. High-performance parallel input device

    NASA Astrophysics Data System (ADS)

    Daniel, R. W.; Fischer, Patrick J.; Hunter, B.

    1993-12-01

    Research into force reflecting remote manipulation has recently started to move away from common error systems towards explicit force control. In order to maximize the benefit provided by explicit force reflection the designer has to take into account the asymmetry of the bandwidths of the forward and reflecting loops. This paper reports on a high performance system designed and built at Oxford University and Harwell Laboratories and on the preliminary results achieved when performing simple force reflecting tasks. The input device is based on a modified Stewart Platform, which offers the potential of very high bandwidth force reflection, well above the normal 2 - 10 Hz range achieved with common error systems. The slave is a nuclear hardened Puma industrial robot, offering a low cost, reliable solution to remote manipulation tasks.

  11. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  12. High performance railgun barrels for laboratory use

    NASA Astrophysics Data System (ADS)

    Bauer, David P.; Newman, Duane C.

    1993-01-01

    High performance, low-cost laboratory railgun barrels are now available, comprising an inherently stiff containment structure that surrounds bore components machined from off-the-shelf materials. The shape of the containment structure was selected to make the barrel inherently stiff. The structure consists of stainless steel laminations which do not compromise the electrical efficiency of the railgun. The modular design enhances the utility of the barrel, as it is easy to service between shots, and can be 're-cored' to produce different configurations and sizes using the same structure. We have produced barrels ranging from 15 mm to 90 mm square bore, a 30 mm round bore, and in lengths varying from 0.25 meters to 10 meters. Successful tests with both plasma and solid metal armatures have demonstrated the versatility and performance of this design.

  13. Advanced solidification system using high performance cement

    SciTech Connect

    Kikuchi, Makoto; Matsuda, Masami; Nishi, Takashi; Tsuchiya, Hiroyuki; Izumida, Tatsuo

    1995-12-31

Advanced cement solidification is proposed for the solidification of radioactive waste such as spent ion exchange resin, incineration ash and liquid waste. A new, high performance cement has been developed to raise volume reduction efficiency and lower radioactivity release into the environment. It consists of slag cement, reinforcing fiber, natural zeolite and lithium nitrate (LiNO{sub 3}). The fiber allows waste loading to be increased from 20 to 55 kg-dry resin/200 L. The zeolite, whose main constituent is clinoptilolite, reduces cesium leachability from the waste form to about 1/10. Lithium nitrate prevents alkaline corrosion of the aluminum contained in ash and reduces hydrogen gas generation. Laboratory and full-scale pilot plant experiments were performed to evaluate properties of the waste form, using simulated wastes. Emphasis was placed on improving the solidification of spent resin and ash.

  14. Development of a high performance peristaltic micropump

    NASA Astrophysics Data System (ADS)

    Pham, My; Goo, Nam Seo

    2008-03-01

In this study, a high performance peristaltic micropump has been developed and investigated. The micropump has three cylindrical chambers connected through micro-channels for high pumping-pressure performance. A circular mini LIPCA has been designed and manufactured as the actuating diaphragm, using a 0.1 mm-thick PZT ceramic as the active layer. The actuator was shown to produce large out-of-plane deflection while consuming little power. During the design process, a coupled-field analysis was conducted to predict the actuating behavior of the diaphragm and the pumping performance. MEMS techniques were used to fabricate the peristaltic micropump. Pumping performance of the present micropump was investigated both numerically and experimentally, and it was shown to outperform micropumps of the same kind developed elsewhere.

  15. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  16. Creating high performance buildings: Lower energy, better comfort

    NASA Astrophysics Data System (ADS)

    Brager, Gail; Arens, Edward

    2015-03-01

Buildings play a critical role in the challenge of mitigating and adapting to climate change. It is estimated that buildings contribute 39% of the total U.S. greenhouse gas (GHG) emissions [1] primarily due to their operational energy use, and about 80% of this building energy use is for heating, cooling, ventilating, and lighting. An important premise of this paper is the connection between energy and comfort. They are inseparable when one talks about high performance buildings. Worldwide data suggest that we are significantly overcooling buildings in the summer, resulting in increased energy use and problems with thermal comfort. In contrast, in naturally ventilated buildings without mechanical cooling, people are comfortable in much warmer temperatures due to shifting expectations and preferences as a result of occupants having a greater degree of personal control over their thermal environment; they have also become more accustomed to variable conditions that closely reflect the natural rhythms of outdoor climate patterns. This has resulted in an adaptive comfort zone that offers significant potential for encouraging naturally ventilated buildings to improve both energy use and comfort. Research on other means of providing individualized control through low-energy personal comfort systems (desktop fans, foot warmers, and heated and cooled chairs) has also demonstrated enormous potential for improving both energy and comfort performance. Studies have demonstrated high levels of comfort with these systems while ambient temperatures ranged from 64-84°F. Energy and indoor environmental quality are inextricably linked, and must both be important goals of a high performance building.

  17. Creating high performance buildings: Lower energy, better comfort

    SciTech Connect

    Brager, Gail; Arens, Edward

    2015-03-30

Buildings play a critical role in the challenge of mitigating and adapting to climate change. It is estimated that buildings contribute 39% of the total U.S. greenhouse gas (GHG) emissions [1] primarily due to their operational energy use, and about 80% of this building energy use is for heating, cooling, ventilating, and lighting. An important premise of this paper is the connection between energy and comfort. They are inseparable when one talks about high performance buildings. Worldwide data suggest that we are significantly overcooling buildings in the summer, resulting in increased energy use and problems with thermal comfort. In contrast, in naturally ventilated buildings without mechanical cooling, people are comfortable in much warmer temperatures due to shifting expectations and preferences as a result of occupants having a greater degree of personal control over their thermal environment; they have also become more accustomed to variable conditions that closely reflect the natural rhythms of outdoor climate patterns. This has resulted in an adaptive comfort zone that offers significant potential for encouraging naturally ventilated buildings to improve both energy use and comfort. Research on other means of providing individualized control through low-energy personal comfort systems (desktop fans, foot warmers, and heated and cooled chairs) has also demonstrated enormous potential for improving both energy and comfort performance. Studies have demonstrated high levels of comfort with these systems while ambient temperatures ranged from 64–84°F. Energy and indoor environmental quality are inextricably linked, and must both be important goals of a high performance building.

  18. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach.
The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
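The basket-scheduling idea sketched in this abstract can be illustrated roughly as follows (a minimal sketch with invented names and numbers, not the Geant-V API): tracks are grouped into fixed-size vectors ("baskets") and the baskets are dispatched to a pool of workers.

```python
# Rough illustration of fine-grained track-level parallelism: group particle
# tracks into fixed-size baskets and schedule the baskets across workers.
# All names and values here are hypothetical, for illustration only.

from concurrent.futures import ThreadPoolExecutor

BASKET_SIZE = 4

def make_baskets(tracks, size=BASKET_SIZE):
    """Split a flat list of tracks into vectors ("baskets") of fixed size."""
    return [tracks[i:i + size] for i in range(0, len(tracks), size)]

def transport_basket(basket):
    # Stand-in for vectorised particle transport: advance every track's
    # energy by one step; a real engine would loop over geometry/physics.
    return [{"id": t["id"], "energy": t["energy"] * 0.9} for t in basket]

tracks = [{"id": i, "energy": 100.0} for i in range(10)]
baskets = make_baskets(tracks)

# Each worker processes whole baskets, so per-task overhead is amortised
# over a vector of similar work items.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(transport_basket, baskets))

transported = [t for basket in results for t in basket]
print(len(baskets), len(transported))  # 3 baskets (4+4+2) covering 10 tracks
```

The basket size trades scheduling overhead against load balance; processing tails (the last, partially filled basket) are the sub-optimal case the abstract alludes to.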

  19. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  20. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the capacity fading or failure that results from stress-induced fracturing of the Si particles and their de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  1. Small-Scale High-Performance Optics

    SciTech Connect

    WILSON, CHRISTOPHER W.; LEGER, CHRIS L.; SPLETZER, BARRY L.

    2002-06-01

    Historically, high resolution, high slew rate optics have been heavy, bulky, and expensive. Recent advances in MEMS (Micro Electro Mechanical Systems) technology and micro-machining may change this. Specifically, the advent of steerable sub-millimeter sized mirror arrays could provide the breakthrough technology for producing very small-scale high-performance optical systems. For example, an array of steerable MEMS mirrors could be the building blocks for a Fresnel mirror of controllable focal length and direction of view. When coupled with a convex parabolic mirror the steerable array could realize a micro-scale pan, tilt and zoom system that provides full CCD sensor resolution over the desired field of view with no moving parts (other than MEMS elements). This LDRD provided the first steps towards the goal of a new class of small-scale high-performance optics based on MEMS technology. A large-scale, proof of concept system was built to demonstrate the effectiveness of an optical configuration applicable to producing a small-scale (< 1cm) pan and tilt imaging system. This configuration consists of a color CCD imager with a narrow field of view lens, a steerable flat mirror, and a convex parabolic mirror. The steerable flat mirror directs the camera's narrow field of view to small areas of the convex mirror providing much higher pixel density in the region of interest than is possible with a full 360 deg. imaging system. Improved image correction (dewarping) software based on texture mapping images to geometric solids was developed. This approach takes advantage of modern graphics hardware and provides a great deal of flexibility for correcting images from various mirror shapes. An analytical evaluation of blur spot size and axi-symmetric reflector optimization were performed to address depth of focus issues that occurred in the proof of concept system. The resulting equations will provide the tools for developing future system designs.

  2. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut-resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel, and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims to achieve maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to preserve the main advantage of high performance fibers: their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that an LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments: the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. A membrane of polyurethane loaded with 4

  3. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment.
That is, the system is able to react
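The decomposition mechanism described in this abstract can be sketched roughly as follows (a minimal illustration with hypothetical goals and conditions, not the paper's implementation): a goal carries an activation condition and an ordered list of decompositions, and the first decomposition whose gating condition holds is executed.

```python
# Illustrative sketch of a reactive goal decomposition hierarchy. A goal has
# an activation condition (global precondition) and ordered decompositions,
# each guarded by a gating condition; the first open gate wins. Leaf goals
# execute an action directly and pass a termination indication upward.

class Goal:
    def __init__(self, name, activation, decompositions, action=None):
        self.name = name
        self.activation = activation          # state -> bool
        self.decompositions = decompositions  # list of (gate, [subgoals])
        self.action = action                  # leaf-level executable, if any

    def execute(self, state):
        """Return 'success' or 'failure' as the termination indication."""
        if not self.activation(state):
            return "failure"
        if self.action is not None:           # leaf goal: run directly
            return self.action(state)
        # Evaluate gating conditions in order; run the first open branch,
        # executing its subgoals sequentially.
        for gate, subgoals in self.decompositions:
            if gate(state):
                for sub in subgoals:
                    if sub.execute(state) == "failure":
                        return "failure"
                return "success"
        return "failure"                      # no decomposition applicable

# Hypothetical example: point an instrument, slewing only if needed.
state = {"pointing_error": 0.4, "power_ok": True}

def slew(s):
    s["pointing_error"] = 0.0
    return "success"

hold = Goal("hold", lambda s: True, [], action=lambda s: "success")
slew_goal = Goal("slew", lambda s: s["power_ok"], [], action=slew)
point = Goal(
    "point_instrument",
    lambda s: s["power_ok"],                          # activation condition
    [
        (lambda s: s["pointing_error"] < 0.1, [hold]),  # already aligned
        (lambda s: True, [slew_goal, hold]),            # otherwise slew first
    ],
)
print(point.execute(state))  # -> success
```

Because the conditions re-read the live state on every evaluation, the same hierarchy reacts differently as the system and environment change, which is the reactivity the abstract describes.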

  4. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

In recent years, massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. The corresponding software packages dedicated to forward and inverse waveform modelling, specially designed for such computers (SPECFEM3D, SES3D), have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with
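The inversion state database idea in this abstract can be sketched as follows (assumed stage names and structure, not the SISYPHUS API): every (event, stage) pair carries a persistent status, so a crashed or resumed workflow re-runs only unfinished work.

```python
# Minimal sketch of a workflow state table for a seismic inversion: each
# event passes through ordered stages, and the scheduler always hands out
# the first still-pending (event, stage) pair. Stage names are hypothetical.

STAGES = ["forward_simulation", "preprocess", "misfit_adjoint", "gradient"]

class InversionState:
    def __init__(self, events):
        # status table: event -> stage -> "pending" | "done"
        self.table = {e: {s: "pending" for s in STAGES} for e in events}

    def next_task(self):
        """Return the first pending (event, stage), respecting stage order."""
        for event, stages in self.table.items():
            for stage in STAGES:
                if stages[stage] == "pending":
                    return event, stage
        return None

    def mark_done(self, event, stage):
        self.table[event][stage] = "done"

state = InversionState(["event_001", "event_002"])
while (task := state.next_task()) is not None:
    event, stage = task
    # A real system would submit a solver or processing job here and
    # persist the status change so the workflow survives restarts.
    state.mark_done(event, stage)

print(all(s == "done" for st in state.table.values() for s in st.values()))
```

A production version would persist the table (e.g. in a database) rather than keep it in memory; the point of the sketch is only the bookkeeping that makes the workflow restartable.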

  5. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called, the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access to this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as its future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

6. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

The Bechtel Waste Treatment Project (WTP), located in Richland, WA, comprises many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce added costs, during and after construction, that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer-aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made, which increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use, and only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer’s resources. From the perspective of Bechtel Corporation and the Idaho

  7. High performance vapour-cell frequency standards

    NASA Astrophysics Data System (ADS)

    Gharavipour, M.; Affolderbach, C.; Kang, S.; Bandi, T.; Gruet, F.; Pellaton, M.; Mileti, G.

    2016-06-01

We report our investigations on a compact high-performance rubidium (Rb) vapour-cell clock based on microwave-optical double-resonance (DR). These studies are done in both DR continuous-wave (CW) and Ramsey schemes using the same Physics Package (PP), with the same Rb vapour cell and a magnetron-type cavity with only 45 cm^3 external volume. In the CW-DR scheme, we demonstrate a DR signal with a contrast of 26% and a linewidth of 334 Hz; in Ramsey-DR mode Ramsey signals with higher contrast up to 35% and a linewidth of 160 Hz have been demonstrated. Short-term stabilities of 1.4×10^-13 τ^-1/2 and 2.4×10^-13 τ^-1/2 are measured for CW-DR and Ramsey-DR schemes, respectively. In the Ramsey-DR operation, thanks to the separation of light and microwave interactions in time, the light-shift effect has been suppressed, which allows improving the long-term clock stability as compared to CW-DR operation. Implementations in miniature atomic clocks are considered.
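The quoted short-term stabilities follow the standard white-frequency-noise averaging law for passive frequency standards; a sketch of the relation, using conventional symbols rather than the paper's notation:

```latex
% Allan deviation of a passive vapour-cell clock in the white frequency
% noise regime: the fractional frequency instability averages down with
% the square root of the measurement time \tau.
\sigma_y(\tau) = \sigma_y(1\,\mathrm{s}) \cdot \tau^{-1/2},
\qquad \text{e.g. } \sigma_y(1\,\mathrm{s}) = 1.4\times10^{-13}
\;\Rightarrow\; \sigma_y(100\,\mathrm{s}) = 1.4\times10^{-14}.
```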

  8. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns on the approach to exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), power consumption may be controlled in software. Additionally, network interconnects such as InfiniBand may be exploited to maximize energy savings, while application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
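The per-phase frequency-scaling idea in this abstract can be illustrated roughly as follows (frequencies, power numbers, and the energy model are invented for illustration; this is not the paper's runtime): communication-bound phases tolerate a lower clock with little slowdown, so running them at a reduced frequency saves energy.

```python
# Illustrative sketch of phase-based DVFS: choose a CPU frequency per
# program phase, running communication-bound phases at a lower frequency.
# A real runtime would apply the choice via the platform's DVFS interface
# (e.g. Linux cpufreq); here we only compute the decision and a crude
# modelled energy. All numbers below are hypothetical.

FREQS_GHZ = [1.2, 1.8, 2.4]          # available P-states (assumed)

def pick_frequency(phase):
    """Communication phases stall on the network, so clock them down."""
    return FREQS_GHZ[0] if phase == "communication" else FREQS_GHZ[-1]

def modelled_energy(phases, base_power_w=80.0):
    # Crude model: dynamic power scales roughly with f^3 when voltage is
    # scaled with frequency, normalised to the top frequency.
    total = 0.0
    for kind, duration_s in phases:
        f = pick_frequency(kind)
        total += base_power_w * (f / FREQS_GHZ[-1]) ** 3 * duration_s
    return total

phases = [("computation", 2.0), ("communication", 1.0), ("computation", 1.0)]
naive = 80.0 * sum(d for _, d in phases)      # everything at top frequency
scaled = modelled_energy(phases)
print(f"{(1 - scaled / naive):.0%} modelled energy saved")
```

The real trade-off, as the abstract notes, is that frequency switches themselves cost time, so phases must be long enough to amortise the transition overhead.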

  9. High performance hand-held gas chromatograph

    SciTech Connect

    Yu, C.M.

    1998-04-28

The Microtechnology Center of Lawrence Livermore National Laboratory has developed a high performance hand-held, real-time detection gas chromatograph (HHGC) using Micro-Electro-Mechanical-System (MEMS) technology. The total weight of this hand-held gas chromatograph is about five lbs., with a physical size of 8" x 5" x 3" including carrier gas and battery. It consumes about 12 watts of electrical power, with a response time on the order of one to two minutes. This HHGC has an average effective theoretical plate count of about 40k. Presently, its sensitivity is limited to the ppm level by its thermally sensitive detector. Like a conventional GC, this HHGC consists mainly of three major components: (1) the sample injector, (2) the column, and (3) the detector with related electronics. The present HHGC injector is a modified version of the conventional injector. Its separation column is fabricated entirely on silicon wafers by means of MEMS technology and has a circular cross section with a diameter of 100 µm. The detector developed for this hand-held GC is a thermal conductivity detector fabricated on a silicon nitride window by MEMS technology. A normal Wheatstone bridge is used. The signal is fed into a PC and displayed through LabView software.
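The quoted plate count relates retention time and peak width through the standard chromatography definition (the relation is conventional, not taken from the abstract):

```latex
% Number of theoretical plates of a column, from a chromatogram peak:
% t_R is the retention time, w_b the peak width at base (4\sigma).
N = 16 \left( \frac{t_R}{w_b} \right)^{2}
```

For a fixed retention time, a higher plate count therefore means proportionally narrower, better-separated peaks.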

  10. High Performance Graphene Oxide Based Rubber Composites

    NASA Astrophysics Data System (ADS)

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-08-01

In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in preventing aggregation of the GO sheets but also acts as an interface bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable with those of an SBR composite reinforced with 13.1 vol.% carbon black (CB), together with a lower mass density and good gas barrier ability. The present work also showed that a GO-silica/SBR composite exhibited outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for green tire applications, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications.

  11. A high performance thin film thermoelectric cooler

    SciTech Connect

    Rowe, D.M.; Min, G.; Volklein, F.

    1998-07-01

    Thin film thermoelectric devices with small dimensions have been fabricated using microelectronics technology and operated successfully in the Seebeck mode as sensors or generators. However, they do not operate successfully in the Peltier mode as coolers, because of the thermal bypass provided by the relatively thick substrate upon which the thermoelectric device is fabricated. In this paper a processing sequence is described which dramatically reduces this thermal bypass and facilitates the fabrication of high performance integrated thin film thermoelectric coolers. In the processing sequence a very thin amorphous SiC (or SiO{sub 2}/Si{sub 3}N{sub 4}) film is deposited on a silicon substrate using conventional thin film deposition, and a membrane is formed by removing the silicon substrate over a desired region using chemical etching or micro-machining. Thermoelements are deposited on the membrane using conventional thin film deposition and patterning techniques and configured so that the region which is to be cooled abuts the cold junctions of the Peltier thermoelements while the hot junctions are located at the outer peripheral area which rests on the silicon substrate rim. Heat is pumped laterally from the cooled region to the silicon substrate rim and then dissipated vertically through it to an external heat sink. Theoretical calculations of the performance of the cooler described above indicate that a maximum temperature difference of about 40-50 K can be achieved with a maximum heat pumping capacity of around 10 milliwatts.
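    A temperature difference in the 40-50 K range is consistent with the standard ideal-cooler relation dT_max = Z * T_cold^2 / 2; a back-of-the-envelope sketch follows (the figure of merit Z and the cold-side temperature are assumed illustrative values, not taken from the paper):

```python
def delta_t_max(z, t_cold):
    """Maximum temperature difference of an ideal Peltier couple at
    zero heat load: dT_max = Z * T_cold**2 / 2."""
    return 0.5 * z * t_cold ** 2

# Assumed thin-film figure of merit Z = 1.2e-3 1/K and a cold-side
# temperature of 273 K (both illustrative)
dt = delta_t_max(1.2e-3, 273.0)  # about 45 K, within the quoted range
```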

  12. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and also in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance per dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  13. An integrated high performance Fastbus slave interface

    SciTech Connect

    Christiansen, J.; Ljuslin, C.

    1993-08-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock synchronous processor/memory bus. It can work stand-alone or together with a 32 bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy back sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208 pin FASIC chip.

  14. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault-tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than the other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
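    Two of the allocation policies compared above can be sketched in a few lines; this toy model (the class names and the queued-bytes heuristic are my assumptions, not the authors' implementation) contrasts static round-robin allocation with a dynamic choice based on local knowledge:

```python
from itertools import cycle

class RoundRobinAllocator:
    """Static policy: each message goes to the next rail in turn."""
    def __init__(self, n_rails):
        self._rails = cycle(range(n_rails))

    def allocate(self, msg_bytes):
        return next(self._rails)

class LocalKnowledgeAllocator:
    """Dynamic policy: pick the rail with the fewest locally queued
    bytes, using no information from the remote end-point."""
    def __init__(self, n_rails):
        self.queued = [0] * n_rails

    def allocate(self, msg_bytes):
        rail = self.queued.index(min(self.queued))
        self.queued[rail] += msg_bytes
        return rail

rr = RoundRobinAllocator(2)
lk = LocalKnowledgeAllocator(2)
# Round-robin alternates regardless of load; local knowledge keeps
# routing around the rail still busy with the one large message.
rr_choices = [rr.allocate(100), rr.allocate(10), rr.allocate(10)]  # 0, 1, 0
lk_choices = [lk.allocate(100), lk.allocate(10), lk.allocate(10)]  # 0, 1, 1
```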

  15. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be extracted automatically in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPU are thus presented. The innovative aspects of the implementation are (i) the effectiveness on a large variety of unorganized and complex datasets, (ii) the capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and discussed.

  16. High Performance Graphene Oxide Based Rubber Composites

    PubMed Central

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in preventing aggregation of the GO sheets but also acts as an interface bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable with those of the SBR composite reinforced with 13.1 vol.% carbon black (CB), along with a low mass density and good gas barrier ability. The present work also showed that the GO-silica/SBR composite exhibited outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for green tire applications, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  17. High performance composites with active stiffness control.

    PubMed

    Tridech, Charnwit; Maples, Henry A; Robinson, Paul; Bismarck, Alexander

    2013-09-25

    High performance carbon fiber reinforced composites with controllable stiffness could revolutionize the use of composite materials in structural applications. Here we describe a structural material which has a stiffness that can be actively controlled on demand. Such a material could have applications in morphing wings or deployable structures. A carbon fiber reinforced-epoxy composite is described that can undergo an 88% reduction in flexural stiffness at elevated temperatures and fully recover when cooled, with no discernible damage or loss in properties. Once the stiffness has been reduced, the required deformations can be achieved at much lower actuation forces. For this proof-of-concept study a thin polyacrylamide (PAAm) layer was electrocoated onto carbon fibers that were then embedded into an epoxy matrix via resin infusion. Heating the PAAm coating above its glass transition temperature caused it to soften and allowed the fibers to slide within the matrix. To produce the stiffness change the carbon fibers were used as resistance heating elements by passing a current through them. When the PAAm coating had softened, the ability of the interphase to transfer load to the fibers was significantly reduced, greatly lowering the flexural stiffness of the composite. By changing the moisture content in the PAAm fiber coating, the temperature at which the PAAm softens and the composite undergoes a reduction in stiffness can be tuned. PMID:23978266

  18. Low-Cost High-Performance MRI.

    PubMed

    Sarracanie, Mathieu; LaPierre, Cristen D; Salameh, Najat; Waddington, David E J; Witzel, Thomas; Rosen, Matthew S

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756

  19. High performance techniques for space mission scheduling

    NASA Technical Reports Server (NTRS)

    Smith, Stephen F.

    1994-01-01

    In this paper, we summarize current research at Carnegie Mellon University aimed at development of high performance techniques and tools for space mission scheduling. Similar to prior research in opportunistic scheduling, our approach assumes the use of dynamic analysis of problem constraints as a basis for heuristic focusing of problem solving search. This methodology, however, is grounded in representational assumptions more akin to those adopted in recent temporal planning research, and in a problem solving framework which similarly emphasizes constraint posting in an explicitly maintained solution constraint network. These more general representational assumptions are necessitated by the predominance of state-dependent constraints in space mission planning domains, and the consequent need to integrate resource allocation and plan synthesis processes. First, we review the space mission problems we have considered to date and indicate the results obtained in these application domains. Next, we summarize recent work in constraint posting scheduling procedures, which offer the promise of better future solutions to this class of problems.

  20. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  1. The design of high-performance gliders

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Heuermann, V.

    1985-01-01

    A high-performance glider is defined as a glider which has been designed to carry the pilot a given distance in a minimum of time, under conditions that are as favorable as possible. The present investigation aims to show approaches for enhancing the cross-country cruising speed, giving attention to the difficulties which the design engineer will have to overcome. The characteristics of cross-country flight and their relation to the cruising speed are discussed, and a description is provided of mathematical expressions concerning the cruising speed, the sinking speed, and the optimum gliding speed. The effect of aspect ratio and wing loading on the cruising speed is illustrated with the aid of a graph. Trends in glider development are explored, taking into consideration the design of laminar profiles, the reduction of profile-related drag by plain flaps, and the variation of wing loading during flight. A number of suggestions are made for obtaining gliders with improved performance.

  2. Low-Cost High-Performance MRI

    PubMed Central

    Sarracanie, Mathieu; LaPierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5–3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756

  3. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  4. High Performance Oxides-Based Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Ren, Guangkun; Lan, Jinle; Zeng, Chengcheng; Liu, Yaochun; Zhan, Bin; Butt, Sajid; Lin, Yuan-Hua; Nan, Ce-Wen

    2015-01-01

    Thermoelectric materials have attracted much attention due to their applications in waste-heat recovery, power generation, and solid state cooling. In comparison with thermoelectric alloys, oxide semiconductors, which are thermally and chemically stable in air at high temperature, are regarded as candidates for high-temperature thermoelectric applications. However, their figure-of-merit ZT value has remained low, around 0.1-0.4, for more than 20 years. The poor performance of oxides is ascribed to their low electrical conductivity and high thermal conductivity. Since the electrical transport properties in these thermoelectric oxides are strongly correlated, it is difficult to improve both the thermoelectric power and the electrical conductivity simultaneously by conventional methods. This review summarizes recent progress on high-performance oxide-based thermoelectric bulk materials, including n-type ZnO, SrTiO3, and In2O3, and p-type Ca3Co4O9, BiCuSeO, and NiO, enhanced by heavy-element doping, band engineering and nanostructuring.

  5. A low-cost on-board vehicle load monitor

    NASA Astrophysics Data System (ADS)

    Lacquet, Beatrys M.; Swart, Pieter L.; Kotzé, Abraham P.

    1996-12-01

    We propose the use of etched optical fibre strain sensors to provide an economical on-board load indicator for minibuses and heavy vehicles. By improving the fabrication process we produced symmetrically etched fibre strain gauges. Manufactured sensors were evaluated experimentally by straining them on a cantilever beam. For strains smaller than 600 microstrain the output of a ten-segment sensor was linear with a typical gauge factor of -57. Bending losses in the fibre sensor became more pronounced for larger strains. This sensor has only two optical components apart from the sensing element. Strain sensors were mounted on the rear axle and on the front torsion bar of a minibus taxi test vehicle. Proper weighting of the outputs of the front and back sensors on the vehicle ensures a monotonic relationship between the sensor output and load. In addition, the reading of the sensor system is virtually independent of the load distribution in the vehicle. Difference-over-sum processing ensures insensitivity to common-mode perturbations such as temperature and source intensity changes.
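    The difference-over-sum processing mentioned above cancels any factor that scales both channels equally; a minimal sketch (the variable names and sample values are mine, not from the paper):

```python
def diff_over_sum(front, rear):
    """Normalised load signal from the front and rear strain sensors.

    Common-mode perturbations (temperature drift, source intensity
    changes) multiply both channels by the same factor, which cancels
    in the ratio.
    """
    return (front - rear) / (front + rear)

# Halving the source intensity scales both channels by 0.5 but
# leaves the normalised reading unchanged
full = diff_over_sum(1.2, 0.8)  # 0.2
dim = diff_over_sum(0.6, 0.4)   # 0.2
```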

  6. Medical emergencies on board commercial airlines: is documentation as expected?

    PubMed Central

    2012-01-01

    Introduction The purpose of this study was to perform a descriptive, content-based analysis on the different forms of documentation for in-flight medical emergencies that are currently provided in the emergency medical kits on board commercial airlines. Methods Passenger airlines in the World Airline Directory were contacted between March and May 2011. For each participating airline, sample in-flight medical emergency documentation forms were obtained. All items in the sample documentation forms were subjected to a descriptive analysis and compared to a sample "medical incident report" form published by the International Air Transport Association (IATA). Results A total of 1,318 airlines were contacted. Ten airlines agreed to participate in the study and provided a copy of their documentation forms. A descriptive analysis revealed a total of 199 different items, which were summarized into five sub-categories: non-medical data (63), signs and symptoms (68), diagnosis (26), treatment (22) and outcome (20). Conclusions The data in this study illustrate a large variation in the documentation of in-flight medical emergencies by different airlines. A higher degree of standardization is preferable to increase the data quality in epidemiologic aeromedical research in the future. PMID:22397530

  7. The ALTCRISS Project On Board the International Space Station

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Casolino, M.; Altamura, F.; Minori, M.; Picozza, P.; Fuglesang, C.; Galper, A.; Popov, A.; Benghin, V.; Petrov, V. M.

    2006-01-01

    The ALTCRISS project aims to perform a long-term survey of the radiation environment on board the International Space Station. Measurements are being performed with active and passive devices in different locations and orientations of the Russian segment of the station. The goal is to perform a detailed evaluation of the differences in particle fluence and nuclear composition due to different shielding materials and the attitude of the station. The Sileye-3/Alteino detector is used to identify nuclei up to iron in the energy range above approximately 60 MeV/n; a number of passive dosimeters (TLDs, CR39) are also placed in the same location as the Sileye-3 detector. Polyethylene shielding is periodically interposed in front of the detectors to evaluate the effectiveness of shielding on the nuclear component of the cosmic radiation. The project was submitted to ESA in reply to the 2004 Life and Physical Sciences AO and began in December 2005. Dosimeters and data cards are rotated every six months: up to now, three batches of dosimeters and data cards have been launched and returned at the end of Expeditions 12 and 13.

  8. Reconfigurable modular computer networks for spacecraft on-board processing

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    The core electronics subsystems on unmanned spacecraft, which have been sent over the last 20 years to investigate the moon, Mars, Venus, and Mercury, have progressed through an evolution from simple fixed controllers and analog computers in the 1960's to general-purpose digital computers in current designs. This evolution is now moving in the direction of distributed computer networks. Current Voyager spacecraft already use three on-board computers. One is used to store commands and provide overall spacecraft management. Another is used for instrument control and telemetry collection, and the third computer is used for attitude control and scientific instrument pointing. An examination of the control logic in the instruments shows that, for many, it is cost-effective to replace the sequencing logic with a microcomputer. The Unified Data System architecture considered consists of a set of standard microcomputers connected by several redundant buses. A typical self-checking computer module will contain 23 RAMs, two microprocessors, one memory interface, three bus interfaces, and one core building block.

  9. SPE(R)-OBOGS: On-board oxygen generating system

    NASA Technical Reports Server (NTRS)

    Mcelroy, J.; Smith, W.

    1995-01-01

    Regulations require oxygen usage by commercial airline flight crews during check-out and during certain aircraft configurations. This oxygen is drawn from a high-pressure on-board cylinder storage system. In a typical aircraft, cylinder removal for oxygen ground servicing is conducted every 4 to 6 weeks. An on-board oxygen generating system has been developed to eliminate the need for oxygen ground servicing. The SPE-OBOGS supplies oxygen during flight in a 'trickle charge' mode to replenish the consumed oxygen at pressures up to 1850 psi. The electrochemical cell stack is the fundamental SPE-OBOGS system component. The same basic proton exchange membrane technology, previously used for the Gemini program fuel cells and currently used in nuclear submarines as oxygen generators, is used in the SPE-OBOGS. An in-service evaluation of the SPE-OBOGS is in the planning stage, and a zero-gravity version is being promoted for on-orbit space suit oxygen system recharge. Summary results of the SPE-OBOGS development will be addressed.

  10. Hindering Factors of Beginning Teachers' High Performance in Higher Education Pakistan: Case Study of IUB

    ERIC Educational Resources Information Center

    Sarwar, Shakeel; Aslam, Hassan Danyal; Rasheed, Muhammad Imran

    2012-01-01

    Purpose: The aim of the researchers in this endeavor is to identify the challenges and obstacles faced by beginning teachers in higher education. This study also explores practical implications and what adaptation can be utilized in order to have high performance of the beginning teachers. Design/methodology/approach: Researchers have applied…

  11. High Performance Computing in Solid Earth Sciences

    NASA Astrophysics Data System (ADS)

    Manea, V. C.; Manea, M.; Pomeran, M.; Besutiu, L.; Zlagnean, L.

    2012-04-01

    Presently, the solid earth sciences have started to move towards implementing high performance computing (HPC) research facilities. One of the key tenets of HPC is performance, and designing an HPC solution tailored to a specific research field such as the solid earth sciences that represents an optimum price/performance ratio is often a challenge. The HPC system performance strongly depends on the software-hardware interaction, and therefore prior knowledge of how well specific parallelized software performs on different HPC architectures can weigh significantly in choosing the final configuration. In this paper we present benchmark results from two different HPC systems: one low-end HPCC (Horus) with 300 cores and 1.6 TFlops theoretical peak performance, and one high-end HPCC (CyberDyn) with 1344 cores and 11.2 TFlops theoretical peak performance. The software benchmark used in this paper is the open source package CitcomS, which is widely used in the solid earth community (www.geodynamics.org). Testing a CFD code specific to the earth sciences, the HPC system Horus, based on Gigabit Ethernet, performed remarkably well compared with its counterpart CyberDyn, which is based on an Infiniband QDR fabric, but only for a relatively small number of computing cores (96). However, as the mesh size and the number of computing cores increase, the HPCC CyberDyn starts to outperform the HPCC Horus because of its low-latency high-speed QDR network dedicated to MPI traffic. Since we are presently moving towards high-resolution simulations for geodynamic predictions that require the same scale as observations, HPC facilities used in the earth sciences should benefit from larger up-front investment in future systems that are based on high-speed interconnects.
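    The Horus-versus-CyberDyn comparison above is a strong-scaling question; a sketch of the usual efficiency metric follows (the run times and core counts below are made-up placeholders, not the paper's benchmark numbers):

```python
def strong_scaling_efficiency(t_ref, cores_ref, t, cores):
    """Strong-scaling efficiency relative to a reference run on the
    same problem size; perfect scaling gives 1.0, interconnect
    bottlenecks push it below 1.0 as cores increase."""
    return (t_ref * cores_ref) / (t * cores)

# Placeholder numbers: doubling cores from 96 to 192 halves run time
ideal = strong_scaling_efficiency(100.0, 96, 50.0, 192)   # 1.0 (ideal)
# A slower-than-ideal run at 192 cores yields efficiency below 1.0
real = strong_scaling_efficiency(100.0, 96, 60.0, 192)
```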

  12. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9,400 GWh for 1996; Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  13. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters perform better in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specifications set forth by the sensitivity requirements of the coronagraph.

  14. Design of high performance piezo composites actuators

    NASA Astrophysics Data System (ADS)

    Almajid, Abdulhakim A.

The design of high-performance piezo composite actuators is developed. Functionally Graded Microstructure (FGM) piezoelectric actuators are designed to reduce the stress concentration at the middle interface present in standard bimorph actuators while maintaining high actuation performance. The FGM piezoelectric laminates are composite materials with electroelastic properties varied through the laminate thickness. The elastic behavior of piezo-laminate actuators is developed using a 2D-elasticity model and a modified classical lamination theory (CLT). The stresses and out-of-plane displacements are obtained for standard and FGM piezoelectric bimorph plates under cylindrical bending generated by an electric field throughout the thickness of the laminate. The analytical model is developed for two different actuator geometries: a rectangular plate actuator and a disk-shaped actuator. The limitations of CLT are investigated against the 2D-elasticity model for the rectangular plate geometry. The analytical models based on CLT (rectangular and circular) and 2D-elasticity are compared with a model based on the Finite Element Method (FEM). The experimental study consists of two FGM actuator systems, the PZT/PZT FGM system and the porous FGM system. The electroelastic properties of each layer in the FGM systems were measured and input in the analytical models to predict the FGM actuator performance. The performance of the FGM actuator is optimized by manipulating the thickness of each layer in the FGM system. The thickness of each layer in the FGM system is made to vary in a linear or non-linear manner to achieve the best performance of the FGM piezoelectric actuator. The analytical and FEM results are found to agree well with the experimental measurements for both rectangular and disk actuators. CLT solutions are found to coincide well with the elasticity solutions for high aspect ratios, while the CLT solutions gave poor results compared to the 2D elasticity solutions for

  15. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  16. Experience with high-performance PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Goldburgh, Mitchell M.; Head, Calvin

    1997-05-01

Lockheed Martin (Loral) has installed PACS with associated teleradiology in several tens of hospitals. The PACS that have been installed have been the basis for a shift to filmless radiology in many of the hospitals. The basic structure for the PACS and the teleradiology that is being used is outlined. The way that the PACS are being used in the hospitals is instructive. The three most used areas for radiology in the hospital are the wards including the ICU wards, the emergency room, and the orthopedics clinic. The examinations are mostly CR images, with 20 percent to 30 percent of the examinations being CT, MR, and ultrasound exams. The PACS are being used to realize improved productivity for radiology and for the clinicians. For radiology, the same staff is handling a 30 to 50 percent greater workload. For the clinicians, 10 to 20 percent of their time is being saved in dealing with radiology images. The improved productivity stems from the high performance of the PACS that has been designed and installed. Images are available on any workstation in the hospital within less than two seconds, even during the busiest hour of the day. The examination management function restricts the attention of any one user to the examinations that are of interest. The examination management organizes the workflow through the radiology department and the hospital, improving the service of the radiology department by reducing the time until the information from a radiology examination is available. The remaining weak link in the PACS system is transcription. The examination can be acquired, read, and the report dictated in much less than ten minutes. The transcription of the dictated reports can take from a few hours to a few days. The addition of automatic transcription services will remove this weak link.

  17. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the Isp 250 sec goal.

  18. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

We will provide a progress report about our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle light suppression. The testbed consists of a coronagraph with high precision optics (2-inch spherical mirrors with λ/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational and awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  19. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in the interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
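The parallel block-matching idea in the second paper can be illustrated with a serial sketch of exhaustive sum-of-absolute-differences (SAD) matching; the frame size, block size, and search range below are illustrative assumptions, not parameters from the report.

```python
import numpy as np

def block_match(ref, cur, bx, by, bsize=8, search=4):
    """Exhaustive block matching: find the displacement (dx, dy) of the
    block at (bx, by) in the current frame that minimizes the sum of
    absolute differences (SAD) against the reference frame."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block would fall outside the reference frame
            sad = np.abs(block - ref[y:y + bsize, x:x + bsize].astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best, best_sad = (dx, dy), sad
    return best, best_sad

# Synthetic check: the current frame is the reference shifted down 2, right 1,
# so an interior block is displaced by (dx, dy) = (-1, -2) relative to ref.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, shift=(2, 1), axis=(0, 1))
(dx, dy), sad = block_match(ref, cur, 8, 8)
```

A parallel version, as in the paper, would evaluate the candidate displacements (the two inner loops) concurrently, since each SAD computation is independent.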

  20. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  1. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys are therefore less effective barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  2. High Efficiency, High Performance Clothes Dryer

    SciTech Connect

    Peter Pescatore; Phil Carbone

    2005-03-31

This program covered the development of two separate products: an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have been in existence for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market, where no volume production heat pump dryers are available. The issue has typically been around two key areas: cost and performance. Cost is a given in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) Improved fabric temperature uniformity; and (4) Robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle. Through heat modulation in a

  3. Thermal interface pastes nanostructured for high performance

    NASA Astrophysics Data System (ADS)

    Lin, Chuangang

Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G'' and high tan delta. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A
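The thickness-versus-conductivity trade-off described above follows from the conduction resistance of the paste layer, R = BLT / (k · A). A minimal sketch; all thicknesses, conductivities, and the contact area are assumed for illustration and are not measurements from the dissertation:

```python
def interface_resistance(blt, k, area):
    """Conduction resistance of the paste layer: R = BLT / (k * A) [K/W].
    Surface contact resistances are neglected in this sketch."""
    return blt / (k * area)

area = 1.0e-4  # 1 cm^2 contact area [m^2] (assumed)

# Smooth-surface case: a thin bond line can beat a higher conductivity.
r_thin_low_k = interface_resistance(0.8e-6, 1.0, area)   # ~0.8 um line, k = 1 W/(m K)
r_thick_high_k = interface_resistance(25.0e-6, 5.0, area)  # 25 um line, k = 5 W/(m K)
```

With these numbers the thin low-conductivity layer gives roughly 0.008 K/W versus 0.05 K/W for the thicker high-conductivity layer, mirroring the abstract's observation that on smooth surfaces a sub-micron bond line can outweigh thermal conductivity.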

  4. Robonaut 2 - Initial Activities On-Board the ISS

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Greene, B. D.; Joyce, Charles; De La Pena, Noe; Noblitt, Alan; Ambrose, Robert

    2011-01-01

Robonaut 2, or R2, arrived on the International Space Station in February 2011 and is currently undergoing testing in preparation for it to become, initially, an Intra-Vehicular Activity (IVA) tool and then evolve into a system that can perform Extra-Vehicular Activities (EVA). After the completion of a series of system level checks to ensure that the robot traveled well on-board the Space Shuttle Atlantis, ground control personnel will remotely control the robot to perform free space tasks that will help characterize the differences between earth and zero-g control. For approximately one year, the fixed base R2 will perform a variety of experiments using a reconfigurable task board that was launched with the robot. While working side-by-side with human astronauts, Robonaut 2 will actuate switches, use standard tools, and manipulate Space Station interfaces, soft goods and cables. The results of these experiments will demonstrate the wide range of tasks a dexterous humanoid can perform in space and they will help refine the methodologies used to control dexterous robots both in space and here on earth. After the trial period that will evaluate R2 while on a fixed stanchion in the US Laboratory module, NASA plans to launch climbing legs that, when attached to the current on-orbit R2 upper body, will give the robot the ability to traverse through the Space Station and start assisting crew with general IVA maintenance activities. Multiple control modes will be evaluated in this extraordinary ISS test environment to prepare the robot for use during EVAs. Ground Controllers will remotely supervise the robot as it executes semi-autonomous scripts for climbing through the Space Station and interacting with IVA interfaces. IVA crew will locally supervise the robot using the same scripts and also teleoperate the robot to simulate scenarios with the robot working alone or as an assistant during space walks.

  5. Fatigue stress detection of VIRTIS cryocoolers on board Rosetta

    NASA Astrophysics Data System (ADS)

    Giuppi, Stefano; Politi, Romolo; Capria, Maria Teresa; Piccioni, Giuseppe; De Sanctis, Maria Cristina; Erard, Stéphane; Tosi, Federico; Capaccioni, Fabrizio; Filacchione, Gianrico

Rosetta is a planetary cornerstone mission of the European Space Agency (ESA). It is devoted to the study of minor bodies of our solar system and it will be the first mission ever to land on a comet (the Jupiter-family comet 67P/Churyumov-Gerasimenko). VIRTIS-M is a sophisticated imaging spectrometer that combines two data channels in one compact instrument, respectively for the visible and the infrared range (0.25-5.0 μm). VIRTIS-H is devoted to infrared spectroscopy (2.5-5.0 μm) with high spectral resolution. Since the satellite will be inside the tail of the comet during one of the most important phases of the mission, it would not be appropriate to use a passive cooling system, due to the high flux of contaminants on the radiator. Therefore the IR sensors are cooled by two Stirling cycle cryocoolers produced by RICOR. Since RICOR performed life tests only on the ground, it was decided to analyze the telemetry of VIRTIS on board Rosetta to study possible differences in cryocooler performance. The analysis led to the conclusion that the cryocoolers, when operating on board, are subject to a fatigue stress not present in the on-ground life tests. The telemetry analysis shows a cyclic variation in cryocooler rotor angular velocity when the -M channel, the -H channel, or both are operating (an influence of -M channel operations on the -H cryocooler rotor angular velocity, and vice versa, has also been noted), with frequencies mostly linked to operational parameter values. The frequencies have been calculated for each mission observation by applying the Fast Fourier Transform (FFT). In order to evaluate possible edge effects, a Hanning window has also been applied and the results compared. For a more complete evaluation of cryocooler fatigue stress, the angular acceleration and the angular jerk have been calculated for each mission observation.
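The FFT-with-Hanning-window procedure applied to the telemetry can be sketched as follows; the sampling rate, modulation frequency, and rotor-speed values are invented stand-ins for illustration, not VIRTIS telemetry:

```python
import numpy as np

# Hypothetical telemetry: rotor speed sampled at 1 Hz with a slow periodic
# modulation at 0.02 Hz plus noise (all values illustrative, not mission data).
fs = 1.0                                 # sampling rate [Hz]
t = np.arange(1024) / fs
rng = np.random.default_rng(1)
speed = 96000.0 + 50.0 * np.sin(2 * np.pi * 0.02 * t) + rng.normal(0.0, 5.0, t.size)

def dominant_frequency(x, fs, window=False):
    """Strongest frequency in x via the FFT; an optional Hann window
    suppresses leakage from the ends of the record (edge effects)."""
    x = x - x.mean()                     # remove the DC level first
    if window:
        x = x * np.hanning(x.size)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs[np.argmax(spec)]

f_plain = dominant_frequency(speed, fs)
f_hann = dominant_frequency(speed, fs, window=True)
```

Comparing the windowed and unwindowed estimates, as the abstract describes, reveals whether a spectral peak is real or an artifact of the finite record ends.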

  6. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 2 2012-10-01 2012-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  7. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  8. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 2 2013-10-01 2013-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  9. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 2 2014-10-01 2014-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  10. 49 CFR 176.182 - Conditions for handling on board ship.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 2 2011-10-01 2011-10-01 false Conditions for handling on board ship. 176.182 Section 176.182 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS... Port § 176.182 Conditions for handling on board ship. (a) Weather conditions. Class 1...

  11. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  12. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  13. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF... TRAVEL Accessibility of Aircraft § 382.65 What are the requirements concerning on-board wheelchairs?...

  14. The Avionics SOIS Services of CORDET On-Board Software Architecture

    NASA Astrophysics Data System (ADS)

    Alana, Elena; del Carmen Lomba, Maria; Jung, Andreas; Grenham, Adrian; Fowell, Stuart

    2013-08-01

This paper introduces the specification of the Execution Platform Layer of the On-Board Software Reference Architecture (OBSW-RA) presented in the COrDeT-2 study. In particular, the paper addresses the avionics services defined within the context of the overall Spacecraft On-board Interface Services (SOIS) available at the Execution Platform Layer of the OBSW-RA.

  15. Application of advanced on-board processing concepts to future satellite communications systems

    NASA Technical Reports Server (NTRS)

    Katz, J. L.; Hoffman, M.; Kota, S. L.; Ruddy, J. M.; White, B. F.

    1979-01-01

    An initial definition of on-board processing requirements for an advanced satellite communications system to service domestic markets in the 1990's is presented. An exemplar system architecture with both RF on-board switching and demodulation/remodulation baseband processing was used to identify important issues related to system implementation, cost, and technology development.

  16. High-Performance, Space-Storable, Bi-Propellant Program Status

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    2002-01-01

Bipropellant propulsion systems currently represent the largest bus subsystem for many missions. These missions range from low Earth orbit satellites to geosynchronous communications and planetary exploration. The payoff of high performance bipropellant systems is illustrated by the fact that Aerojet Redmond has qualified a commercial NTO/MMH engine based on the high Isp technology recently delivered by this program. They are now qualifying a NTO/hydrazine version of this engine. The advanced rhenium thrust chambers recently provided by this program have raised the performance of earth storable propellants from 315 sec to 328 sec of specific impulse. The recently introduced rhenium technology is the first new technology introduced to satellite propulsion in 30 years. Typically, the lead time required to develop and qualify new chemical thruster technology is not compatible with program development schedules. These technology development programs must be supported by a long term, Base R&T Program if the technology is to be matured. This technology program then addresses the need for high performance, storable, on-board chemical propulsion for planetary rendezvous and descent/ascent. The primary NASA customer for this technology is Space Science, which identifies this need for such programs as Mars Surface Return, Titan Explorer, Neptune Orbiter, and Europa Lander. High performance (390 sec) chemical propulsion is estimated to add 105% payload to the Mars Sample Return mission or alternatively reduce the launch mass by 33%. In many cases, the use of existing (flight heritage) propellant technology is accommodated by reducing mission objectives and/or increasing en route travel times, sacrificing the science value per unit cost of the program. Therefore, a high performance storable thruster utilizing fluorinated oxidizers with hydrazine is being developed.
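The sensitivity of payload mass to specific impulse quoted above follows from the Tsiolkovsky rocket equation. A sketch comparing the 315 s, 328 s, and 390 s performance levels; the 2.5 km/s maneuver budget is an assumption for illustration, not a figure from the program:

```python
import math

G0 = 9.80665  # standard gravity [m/s^2]

def propellant_fraction(delta_v, isp):
    """Tsiolkovsky rocket equation: fraction of initial mass that must be
    propellant to deliver delta_v [m/s] at specific impulse isp [s]."""
    return 1.0 - math.exp(-delta_v / (G0 * isp))

# Illustrative maneuver budget (assumed): 2.5 km/s
dv = 2500.0
fractions = {isp: propellant_fraction(dv, isp) for isp in (315, 328, 390)}
```

Under this assumed budget the propellant fraction drops from roughly 55% at 315 s to about 48% at 390 s; every kilogram of propellant saved becomes available for payload, which is the mechanism behind the payload gains cited in the abstract.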

  17. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle.

    PubMed

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
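The covariance-based triggering rule can be sketched as follows; this is not the authors' full unscented Kalman filter, only the event logic, and the linear dependence of the threshold on distance (plus all numeric values) is an assumption for illustration:

```python
import numpy as np

def drms(P):
    """Distance root-mean-square error from the 2x2 position covariance:
    DRMS = sqrt(var_x + var_y)."""
    return float(np.sqrt(P[0, 0] + P[1, 1]))

def should_request_measurement(P, dist_to_goal, base_threshold=0.5, slope=0.05):
    """Event trigger: fire when the predicted DRMS exceeds a threshold that
    relaxes with distance to the reference location (a linear relation is
    assumed here; the paper only states the threshold is a function of it)."""
    return drms(P) > base_threshold + slope * dist_to_goal

# Toy propagation: position uncertainty grows each step until an event
# requests a measurement, which (idealized) resets the covariance.
P = np.diag([0.01, 0.01])        # initial position covariance [m^2] (assumed)
Q = np.diag([0.2, 0.2])          # per-step process noise [m^2] (assumed)
events = []
for k in range(20):
    P = P + Q                    # prediction-only covariance growth
    if should_request_measurement(P, dist_to_goal=10.0):
        events.append(k)         # sensor woken from stand-by at this step
        P = np.diag([0.01, 0.01])  # measurement update shrinks uncertainty
```

With these numbers a measurement is requested only every third step instead of every step, which is the sense in which the event-based scheme reduces sensor usage.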

  19. High Performance Home Building Guide for Habitat for Humanity Affiliates

    SciTech Connect

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  20. High performance VLSI telemetry data systems

    NASA Technical Reports Server (NTRS)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems, well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in a significant solution to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data system needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These hardware and software components form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.
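
    Ground systems like these ingest CCSDS-formatted telemetry; as a small illustration, the 6-byte Space Packet primary header can be unpacked as below. The bit layout follows the public CCSDS Space Packet format; the function name and dictionary keys are assumptions for this sketch.

```python
import struct

def parse_ccsds_primary_header(hdr: bytes):
    """Parse the 6-byte CCSDS Space Packet primary header into its
    fields: version (3 bits), type (1), secondary-header flag (1),
    APID (11), sequence flags (2), sequence count (14), data length (16)."""
    if len(hdr) < 6:
        raise ValueError("primary header is 6 bytes")
    w0, w1, w2 = struct.unpack(">HHH", hdr[:6])
    return {
        "version": (w0 >> 13) & 0x7,
        "type": (w0 >> 12) & 0x1,
        "sec_hdr_flag": (w0 >> 11) & 0x1,
        "apid": w0 & 0x7FF,
        "seq_flags": (w1 >> 14) & 0x3,
        "seq_count": w1 & 0x3FFF,
        # Field holds (length of the packet data field in bytes) - 1.
        "data_length": w2,
    }
```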

  1. High performance ultrasonic field simulation on complex geometries

    NASA Astrophysics Data System (ADS)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent work that aims at similar performance on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivision to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations, including mono-element and multiple-element transducers, homogeneous, meshed 3D CAD specimens, isotropic and anisotropic materials, and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.
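
    The time-of-flight interpolation mentioned above can be pictured schematically: exact values are computed only at coarse sample points over the aperture, and intermediate contributions are obtained by linear interpolation. The grid values and function name below are illustrative assumptions, not CIVA's implementation.

```python
def interpolate_tof(tof_grid, x):
    """Linearly interpolate a time of flight between coarse samples.
    tof_grid is a list of (abscissa, tof) pairs sorted by abscissa."""
    for (x0, t0), (x1, t1) in zip(tof_grid, tof_grid[1:]):
        if x0 <= x <= x1:
            w = (x - x0) / (x1 - x0)
            return (1 - w) * t0 + w * t1
    raise ValueError("x outside the sampled aperture")

# Exact times of flight computed only at coarse abscissae (microseconds):
grid = [(0.0, 10.0), (1.0, 12.0), (2.0, 13.0)]
```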

  2. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  3. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used for: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.
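
    Item (3) — building I/O characteristics in as runtime parameters — can be pictured as a simple dispatch in which measured characteristics select a write strategy. The parameter names, thresholds and strategy labels below are invented for illustration; they are not the Fortran D library's actual interface.

```python
def select_io_method(params):
    """Pick a write strategy from measured application I/O
    characteristics; thresholds and labels are illustrative."""
    if params["writers"] > 1024 and params["shared_file"]:
        return "collective"            # aggregate to ease metadata load
    if params["request_size"] >= 4 << 20:
        return "independent"           # large contiguous requests
    return "buffered-independent"      # small writes, private files

# Example characteristic sets a runtime library might record:
assimilation = {"writers": 2048, "shared_file": True, "request_size": 65536}
checkpoint = {"writers": 8, "shared_file": False, "request_size": 8 << 20}
```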

  4. Measuring Organic Matter with COSIMA on Board Rosetta

    NASA Astrophysics Data System (ADS)

    Briois, C.; Baklouti, D.; Bardyn, A.; Cottin, H.; Engrand, C.; Fischer, H.; Fray, N.; Godard, M.; Hilchenbach, M.; von Hoerner, H.; Höfner, H.; Hornung, K.; Kissel, J.; Langevin, Y.; Le Roy, L.; Lehto, H.; Lehto, K.; Orthous-Daunay, F. R.; Revillet, C.; Rynö, J.; Schulz, R.; Silen, J. V.; Siljeström, S.; Thirkell, L.

    2014-12-01

    Comets are believed to contain the most pristine material of our Solar System and therefore to be a key to understanding the origin of the Solar System and the origin of life. Remote sensing observations have led to the detection of more than twenty simple organic molecules (Bockelée-Morvan et al., 2004; Mumma and Charnley, 2011). Experiments on-board the in-situ exploration missions Giotto and Vega and the recent Stardust sample return mission have shown that a significant fraction of the cometary grains consists of organic matter. Spectra showed that both the gaseous phase (Mitchell et al., 1992) and the solid phase (grains) (Kissel and Krueger, 1987) contained organic molecules with higher masses than those of the molecules detected by remote sensing techniques in the gaseous phase. Some of the grains analyzed in the atmosphere of comet 1P/Halley seem to be essentially made of a mixture of carbon, hydrogen, oxygen and nitrogen (CHON grains, Fomenkova, 1999). Rosetta is an unparalleled opportunity to make a real breakthrough in understanding the nature of cometary matter, both in the gas and in the solid phase. The dust mass spectrometer COSIMA on Rosetta will analyze the organic and inorganic phases in the dust. The organic phases may be refractory, but some organics may evaporate with time from the dust and lead to an extended source in the coma. Over the last years, we have prepared for the cometary rendezvous by analyzing various samples with the reference model of COSIMA. We will report on this calibration data set and on the first results of the in-situ analysis of cometary grains as captured, imaged and analyzed by COSIMA. References: Bockelée-Morvan, D., et al. 2004. (Eds.), Comets II. The University of Arizona Press, Tucson, USA, pp. 391-423; Fomenkova, M.N., 1999. Space Science Reviews 90, 109-114; Kissel, J., Krueger, F.R., 1987. Nature 326, 755-760; Mitchell, et al. 1992. Icarus 98, 125-133; Mumma, M.J., Charnley, S.B., 2011. Annual Review of Astronomy and

  5. Legionella on board trains: effectiveness of environmental surveillance and decontamination

    PubMed Central

    2012-01-01

    Background Legionella pneumophila is increasingly recognised as a significant cause of sporadic and epidemic community-acquired and nosocomial pneumonia. Many studies describe the frequency and severity of Legionella spp. contamination in spa pools, natural pools, hotels and ships, but there is no study analysing the environmental monitoring of Legionella on board trains. The aims of the present study were to conduct periodic and precise environmental surveillance of Legionella spp. in water systems and water tanks that supply the toilet systems on trains, to assess the degree of contamination of such structures and to determine the effectiveness of decontamination. Methods A comparative pre-post ecological study was conducted from September 2006 to January 2011. A total of 1,245 water samples were collected from plumbing and toilet water tanks on passenger trains. The prevalence proportion of all positive samples was calculated. The unpaired t-test was performed to evaluate statistically significant differences between the mean load values before and after the decontamination procedures; statistical significance was set at p ≤ 0.05. Results In the pre-decontamination period, 58% of the water samples were positive for Legionella. Only Legionella pneumophila was identified: 55.84% were serogroup 1, 19.03% were serogroups 2–14 and 25.13% contained both serogroups. The mean bacterial load value was 2.14 × 10^3 CFU/L. During the post-decontamination period, 42.75% of water samples were positive for Legionella spp.; 98.76% were positive for Legionella pneumophila: 74.06% contained serogroup 1, 16.32% contained serogroups 2–14 and 9.62% contained both. The mean bacterial load in the post-decontamination period was 1.72 × 10^3 CFU/L. According to the t-test, there was a statistically significant decrease in total bacterial load until approximately one and a half years after beginning the decontamination programme (p = 0.0097). Conclusions This
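
    The unpaired t-test used for the pre/post comparison can be sketched with the standard Welch form of the statistic; the log-load samples below are synthetic values for illustration, not the study's data.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Unpaired (Welch) t statistic for two independent samples."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Synthetic log10 CFU/L loads before and after decontamination:
pre = [3.4, 3.3, 3.5, 3.2, 3.4]
post = [3.1, 3.0, 3.2, 3.1, 3.3]
```

    A positive statistic indicates the post-decontamination mean load is lower; the p-value would then be looked up against the t distribution with the Welch-Satterthwaite degrees of freedom.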

  6. Rotational artifacts in on-board cone beam computed tomography

    NASA Astrophysics Data System (ADS)

    Ali, E. S. M.; Webb, R.; Nyiri, B. J.

    2015-02-01

    Rotational artifacts in image guidance systems lead to registration errors that affect non-isocentric treatments and dose to off-axis organs-at-risk. This study investigates a rotational artifact in the images acquired with the on-board cone beam computed tomography system XVI (Elekta, Stockholm, Sweden). The goals of the study are to identify the cause of the artifact, to characterize its dependence on other quantities, and to investigate possible solutions. A 30 cm diameter cylindrical phantom is used to acquire clockwise and counterclockwise scans at five speeds (120 to 360 deg min^-1) on six Elekta linear accelerators from three generations (MLCi, MLCi2 and Agility). Additional scans are acquired with different pulse widths and focal spot sizes for the same mAs. Image quality is evaluated using a common phantom with an in-house three dimensional contrast transfer function attachment. A robust, operator-independent analysis is developed which quantifies rotational artifacts with 0.02° accuracy and imaging system delays with 3 ms accuracy. Results show that the artifact is caused by mislabelling of the projections with a lagging angle due to various imaging system delays. For the most clinically used scan speed (360 deg min^-1), the artifact is ~0.5°, which corresponds to ~0.25° error per scan direction with the standard Elekta procedure for angle calibration. This leads to a 0.5 mm registration error at 11 cm off-center. The artifact increases linearly with scan speed, indicating that the system delay is independent of scan speed. For the most commonly used pulse width of 40 ms, this delay is 34 ± 1 ms, part of which is half the pulse width. Results are consistent among the three linac generations. A software solution that corrects the angles of individual projections is shown to eliminate the rotational error for all scan speeds and directions. Until such a solution is available from the manufacturer, three clinical solutions are presented, which reduce the
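
    The software correction described amounts to relabelling each projection by the lagging angle, i.e. subtracting (gantry speed × system delay) with the sign of the scan direction. The function below is an illustrative sketch under that model, not the vendor's implementation.

```python
def correct_projection_angles(angles_deg, speed_deg_per_s, delay_s, direction=+1):
    """Relabel projection angles by removing the lagging angle caused by
    imaging-system delays; direction is +1 (clockwise) or -1 (counterclockwise)."""
    lag = speed_deg_per_s * delay_s
    return [a - direction * lag for a in angles_deg]

# A 360 deg/min scan is 6 deg/s; with a 34 ms system delay the lag is
# about 0.2 degrees per scan direction.
corrected = correct_projection_angles([0.0, 90.0], 6.0, 0.034)
```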

  7. Optical multiple access techniques for on-board routing

    NASA Technical Reports Server (NTRS)

    Mendez, Antonio J.; Park, Eugene; Gagliardi, Robert M.

    1992-01-01

    The purpose of this research contract was to design and analyze an optical multiple access system, based on Code Division Multiple Access (CDMA) techniques, for on-board routing applications on a future communication satellite. The optical multiple access system was to effect the functions of a circuit switch under the control of an autonomous network controller and to serve eight (8) concurrent users at a point to point (port to port) data rate of 180 Mb/s. (At the start of this program, the bit error rate (BER) requirement was undefined, so it was treated as a design variable during the contract effort.) CDMA was selected over other multiple access techniques because it lends itself to bursty, asynchronous, concurrent communication and potentially can be implemented with off-the-shelf, reliable optical transceivers compatible with long term unattended operations. Temporal, temporal/spatial hybrid, and single pulse per row (SPR, sometimes termed 'sonar matrix') matrix types of CDMA designs were considered. The design, analysis, and trade-off studies required by the statement of work identified a temporal/spatial CDMA scheme with SPR properties as the preferred solution. This selected design can be implemented for a feasibility demonstration with off-the-shelf components (which are identified in the bill of materials of the contract Final Report). The photonic network architecture of the selected design is based on M(8,4,4) matrix codes. The network requires eight multimode laser transmitters with laser pulses of 0.93 ns operating at 180 Mb/s and 9-13 dBm peak power, and 8 PIN diode receivers with a sensitivity of -27 dBm for the 0.93 ns pulses. The wavelength is not critical, but 830 nm technology readily meets the requirements. The passive optical components of the photonic network are all multimode and off the shelf. Bit error rate (BER) computations, based on both electronic noise and intercode crosstalk, predict a raw BER of 10^-3 when all eight users are
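
    The matrix-code property underlying such designs — codes with one pulse per row and a bounded pairwise overlap, which limits intercode crosstalk — can be checked mechanically. The toy 4x4 codes below are invented for illustration; they are not the M(8,4,4) codes of the report.

```python
from itertools import combinations

def is_spr(code, rows):
    """Single pulse per row: each row index appears exactly once."""
    return sorted(r for r, _ in code) == list(range(rows))

def max_crosstalk(codes):
    """Largest pairwise overlap (number of shared pulse positions),
    a proxy for the worst-case intercode crosstalk."""
    return max(len(set(a) & set(b)) for a, b in combinations(codes, 2))

# Toy 4x4 SPR codes as lists of (row, column) pulse positions:
c1 = [(0, 0), (1, 1), (2, 2), (3, 3)]
c2 = [(0, 1), (1, 2), (2, 3), (3, 0)]
c3 = [(0, 0), (1, 2), (2, 1), (3, 3)]
```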

  8. Productive high-performance software for OpenCL devices

    NASA Astrophysics Data System (ADS)

    Melonakos, John M.; Yalamanchili, Pavan; McClanahan, Chris; Arshad, Umar; Landes, Michael; Jamboti, Shivapriya; Joshi, Abhijit; Mohammed, Shehzan; Spafford, Kyle; Venugopalakrishnan, Vishwanath; Malcolm, James

    2013-05-01

    Over the last three decades, CPUs have continued to produce large performance improvements from one generation to the next. However, CPUs have recently hit a performance wall and need parallel computing to move forward. Parallel computing over the next decade will become increasingly defined by heterogeneous computing, involving the use of accelerators in addition to CPUs to get computational tasks done. In order to use an accelerator, software changes must be made. Regular x86-based compilers cannot compile code to run on accelerators without these needed changes. The amount of software change required varies depending upon the availability of and reliance upon software tools that increase performance and productivity. Writing software that leverages the best parallel computing hardware, adapts well to the rapid pace of hardware updates, and minimizes developer effort is the industry's goal. OpenCL is the standard around which developers are able to achieve parallel performance. OpenCL itself is too difficult to program to achieve general adoption, but productive high-performing software libraries are becoming increasingly popular and capable of delivering lasting value to user applications.

  9. Hardware Efficient and High-Performance Networks for Parallel Computers.

    NASA Astrophysics Data System (ADS)

    Chien, Minze Vincent

    High performance interconnection networks are the key to high utilization and throughput in large-scale parallel processing systems. Since many interconnection problems in parallel processing, such as concentration, permutation and broadcast problems, can be cast as sorting problems, this dissertation considers the problem of sorting on a new model, called an adaptive sorting network. It presents four adaptive binary sorters, the first two of which are ordinary combinational circuits while the last two exploit time-multiplexing and pipelining techniques. These sorter constructions demonstrate that any sequence of n bits can be sorted in O(log^2 n) bit-level delay, using O(n) constant-fanin gates. This improves the cost complexity of Batcher's binary sorters by a factor of O(log^2 n) while matching their sorting time. It is further shown that any sequence of n numbers can be sorted on the same model in O(log^2 n) comparator-level delay using O(n log n log log n) comparators. The adaptive binary sorter constructions lead to new O(n) bit-level cost concentrators and superconcentrators with O(log^2 n) bit-level delay. Their employment in recently constructed permutation and generalized connectors leads to permutation and generalized connection networks with O(n log n) bit-level cost and O(log^3 n) bit-level delay. These results provide the least bit-level cost for such networks with competitive delays. Finally, the dissertation considers a key issue in the implementation of interconnection networks, namely, the pin constraint. Current VLSI technologies can house a large number of switches in a single chip, but the mere fact that one chip cannot have too many pins precludes the possibility of implementing a large connection network on a single chip. The dissertation presents techniques for partitioning connection networks into identical modules of switches in such a way that each module is contained in a single chip with an arbitrarily specified number of pins, and that the cost of
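
    For reference, the classic Batcher odd-even mergesort network — the O(log^2 n)-delay construction whose cost the dissertation improves upon — can be generated iteratively. This sketch is the textbook construction, not the adaptive sorters proposed in the thesis.

```python
def batcher_pairs(n):
    """Compare-exchange pairs of Batcher's odd-even mergesort
    for n a power of two (network depth O(log^2 n))."""
    pairs = []
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    # Only compare within the same 2p-wide merge block.
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        pairs.append((i + j, i + j + k))
            k //= 2
        p *= 2
    return pairs

def sort_bits(seq):
    """Apply the network's fixed compare-exchange schedule to a sequence."""
    a = list(seq)
    for i, j in batcher_pairs(len(a)):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a
```

    By the 0-1 principle, verifying the network on all binary inputs of length n proves it sorts arbitrary values of that length.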

  10. On-Board Computing Subsystem for MIRAX: Architectural and Interface Aspects

    SciTech Connect

    Santiago, Valdivino

    2006-06-09

    This paper presents proposals for the architecture and the interfaces among the different types of processing units of the MIRAX on-board computing subsystem. The MIRAX satellite payload is composed of dedicated computers, two Hard X-Ray cameras and one Soft X-Ray camera (the WFC flight spare unit from the BeppoSAX satellite). The architectures for the on-board computing subsystem will take into account hardware or software solutions for event preprocessing for the CdZnTe detectors. Hardware and software interface approaches will be shown, and requirements for on-board memory storage and telemetry will also be addressed.

  11. HTML 5 Displays for On-Board Flight Systems

    NASA Technical Reports Server (NTRS)

    Silva, Chandika

    2016-01-01

    During my Internship at NASA in the summer of 2016, I was assigned to a project which dealt with developing a web-server that would display telemetry and other system data using HTML 5, JavaScript, and CSS. By doing this, it would be possible to view the data across a variety of screen sizes, and establish a standard that could be used to simplify communication and software development between NASA and other countries. Utilizing a web-based approach allowed us to add in more functionality, as well as make the displays more aesthetically pleasing for the users. When I was assigned to this project my main task was to first establish communication with the current display server. This display server would output data from the on-board systems in XML format. Once communication was established I was then asked to create a dynamic telemetry table web page that would update its header and change as new information came in. After this was completed, certain minor functionalities were added to the table, such as a hide-column option and a filter-by-system option. This was more for the purpose of making the table more useful for the users, as they can now filter and view relevant data. Finally my last task was to create a graphical system display for all the systems on the spacecraft. This was by far the most challenging part of my internship, as finding a JavaScript library that was both free and contained useful functions to assist me in my task was difficult. In the end I was able to use the JointJs library and accomplish the task. With the help of my mentor and the HIVE lab team, we were able to establish stable communication with the display server. We also succeeded in creating a fully dynamic telemetry table and in developing a graphical system display for the advanced modular power system. Working at JSC for this internship has taught me a lot about coding in JavaScript and HTML 5.
I was also introduced to the concept of developing software as a team, and exposed to the different

  12. Islet in weightlessness: biological experiments on board COSMOS 1129 satellite

    SciTech Connect

    Zhuk, Y.

    1980-09-01

    Biological experiments planned as an international venture for COSMOS 1129 satellite include tests of: (1) adaptation of rats to conditions of weightlessness, and readaption to Earth's gravity, (2) possibility of fertilization and embryonic development in weightlessness, (3) heat exchange processes, (4) amount of gravity force preferred by fruit flies for laying eggs (given a choice of three centrifugal zones), (5) growth of higher plants from seeds, (6) effects of weightlessness on cells in culture, and (7) radiation danger from heavy nuclei, and electrostatic protection from charged particles.

  14. DOE research in utilization of high-performance computers

    SciTech Connect

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  15. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... released from work), the name of the city, town, or village, with State abbreviation, shall be recorded. (2... certifying that the design of the automatic on-board recorder has been sufficiently tested to meet...

  16. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... released from work), the name of the city, town, or village, with State abbreviation, shall be recorded. (2... certifying that the design of the automatic on-board recorder has been sufficiently tested to meet...

  17. On-Board Preventive Maintenance: Analysis of Effectiveness Optimal Duty Period

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalaj, Leon; Hecht, Herbert

    1996-01-01

    To maximize the reliability of a spacecraft which performs a long-life (over 10-year), deep-space mission (to an outer planet), a fault-tolerant environment incorporating automatic on-board preventive maintenance is highly desirable.
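
    The payoff of periodic preventive maintenance can be illustrated with a standard as-good-as-new renewal model: if maintenance every tau years fully restores the unit, mission reliability becomes R(tau)^k * R(t - k*tau). The Weibull parameters below are invented for illustration and are not from the paper.

```python
import math

def weibull_r(t, beta, eta):
    """Weibull reliability (survival) function with shape beta, scale eta."""
    return math.exp(-((t / eta) ** beta))

def r_with_pm(t, tau, beta, eta):
    """Mission reliability with perfect preventive maintenance every
    tau years (as-good-as-new renewal model)."""
    k = int(t // tau)
    return weibull_r(tau, beta, eta) ** k * weibull_r(t - k * tau, beta, eta)

# For wear-out failures (beta > 1) over a 10-year mission, maintenance
# helps because each renewal resets the accumulated wear.
```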

  18. 14 CFR 121.393 - Crewmember requirements at stops where passengers remain on board.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR CARRIERS AND OPERATORS FOR COMPENSATION OR HIRE: CERTIFICATION... person is on board, the flight attendants or other qualified persons shall be spaced throughout the...

  19. On-Board Engine Exhaust Particulate Matter Sensor for HCCI and Conventional Diesel Engines

    SciTech Connect

    Hall, Matt; Matthews, Ron

    2011-09-30

    The goal of the research was to refine and complete development of an on-board particulate matter (PM) sensor for diesel, DISI, and HCCI engines, bringing it to a point where it could be commercialized and marketed.

  20. Superconducting magnet and on-board refrigeration system on Japanese MAGLEV vehicle

    SciTech Connect

    Tsuchishima, H.; Herai, T.

    1991-03-01

    This paper reports on a superconducting magnet and on-board refrigeration system on Japanese MAGLEV vehicles. Running tests on the Miyazaki test track are repeatedly carried out at speeds over 300 km/h using the MAGLEV vehicle, MLU002. The development of the MAGLEV system for the new test line has already started, and a new superconducting magnet for it has been manufactured. An on-board refrigerator is installed in the superconducting magnet to keep the liquid helium temperature without the loss of liquid helium. The helium gas produced when energizing or de-energizing the magnet is stored in on-board gas helium tanks temporarily. The on-board refrigerator is connected directly to the liquid helium tank of the magnet.

  1. The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update

    NASA Astrophysics Data System (ADS)

    Flatley, T.

    2012-12-01

    SpaceCube is an FPGA-based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.

  2. On Board Sensor Network: A Proof of Concept Aiming at Telecom I/O Optimisation

    NASA Astrophysics Data System (ADS)

    Gunes-Lasnet, S.; Furano, G.; Melicher, M.; Gleeson, D.; O'Connor, W.; Vidaud, O.; Notebaert, O.

    2009-05-01

The on-board sensor network proof of concept is part of a long-haul strategy shared between ESA and European industry. Because point-to-point interfaces are numerous in a spacecraft, initiatives to standardise them or replace them with bus solutions have been pursued jointly by ESA and industry. The sensor network project presented in this paper aims at defining and prototyping a solution for spacecraft on-board sensor networks, and at performing a proof of concept with the resulting demonstrator.

  3. Investigations on-board the biosatellite Cosmos-83

    NASA Astrophysics Data System (ADS)

    Gazenko, O. G.; Ilyin, Eu. A.

The program of the 5-day flight of the biosatellite Cosmos-1514 (December 1983) included experimental investigations whose purpose was to ascertain the effect of short-term microgravity on the physiology, growth and development of various animal and plant species. The study of Rhesus monkeys has shown that they are an adequate model for exploring the mechanisms of physiological adaptation to weightlessness of the vestibular apparatus and the cardiovascular system. The rat experiment has demonstrated that mammalian embryos, at least during the last term of pregnancy, can develop in microgravity. This finding has been confirmed by fish studies. The experiment on germinating seeds and adult plants has given evidence that microgravity produces no effect on the metabolism of seedlings or on the flowering stage.

  4. [Stomatological studies on board of an "airspace" station].

    PubMed

    Malamuzh, S S; Leont'ev, V K

    2002-01-01

The status of the oral organs was evaluated in normal subjects exposed for a long time (30-262 days) to conditions simulating an "airspace" flight in a ground-based experimental complex (Sphinx-99 experiment). Stomatological studies were carried out in 15 volunteers aged 28-48 years. The initial (before "flight") and post-flight examinations included clinical examination of the maxillofacial area and oral organs, evaluation of caries intensity by the CDL index, and evaluation of periodontal tissue status by the PMA and SPITN indexes. Based on these data, we identified periods of adaptation characteristic of the oral organs under conditions of prolonged confinement. The results allow prediction of the course of diseases of the oral organs under extreme conditions and recommendation of targeted preventive measures at specific times. PMID:12380291

  5. [Stability analysis of on-board calibration system of spatially modulated imaging Fourier transform spectrometer].

    PubMed

    Gao, Jingz; Ji, Zhong-Ying; Wang, Zhong-Hou; Cui, Yan

    2010-04-01

The spatially modulated imaging Fourier transform spectrometer (SMIFTS) is an interference-based instrument; after calibration, its reconstructed spectrum quantitatively reflects the diffuse reflectance of a target under sunlight. On-board calibration of the SMIFTS tracks changes in the instrument through relative spectral calibration, monitors long-term attenuation of the optical system, and corrects the instrument's output data. Because the remote sensor must remain in a vacuum environment for a long time, the long-term stability of the light source, which serves as the radiometric standard, is the most important characteristic of the on-board calibration system. Through calculation, experimentation, analysis of the on-board calibration of the SMIFTS, and testing of the spaceflight environment characteristics of the on-board calibration, difficulties in its key components, such as the light source, spectral filter and integrating sphere, were resolved. Radiation-time stability tests of the light source and optics, particle-radiation tests, environmental-mechanics tests and thermal-vacuum examination all yielded good results. Whole-aperture and partial-field comparative wavelength calibration produced the spectral curve and the pre-launch interferogram. Comparison of reconstructed spectra under different conditions demonstrated the stability and credibility of the SMIFTS on-board calibration system. PMID:20545151

  6. 7 CFR 275.24 - High performance bonuses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 4 2011-01-01 2011-01-01 false High performance bonuses. 275.24 Section 275.24... High performance bonuses. (a) General rule. (1) FNS will award bonuses totaling $48 million for each fiscal year to State agencies that show high or improved performance in accordance with the...

  7. Building Synergy: The Power of High Performance Work Systems.

    ERIC Educational Resources Information Center

    Gephart, Martha A.; Van Buren, Mark E.

    1996-01-01

    Suggests that high-performance work systems create the synergy that lets companies gain and keep a competitive advantage. Identifies the components of high-performance work systems and critical action steps for implementation. Describes the results companies such as Xerox, Lever Brothers, and Corning Incorporated have achieved by using them. (JOW)

  8. From the Editor: The High Performance Computing Act of 1991.

    ERIC Educational Resources Information Center

    McClure, Charles R.

    1992-01-01

    Discusses issues related to the High Performance Computing and Communication program and National Research and Education Network (NREN) established by the High Performance Computing Act of 1991, including program management, specific program development, affecting policy decisions, access to the NREN, the Department of Education role, and…

  9. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  10. Training High Performance Skills: Fallacies and Guidelines. Final Report.

    ERIC Educational Resources Information Center

    Schneider, Walter

    High performance skills are defined as ones: (1) which require over 100 hours of training, (2) in which a substantial number of individuals fail to develop proficiency, and (3) in which the performance of the expert is qualitatively different from that of the novice. Training programs for developing high performance skills are often based on…

  11. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1984-01-01

Rotordynamics and predictions of the stability characteristics of high-performance turbomachinery were discussed. The resolution of problems in the experimental validation of the forces that influence rotordynamics was emphasized. Programs to predict or measure forces and force coefficients in high-performance turbomachinery are illustrated. Data for designing new machines with enhanced stability characteristics, or for upgrading existing machines, are presented.

  12. An Analysis of a High Performing School District's Culture

    ERIC Educational Resources Information Center

    Corum, Kenneth D.; Schuetz, Todd B.

    2012-01-01

    This report describes a problem based learning project focusing on the cultural elements of a high performing school district. Current literature on school district culture provides numerous cultural elements that are present in high performing school districts. With the current climate in education placing pressure on school districts to perform…

  13. Automatic registration between reference and on-board digital tomosynthesis images for positioning verification.

    PubMed

    Ren, Lei; Godfrey, Devon J; Yan, Hui; Wu, Q Jackie; Yin, Fang-Fang

    2008-02-01

    The authors developed a hybrid multiresolution rigid-body registration technique to automatically register reference digital tomosynthesis (DTS) images with on-board DTS images to guide patient positioning in radiation therapy. This hybrid registration technique uses a faster but less accurate static method to achieve an initial registration, followed by a slower but more accurate adaptive method to fine tune the registration. A multiresolution scheme is employed in the registration to further improve the registration accuracy, robustness, and efficiency. Normalized mutual information is selected as the criterion for the similarity measure and the downhill simplex method is used as the search engine. This technique was tested using image data both from an anthropomorphic chest phantom and from eight head-and-neck cancer patients. The effects of the scan angle and the region-of-interest (ROI) size on the registration accuracy and robustness were investigated. The necessity of using the adaptive registration method in the hybrid technique was validated by comparing the results of the static method and the hybrid method. With a 44 degrees scan angle and a large ROI covering the entire DTS volume, the average of the registration capture ranges in single-axis simulations was between -31 and +34 deg for rotations and between -89 and +78 mm for translations in the phantom study, and between -38 and +38 deg for rotations and between -58 and +65 mm for translations in the patient study. Decreasing the DTS scan angle from 44 degrees to 22 degrees mainly degraded the registration accuracy and robustness for the out-of-plane rotations. Decreasing the ROI size from the entire DTS volume to the volume surrounding the spinal cord reduced the capture ranges to between -23 and +18 deg for rotations and between -33 and +43 mm for translations in the phantom study, and between -18 and +25 deg for rotations and between -35 and +39 mm for translations in the patient study. 
Results also
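The hybrid registration above scores candidate alignments with normalized mutual information (NMI). As a rough illustration of that metric only (not the authors' implementation), the sketch below computes a histogram-based NMI between two flattened intensity lists; the bin count and quantization scheme are arbitrary assumptions:

```python
import math
from collections import Counter

def normalized_mutual_information(a, b, bins=8):
    """NMI = (H(A) + H(B)) / H(A,B); larger values indicate better alignment.
    a, b: equal-length lists of scalar intensities."""
    def quantize(img):
        lo, hi = min(img), max(img)
        return [min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins)) for v in img]
    qa, qb = quantize(a), quantize(b)
    n = len(qa)
    pa, pb, pab = Counter(qa), Counter(qb), Counter(zip(qa, qb))
    def entropy(counts):
        return -sum((c / n) * math.log(c / n) for c in counts.values())
    return (entropy(pa) + entropy(pb)) / entropy(pab)
```

For identical inputs the joint entropy equals the marginal entropy, so the NMI reaches its maximum of 2; misaligned or unrelated inputs score lower, which is what the downhill simplex search exploits.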

  14. Automatic registration between reference and on-board digital tomosynthesis images for positioning verification

    SciTech Connect

    Ren Lei; Godfrey, Devon J.; Yan, Hui; Wu, Q. Jackie; Yin, Fang-Fang

    2008-02-15

    The authors developed a hybrid multiresolution rigid-body registration technique to automatically register reference digital tomosynthesis (DTS) images with on-board DTS images to guide patient positioning in radiation therapy. This hybrid registration technique uses a faster but less accurate static method to achieve an initial registration, followed by a slower but more accurate adaptive method to fine tune the registration. A multiresolution scheme is employed in the registration to further improve the registration accuracy, robustness, and efficiency. Normalized mutual information is selected as the criterion for the similarity measure and the downhill simplex method is used as the search engine. This technique was tested using image data both from an anthropomorphic chest phantom and from eight head-and-neck cancer patients. The effects of the scan angle and the region-of-interest (ROI) size on the registration accuracy and robustness were investigated. The necessity of using the adaptive registration method in the hybrid technique was validated by comparing the results of the static method and the hybrid method. With a 44 deg. scan angle and a large ROI covering the entire DTS volume, the average of the registration capture ranges in single-axis simulations was between -31 and +34 deg. for rotations and between -89 and +78 mm for translations in the phantom study, and between -38 and +38 deg. for rotations and between -58 and +65 mm for translations in the patient study. Decreasing the DTS scan angle from 44 deg. to 22 deg. mainly degraded the registration accuracy and robustness for the out-of-plane rotations. Decreasing the ROI size from the entire DTS volume to the volume surrounding the spinal cord reduced the capture ranges to between -23 and +18 deg. for rotations and between -33 and +43 mm for translations in the phantom study, and between -18 and +25 deg. for rotations and between -35 and +39 mm for translations in the patient study. Results also

  15. Biotechnological experiments in space flights on board of space stations

    NASA Astrophysics Data System (ADS)

    Nechitailo, Galina S.

    2012-07-01

Space flight conditions are stressful for any plant and cause structural-functional transitions due to the mobilization of adaptive capacity. In space flight experiments with pea tissue, wheat and arabidopsis we found anatomical, morphological and biochemical transformations in the plants. In subsequent experiments, tissue of stevia (Stevia rebaudiana), potato (Solanum tuberosum), callus culture and bulbs of saffron (Crocus sativus), and callus culture of ginseng (Panax ginseng) were investigated. Experiments with stevia were carried out in special chambers; the duration of each experiment was 8-14 days, with a board lamp used to illuminate the plants. After the experiment the plants grew in the same chamber, and after 50 days they were moved into artificial ion-exchange soil. Biochemical analysis of the plants showed that the total concentration of glycosides and the ratio of stevioside to rebaudioside differed between space-grown and ground plants. In subsequent generations of post-flight stevia, the total concentration of stevioside and rebaudioside remained higher than in ground plants. Experiments with callus culture of saffron were carried out in tubes; the duration of the space flight experiments was 8-167 days, with a board lamp used for illumination. We found picrocrocin pigment in the space plants but not in the ground plants. Tissue culture of ginseng was grown in a special container in a thermostat at a stable temperature of 22 ± 0.5 °C; the duration of the space experiments was 8 to 167 days. The biological activity of the space-flight culture was five times higher than that of the ground culture, a difference still observed after recultivation of the space-flight samples on Earth for a year after flight. Callus tissue of potato was grown in tubes in a thermostat at a stable temperature of 22 ± 0.5 °C; the duration of the space experiments was 8 to 14 days. The concentration of regenerates in flight samples was five times higher than in ground samples. The space flight experiments show that microgravity and other

  16. A Generic Scheduling Simulator for High Performance Parallel Computers

    SciTech Connect

    Yoo, B S; Choi, G S; Jette, M A

    2001-08-01

It is well known that efficient job scheduling plays a crucial role in achieving high system utilization in large-scale high performance computing environments. A good scheduling algorithm should schedule jobs to achieve high system utilization while satisfying various user demands in an equitable fashion. Designing such a scheduling algorithm is a non-trivial task even in a static environment. In practice, the computing environment and workload are constantly changing. There are several reasons for this. First, the computing platforms constantly evolve as the technology advances. For example, the availability of relatively powerful commodity off-the-shelf (COTS) components at steadily diminishing prices has made it feasible to construct ever larger massively parallel computers in recent years [1, 4]. Second, the workload imposed on the system also changes constantly. The rapidly increasing compute resources have provided many application developers with the opportunity to radically alter program characteristics and take advantage of these additional resources. New developments in software technology may also trigger changes in user applications. Finally, changes in the political climate may alter user priorities or the mission of the organization. System designers in such dynamic environments must be able to accurately forecast the effect of changes in the hardware, software, and/or policies under consideration. If the environmental changes are significant, one must also reassess scheduling algorithms. Simulation has frequently been relied upon for this analysis, because other methods such as analytical modeling or actual measurements are usually too difficult or costly. A drawback of the simulation approach, however, is that developing a simulator is a time-consuming process. Furthermore, an existing simulator cannot be easily adapted to a new environment. In this research, we attempt to develop a generic job-scheduling simulator, which facilitates the evaluation of
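As a toy illustration of the kind of event-driven job-scheduling simulation discussed above (not the generic simulator the authors built), the following sketch runs a strict first-come-first-served policy over a job list and reports makespan and utilization; the `(submit_time, nodes, runtime)` tuple format is an assumption for the example:

```python
import heapq

def simulate_fcfs(total_nodes, jobs):
    """Event-driven FCFS simulator (no backfilling).
    jobs: list of (submit_time, nodes, runtime).
    Returns (makespan, utilization)."""
    jobs = sorted(jobs, key=lambda j: j[0])   # FCFS order by submit time
    free = total_nodes
    ends = []                                  # min-heap of (end_time, nodes)
    clock = 0.0
    busy = 0.0                                 # accumulated node-time of work
    for submit, nodes, runtime in jobs:
        clock = max(clock, submit)
        # block until enough nodes are released by completing jobs
        while free < nodes:
            end, n = heapq.heappop(ends)
            clock = max(clock, end)
            free += n
        heapq.heappush(ends, (clock + runtime, nodes))
        free -= nodes
        busy += nodes * runtime
    while ends:                                # drain remaining completions
        end, _ = heapq.heappop(ends)
        clock = max(clock, end)
    return clock, busy / (total_nodes * clock)
```

Swapping in alternative policies (backfilling, priorities) at the queue-ordering step is exactly the kind of what-if analysis such a simulator enables.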

  17. Reduction in redundancy of multichannel telemetric information by the method of adaptive discretization with associative sorting

    NASA Technical Reports Server (NTRS)

    Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.

    1974-01-01

    The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.
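The core idea of adaptive discretization, transmitting a sample only when it deviates from a prediction by more than a tolerance, can be sketched with a zero-order-hold predictor. This is a deliberate simplification: the associative-sorting variant the paper analyzes is not reproduced here.

```python
def adaptive_discretize(samples, tol):
    """Keep (index, value) pairs only when the value deviates from the last
    transmitted value by more than tol (zero-order-hold predictor)."""
    kept = [(0, samples[0])]
    last = samples[0]
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - last) > tol:
            kept.append((i, v))
            last = v
    return kept

def reconstruct(kept, n):
    """Rebuild n samples by holding each transmitted value until the next one."""
    out, j = [], 0
    for i in range(n):
        if j + 1 < len(kept) and kept[j + 1][0] <= i:
            j += 1
        out.append(kept[j][1])
    return out
```

The reconstruction error is bounded by the tolerance, which is the redundancy-vs-fidelity trade such on-board compression devices make.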

  18. The Gamma-Ray Burst On-board Trigger ECLAIRs of SVOM

    NASA Astrophysics Data System (ADS)

    Schanne, Stephane

    2016-07-01

SVOM, the Space-based multi-band astronomical Variable Objects Monitor, is a French-Chinese satellite mission for Gamma-Ray Burst studies. The conclusion of the Phase B studies is scheduled in 2016 and the launch is foreseen in 2021. With its set of 4 on-board instruments as well as dedicated ground instruments, SVOM will study GRBs in great detail, including their temporal and spectral properties from the visible to gamma-rays. The coded-mask telescope ECLAIRs on-board SVOM, with its Burst On-board Trigger system, analyzes in real time a 2 sr portion of the sky in the 4-120 keV energy range to detect and localize GRBs. It then requests a spacecraft slew to allow GRB follow-up observations by the on-board narrow-field-of-view telescopes MXT in X-rays and VT in the visible, and informs the community of observers via a dedicated ground network. This paper gives an update on the status of ECLAIRs and its Burst On-board Trigger system.
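A burst trigger of the general kind ECLAIRs implements looks for count-rate excesses above background. The sliding-window sketch below is a highly simplified illustration; the window length, median background estimator, and 5-sigma threshold are assumptions for the example, not the SVOM algorithm:

```python
import math
from statistics import median

def trigger_windows(counts, window=4, nsigma=5.0):
    """Flag sliding windows whose summed counts exceed the background
    estimate (median window sum) by nsigma * sqrt(background)."""
    sums = [sum(counts[i:i + window]) for i in range(len(counts) - window + 1)]
    bkg = median(sums)
    threshold = bkg + nsigma * math.sqrt(bkg)
    return [i for i, s in enumerate(sums) if s > threshold]
```

A real on-board trigger runs many window lengths and energy bands in parallel and localizes the excess on the coded-mask sky image before requesting a slew.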

  19. High performance hydrogen-terminated diamond field effect transistors

    NASA Astrophysics Data System (ADS)

    Russell, Stephen A. O.

    Diamond provides extreme properties which make it suitable as a new substrate material for high performance electronics. It has the potential to provide both high frequency and high power performance while operating in extreme environments such as elevated temperature or exposed to corrosive chemicals or radiation. Research to date has shown the potential of diamond for this purpose with hydrogen-terminated diamond surface channel transistors already showing promise in terms of high frequency operation. The inherent instability of using atmospheric molecules to induce a p-type doping at this hydrogen-terminated diamond surface has so far limited power performance and robustness of operation. This work reports upon the scaling of surface channel hydrogen-terminated transistors with FET gate lengths of 250 nm and 120 nm showing performance comparable to other devices published to date. The gate length was then scaled for the first time to sub-100 nm dimensions with a 50 nm gate length FET fabricated giving record high-frequency performance with a fT of 53 GHz. An adapted fabrication procedure was developed for this project with special attention paid to the volatility of the particles upon the diamond surface. Equivalent RF circuit models were extracted for each gate length and analysed in detail. Work was then undertaken to investigate a more stable alternative to the atmospheric induced doping effect with alternative electron accepting materials being deposited upon the hydrogen-terminated diamond surface. The as yet untested organic material F16CuPc was deposited on to hydrogen-terminated diamond and demonstrated its ability to encapsulate and preserve the atmospheric induced sub-surface conductivity at room temperature. For the first time an inorganic material was also investigated as a potential encapsulation for the hydrogen-terminated diamond surface, MoO3 was chosen due to its high electron affinity and like F16CuPc also showed the ability to preserve and

  20. High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.

    2015-12-01

The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. This three-year effort focused in particular on Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time: since it offers reusable, user-friendly computational elements, its FPGA IP core can be modified over a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/s, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/s. The inter-board communication bandwidth of HOPS is 4 GB/s, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/s. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the 3-year development of HOPS is the goal of this presentation.
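HOPS's frequency-domain processing rests on the FFT (4,096 points for wind profiling). A textbook radix-2 Cooley-Tukey sketch is shown below at 256 points for brevity, with a helper that picks the dominant spectral bin, as one might when locating a Doppler peak; none of this is the HOPS VHDL implementation:

```python
import cmath
import math

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def dominant_bin(signal):
    """Index of the strongest bin in the positive-frequency half-spectrum."""
    spectrum = fft(signal)
    mags = [abs(c) for c in spectrum[:len(signal) // 2]]
    return mags.index(max(mags))
```

In an FPGA the same butterfly structure is unrolled and pipelined, which is where the large bandwidth figures quoted above come into play.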

  1. A technique for on-board CT reconstruction using both kilovoltage and megavoltage beam projections for 3D treatment verification

    SciTech Connect

    Yin Fangfang; Guan Huaiqun; Lu Wenkai

    2005-09-15

The technologies with kilovoltage (kV) and megavoltage (MV) imaging in the treatment room are now available for image-guided radiation therapy to improve patient setup and target localization accuracy. However, development of strategies to efficiently and effectively implement these technologies for patient treatment remains challenging. This study proposed an aggregated technique for on-board CT reconstruction using a combination of kV and MV beam projections to improve data acquisition efficiency and image quality. These projections were acquired in the treatment room at the patient treatment position with a new kV imaging device installed on the accelerator gantry, orthogonal to the existing MV portal imaging device. The projection images for a head phantom and a contrast phantom were acquired using both the On-Board Imager(TM) kV imaging device and the MV portal imager mounted orthogonally on the gantry of a Varian Clinac(TM) 21EX linear accelerator. MV projections were converted into kV information prior to the aggregated CT reconstruction. The multilevel scheme algebraic-reconstruction technique was used to reconstruct CT images involving either full, truncated, or a combination of both full and truncated projections. An adaptive reconstruction method was also applied, based on limited numbers of kV projections and truncated MV projections, to enhance the anatomical information around the treatment volume and to minimize the radiation dose. The effects of the total number of projections, the combination of kV and MV projections, and the beam truncation of MV projections on the details of reconstructed kV/MV CT images were also investigated.
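The multilevel algebraic-reconstruction technique used above is a relative of the classic Kaczmarz/ART iteration, which cycles over the ray equations and projects the current image estimate onto each one in turn. A minimal dense-matrix sketch of that basic iteration follows (the actual multilevel, truncated-projection scheme is far more involved):

```python
def kaczmarz(A, b, iters=200, relax=1.0):
    """Kaczmarz / ART: repeatedly project x onto each hyperplane a_i . x = b_i.
    A: list of rows (ray weights), b: measured projections, relax: relaxation factor."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for row, bi in zip(A, b):
            norm = sum(v * v for v in row)
            if norm == 0.0:
                continue  # skip empty rays
            residual = (bi - sum(av * xv for av, xv in zip(row, x))) / norm
            x = [xv + relax * residual * av for xv, av in zip(x, row)]
    return x
```

For a consistent system the iterates converge to a solution; under-relaxation (relax < 1) is the usual defense against noisy, inconsistent projection data.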

  2. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Rotor dynamic instability problems in high performance turbomachinery are reviewed. Mechanical instability mechanisms are discussed. Seal forces and working fluid forces in turbomachinery are discussed. Control of rotor instability is also investigated.

  3. Mastering the Challenge of High-Performance Computing.

    ERIC Educational Resources Information Center

    Roach, Ronald

    2003-01-01

    Discusses how, just as all of higher education got serious with wiring individual campuses for the Internet, the nation's leading research institutions have initiated "high-performance computing." Describes several such initiatives involving historically black colleges and universities. (EV)

  4. Tiny biomedical amplifier combines high performance, low power drain

    NASA Technical Reports Server (NTRS)

    Deboo, G. J.

    1965-01-01

    Transistorized, portable, high performance amplifier with low power drain facilitates biomedical studies on mobile subjects. This device, which utilizes a differential input to obtain a common-mode rejection, is used for amplifying electrocardiogram and electromyogram signals.

  5. Exploring KM Features of High-Performance Companies

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Wen

    2007-12-01

To respond to an increasingly competitive business environment, many companies emphasize the importance of knowledge management (KM). Exploring and learning from the KM features of high-performance companies is therefore a promising approach. However, identifying the critical KM features of high-performance companies is a qualitative analysis problem. To handle this kind of problem, the rough set approach is suitable because it is based on data-mining techniques that discover knowledge without rigorous statistical assumptions. Thus, this paper explored the KM features of high-performance companies using the rough set approach. The results show that high-performance companies place importance on both tacit and explicit knowledge, and consider incentives and evaluations essential to implementing KM.
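The rough set approach partitions objects into indiscernibility classes on the chosen attributes and brackets a target concept between a lower approximation (classes certainly inside it) and an upper approximation (classes possibly inside it). A minimal sketch with made-up data; the attribute encoding is illustrative only:

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Rough-set lower/upper approximation of a target set of object indices.
    objects: list of attribute tuples; attrs: attribute indices defining
    the indiscernibility relation."""
    classes = defaultdict(set)
    for i, obj in enumerate(objects):
        classes[tuple(obj[a] for a in attrs)].add(i)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:      # entirely within the concept: certain members
            lower |= cls
        if cls & target:       # overlaps the concept: possible members
            upper |= cls
    return lower, upper
```

The gap between upper and lower approximations measures how well the chosen attributes discern the concept, which is the basis for extracting the "critical KM features" described above.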

  6. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    SciTech Connect

    2004-11-01

    Design guidelines outline high performance principles for the new or retrofit design of K-12 schools in tropical island climates. By incorporating energy improvements into construction or renovation plans, schools can reduce energy consumption and costs.

  7. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.

    2015-12-01

Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.
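The virtual-isocenter correction relies on measuring where the delivered field actually lands relative to the planned target and applying the offset. The toy sketch below estimates the field centre as the centroid of a 2-D intensity map and returns the correction vector; the real system works from on-board CT and film measurements, so this is illustration only:

```python
def center_of_mass(image):
    """Centroid (row, col) of a 2-D intensity map, taken as the measured field centre."""
    total = float(sum(sum(row) for row in image))
    r = sum(i * sum(row) for i, row in enumerate(image)) / total
    c = sum(j * v for row in image for j, v in enumerate(row)) / total
    return r, c

def targeting_error(planned, image):
    """Offset (drow, dcol) between the measured field centre and the planned target;
    negating it gives the stage correction to apply."""
    mr, mc = center_of_mass(image)
    return mr - planned[0], mc - planned[1]
```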

  8. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation.

    PubMed

    Stewart, James M P; Ansell, Steve; Lindsay, Patricia E; Jaffray, David A

    2015-12-01

Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm. PMID:26540304

  9. Towards a smart Holter system with high performance analogue front-end and enhanced digital processing.

    PubMed

    Du, Leilei; Yan, Yan; Wu, Wenxian; Mei, Qiujun; Luo, Yu; Li, Yang; Wang, Lei

    2013-01-01

Multiple-lead dynamic ECG recorders (Holter) play an important role in the earlier detection of various cardiovascular diseases. In this paper, we present the first several steps towards a 12-lead Holter system with a high-performance AFE (Analogue Front-End) and enhanced digital processing. The system incorporates an analogue front-end chip (ADS1298 from TI), which has not yet been widely used in most commercial Holter products. A highly efficient data management module was designed to handle the data exchange between the ADS1298 and the microprocessor (STM32L151 from STMicroelectronics). Furthermore, the system employs a Field Programmable Gate Array (Spartan-3E from Xilinx) module, on which a dedicated real-time 227-tap FIR filter is executed to improve the overall filtering performance, since the ADS1298 has no high-pass filtering capability and only allows limited low-pass filtering. The Spartan-3E FPGA is also capable of offering further on-board computational ability for a smarter Holter. The results indicate that all functional blocks work as intended. In the future, we will conduct clinical trials and compare our system with other state-of-the-art systems. PMID:24109911
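Since the ADS1298 provides no high-pass filtering, the digital FIR stage carries that burden. The following windowed-sinc design sketch uses 31 taps rather than 227 for brevity, and builds a low-pass prototype (a high-pass response can be derived from such a prototype by spectral inversion); it is a generic textbook design, not the paper's filter:

```python
import math

def lowpass_fir(num_taps, cutoff):
    """Hamming-windowed-sinc low-pass FIR; cutoff is a fraction of the sample rate."""
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        x = n - m / 2
        sinc = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        h.append(sinc * window)
    s = sum(h)
    return [v / s for v in h]       # normalize for unity DC gain

def convolve(x, h):
    """Full linear convolution of signal x with taps h."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x) + len(h) - 1)]
```

On the FPGA the same multiply-accumulate structure is implemented in hardware, one tap per DSP slice or time-multiplexed.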

  10. Variational formulation of high performance finite elements: Parametrized variational principles

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Militello, Carmello

    1991-01-01

    High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational basis of high-performance elements, with emphasis on those constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parametrized variational principles that provide a foundation for the FF and ANS methods, as well as for a combination of both are presented.

  11. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  12. High performance thermal imaging for the 21st century

    NASA Astrophysics Data System (ADS)

    Clarke, David J.; Knowles, Peter

    2003-01-01

In recent years, IR detector technology has developed from early short linear arrays. Such devices require high-performance signal-processing electronics to meet today's thermal imaging requirements for military and para-military applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high-performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.

  13. On-board Attitude Determination System (OADS). [for advanced spacecraft missions

    NASA Technical Reports Server (NTRS)

    Carney, P.; Milillo, M.; Tate, V.; Wilson, J.; Yong, K.

    1978-01-01

The requirements, capabilities and system design for an on-board attitude determination system (OADS) to be flown on advanced spacecraft missions were determined. Based upon the OADS requirements and system performance evaluation, a preliminary on-board attitude determination system is proposed. The proposed OADS consists of one NASA Standard IRU (DRIRU-2) as the primary attitude determination sensor, two improved NASA Standard star trackers (SST) for periodic update of attitude information, a GPS receiver to provide on-board space vehicle position and velocity vector information, and a multiple-microcomputer system for data processing and attitude determination functions. The functional block diagram of the proposed OADS system is shown, and the computational requirements are evaluated based upon this proposed system.

  14. Evaluation of the use of on-board spacecraft energy storage for electric propulsion missions

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Palmer, F. M.

    1983-01-01

On-board spacecraft energy storage represents an underutilized resource for some types of missions that also benefit from the relatively high specific-impulse capability of electric propulsion. This resource can provide an appreciable fraction of the power required for operating the electric propulsion subsystem in some missions. The most probable mission requirement for utilization of this energy is that of geostationary satellites, which have secondary batteries for operating at high power levels during eclipse. The study summarized in this report selected four examples of missions that could benefit from the use of electric propulsion and on-board energy storage. Engineering analyses were performed to evaluate the mass saved and the economic benefit expected when electric propulsion and on-board batteries perform some propulsion maneuvers that would conventionally be provided by chemical propulsion. For a given payload mass in geosynchronous orbit, use of electric propulsion in this manner typically provides a 10% reduction in spacecraft mass.

  15. Spacecraft on-board SAR image generation for EOS-type missions

    NASA Technical Reports Server (NTRS)

    Liu, K. Y.; Arens, W. E.; Assal, H. M.; Vesecky, J. F.

    1987-01-01

Spacecraft on-board synthetic aperture radar (SAR) image generation is an extremely difficult problem because of the requirements for high computational rates (usually on the order of giga-operations per second), high reliability (some missions last up to 10 years), and low power dissipation and mass (typically less than 500 watts and 100 kilograms). Recently, a JPL study was performed to assess the feasibility of on-board SAR image generation for EOS-type missions. This paper summarizes the results of that study. Specifically, it proposes a processor architecture using a VLSI time-domain parallel array for azimuth correlation. Using available space-qualifiable technology to implement the proposed architecture, an on-board SAR processor having acceptable power and mass characteristics appears feasible for EOS-type applications.

  16. Safety in earth orbit study. Volume 2: Analysis of hazardous payloads, docking, on-board survivability

    NASA Technical Reports Server (NTRS)

    1972-01-01

Detailed and supporting analyses are presented of the hazardous payloads, docking, and on-board survivability aspects connected with earth orbital operations of the space shuttle program. The hazards resulting from delivery, deployment, and retrieval of hazardous payloads, and from handling and transport of cargo between the orbiter, sortie modules, and space station, are identified and analyzed. The safety aspects of shuttle orbiter to modular space station docking include docking for assembly of the space station, normal resupply docking, and emergency docking. Personnel traffic patterns, escape routes, and on-board survivability are analyzed for the orbiter with crew and passengers, sortie modules, and modular space station, under normal, emergency, and EVA and IVA operations.

  17. Real-Time On-Board HMS/Inspection Capability for Propulsion and Power Systems

    NASA Technical Reports Server (NTRS)

    Barkhoudarian, Sarkis

    2005-01-01

Presently, the evaluation of the health of space propulsion systems includes obtaining and analyzing limited flight data and extensive post-flight performance, operational and inspection data. This approach is not practical for deep-space missions due to longer operational times, the lack of an in-space inspection facility, the absence of timely ground commands and very long repair intervals. This paper identifies the on-board health-management/inspection needs of deep-space propulsion and thermodynamic power-conversion systems. It also describes technologies that could provide on-board inspection and more comprehensive health management for more successful missions.

  18. Conceptual design of an on-board optical processor with components

    NASA Technical Reports Server (NTRS)

    Walsh, J. R.; Shackelford, R. G.

    1977-01-01

    The specification of components for a spacecraft on-board optical processor was investigated. A space oriented application of optical data processing and the investigation of certain aspects of optical correlators were examined. The investigation confirmed that real-time optical processing has made significant advances over the past few years, but that there are still critical components which will require further development for use in an on-board optical processor. The devices evaluated were the coherent light valve, the readout optical modulator, the liquid crystal modulator, and the image forming light modulator.

  19. The Solar Spectral Irradiance Measured on Board the International Space Station and the Picard Spacecraft

    NASA Astrophysics Data System (ADS)

    Thuillier, G. O.; Bolsee, D.; Schmidtke, G.; Schmutz, W. K.

    2011-12-01

On board the International Space Station, the spectrometers SOL-ACES and SOLSPEC measure the solar spectral irradiance from 17 to 150 nm and 170 to 2900 nm, respectively. On board PICARD, launched on 15 June 2010, the PREMOS instrument consists of a radiometer and several sunphotometers operated at several fixed wavelengths. We shall present spectra at different solar activity levels as well as their quoted accuracy. Comparisons with similar data from other missions presently running in space will be shown, incorporating the PREMOS measurements. Some special solar events will also be presented and interpreted.

  20. High Performance Schools Best Practices Manual. Volume I: Planning [and] Volume II: Design [and] Volume III: Criteria.

    ERIC Educational Resources Information Center

    Eley, Charles, Ed.

    This three-volume manual, focusing on California's K-12 public schools, presents guidelines for establishing schools that are healthy, comfortable, energy efficient, resource efficient, water efficient, secure, adaptable, and easy to operate and maintain. The first volume describes why high performance schools are important, what components are…

  1. The Type of Culture at a High Performance Schools and Low Performance School in the State of Kedah

    ERIC Educational Resources Information Center

    Daud, Yaakob; Raman, Arumugam; Don, Yahya; O. F., Mohd Sofian; Hussin, Fauzi

    2015-01-01

    This research aims to identify the type of culture at a High Performance School (HPS) and Low Performance School (LPS) in the state of Kedah. The research instrument used to measure the type of organizational culture was adapted from Organizational Culture Assessment Instrument (Cameron & Quinn, 2006) based on Competing Values Framework Quinn…

  2. Adaptive beamforming in a CDMA mobile satellite communications system

    NASA Technical Reports Server (NTRS)

    Munoz-Garcia, Samuel G.

    1993-01-01

Code-Division Multiple-Access (CDMA) stands out as a strong contender for the choice of multiple access scheme in these future mobile communication systems. This is due to a variety of reasons such as its excellent performance in multipath environments, high scope for frequency reuse and graceful degradation near saturation. However, the capacity of CDMA is limited by the self-interference between the transmissions of the different users in the network. Moreover, the disparity between the received power levels gives rise to the near-far problem, that is, weak signals are severely degraded by the transmissions from other users. In this paper, the use of time-reference adaptive digital beamforming on board the satellite is proposed as a means to overcome the problems associated with CDMA. This technique enables a high number of independently steered beams to be generated from a single phased-array antenna, which automatically track the desired user signal and null the unwanted interference sources. Since CDMA is interference limited, the interference protection provided by the antenna converts directly and linearly into an increase in capacity. Furthermore, the proposed concept allows the near-far effect to be mitigated without requiring tight coordination of the users in terms of power control. A payload architecture is presented that illustrates the practical implementation of this concept. This digital payload architecture shows that with the advent of high-performance CMOS digital processing, the on-board implementation of complex DSP techniques, in particular digital beamforming, has become possible and is most attractive for mobile satellite communications.
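The time-reference beamforming idea above can be sketched with a scalar LMS loop: complex weights on a small array adapt against a known reference (pilot) sequence, steering gain toward the desired user and a null toward the interferer. Array size, angles, noise level and step size below are illustrative assumptions, not the payload's actual parameters:

```python
# Minimal sketch of time-reference LMS adaptive beamforming: the weight
# vector is updated from the error against a known reference signal.
import cmath
import random

random.seed(0)
N = 4          # array elements (assumed)
MU = 0.02      # LMS step size (assumed)

def steering(u):
    """Half-wavelength-spaced array response for direction-sine u."""
    return [cmath.exp(-1j * cmath.pi * n * u) for n in range(N)]

a_d = steering(0.0)     # desired user at broadside
a_i = steering(0.7071)  # interfering user at ~45 degrees

w = [0j] * N
for _ in range(5000):
    s = random.choice((-1.0, 1.0))   # desired symbol (known as reference)
    q = random.choice((-1.0, 1.0))   # interfering symbol
    x = [s * a_d[n] + q * a_i[n] + 0.1 * random.gauss(0, 1) for n in range(N)]
    y = sum(w[n].conjugate() * x[n] for n in range(N))  # beamformer output
    e = s - y                                           # reference error
    w = [w[n] + MU * x[n] * e.conjugate() for n in range(N)]  # LMS update

gain_d = abs(sum(w[n].conjugate() * a_d[n] for n in range(N)))
gain_i = abs(sum(w[n].conjugate() * a_i[n] for n in range(N)))
```

After adaptation, `gain_d` (response toward the desired user) is near unity while `gain_i` (response toward the interferer) is strongly suppressed, which is exactly the interference protection the abstract converts into CDMA capacity.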

  3. Application of the Payload Data Processing and Storage System to MOSREM Multi-Processor On-Board System for Robotic Exploration Missions

    NASA Astrophysics Data System (ADS)

    Jameux, D.

(TOS-MMA) section of the European Space Agency. INTRODUCTION: To fulfil the growing needs of processing high quantities of data on board satellites, payload engineers have come to a common approach: they design sets of data acquisition, storage and processing elements for on-board applications that form what ESA calls a Payload Data Processing and Storage System (PDPSS). Spacecraft autonomy requires massive processing power that is not available in today's space-qualified processors, nor will be for some time. On the other hand, leading civil/military processors and processing modules, though providing the numerical power needed for autonomy, cannot work reliably in the harsh space environment. ESA has defined, in co-operation with industry, a reference architecture for payload processing that decomposes the global payload data processing system into nodes interconnected by high-speed serial links (SpaceWire). This architecture may easily integrate heterogeneous processors, thus providing the ideal framework for combining the environmental ruggedness of space processors (e.g. ERC32) with the sheer computing power of industrial processors. This paper presents ESA's near-future efforts for the development of an environmentally sound and high-performance on-board computer system, composed of COTS and space-qualified hardware, in which the desired characteristics (high computing power and high …

  4. Resource Estimation in High Performance Medical Image Computing

    PubMed Central

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D.M.

    2015-01-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of ‘jobs’ requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources. PMID:24906466
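The resource-estimation idea above (predicting CPU runtime before job submission) can be illustrated with a deliberately simple model: an ordinary-least-squares fit of runtime versus input size over historical runs, padded with a safety margin. The linear model, the data and the 20% margin are illustrative assumptions, not the system described in the paper:

```python
# Hedged sketch of resource estimation for HPC job submission: fit a
# linear runtime model from historical executions, then request a
# padded allocation so the job is not killed for exceeding its limit.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Historical (input size in MB, observed runtime in minutes) -- made up.
history = [(100, 12.0), (200, 21.5), (400, 41.0), (800, 80.5)]
slope, intercept = fit_line([h[0] for h in history], [h[1] for h in history])

def estimate_runtime(size_mb, margin=0.2):
    """Predicted runtime plus a safety margin, for the job scheduler."""
    return (slope * size_mb + intercept) * (1.0 + margin)
```

The trade-off in the margin mirrors the paper's motivation: too small and executions are killed incomplete; too large and queue wait times grow because resources are over-requested.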

  5. High-performance positive paste for lead-acid batteries

    SciTech Connect

    Kao, W.H.

    1996-09-01

Positive lead-acid plates with high porosity and surface area, aiming to deliver a very high current density of about 1 A/cm², were developed. The high porosity and surface area were achieved by using a combination of fine particles of lead oxide and/or basic lead sulfates with an adequate amount of persulfate and water. The relationship between the positive paste phase composition and the high-rate performance of the plate was studied. The highly porous plate is able to deliver a very high current owing to more acid being available in the plate structure. In low-rate applications, when acid diffusion from the bulk becomes the limiting factor, the high-performance plate is not more advantageous than conventional starting, lighting, and ignition (SLI) plates. The cycle life of the high-performance plate is sensitive to depth of discharge. The deep-discharge high-rate capacity of the high-performance plates falls faster than that of the SLI plate. Nevertheless, the high-performance paste delivers at least 30% more energy, either to the same depth of discharge per cycle or over the entire service life with constant capacity removal in each cycle. One failure mode of the high-performance plates is the change of material morphology during deep-discharge cycling, which results in material shedding.

  6. Deployment of precise and robust sensors on board ISS-for scientific experiments and for operation of the station.

    PubMed

    Stenzel, Christian

    2016-09-01

    The International Space Station (ISS) is the largest technical vehicle ever built by mankind. It provides a living area for six astronauts and also represents a laboratory in which scientific experiments are conducted in an extraordinary environment. The deployed sensor technology contributes significantly to the operational and scientific success of the station. The sensors on board the ISS can be thereby classified into two categories which differ significantly in their key features: (1) sensors related to crew and station health, and (2) sensors to provide specific measurements in research facilities. The operation of the station requires robust, long-term stable and reliable sensors, since they assure the survival of the astronauts and the intactness of the station. Recently, a wireless sensor network for measuring environmental parameters like temperature, pressure, and humidity was established and its function could be successfully verified over several months. Such a network enhances the operational reliability and stability for monitoring these critical parameters compared to single sensors. The sensors which are implemented into the research facilities have to fulfil other objectives. The high performance of the scientific experiments that are conducted in different research facilities on-board demands the perfect embedding of the sensor in the respective instrumental setup which forms the complete measurement chain. It is shown that the performance of the single sensor alone does not determine the success of the measurement task; moreover, the synergy between different sensors and actuators as well as appropriate sample taking, followed by an appropriate sample preparation play an essential role. The application in a space environment adds additional challenges to the sensor technology, for example the necessity for miniaturisation, automation, reliability, and long-term operation. An alternative is the repetitive calibration of the sensors. 

  7. BRESEX: On board supervision, basic architecture and preliminary aspects for payload and space shuttle interface

    NASA Technical Reports Server (NTRS)

    Bergamini, E. W.; Depaula, A. R., Jr.; Martins, R. C. D. O.

    1984-01-01

    Data relative to the on board supervision subsystem are presented which were considered in a conference between INPE and NASA personnel, with the purpose of initiating a joint effort leading to the implementation of the Brazilian remote sensing experiment - (BRESEX). The BRESEX should consist, basically, of a multispectral camera for Earth observation, to be tested in a future space shuttle flight.

  8. 49 CFR 176.78 - Use of power-operated industrial trucks on board vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... in 46 CFR 147.60 or 46 CFR 147.45, respectively. ... 49 Transportation 2 2012-10-01 2012-10-01 false Use of power-operated industrial trucks on board... CARRIAGE BY VESSEL General Handling and Stowage § 176.78 Use of power-operated industrial trucks on...

  9. 76 FR 13121 - Electronic On-Board Recorders and Hours of Service Supporting Documents

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-10

    ... requested that FMCSA extend the comment period for the Electronic On-Board Recorder and Hours of Service Supporting Documents Notice of Proposed Rulemaking, which published on February 1, 2011 (76 FR 5537), by 45... Federal Motor Carrier Safety Administration 49 CFR Parts 385, 390, and 395 RIN 2126-AB20 Electronic...

  10. Astronauts Schirra and Stafford talk to crewmen on board the U.S.S. Wasp

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Astronauts Walter M. Schirra Jr. (left), command pilot, and Thomas P. Stafford, pilot, talk to crewmen on board the aircraft carrier U.S.S. Wasp after successful recovery of the Gemini 6 spacecraft. Note the cake with a model of the Gemini spacecraft in its center, which is positioned in front of the astronauts.

  11. 49 CFR 1133.2 - Statement of claimed damages based on Board findings.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 8 2014-10-01 2014-10-01 false Statement of claimed damages based on Board findings. 1133.2 Section 1133.2 Transportation Other Regulations Relating to Transportation (Continued) SURFACE TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION RULES OF PRACTICE RECOVERY OF DAMAGES § 1133.2 Statement of claimed damages based on...

  12. Marine Technician's Handbook, Instructions for Taking Air Samples on Board Ship: Carbon Dioxide Project.

    ERIC Educational Resources Information Center

    Keeling, Charles D.

    This booklet is one of a series intended to provide explicit instructions for the collection of oceanographic data and samples at sea. The methods and procedures described have been used by the Scripps Institution of Oceanography and found reliable and up-to-date. Instructions are given for taking air samples on board ship to determine the…

  13. Incorporate design of on-board network and inter-satellite network

    NASA Astrophysics Data System (ADS)

    Li, Bin; You, Zheng; Zhang, Chenguang

    2005-11-01

On board a satellite, data transfer is essential and must be reliable. This paper first introduces an on-board network based on the Controller Area Network (CAN). As a kind of field bus, CAN is simple and reliable, and has been proven in previous flights. In this paper, the CAN frame is redefined, including the identifier and message data, the addresses for source and destination, and the frame types. The on-board network provides datagram transmission and buffer transmission: datagram transmission is used to carry out TTC functions, and buffer transmission is used to transfer mass data such as images. The inter-satellite network for satellite formation flying is not designed in isolation; it takes advantage of the TCP/IP model and inherits and extends the on-board network protocols. The inter-satellite network includes a link layer, a network layer and a transport layer. There are 8 virtual channels for various space missions or requirements and 4 kinds of services to be selected. The network layer is designed to manage the whole network, calculate and select the route table and gather network information, while the transport layer mainly routes data, which makes communication between any two nodes possible. The structures of the link-layer frame and the transport-layer data segment are similar, so there is no complex packing and unpacking. Finally, this paper gives methods for data conversion between the on-board network and the inter-satellite network.
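The redefined CAN identifier described above (carrying source and destination addresses plus a frame type) can be sketched as bit-field packing. The field widths below are hypothetical, since the abstract does not give the actual layout; they are chosen only to fill a 29-bit extended CAN identifier:

```python
# Hypothetical bit layout for a redefined 29-bit extended CAN identifier:
# 2 bits frame type, 8 bits source address, 8 bits destination address,
# 11 bits sequence number. The real system's layout is not specified.

TYPE_BITS, SRC_BITS, DST_BITS, SEQ_BITS = 2, 8, 8, 11
assert TYPE_BITS + SRC_BITS + DST_BITS + SEQ_BITS == 29  # extended CAN ID

def pack_id(ftype, src, dst, seq):
    """Pack the fields into a single 29-bit identifier."""
    ident = ftype
    ident = (ident << SRC_BITS) | src
    ident = (ident << DST_BITS) | dst
    ident = (ident << SEQ_BITS) | seq
    return ident

def unpack_id(ident):
    """Recover (frame type, source, destination, sequence)."""
    seq = ident & ((1 << SEQ_BITS) - 1)
    ident >>= SEQ_BITS
    dst = ident & ((1 << DST_BITS) - 1)
    ident >>= DST_BITS
    src = ident & ((1 << SRC_BITS) - 1)
    ftype = ident >> SRC_BITS
    return ftype, src, dst, seq

ident = pack_id(1, 0x12, 0x34, 500)   # e.g. a datagram (TTC) frame
```

Putting the addresses in the identifier lets CAN's native arbitration and hardware filtering do the routing, which is presumably why the frame was redefined this way.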

  14. Individual or Group Representation: Native Trustees on Boards of Education in Ontario.

    ERIC Educational Resources Information Center

    Brady, Patrick

    1992-01-01

    Over half of Canadian Native elementary and secondary students attend public schools. In Ontario, legislative restrictions and the mechanics of the electoral system work against meaningful Native representation on boards of education. Legislative precedents for the provision of group representation for Native peoples are discussed. (Author/SV)

  15. Development of On-Board Fluid Analysis for the Mining Industry - Final report

    SciTech Connect

    Pardini, Allan F.

    2005-08-16

    Pacific Northwest National Laboratory (PNNL: Operated by Battelle Memorial Institute for the Department of Energy) is working with the Department of Energy (DOE) to develop technology for the US mining industry. PNNL was awarded a three-year program to develop automated on-board/in-line or on-site oil analysis for the mining industry.

  16. 77 FR 54651 - Study on the Use of Cell Phones On Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-05

    ... Federal Aviation Administration Study on the Use of Cell Phones On Board Aircraft AGENCY: Federal Aviation... on the impact of the use of cell phones for voice communications in an aircraft during a flight in... November 5, 2012. ADDRESSES: Send comments identified as Cell Phone Study Comments using any of...

  17. Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors

    NASA Technical Reports Server (NTRS)

    Flatley, Thomas P.

    2015-01-01

SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground-based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.

  18. On-Board Cryosphere Change Detection With The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Chien, S.; Davies, A.; Tran, D.; Mazzoni, D.; Baker, V.; Dohm, J.; Ip, F.

    2006-12-01

The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyperspectral visible to short-wave infrared spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. A cryosphere classification algorithm, developed with Support Vector Machine (SVM) machine learning techniques [1] and replacing a manually derived classifier used in earlier operations [2], has been used in conjunction with on-board autonomous software to execute over three hundred on-board scenarios in 2005 and early 2006, detecting and autonomously responding to sea ice break-up and formation, lake freeze and thaw, as well as the onset and melting of snow cover on land. This demonstrates an approach which could be applied to the monitoring of cryospheres on Earth and Mars as well as the search for dynamic activity on the icy moons of the outer Solar System. [1] Castano et al. (2006) Onboard classifiers for science event detection on a remote-sensing spacecraft, KDD '06, Aug 20-23 2006, Philadelphia, PA. [2] Doggett et al. (2006), Autonomous detection of cryospheric change with Hyperion on-board Earth Observing-1, Rem. Sens. Env., 101, 447-462.
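The on-board classification-and-trigger loop above can be sketched with a toy linear decision function: each pixel is labeled, and additional data collection is triggered when the classified scene changes. The weights, band values and trigger rule below are illustrative assumptions, not the flight SVM:

```python
# Sketch of on-board change detection: a linear classifier (standing in
# for the SVM) labels each pixel, and a follow-up observation fires when
# the classified scene differs enough from the previous one.

# Hypothetical linear decision over 3 spectral bands: w.x + b > 0 -> "ice".
W = [0.8, -0.5, 0.3]
B = -0.1

def classify(pixel):
    score = sum(w * v for w, v in zip(W, pixel)) + B
    return "ice" if score > 0 else "water"

def change_detected(prev_scene, cur_scene, threshold=0.1):
    """Trigger if the classified ice fraction shifted by more than threshold."""
    def ice_frac(scene):
        labels = [classify(p) for p in scene]
        return labels.count("ice") / len(labels)
    return abs(ice_frac(cur_scene) - ice_frac(prev_scene)) > threshold

frozen = [[1.0, 0.2, 0.5]] * 10   # pixels the toy classifier calls "ice"
melted = [[0.1, 0.9, 0.1]] * 10   # pixels it calls "water"
```

A scene classified entirely as unchanged (or cloud-covered) is filtered as null data rather than downlinked, which is the bandwidth saving the experiment demonstrates.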

  19. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... must conduct an individual safety briefing for any passenger where required by 14 CFR 121.571(a)(3) and (a)(4), 14 CFR 135.117(b), or other FAA requirements. (b) You may offer an individual briefing to any... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false What requirements apply to on-board...

  20. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... must conduct an individual safety briefing for any passenger where required by 14 CFR 121.571(a)(3) and (a)(4), 14 CFR 135.117(b), or other FAA requirements. (b) You may offer an individual briefing to any... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false What requirements apply to on-board...

  1. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... must conduct an individual safety briefing for any passenger where required by 14 CFR 121.571(a)(3) and (a)(4), 14 CFR 135.117(b), or other FAA requirements. (b) You may offer an individual briefing to any... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false What requirements apply to on-board...

  2. 14 CFR 382.115 - What requirements apply to on-board safety briefings?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false What requirements apply to on-board safety briefings? 382.115 Section 382.115 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) SPECIAL REGULATIONS NONDISCRIMINATION ON THE BASIS OF DISABILITY IN AIR TRAVEL Services on Aircraft § 382.115...

  3. 14 CFR 382.65 - What are the requirements concerning on-board wheelchairs?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false What are the requirements concerning on-board wheelchairs? 382.65 Section 382.65 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) SPECIAL REGULATIONS NONDISCRIMINATION ON THE BASIS OF DISABILITY IN AIR TRAVEL Accessibility of Aircraft §...

  4. An on-board pedestrian detection and warning system with features of side pedestrian

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Wong, ChupChung; Chan, KwokPo; Xu, Jiayao; Wang, Xin'an

    2012-01-01

Automotive Active Safety (AAS) is the main branch of intelligent-vehicle research, and pedestrian detection is the key problem of AAS because it is related to the casualties of most vehicle accidents. For on-board pedestrian detection algorithms, the main problem is to balance efficiency and accuracy so that the on-board system is usable in real scenes, so an on-board pedestrian detection and warning system whose algorithm considers the features of side pedestrians is proposed. The system includes two modules: pedestrian detection and warning. Haar features and a cascade of stage classifiers trained by Adaboost are first applied, and then HOG features and an SVM classifier are used to refine false positives. To make these time-consuming algorithms available for real-time use, a divide-window method together with an operator context scanning (OCS) method is applied to increase efficiency. To merge the velocity information of the automobile, the distance of the detected pedestrian is also obtained, so the system can judge whether there is a potential danger for the pedestrian in front. On a new dataset captured in an urban environment with side pedestrians on zebra crossings, the embedded system and its algorithm deliver usable on-board results for side pedestrian detection.
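The two-stage pipeline above (a fast Haar/Adaboost cascade followed by HOG+SVM refinement of the survivors) can be sketched structurally. The scoring functions below are toy stand-ins for the real feature extractors; only the cascade control flow mirrors the described system:

```python
# Structural sketch of a two-stage detection cascade: a cheap first
# stage rejects most windows, and a slower second stage (run only on
# stage-1 survivors) prunes false positives. Scores are toy stand-ins.

def stage1_fast(window):
    """Cheap test: enough dark-vs-light contrast to be a candidate."""
    return max(window) - min(window) > 0.5

def stage2_slow(window):
    """More selective test, run only on stage-1 survivors."""
    mean = sum(window) / len(window)
    return sum((v - mean) ** 2 for v in window) / len(window) > 0.1

def detect(windows):
    """Cascade: stage 2 sees only windows that stage 1 accepts."""
    candidates = [w for w in windows if stage1_fast(w)]
    return [w for w in candidates if stage2_slow(w)]

windows = [
    [0.5, 0.5, 0.5, 0.5],   # flat background: rejected by stage 1
    [0.0, 1.0, 0.0, 1.0],   # high contrast and variance: detected
    [0.2, 0.8, 0.5, 0.5],   # passes stage 1, pruned by stage 2
]
hits = detect(windows)
```

The efficiency argument is the same as in the paper: the expensive stage runs on a small fraction of windows, so overall cost stays close to that of the cheap stage.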

  5. On-Board File Management and Its Application in Flight Operations

    NASA Technical Reports Server (NTRS)

    Kuo, N.

    1998-01-01

In this paper, we present the minimum functions required for an on-board file management system, explore file manipulation processes, and demonstrate how file transfer, along with the file management system, can be utilized to support flight operations and data delivery.

  6. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...; (9) Main office address; (10) 24-hour period starting time (e.g., midnight, 9:00 a.m., noon, 3:00 p.m... and other related information for the duration of the current trip. (h) Submission of driver's record... on-board system sensor failures and identification of edited data. Such support systems should...

  7. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., 9:00 a.m., noon, 3:00 p.m.) (11) Name of co-driver; (12) Total hours; and (13) Shipping document... the driver's duty status and other related information for the duration of the current trip. (h... also provide information concerning on-board system sensor failures and identification of edited...

  8. 49 CFR 395.15 - Automatic on-board recording devices.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...; (9) Main office address; (10) 24-hour period starting time (e.g., midnight, 9:00 a.m., noon, 3:00 p.m... and other related information for the duration of the current trip. (h) Submission of driver's record... on-board system sensor failures and identification of edited data. Such support systems should...

  9. AGB Statement on Board Responsibility for the Oversight of Educational Quality

    ERIC Educational Resources Information Center

    Association of Governing Boards of Universities and Colleges, 2011

    2011-01-01

    This "Statement on Board Responsibility for the Oversight of Educational Quality," approved by the Board of Directors of the Association of Governing Boards (AGB) in March 2011, urges institutional administrators and governing boards to engage fully in this area of board responsibility. The seven principles in this statement offer suggestions to…

  10. Re-scheduling as a tool for the power management on board a spacecraft

    NASA Technical Reports Server (NTRS)

    Albasheer, Omar; Momoh, James A.

    1995-01-01

    The scheduling of events on board a spacecraft is based on forecast energy levels. Real-time energy values may not coincide with the forecast, so the allocation of power must be revised dynamically. Re-scheduling is also needed for other reasons, such as the addition of a new event that must be scheduled or the failure of an event due to any of many contingencies; this capability is vital to the survivability of the spacecraft. In this presentation, a re-scheduling tool is described as part of an overall scheme for power management on board a spacecraft from the energy-allocation point of view. The scheme is based on the optimal use of the energy available on board, using expert systems combined with linear optimization techniques. The system schedules the maximum number of events with the energy available, so that more events share the operating cost of the spacecraft. It can also re-schedule after a contingency in minimal time and with minimal disturbance to the original schedule. The end product is a fully integrated planning system capable of producing sound decisions quickly and with less human error. The overall system is presented with the re-scheduling algorithm discussed in detail, followed by tests and results for validation.
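    The core idea, scheduling events under an energy budget and re-scheduling when the real-time budget diverges from the forecast, can be sketched with a simple greedy pass. The event names, energy costs, and priorities below are invented for illustration; the paper's actual approach combines expert systems with linear optimization, which this greedy heuristic only approximates.

```python
# Sketch of energy-budgeted scheduling and re-scheduling.
# Events and their energy/priority values are hypothetical.

def schedule(events, energy_budget):
    """Greedily pack events (highest priority first) under the energy budget."""
    chosen, remaining = [], energy_budget
    for ev in sorted(events, key=lambda e: -e["priority"]):
        if ev["energy"] <= remaining:
            chosen.append(ev["name"])
            remaining -= ev["energy"]
    return chosen

events = [
    {"name": "downlink", "energy": 40, "priority": 3},
    {"name": "imaging",  "energy": 50, "priority": 2},
    {"name": "heater",   "energy": 30, "priority": 1},
]

plan = schedule(events, energy_budget=100)  # schedule against forecast energy
replan = schedule(events, energy_budget=70)  # re-schedule: real-time energy is lower
print(plan)    # -> ['downlink', 'imaging']
print(replan)  # -> ['downlink', 'heater']
```

    Re-running the same scheduler against the measured budget is the "minimal disturbance" case: the highest-priority event survives, and only the events that no longer fit are swapped out.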

  11. 49 CFR Appendix A to Part 395 - Electronic On-Board Recorder Performance Specifications

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Electronic On-Board Recorder Performance Specifications A Appendix A to Part 395 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS HOURS OF SERVICE OF DRIVERS...

  12. Evaluation of the on-board module building cotton harvest systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The "on-board" module building systems from Case IH (Module Express 625 [ME 625]) and a system under final testing by John Deere (7760) represent the most radical change in the seed cotton handling and harvest system since the module builder was introduced over 30 years ago. The Module Express 625 c...

  13. 75 FR 739 - Use of Additional Portable Oxygen Concentrator Devices on Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-06

    ..., ``Use of Certain Portable Oxygen Concentrator Devices on Board Aircraft'' (70 FR 40156). SFAR 106 is the result of a notice the FAA published in July 2004 (69 FR 42324) to address the needs of passengers who... Inogen, Inc.'s Inogen One POCs. SFAR 106 was amended on September 12, 2006, (71 FR 53954) to add...

  14. 77 FR 63217 - Use of Additional Portable Oxygen Concentrators on Board Aircraft

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-16

    ...) entitled, ``Use of Certain Portable Oxygen Concentrator Devices Onboard Aircraft'' (70 FR 40156). SFAR 106 is the result of a notice the FAA published in July 2004 (69 FR 42324) to address the needs of... on and used by a passenger on board an aircraft. In addition, on January 27, 2012 (77 FR 4219),...

  15. Concept for On-Board Safe Landing Target Selection and Landing for the Mars 2020 Mission

    NASA Astrophysics Data System (ADS)

    Brugarolas, P.; Chen, A.; Johnson, A.; Casoliva, J.; Singh, G.; Stehura, A.; Way, D.; Dutta, S.

    2014-06-01

    We present a concept for a potential enhancement to Mars 2020 to enable landing on hazardous landing sites. It adds to MSL-EDL the capability to select and divert to a safe site through on-board terrain relative localization and target selection.

  16. 29 CFR 1915.506 - Hazards of fixed extinguishing systems on board vessels and vessel sections.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Hazards of fixed extinguishing systems on board vessels and vessel sections. 1915.506 Section 1915.506 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) OCCUPATIONAL SAFETY AND HEALTH STANDARDS FOR SHIPYARD EMPLOYMENT...

  17. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  18. The Process Guidelines for High-Performance Buildings

    SciTech Connect

    Grondzik, W.

    1999-07-01

    The Process Guidelines for High-Performance Buildings are a set of recommendations for the design and operation of efficient and effective commercial/institutional buildings. The Process Guidelines have been developed in a searchable database format and are intended to replace print documents that provide guidance for new building designs for the State of Florida and for the operation of existing State buildings. The Process Guidelines for High-Performance buildings reside on the World Wide Web and are publicly accessible. Contents may be accessed in a variety of ways to best suit the needs of the user. The Process Guidelines address the interests of a range of facilities professionals; are organized around the primary phases of building design, construction, and operation; and include content dealing with all major building systems. The Process Guidelines for High-Performance Buildings may be accessed through the ``Resources'' area of the edesign Web site: http://fcn.state.fl.us/fdi/edesign/resource/index.html.

  19. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  20. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  1. DScan - a high-performance digital scanning system for entomological collections.

    PubMed

    Schmidt, Stefan; Balke, Michael; Lafogler, Stefan

    2012-01-01

    Here we describe a high-performance imaging system for creating high-resolution images of whole insect drawers. All components of the system are industry standard and can be adapted to meet the specific needs of entomological collections. A controlling unit allows setting the imaging area (drawer size), the step distance between individual images, the number of images, the image resolution, and the shooting sequence order through a set of parameters. The system is highly configurable and can be used with a wide range of optical hardware and image processing software. PMID:22859887
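    The controller parameters listed above (imaging area, step distance, shooting sequence order) imply a simple grid computation. The sketch below shows one plausible way to derive the shot count and a serpentine shooting order from a drawer size and step distance; the dimensions and the serpentine ordering are assumptions for illustration, not the DScan firmware's actual logic.

```python
import math

# Derive a shooting grid from drawer size and step distance (all in mm).
# The serpentine (boustrophedon) ordering minimizes stage travel between
# rows; this ordering is an illustrative assumption.

def shooting_grid(drawer_w, drawer_h, step):
    cols = math.ceil(drawer_w / step)
    rows = math.ceil(drawer_h / step)
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)  # reverse direction on odd rows
    return rows, cols, order

rows, cols, order = shooting_grid(drawer_w=400, drawer_h=300, step=100)
print(rows * cols)  # -> 12 individual images to stitch
```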

  2. Design of a new high-performance pointing controller for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Johnson, C. D.

    1993-01-01

    A new form of high-performance, disturbance-adaptive pointing controller for the Hubble Space Telescope (HST) is proposed. This new controller is all linear (constant gains) and can maintain accurate 'pointing' of the HST in the face of persistent randomly triggered uncertain, unmeasurable 'flapping' motions of the large attached solar array panels. Similar disturbances associated with antennas and other flexible appendages can also be accommodated. The effectiveness and practicality of the proposed new controller is demonstrated by a detailed design and simulation testing of one such controller for a planar-motion, fully nonlinear model of HST. The simulation results show a high degree of disturbance isolation and pointing stability.

  3. High performance computing and communications: FY 1997 implementation plan

    SciTech Connect

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies` FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  4. Identifying Critical Pathways to High-Performance PV: Preprint

    SciTech Connect

    Symko-Davies, M.; Noufi, R.; Kurtz, S.

    2002-05-01

    This conference paper describes the High-Performance Photovoltaic (HiPerf PV) Project, initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the NCPV directs in-house and subcontracted research on high-performance polycrystalline thin-film and multijunction concentrator devices. Subcontracted and in-house progress is described toward identifying critical pathways to 25% polycrystalline thin-film tandem cells and toward developing multijunction concentrator modules reaching 33%.

  5. Advances in Experiment Design for High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A general overview and summary of recent advances in experiment design for high performance aircraft is presented, along with results from flight tests. General theoretical background is included, with some discussion of various approaches to maneuver design. Flight test examples from the F-18 High Alpha Research Vehicle (HARV) are used to illustrate applications of the theory. Input forms are compared using Cramer-Rao bounds for the standard errors of estimated model parameters. Directions for future research in experiment design for high performance aircraft are identified.

  6. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetics is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all platforms. Aspects of optimizing the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increases of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or built into low-power computing clusters.

  7. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations, together with the algorithm of a high performance memory compiler, are presented.

  8. On-Board Fiber-Optic Network Architectures for Radar and Avionics Signal Distribution

    NASA Technical Reports Server (NTRS)

    Alam, Mohammad F.; Atiquzzaman, Mohammed; Duncan, Bradley B.; Nguyen, Hung; Kunath, Richard

    2000-01-01

    Continued progress in both civil and military avionics applications is overstressing the capabilities of existing radio-frequency (RF) communication networks based on coaxial cables on board modern aircraft. Future avionics systems will require high-bandwidth on-board communication links that are lightweight, immune to electromagnetic interference, and highly reliable. Fiber optic communication technology can meet all these challenges in a cost-effective manner. Recently, digital fiber-optic communication systems, in which a fiber-optic network acts as a local area network (LAN) for digital data communications, have become a topic of extensive research and development. Although a fiber-optic system can be designed to transport radio-frequency (RF) signals, the digital fiber-optic systems under development today are not capable of transporting the microwave and millimeter-wave RF signals used in radar and avionics systems on board an aircraft. Recent advances in fiber optic technology, especially wavelength division multiplexing (WDM), have opened a number of possibilities for designing on-board fiber optic networks, including all-optical networks for radar and avionics RF signal distribution. In this paper, we investigate a number of novel approaches for fiber-optic transmission of on-board VHF and UHF RF signals using commercial off-the-shelf (COTS) components. The relative merits and demerits of each architecture are discussed, and the suitability of each architecture for particular applications is pointed out. All-optical approaches show better performance than the traditional approaches in terms of signal-to-noise ratio, power consumption, and weight requirements.

  9. Survey of manufacturers of high-performance heat engines adaptable to solar applications

    NASA Technical Reports Server (NTRS)

    Stine, W. B.

    1984-01-01

    The results of an industry survey made during the summer of 1983 are summarized. The survey was initiated in order to develop an information base on advanced engines that could be used in the solar thermal dish-electric program. Questionnaires inviting responses were sent to 39 companies known to manufacture or integrate externally heated engines. Follow-up telephone communication ensured uniformity of response. It appears from the survey that the technology exists to produce external-heat-addition engines of appropriate size with thermal efficiencies of over 40%. Problem areas are materials and sealing.

  10. Survey of manufacturers of high-performance heat engines adaptable to solar applications

    SciTech Connect

    Stine, W. B.

    1984-06-15

    This report summarizes the results of an industry survey made during the summer of 1983. The survey was initiated in order to develop an information base on advanced engines that could be used in the solar thermal dish-electric program. Questionnaires inviting responses were sent to 39 companies known to manufacture or integrate externally heated engines. Follow-up telephone communication ensured uniformity of response.

  11. High-performance RC bandpass filter is adapted to miniaturized construction

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Miniaturized bandpass filter with RC networks is suitable for use in integrated circuits. The circuit consists of three stages of amplification with additional resistive and capacitive components to obtain the desired characteristics. The advantages of the active RC filter network are the reduction in size and weight and elimination of magnetic materials.

  12. TMD-Based Structural Control of High Performance Steel Bridges

    NASA Astrophysics Data System (ADS)

    Kim, Tae Min; Kim, Gun; Kyum Kim, Moon

    2012-08-01

    The purpose of this study is to investigate the effectiveness of structural control using a tuned mass damper (TMD) to suppress excessive traffic-induced vibration of a high performance steel bridge. The study considers a one-span steel plate girder bridge and bridge-vehicle interaction using the HS-24 truck model. A numerical model of the steel plate girder, the traffic load, and the TMD is constructed, and time history analysis is performed using the commercial structural analysis program ABAQUS 6.10. Results show that the high performance steel bridge has a dynamic serviceability problem compared to a relatively low performance steel bridge, so structural control using a TMD is implemented to alleviate it. The TMD is applied to the high performance steel bridge, and the vertical vibration due to dynamic behavior is assessed again. With the TMD, the residual amplitude in steady-state vibration is reduced by 85%, and the vibration serviceability assessment using the Reiher-Meister curve is also markedly improved. As a result, this paper provides a guideline for the economical design of I-girders using high performance steel and evaluates the effectiveness of structural control using a TMD.

  13. Multichannel Detection in High-Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    Miller, James C.; And Others

    1982-01-01

    A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real-time and acquires spectra manually/automatically. Applications in fast HPLC…

  14. Common Elements of High Performing, High Poverty Middle Schools.

    ERIC Educational Resources Information Center

    Trimble, Susan

    2002-01-01

    Examined over 3 years high-achieving high-poverty middle schools to determine school practices and policies associated with higher student achievement. Found that high-poverty middle schools that are high performing acquire grants and manage money well, use a variety of teaming configurations, and use data-based goals to improve student…

  15. High Performance Computing and Networking for Science--Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives…

  16. Teacher and Leader Effectiveness in High-Performing Education Systems

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda, Ed.; Rothman, Robert, Ed.

    2011-01-01

    The issue of teacher effectiveness has risen rapidly to the top of the education policy agenda, and the federal government and states are considering bold steps to improve teacher and leader effectiveness. One place to look for ideas is the experiences of high-performing education systems around the world. Finland, Ontario, and Singapore all have…

  17. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  18. The role of interpreters in high performance computing

    SciTech Connect

    Naumann, Axel; Canal, Philippe; /Fermilab

    2008-01-01

    Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it is the reason why interpreter use in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and which aspects must be taken into account to make this combination a success.

  19. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  20. Promoting High-Performance Computing and Communications. A CBO Study.

    ERIC Educational Resources Information Center

    Webre, Philip

    In 1991 the Federal Government initiated the multiagency High Performance Computing and Communications program (HPCC) to further the development of U.S. supercomputer technology and high-speed computer network technology. This overview by the Congressional Budget Office (CBO) concentrates on obstacles that might prevent the growth of the…

  1. A Research and Development Strategy for High Performance Computing.

    ERIC Educational Resources Information Center

    Office of Science and Technology Policy, Washington, DC.

    This report is the result of a systematic review of the status and directions of high performance computing and its relationship to federal research and development. Conducted by the Federal Coordinating Council for Science, Engineering, and Technology (FCCSET), the review involved a series of workshops attended by numerous computer scientists and…

  2. Seeking Solution: High-Performance Computing for Science. Background Paper.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for Science,"…

  3. Maintaining High-Performance Schools after Construction or Renovation

    ERIC Educational Resources Information Center

    Luepke, Gary; Ronsivalli, Louis J., Jr.

    2009-01-01

    With taxpayers' considerable investment in schools, it is critical for school districts to preserve their community's assets with new construction or renovation and effective facility maintenance programs. "High-performance" school buildings are designed to link the physical environment to positive student achievement while providing such benefits…

  4. Neural Correlates of High Performance in Foreign Language Vocabulary Learning

    ERIC Educational Resources Information Center

    Macedonia, Manuela; Muller, Karsten; Friederici, Angela D.

    2010-01-01

    Learning vocabulary in a foreign language is a laborious task which people perform with varying levels of success. Here, we investigated the neural underpinning of high performance on this task. In a within-subjects paradigm, participants learned 92 vocabulary items under two multimodal conditions: one condition paired novel words with iconic…

  5. High Performance Computing and Communications: Toward a National Information Infrastructure.

    ERIC Educational Resources Information Center

    Federal Coordinating Council for Science, Engineering and Technology, Washington, DC.

    This report describes the High Performance Computing and Communications (HPCC) initiative of the Federal Coordinating Council for Science, Engineering, and Technology. This program is supportive of and coordinated with the National Information Infrastructure Initiative. Now halfway through its 5-year effort, the HPCC program counts among its…

  6. Quantification of Tea Flavonoids by High Performance Liquid Chromatography

    ERIC Educational Resources Information Center

    Freeman, Jessica D.; Niemeyer, Emily D.

    2008-01-01

    We have developed a laboratory experiment that uses high performance liquid chromatography (HPLC) to quantify flavonoid levels in a variety of commercial teas. Specifically, this experiment analyzes a group of flavonoids known as catechins, plant-derived polyphenolic compounds commonly found in many foods and beverages, including green and black…

  7. Cobra Strikes! High-Performance Car Inspires Students, Markets Program

    ERIC Educational Resources Information Center

    Jenkins, Bonita

    2008-01-01

    Nestled in the Lower Piedmont region of upstate South Carolina, Piedmont Technical College (PTC) is one of 16 technical colleges in the state. Automotive technology is one of its most popular programs. The program features an instructive, motivating activity that the author describes in this article: building a high-performance car. The Cobra…

  8. The Case for High-Performance, Healthy Green Schools

    ERIC Educational Resources Information Center

    Carter, Leesa

    2011-01-01

    When trying to reach their sustainability goals, schools and school districts often run into obstacles, including financing, training, and implementation tools. Last fall, the U.S. Green Building Council-Georgia (USGBC-Georgia) launched its High Performance, Healthy Schools (HPHS) Program to help Georgia schools overcome those obstacles. By…

  9. 24 CFR 902.71 - Incentives for high performers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... that remain in effect, such as those for competitive bidding or competitive negotiation (see 24 CFR 85... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Incentives for high performers. 902... DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM PHAS Incentives and Remedies § 902.71 Incentives for...

  10. Two Profiles of the Dutch High Performing Employee

    ERIC Educational Resources Information Center

    de Waal, A. A.; Oudshoorn, Michella

    2015-01-01

    Purpose: The purpose of this study is to explore the profile of an ideal employee, to be more precise, the behavioral characteristics of the Dutch high-performing employee (HPE). Organizational performance depends for a large part on the commitment of employees. Employees provide their knowledge, skills, experiences and creativity to the…

  11. Implementing High Performance Remote Method Invocation in CCA

    SciTech Connect

    Yin, Jian; Agarwal, Khushbu; Krishnan, Manoj Kumar; Chavarría-Miranda, Daniel; Gorton, Ian; Epperly, Thomas G.

    2011-09-30

    We report our effort in engineering a high performance remote method invocation (RMI) mechanism for the Common Component Architecture (CCA). This mechanism provides a highly efficient and easy-to-use means of distributed computing in CCA, enabling CCA applications to effectively leverage parallel systems to accelerate computations. This work builds on the previous work of Babel RMI. Babel is a high performance language interoperability tool used in CCA so that scientific application writers can share, reuse, and compose applications from software components written in different programming languages. Babel provides a transparent and flexible RMI framework for distributed computing. However, the existing Babel RMI implementation is built on top of TCP and does not provide the level of performance required to distribute fine-grained tasks. We observed that the main reason the TCP-based RMI does not perform well is that it does not utilize the high performance interconnect hardware on a cluster efficiently. We have implemented a high performance RMI protocol, HPCRMI. HPCRMI achieves low latency by building on top of a low-level portable communication library, the Aggregate Remote Memory Copy Interface (ARMCI), and by minimizing communication for each RMI call. Our design allows an RMI operation to be completed by only two RDMA operations. We also aggressively optimize our system to reduce copying. In this paper, we discuss the design and our experimental evaluation of this protocol. Our experimental results show that our protocol can improve RMI performance by an order of magnitude.

  12. National Best Practices Manual for Building High Performance Schools

    ERIC Educational Resources Information Center

    US Department of Energy, 2007

    2007-01-01

    The U.S. Department of Energy's Rebuild America EnergySmart Schools program provides school boards, administrators, and design staff with guidance to help make informed decisions about energy and environmental issues important to school systems and communities. "The National Best Practices Manual for Building High Performance Schools" is a part of…

  13. Guide to School Design: Healthy + High Performance Schools

    ERIC Educational Resources Information Center

    Healthy Schools Network, Inc., 2007

    2007-01-01

    A "healthy and high performance school" uses a holistic design process to promote the health and comfort of children and school employees, as well as conserve resources. Children may spend over eight hours a day at school with little, if any, legal protection from environmental hazards. Schools are generally not well-maintained; asthma is a…

  14. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  15. Expert Noticing and Principals of High-Performing Urban Schools

    ERIC Educational Resources Information Center

    Johnson, Joseph F.; Uline, Cynthia L.; Perez, Lynne G.

    2011-01-01

    In this qualitative study, we applied the concept of expert noticing to instructional leadership in high-performing urban schools that attained both excellent and equitable results for the various diverse populations they served. Unlike the overwhelming majority of urban schools in the United States, in the schools we studied, the academic…

  16. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  17. Understanding the Work and Learning of High Performance Coaches

    ERIC Educational Resources Information Center

    Rynne, Steven B.; Mallett, Cliff J.

    2012-01-01

    Background: The development of high performance sports coaches has been proposed as a major imperative in the professionalization of sports coaching. Accordingly, an increasing body of research is beginning to address the question of how coaches learn. While this is important work, an understanding of how coaches learn must be underpinned by an…

  18. Recruiting, Training, and Retaining High-Performance Development Teams

    ERIC Educational Resources Information Center

    Elder, Stephen D.

    2010-01-01

    This chapter offers thoughts on some key elements of a high-performing development environment. The author describes how good development officers love to be part of something big, something that transforms a place and its people, and that thinking big is a powerful concept for development officers. He reminds development officers to be clear…

  19. Mallow carotenoids determined by high-performance liquid chromatography

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Mallow (Corchorus olitorius) is a green vegetable that is widely consumed, either fresh or dried, by Middle Eastern populations. This study was carried out to quantitatively determine the contents of the major carotenoids in mallow, using High Performance Liquid Chromatography (HPLC) equipped with a Bis...

  20. Achieving High Performance with FPGA-Based Computing

    PubMed Central

    Herbordt, Martin C.; VanCourt, Tom; Gu, Yongfeng; Sukhwani, Bharat; Conti, Al; Model, Josh; DiSabello, Doug

    2011-01-01

    Numerous application areas, including bioinformatics and computational biology, demand increasing amounts of processing capability. In many cases, the computation cores and data types are suited to field-programmable gate arrays. The challenge is identifying the design techniques that can extract high performance potential from the FPGA fabric. PMID:21603088

  1. Training Needs for High Performance in the Automotive Industry.

    ERIC Educational Resources Information Center

    Clyne, Barry; And Others

    A project was conducted in Australia to identify the training needs of the emerging industry required to support the development of the high performance areas of the automotive machining and reconditioning field, especially as it pertains to auto racing. Data were gathered through a literature search, interviews with experts in the field, and…

  2. Dynamic Social Networks in High Performance Football Coaching

    ERIC Educational Resources Information Center

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  3. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Diagnostic and remedial methods concerning rotordynamic instability problems in high performance turbomachinery are discussed. Instabilities due to seal forces and work-fluid forces are identified along with those induced by rotor bearing systems. Several methods of rotordynamic control are described, including active feedback methods, the use of elastomeric elements, and the use of hydrodynamic journal bearings and supports.

  4. Real-Time On-Board Airborne Demonstration of High-Speed On-Board Data Processing for Science Instruments (HOPS)

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014, on two different aircraft with two different science instruments. The first flight campaign, in July 2014, was based at NASA Langley Research Center (LaRC) in Hampton, VA on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES), funded by NASA's Instrument Incubator Program (IIP). The second campaign, in August 2014, was based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Exelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm in real time on-board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  5. Real-time on-board airborne demonstration of high-speed on-board data processing for science instruments (HOPS)

    NASA Astrophysics Data System (ADS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-05-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014, on two different aircraft with two different science instruments. The first flight campaign, in July 2014, was based at NASA Langley Research Center (LaRC) in Hampton, VA on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES), funded by NASA's Instrument Incubator Program (IIP). The second campaign, in August 2014, was based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Exelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm in real time on-board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  6. Approaches to High-Performance Preparative Chromatography of Proteins

    NASA Astrophysics Data System (ADS)

    Sun, Yan; Liu, Fu-Feng; Shi, Qing-Hong

    Preparative liquid chromatography is widely used for the purification of chemical and biological substances. Different from high-performance liquid chromatography for the analysis of many different components at minimized sample loading, high-performance preparative chromatography is of much larger scale and should offer high resolution and high capacity at high operation speed and low to moderate pressure drop. There are various approaches to this end. For biochemical engineers, the traditional way is to model and optimize a purification process to make it exert its maximum capability. For high-performance separations, however, we need to improve chromatographic technology itself. We herein discuss four approaches in this review, mainly based on the recent studies in our group. The first is the development of high-performance matrices, because packing material is the central component of chromatography. Progress in the fabrication of superporous materials in both beaded and monolithic forms is reviewed. The second topic is the discovery and design of affinity ligands for proteins. In most chromatographic methods, proteins are separated based on their interactions with the ligands attached to the surface of porous media. A target-specific ligand can offer selective purification of desired proteins. Third, electrochromatography is discussed. An electric field applied to a chromatographic column can induce additional separation mechanisms besides chromatography, and result in electrokinetic transport of protein molecules and/or the fluid inside pores, thus leading to high-performance separations. Finally, expanded-bed adsorption is described for process integration to reduce separation steps and process time.

  7. Dynamic neural networks based on-line identification and control of high performance motor drives

    NASA Technical Reports Server (NTRS)

    Rubaai, Ahmed; Kotaru, Raj

    1995-01-01

    In the automated and high-tech industries of the future, there will be a need for high performance motor drives in both the low-power and the high-power range. To meet very stringent demands of tracking and regulation in the two quadrants of operation, advanced control technologies are of considerable interest and need to be developed. In response, a dynamic learning control architecture is developed with simultaneous on-line identification and control. The feature of the proposed approach, efficiently combining the dual tasks of system identification (learning) and adaptive control of nonlinear motor drives into a single operation, is presented. This approach therefore not only adapts to uncertainties in the dynamic parameters of the motor drives but also learns about their inherent nonlinearities. In fact, most neural-network-based adaptive control approaches in use have an identification phase entirely separate from the control phase. Because these approaches separate the identification and control modes, it is not possible to cope with dynamic changes in a controlled process. Extensive simulation studies have been conducted and good performance was observed. The robustness of the neuro-controllers, which perform efficiently in a noisy environment, is also demonstrated. With this initial success, the principal investigator believes that the proposed approach with the suggested neural structure can be used successfully for the control of high performance motor drives. Two identification and control topologies based on the model reference adaptive control technique are used in the present analysis. No prior knowledge of load dynamics is assumed in either topology, while the second topology also assumes no knowledge of the motor parameters.
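    The simultaneous identification described above can be illustrated with a minimal least-mean-squares sketch; this is my own illustration of online plant identification, not the paper's neural architecture, and the plant model and learning rate are assumptions:

```python
def online_identify(u, y, lr=0.05):
    """Estimate the gains (a, b) of a plant y[k+1] = a*y[k] + b*u[k] online,
    by gradient (LMS) descent on the one-step prediction error.
    Illustrative sketch only, not the paper's neural-network controller."""
    a, b = 0.0, 0.0
    for k in range(len(y) - 1):
        err = y[k + 1] - (a * y[k] + b * u[k])  # one-step prediction error
        a += lr * err * y[k]                    # gradient step on each gain
        b += lr * err * u[k]
    return a, b
```

    Because the estimate is refined on every sample, the same loop that drives the plant can keep tracking parameter drift, which is the benefit of merging identification and control into a single operation.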

  8. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
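    The Amdahl's-law argument in this record is easy to check numerically: a 12-fold speedup on 12 cores implies the workload is essentially fully parallel. A minimal sketch (illustrative, not the platform's code):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Predicted speedup when parallel_fraction of the work scales across cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A fully parallel workload on 12 cores gives the reported 12-fold gain;
# even 1% serial work caps the 12-core speedup near 10.8x.
```

    The same formula shows why the authors emphasize scalability: with 5% serial work, no number of cores can push the speedup past 20x.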

  9. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506

  10. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506

  11. Surveillance study of vector species on board passenger ships, Risk factors related to infestations

    PubMed Central

    Mouchtouri, Varvara A; Anagnostopoulou, Rimma; Samanidou-Voyadjoglou, Anna; Theodoridou, Kalliopi; Hatzoglou, Chrissi; Kremastinou, Jenny; Hadjichristodoulou, Christos

    2008-01-01

    Background: Passenger ships provide conditions suitable for the survival and growth of pest populations. Arthropods and rodents can gain access directly from the ships' open spaces, can be carried in shiploads, or can be found on humans or animals as ectoparasites. Vectors on board ships may contaminate stored foods, transmit illness on board, or introduce diseases in new areas. Pest species, ship areas facilitating infestations, and different risk factors related to infestations were identified in 21 ferries. Methods: 486 traps for insects and rodents were placed in 21 ferries. Archives of Public Health Authorities were reviewed to identify complaints regarding the presence of pest species on board ferries from 1994 to 2004. A detailed questionnaire was used to collect data on ship characteristics and pest control practices. Results: Eighteen ferries were infested with flies (85.7%), 11 with cockroaches (52.3%), three with bedbugs, and one with fleas. Other species found on board were ants, spiders, butterflies, beetles, and a lizard. A total of 431 Blattella germanica specimens were captured in 28 (9.96%) traps, and 84.2% of them were nymphs. One ship was highly infested. Cockroach infestation was negatively associated with ferries in which a Hazard Analysis Critical Control Point system was applied to ensure food safety on board (Relative Risk, RR = 0.23, p = 0.03), and positively associated with ferries in which cockroaches were observed by crew (RR = 4.09, p = 0.007), no cockroach monitoring log was kept (RR = 5.00, p = 0.02), and pesticide sprays for domestic use were applied by crew (RR = 4.00, p = 0.05). Cockroach-infested ships were older (p = 0.03). Neither rats nor mice were found on any ship, but three ferries had been infested with a rodent in the past. Conclusion: Integrated pest control programs should include continuing monitoring for a variety of pest species in different ship locations; pest control measures should be more persistent in older
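    The relative risk (RR) figures quoted in this record come from the standard cohort-study ratio; a minimal sketch (the counts in the usage example are made up, not the study's data):

```python
def relative_risk(cases_exposed, total_exposed, cases_unexposed, total_unexposed):
    """Risk ratio: incidence of the outcome in the exposed group divided by
    incidence in the unexposed group (values > 1 indicate a positive
    association, values < 1 a negative one)."""
    return (cases_exposed / total_exposed) / (cases_unexposed / total_unexposed)
```

    For example, if 8 of 10 ships without a monitoring log were infested versus 2 of 10 with one, the RR would be 4.0, the same order as the RR = 5.00 reported above.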

  12. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • are mostly host-based and not scalable to high-performance networks; • are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm differs significantly from existing IDSes in the following features (research thrusts): • online traffic recording and analysis on high-speed networks; • online adaptive flow-level anomaly/intrusion detection and mitigation; • an integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible, and it significantly exceeded the pre-set goals (see more details in the next section). Overall, our project produced 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). In addition, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.
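    The adaptive flow-level detection described above can be illustrated with a simple exponentially weighted scheme; this is my own sketch of the general idea, not the HPNAIDM algorithm, and the smoothing and threshold constants are arbitrary:

```python
def ewma_detector(values, alpha=0.2, threshold=3.0, init_dev=1.0):
    """Flag indices whose value deviates from an exponentially weighted
    moving mean by more than `threshold` times the running mean absolute
    deviation.  Illustrative flow-level anomaly detector only; alpha,
    threshold, and init_dev are assumed values, not from the paper."""
    mean, dev = values[0], init_dev
    alarms = []
    for i, v in enumerate(values[1:], start=1):
        if abs(v - mean) > threshold * dev:
            alarms.append(i)
        mean = (1 - alpha) * mean + alpha * v            # adapt the baseline
        dev = (1 - alpha) * dev + alpha * abs(v - mean)  # adapt the spread
    return alarms
```

    Because both the baseline and the spread adapt online, the detector needs no fixed signature, which is the property the record contrasts with signature-based IDSes.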

  13. An Alternative Lunar Ephemeris Model for On-Board Flight Software Use

    NASA Technical Reports Server (NTRS)

    Simpson, David G.

    1998-01-01

    In calculating the position vector of the Moon in on-board flight software, one often begins by using a series expansion to calculate the ecliptic latitude and longitude of the Moon, referred to the mean ecliptic and equinox of date. One then performs a reduction for precession, followed by a rotation of the position vector from the ecliptic plane to the equator, and a transformation from spherical to Cartesian coordinates before finally arriving at the desired result: equatorial J2000 Cartesian components of the lunar position vector. An alternative method is developed here in which the equatorial J2000 Cartesian components of the lunar position vector are calculated directly by a series expansion, saving valuable on-board computer resources.
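    The direct approach can be sketched as follows. The structure (each Cartesian component as its own truncated periodic series) follows the record; the coefficient triples used in any call are illustrative placeholders, not real lunar-theory values:

```python
import math

def eval_series(terms, t):
    """Evaluate a truncated periodic series sum(A * cos(w*t + phi))
    over (A, w, phi) coefficient triples."""
    return sum(A * math.cos(w * t + phi) for A, w, phi in terms)

def moon_position_j2000(t, terms_xyz):
    """Equatorial J2000 Cartesian lunar position straight from three
    per-component series, skipping the ecliptic latitude/longitude,
    precession, and rotation steps entirely."""
    return tuple(eval_series(terms, t) for terms in terms_xyz)
```

    Collapsing the precession and rotation steps into the fitted coefficients is what saves the on-board computer resources mentioned above: each component costs only one series evaluation.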

  14. An on-board TLD system for dose monitoring on the International Space Station.

    PubMed

    Apathy, I; Deme, S; Bodnar, L; Csoke, A; Hejja, I

    1999-01-01

    This institute has developed and manufactured a series of thermoluminescence dosemeter (TLD) systems for spacecraft, consisting of a set of bulb dosemeters and a small, compact TLD reader suitable for on-board evaluation of the dosemeters. By means of such a system, highly accurate measurements were carried out on board the Salyut-6 and -7 and Mir Space Stations as well as on the Space Shuttle. A new implementation of the system will be placed on several segments of the ISS as the contribution of Hungary to this international enterprise. The well-proven CaSO4:Dy dosemeters will be used for routine dosimetry of the astronauts and in biological experiments. The mean LET value will be measured by LiF dosemeters, while doses caused by neutrons are planned to be determined by 6LiF/7LiF dosemeter pairs and moderators. A detailed description of the system is given. PMID:11542233

  15. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Technical Reports Server (NTRS)

    Mccullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-01-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.
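    The constant-gain Kalman filter mentioned above reduces, per state component, to a fixed-blend update; a minimal sketch (the gain value is an assumption for illustration, not from the SAMPEX software):

```python
def constant_gain_update(estimate, measurement, gain=0.2):
    """One step of a constant-gain (steady-state Kalman) filter:
    correct the current estimate toward the new measurement by a
    fixed fraction of the innovation.  Gain value is illustrative."""
    return estimate + gain * (measurement - estimate)
```

    Freezing the gain removes the covariance propagation of a full Kalman filter, which is why the scheme is attractive for a small on-board computer: each update is one multiply-add per state, and the estimate still converges geometrically toward a steady measurement.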

  16. Comparison of cosmic rays radiation detectors on-board commercial jet aircraft.

    PubMed

    Kubančák, Ján; Ambrožová, Iva; Brabcová, Kateřina Pachnerová; Jakůbek, Jan; Kyselová, Dagmar; Ploc, Ondřej; Bemš, Július; Štěpán, Václav; Uchihori, Yukio

    2015-06-01

    Aircrew members and passengers are exposed to increased rates of cosmic radiation on-board commercial jet aircraft. The annual effective doses of crew members often exceed limits for public, thus it is recommended to monitor them. In general, the doses are estimated via various computer codes and in some countries also verified by measurements. This paper describes a comparison of three cosmic rays detectors, namely of the (a) HAWK Tissue Equivalent Proportional Counter; (b) Liulin semiconductor energy deposit spectrometer and (c) TIMEPIX silicon semiconductor pixel detector, exposed to radiation fields on-board commercial Czech Airlines company jet aircraft. Measurements were performed during passenger flights from Prague to Madrid, Oslo, Tbilisi, Yekaterinburg and Almaty, and back in July and August 2011. For all flights, energy deposit spectra and absorbed doses are presented. Measured absorbed dose and dose equivalent are compared with the EPCARD code calculations. Finally, the advantages and disadvantages of all detectors are discussed. PMID:25979739
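    Converting a measured energy-deposit spectrum into absorbed dose, as done for each detector above, is a straightforward sum; a minimal sketch (the detector mass in any call is a hypothetical input, not a value from the paper):

```python
EV_TO_J = 1.602176634e-19  # joules per electron-volt (exact, SI 2019)

def absorbed_dose_gray(deposits_ev, mass_kg):
    """Absorbed dose D (Gy = J/kg): total energy deposited in the detector's
    sensitive volume, converted from eV to joules, divided by its mass."""
    return sum(deposits_ev) * EV_TO_J / mass_kg
```

    Dose equivalent, also reported in the paper, additionally weights each deposit by a LET-dependent quality factor, which is why LET-resolving detectors such as the TEPC are used for verification.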

  17. On-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1983-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three-dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Previously announced in STAR as N84-16116
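    The energy modelling used as the starting point collapses the aircraft state to energy height (potential plus kinetic energy per unit weight); a minimal sketch:

```python
G0 = 9.80665  # standard gravity, m/s^2

def energy_height(altitude_m, speed_mps):
    """Specific energy E = h + V^2 / (2 g): the altitude the aircraft
    could reach by trading all of its kinetic energy for potential energy."""
    return altitude_m + speed_mps ** 2 / (2.0 * G0)
```

    Reducing climb-dash planning to a single energy state is what makes the on-board part of the scheme cheap: the stored nominal path can be indexed by energy rather than by full aircraft state.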

  18. On-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1983-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three-dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state.

  19. A ground-based memory state tracker for satellite on-board computer memory

    NASA Technical Reports Server (NTRS)

    Quan, Alan; Angelino, Robert; Hill, Michael; Schwuttke, Ursula; Hervias, Felipe

    1993-01-01

    The TOPEX/POSEIDON satellite, currently in Earth orbit, will use radar altimetry to measure sea surface height over 90 percent of the world's ice-free oceans. In combination with a precise determination of the spacecraft orbit, the altimetry data will provide maps of ocean topography, which will be used to calculate the speed and direction of ocean currents worldwide. NASA's Jet Propulsion Laboratory (JPL) has primary responsibility for mission operations for TOPEX/POSEIDON. Software applications have been developed to automate mission operations tasks. This paper describes one of these applications, the Memory State Tracker, which allows the ground analyst to examine and track the contents of satellite on-board computer memory quickly and efficiently, in a human-readable format, without having to receive the data directly from the spacecraft. This is accomplished by maintaining a ground-based mirror image of on-board computer memory.
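    The mirror-image idea can be sketched as follows; this is a hypothetical illustration of the concept, not the TOPEX/POSEIDON code. The ground applies every commanded memory write to its own copy, so the current on-board state can be inspected without a downlink:

```python
class MemoryStateTracker:
    """Ground-based mirror of on-board computer memory (illustrative sketch).

    Every memory-load command uplinked to the spacecraft is also applied
    here, so the mirror stays consistent with the flight computer."""

    def __init__(self, size):
        self.memory = bytearray(size)  # assume memory starts zero-filled

    def apply_write(self, address, data):
        """Mirror a commanded memory write at the given address."""
        self.memory[address:address + len(data)] = data

    def dump(self, address, length):
        """Return the mirrored contents for ground inspection."""
        return bytes(self.memory[address:address + length])
```

    The design choice mirrors the record's point: inspection costs no telemetry bandwidth, at the price of trusting that every on-board write is also applied to the ground copy.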

  20. Efficient assessment method of on-board modulation transfer function of optical remote sensing sensors.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2015-03-01

    Modulation transfer function (MTF) can be used to evaluate the imaging performance of on-board optical remote sensing sensors, as well as to recover and restore images to improve imaging quality. Laboratory measurement approaches for MTF have achieved high precision; however, they are not yet suitable for on-board measurement. In this paper, a new five-step approach to calculating the MTF of space optical remote sensing sensors is proposed. First, a pixel motion model is used to extract the conditional sub-frame images. Second, a mathematical morphology algorithm and a correlation-homomorphic filter algorithm are used to eliminate noise and enhance the sub-frame images. Third, partial differentiation of the image determines the accurate positions of edge points. Fourth, a model optical function is used to build a high-resolution edge spread function. Finally, the MTF is calculated by differentiation and Fourier transform. Experiments show that this MTF assessment method is superior to existing ones. PMID:25836841
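    The final steps, differentiating the edge spread function and Fourier-transforming the result, can be sketched with NumPy. This illustrates the standard edge-based MTF computation, not the authors' implementation; the earlier denoising and edge-location steps are omitted:

```python
import numpy as np

def mtf_from_esf(esf):
    """Compute MTF from an edge spread function (ESF): differentiate the
    edge profile into a line spread function (LSF), then take the
    normalized magnitude of its Fourier transform."""
    lsf = np.diff(esf)               # derivative of the edge profile
    mtf = np.abs(np.fft.rfft(lsf))   # frequency-response magnitude
    return mtf / mtf[0]              # normalize so MTF(0) = 1
```

    A perfectly sharp edge yields an impulse-like LSF and a flat MTF of 1 at all frequencies; any blur spreads the LSF and rolls the MTF off toward high spatial frequencies.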